AU2002322410B2 - Imaging system and methodology employing reciprocal space optical design - Google Patents


Info

Publication number
AU2002322410B2
Authority
AU
Australia
Prior art keywords
sensor
lens
image
size
imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2002322410A
Other versions
AU2002322410B8 (en)
AU2002322410A1 (en)
Inventor
Andrew G. Cartlidge
Howard Fein
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PALANTYR RESEARCH LLC
Angkor Tech LLP
Original Assignee
DANIEL BORTNICK
HIMANSHU AMIN
PALANTYR RES Inc
Angkor Tech LLP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/900,218 (granted as US6664528B1)
Priority claimed from US10/166,137 (granted as US6884983B2)
Priority claimed from US10/189,326 (granted as US7132636B1)
Application filed by DANIEL BORTNICK, HIMANSHU AMIN, PALANTYR RES Inc, Angkor Tech LLP
Priority claimed from PCT/US2002/021392 (published as WO2003005446A1)
Publication of AU2002322410A1
Application granted
Publication of AU2002322410B2
Publication of AU2002322410B8
Anticipated expiration
Legal status: Ceased


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/58Optics for apodization or superresolution; Optical synthetic aperture systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/40Optical focusing aids
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/06Means for illuminating specimens
    • G02B21/08Condensers
    • G02B21/082Condensers for incident illumination only
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/06Means for illuminating specimens
    • G02B21/08Condensers
    • G02B21/086Condensers for transillumination only
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/361Optical details, e.g. image relay to the camera or image sensor
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Microscopes, Condenser (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Studio Devices (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Holography (AREA)
  • Image Input (AREA)
  • Lenses (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Description

Title: IMAGING SYSTEM AND METHODOLOGY EMPLOYING RECIPROCAL SPACE OPTICAL DESIGN

RELATED APPLICATION

This application claims the benefit of U.S. Patent Application Serial No.
09/900,218, which was filed July 6, 2001, and entitled IMAGING SYSTEM AND METHODOLOGY EMPLOYING RECIPROCAL SPACE OPTICAL DESIGN.
TECHNICAL FIELD

The present invention relates generally to image and optical systems, and more particularly to a system and method to facilitate imaging performance via an image transfer medium that projects characteristics of a sensor to an object field of view.
BACKGROUND OF THE INVENTION

Microscopes facilitate creating a large image of a tiny object. Greater magnification can be achieved if the light from an object is made to pass through two lenses, compared to a simple microscope with one lens. A compound microscope has two or more converging lenses, placed in line with one another, so that both lenses refract the light in turn. The result is to produce an image that is magnified more than either lens could magnify alone. Light illuminating the object first passes through a short focal length lens or lens group, called the objective, and then travels on some distance before being passed through a longer focal length lens or lens group, called the eyepiece. A lens group is often simply referred to singularly as a lens. Usually these two lenses are held in paraxial relationship to one another, so that the axis of one lens is arranged to be in the same orientation as the axis of the second lens. It is the nature of the lenses, their properties, their relationship, and the relationship of the objective lens to the object that determines how a highly magnified image is produced in the eye of the observer.
The first lens, or objective, is usually a small lens with a very small focal length. A specimen or object is placed in the path of a light source with sufficient intensity to illuminate it as desired. The objective lens is then lowered until the specimen is very close to, but not quite at, the focal point of the lens. Light leaving the specimen and passing through the objective lens produces a real, inverted and magnified image behind the lens, in the microscope, at a point generally referred to as the intermediate image plane. The second lens, or eyepiece, has a longer focal length and is placed in the microscope so that the image produced by the objective lens falls closer to the eyepiece than one focal length (that is, inside the focal point of the lens). The image from the objective lens now becomes the object for the eyepiece lens. As this object is inside one focal length, the second lens refracts the light in such a way as to produce a second image that is virtual, inverted and magnified. This is the final image seen by the eye of the observer.
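The two-stage magnification described above follows from the standard thin-lens relation 1/f = 1/d_o + 1/d_i. The following minimal sketch is illustrative only and not part of the patent: the focal lengths, object distance, 250 mm near-point convention, and function name are assumed example choices.

```python
# Illustrative sketch (assumed values, not from the patent): compound
# microscope magnification from the thin-lens equation 1/f = 1/d_o + 1/d_i.

def thin_lens_image_distance(f_mm: float, d_object_mm: float) -> float:
    """Solve 1/f = 1/d_o + 1/d_i for the image distance d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_object_mm)

# Objective: short focal length, specimen just outside the focal point,
# producing a real, inverted, magnified intermediate image.
f_objective_mm = 4.0      # assumed
d_object_mm = 4.2         # slightly beyond the focal point
d_image_mm = thin_lens_image_distance(f_objective_mm, d_object_mm)
m_objective = d_image_mm / d_object_mm          # lateral magnification (~20x)

# Eyepiece: used as a simple magnifier on the intermediate image;
# 250 mm is the conventional near-point viewing distance.
f_eyepiece_mm = 25.0      # assumed
m_eyepiece = 250.0 / f_eyepiece_mm              # ~10x

print(f"intermediate image at {d_image_mm:.0f} mm behind the objective")
print(f"total magnification ~ {m_objective * m_eyepiece:.0f}x")  # ~200x
```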
Alternatively, common infinity space or infinity corrected design microscopes employ objective lenses with infinite conjugate properties, such that the light leaving the objective is not focused but is a flux of parallel rays which do not converge until after passing through a tube lens, where the projected image is then located at the focal point of the eyepiece for magnification and observation. Many microscopes, such as the compound microscope described above, are designed to provide images of certain quality to the human eye through an eyepiece. Connecting a Machine Vision Sensor, such as a Charge Coupled Device (CCD) sensor, to the microscope so that an image may be viewed on a monitor presents difficulties, because the image quality provided by the sensor and viewed by a human eye decreases as compared to an image viewed by a human eye directly through an eyepiece. As a result, conventional optical systems for magnifying, observing, examining, and analyzing small items often require the careful attention of a technician monitoring the process through an eyepiece. It is for this reason, as well as others, that Machine-Vision or computer-based image displays from the aforementioned image sensor, displayed on a monitor or other output display device, are not of the quality perceived by the human observer through the eyepiece.
SUMMARY OF THE INVENTION

According to one aspect of the invention there is provided an imaging system, comprising: a sensor having one or more receptors, the receptors having a size; and an image transfer medium having a resolution size parameter, the image transfer medium operative to scale the receptor size to about the resolution size parameter in an object field of view, the image transfer medium comprising a multiple lens configuration, the multiple lens configuration comprising a first lens positioned toward the object field of view and a second lens positioned toward the sensor, the first lens sized to have a focal length smaller than the second lens to provide an apparent reduction of the receptor size within the image transfer medium.
According to another aspect of the invention there is provided a method of producing an image, comprising: determining a pitch size between adjacent pixels on a sensor; determining a resolvable object size in an object field of view; and scaling the pitch size through an optical medium to correspond with the resolvable object size in an object field of view, the image transfer medium comprising a multiple lens configuration, the multiple lens configuration comprising a first lens positioned toward the object field of view and a second lens positioned toward the sensor, the first lens sized to have a focal length smaller than the second lens to provide an apparent reduction of the receptor size within the image transfer medium.
According to another aspect of the invention there is provided a machine vision system, comprising: an imaging system for collecting image data from a product or process, comprising: a sensor having one or more receptors; and at least one optical device to direct light from an object field of view to the one or more receptors of the sensor, wherein the at least one optical device provides a mapping of receptor size to about a size of a diffraction limited object in the object field of view, the optical device comprising a multiple lens configuration, the multiple lens configuration comprising a first lens positioned toward the object field of view and a second lens positioned toward the sensor, the first lens sized to have a focal length smaller than the second lens to provide an apparent reduction of the receptor size within the optical device; and a controller that receives the image data and employs the image data in connection with fabrication or control of the product or process.
The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed, and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a schematic block diagram illustrating an imaging system in accordance with an aspect of the present invention.
Fig. 2 is a diagram illustrating a k-space system design in accordance with an aspect of the present invention.
Fig. 3 is a diagram of an exemplary system illustrating sensor receptor matching in accordance with an aspect of the present invention.
Fig. 4 is a graph illustrating sensor matching considerations in accordance with an aspect of the present invention.
Fig. 5 is a graph illustrating a Modulation Transfer Function in accordance with an aspect of the present invention.
Fig. 6 is a graph illustrating a figure of merit relating to a Spatial Field Number in accordance with an aspect of the present invention.
Fig. 7 is a flow diagram illustrating an imaging methodology in accordance with an aspect of the present invention.
Fig. 8 is a flow diagram illustrating a methodology for selecting optical parameters in accordance with an aspect of the present invention.
Fig. 9 is a schematic block diagram illustrating an exemplary imaging system in accordance with an aspect of the present invention.
Fig. 10 is a schematic block diagram illustrating a modular imaging system in accordance with an aspect of the present invention.
Figs. 11-13 illustrate alternative imaging systems in accordance with an aspect of the present invention.
Figs. 14-18 illustrate exemplary applications in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION

The present invention relates to an optical and/or imaging system and methodology. According to one aspect of the present invention, a k-space filter is provided that can be configured from an image transfer medium, such as optical media, that correlates image sensor receptors to an object field of view. A variety of illumination sources can also be employed to achieve one or more operational goals and for versatility of application. The k-space design of the imaging system of the present invention promotes capture and analysis (e.g., automated and/or manual) of images having a high Field Of View (FOV) at substantially high Effective Resolved Magnification as compared to conventional systems. This can include employing a small Numerical Aperture (NA) associated with lower magnification objective lenses to achieve very high Effective Resolved Magnification. As a consequence, images having a substantially large Depth Of Field (DOF) at very high Effective Resolved Magnification are also realized. The k-space design also facilitates employment of homogeneous illumination sources that are substantially insensitive to changes in position, thereby improving methods of examination and analysis.
According to another aspect of the present invention, an objective lens to object distance (i.e., Working Distance) can be maintained in operation at low and high power effective resolved magnification imaging, wherein typical spacing can be achieved at about 0.1 mm or more and about 20 mm or less, as opposed to conventional microscopic systems, which can require significantly smaller (as small as 0.01 mm) object to objective lens distances for comparable (e.g., similar order of magnitude) Effective Resolved Magnification values. In another aspect, the Working Distance is about 0.5 mm or more and about 10 mm or less. It is to be appreciated that the present invention is not limited to operating at the above working distances. In many instances the above working distances are employed; however, in some instances, smaller or larger distances are employed. It is further noted that oil immersion or other Index of Refraction matching media or fluids for objective lenses are generally not required (e.g., substantially no improvement to be gained) at one or more effective image magnification levels of the present invention, while still exceeding effective resolved magnification levels achievable in conventional microscopic optical design variations, including systems employing "infinity-corrected" objective lenses.
The k-space design of the present invention defines that a small "Blur Circle" or diffraction limited point/spot at the object plane is determined by parameters of the design to match image sensor receptors or pixels with a substantially one-to-one correspondence by "unit-mapping" of object and image spaces for associated object and image fields.
This enables the improved performance and capabilities of the present invention. One possible theory of the k-space design results from the mathematical concept that since the Fourier Transform of both an object and an image is formed in k-space (also called "reciprocal space"), the sensor should be mapped to the object plane in k-space via optical design techniques and component placement in accordance with the present invention. It is to be appreciated that a plurality of other transforms or models can be utilized to configure and/or select one or more components in accordance with the present invention.
For example, wavelet transforms, Laplace (s-transforms), and z-transforms, as well as other transforms, can be similarly employed.
The k-space design methodology is unlike conventional optical systems designed according to geometric, paraxial ray-trace and optimization theory, since the k-space optimization facilitates that the spectral components of the object (e.g., tissue sample, particle, semiconductor) and the image are the same in k-space, and thus quantized.
Therefore, there are substantially no inherent limitations imposed on a Modulation Transfer Function (MTF) describing contrast versus resolution and absolute spatial resolution in the present invention. Quantization, for example, in k-space yields a substantially unitary Modulation Transfer Function not realized by conventional systems.
It is noted that high MTF, Spatial Resolution, and effective resolved image magnification can be achieved with much lower magnification objective lenses with desirable lower Numerical Apertures (e.g., generally less than about 50x with a numerical aperture of generally less than about 0.7) through "unit-mapping" of projected pixels in an "Intrinsic Spatial Filter" provided by the k-space design.
If desired, "infinity-corrected" objectives can be employed with associated optical component and illumination, as well as spectrum varying components, polarization varying components, and/or contrast or phase varying components. These components can be included in an optical path-length between an objective and the image lens within an "infinity space". Optical system accessories and variations can thus be positioned as interchangeable modules in this geometry. The k-space design, in contrast to conventional microscopic imagers that utilize "infinity-corrected" objectives, enables the maximum optimization of the infinity space geometry by the "unit-mapping" concept.
WO 03/005446 PCT/US02/21392 This implies that there is generally no specific limit to the number of additional components that can be inserted in the "infinity space" geometry as in conventional microscopic systems that typically specify no more than 2 additional components without optical correction.
The present invention also enables a "base-module" design that can be configured and reconfigured in operation for a plurality of different applications, if necessary, to employ either transmissive or reflected illumination, if desired. This includes substantially all typical machine vision illumination schemes (e.g., darkfield, brightfield, phase-contrast), and other microscopic transmissive techniques (Kohler, Abbe), in substantially any offset, and can include Epi-illumination and variants thereof. The systems of the present invention can be employed in a plurality of opto-mechanical designs that are robust, since the k-space design is substantially not sensitive to environmental and mechanical vibration and thus generally does not require the heavy structural mechanical design and isolation from vibration associated with conventional microscopic imaging instruments. Other features can include digital image processing, if desired, along with storage (e.g., local database, image data transmissions to remote computers for storage/analysis) and display of the images produced in accordance with the present invention (e.g., computer display, printer, film, and other output media). Remote signal processing of image data can be provided, along with communication and display of the image data via associated data packets that are communicated over a network or other medium, for example.
Referring initially to Fig. 1, an imaging system 10 is illustrated in accordance with an aspect of the present invention. The imaging system 10 includes a sensor 20 having one or more receptors, such as pixels or discrete light detectors (illustrated below in Fig. 3), operably associated with an image transfer medium 30. The image transfer medium 30 is adapted or configured to scale the proportions of the sensor 20 at an image plane established by the position of the sensor 20 to an object field of view illustrated at reference numeral 34. A planar reference 36 of X and Y coordinates is provided to illustrate the scaling or reduction of the apparent or virtual size of the sensor 20 to the object field of view 34. Direction arrows 38 and 40 illustrate the direction of reduction of the apparent size of the sensor 20 toward the object field of view 34.
The object field of view 34 established by the image transfer medium 30 is related to the position of an object plane 42 that includes one or more items under microscopic examination (not shown). It is noted that the sensor 20 can be substantially any size, shape and/or technology (e.g., digital sensor, analog sensor, Charge Coupled Device (CCD) sensor, CMOS sensor, Charge Injection Device (CID) sensor, an array sensor, a linear scan sensor) including one or more receptors of various sizes and shapes, the one or more receptors being similarly sized or proportioned on a respective sensor to be responsive to light (e.g., visible, non-visible) received from the items under examination in the object field of view 34. As light is received from the object field of view 34, the sensor 20 provides an output 44 that can be directed to a local or remote storage such as a memory (not shown) and displayed from the memory via a computer and associated display, for example, without substantially any intervening digital processing (e.g., straight bit map from sensor memory to display), if desired. It is noted that local or remote signal processing of the image data received from the sensor 20 can also occur.
For example, the output 44 can be converted to electronic data packets and transmitted to a remote system over a network and/or via wireless transmissions systems and protocols for further analysis and/or display. Similarly, the output 44 can be stored in a local computer memory before being transmitted to a subsequent computing system for further analysis and/or display.
The scaling provided by the image transfer medium 30 is determined by a novel k-space configuration or design within the medium that promotes predetermined k-space frequencies of interest and mitigates frequencies outside the predetermined frequencies.
This has the effect of a band-pass filter of the spatial frequencies within the image transfer medium 30 and notably defines the imaging system 10 in terms of resolution rather than magnification. As will be described in more detail below, the resolution of the imaging system 10 determined by the k-space design promotes a plurality of features in a displayed or stored image such as having high effective resolved magnification, high absolute spatial resolution, large depth of field, larger working distances, and a unitary Modulation Transfer Function as well as other features.
In order to determine the k-space frequencies, a "pitch" or spacing is determined between adjacent receptors on the sensor 20, the pitch related to the center-to-center distance of adjacent receptors and about the size or diameter of a single receptor. The pitch of the sensor 20 defines the Nyquist "cut-off" frequency band of the sensor. It is this frequency band that is promoted by the k-space design, whereas other frequencies are mitigated. In order to illustrate how scaling is determined in the imaging system 10, a small or diffraction limited spot or point 50 is illustrated at the object plane 42. The diffraction limited point 50 represents the smallest resolvable object determined by optical characteristics within the image transfer medium 30 and is described in more detail below. A scaled receptor 54, depicted in front of the field of view 34 for exemplary purposes, and having a size determined according to the pitch of the sensor 20, is matched or scaled to be about the same size in the object field of view 34 as the diffraction limited point 50. In other words, the size of any given receptor at the sensor 20 is effectively reduced in size via the image transfer medium 30 to be about the same size (or matched in size) as the diffraction limited point 50. This also has the effect of filling the object field of view 34 with substantially all of the receptors of the sensor 20, the respective receptors being suitably scaled to be similar in size to the diffraction limited point 50. As will be described in more detail below, the matching/mapping of sensor characteristics to the smallest resolvable object or point within the object field of view 34 defines the imaging system 10 in terms of absolute spatial resolution and thus enhances the operating performance of the system.
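As a concrete illustration of the pitch/Nyquist relationship just described, the sketch below computes the sensor's Nyquist cut-off from its receptor pitch and the reduction the image transfer medium must apply for unit-mapping. This is a minimal sketch under assumed values (10 micron pitch, 1 micron diffraction-limited point); it is not code from the patent.

```python
# Minimal sketch (assumed values): sensor Nyquist cut-off from receptor
# pitch, and the apparent reduction needed to unit-map a receptor onto
# the diffraction-limited point in the object field of view.

pixel_pitch_um = 10.0        # center-to-center receptor spacing (assumed)
diffraction_point_um = 1.0   # smallest resolvable point at the object plane (assumed)

# Nyquist: the finest frequency the sensor can sample is one cycle per two pixels.
nyquist_cutoff_cyc_per_mm = 1000.0 / (2.0 * pixel_pitch_um)

# Unit-mapping: each receptor must appear about as small as the point 50.
required_reduction = pixel_pitch_um / diffraction_point_um

print(f"Nyquist cut-off at the sensor: {nyquist_cutoff_cyc_per_mm:.0f} cycles/mm")
print(f"apparent receptor reduction: {required_reduction:.0f}x")
```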
An illumination source 60 can be provided with the present invention in order that photons from that source can be transmitted through and/or reflected from objects in the field of view 34 to enable activation of the receptors in the sensor 20. It is noted that the present invention can potentially be employed without an illumination source 60 if potential self-luminous objects (e.g., fluorescent or phosphorescent biological or organic material samples, metallurgical, mineral, and/or other inorganic materials, and so forth) emit enough radiation to activate the sensor 20. Light Emitting Diodes, however, provide an effective illumination source 60 in accordance with the present invention. Substantially any illumination source 60 can be applied, including coherent and non-coherent sources, and visible and non-visible wavelengths. However, for non-visible wavelength sources, the sensor 20 would also be suitably adapted. For example, for an infrared or ultraviolet source, an infrared or ultraviolet sensor 20 would be employed, respectively. Other illumination sources 60 can include wavelength-specific lighting, broad-band lighting, continuous lighting, strobed lighting, Kohler illumination, Abbe illumination, phase-contrast illumination, darkfield illumination, brightfield illumination, and Epi illumination. Transmissive or reflective (e.g., specular and diffuse) lighting techniques can also be applied.
Referring now to Fig. 2, a system 100 illustrates an image transfer medium in accordance with an aspect of the present invention. The image transfer medium depicted in Fig. 1 can be provided according to the k-space design concepts described above, and more particularly via a k-space filter 110 adapted, configured and/or selected to promote a band of predetermined k-space frequencies 114 and to mitigate frequencies outside of this band. This is achieved by determining a pitch, which is the distance between adjacent receptors 116 in a sensor (not shown), and sizing optical media within the filter 110 such that the pitch of the receptors 116 is matched in size with a diffraction-limited spot 120. The diffraction-limited spot 120 can be determined from the optical characteristics of the media in the filter 110. For example, the Numerical Aperture of an optical medium such as a lens defines the smallest object or spot that can be resolved by the lens. The filter 110 performs a k-space transformation such that the size of the pitch is effectively matched, "unit-mapped", projected, correlated, and/or reduced to the size or scale of the diffraction limited spot 120.
It is to be appreciated that a plurality of optical configurations can be provided to achieve the k-space filter 110. One such configuration can be provided by an aspherical lens 124 adapted to perform the k-space transformation and reduction from sensor space to object space. Yet another configuration can be provided by a multiple lens arrangement 128, wherein the lens combination is selected to provide the filtering and scaling. Still yet another configuration can employ a fiber optic taper 132 or image conduit, wherein multiple optical fibers or an array of fibers are configured in a funnel-shape to perform the mapping of the sensor to the object field of view. It is noted that the fiber optic taper 132 is generally in physical contact between the sensor and the object under examination (e.g., in contact with a microscope slide). Another possible k-space filter 110 arrangement employs a holographic (or other diffractive or phase structure) optical element 136, wherein a substantially flat optical surface is configured via a hologram (or other diffractive or phase structure, e.g., computer-generated, optically generated, and/or other method) to provide the mapping in accordance with the present invention.
The k-space optical design as enabled by the k-space filter 110 is based upon the "effective projected pixel-pitch" of the sensor, which is a figure derived from following ("projecting") the physical size of the sensor array elements back through the optical system to the object plane. In this manner, conjugate planes and optical transform spaces are matched to the Nyquist cut-off of the effective receptor or pixel size. This maximizes the effective resolved image magnification and the Field Of View as well as the Depth Of Field and the Absolute Spatial Resolution. Thus, a novel application of optical theory is provided that does not rely on conventional geometric optical design parameters of paraxial ray-tracing which govern conventional optics and imaging combinations. This can further be described in the following manner.
A Fourier transform of an object and an image is formed (by an optical system) in k-space (also referred to as "reciprocal-space"). It is this transform that is operated on for image optimization by the k-space design of the present invention. For example, the optical media employed in the present invention can be designed with standard, relatively inexpensive "off-the-shelf" components having a configuration which defines that the object and image space are "unit-mapped" or "unit-matched" for substantially all image and object fields. A small Blur-circle or diffraction-limited spot 120 at the object plane is defined by the design to match the pixels in the image plane (e.g., at the image sensor of choice) with substantially one-to-one correspondence, and thus the Fourier transforms of pixelated arrays can be matched. This implies that, optically by design, the Blur-circle is scaled to be about the same size as the receptor or pixel pitch. The present invention is defined such that it constructs an Intrinsic Spatial Filter such as the k-space filter 110.
Such a design definition and implementation enables the spectral components of both the object and the image in k-space to be about the same or quantized. This also defines that the Modulation Transfer Function (MTF) (the comparison of contrast to spatial resolution) of the sensor is matched to the MTF of the object plane.
Fig. 3 illustrates an optical system 200 in accordance with an aspect of the present invention. The system 200 includes a sensor 212 having a plurality of receptors or sensor pixels 214. For example, the sensor 212 is an M by N array of sensor pixels 214, having M rows and N columns (e.g., 640 x 480, 512 x 512, 1280 x 1024, and so forth), M and N being integers respectively. Although a rectangular sensor 212 having generally square pixels is depicted, it is to be understood and appreciated that the sensor can be substantially any shape (e.g., circular, elliptical, hexagonal, rectangular, and so forth). It is to be further appreciated that respective pixels 214 within the array can also be substantially any shape or size, the pixels in any given array 212 being similarly sized and shaped in accordance with an aspect of the present invention.
The sensor 212 can be substantially any technology (e.g., digital sensor, analog sensor, Charge Coupled Device (CCD) sensor, CMOS sensor, Charge Injection Device (CID) sensor, an array sensor, a linear scan sensor) including one or more receptors (or pixels) 214. According to one aspect of the present invention, each of the pixels 214 is similarly sized or proportioned and responsive to light (e.g., visible, non-visible) received from the items under examination, as described herein.
The sensor 212 is associated with a lens network 216, which is configured based on performance requirements of the optical system and the pitch size of the sensor 212. The lens network 216 is operative to scale (or project) proportions (e.g., pixels 214) of the sensor 212 at an image plane established by the position of the sensor 212 to an object field of view 220 in accordance with an aspect of the present invention. The object field of view 220 is related to the position of an object plane 222 that includes one or more items (not shown) under examination.
As the sensor 212 receives light from the object field of view 220, the sensor 212 provides an output 226 that can be directed to a local or remote storage such as a memory (not shown) and displayed from the memory via a computer and associated display, for example, without substantially any intervening digital processing (e.g., straight bit map from sensor memory to display), if desired. It is noted that local or remote signal processing of the image data received from the sensor 212 can also occur. For example, the output 226 can be converted to electronic data packets and transmitted to a remote system over a network for further analysis and/or display. Similarly, the output 226 can be stored in a local computer memory before being transmitted to a subsequent computing system for further analysis and/or display.
The scaling (or effective projecting) of pixels 214 provided by the lens network 216 is determined by a novel k-space configuration or design in accordance with an aspect of the present invention. The k-space design of the lens network 216 promotes predetermined k-space frequencies of interest and mitigates frequencies outside the predetermined frequency band. This has the effect of a band-pass filter of the spatial frequencies within the lens network 216 and notably defines the imaging system 200 in terms of resolution rather than magnification. As will be described below, the resolution of the imaging system 200 determined by the k-space design promotes a plurality of features in a displayed or stored image, such as having high "Effective Resolved Magnification" (a figure of merit described in the following), with related high absolute spatial resolution, large depth of field, larger working distances, and a unitary Modulation Transfer Function, as well as other features.
In order to determine the k-space frequencies, a "pitch" or spacing 228 is determined between adjacent receptors 214 on the sensor 212. The pitch (e.g., pixel pitch) corresponds to the center-to-center distance of adjacent receptors, indicated at 228, which is about the size or diameter of a single receptor when the sensor includes all equally sized pixels. The pitch 228 defines the Nyquist "cut-off" frequency band of the sensor 212. It is this frequency band that is promoted by the k-space design, whereas other frequencies are mitigated. In order to illustrate how scaling is determined in the imaging system 200, a point 230 of a desired smallest resolvable spot size is illustrated at the object plane 222. The point 230, for example, can represent the smallest resolvable object determined by optical characteristics of the lens network 216. That is, the lens network is configured to have optical characteristics (e.g., magnification, numerical aperture) so that respective pixels 214 are matched or scaled to be about the same size in the object field of view 220 as the desired minimum resolvable spot size of the point 230.
For purposes of illustration, a scaled receptor 232 is depicted in front of the field of view
By way of illustration, the lens network 216 is designed to effectively reduce the size of each given receptor pixel) 214 at the sensor 212 to be about the same size matched in size) to the size of the point 230, which is typically the minimum spot size resolvable by the system 210. It is to be understood and appreciated that the point 230 can be selected to a size representing the smallest resolvable object determined by optical characteristics within the lens network 216 as determined by diffraction rules diffraction limited spot size). The lens network 216 thus can be designed to effectively scale each pixel 214 of the sensor 212 to any size that is equal to or greater than the diffraction limited size. For example, the resolvable spot size can be selected to provide for any desired image resolution that meets such criteria.
After the desired resolution (resolvable spot size) is selected, the lens network 216 is designed to provide the magnification to scale the pixels 214 to the object field of view 220 accordingly. This has the effect of filling the object field of view 220 with substantially all of the receptors of the sensor 212, the respective receptors being suitably scaled to be similar in size to the point 230, which corresponds to the desired resolvable spot size. The matching/mapping of sensor characteristics to the desired (e.g., smallest) resolvable object or point 230 within the object field of view 220 defines the imaging system 200 in terms of absolute spatial resolution and enhances the operating performance of the system in accordance with an aspect of the present invention.
By way of further illustration, in order to provide unit-mapping according to this example, assume that the sensor array 212 provides a pixel pitch 228 of about 10.0 microns. The lens network 216 includes an objective lens 234 and a secondary lens 236.
For example, the objective lens 234 can be set at infinite conjugate to the secondary lens 236, with the spacing between the objective and secondary lenses being flexible. The lenses 234 and 236 are related to each other so as to achieve a reduction from sensor space, defined at the sensor array 212, to object space, defined at the object plane 222. It is noted that substantially all of the pixels 214 are projected into the object field of view 220, which is defined by the objective lens 234. For example, the respective pixels 214 are scaled through the objective lens 234 to about the dimensions of the desired minimum resolvable spot size. In this example, the desired resolution at the object plane 222 is one micron. Thus, a magnification of ten times is operative to back project a ten micron pixel to the object plane 222 and reduce it to a size of one micron.
The reduction in size of the array 212 and associated pixels 214 can be achieved by selecting the transfer lens 236 to have a focal length "D2" (from the array 212 to the transfer lens 236) of about 150 millimeters and by selecting the objective lens to have a focal length "D1" (from the objective lens 234 to the object plane 222) of about 15 millimeters, for example, the ten-times reduction following from the ratio 150/15. In this manner, the pixels 214 are effectively reduced in size to about 1.0 micron per pixel, thus matching the size of the desired resolvable spot 230 and filling the object field of view 220 with a "virtually-reduced" array of pixels. It is to be understood and appreciated that other arrangements of one or more lenses can be employed to provide the desired scaling.
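The worked example above reduces to a simple focal-length ratio. The sketch below restates that arithmetic; it assumes the textbook relation that, for an objective/transfer-lens pair used at infinite conjugates, the lateral scaling from sensor space to object space is D1/D2 (and the 15 mm value is itself reconstructed from the stated ten-times magnification).

```python
# Sketch of the worked example above (assumed relation: at infinite
# conjugates the sensor-to-object scaling is f_objective / f_transfer).

f_transfer_mm = 150.0    # "D2", transfer (secondary) lens, per the example
f_objective_mm = 15.0    # "D1", objective; reconstructed from the 10x ratio
pixel_pitch_um = 10.0    # sensor pixel pitch, per the example

scale = f_objective_mm / f_transfer_mm            # 0.1 -> 10x reduction
projected_pixel_um = pixel_pitch_um * scale

print(f"each {pixel_pitch_um:.0f} um pixel back-projects to "
      f"{projected_pixel_um:.1f} um at the object plane")  # ~1.0 um
```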
In view of the foregoing description, those skilled in the art will understand and appreciate that the optical media (e.g., lens network 216) can be designed, in accordance with an aspect of the present invention, with standard, relatively inexpensive "off-the-shelf" components having a configuration that defines that the object and image space are "unit-mapped" or "unit-matched" for substantially all image and object fields. The lens network 216 and, in particular, the objective lens 234, performs a Fourier transform of an object and an image in k-space (also referred to as "reciprocal-space"). It is this transform that is operated on for image optimization by the k-space design of the present invention.
A small Blur-circle or Airy disk at the object plane is defined by the design to match the pixels in the image plane (e.g., at the image sensor of choice) with substantially one-to-one correspondence with the Airy disk, and thus the Fourier transforms of pixelated arrays can be matched. This implies that, optically by design, the Airy disk is scaled through the lens network 216 to be about the same size as the receptor or pixel pitch. As mentioned above, the lens network 216 is defined so as to construct an Intrinsic Spatial Filter (e.g., a k-space filter). Such a design definition and implementation enables the spectral components of both the object and the image in k-space to be about the same or quantized. This also defines that a Modulation Transfer Function (MTF) (the comparison of contrast to spatial resolution) of the sensor can be matched to the MTF of the object plane in accordance with an aspect of the present invention.
As illustrated in Fig. 3, k-space is defined as the region between the objective lens 234 and the secondary lens 236. It is to be appreciated that substantially any optical media, lens type and/or lens combination that reduces, maps and/or projects the sensor array 212 to the object field of view 220 in accordance with unit or k-space mapping as described herein is within the scope of the present invention.
To illustrate the novelty of the exemplary lens/sensor combination depicted in Fig. 3, it is noted that conventional objective lenses, sized according to conventional geometric paraxial ray techniques, are generally sized according to the magnification, Numerical Aperture, focal length and other parameters provided by the objective. Thus, the objective lens would be sized with a greater focal length than subsequent lenses that approach or are closer to the sensor (or eyepiece in a conventional microscope) in order to provide magnification of small objects. This can result in magnification of the small objects at the object plane being projected as a magnified image of the objects across "portions" of the sensor, and results in known detail blur (e.g., Rayleigh diffraction and other limitations in the optics), empty magnification problems, and Nyquist aliasing among other problems at the sensor. The k-space design of the present invention operates in an alternative manner to conventional geometrical paraxial ray design principles. That is, the objective lens 234 and the secondary lens 236 operate to provide a reduction in size of the sensor array 212 to the object field of view 220, as demonstrated by the relationship of the lenses.
An illumination source 240 can be provided with the present invention in order that photons from that source can be transmitted through and/or reflected from objects in the field of view 220 to enable activation of the receptors in the sensor 212. It is noted that the present invention can potentially be employed without an illumination source 240 if potential self-luminous objects (e.g., objects or specimens with emissive characteristics as previously described) emit enough radiation to activate the sensor 212. Substantially any illumination source 240 can be applied, including coherent and non-coherent sources, and visible and non-visible wavelengths. However, for non-visible wavelength sources, the sensor 212 would also be suitably adapted. For example, for an infrared or ultraviolet source, an infrared or ultraviolet sensor 212 would be employed, respectively. Other suitable illumination sources 240 can include wavelength-specific lighting, broad-band lighting, continuous lighting, strobed lighting, Kohler illumination, Abbe illumination, phase-contrast illumination, darkfield illumination, brightfield illumination, Epi illumination, and the like. Transmissive or reflective (e.g., specular and diffuse) lighting techniques can also be applied.
Fig. 4 illustrates a graph 300 of mapping characteristics and comparison between projected pixel size on the X-axis and diffraction-limited spot resolution size on the Y-axis. An apex 310 of the graph 300 corresponds to unit mapping between projected pixel size and the diffraction limited spot size, which represents an optimum relationship between a lens network and a sensor in accordance with the present invention.
It is to be appreciated that the objective lens 234 (Fig. 3) should generally not be selected such that the diffraction-limited size of the smallest resolvable objects is smaller than a projected pixel size. If so, "economic waste" can occur, wherein more precise information is lost (e.g., by selecting an objective lens more expensive than required, such as one having a higher numerical aperture). This is illustrated to the right of a dividing line 320 at reference 330, depicting a projected pixel 340 larger than two smaller diffraction spots 350. In contrast, where an objective is selected with diffraction-limited performance larger than the projected pixel size, blurring and empty magnification can occur. This is illustrated to the left of line 320 at reference numeral 360, wherein a projected pixel 370 is smaller than a diffraction-limited object 380. It is to be appreciated, however, that even if substantially one-to-one correspondence is not achieved between projected pixel size and the diffraction-limited spot, a system can be configured with less than optimum matching (e.g., 0.1-20%, or even 95%, down from the apex 310 on the graph 300, to the left or right of the line 320) and still provide suitable performance in accordance with an aspect of the present invention. Thus, less than optimal matching is intended to fall within the spirit and the scope of the present invention.
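The matching criterion of Fig. 4 can be summarized as a simple ratio test. The sketch below is a hedged illustration — the 20% tolerance and the helper name are our assumptions, not values from the patent — that classifies a lens/sensor pairing relative to the unit-mapping apex.

```python
# Hedged sketch of the Fig. 4 matching criterion (tolerance assumed):
# compare the back-projected pixel with the diffraction-limited spot.

def unit_mapping_check(projected_pixel_um: float,
                       diffraction_spot_um: float,
                       tolerance: float = 0.20) -> str:
    """Classify the pixel/spot match relative to one-to-one unit-mapping."""
    ratio = projected_pixel_um / diffraction_spot_um
    if abs(ratio - 1.0) <= tolerance:
        return f"near unit-mapping (ratio {ratio:.2f})"
    if ratio > 1.0:
        # Pixel larger than the spot: the optics resolve detail the sensor
        # cannot record ("economic waste", right of line 320).
        return f"under-sampled: pixel {ratio:.2f}x the spot size"
    # Pixel smaller than the spot: blurring / empty magnification
    # (left of line 320).
    return f"over-sampled: pixel only {ratio:.2f}x the spot size"

print(unit_mapping_check(1.0, 1.0))   # optimum apex 310
print(unit_mapping_check(2.0, 1.0))   # economic waste
print(unit_mapping_check(0.5, 1.0))   # empty magnification
```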
It is further to be appreciated that the diameter of the lenses in the system, as illustrated in Fig. 3 for example, should be sized such that when a Fourier Transform is performed from object space to sensor space, spatial frequencies of interest that are in the band pass region described above (e.g., frequencies utilized to define the size and shape of a pixel) are substantially not attenuated. This generally implies that larger diameter lenses (e.g., about 10 to 100 millimeters) should be selected to mitigate attenuation of the spatial frequencies of interest.
Referring now to Fig. 5, a Modulation Transfer Function 400 is illustrated in accordance with the present invention. On a Y-axis, modulation percentage from 0 to 100% is illustrated, defining the percentage of contrast between black and white. On an X-axis, Absolute Spatial Resolution is illustrated in terms of microns of separation. A line 410 illustrates that modulation percentage remains substantially constant at about 100% over varying degrees of spatial resolution. Thus, the Modulation Transfer Function is about 1 for the present invention, up to about a limit imposed by the signal to noise sensitivity of the sensor. For illustrative purposes, a conventional optics design Modulation Transfer Function is illustrated by line 420, which may be an exponential curve with generally asymptotic limits characterized by generally decreasing spatial resolution with decreasing modulation percentage (contrast).
Fig. 6 illustrates a quantifiable Figure of Merit (FOM) for the present invention, defined as dependent on two primary factors: Absolute Spatial Resolution (RA, in microns), depicted on the Y axis, and the Field Of View (F, in microns), depicted on the X axis of a graph 500. A reasonable FOM, called the "Spatial Field Number" (S), can be expressed as the ratio of these two quantities, with higher values of S being desirable for imaging, as follows:

S = F / RA

A line 510 illustrates that the FOM remains substantially constant across the field of view and over different values of absolute spatial resolution, which is an enhancement over conventional systems.
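The Spatial Field Number is straightforward to evaluate. The sketch below (assumed example values; the helper name is ours) computes S = F/RA for a unit-mapped 640 x 480 sensor whose pixels are projected to 1 micron spots.

```python
# Minimal sketch of the figure of merit above: S = F / RA.

def spatial_field_number(fov_um: float, abs_resolution_um: float) -> float:
    """Spatial Field Number: field of view divided by absolute resolution."""
    return fov_um / abs_resolution_um

# Assumed example: 640 pixels unit-mapped to 1 um spots -> 640 um field
# of view at RA = 1 um absolute spatial resolution.
print(spatial_field_number(fov_um=640.0, abs_resolution_um=1.0))  # 640.0
```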
Figs. 7, 8, 14, 15, and 16 illustrate methodologies to facilitate imaging performance in accordance with the present invention. While, for purposes of simplicity of explanation, the methodologies may be shown and described as a series of acts, it is to be understood and appreciated that the present invention is not limited by the order of acts, as some acts may, in accordance with the present invention, occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the present invention.
Turning now to Fig. 7 and proceeding to 610, lenses are selected having diffraction-limited characteristics at about the same size as a pixel in order to provide unit-mapping and optimization of the k-space design. At 614, lens characteristics are also selected to mitigate reduction of spatial frequencies within k-space. As described above, this generally implies that larger diameter optics are selected in order to mitigate attenuation of desired k-space frequencies of interest. At 618, a lens configuration is selected such that pixels, having a pitch at the image plane defined by the position of a sensor, are scaled according to the pitch to an object field of view at about the size of a diffraction-limited spot (e.g., unit-mapped) within the object field of view. At 622, an image is generated by outputting data from a sensor for real-time monitoring and/or storing the data in memory for direct display to a computer display and/or subsequent local or remote image processing and/or analysis within the memory.
Fig. 8 illustrates a methodology that can be employed to design an optical/imaging system in accordance with an aspect of the present invention. The methodology begins at 700 in which a suitable sensor array is chosen for the system. The sensor array includes a matrix of receptor pixels having a known pitch size, usually defined by the manufacturer.
The sensor can be substantially any shape (e.g., rectangular, circular, square, triangular, and so forth). By way of illustration, assume that a sensor of 640x480 pixels having a pitch size of 10 μm is chosen. It is to be understood and appreciated that an optical system can be designed for any type and/or size of sensor array in accordance with an aspect of the present invention.
Next at 710, an image resolution is defined. The image resolution corresponds to the smallest desired resolvable spot size at the image plane. The image resolution can be defined based on the application(s) for which the optical system is being designed, such as any resolution that is greater than or equal to a smallest diffraction limited size. Thus, it is to be appreciated that resolution becomes a selectable design parameter that can be tailored to provide desired image resolution for virtually any type of application. In contrast, most conventional systems tend to limit resolution according to Rayleigh diffraction, which provides that intrinsic spatial resolution of the lenses cannot exceed limits of diffraction for a given wavelength.
After selecting a desired resolution (710), a suitable amount of magnification is determined at 720 to achieve such resolution. For example, the magnification is functionally related to the pixel pitch of the sensor array and the smallest resolvable spot size. The magnification can be expressed as follows:

M = x / y (Eq. 1)

wherein: M is the magnification; x is the pixel pitch of the sensor array; and y is the desired image resolution (minimum spot size).
So, for the above example, where the pixel pitch is 10 μm, and assuming a desired image resolution of 1 μm, Eq. 1 provides an optical system of power ten. That is, the lens system is configured to back-project each 10 μm pixel to the object plane and reduce respective pixels to the resolvable spot size of 1 micron.
The methodology of Fig. 8 also includes a determination of a Numerical Aperture at 730. The Numerical Aperture (NA) is determined according to well-established diffraction rules that relate the NA of the objective lens to the minimum resolvable spot size determined at 710 for the optical system. By way of example, the calculation of NA can be based on the following equation:

NA = λ / 2y (Eq. 2)

where: λ is the wavelength of light being used in the optical system; and y is the minimum spot size (e.g., as determined at 710).
Continuing with the example in which the optical system has a resolved spot size of y = 1 micron, and assuming a wavelength of about 500 nm (e.g., green light), an NA = 0.25 satisfies Eq. 2. It is noted that relatively inexpensive, commercially available objectives of power 10 provide numerical apertures of 0.25.
It is to be understood and appreciated that the relationship between NA, wavelength and resolution represented by Eq. 2 can be expressed in different ways according to various factors that account for the behavior of objectives and condensers.
Thus, the determination at 730, in accordance with an aspect of the present invention, is not limited to any particular equation, but instead simply obeys known general physical laws in which NA is functionally related to the wavelength and resolution. After the lens parameters have been designed according to the selected sensor (700), the corresponding optical components can be arranged to provide an optical system (740) in accordance with an aspect of the present invention.
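The Fig. 8 flow (choose a sensor at 700, choose a resolution at 710, compute magnification via Eq. 1 at 720, compute NA via Eq. 2 at 730) can be captured in a few lines. The sketch below is a non-authoritative restatement: the function name is ours, and Eq. 2 is used in the λ/2y form consistent with the worked example.

```python
# Sketch of the Fig. 8 design methodology using Eq. 1 and Eq. 2.

def design_k_space_system(pixel_pitch_um: float,
                          target_resolution_um: float,
                          wavelength_um: float) -> dict:
    """Steps 710-730: magnification M = x/y (Eq. 1), NA = lambda/2y (Eq. 2)."""
    magnification = pixel_pitch_um / target_resolution_um
    numerical_aperture = wavelength_um / (2.0 * target_resolution_um)
    return {"magnification": magnification,
            "numerical_aperture": numerical_aperture}

# Worked example from the text: 10 um pitch, 1 um resolution, ~500 nm light
# -> power 10 optics with NA = 0.25 (an inexpensive stock objective).
print(design_k_space_system(10.0, 1.0, 0.5))
# {'magnification': 10.0, 'numerical_aperture': 0.25}
```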
Assume, for purposes of illustration, that the example optical system created according to the methodology of Fig. 8 is to be employed for microscopic-digital imaging.
By way of comparison, in classical microscopy, in order to image and resolve structures of a size approaching 1 micron (and below), magnifications of many hundreds usually are required. The basic reason for this is that such optics conventionally have been designed for the situation when the sensor of choice is the human eye. In contrast, the methodology of Fig. 8 designs the optical system in view of the sensor, which affords significant performance increases at reduced cost.
In the k-space design methodology, according to an aspect of the present invention, the optical system is designed around a discrete sensor that has known fixed dimensions. As a result, the methodology can provide a far more straightforward, robust, and inexpensive optical system design approach: "back-project" the sensor size onto the object plane and calculate a magnification factor. A second part of the methodology facilitates that the optics that provide the magnification have a sufficient NA to optically resolve a spot of similar dimensions as the back-projected pixel. Advantageously, an optical system designed in accordance with an aspect of the present invention can utilize custom and/or off-the-shelf components. Thus, for this example, inexpensive optics can be employed in accordance with an aspect of the present invention to obtain suitable results, and well-corrected microscope optics are relatively inexpensive. If custom-designed optics are utilized, in accordance with an aspect of the present invention, then the range of permissible magnifications and numerical apertures becomes substantial, and some performance gains can be realized over the use of off-the-shelf optical components.
In view of the concepts described above in relation to Figs. 1-8, a plurality of related imaging applications can be enabled and enhanced by the present invention. For example, these applications can include, but are not limited to, imaging, control, inspection, microscopy and/or other automated analysis such as:

  • Bio-medical analysis (e.g., cell colony counting, histology, frozen sections, cellular cytology, haematology, pathology, oncology, fluorescence, interference, phase and many other clinical microscopy applications);
  • Particle sizing applications (e.g., for pharmaceutical manufacturers, paint manufacturers, cosmetics manufacturers, food process engineering, and others);
  • Air quality monitoring and airborne particulate measurement (e.g., clean room certification, environmental certification, and so forth);
  • Optical defect analysis, and other requirements for high resolution microscopic inspection of both transmissive and opaque materials (as in metallurgy, automated semiconductor inspection and analysis, automated vision systems, 3-D imaging and so forth); and
  • Imaging technologies such as cameras, copiers, FAX machines and medical systems.
Figs. 9, 10, 11, 12, and 13 illustrate possible example systems that can be constructed employing the concepts previously described in relation to Figs. 1-8.
Fig. 9 is a flow diagram of light paths in an imaging system 800 adapted in accordance with the present invention.
The system 800 employs a light source 804 emitting illuminating light that is received by a light condenser 808. Output from the light condenser 808 can be directed by a fold mirror 812 to a microscope condenser 816 that projects illuminating light onto a slide stage 820, wherein an object (not shown, positioned on top of, or within the slide stage) can be imaged in accordance with the present invention. The slide stage 820 can be automatically positioned (and/or manually) via a computer 824 and associated slide feed 828 in order to image one or more objects in a field of view defined by an objective lens 832. It is noted that the objective lens 832 and/or other components depicted in the system 800 may be adjusted manually and/or automatically via the computer 824 and associated controls (not shown) (e.g., servo motors, tube slides, linear and/or rotary position encoders, optical, magnetic, electronic, or other feedback mechanisms, control software, and so forth) to achieve different and/or desired image characteristics (e.g., magnification, focus, which objects appear in the field of view, depth of field, and so forth).
Light output from the objective lens 832 can be directed through an optional beam splitter 840, wherein the beam splitter 840 is operative with an alternative epi-illumination section 842 (to light objects from above slide stage 820) including light shaping optics 844 and associated light source 848. Light passing through the beam splitter 840 is received by an image forming lens 850. Output from the image forming lens 850 can be directed to a CCD or other imaging sensor or device 854 via a fold mirror 860. The CCD or other imaging sensor or device 854 converts the light received from the object to digital information for transmission to the computer 824, wherein the object image can be displayed to a user in real-time and/or stored in memory at 864. As noted above, the digital information defining the image captured by the CCD or other imaging sensor or device 854 can be routed as bit-map information to the display/memory 864 by the computer 824. If desired, image processing such as automatic comparisons with predetermined samples or images can be performed to determine an identity of and/or analyze the object under examination. This can also include employment of substantially any type of image processing technology or software that can be applied to the captured image data within the memory 864.
Fig. 10 is a system 900 depicting an exemplary modular approach to imaging design in accordance with an aspect of the present invention. The system 900 can be based on a sensor array 910 (provided, e.g., in an off-the-shelf camera) with a pixel pitch of approximately 8 microns (or other dimension), for example, wherein array sizes can vary from 640x480 to 1280x1024 (or other dimensions as noted above). The system 900 includes a modular design wherein a respective module is substantially isolated from another module, thus mitigating alignment tolerances.
The modules can include: a camera/sensor module 914, including an image-forming lens 916 and/or fold mirror 918; an epi-illumination module 920 for insertion into a k-space region 922; a sample holding and presentation module 924; a light-shaping module 930 including a condenser 934; and a sub-stage lighting module 940.
It is noted that the system 900 can advantageously employ commercially-available components such as, for example: condenser optics 934 (NA 1) for the light presentation (e.g., Olympus U-SC-2); and standard plan achromatic objective lenses 944 of power and numerical aperture (4x, 0.10), (10x, 0.25), (20x, 0.40), (40x, 0.65) (e.g., Olympus 1-UB222, 1-UB223, 1-UB225, 1-UB227), selected to satisfy the desired characteristic that for a given magnification, the projected pixel pitch at the object plane is similar in dimensions to the diffraction-limited resolved spot of the optics.
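As a rough check of that characteristic, the following sketch back-projects the approximately 8-micron pixel pitch of the sensor array 910 through each listed objective power and compares it with a diffraction-limited spot estimate (the mid-visible wavelength and the Rayleigh-style spot formula are assumptions for illustration):

    # Sketch: projected pixel pitch at the object plane vs. diffraction spot.
    PITCH_UM = 8.0          # sensor pixel pitch from the example above
    WAVELENGTH_UM = 0.55    # assumed mid-visible wavelength
    objectives = [(4, 0.10), (10, 0.25), (20, 0.40), (40, 0.65)]  # (power, NA)

    for power, na in objectives:
        projected_pitch_um = PITCH_UM / power
        spot_um = 0.61 * WAVELENGTH_UM / na   # Rayleigh spot estimate
        print(f"{power:>2}x, NA {na:.2f}: pixel -> {projected_pitch_um:.2f} um, "
              f"spot ~ {spot_um:.2f} um")

For these assumptions the back-projected pixel is of the same order as, and somewhat finer than, the resolved spot at each power, consistent with adequate sampling of the optical resolution.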
The system 900 utilizes an infinity-space (k-space) between the objective lens 944 and the image-forming lens 916 in order to facilitate the insertion of auxiliary and/or additional optical components, modules, filters, and so forth in the k-space region at 922, such as for example when the image-forming lens 916 is adapted as an f = 150mm achromatic triplet. Furthermore, an infinity-space (k-space) between the objective lens 944 and the image-forming lens 916 can be provided in order to facilitate the injection of light (via a light-forming path) into an optical path for epi-illumination. For example, the light-forming path for epi-illumination can include: a light source 950, such as an LED driven from a current-stabilised supply (e.g., HP); a transmission hologram for source homogenisation and the imposition of a spatial virtual-source at 950 (e.g., POC light shaping diffuser polyester film, 30-degree FWHM);
a variable aperture at 960 to restrict the NA of the source 950 to that of the imaging optics, thereby mitigating the effect of scattered light entering the image-forming optical path (e.g., Thorlabs iris diaphragm SM1D12, 0.5-12.0 mm aperture); a collection lens at 960 employed to maximize the light gathered from the virtual source 950, and to match the k-space characteristics of the source to that of the imaging optics (e.g., f = 50mm aspheric lens, f = 50mm achromatic doublet); and a partially-reflective beam splitter 964 employed to form a coaxial light path and image path. For example, the optic 964 provides a partial reflectivity on a first surface (at an inclination of 45 degrees), and is broadband antireflection coated on a second surface.
The sub-stage lighting module 940 is provided by an arrangement that is substantially similar to that of the epi-illumination described above, for example: a light source 970 (an LED driven from a current-stabilised supply, e.g., HP); a transmission hologram (associated with light source 970) for the purposes of source homogenisation and the imposition of a spatial virtual-source (e.g., POC light shaping diffuser polyester film, 30-degree FWHM); a collection lens 974 employed to maximize the light gathered from the virtual source 970, and to match the k-space characteristics of the source to that of the imaging optics (e.g., f = 50mm aspheric lens, f = 50mm achromatic doublet); a variable aperture 980 to restrict the NA of the source 970 to that of the imaging optics, thereby mitigating the effect of scattered light entering the image-forming optical path (e.g., Thorlabs iris diaphragm SM1D12, 0.5-12.0 mm aperture); a mirror 988 utilized to turn the optical path through 90 degrees and provide fine-adjustment in order to accurately align the optical modules; and a relay lens (not shown) employed to accurately position the image of the variable aperture 980 onto the object plane (at slide 990) (e.g., f = 100mm simple plano-convex lens), thereby, along with suitable placement of a holographic diffuser, achieving Kohler illumination.
As described above, a computer 994 and associated display/memory 998 are provided to display in real-time and/or store/process digital image data captured in accordance with the present invention.
Fig. 11 illustrates a system 1000 in accordance with an aspect of the present invention. In this aspect, a sub-stage lighting module 1010 (e.g., Kohler, Abbe) can project light through a transmissive slide 1020 (object under examination not shown), wherein an achromatic objective lens 1030 receives light from the slide and directs the light to an image capture module at 1040. It is noted that the achromatic objective lens 1030 and/or slide 1020 can be manually and/or automatically controlled to position the object(s) under examination and/or position the objective lens.
Fig. 12 illustrates a system 1100 in accordance with an aspect of the present invention. In this aspect, a top-stage or epi-illumination lighting module 1110 can project light onto an opaque slide 1120 (object under examination not shown), wherein an objective lens 1130 (which can be a compound lens device or another type) receives light from the slide and directs the light to an image capture module at 1040. As noted above, the objective lens 1130 and/or slide 1120 can be manually and/or automatically controlled to position the object(s) under examination and/or position the objective lens. Fig. 13 depicts a system 1200 that is similar to the system 1000 in Fig. 11 except that a compound objective lens 1210 is employed in place of an achromatic objective lens.
The imaging systems and processes described above in connection with Figs. 1-13 may thus be employed to capture/process an image of a sample, wherein the imaging systems are coupled to a processor or computer that reads the image generated by the imaging systems and compares the image to a variety of images in an on-board data store implemented in any number of current memory technologies.
For example, the computer can include an analysis component to perform the comparison. Some of the many algorithms employed in image processing include, but are not limited to, convolution (on which many others are based), FFT, DCT, thinning (or skeletonisation), edge detection and contrast enhancement. These are usually implemented in software but may also use special-purpose hardware for speed. FFT (fast Fourier transform) is an algorithm for computing the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. It also addresses the essentially identical inverse concern of reconstructing a signal from the frequency data. DCT (discrete cosine transform) is a technique for expressing a waveform as a weighted sum of cosines. There are various extant programming languages and environments designed for image processing, which include but are not limited to IDL, Image Pro, Matlab, and many others. There are also no specific limits to the special and custom image processing algorithms that may be written to perform functional image manipulations and analyses.
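As an illustrative sketch of the two transforms just named (the NumPy/SciPy libraries and the synthetic test signal are assumptions for demonstration; the specification does not prescribe any particular implementation):

    import numpy as np
    from scipy.fft import fft, dct

    # A periodic sampling taken from a synthetic "real-world" signal:
    # 5 Hz and 12 Hz components sampled 256 times over one second.
    t = np.linspace(0.0, 1.0, 256, endpoint=False)
    signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

    spectrum = fft(signal)                       # component frequencies
    cosine_weights = dct(signal, norm='ortho')   # waveform as weighted cosines

    # The two strongest FFT bins recover the 5 Hz and 12 Hz components.
    strongest = np.argsort(np.abs(spectrum[:128]))[-2:]
    print(sorted(strongest.tolist()))            # -> [5, 12]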
The k-space design of the present invention also allows for direct optical correlation of the Fourier Frequency information contained in the image with stored information to perform real-time optically correlated image processed analyses of a given sample object.
Fig. 14 illustrates a particle sizing application 1300 that can be employed with the systems and processes previously described. Particle sizing can include real-time, closed/open loop monitoring, manufacturing with, and control of particles in view of automatically determined particle sizes in accordance with the k-space design concepts previously described. This can include automated analysis and detection techniques for various particles having similar or different sizes (n different sizes, n being an integer) and particle identification (of m-shaped/dimensioned particles, m being an integer). In one aspect of the present invention, desired particle size detection and analysis can be achieved via a direct measurement approach. This implies that the absolute spatial resolution per pixel relates directly (or substantially so) in units of linear measure to the imaged particles, without substantial account of the particle medium and associated particle distribution. Direct measurement generally does not create a model but rather provides a metrology and morphology of the imaged particles in any given sample. This mitigates processing of modelling algorithms, statistical algorithms, and other modelling limitations presented by current technology. Thus, the issue becomes one of sample handling and form that enhances the accuracy and precision of measurements, since the particle data is directly imaged and measured rather than modelled, if desired.
Proceeding to 1310 of the particle sizing application 1300, particle size image parameters are determined. For example, a basic device design can be configured for imaging at a desired Absolute Spatial Resolution per pixel and Effective Resolved Magnification as previously described. These parameters determine field of view (FOV), depth of field (DOF), and working distance, for example. Real-time measurement can be achieved by asynchronous imaging of a medium at selected timing intervals, in real-time at common video rates, and/or at image capture rates as desired. Real-time imaging can also be achieved by capturing images at selected times for subsequent image processing. Asynchronous imaging can be achieved by pulsing the instrument illumination at selected times and duty cycles for subsequent image processing.
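A minimal sketch of such a direct measurement, assuming a calibrated Absolute Spatial Resolution and a simple threshold/label segmentation (the 0.5-micron ASR, the threshold parameter, and the equivalent-circle convention are illustrative assumptions, not prescribed by the specification):

    import numpy as np
    from scipy import ndimage

    ASR_UM_PER_PIXEL = 0.5   # assumed calibrated spatial resolution per pixel

    def particle_diameters_um(image, threshold):
        """Directly measure equivalent-circle diameters of bright particles."""
        mask = image > threshold
        labels, count = ndimage.label(mask)          # segment the particles
        areas_px = ndimage.sum(mask, labels, index=range(1, count + 1))
        areas_um2 = np.asarray(areas_px) * ASR_UM_PER_PIXEL ** 2
        return np.sqrt(4.0 * areas_um2 / np.pi)      # diameters in microns

Because each pixel corresponds directly to a known linear measure, the diameters fall out of the image itself, which is the point of the direct-measurement approach described above.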
At 1320, a sample introduction process is selected for automated (or manual) analysis. Samples can be introduced into an imaging device adapted in accordance with the present invention in any of the following (but not limited to) imaging processes: 1) All previously described methods and transmissive media as well as: 2) Individual manual samples in cuvettes, slides, and/or transmissive medium.
3) Continuous flow of particles in stream of gas or liquid, for example.
4) With an imaging device configured for reflective imaging, samples may be opaque and presented on an opaque "carrier" (automated and/or manual) without substantial regard to the material analyzed.
At 1330, a process control and/or monitoring system is configured. This can provide real-time, closed loop and/or open loop monitoring, manufacturing (e.g., closing a loop around particle size), and control of processes by direct measurement of particle characteristics (e.g., size, shape, morphology, cross section, distribution, density, packing fraction, and other parameters can be automatically determined). It is to be appreciated that although direct measurement techniques are performed on a given particle sample, automated algorithms and/or processing can also be applied to the imaged sample if desired.
Moreover, a direct measurement-based particle characterization device can be installed at substantially any given point in a manufacturing process to monitor and communicate particle characteristics for process control, quality control, and so forth by direct measurement.
At 1340, a plurality of different sample types can be selected for analysis. For example, particle samples in any of the aforementioned forms can be introduced in continuous flow, periodic, and/or asynchronous processes for direct measurement in a device as part of a process closed-feedback-loop system to control, record, and/or communicate particle characteristics of a given sample type (open loop techniques can also be included if desired). Asynchronous and/or synchronous imaging can be employed (the first defines imaging initiated by a trigger signal generated by an event or object; the second defines imaging with a timing signal sent to trigger illumination). Asynchronous and/or synchronous imaging can be achieved by pulsing an illumination source to coincide with the desired image field at substantially any particle flow rate. This can be controlled by a computer, for example, and/or by a "trigger" mechanism, either mechanical, optical, and/or electronic, to "flash" solid state illumination on and off with a given duty cycle so that the image sensor captures, displays and records the image for processing and analysis. This provides a straightforward process of illuminating and imaging given that it effectively can be timed to "stop the action" or rather "freeze" the motion of the flowing particles in the medium. In addition, this enables a sample within the image field to be captured, with the particles within the field, for subsequent image processing and analysis.
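The timing budget behind "freezing" the motion can be sketched as a one-line calculation (the flow speed, spatial resolution, and blur tolerance below are illustrative assumptions):

    # The flash must be short enough that a particle moves less than the
    # allowed blur during the pulse. All numbers are assumptions.
    ASR_UM_PER_PIXEL = 0.5        # resolution at the object plane
    MAX_BLUR_PIXELS = 0.5         # tolerable motion blur per flash
    FLOW_SPEED_UM_PER_S = 50_000  # particle speed in the stream (5 cm/s)

    max_pulse_s = MAX_BLUR_PIXELS * ASR_UM_PER_PIXEL / FLOW_SPEED_UM_PER_S
    print(f"maximum flash duration ~ {max_pulse_s * 1e6:.1f} us")  # -> 5.0 us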
Real-time (or substantially real-time), closed loop and/or open loop monitoring, manufacturing with, and control of processes by k-space-based, direct measurement of particle characterization at 1340 is applicable to a broad range of processes including (but not limited to): ceramics, metal powders, pharmaceuticals, cement, minerals, ores, coatings, adhesives, pigments, dyes, carbon black, filter materials, explosives, food preparations, health and cosmetic emulsions, polymers, plastics, micelles, beverages, and many more particle-based substances requiring process monitoring and control.
Other applications include but are not limited to: Instrument calibration and standards; Industrial-hygiene research; Materials research; Energy and combustion studies; Diesel- and gasoline-engine emissions measurements; Industrial emissions sampling; Basic aerosol research; Environmental studies; Bio-aerosol detection; Pharmaceutical research; Health and agricultural experiments; Inhalation toxicology; and/or Filter testing.
At 1350, software and/or hardware based computerized image processing/analysis can occur. Images from a device adapted in accordance with the present invention can be processed in accordance with substantially any hardware and/or software process.
Software-based image processing can be achieved by custom software and/or commercially available software, since the image file formats are digital formats (e.g., bit maps of captured particles).
Analysis, characterization, and so forth can also be provided as follows: for example, analyses can be metrologic (direct-measurement based) and/or comparative (database based).
Comparative analyses can include comparisons to a database of image data for known particles and/or variants thereof. Advanced image processing can characterize and catalog images in real-time and/or periodic sample-measurements. Data can be discarded and/or recorded as desired, whereas data matching known sample characteristics can trigger a suitably selected response, for example. Furthermore, a device adapted in accordance with the present invention can be linked for communication in any data transmission process. This can include wireless, broadband, phone modem, standard telecom, Ethernet or other network protocols (e.g., Internet, TCP/IP, Bluetooth, cable TV transmissions, as well as others).
Fig. 15 illustrates a fluorescence application 1400 in accordance with an aspect of the present invention that can be employed with the systems and processes previously described. A k-space system is adapted in accordance with the present invention having a light system that includes a low intensity light source at 1410, such as a Light Emitting Diode (LED), emitting light having a wavelength of about 250 to about 400 nm (i.e., ultraviolet light). The LED can be employed to provide epi-illumination or trans-illumination as described herein (or another type). The use of an LED (or other low power UV light source) also enables waveguide illumination in which the UV excitation wavelength is introduced onto a planar surface supporting the object under test at 1420, such that evanescent-wave coupling of the UV light can excite fluorophores within the object. For example, the UV light can be provided at about a right angle to a substrate on which the object lies. At 1430, the LED (or other light source or combinations thereof) can emit light for a predetermined time period and/or be controlled in a strobe-like manner emitting pulses at a desired rate. At 1440, excitation is applied to the object for the period determined at 1430. At 1450, automated and/or manual analysis is performed on the object during (and/or thereabout) the excitation period.
By way of illustration, the object is sensitive to ultraviolet in that it fluoresces in response to excitation by UV light from the light source. Fluorescence is a condition of a material (organic or inorganic) in which the material continues to emit light while absorbing excitation light. Fluorescence can be an inherent property of a material (e.g., auto-fluorescence) or it can be induced, such as by employing fluorochrome stains or dyes. The dye can have an affinity to a particular protein or other receptiveness so as to facilitate discovering different conditions associated with the object. In one particular example, fluorescence microscopy and/or digital imaging provides a manner in which to study various materials that exhibit secondary fluorescence.
By way of further example, the UV LED (or other source) can produce intense flashes of UV radiation for a short time period, with an image being constructed by a sensor (a sensor adapted to the excitation wavelength) a short time later (e.g., milliseconds to seconds). This mode can be employed to investigate the time decay characteristics of the fluorescent components of the object (or sample) being tested. This may be important where two parts of the object (or different samples) may fluoresce substantially the same under continuous illumination, but may have differing emission decay characteristics.
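A sketch of how such decay characteristics might be estimated from delayed captures (the frame delays, synthetic intensities, and single-exponential model are assumptions for illustration):

    import numpy as np

    # Intensities sampled at known delays after a UV flash (synthetic data
    # generated from an assumed 5 ms single-exponential decay).
    delays_ms = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    intensity = 100.0 * np.exp(-delays_ms / 5.0)

    # Fit ln(I) = ln(I0) - t/tau by linear least squares.
    slope, log_i0 = np.polyfit(delays_ms, np.log(intensity), 1)
    tau_ms = -1.0 / slope
    print(f"estimated decay constant tau ~ {tau_ms:.1f} ms")  # -> 5.0 ms

Two regions that fluoresce identically under continuous illumination would, under this scheme, be distinguished by their fitted decay constants.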
As a result of using the low power UV light source, such as the LED, the light from the light source can cause at least a portion of the object under test to emit light, generally not in the ultraviolet wavelength. Because at least a portion of the object fluoresces, pre- or post-fluorescence images can be correlated relative to those obtained during fluorescence of the object to ascertain different characteristics of the object.
In contrast, most conventional fluorescence systems are configured to irradiate a specimen and then to separate the much weaker re-radiating fluorescent light from the brighter excitation light, typically through filters. In order to enable detectable fluorescence, such conventional systems usually require powerful light sources. For example, the light sources can be mercury or xenon arc (burner) lamps, which produce high-intensity illumination powerful enough to image fluorescence specimens. In addition to running hot (e.g., typically 100-250 Watt lamps), these types of light sources typically have short operating lives (e.g., 10-100 hours). In addition, a power supply for such conventional light sources often includes a timer to help track the number of use hours, as arc lamps tend to become inefficient and are more likely to shatter if utilized beyond their rated lifetime. Moreover, mercury burners generally do not provide even intensity across the spectrum from ultraviolet to infrared, as much of the intensity of the mercury burner is expended in the near ultraviolet. This often requires precision filtering to remove undesired light wavelengths.
Accordingly, it will be appreciated that using a UV LED, in accordance with an aspect of the present invention, provides a substantially even intensity at a desired UV wavelength and mitigates the power consumption and heat generated through its use. Additionally, the replacement cost of an LED light source is significantly less than that of conventional lamps.
Fig. 16 illustrates a thin films application 1500 in accordance with an aspect of the present invention. Films and thin films can be characterized in general terms as thin layers (varying from molecular thickness(es) to significant microscopic to macroscopic thickness(es)) of some material, or multiple materials, deposited in a manner suitable to respective materials onto various substrates of choice, and can include (but are not limited to) any of the following: metallic coatings (e.g., reflective, including partial, opaque, and transmissive), optical coatings (e.g., interference, transmission, anti-reflective, pass-band, blocking, protective, multi-coat, and so forth), plating (e.g., metallic, oxide, chemical, anti-oxidant, thermal, and so forth), electrically conductive layers (e.g., macro- and micro-circuit deposited and constructed), and optically conductive layers (e.g., deposited optical materials of varying index of refraction, micro- and macro-optical "circuits"). This can also include other coatings and layered film and film-like materials on any substrate which can be characterized by deposition in various manners so as to leave a desired layer of some material(s) on said substrate in a desired thickness, consistency, continuity, uniformity, adhesion, and other parameters associated with any given deposited film. Associated thin film analysis can include detection of micro bubbles, voids, microscopic debris, depositing flaws, and so forth.
Proceeding to 1510, a k-space system is configured for thin film analysis in accordance with an aspect of the present invention. The application of a k-space imaging device to the problem of thin-film inspection and characterization can be employed in identifying and characterizing flaws in a thin film or films, for example. Such a system can be adapted to facilitate: 1) manual observation of a substrate with deposited thin film of all types; 2) automatic observation/analysis and characterization of a substrate with deposited thin film of all types for pass-fail inspection; and 3) automatic observation and characterization of a substrate with deposited thin film of all types for computer-controlled comparative disposition; this can include image data written to recording media of choice (e.g., CD-ROM, DVD-ROM) for verification, certification, and so forth.
A k-space device can be configured for imaging at a desired Absolute Spatial Resolution (ASR) per pixel and a desired Effective Resolved Magnification (ERM). These parameters facilitate determining FOV, DOF, and working distance, for example. This can include objective-based design configurations and/or achromat-design configurations (e.g., for wide FOV and moderate ERM and ASR). Illumination can be selected based on inspection parameters as trans-illumination and/or epi-illumination, for example.
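The dependence of FOV and DOF on these parameters can be sketched as follows (the sensor dimensions, ERM, NA, and the classical DOF estimate are illustrative assumptions):

    # Field of view from sensor size and ERM; depth of field from NA.
    SENSOR_W_UM = 1280 * 8.0      # assumed 1280x1024 array, 8 um pitch
    SENSOR_H_UM = 1024 * 8.0
    ERM = 10.0                    # assumed effective resolved magnification
    NA, WAVELENGTH_UM, INDEX = 0.25, 0.55, 1.0

    fov_w_um = SENSOR_W_UM / ERM                 # -> 1024.0 um
    fov_h_um = SENSOR_H_UM / ERM                 # -> 819.2 um
    dof_um = INDEX * WAVELENGTH_UM / NA ** 2     # classical estimate -> 8.8 um
    print(fov_w_um, fov_h_um, round(dof_um, 1))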
At 1520, a substrate is mounted in an imager in such a manner as to be scanned by: 1) movement of an optical imaging path-length by an optical scanning method; and/or 2) indexing an object being tested directly by a process of mechanical motion and control (automatic by computer or manual by operator). This facilitates an inspection of an entire surface or portion of the surface as desired.
As noted above in the context of particle sizing, asynchronous imaging at selected timing intervals and/or in real-time (for respective scanned areas determined by FOV) of the substrate at common video rates and/or at image capture rates can be provided. Images of indexed and/or scanned areas can be captured with desired frequency for subsequent image processing. In addition, samples can be introduced into the device manually and/or in an automated manner from a "feed" such as from a conveyor system.
At 1530, operational parameters for thin film applications are determined and applied. Typical operational parameters can include (but are not limited to): 1) imaging of various flaws and characteristics including, but not limited to, particles and holes on a surface(s) of (or within) a thin film; 2) modular designs which can be varied as needed for both reflective and transparent surfaces; 3) automated counting and categorization of surface flaws by size, location, and/or number on successively indexed (and/or "scanned") image areas (with index identification and totals for respective sample surfaces); 4) registering the location of defects for subsequent manual inspection; 5) providing images in standard format(s) for subsequent porting (e.g., via Ethernet or other protocol) or manual and/or automated image processing for archive and documentation on a computer, server, and/or client; and/or 6) nominal scan time per surface of seconds to minutes, dependent on total area, with scan and indexing speed generally varying with sample area and subsequent processing (a rough tile-count estimate is sketched after this list).
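The scan-time arithmetic behind item 6 reduces to counting FOV tiles (the substrate size, per-image FOV, and capture rate below are assumptions for illustration):

    import math

    SUBSTRATE_MM = (100.0, 100.0)   # assumed substrate width and height
    FOV_MM = (1.024, 0.819)         # assumed field of view per image
    CAPTURE_RATE_HZ = 15.0          # assumed images per second

    tiles_x = math.ceil(SUBSTRATE_MM[0] / FOV_MM[0])
    tiles_y = math.ceil(SUBSTRATE_MM[1] / FOV_MM[1])
    scan_time_min = tiles_x * tiles_y / CAPTURE_RATE_HZ / 60.0
    print(tiles_x * tiles_y, "tiles,", round(scan_time_min, 1), "minutes")
    # -> 12054 tiles, 13.4 minutes for these assumptions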
At 1540, software and/or hardware based computerized image processing/analysis can occur. Images from a device adapted in accordance with the present invention can be processed in accordance with substantially any hardware and/or software process.
Software-based image processing can be achieved by custom software and/or commercially available software, since the image file formats are digital formats (e.g., bit maps of captured films). Analysis, characterization and so forth can also be provided as follows: for example, analyses can be metrologic (direct-measurement based) and/or comparative (database based). Comparative analyses can include comparisons to a database of image data for known films and/or variants thereof. Advanced image processing can characterize and catalog images in real-time and/or periodic sample-measurements. Data can be discarded and/or recorded as desired, whereas data matching known sample characteristics can trigger a suitably selected response, for example.
Furthermore, a device adapted in accordance with the present invention can be linked for communication in any data transmission process. This can include wireless, broadband, phone modem, standard telecom, Ethernet or other network protocols (e.g., Internet, TCP/IP, Bluetooth, cable TV transmissions, as well as others).
In another aspect of the present invention, an imaging system adapted as described above provides high effective resolved magnification and high spatial resolution, among other features, for imaging biological material, and these features can be combined to provide improved biological material imaging systems and methods. The biological material imaging systems and methods of the present invention enable the production of improved images (higher effective magnification, improved resolution, improved depth of field, and the like), leading to the identification of biological materials as well as the classification of biological materials (for example as normal or abnormal).
Biological material includes microorganisms (organisms too small to be observed with the unaided eye) such as bacteria, viruses, protozoans, fungi, and ciliates; cell material from organisms, such as cells (lysed, intracellular material, or whole cells), proteins, antibodies, lipids, and carbohydrates, tagged or untagged; and portions of organisms such as clumps of cells (tissue samples), blood, pupils, irises, finger tips, teeth, portions of the skin, hair, mucous membranes, bladder, breast, male/female reproductive system components, muscle, vascular components, central nervous system components, liver, bone, colon, pancreas, and the like. Since the biological material imaging system of the present invention can employ a relatively large working distance, portions of the human body may be directly examined without the need for removing a tissue sample.
Cells include human cells, non-human animal cells, plant cells, and synthetic/research cells. Cells include prokaryotic and eukaryotic cells. Cells may be healthy, cancerous, mutated, damaged, or diseased.
Examples of non-human cells include anthrax, Actinomycetes spp., Azotobacter, Bacillus anthracis, Bacillus cereus, Bacteroides species, Bordetella pertussis, Borrelia burgdorferi, Campylobacter jejuni, Chlamydia species, Clostridium species, Cyanobacteria, Deinococcus radiodurans, Escherichia coli, Enterococcus, Haemophilus influenzae, Helicobacter pylori, Klebsiella pneumoniae, Lactobacillus spp., Lawsonia intracellularis, Legionellae, Listeria spp., Micrococcus spp., Mycobacterium leprae, Mycobacterium tuberculosis, Myxobacteria, Neisseria gonorrheoeae, Neisseria meningitidis, Prevotella spp., Pseudomonas spp., Salmonellae, Serratia marcescens, Shigella species, Staphylococcus aureus, Streptococci, Thiomargarita namibiensis, Treponema pallidum, Vibrio cholerae, Yersinia enterocolitica, Yersinia pestis, and the like.
Additional examples of biological material are those that cause illness such as colds, infections, malaria, chlamydia, syphilis, gonorrhea, conjunctivitis, anthrax, meningitis, botulism, diarrhea, brucellosis, campylobacter, candidiasis, cholera, coccidioidomycosis, cryptococcosis, diphtheria, pneumonia, foodborne infections, glanders (burkholderia mallei), influenzae, leprosy, histoplasmosis, legionellosis, leptospirosis, listeriosis, melioidosis, nocardiosis, nontuberculosis mycobacterium, peptic ulcer disease, pertussis, pneumonia, psittacosis, salmonella enteritidis, shigellosis, sporotrichosis, strep throat, toxic shock syndrome, trachoma, typhoid fever, urinary tract infections, lyme disease, and the like. As described later, the present invention further relates to methods of diagnosing any of the above illnesses.
Examples of human cells include fibroblast cells, skeletal muscle cells, neutrophil white blood cells, lymphocyte white blood cells, erythroblast red blood cells, osteoblast bone cells, chondrocyte cartilage cells, basophil white blood cells, eosinophil white blood cells, adipocyte fat cells, invertebrate neurons (Helix aspera), mammalian neurons, adrenomedullary cells, melanocytes, epithelial cells, endothelial cells; tumor cells of all types (particularly melanoma, myeloid leukemia, carcinomas of the lung, breast, ovaries, colon, kidney, prostate, pancreas and testes), cardiomyocytes, endothelial cells, epithelial cells, lymphocytes (T-cell and B cell), mast cells, eosinophils, vascular intimal cells, hepatocytes, leukocytes including mononuclear leukocytes, stem cells such as haemopoetic, neural, skin, lung, kidney, liver and myocyte stem cells, osteoclasts, chondrocytes and other connective tissue cells, keratinocytes, melanocytes, liver cells, kidney cells, and adipocytes. Examples of research cells include transformed cells, Jurkat T cells, NIH3T3 cells, CHO, COS, etc.
A useful source of cell lines and other biological material may be found in ATCC Cell Lines and Hybridomas, Bacteria and Bacteriophages, Yeast, Mycology and Botany, and Protists: Algae and Protozoa, and others available from American Type Culture Co.
(Rockville, MD), all of which are herein incorporated by reference. These are non-limiting examples, as a litany of cells and other biological material can be listed.
The identification or classification of biological material can in some instances lead to the diagnosis of disease. Thus, the present invention also provides improved systems and methods of diagnosis. For example, the present invention also provides methods for detection and characterization of medical pathologies such as cancer, pathologies of musculoskeletal systems, digestive systems, reproductive systems, and the alimentary canal, in addition to atherosclerosis, angiogenesis, arteriosclerosis, inflammation, atherosclerotic heart disease, myocardial infarction, trauma to arterial or venous walls, neurodegenerative disorders, and cardiopulmonary disorders. The present invention also provides methods for detection and characterization of viral and bacterial infections. The present invention also enables assessing the effects of various agents or physiological activities on biological materials, in both in vitro and in vivo systems. For example, the present invention enables assessment of the effect of a physiological agent, such as a drug, on a population of cells or tissue grown in culture.
The biological material imaging system of the present invention enables computer-driven control or automated process control to obtain data from biological material samples. In this connection, a computer or processor, coupled with the biological material imaging system, contains or is coupled to a memory or database containing images of biological material, such as diseased cells of various types. In this context, automatic designation of normal and abnormal biological material may be made. The biological material imaging system secures images from a given biological material sample, and the images are compared with images in the memory, such as images of diseased cells in the memory. In one sense, the computer/processor performs a comparison analysis of collected image data and stored image data, and based on the results of the analysis, formulates a determination of the identity of a given biological material; of the classification of a given biological material (normal/abnormal, cancerous/non-cancerous, benign/malignant, infected/not infected, and the like); and/or of a condition (diagnosis).
If the computer/processor determines that a sufficient degree of similarity is present between particular images from a biological material sample and saved images (such as of diseased cells or of the same biological material), then the image is saved and data associated with the image may be generated. If the computer/processor determines that a sufficient degree of similarity is not present between a particular image of a biological material sample and saved images of diseased cells/particular biological material, then the biological material sample is repositioned and additional images are compared with images in the memory. It is to be appreciated that statistical methods can be applied by the computer/processor to assist in the determination that a sufficient degree of similarity is present between particular images from a biological material sample and saved images of biological material. Any suitable correlation means, memory, operating system, analytical component, and software/hardware may be employed by the computer/processor.
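One minimal way to realize the similarity test is a normalized cross-correlation score against the stored images (the metric and the 0.8 threshold are assumptions for illustration; the specification leaves the correlation means open):

    import numpy as np

    def similarity(a, b):
        """Normalized cross-correlation of two equal-size grayscale images."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def classify(captured, reference_db, threshold=0.8):
        """Best-matching stored label, or None to reposition and re-image."""
        scores = {label: similarity(captured, ref)
                  for label, ref in reference_db.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None

A None result corresponds to the repositioning branch described above, while a returned label corresponds to saving the image and generating its associated data.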
Referring to Figure 17, an exemplary aspect of an automated biological material imaging system 1600 in accordance with one aspect of the present invention, enabling computer-driven control or automated process control to obtain data from biological material samples, is shown. An imaging system 1602 described/configured in connection with Figs. 1-16 above may be employed to capture an image of a biological material 1604.
The imaging system 1602 is coupled to a processor 1606 and/or computer that reads the image generated by the imaging system 1602 and compares the image to a variety of images in the data store 1608.
The processor 1606 contains an analysis component to make the comparison.
Some of the many algorithms used in image processing include convolution (on which many others are based), FFT, DCT, thinning (or skeletonisation), edge detection and contrast enhancement. These are usually implemented in software but may also use special-purpose hardware for speed. FFT (fast Fourier transform) is an algorithm for computing the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. It also addresses the essentially identical inverse concern of reconstructing a signal from the frequency data. DCT (discrete cosine transform) is a technique for expressing a waveform as a weighted sum of cosines. There are several languages designed for image processing, such as CELIP (cellular language for image processing) and VPL (visual programming language).
The data store 1608 contains one or more sets of predetermined images. The images may include normal images of various biological materials and/or abnormal images of various biological materials (diseased, mutated, physically disrupted, and the like). The images stored in the data store 1608 provide a basis to determine whether or not a given captured image is similar or not similar (or the degree of similarity) to the stored images. In one aspect, the automated biological material imaging system 1600 can be employed to determine if a biological material sample is normal or abnormal. For example, the automated biological material imaging system 1600 can identify the presence of diseased cells, such as cancerous cells, in a biological material sample, thereby facilitating diagnosis of a given disease or condition. In another aspect, the automated biological material imaging system 1600 can diagnose the illnesses/diseases listed above by identifying the presence of an illness-causing biological material (such as an illness-causing bacteria described above) and/or determining that a given biological material is infected with an illness-causing entity such as a bacteria or determining that a given biological material is abnormal (cancerous).
In yet another aspect, the automated biological material imaging system 1600 can be employed to determine the identity of a biological material of unknown origin. For example, the automated biological material imaging system 1600 can identify a white powder as containing anthrax. The automated biological material imaging system 1600 can also facilitate processing biological material, such as performing white blood cell or red blood cell counts on samples of blood, for example.
The computer/processor 1606 may be coupled to a controller which controls a servo motor or other means of moving the biological material sample within an object plane so that remote/hands free imaging is facilitated. That is, motors, adjusters, and/or other mechanical means can be employed to move the biological material sample slide within the object field of view.
Moreover, since the images of the biological material examination process are optimized for viewing from a computer screen, television, and/or closed circuit monitor, remote and web based viewing and control may be implemented. Real time imaging facilitates at least one of rapid diagnosis, data collection/generation, and the like.
In another aspect, the biological material imaging system is directed to a portion of a human (such as a lesion on an arm, haze on the cornea, and the like) and images formed.
The images can be sent to a computer/processor (or across network such as Internet), which is instructed to identify the possible presence of a particular type of diseased cell (an image of which is stored in memory). When a diseased cell is identified, the computer/processor instructs the system to remove/destroy the diseased cell, for example, employing a laser, liquid nitrogen, cutting instrument, and/or the like.
Fig. 18 depicts a high-level machine vision system 1800 in accordance with the subject invention. The system 1800 includes an imaging system 10 (Fig. 1) in accordance with the subject invention. The imaging system 10 is discussed in substantial detail supra and thus further discussion regarding details related thereto is omitted for sake of brevity.
The imaging system 10 can be employed to collect data relating to a product or process 1810, and provide the image information to a controller 1820 that can regulate the product or process 1810, for example, with respect to production, process control, quality control, testing, inspection, etc. The imaging system 10 as noted above provides for collecting image data at a granularity not achievable by many conventional systems. Moreover, the robust image data provided by the subject imaging system 10 can afford highly effective machine vision inspection of the product or process 1810. For example, minute product defects typically not detectable by many conventional machine vision systems can be detected by the subject system 1800 as a result of the image data collected by the imaging system 10. The controller 1820 can be any suitable controller or control system employed in connection with a fabrication scheme, for example. The controller 1820 can employ the collected image data to reject a defective product or process, revise a product or process, accept a product or process, etc., as is common to machine-vision based control systems. It is to be appreciated that the system 1800 can be employed in any suitable machine-vision based environment, and all such applications of the subject invention are intended to fall within the scope of the hereto appended claims.
For example, the subject system 1800 could be employed in connection with semiconductor fabrication, where device and/or process tolerances are critical to manufacturing consistent, reliable semiconductor-based products. Thus, the product 1810 could represent a semiconductor wafer, for example, and the imaging system 10 could be employed to collect data (e.g., critical dimensions, thicknesses, potential defects, other physical aspects...) relating to devices being formed on the wafer. The controller 1820 can employ the collected data to reject the wafer because of various defects, modify a process in connection with fabricating devices on the wafer, accept the wafer, etc.
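A toy sketch of the resulting control decision (the critical-dimension tolerance scheme below is an assumption for illustration; the specification leaves the control policy open):

    def disposition(measured_cd_nm, target_cd_nm, tolerance_nm):
        """Accept, adjust, or reject based on a measured critical dimension."""
        error = abs(measured_cd_nm - target_cd_nm)
        if error <= tolerance_nm:
            return "accept"
        if error <= 2 * tolerance_nm:
            return "adjust process"   # e.g., feed the error back to the tool
        return "reject"

    print(disposition(measured_cd_nm=182.0, target_cd_nm=180.0,
                      tolerance_nm=5.0))   # -> accept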
What has been described above are preferred aspects of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
It is to be understood that, if any prior art publication is referred to herein, such reference does not constitute an admission that the publication forms a part of the common general knowledge in the art, in Australia or any other country.
In the claims which follow and in the preceding description, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.

Claims (12)

1. An imaging system, comprising: a sensor having one or more receptors, the receptors having a size; and an image transfer medium having a resolution size parameter, the image transfer medium operative to scale the receptor size to about the resolution size parameter in an object field of view, the image transfer medium comprising a multiple lens configuration, the multiple lens configuration comprising a first lens positioned toward the object field of view and a second lens positioned toward the sensor, the first lens sized to have a focal length smaller than the second lens to provide an apparent reduction of the receptor size within the image transfer medium.
2. The system of claim 1, the imaging transfer medium providing a k-space filter that correlates a pitch associated with the one or more receptors to a diffraction-limited spot within the image transfer medium.
3. The system of claim 2, the pitch being unit-mapped to about the size of the diffraction-limited spot within the image transfer medium.
4. The system of claim 1, the image transfer medium further comprising at least one of an aspherical lens, a multiple lens configuration, a fiber optic taper, an image conduit, and a holographic optic element.
5. The system of claim 1, the sensor further comprising an M by N array of pixels associated with the one or more receptors, M and N representing integer rows and columns respectively, the sensor further comprising at least one of a digital sensor, an analog sensor, a Charge Coupled Device (CCD) sensor, a CMOS sensor, a Charge Injection Device (CID) sensor, an array sensor, and a linear scan sensor.
6. The system of claim 1, further comprising a computer and a memory to receive an output from the sensor, the computer at least one of stores the output in the memory, performs automated analysis of the output in the memory, and maps the memory to a display to enable manual analysis of an image.
7. The system of claim 1, further comprising an illumination source to illuminate one or more non-luminous objects within an object field of view, the illumination source comprises at least one of a Light Emitting Diode, wavelength-specific lighting, broad-band lighting, continuous lighting, strobed lighting, Kohler illumination, Abbe illumination, phase-contrast illumination, darkfield illumination, brightfield illumination, Epi illumination, coherent light, non-coherent light, visible light and non-visible light, the non-visible light being suitably matched to a sensor adapted for non-visible light.
8. The system of claim 7, the non-visible light further comprising at least one of infrared and ultraviolet wavelengths.
9. The system of claim 1, further comprising an associated application, the application including at least one of imaging, control, inspection, microscopy, automated analysis, bio-medical analysis, cell colony counting, histology, frozen section analysis, cellular cytology, Haematology, pathology, oncology, fluorescence, interference, phase analysis, biological materials analysis, particle sizing applications, thin films analysis, air quality monitoring, airborne particulate measurement, optical defect analysis, metallurgy, semiconductor inspection and analysis, automated vision systems, 3-D imaging, cameras, copiers, FAX machines and medical systems applications.
10. A method of producing an image, comprising: determining a pitch size between adjacent pixels on a sensor; determining a resolvable object size in an object field of view; and scaling the pitch size through an optical medium to correspond with the resolvable object size in an object field of view, the image transfer medium comprising a multiple lens configuration, the multiple lens configuration comprising a first lens positioned toward the object field of view and a second lens positioned toward the sensor, the first lens sized to have a focal length smaller than the second lens to provide an apparent reduction of the receptor size within the image transfer medium.
11. A machine vision system, comprising: an imaging system for collecting image data from a product or process, comprising: a sensor having one or more receptors; and at least one optical device to direct light from an object field of view to the one or more receptors of the sensor, the at least one optical device provides a mapping of receptor size to about a size of a diffraction limited object in the object field of view, the optical device comprising a multiple lens configuration, the multiple lens configuration comprising a first lens positioned toward the object field of view and a second lens positioned toward the sensor, the first lens sized to have a focal length smaller than the second lens to provide an apparent reduction of the receptor size within the optical device; and a controller that receives the image data and employs the image data in connection with fabrication or control of the product or process.
12. The machine vision system of claim 11 being employed in a semiconductor-based processing system.

13. The system of any one of claims 1 to 9, or 11 or 12, and substantially as herein described with reference to the accompanying drawings.

14. The method of claim 11, and substantially as herein described with reference to the accompanying drawings.
AU2002322410A 2001-07-06 2002-07-03 Imaging system and methodology employing reciprocal space optical design Ceased AU2002322410B8 (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US09/900,218 2001-07-06
US09/900,218 US6664528B1 (en) 2001-07-06 2001-07-06 Imaging system and methodology employing reciprocal space optical design
US10/166,137 2002-06-10
US10/166,137 US6884983B2 (en) 2002-06-10 2002-06-10 Imaging system for examining biological material
US10/189,326 2002-07-02
US10/189,326 US7132636B1 (en) 2001-07-06 2002-07-02 Imaging system and methodology employing reciprocal space optical design
PCT/US2002/021392 WO2003005446A1 (en) 2001-07-06 2002-07-03 Imaging system and methodology employing reciprocal space optical design

Publications (3)

Publication Number Publication Date
AU2002322410A1 AU2002322410A1 (en) 2003-05-22
AU2002322410B2 true AU2002322410B2 (en) 2008-02-07
AU2002322410B8 AU2002322410B8 (en) 2008-05-01

Family

ID=31998992

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2002322410A Ceased AU2002322410B8 (en) 2001-07-06 2002-07-03 Imaging system and methodology employing reciprocal space optical design

Country Status (10)

Country Link
EP (1) EP1405346A4 (en)
JP (2) JP2005534946A (en)
KR (1) KR100941062B1 (en)
CN (1) CN100477734C (en)
AU (1) AU2002322410B8 (en)
BR (1) BR0210852A (en)
CA (1) CA2453049C (en)
IL (2) IL159700A0 (en)
MX (1) MXPA04000167A (en)
NZ (1) NZ530988A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11419694B2 (en) 2017-03-28 2022-08-23 Fujifilm Corporation Endoscope system measuring size of subject using measurement auxiliary light
US11490785B2 (en) 2017-03-28 2022-11-08 Fujifilm Corporation Measurement support device, endoscope system, and processor measuring size of subject using measurement auxiliary light

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58102538A (en) * 1981-12-14 1983-06-18 Fujitsu Ltd Manufacture of semiconductor device
JP4907816B2 (en) 1999-11-05 2012-04-04 ライシオ ベネコール オサケユイチア Edible fat compound
US7630065B2 (en) 2005-02-21 2009-12-08 Olympus Corporation Low-light specimen image pickup unit and low-light specimen image pickup apparatus
JP2009043139A (en) 2007-08-10 2009-02-26 Mitsubishi Electric Corp Position detecting device
US7782452B2 (en) * 2007-08-31 2010-08-24 Kla-Tencor Technologies Corp. Systems and method for simultaneously inspecting a specimen with two distinct channels
JP5693228B2 (en) * 2007-11-14 2015-04-01 バイオセンサーズ インターナショナル グループ、リミテッド Automatic coating apparatus and method
JP2010117705A (en) * 2008-10-14 2010-05-27 Olympus Corp Microscope for virtual-slide creating system
ES2623375T3 (en) * 2009-10-20 2017-07-11 The Regents Of The University Of California Holography and incoherent cell microscopy without a lens on a chip
CN102053051A (en) * 2009-10-30 2011-05-11 西门子公司 Body fluid analysis system as well as image processing device and method for body fluid analysis
KR101832526B1 (en) * 2010-08-05 2018-04-13 오르보테크 엘티디. Lighting system
JP5784393B2 (en) * 2011-07-11 2015-09-24 オリンパス株式会社 Sample observation equipment
DE102011055945A1 (en) * 2011-12-01 2013-06-06 Leica Microsystems Cms Gmbh Method and device for examining a sample
CN102661715A (en) * 2012-06-08 2012-09-12 苏州富鑫林光电科技有限公司 CCD (charge coupled device) type clearance measurement system and method
EP3523054A4 (en) 2016-10-04 2020-06-03 The Regents of The University of California Multi-frequency harmonic acoustography for target identification and border detection
EP3320829A1 (en) * 2016-11-10 2018-05-16 E-Health Technical Solutions, S.L. System for integrally measuring clinical parameters of visual function
KR101887527B1 (en) * 2017-04-05 2018-08-10 경북대학교 산학협력단 Apparatus for spectrum measuring of full-color hologram and Method thereof
KR101887523B1 (en) * 2017-04-05 2018-08-10 경북대학교 산학협력단 System for spectrum measuring of small area using microscope
DE112018003311T5 (en) * 2017-06-29 2020-03-26 Sony Corporation SYSTEM, METHOD AND COMPUTER PROGRAM FOR MEDICAL IMAGING
US11380438B2 (en) 2017-09-27 2022-07-05 Honeywell International Inc. Respiration-vocalization data collection system for air quality determination
WO2019089998A1 (en) * 2017-11-01 2019-05-09 The Regents Of The University Of California Imaging method and system for intraoperative surgical margin assessment
CN108197560B (en) * 2017-12-28 2022-06-07 努比亚技术有限公司 Face image recognition method, mobile terminal and computer-readable storage medium
US20190200906A1 (en) * 2017-12-28 2019-07-04 Ethicon Llc Dual cmos array imaging
CN111198192B (en) * 2018-11-20 2022-02-15 深圳中科飞测科技股份有限公司 Detection device and detection method
CN111988499B (en) * 2019-05-22 2022-03-15 印象认知(北京)科技有限公司 Imaging layer, imaging device, electronic apparatus, wave zone plate structure and photosensitive pixel
US10876949B2 (en) 2019-04-26 2020-12-29 Honeywell International Inc. Flow device and associated method and system
FR3098930B1 (en) * 2019-07-18 2023-04-28 Univ Versailles Saint Quentin En Yvelines Device for observing a cell or a set of living cells
CN110440853B (en) * 2019-07-24 2024-05-17 Shenyang Institute of Engineering Monitoring dust removing system
CN112782175A (en) * 2019-11-11 2021-05-11 Shenzhen Zhongke Feice Technology Co., Ltd. Detection equipment and detection method
US11391613B2 (en) 2020-02-14 2022-07-19 Honeywell International Inc. Fluid composition sensor device and method of using the same
US11835432B2 (en) 2020-10-26 2023-12-05 Honeywell International Inc. Fluid composition sensor device and method of using the same
CN113155755B (en) * 2021-03-31 2022-05-24 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences On-line calibration method for micro-lens array type imaging spectrometer

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4806774A (en) * 1987-06-08 1989-02-21 Insystems, Inc. Inspection system for array of microcircuit dies having redundant circuit patterns
JPH01154016A (en) * 1987-12-10 1989-06-16 Nikon Corp Microscope
JP3245882B2 (en) * 1990-10-24 2002-01-15 株式会社日立製作所 Pattern forming method and projection exposure apparatus
JPH0695001A (en) * 1992-09-11 1994-04-08 Nikon Corp Microscopic device
JPH0772377A (en) * 1993-06-14 1995-03-17 Nikon Corp Autofocusing device for microscope
JPH08160303A (en) * 1994-12-02 1996-06-21 Olympus Optical Co Ltd Object observing device
JP3123457B2 (en) * 1996-05-13 2001-01-09 株式会社ニコン microscope
US6016210A (en) * 1997-12-15 2000-01-18 Northrop Grumman Corporation Scatter noise reduction in holographic storage systems by speckle averaging
EP1203256A1 (en) * 1999-07-09 2002-05-08 Cellavision AB Microscope filter for automatic contrast enhancement
EP1096295A1 (en) * 1999-10-28 2001-05-02 Itt Manufacturing Enterprises, Inc. Apparatus and method for providing optical sensors with improved resolution
TWI240249B (en) * 2004-03-03 2005-09-21 Asustek Comp Inc Disc drive with tilt-preventing tray

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4410804A (en) * 1981-07-13 1983-10-18 Honeywell Inc. Two dimensional image panel with range measurement capability

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11419694B2 (en) 2017-03-28 2022-08-23 Fujifilm Corporation Endoscope system measuring size of subject using measurement auxiliary light
US11490785B2 (en) 2017-03-28 2022-11-08 Fujifilm Corporation Measurement support device, endoscope system, and processor measuring size of subject using measurement auxiliary light

Also Published As

Publication number Publication date
JP2005534946A (en) 2005-11-17
AU2002322410B8 (en) 2008-05-01
CA2453049C (en) 2011-10-25
CN1550039A (en) 2004-11-24
IL159700A0 (en) 2004-06-20
KR20040031769A (en) 2004-04-13
NZ530988A (en) 2006-09-29
IL159700A (en) 2010-12-30
MXPA04000167A (en) 2005-06-06
JP2009258746A (en) 2009-11-05
BR0210852A (en) 2004-08-24
CN100477734C (en) 2009-04-08
CA2453049A1 (en) 2003-01-16
KR100941062B1 (en) 2010-02-05
EP1405346A1 (en) 2004-04-07
EP1405346A4 (en) 2008-11-05

Similar Documents

Publication Publication Date Title
AU2002322410B2 (en) Imaging system and methodology employing reciprocal space optical design
AU2002322410A1 (en) Imaging system and methodology employing reciprocal space optical design
ZA200400961B (en) Imaging system and methodology employing reciprocal space optical design.
US7692131B2 (en) Imaging system and methodology with projected pixels mapped to the diffraction limited spot
US7338168B2 (en) Particle analyzing system and methodology
US7863552B2 (en) Digital images and related methodologies
US7248716B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design
US6998596B2 (en) Imaging system for examining biological material
US7109464B2 (en) Semiconductor imaging system and related methodology
US7105795B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design
US7385168B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design
US7288751B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design
US20110009163A1 (en) High numerical aperture telemicroscopy apparatus
JP2012506060A (en) Automated scanning cytometry using chromatic aberration for multi-plane image acquisition.
Lee et al. A smartphone-based Fourier ptychographic microscope using the display screen for illumination
US7439478B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design having at least one pixel being scaled to about a size of a diffraction-limited spot defined by a microscopic optical system
Arpa et al. Single lens off-chip cellphone microscopy
US7132636B1 (en) Imaging system and methodology employing reciprocal space optical design
US8634067B2 (en) Method and apparatus for detecting microscopic objects
WO2003005446A1 (en) Imaging system and methodology employing reciprocal space optical design
Pushpa et al. Advances in Microscopy and Its Applications with Special Reference to Fluorescence Microscope: An Overview

Legal Events

Date Code Title Description
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS: AMEND THE PRIORITY DETAILS FROM 10/000.000 02 JUL 2002 US TO 10/189,326 02 JUL 2002 US

TH Corrigenda

Free format text: IN VOL 22, NO 6, PAGE(S) 700 UNDER THE HEADING APPLICATIONS ACCEPTED - NAME INDEX UNDER THE NAME PALANTYR RESEARCH, INC, APPLICATION NUMBER 2002322410, UNDER INID (71), CORRECT THE APPLICANT NAME TO PALANTYR RESEARCH LLC

FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired