MXPA04000167A - Imaging system and methodology employing reciprocal space optical design. - Google Patents

Imaging system and methodology employing reciprocal space optical design.

Info

Publication number
MXPA04000167A
MXPA04000167A
Authority
MX
Mexico
Prior art keywords
sensor
image
size
illumination
analysis
Prior art date
Application number
MXPA04000167A
Other languages
Spanish (es)
Inventor
Fein Howard
Original Assignee
Palantyr Res Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/900,218 external-priority patent/US6664528B1/en
Priority claimed from US10/166,137 external-priority patent/US6884983B2/en
Priority claimed from US10/189,326 external-priority patent/US7132636B1/en
Application filed by Palantyr Res Llc filed Critical Palantyr Res Llc
Priority claimed from PCT/US2002/021392 external-priority patent/WO2003005446A1/en
Publication of MXPA04000167A publication Critical patent/MXPA04000167A/en

Links

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/40Optical focusing aids
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/58Optics for apodization or superresolution; Optical synthetic aperture systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/06Means for illuminating specimens
    • G02B21/08Condensers
    • G02B21/082Condensers for incident illumination only
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/06Means for illuminating specimens
    • G02B21/08Condensers
    • G02B21/086Condensers for transillumination only
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/361Optical details, e.g. image relay to the camera or image sensor
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Microscopes, Condensers (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)
  • Studio Devices (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Input (AREA)
  • Holo Graphy (AREA)
  • Lenses (AREA)

Abstract

An imaging system (10) and methodology (600) are provided to facilitate optical imaging performance. The system (10) includes a sensor (20) having one or more receptors and an image transfer medium (30) to scale the sensor and receptors to an object field of view (54). A computer (824), memory (864), and/or display (864) associated with the sensor (20) provides storage and/or display of information relating to output from the receptors to produce and/or process an image, wherein a plurality of illumination sources (60) can also be utilized in conjunction with the image transfer medium (30). The image transfer medium (30) can be configured as a k-space filter (110) that correlates a pitch (116) associated with the receptors to a diffraction-limited spot (50) within the object field of view (54), wherein the pitch (116) can be unit-mapped to about the size of the diffraction-limited spot (50) within the object field of view (54).

Description

WO 03/005446 A1

For two-letter codes and other abbreviations, refer to the "Guidance Notes on Codes and Abbreviations" appearing at the beginning of each regular issue of the PCT Gazette. Published with international search report, before the expiration of the time limit for amending the claims, and to be republished in the event of receipt of amendments.

IMAGING SYSTEM AND METHODOLOGY EMPLOYING RECIPROCAL SPACE OPTICAL DESIGN

RELATED APPLICATION This application claims the benefit of U.S. Patent Application Serial No. 09/900,218, which was filed July 6, 2001, and is entitled IMAGING SYSTEM AND METHODOLOGY EMPLOYING RECIPROCAL SPACE OPTICAL DESIGN.
TECHNICAL FIELD The present invention relates generally to imaging and optical systems, and more particularly to a system and method that facilitate imaging performance via an image transfer medium that projects the characteristics of a sensor to an object field of view.
BACKGROUND OF THE INVENTION Microscopes facilitate the creation of a large image of a tiny object. Greater magnification can be achieved when the light from an object is passed through two lenses rather than through the single lens of a simple microscope. A compound microscope has two or more converging lenses, placed in line with one another, so that both lenses refract the light in turn. The result is an image magnified more than either lens could magnify alone. The light illuminating the object first passes through a short-focal-length lens or lens group, called the objective, and then travels some distance before passing through a long-focal-length lens or lens group, called the eyepiece. A lens group is often referred to simply as a lens. Normally these two lenses are held in a paraxial relationship with each other, such that the axis of one lens is arranged in the same orientation as the axis of the second lens. It is the nature of the lenses, their properties, their relationship to each other, and their relationship to the object that determine how a highly magnified image is produced in the observer's eye.

The first lens, the objective, usually has a very small focal length. A specimen or object is placed in the path of a light source of sufficient intensity to illuminate it as desired. The objective is then lowered until the specimen is very close to, but not quite at, the focal point of the lens. Light leaving the specimen and passing through the objective produces a real, inverted, and magnified image behind the objective, at a point in the microscope generally referred to as the intermediate image plane. The second lens, the eyepiece, has a larger focal length and is placed in the microscope such that the image produced by the objective falls closer to the eyepiece than one focal length (that is, inside the focal point of the eyepiece). The image from the objective now becomes the object for the eyepiece. Since this object lies within one focal length, the eyepiece refracts the light so as to produce a second image that is virtual, inverted, and magnified. This is the final image seen by the observer's eye.

Alternatively, infinity-corrected microscope designs employ objectives with infinite conjugate properties, so that the light leaving the objective is not focused but is rather a stream of parallel rays that do not converge until after passing through a tube lens, whereupon the projected image is located at the focal point of the eyepiece for magnification and observation.

Many microscopes, such as the compound microscope described above, are designed to provide images of a certain quality to the human eye through an eyepiece. Connecting a machine vision sensor, such as a Charge Coupled Device (CCD) sensor, to the microscope so that an image can be viewed on a monitor presents difficulties, because the image quality provided by the sensor and viewed by the human eye on the monitor is degraded compared to an image viewed by the human eye directly through an eyepiece. As a result, conventional optical systems for magnifying, observing, examining, and analyzing small items often require careful attention from a technician who monitors the process through an eyepiece. It is for this reason, among others, that computer-based or machine vision images from the aforementioned image sensor, displayed on a monitor or other output presentation device, are not perceived the same way by the human observer as images viewed through the eyepiece.
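The two-stage magnification described above can be sketched with the standard thin-lens relations. This is an illustrative model only, not from the patent; all numeric values (focal lengths, the 160 mm tube length, the 250 mm near point) are conventional textbook assumptions.

```python
# Thin-lens model of the two-stage magnification described above.
# All numeric values are illustrative assumptions, not from the patent.

def objective_magnification(f_obj_mm, image_distance_mm):
    """Lateral magnification of the objective: m = v/u, with 1/f = 1/u + 1/v."""
    u = 1.0 / (1.0 / f_obj_mm - 1.0 / image_distance_mm)  # object distance
    return image_distance_mm / u

def eyepiece_magnification(f_eye_mm, near_point_mm=250.0):
    """Angular magnification of a simple eyepiece: M = 250 mm / f_eye."""
    return near_point_mm / f_eye_mm

# Example: a 4 mm objective forming an intermediate image 160 mm away,
# viewed through a 25 mm (10x) eyepiece:
m_obj = objective_magnification(4.0, 160.0)   # 39.0x
m_eye = eyepiece_magnification(25.0)          # 10.0x
m_total = m_obj * m_eye                       # 390.0x overall
```

The overall magnification is simply the product of the two stages, which is why a compound microscope magnifies more than either lens alone.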
SUMMARY OF THE INVENTION The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended neither to identify key or critical elements of the invention nor to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.

The present invention relates to a system and methodology that facilitate the imaging performance of optical imaging systems. With respect to various parameters of the optical and/or imaging system, many performance improvements over conventional systems can be achieved (e.g., greater effective resolved magnification, longer working distances, increased absolute spatial resolution, increased spatial field of view, increased depth of field, a Modulation Transfer Function of approximately 1, and no requirement for oil-immersion objectives or eyepieces). This is achieved by adapting an image transfer medium (e.g., one or more lenses, a fiber-optic medium, or other medium) to a sensor having one or more receptors (e.g., pixels) so that the sensor receptors are effectively scaled (e.g., "mapped", "sized", "projected", "matched", "reduced") to occupy an object field of view at approximately the scale or size associated with a diffraction-limited point or spot within the object field of view. In this manner, a band-pass filtering of spatial frequencies in what is known as Fourier space or "k-space" is achieved, such that the projected size (projection in a direction from the sensor toward object space) of the receptor fills the band in k-space.
In other words, the image transfer medium is adapted, configured, and/or selected such that a transformation into k-space is achieved, wherein an a priori design determination causes the k-space or band-pass frequencies of interest to be substantially preserved while frequencies above and below the k-space frequencies of interest are mitigated. It is noted that frequencies above and below the k-space frequencies tend to cause blurring and contrast reduction and are generally associated with conventional optical system designs, which define intrinsic constraints on the Modulation Transfer Function and introduce "optical noise". This further illustrates that the systems and methods of the present invention run counter to conventional geometric, paraxial-ray designs. Consequently, many known optical design limitations associated with conventional systems are mitigated by the present invention.

In accordance with one aspect of the present invention, a "k-space" design system and methodology is provided that defines a "unit mapping" of the Modulation Transfer Function (MTF) of an object plane to the image plane. The k-space design projects the image plane pixels or receptors toward the object plane to promote an optimal theoretical relationship. This is defined by a substantially one-to-one correspondence between the image sensor receptors and projected object plane units (e.g., units defined by the smallest resolvable points or spots in the object field of view), which are matched according to the size of the receptor. The "unit mapping" or "unit matching" defined by the k-space design acts as an effective "Intrinsic Spatial Filter", which implies that the spectral components of an object and an image in k-space (also referred to as "reciprocal space") are substantially matched or quantized. The advantages provided by the k-space design result in a system and methodology capable of much greater effective resolved magnification with concomitantly increased Field of View, Depth of Field, Absolute Spatial Resolution, and Working Distances, using dry-objective imaging, for example, and without employing conventional oil-immersion techniques that carry intrinsic limitations on the aforementioned parameters.

One aspect of the present invention relates to an optical system that includes an optical sensor having an array of light receptors having a pixel pitch.
A lens optically associated with the optical sensor is configured with optical parameters functionally related to the pitch and to a desired resolution of the optical system. As a result, the lens is operative to substantially map a portion of an object having the desired resolution along the optical path to an associated one of the light receptors.

Another aspect of the present invention relates to a method for designing an optical system. The method includes selecting a sensor with a plurality of light receptors having a pixel pitch. A desired minimum spot-size resolution is selected for the system, and a lens is configured, or an existing lens is selected, with optical parameters based on the pixel pitch and the desired minimum spot size, so as to map the plurality of light receptors to respective portions of the object according to the desired resolution.

The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed, and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
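The design method summarized above can be sketched numerically: pick a sensor pitch, pick a minimum resolvable spot size (here derived from the standard Rayleigh criterion rather than any formula stated in the patent), and compute the optical scaling that makes one projected pixel span one spot. All parameter values and function names are illustrative assumptions.

```python
# Sketch of the "unit mapping" design rule described above: choose optics
# so that the pixel pitch, projected into the object plane, is about the
# size of the diffraction-limited spot. The spot formula is the standard
# Rayleigh criterion; numeric values are illustrative, not from the patent.

def diffraction_limited_spot_um(wavelength_um, numerical_aperture):
    """Smallest resolvable spot diameter (Rayleigh): d = 1.22 * lambda / (2 * NA)."""
    return 1.22 * wavelength_um / (2.0 * numerical_aperture)

def required_scaling(pixel_pitch_um, wavelength_um, numerical_aperture):
    """Optical reduction factor needed so one projected pixel spans one spot."""
    spot = diffraction_limited_spot_um(wavelength_um, numerical_aperture)
    return pixel_pitch_um / spot

# Example: 7.4 um pixel pitch, green light (0.55 um), NA = 0.3.
spot = diffraction_limited_spot_um(0.55, 0.3)   # ~1.12 um
scale = required_scaling(7.4, 0.55, 0.3)        # ~6.6x reduction
```

Note the modest numbers: a low-NA, low-magnification objective suffices once the design target is matching pitch to spot size rather than maximizing raw magnification, which is the point the summary makes.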
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a schematic block diagram illustrating an imaging system in accordance with an aspect of the present invention. Figure 2 is a diagram illustrating a k-space system design in accordance with an aspect of the present invention. Figure 3 is a diagram of an exemplary system illustrating sensor receptor matching in accordance with an aspect of the present invention. Figure 4 is a graph illustrating sensor matching considerations in accordance with an aspect of the present invention. Figure 5 is a graph illustrating a Modulation Transfer Function in accordance with an aspect of the present invention. Figure 6 is a graph illustrating a figure of merit in relation to a Spatial Field Number in accordance with an aspect of the present invention. Figure 7 is a flow diagram illustrating an imaging methodology in accordance with an aspect of the present invention. Figure 8 is a flow diagram illustrating a methodology for selecting optical parameters in accordance with an aspect of the present invention. Figure 9 is a schematic block diagram illustrating an exemplary imaging system in accordance with an aspect of the present invention. Figure 10 is a schematic block diagram illustrating a modular imaging system in accordance with an aspect of the present invention. Figures 11-13 illustrate alternative imaging systems in accordance with an aspect of the present invention. Figures 14-18 illustrate exemplary applications in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION The present invention relates to an optical and/or imaging system and methodology. In accordance with one aspect of the present invention, a k-space filter is provided that can be configured from an image transfer medium, such as optical media, that matches image sensor receptors with an object field of view. A variety of illumination sources can also be utilized to achieve one or more operational goals and for application versatility. The k-space design of the imaging system of the present invention promotes the capture and analysis (e.g., automated and/or manual) of images having a large Field of View (FOV) at substantially higher effective resolved magnification as compared to conventional systems. This can include employing the smaller Numerical Aperture (NA) associated with lower-magnification objectives to achieve very high effective resolved magnification. As a consequence, images having a substantially large Depth of Field (DOF) at very high effective resolved magnification are also realized. The k-space design also facilitates the use of homogeneous illumination sources that are substantially insensitive to changes in position, thereby improving methods of examination and analysis.

In accordance with another aspect of the present invention, an objective-to-object distance (e.g., Working Distance) can be maintained in operation during both low and high effective resolved magnification imaging, wherein a typical spacing of about 0.1 mm or more and about 20 mm or less can be achieved, as opposed to conventional microscopic systems that can require significantly smaller objective-to-object distances (e.g., as small as 0.01 mm) for comparable (e.g., similar order of magnitude) effective resolved magnification values. In another aspect, the Working Distance is about 0.5 mm or more and about 10 mm or less.
It is to be appreciated that the present invention is not limited to operation at the above working distances. In many instances the above working distances are employed; however, in some instances smaller or larger distances are employed. It is further noted that oil immersion, or other refractive-index-matching media or fluids for the objective lens, is generally not required (e.g., substantially no improvement is to be gained thereby) at one or more effective image magnification levels of the present invention, which still exceed the effective resolved magnification levels achievable in conventional microscopic optical design variations, including systems employing "infinity-corrected" objectives.

The k-space design of the present invention defines that a small "Circle of Confusion" or diffraction-limited point/spot at the object plane is determined by the design parameters to match the image sensor receptors or pixels with a substantially one-to-one correspondence via the "unit mapping" of object and image spaces for the associated object and image fields. This enables the improved performance and capabilities of the present invention. One possible theory of the k-space design results from the mathematical concept that, since the Fourier Transform of both an object and an image is formed in k-space (also called "reciprocal space"), the sensor should be mapped to the object plane in k-space via optical design techniques and component placement in accordance with the present invention. It is to be appreciated that a plurality of other transforms or models can be utilized to configure and/or select one or more components in accordance with the present invention. For example, wavelet transforms, Laplace (s-) transforms, and z-transforms, as well as other transforms, can be similarly employed.
The k-space design methodology is unlike conventional optical systems designed according to geometric, paraxial ray-tracing theory and optimization, since k-space optimization causes the spectral components of the object (e.g., a tissue sample, particle, semiconductor) and of the image to occupy the same k-space, and in this manner to be quantized. Therefore, there are substantially no inherent limitations imposed on the Modulation Transfer Function (MTF) describing contrast versus resolution, nor on the absolute spatial resolution, of the present invention. Quantization in k-space, for example, yields a substantially unitary Modulation Transfer Function not realized by conventional systems. It is noted that high MTF, Spatial Resolution, and effective resolved image magnification can be achieved at much lower magnification with desirably lower Numerical Apertures (e.g., generally less than 50x magnification with a numerical aperture of generally less than 0.7) via the "unit mapping" of projected pixels in the "Intrinsic Spatial Filter" provided by the k-space design.

If desired, "infinity-corrected" objectives can be employed with associated optical components and illumination, as well as spectrum-varying components, polarization-varying components, and/or contrast- or phase-varying components. These components can be included in the optical path length between an objective and an image lens within an "infinity space". Accessories and variations of the optical system can thus be placed as interchangeable modules in this geometry. The k-space design, in contrast to conventional microscopic imagers that employ "infinity-corrected" objectives, enables maximum optimization of the infinity-space geometry through the concept of "unit mapping".
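The band-pass behavior attributed to the k-space design above can be illustrated with a minimal 1-D sketch. This is not the patent's optics: a plain discrete Fourier transform stands in for the optical transform into k-space, and the cut index `k_max` stands in for the promoted band; the signal is invented for the demonstration.

```python
# Illustrative 1-D sketch (pure Python) of the k-space band-pass idea:
# transform a sampled line into "k-space" with a DFT, zero everything
# outside a chosen pass band, and transform back. The pass band stands
# in for the sensor's Nyquist-limited band of spatial frequencies.
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def k_space_bandpass(x, k_max):
    """Keep spatial frequencies |k| <= k_max; mitigate everything else."""
    n = len(x)
    X = dft(x)
    kept = [X[k] if min(k, n - k) <= k_max else 0.0 for k in range(n)]
    return idft(kept)

# A sampled "line" with a coarse feature (k = 2) plus fine detail (k = 13)
# above the pass band:
line = [math.sin(2 * math.pi * 2 * t / 32) + 0.5 * math.sin(2 * math.pi * 13 * t / 32)
        for t in range(32)]
filtered = k_space_bandpass(line, k_max=4)  # only components with k <= 4 survive
```

After filtering, only the coarse k = 2 component remains; the out-of-band component is removed rather than aliased into the result, which is the effect the design attributes to matching the optics to the sensor's band.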
This implies that there is generally no specific limit to the number of additional components that can be inserted into the "infinity space" geometry, as opposed to conventional microscopic systems, which typically specify no more than two additional components without optical correction.

The present invention also enables a "base module" design that can be configured and reconfigured in operation for a plurality of different applications, employing transmitted and/or reflected illumination as necessary. This includes substantially all typical machine vision illumination schemes (e.g., darkfield, brightfield, phase-contrast) and other microscopic transmissive techniques (Kohler, Abbe), in substantially any offset, and can include Epi illumination and variations thereof. The systems of the present invention can be employed in a plurality of optomechanical designs that are robust, since the k-space design is substantially insensitive to environmental and mechanical vibration, and thus generally does not require the heavy structural mechanical design and vibration isolation associated with conventional microscopic imaging instruments. Other features can include digital image processing, if desired, along with storage (e.g., local databases, image data transmissions to remote computers for storage/analysis) and display of the images produced in accordance with the present invention (e.g., computer display, printer, film, and other output media). Remote signal processing of image data can be provided, along with communication and display of the image data via associated data packets communicated over a network or other medium, for example.

Referring initially to Figure 1, an imaging system 10 is illustrated in accordance with an aspect of the present invention.
The imaging system 10 includes a sensor 20 having one or more receptors, such as pixels or discrete light detectors (see, e.g., the illustration in Figure 3 below), operatively associated with an image transfer medium 30. The image transfer medium 30 is adapted or configured to scale the proportions of the sensor 20 at an image plane established by the position of the sensor 20 to an object field of view illustrated at reference numeral 34. A planar reference 36 of X and Y coordinates is provided to illustrate the scaling or reduction of the apparent or virtual size of the sensor 20 to the object field of view 34. Direction arrows 38 and 40 illustrate the direction of apparent size reduction of the sensor 20 toward the object field of view 34.
The object field of view 34 established by the image transfer medium 30 is related to the position of an object plane 42 that includes one or more items under microscopic examination (not shown). It is noted that the sensor 20 can be substantially any size, shape, and/or technology (e.g., digital sensor, analog sensor, Charge Coupled Device (CCD) sensor, CMOS sensor, Charge Injection Device (CID) sensor, an array sensor, a linear scan sensor) including one or more receptors of various sizes and shapes, the one or more receptors being similarly sized or proportioned on a respective sensor so as to be responsive to light (e.g., visible, non-visible) received from the items under examination in the object field of view 34. As light is received from the object field of view 34, the sensor 20 provides an output 44 that can be directed to local or remote storage, such as a memory (not shown), and displayed from the memory via a computer and associated display, for example, without substantially any intervening digital processing (e.g., straight bitmap from sensor memory to display), if desired. It is noted that local or remote signal processing of the image data received from the sensor 20 can also occur. For example, the output 44 can be converted to electronic data packets and transmitted to a remote system over a network and/or via wireless transmission systems and protocols for further analysis and/or presentation. Similarly, the output 44 can be stored in a local computer memory before being transmitted to a subsequent computing system for further analysis and/or presentation.

The scaling provided by the image transfer medium 30 is determined by a novel k-space configuration or design within the medium that promotes predetermined k-space frequencies of interest and mitigates frequencies outside the predetermined frequencies.
This has the effect of a band-pass filter of the spatial frequencies within the image transfer medium 30 and notably defines the imaging system 10 in terms of resolution rather than magnification. As will be described in more detail below, the resolution of the imaging system 10 determined by the k-space design promotes a plurality of features in a displayed or stored image, such as high effective resolved magnification, high absolute spatial resolution, large depth of field, larger working distances, and a unitary Modulation Transfer Function, as well as other features.

In order to determine the k-space frequencies, a "pitch" or spacing is determined between adjacent receptors on the sensor 20, the pitch being related to the center-to-center distance of adjacent receptors and approximately the size or diameter of a single receptor. The pitch of the sensor 20 defines the Nyquist "cut-off" frequency band of the sensor. It is this frequency band that is promoted by the k-space design, whereas other frequencies are mitigated. In order to illustrate how the scaling is determined in the imaging system 10, a small or diffraction-limited spot or point 50 is illustrated at the object plane 42. The diffraction-limited point 50 represents the smallest resolvable object determined by the optical characteristics within the image transfer medium 30 and is described in more detail below. A scaled receptor 54, depicted in front of the field of view 34 for exemplary purposes, and having a size determined according to the pitch of the sensor 20, is matched or scaled to be about the same size in the object field of view 34 as the diffraction-limited point 50. In other words, the size of any given receptor on the sensor 20 is effectively reduced in size via the image transfer medium 30 to be approximately the same size (or matched in size) as the diffraction-limited point 50.
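The pitch/Nyquist relationship described above can be sketched with two textbook formulas: the Nyquist limit of a sampled image, 1/(2p), and the incoherent diffraction cutoff of the optics, 2·NA/λ. Neither formula nor any of the numeric values below comes from the patent; they are standard assumptions used for illustration.

```python
# Sketch of the pitch / Nyquist relationship described above. The sensor's
# projected pitch sets a Nyquist band; the optics' incoherent cutoff is
# 2*NA/lambda. Unit mapping places one projected pixel per diffraction-
# limited spot, so the Nyquist band sits within the optical passband.
# All parameter values are illustrative assumptions.

def nyquist_cycles_per_um(projected_pitch_um):
    """Highest spatial frequency the sampled image can represent: 1/(2p)."""
    return 1.0 / (2.0 * projected_pitch_um)

def optical_cutoff_cycles_per_um(numerical_aperture, wavelength_um):
    """Incoherent diffraction cutoff of the optics: 2*NA/lambda."""
    return 2.0 * numerical_aperture / wavelength_um

# 7.4 um pixels reduced 6.6x onto the object plane; NA 0.3; 0.55 um light:
p_proj = 7.4 / 6.6                                # ~1.12 um projected pitch
f_nyq = nyquist_cycles_per_um(p_proj)             # ~0.45 cycles/um
f_cut = optical_cutoff_cycles_per_um(0.3, 0.55)   # ~1.09 cycles/um
```

With these illustrative numbers the sampled band lies inside the optical passband, so the frequencies the sensor can record are the ones the optics deliver, which is the matching condition the paragraph describes.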
This also has the effect of filling the object field of view 34 with substantially all of the receptors of the sensor 20, the respective receptors being suitably scaled to be similar in size to the diffraction-limited point 50. As will be described in more detail below, the matching/mapping of sensor characteristics to the smallest resolvable object or point within the object field of view 34 defines the imaging system 10 in terms of absolute spatial resolution and thereby improves the operating performance of the system.

An illumination source 60 can be provided with the present invention so that photons from the source can be transmitted through and/or reflected from objects in the field of view 34 to enable activation of the receptors in the sensor 20. It is noted that the present invention can potentially be employed without an illumination source 60 if potentially self-luminous objects (e.g., fluorescent or phosphorescent biological or organic material samples, metallurgical, mineral, or other inorganic materials, etc.) emit sufficient radiation to activate the sensor 20. Light Emitting Diodes, however, provide an effective illumination source 60 in accordance with the present invention. Substantially any illumination source 60 can be applied, including coherent and non-coherent sources and visible and non-visible wavelengths. However, for non-visible wavelength sources, the sensor 20 should also be suitably adapted. For example, for an infrared or ultraviolet source, an infrared or ultraviolet sensor 20 would be employed, respectively. Other illumination sources 60 can include wavelength-specific illumination, broadband illumination, continuous illumination, strobe illumination, Kohler illumination, Abbe illumination, phase-contrast illumination, darkfield illumination, brightfield illumination, and Epi illumination. Transmissive or reflective (e.g., specular and diffuse) illumination techniques can also be applied.
Referring now to Figure 2, a system 100 illustrates an image transfer medium in accordance with an aspect of the present invention. The image transfer means 30 shown in Figure 1 can be provided in accordance with the concepts of space designs k described above and more particularly by a space filter k adapted, configured and / or selected to promote a band of frequency 114 of predetermined space k and to mitigate frequencies outside this band. This is achieved by determining a step "P" which is the distance between adjacent receivers 116 in a sensor (not shown) and the dimension of optical means within filter 110 such that the pitch "P" of receivers 116 correlates in size with a limited diffraction point 120. The limited diffraction point 120 can be determined from the optical characteristics of the medium in the filter 110. For example, the Numerical aperture of an optical medium such as a lens defines the smallest object or point that can be resolved by the lens. The filter 110 performs a transformation of space k so that the size of the step is effectively correlated, "mapped by unit", projected, correlated and / or reduced to the size or scale of the limited diffraction point 120. It will be appreciated that a plurality of optical configurations can be provided to achieve the filter 110 of space k. A configuration can be provided by a spherical lens 124 adapted to perform the space transformation k and the reduction of the sensor space to the object space. Yet another configuration can be provided by a multiple lens array 128, wherein the lens combination is selected to provide filtration and scale. Still another configuration may employ a fiber optic waveguide 132 or imaging conduit, wherein the multiple optical fibers or fiber arrangement are configured in a funnel shape to perform the mapping of the sensor to the target field of vision. It is noted that the fiber optic waveguide 132 is generally in physical contact between the sensor and the low object. 
examination (e.g., contact with the microscope slide). Another possible arrangement of the k-space filter 110 employs a holographic optical element 136 (or other diffractive or phase structure), wherein a substantially planar optical surface is configured by a hologram (or other diffractive or phase structure) (e.g., computer-generated, optically generated, and/or otherwise) to provide the mapping in accordance with the present invention. The k-space optical design enabled by the k-space filter is based on the "effective projected pixel pitch" of the sensor, which is a figure derived by projecting the physical size of the sensor array elements back through the optical system to the target plane. In this way, the conjugate planes and the optical transform spaces are correlated to the Nyquist cutoff of the effective receptor or pixel size. This increases the effective resolved image magnification and the field of view, as well as the depth of field and the absolute spatial resolution. Thus, a novel application of optical theory is provided that does not rely on the conventional geometric optical design parameters of paraxial ray tracing that govern conventional image combination and blurring. This can also be described in the following manner. A Fourier transform of an object and an image is formed (by an optical system) in k-space (also referred to as "reciprocal space"). It is this transform that is operated on for image optimization by the k-space design of the present invention. For example, the optical media employed in the present invention can be designed with standard, relatively inexpensive, off-the-shelf components having a configuration that defines that the object and image spaces are "unit-mapped" or unit-correlated for substantially all image and object fields.
A small circle of confusion or diffraction-limited spot 120 in the target plane is defined by the design to correlate with the pixels in the image plane (e.g., in the image sensor of choice) in substantially one-to-one correspondence, and in this way the Fourier transforms of the pixel arrangements can be correlated. This implies that, optically by design, the blur circle is scaled to be approximately the same size as the receptor or pixel pitch. The present invention is defined such that it constructs an Intrinsic Spatial Filter such as the k-space filter 110. Such a design and implementation definition enables the spectral components of the object and the image in k-space to be approximately the same or quantized. This also defines that the Modulation Transfer Function (MTF) (contrast versus spatial resolution) of the sensor correlates with the MTF of the target plane. Figure 3 illustrates an optical system 200 in accordance with an aspect of the present invention. The system 200 includes a sensor 212 having a plurality of receptors or sensor pixels 214. For example, the sensor 212 is an M-by-N array of sensor pixels 214 having M rows and N columns (e.g., 640 x 480, 512 x 512, 1280 x 1024, etc.), M and N being respective integers. Although a rectangular sensor 212 having generally square pixels is depicted, it will be understood and appreciated that the sensor can be of substantially any shape (e.g., circular, elliptical, hexagonal, rectangular, and so forth). It will further be appreciated that the respective pixels 214 within the array can be of substantially any shape or size, the pixels in any given array 212 being generally sized and shaped alike in accordance with an aspect of the present invention.
The sensor 212 can be of substantially any technology (e.g., digital sensor, analog sensor, charge-coupled device (CCD) sensor, CMOS sensor, charge injection device (CID) sensor, an array sensor, or a linear scan sensor) including one or more receptors (or pixels) 214. In accordance with an aspect of the present invention, each of the pixels 214 is similarly sized or proportioned and is responsive to the light (e.g., visible, non-visible) received from the items under examination, as described herein. The sensor 212 is associated with a lens network 216, which is configured based on the performance requirements of the optical system and the pitch size of the sensor 212. The lens network 216 is operative to scale (or project) the proportions (e.g., pixels 214) of the sensor 212 at the image plane defined by the position of the sensor 212 into a target field of view 220 in accordance with an aspect of the present invention. The target field of view 220 refers to the position of a target plane 222 that includes one or more items (not shown) under examination. As the sensor 212 receives light from the target field of view 220, the sensor 212 provides an output 226 that can be directed to local and/or remote storage such as a memory (not shown) and displayed from the memory by an associated computer and display, for example, with substantially no intervening digital processing (e.g., a straight bitmap from the sensor memory to the display), if desired. It is noted that local or remote signal processing of the image data received from the sensor 212 can also occur. For example, the output 226 can be converted into electronic data packets and transmitted to a remote system over a network for further analysis and/or display. Similarly, the output 226 can be stored in a local computer memory before being transmitted to a subsequent computing system for further analysis and/or display.
The scaling (or effective projection) of the pixels 214 provided by the lens network 216 is determined by a novel k-space configuration or design in accordance with an aspect of the present invention. The k-space design of the lens network 216 promotes the predetermined k-space frequencies of interest and mitigates frequencies outside the predetermined frequency band. This has the effect of a band-pass filter of the spatial frequencies within the lens network 216 and notably defines the imaging system 200 in terms of resolution rather than magnification. As will be described below, the resolution of the imaging system 200 determined by the k-space design promotes a plurality of features in a displayed or stored image, such as having a high "Effective Resolved Magnification" (a figure of merit described below) with related high absolute spatial resolution, large depth of field, larger working distances, and a unity Modulation Transfer Function, as well as other features. To determine the k-space frequencies, a "pitch" or spacing 228 is determined between adjacent receptors 214 on the sensor 212. The pitch (e.g., pixel pitch) corresponds to the center-to-center distance of adjacent receptors, indicated at 228, which is about the size or diameter of a single receptor when the sensor includes all equally sized pixels. The pitch 228 defines the Nyquist "cutoff" frequency band of the sensor 212. It is this frequency band that is promoted by the k-space design, while other frequencies are mitigated. In order to illustrate how the scaling is determined in the imaging system 200, a spot 230 of a desired smallest resolvable spot size is illustrated in the target plane 222. The spot 230, for example, can represent the smallest resolvable object determined by the optical characteristics of the lens network 216.
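The Nyquist cutoff that the pixel pitch 228 defines can be computed directly. This is a minimal sketch using the standard sampling relation; the 10-micron pitch is the example value used later in this passage, not a required parameter.

```python
# Illustrative sketch: the Nyquist-limited spatial frequency implied by
# a sensor's pixel pitch. One cycle needs at least two samples (pixels),
# so the cutoff is f_N = 1 / (2 * pitch).

def nyquist_cutoff_cycles_per_mm(pixel_pitch_um: float) -> float:
    """Highest spatial frequency the sensor can sample without aliasing."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(nyquist_cutoff_cycles_per_mm(10.0))  # 10 um pitch -> 50 cycles/mm
```

Frequencies at or below this cutoff are the band the k-space design promotes; frequencies above it cannot be represented by the sensor and are mitigated.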
That is, the lens network 216 is configured to have optical characteristics (e.g., magnification, numerical aperture) such that the respective pixels 214 are correlated or scaled to be approximately the same size in the target field of view 220 as the desired minimum resolvable spot size of the spot 230. For purposes of illustration, a scaled receptor 232 is depicted in front of the field of view 220 as having a size determined according to the pitch 228 of the sensor 212, which is approximately the same as the spot 230. By way of illustration, the lens network 216 is designed to effectively reduce the size of each given receptor (e.g., pixel) 214 in the sensor 212 to be approximately the same size (e.g., size-correlated) as the spot size 230, which is typically the minimum resolvable spot size for the system 200. It will be understood and appreciated that the spot 230 can be selected at a size representing the smallest resolvable object determined by the optical characteristics within the lens network 216 as determined by diffraction rules (e.g., the diffraction-limited spot size). The lens network 216 thus can be designed to effectively scale each pixel 214 of the sensor 212 to any size that is equal to or greater than the diffraction-limited size. For example, the resolvable spot size can be selected to provide any desired image resolution that meets such criteria. After the desired resolution (resolvable spot size) is selected, the lens network 216 is designed to provide the magnification to scale the pixels 214 into the target field of view 220 accordingly. This has the effect of filling the target field of view 220 with substantially all of the receptors of the sensor 212, the respective receptors being suitably scaled to be similar in size to the spot 230, which corresponds to the desired resolvable spot size.
The correlation/mapping of the sensor characteristics to the desired (e.g., smallest) resolvable object or spot 230 within the target field of view 220 defines the imaging system 200 in terms of absolute spatial resolution and improves the operating performance of the system in accordance with an aspect of the present invention. By way of further illustration, in order to provide unit mapping according to this example, assume that the sensor array 212 provides a pixel pitch 228 of approximately 10.0 microns. The lens network 216 includes an objective lens 234 and a secondary lens 236. For example, the objective 234 can be set at an infinite conjugate to the secondary lens 236, with the spacing between the objective and the secondary lens being flexible. The lenses 234 and 236 relate to each other so as to achieve a reduction from the sensor space defined at the sensor array 212 to the target space defined at the target plane 222. It is noted that substantially all of the pixels 214 are projected into the target field of view 220, which is defined by the objective 234. For example, the respective pixels 214 are scaled through the objective 234 to approximately the dimensions of the desired minimum resolvable spot size. In this example, the desired resolution at the target plane 222 is one micron. In this way, a ten-times reduction is operative to project a 10-micron pixel back to the target plane 222 and reduce it to a size of one micron. The reduction in size of the array 212 and the pixels 214 can be achieved by selecting the transfer lens 236 to have a focal length "D2" (from the array 212 to the transfer lens 236) of approximately 150 millimeters and by selecting the objective to have a focal length "D1" (from the objective lens 234 to the target plane 222) of approximately 15 millimeters, for example.
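The worked example above can be checked with a short sketch. The 150 mm and 15 mm focal lengths and the 10-micron pitch are the figures given in the text; the functions themselves are an illustrative reading of the D2/D1 reduction ratio, not code from the patent.

```python
# Sketch of the worked example: a 150 mm transfer lens (D2) and a 15 mm
# objective (D1) give a 10:1 reduction, shrinking a 10 um sensor pixel
# to 1 um in the target plane.

def reduction_factor(d2_mm: float, d1_mm: float) -> float:
    """Reduction from sensor space to target space, D2 / D1."""
    return d2_mm / d1_mm

def projected_pixel_um(pitch_um: float, d2_mm: float, d1_mm: float) -> float:
    """Back-projected pixel size in the target plane, in microns."""
    return pitch_um / reduction_factor(d2_mm, d1_mm)

print(reduction_factor(150.0, 15.0))          # 10.0
print(projected_pixel_um(10.0, 150.0, 15.0))  # 1.0 um
```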
In this way, the pixels 214 are effectively reduced in size to approximately 1.0 micron per pixel, thereby correlating to the size of the desired resolvable spot 230 and filling the target field of view 220 with an array of the same
"virtually reduced" pixels. It will be understood and appreciated that other arrangements of one or more lenses can be employed to provide the desired scaling. In view of the above description, those skilled in the art will understand and appreciate that the optical media (e.g., the lens network 216) can be designed, in accordance with an aspect of the present invention, with relatively inexpensive "off-the-shelf" components having a configuration that defines that the object and image spaces are "unit-mapped" or "unit-correlated" for substantially all image and target fields. The lens network 216 and, in particular, the objective 234, perform a Fourier transform of an object and an image in k-space (also referred to as "reciprocal space"). It is this transform that is operated on for image optimization by the k-space design of the present invention. A small circle of confusion or Airy disk in the target plane is defined by the design to correlate with the pixels in the image plane (e.g., in the image sensor of choice) in substantially one-to-one correspondence with the Airy disk, and thus the Fourier transforms of the pixel arrays can be correlated. This implies that, optically by design, the Airy disk is scaled through the lens network 216 to approximately the same size as the receptor or pixel pitch. As mentioned above, the lens network 216 is defined so as to construct an Intrinsic Spatial Filter (e.g., a k-space filter). This design and implementation definition enables the spectral components of the object and the image in k-space to be approximately the same or quantized. This also defines that a Modulation Transfer Function (MTF) (contrast versus spatial resolution) of the sensor can be correlated to the MTF of the target plane in accordance with an aspect of the present invention. As illustrated in Figure 3, the k-space is defined as the region between the objective 234 and the secondary lens 236.
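For a rough sense of the Airy-disk scale invoked above, the standard Rayleigh expression can be evaluated. The wavelength and NA values below are assumed for illustration (they match the worked example elsewhere in this passage); this passage does not itself specify them here.

```python
# Illustrative sketch: radius of the Airy pattern's first dark ring,
# r = 0.61 * lambda / NA (the standard Rayleigh result), for comparison
# against a back-projected pixel. Values are assumed for illustration.

def airy_radius_um(wavelength_um: float, na: float) -> float:
    """Radius to the first dark ring of the Airy pattern, in microns."""
    return 0.61 * wavelength_um / na

r = airy_radius_um(0.5, 0.25)  # 500 nm light, NA = 0.25
print(round(r, 2))             # 1.22 um, comparable to a ~1 um projected pixel
```

When this radius and the projected pixel pitch are of the same order, the one-to-one correspondence described in the text holds.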
It will be appreciated that substantially any optical media, lens type, and/or lens combination that reduces, maps, and/or projects the sensor array 212 into the target field of view 220 in accordance with unit or k-space mapping as described herein is within the scope of the present invention. To illustrate the novelty of the exemplary lens/sensor combination depicted in Figure 3, it is noted that conventional objective lenses, sized according to conventional geometric paraxial-ray techniques, are generally sized according to the magnification, numerical aperture, focal length, and other parameters provided by the objective. In this way, the objective lens is dimensioned with a greater focal length than the subsequent lenses that approach or are closer to the sensor (or eyepiece in a conventional microscope) in order to provide magnification of small objects. This can result in the magnified image of small objects in the target plane being projected across "portions" of the sensor, which results in well-known blurring of detail (e.g., Rayleigh diffraction and other optical limitations), empty-magnification problems, and Nyquist distortion, among other problems, at the sensor. The k-space design of the present invention operates in an alternative manner to conventional geometric paraxial-ray design principles. That is, the objective 234 and the secondary lens 236 operate to provide a reduction in size of the sensor array 212 toward the target field of view 220, as demonstrated by the ratio of the lenses. An illumination source 240 can be provided with the present invention so that photons from that source can be transmitted through and/or reflected from objects in the field of view 220 to enable activation of the receptors in the sensor 212.
It is noted that the present invention can potentially be employed without an illumination source 240 if potential self-luminous objects (e.g., objects or specimens with emissive characteristics as previously described) emit sufficient radiation to activate the sensor 212. Substantially any illumination source 240 can be applied, including coherent and non-coherent sources and visible and non-visible wavelengths. For non-visible wavelength sources, however, the sensor 212 can be suitably adapted as well. For example, for an infrared or ultraviolet source, an infrared or ultraviolet sensor 212 can be employed, respectively. Other suitable illumination sources 240 can include specific-wavelength illumination, broadband illumination, continuous illumination, strobe illumination, Kohler illumination, Abbe illumination, phase-contrast illumination, dark-field illumination, bright-field illumination, epi-illumination, and the like. Transmissive or reflective illumination techniques (e.g., specular and diffuse) can also be applied. Figure 4 illustrates a graph 300 of mapping characteristics comparing the projected pixel size on the X axis with the diffraction-limited spot resolution size "R" on the Y axis. An apex 310 of the graph 300 corresponds to unit mapping between the projected pixel size and the diffraction-limited spot size, which represents an optimum relationship between the lens network and the sensor in accordance with the present invention. It will be appreciated that the objective 234 (Figure 3) should generally not be selected such that the diffraction-limited size "R" of the smallest resolvable objects is smaller than the projected pixel size. If it is, "economic waste" can occur, wherein the more precise information is lost (e.g., by selecting an objective that is more expensive than required, such as one having a higher numerical aperture).
This is illustrated to the right of a dividing line 320 at reference 330, representing a projected pixel 340 larger than two smaller diffraction spots 350. Conversely, where a lens having diffraction-limited performance greater than the projected pixel size is selected, blurring and empty magnification can occur. This is illustrated to the left of the line 320 at reference numeral 360, where a projected pixel 370 is smaller than a diffraction-limited object 380. It will be appreciated, however, that even if substantially one-to-one correspondence is not achieved between the projected pixel size and the diffraction-limited spot, a system can be configured with less than optimum correlation (e.g., 0.1%, 1%, 2%, 5%, 20%, 95% below the apex 310 of the graph 300, to the left or right of the line 320) and still provide suitable performance in accordance with an aspect of the present invention. Thus, a less than optimum correlation is intended to fall within the spirit and scope of the present invention. It will further be appreciated that the diameter of the lenses in the system, as illustrated in Figure 3, for example, should be sized such that when a Fourier transform is performed from the target space to the sensor space, the spatial frequencies of interest within the pass-band region described above (e.g., frequencies employed to define the size and shape of a pixel) are substantially unattenuated. This generally entails selecting larger-diameter optics (e.g., of approximately 10 to 100 millimeters) in order to mitigate attenuation of the spatial frequencies of interest. Referring now to Figure 5, a Modulation Transfer Function 400 is illustrated in accordance with the present invention. On the Y axis, modulation percentage from 0 to 100% is illustrated, defined as the percentage of contrast between black and white. On the X axis, absolute spatial resolution is illustrated in terms of microns of separation.
A line 410 illustrates that the modulation percentage remains substantially constant at approximately 100% over several orders of spatial resolution. Thus, the Modulation Transfer Function is approximately 1 for the present invention down to approximately a limit imposed by the sensor's signal-to-noise sensitivity. For comparative illustration, a conventional optical design Modulation Transfer Function is illustrated by a line 420, which can be an exponential curve with generally asymptotic limits characterized by generally decreasing spatial resolution with decreasing modulation percentage (contrast). Figure 6 illustrates a quantifiable Figure of Merit (FOM) for the present invention, defined as dependent on two primary factors: absolute spatial resolution (RA, in microns), represented on the Y axis, and field of view (F, in microns), represented on the X axis of a graph 500. A reasonable FOM, referred to as the "Spatial Field Number" (S), can be expressed as the ratio of these two quantities, with greater values of S being desirable for imaging, as follows: S = F / RA. A line 510 illustrates that the FOM remains substantially constant across the field of view and over different values of absolute spatial resolution, which is an improvement over conventional systems. Figures 7, 8, 14, 15, and 16 illustrate methodologies for facilitating imaging performance in accordance with the present invention. While, for purposes of simplicity of explanation, the methodologies may be shown and described as a series of acts, it is to be understood and appreciated that the present invention is not limited by the order of acts, as some acts can, in accordance with the present invention, occur in different orders and/or concurrently with other acts from those shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram.
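The Spatial Field Number defined above is a simple ratio and can be sketched directly. The sample field-of-view and resolution values below are assumed for illustration; the text does not fix particular values.

```python
# Sketch of the "Spatial Field Number" figure of merit, S = F / RA,
# where F is the field of view and RA the absolute spatial resolution
# (both in microns). Larger S is better. Sample values are assumed.

def spatial_field_number(field_of_view_um: float, resolution_um: float) -> float:
    """Figure of merit S = F / RA."""
    return field_of_view_um / resolution_um

print(spatial_field_number(640.0, 1.0))  # e.g., 640 um field at 1 um resolution -> 640.0
```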
Furthermore, not all illustrated acts may be required to implement a methodology in accordance with the present invention. Turning now to Figure 7 and proceeding to 610, lenses are selected having diffraction-limited characteristics at approximately the same size as a pixel to provide unit mapping and optimization of the k-space design. At 614, the characteristics of the lenses are also selected to mitigate reduction of the spatial frequencies within the k-space. As described above, this generally implies that larger-diameter optics are selected in order to mitigate attenuation of the desired spatial frequencies of interest. At 618, a lens configuration is selected such that the pixels, having a pitch "P", at the image plane defined by the position of a sensor are scaled according to the pitch into a target field of view at approximately the size of a diffraction-limited spot (e.g., unit mapping) within the target field of view. At 622, an image is generated by outputting data from a sensor for real-time monitoring and/or storing the data in memory for direct display on a computer display and/or subsequent local or remote image processing and/or analysis within the memory. Figure 8 illustrates a methodology that can be employed to design an optical/imaging system in accordance with an aspect of the present invention. The methodology begins at 700, in which a suitable sensor array is selected for the system. The sensor array includes a matrix of receptor pixels having a known pitch size, typically defined by the manufacturer. The sensor can be of substantially any shape (e.g., rectangular, circular, square, triangular, etc.). As an illustration, assume that a 640 x 480 pixel sensor having a pitch size of 10 μm is selected. It will be understood and appreciated that an optical system can be designed for any type and/or size of sensor array in accordance with an aspect of the present invention.
Then at 710, an image resolution is defined. The image resolution corresponds to the smallest desired resolvable spot size at the image plane. The image resolution can be defined based on the application(s) for which the optical system is being designed, as any resolution that is greater than or equal to the smallest diffraction-limited size. In this way, it will be appreciated that resolution becomes a selectable design parameter that can be tailored to provide the desired image resolution for virtually any type of application. In contrast, most conventional systems tend to limit resolution according to Rayleigh diffraction, which provides that the intrinsic spatial resolution of the lenses cannot exceed diffraction limits for a given wavelength. After selecting a desired resolution (710), a suitable amount of magnification is determined at 720 to achieve such resolution. For example, the magnification relates functionally to the pixel pitch of the sensor array and the smallest resolvable spot size. The magnification (M) can be expressed as follows: M = x / y (Eq. 1), wherein: x is the pixel pitch of the sensor array; and y is the desired image resolution (minimum spot size). Thus, for the above example, where the pixel pitch is 10 μm, and assuming a desired image resolution of 1 μm, Equation 1 provides a ten-times-magnification optical system. That is, the lens system is configured to back-project each 10 μm pixel to the target plane and reduce the respective pixels toward the 1-micron resolvable spot size. The methodology of Figure 8 also includes a determination of a numerical aperture at 730. The numerical aperture (NA) is determined according to well-established diffraction rules that relate the NA of the objective lens to the minimum resolvable spot size determined at 710 for the optical system. As an example, the calculation of NA can be based on the following equation: NA = 0.5 · λ / y (Eq. 2), wherein: λ
is the wavelength of the light used in the optical system; and y is the minimum spot size (e.g., determined at 710). Continuing with the example in which the optical system has a resolvable spot size of y = 1 micron, and assuming a wavelength of approximately 500 nm (e.g., green light), an NA equal to 0.25 satisfies Equation 2. It is noted that relatively inexpensive, commercially available ten-times-magnification objectives provide numerical apertures of 0.25. It will be understood and appreciated that the relationship among NA, wavelength, and resolution represented by Equation 2 can be expressed in different ways according to various factors that account for the behavior of objectives and condensers. Thus, the determination at 730, in accordance with an aspect of the present invention, is not limited to any particular equation but instead simply obeys known general physical laws in which NA is functionally related to the wavelength and resolution. After the lens parameters have been designed in accordance with the selected sensor (700), the corresponding optical components can be arranged to provide an optical system (740) in accordance with an aspect of the present invention. Assume, for purposes of illustration, that the exemplary optical system created in accordance with the methodology of Figure 8 is to be employed for microscopic digital imaging. By way of comparison, in classical microscopy, in order to image and resolve structures of a size approaching one micron (and less), magnifications of many hundreds are normally required. The basic reason for this is that such optics have conventionally been designed for the situation in which the sensor of choice is the human eye. In contrast, the methodology of Figure 8 designs the optical system in view of the sensor, which offers significant performance increases at reduced cost.
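The two design equations of the Figure 8 methodology can be evaluated together; the sketch below uses the worked-example figures from the text (10 μm pitch, 1 μm resolution, 500 nm light) and is illustrative rather than a prescribed implementation.

```python
# Sketch of the Figure 8 design steps: Eq. 1 (magnification M = x / y)
# and Eq. 2 (numerical aperture NA = 0.5 * lambda / y), using the
# example values given in the text.

def magnification(pixel_pitch_um: float, resolution_um: float) -> float:
    """Eq. 1: M = x / y."""
    return pixel_pitch_um / resolution_um

def numerical_aperture(wavelength_um: float, resolution_um: float) -> float:
    """Eq. 2: NA = 0.5 * lambda / y."""
    return 0.5 * wavelength_um / resolution_um

print(magnification(10.0, 1.0))      # 10.0 -> a 10x system
print(numerical_aperture(0.5, 1.0))  # 0.25 -> a standard, inexpensive 10x objective
```

As the text notes, Eq. 2 is one convenient form of the diffraction relation; other equivalent expressions relating NA, wavelength, and resolution can be substituted.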
In the k-space design methodology, according to an aspect of the present invention, the optical system is designed around a discrete sensor having known, fixed dimensions. As a result, the methodology can provide a more direct, simpler, and more economical design procedure and optical system by "back-projecting" the sensor size onto the target plane and calculating a magnification factor. A second part of the methodology facilitates that the optics providing that magnification have an NA sufficient to optically resolve a spot of similar dimensions to the back-projected pixel. Advantageously, an optical system designed in accordance with an aspect of the present invention can utilize custom and/or off-the-shelf components. Thus, for this example, inexpensive optics can be employed in accordance with an aspect of the present invention to obtain suitable results, since even well-corrected microscope optics are relatively inexpensive. If custom-designed optics are utilized, in accordance with an aspect of the present invention, then the allowable range of magnifications and numerical apertures becomes substantial, and certain performance gains can be realized over the use of off-the-shelf optical components. In view of the concepts described above in relation to Figures 1-8, a plurality of related imaging applications can be enhanced by the present invention.
For example, these applications can include, but are not limited to, imaging, control, inspection, microscopy, and/or other automated analysis such as: (1) biomedical analysis (e.g., cell colony counting, histology, frozen sections, cellular cytology, hematology, pathology, oncology, fluorescence, interference, phase, and many other clinical microscopy applications); (2) particle sizing applications (e.g., for pharmaceutical manufacturers, paint manufacturers, cosmetics manufacturers, food process engineering, and others); (3) air quality monitoring and airborne particle measurement (e.g., clean-room certification, environmental certification, etc.); (4) optical defect analysis and other requirements for high-resolution microscopic inspection of both transmissive and opaque materials (such as in metallurgy, automated semiconductor inspection and analysis, automated vision systems, three-dimensional imaging, etc.); and (5) imaging technologies such as cameras, copiers, fax machines, and medical systems. Figures 9, 10, 11, 12, and 13 illustrate possible exemplary systems that can be constructed employing the concepts described above in relation to Figures 1-8. Figure 9 is a flow diagram of the light paths in an imaging system 800 adapted in accordance with the present invention. The system 800 employs a light source 804 that emits light that is received by a light condenser 808. The output of the light condenser 808 can be directed by a fold mirror 812 toward a microscope condenser 816 that projects the illumination light onto a slide stage 820, wherein an object (not shown, placed on top of or within the slide of the stage) can be imaged in accordance with the present invention. The slide stage 820 can be positioned automatically (and/or manually) by a computer 824 and an associated slide carrier 828 to image one or more objects within a field of view defined by an objective 832.
It is noted that the objective 832 and/or other components depicted in the system 800 can be adjusted manually and/or automatically by the computer 824 and associated controls (not shown) (e.g., servo motors, tube slides, linear and/or rotary position encoders, optical, magnetic, electronic, or other feedback mechanisms, control software, etc.) to achieve different and/or desired image characteristics (e.g., magnification, focus, which objects appear in the field of view, depth of field, etc.). The light output of the objective 832 can be directed through an optional beam splitter 840, wherein the beam splitter 840 is operative with an alternative epi-illumination section 842 (for illuminating objects from above the slide stage 820) including light-shaping optics 844 and an associated light source 848. Light passing through the beam splitter 840 is received by an imaging lens 850. The output of the imaging lens 850 can be directed to a CCD or other sensor or imaging device 854 by a fold mirror 860. The CCD or other sensor or imaging device 854 converts the light received from the object into digital information for transmission to the computer 824, wherein the object's image can be displayed to a user in real time and/or stored in a memory 864. As noted above, the digital information defining the image captured by the CCD or other sensor or imaging device 854 can be routed as bitmap information to the display/memory 864 by the computer 824. If desired, image processing, such as automatic comparison with predetermined samples or images, can be performed to determine an identity of and/or to analyze the object under examination. This can also include employing substantially any type of image processing technology or software that can be applied to the captured image data within the memory 864. Figure 10 is a system 900 representing an exemplary modular approach to imaging design in accordance with an aspect of the present invention.
The system 900 may be based on a sensor array 910 (e.g., provided in a stock camera) with a pixel pitch of approximately 8 microns (or another dimension), for example, where array sizes may range from 640x480 to 1280x1024 (or other dimensions as noted above). The system 900 includes a modular design wherein one respective module is substantially isolated from another module, thereby mitigating alignment tolerances. The modules may include:
• a camera/sensor module 914 that includes an imaging lens 916 and/or a foldable mirror 918;
• an epi illumination module 920 for insertion in a k-space region 922;
• a module 924 for sample containment and presentation;
• a light-shaping module 930 including a condenser 934; and
• a sub-stage illumination module 940.
It is noted that the system 900 can advantageously employ commercially available components, such as for example:
• condenser optics 934 (NA <= 1) for the presentation of light (for example, Olympus U-SC-2);
• standard plan/achromatic objectives 944 of magnification and numerical aperture, for example (4x, 0.10), (10x, 0.25), (20x, 0.40), (40x, 0.65), selected to satisfy the desired characteristic that, for a given magnification, the pixel pitch projected into the objective plane is similar to the diffraction-limited resolvable spot size of the optics (for example, Olympus 1-UB222, 1-UB223, 1-UB225, 1-UB227).
The system 900 uses an infinity space (k-space) between the objective 944 and the imaging lens 916 to facilitate the insertion of auxiliary and/or additional optical components, modules, filters, etc. in the k-space region at 922, such as for example when the imaging lens 916 is adapted as an achromatic triplet of f = 150 mm. In addition, the infinity space (k-space) between the objective 944 and the imaging lens 916 can be provided to facilitate the injection of light (via a light-shaping path) into the optical path for epi illumination.
For example, the light-shaping path for epi illumination may include:
• a light source 950, such as an LED driven from a current-stabilized supply (for example, HP HLMP-CW30);
• a projection hologram for source homogenization and the imposition of a virtual spatial source at 950 (for example, 30-degree FWHM POC light-shaping diffuser polyester film);
• a variable aperture at 960 to restrict the NA of the source 950 to that of the imaging optics, thereby mitigating the effect of the collected light entering the optical imaging path (e.g., Thorlabs iris diaphragm SM1D12, 0.5-12.0 mm aperture);
• a collection lens at 960 used to augment the gathered light of the virtual source 950 and to match the k-space characteristics of the source with those of the imaging optics (e.g., spherical lens of f = 50 mm, achromatic doublet of f = 50 mm); and
• a partially reflective beam splitter 964 used to form a coaxial light path and image path. For example, the optics 964 provide 50% reflectivity on a first surface (at a 45-degree inclination) and are coated with a broadband antireflection coating on a second surface.
The sub-stage illumination module 940 is provided by an arrangement that is substantially similar to that of the epi illumination described above, for example:
• a light source 970 (an LED powered from a current-stabilized supply) (for example, HP HLMP-CW30);
• a transmission hologram (associated with the light source 970) for purposes of source homogenization and the imposition of a virtual spatial source (for example, 30-degree FWHM POC light-shaping diffuser polyester film);
• a collection lens 974 used to increase the collected light of the virtual source 970 and to match the k-space characteristics of the source to those of the imaging optics (for example, spherical lens of f = 50 mm, achromatic doublet of f = 50 mm);
• a variable aperture 980 for restricting the NA of the source 970 to that of the imaging optics, thereby mitigating the effect of the collected light entering the optical imaging path (e.g., Thorlabs iris diaphragm SM1D12, 0.5-12.0 mm aperture);
• a mirror 988 used to fold the optical path through 90 degrees and to provide fine adjustment for precisely aligning the optical modules; and
• a relay lens (not shown) used to precisely position the image of the variable aperture 980 on the objective plane (on the slide 990), which, together with the proper placement of a holographic diffuser, thereby achieves Kohler illumination (for example, a simple plano-convex lens of f = 100 mm).
As described in the foregoing, a computer 994 and an associated display/memory 998 are provided to display in real time and/or store/process digital image data captured in accordance with the present invention. Figure 11 illustrates a system 1000 according to an aspect of the present invention. In this aspect, a sub-stage illumination module 1010 (e.g., Kohler, Abbe) can project light through a transmissive slide 1020 (object under examination not shown), wherein an achromatic objective 1030 receives the light from the slide and directs the light to an image capture module at 1040. It is noted that the achromatic objective 1030 and/or the slide 1020 can be controlled manually and/or automatically to position the object or objects under examination and/or to position the objective. Figure 12 illustrates a system 1100 according to one aspect of the present invention. In this aspect, an upper-stage illumination or epi illumination module 1110 can project light onto an opaque slide 1120 (object under examination not shown), wherein an objective 1130 (which can be a compound lens device or another type) receives light from the slide and directs the light to an image capture module at 1040.
As noted in the above, the objective 1130 and/or the slide 1120 can be controlled manually and/or automatically to position the object or objects under examination and/or to position the objective. Figure 13 depicts a system 1200 that is similar to system 1000 of Figure 11, except that a compound objective 1210 is employed in place of an achromatic objective. The imaging and processing systems described above in connection with Figures 1-13 can thus be used to capture/process an image of a sample, wherein the imaging systems are coupled to a processor or computer that reads the image generated by the imaging systems and compares the image with a variety of images in an onboard data store implemented in any number of current memory technologies. For example, the computer may include an analysis component to make the comparison. Some of the many algorithms used in image processing include, but are not limited to, convolution (on which many others rely), FFT, DCT, thinning (or skeletonization), edge detection, and contrast enhancement. These are usually implemented in software but can also use special-purpose hardware for speed. The FFT (fast Fourier transform) is an algorithm for calculating the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. It also addresses the essentially identical inverse task of reconstructing a signal from the frequency data. The DCT (discrete cosine transform) is a technique for expressing a waveform as a weighted sum of cosines. There is a variety of existing programming languages designed for image processing, including but not limited to IDL, Image Pro, Matlab, and many others. There are likewise no particular limits on the special and custom image processing algorithms that can be devised to perform functional image manipulation and analysis.
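As an illustrative sketch of the frequency analysis described above (not code from the specification), the following computes a naive discrete Fourier transform — the quantity an FFT computes efficiently — of an assumed 8-sample test signal:

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform; an FFT yields the same result in O(n log n)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure cosine with two cycles over 8 samples concentrates its
# spectral energy in frequency bins 2 and 6 (the conjugate bin).
signal = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
spectrum = [abs(c) for c in dft(signal)]
```

A practical implementation would of course use an optimized FFT library rather than this O(n²) form; the sketch only makes the "data expressed in terms of its component frequencies" idea concrete.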
The k-space design of the present invention also allows direct optical correlation of the Fourier frequency information contained in the image with stored information, in order to perform optically correlated image analyses in real time for a given sample object. Figure 14 illustrates a particle sizing application 1300 that can be employed with the systems and processes previously described. Particle sizing can include real-time, closed/open-loop monitoring, manufacturing with, and control of particles in view of the automatically determined particle sizes according to the k-space design concepts previously described. This may include automated analysis and detection techniques for several particles having similar or different sizes (n different sizes, n being an integer) and identification of m differently shaped/sized particles, m being an integer. In one aspect of the present invention, the desired particle size detection and analysis can be achieved by a direct measurement method. This implies that the absolute spatial resolution per pixel relates directly (or substantially so), in units of linear measurement, to the imaged particles, substantially without regard to the particle medium and the associated particle distribution. Direct measurement generally does not create a model but rather provides a metrology and morphology of the imaged particles in any given sample. This mitigates the need for modeling algorithms, statistical algorithms, and the other modeling limitations presented by current technology. In this way the problem becomes one of sample handling and presentation, which improves the accuracy and precision of measurements since the particle data are directly imaged and measured rather than modeled, if desired. Proceeding to 1310 of the particle sizing application 1300, the particle size imaging parameters are determined.
For example, the basic device design can be configured to image at a desired Absolute Spatial Resolution per pixel and Effective Resolved Magnification as previously described. These parameters determine the field of view (FOV), depth of field (DOF), and working distance (WD), for example. Real-time measurement can be achieved by asynchronous imaging of a medium at selected time intervals, in real time at common video rates, and/or at image capture rates as desired. Real-time imaging can also be achieved by capturing images at selected times for subsequent image processing. Asynchronous imaging can be achieved by capturing images at selected times by pulsing the instrument illumination at selected times and duty cycles for subsequent image processing. At 1320, a sample introduction process is selected for automated (or manual) analysis. Samples can enter an imaging device adapted in accordance with the present invention in any of the following imaging processes (but not limited to): 1) all methods and media previously described; 2) individual manual samples in containers, slides and/or transmissive media; 3) continuous flow of particles in a gaseous or liquid medium, for example; 4) with an imaging device configured for reflective imaging, the samples can be opaque and presented in an "opaque" carrier (automated and/or manual), substantially without regard to the material analyzed. At 1330, a process control and/or monitoring system is configured. Real-time, closed-loop and/or open-loop monitoring, manufacturing with (for example, closed loop around particle size), and process control by direct measurement of particle characteristics can be provided (e.g., size, shape, morphology, cross section, distribution, density, packing fraction, and other parameters can be determined automatically).
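The direct-measurement idea described above — each pixel maps straight to linear units, so no statistical particle model is needed — can be sketched as follows. The per-pixel calibration value and the bin width are assumptions for illustration, not values from the specification:

```python
# Direct measurement: with a known absolute spatial resolution per pixel,
# a particle's extent in pixels converts directly to linear units.

def particle_size_um(extent_pixels, um_per_pixel):
    """Convert a measured pixel extent to a physical size in microns."""
    return extent_pixels * um_per_pixel

def size_histogram(extents_pixels, um_per_pixel, bin_um=10.0):
    """Bin directly measured particle sizes for distribution reporting."""
    bins = {}
    for px in extents_pixels:
        size = particle_size_um(px, um_per_pixel)
        key = int(size // bin_um) * bin_um
        bins[key] = bins.get(key, 0) + 1
    return bins
```

With an assumed calibration of 2.0 microns per pixel, a 5-pixel particle measures 10 microns directly, with no distribution model in between.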
It will be appreciated that although direct measurement techniques are performed on a given particle sample, automated algorithms and/or processing can also be applied to the imaged sample if desired. In addition, a k-space particle characterization device can be installed at substantially any given point in a manufacturing process to monitor and communicate particle characteristics for process control, quality control, etc. by direct measurement. At 1340, a plurality of different types of samples may be selected for analysis. For example, samples of particles in any of the aforementioned forms can be introduced in continuous-flow, periodic and/or asynchronous processes for direct measurement in a device as part of a closed-loop process feedback system for controlling, recording and/or communicating the particle characteristics of a given sample type (open-loop techniques may also be included if desired). Imaging may be asynchronous and/or synchronous: the former denotes imaging in which a trigger signal, initiated by an event or object, is generated to initiate image formation; the latter denotes imaging in which a timing signal is sent to trigger the illumination. Asynchronous and/or synchronous imaging can be achieved by pulsing a light source to match the desired image field with substantially any particle flow rate. This can be controlled by a computer, for example, and/or by a "trigger" mechanism, whether mechanical, optical and/or electronic, to "flash" the solid-state illumination on and off with a given duty cycle, such that the image sensor captures, displays and registers the image for processing and analysis. This provides a direct process for illuminating and imaging, since it can effectively be timed to "stop the action" or, rather, "freeze" the motion of the particles flowing in the medium.
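A hedged sketch of the "freeze the motion" timing above: the flash must be short enough that a particle moves less than about one projected pixel during the exposure. The function names and all numeric values are illustrative assumptions, not parameters from the specification:

```python
# Strobed-illumination timing sketch (assumed values throughout).

def max_flash_s(flow_speed_um_per_s, um_per_pixel, blur_fraction=1.0):
    """Longest flash for which motion blur stays under blur_fraction pixels."""
    return blur_fraction * um_per_pixel / flow_speed_um_per_s

def duty_cycle(flash_s, frame_period_s):
    """Fraction of each frame period during which the illumination is on."""
    return flash_s / frame_period_s

# Example: particles flowing at 1000 um/s imaged at 2.0 um per pixel
# tolerate a 2 ms flash; at a 25 Hz frame rate that is a 5% duty cycle.
flash = max_flash_s(1000.0, 2.0)
duty = duty_cycle(flash, 0.04)
```

In practice the computer or trigger mechanism described above would drive the LED with this flash duration and duty cycle so the sensor captures an effectively frozen particle field.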
In addition, such pulsed illumination allows particles within the image field to be captured for subsequent image processing and analysis. Real-time (or substantially real-time), closed-loop and/or open-loop monitoring, manufacturing with, and process control by k-space-based direct measurement of particle characteristics at 1340 can be applied to a wide range of processes that include (but are not limited to): ceramics, powders, pharmaceuticals, cement, minerals, ores, coatings, adhesives, pigments, dyes, carbon black, filtration materials, explosives, food preparations, health and cosmetic emulsions, polymers, plastics, micelles, beverages, and many more particle-based substances that require monitoring and process control. Other applications include but are not limited to:
• calibration and instrument standards;
• industrial hygiene research;
• materials research;
• energy and combustion studies;
• measurements of diesel and gasoline engine emissions;
• sampling of industrial emissions;
• basic aerosol research;
• environmental studies;
• bioaerosol detection;
• pharmaceutical research;
• health and agriculture experiments;
• inhalation toxicology; and/or
• filtration testing.
At 1350, software- and/or hardware-based computerized image processing/analysis can occur. The images from a device adapted in accordance with the present invention can be processed in accordance with substantially any hardware and/or software process. Software-based image processing can be achieved with custom software and/or commercially available software, since the image file formats are digital formats (for example, bitmaps of captured particles). Analysis, characterization, etc. can also be provided as follows: for example, the analysis can be metrological (based on direct measurement) and/or comparative (database-based).
The comparative analysis may include comparisons against a database of image data for known particles and/or variants thereof. Advanced image processing can characterize and catalog real-time images and/or measurements of periodic samples. The data can be discarded and/or recorded as desired, while correlation of sample characteristics with known data can initiate a suitable selected response, for example. In addition, a device adapted in accordance with the present invention can be linked for communication over any data transmission process. This may include wireless, broadband, telephone modem, standard telecommunication, Ethernet or other network protocols (e.g., Internet, TCP/IP, Bluetooth, cable TV transmissions, as well as others). Figure 15 illustrates a fluorescence application 1400 according to an aspect of the present invention that can be employed with the systems and processes previously described. A k-space system adapted in accordance with the present invention has a light system that includes a low-intensity light source at 1410, such as a Light Emitting Diode (LED), emitting light having a wavelength from about 250 to about 400 nm (e.g., ultraviolet light). The LED may be used to provide epi illumination, transillumination as described herein, or another type. The use of an LED (or other low-intensity ultraviolet light source) also allows for waveguide illumination, in which the UV excitation wavelength is introduced into a flat surface supporting the object under test at 1420, so that evanescent-wave coupling of the UV light can excite the fluorophores within the object. For example, the UV light can be provided at approximately a right angle to a substrate on which the object lies. At 1430, the LED (or other light source or combinations thereof) can emit the light for a predetermined period of time and/or be controlled in a strobe-like manner that emits pulses at a desired rate.
At 1440, the excitation is applied to the object during the period determined at 1430. At 1450, automated and/or manual analysis is performed on the object during (and/or approximately during) the excitation period. By way of illustration, the object is sensitive to ultraviolet light in that it fluoresces in response to the excitation by the UV light from the light source. Fluorescence is a condition of a material (organic or inorganic) in which the material continues to emit light while absorbing the excitation light. Fluorescence can be an inherent property of a material (e.g., autofluorescence) or can be induced, such as by using fluorochrome stains or dyes. A dye may have an affinity for a particular protein or other receptive capacity to facilitate the discovery of different conditions associated with the object. In a particular example, fluorescence microscopy and/or digital imaging provide a way in which to study various materials that exhibit secondary fluorescence. By way of another example, the UV LED (or other source) can produce intense flashes of UV radiation for a short period of time, with an image being constructed by a sensor (adapted to the excitation wavelength) a short moment later (for example, milliseconds to seconds). This mode can be used to investigate the time-decay characteristics of the fluorescent components of the object (or samples) being tested. This can be important where two parts of the object (or different samples) can respond similarly (e.g., fluoresce substantially the same under continuous illumination) but can have different emission decay characteristics.
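The emission-decay distinction above can be illustrated with a simple exponential-decay model; the lifetimes and intensities below are assumed values for the sketch, not measurements from the specification:

```python
import math

def emission(t_s, intensity_0, lifetime_s):
    """Exponential fluorescence decay after the excitation flash ends."""
    return intensity_0 * math.exp(-t_s / lifetime_s)

# Two assumed fluorophores with equal initial brightness but different
# lifetimes: indistinguishable at t = 0, clearly separated a few
# milliseconds after the flash.
fast = emission(0.005, 100.0, 0.001)   # short-lifetime component
slow = emission(0.005, 100.0, 0.010)   # long-lifetime component
```

Sampling the image a known delay after the flash, as described above, is what separates the two components even though they look the same under continuous illumination.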
As a result of using a low-intensity UV light source, such as the LED, light from the light source can cause at least a portion of the object under test to emit light, usually not at the ultraviolet wavelength, because at least a portion of the object fluoresces. Pre- or post-fluorescence images can be correlated with those obtained during fluorescence of the object to determine different characteristics of the object. In contrast, most conventional fluorescence systems are configured to irradiate a specimen and then separate the much weaker re-radiated fluorescent light from the brighter excitation light, typically through filters. In order to allow detectable fluorescence, conventional systems usually require powerful light sources. For example, the light sources may be mercury or xenon arc lamps (burners), which produce the high-density illumination needed to image fluorescent specimens. In addition to the operating current (for example, typically 100-250 watt lamps), those types of light sources typically have short operating lives (e.g., 10-100 hours). In addition, a power supply for conventional light sources often includes a timer to help track the number of hours of use, since arc lamps tend to become inefficient and are more likely to fragment if used beyond their rated lifetime. Furthermore, mercury burners generally do not provide uniform intensity across the ultraviolet-to-infrared spectrum, since much of the intensity of the mercury burner is concentrated in the near ultraviolet. This often requires precision filtering to remove unwanted wavelengths of light. Accordingly, it will be appreciated that using a UV LED in accordance with an aspect of the present invention provides a substantially uniform intensity at a desired UV wavelength, mitigating energy consumption and the heat generated through use. Additionally, the replacement cost of an LED light source is significantly lower than that of conventional lamps.
Figure 16 illustrates a thin film application 1500 according to one aspect of the present invention. Thin films and coatings can be broadly characterized as thin layers (varying in thickness from molecular, to microscopic, to significant macroscopic thicknesses) of a certain material or multiple materials, deposited in a manner suitable to the respective materials on various substrates of choice, and may include (but are not limited to) any of the following: metallic coatings (e.g., reflective, including partial, opaque and transmissive), optical coatings (e.g., interference, transmission, anti-reflective, bandpass, blocking, protective, multi-coating, etc.), plating (for example, metal, oxide, chemical, antioxidant, thermal, etc.), electrically conductive films (for example, deposited and constructed macro- and microcircuits), and optically conductive films (for example, deposited optical materials of various indices of refraction, micro- and macro-optical circuits). This can also include other layered and film-type coating and film materials on any substrate, which can be characterized by deposition in various forms to leave a desired layer of one or more materials on the substrate with a desired thickness, consistency, continuity, uniformity, adhesion, and other parameters associated with any given deposited film. The associated thin film analysis may include the detection of microbubbles, cavities, microscopic debris, deposition faults, and the like. Proceeding to 1510, a k-space system is configured for thin film analysis according to one aspect of the present invention. The application of a k-space imaging device to the problem of thin film inspection and characterization can be employed to identify and characterize flaws in a thin film or films, for example.
Such a system can be adapted to facilitate: 1) manual observation of a substrate with deposited thin film of all types; 2) automatic observation/analysis and characterization of a thin-film-deposited substrate of all types for pass/fail inspection; and 3) automatic observation and characterization of a thin-film-deposited substrate of all types for computer-controlled comparative deposition analysis; this may include image data written to the recording media of choice (e.g., CD-ROM, DVD-ROM) for verification, certification, etc. A k-space device can be configured to image at a desired Absolute Spatial Resolution (ASR) per pixel and a desired Effective Resolved Magnification (ERM). These parameters facilitate the determination of FOV, DOF, and WD, for example. This may include objective-based design configurations and/or achromatic lens design configurations (e.g., for wide FOV and moderate ERM and ASR). The illumination can be selected based on inspection parameters, such as transillumination and/or epi illumination, for example. At 1520, a substrate is mounted on an imager in such a way that it can be scanned by: 1) movement of the optical image path by an optical scanning method; and/or 2) indexing the object under test directly by a mechanical movement and control process (e.g., automated by computer or manual by operator). This facilitates inspection of an entire surface or any desired portion of the surface. As discussed above in the context of particle sizing, asynchronous imaging at selected time intervals and/or in real time for respective scanned areas of the substrate (e.g., determined by the FOV) can be provided at common video rates and/or at desired image capture rates. The images of indexed and/or scanned areas can be captured with the desired frequency for subsequent image processing. In addition, samples can be introduced into the device manually and/or in automated fashion from a feed such as a conveyor system.
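The determination of ASR and FOV mentioned at 1510 can be sketched arithmetically; the sensor figures echo the 8-micron pitch and 1280x1024 array given earlier, while the 10x magnification is an assumption of this sketch:

```python
# Relating sensor geometry and magnification to object-plane metrics.

def asr_um_per_pixel(pixel_pitch_um, magnification):
    """Absolute spatial resolution: pixel pitch referred to the object plane."""
    return pixel_pitch_um / magnification

def fov_mm(array_pixels, pixel_pitch_um, magnification):
    """Field of view at the object plane along one sensor axis, in mm."""
    return array_pixels * asr_um_per_pixel(pixel_pitch_um, magnification) / 1000.0

asr = asr_um_per_pixel(8.0, 10)   # 0.8 um per pixel at an assumed 10x
fov_x = fov_mm(1280, 8.0, 10)     # about 1.02 mm across the long axis
```

These two quantities drive the indexing plan at 1520: the scan step per field is at most one FOV, so total scan time scales with the sample area divided by the FOV area.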
At 1530, the operational parameters for thin film applications are determined and applied. Typical operational parameters may include (but are not limited to): 1) imaging of various defects and features including, but not limited to, particles and holes on the surface(s) (or interior) of a thin film; 2) modular designs that can be varied as needed for reflective and transparent surfaces; 3) automated counting and categorization of surface defects by size, location, and/or number per image area of successively indexed (and/or "scanned") images (with index identification and totals for respective sample surfaces); 4) recording the location of defects for subsequent manual inspection; 5) providing images in a standard format or formats for subsequent export (for example, by Ethernet or another protocol) or manual and/or automated image processing for archiving and documentation on a computer, server and/or client; and/or 6) nominal scan time per surface from seconds to minutes depending on the total area, with scanning and indexing speed generally understood to vary with the sample area and the subsequent processing. At 1540, software- and/or hardware-based computerized image processing/analysis may occur. The images from a device adapted in accordance with the present invention can be processed in accordance with substantially any hardware and/or software process. Software-based image processing can be achieved with custom software and/or commercially available software, since the image file formats are digital formats (e.g., bitmaps of captured films). Analysis, characterization and so on can also be provided as follows: for example, the analyses can be metrological (based on direct measurement) and/or comparative (database-based). Comparative analyses may include comparisons against a database of image data for known films and/or variants thereof.
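The automated counting and categorization of surface defects (item 3 above) might be sketched as a connected-component pass over a binarized inspection image; the grid representation, threshold step, and 4-connectivity are assumptions of this sketch rather than details from the specification:

```python
# Count defects in a binarized inspection image (1 = defect pixel).

def label_defects(grid):
    """Return [(size_in_pixels, (row, col) of first pixel)] per defect."""
    rows, cols = len(grid), len(grid[0])
    seen, defects = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                # Flood-fill one 4-connected defect region.
                stack, size = [(r, c)], 0
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                defects.append((size, (r, c)))
    return defects
```

The per-defect size and location pairs map directly onto the categorization-by-size-and-location and defect-location-recording steps listed above.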
Advanced image processing can characterize and catalog real-time images and/or measurements of periodic samples. The data can be discarded and/or recorded as desired, while correlation of sample characteristics with known data can initiate a suitable selected response, for example. In addition, a device adapted in accordance with the present invention can be linked for communication over any data transmission process. This may include wireless, broadband, telephone modem, standard telecommunications, Ethernet or other network protocols (e.g., Internet, TCP/IP, Bluetooth, cable TV transmissions, as well as others). In another aspect of the present invention, an imaging system adapted as described above provides high effective resolved magnification and high spatial resolution, among other features, and can be combined with biological material methods to provide improved biological material imaging systems and methods. The biological material imaging systems and methods of the present invention allow the production of improved images (higher effective magnification, improved resolution, improved depth of field, and the like), facilitating the identification of biological materials as well as the classification of biological materials (for example, as normal or abnormal). Biological material includes microorganisms (organisms too small to be seen with the naked eye) such as bacteria, viruses, protozoa, fungi, and ciliates; cellular material from organisms such as cells (lysed cells, intracellular material, or whole cells), proteins, antibodies, lipids, and carbohydrates, labeled or unlabeled; and portions of organisms such as groups of cells (tissue samples), blood, pupils, irises, fingertips, teeth, portions of the skin, hair, mucous membranes, bladder, breast, components of the male and female reproductive systems, muscle, vascular components, components of the central nervous system, liver, bone, colon, pancreas, and the like.
Since the biological material imaging system of the present invention can employ a relatively large working distance, portions of the human body can be examined directly, without the need to remove a tissue sample. The cells include human cells, non-human animal cells, plant cells, and synthetic/research cells. The cells include prokaryotic and eukaryotic cells. The cells can be healthy, cancerous, mutated, damaged, or diseased. Examples of bacteria include anthrax bacteria, Actinomycetes spp., Azotobacteria, Bacillus cereus, Bacillus species, Bordetella pertussis (whooping cough), Borrelia burgdorferi, Campylobacter jejuni, Chlamydia species, Clostridium species, Cyanobacteria, Deinococcus radiodurans, Escherichia coli, Enterococcus, Haemophilus influenzae, Helicobacter pylori, Klebsiella pneumoniae, Lactobacillus spp., Lawsonia intracellularis, Legionella, Listeria spp., Micrococcus spp., Mycobacterium leprae, Mycobacterium tuberculosis, Myxobacteria, Neisseria gonorrhoeae, Neisseria meningitidis, Prevotella spp., Pseudomonas spp., Salmonella, Serratia marcescens, Shigella species, Staphylococcus aureus, Streptococcus, Thiomargarita namibiensis, syphilis bacteria, Vibrio cholerae, Yersinia enterocolitica, Yersinia pestis, and the like.
Additional examples of biological material are those which cause diseases such as colds, infections, malaria, chlamydia, syphilis, gonorrhea, conjunctivitis, anthrax, meningitis, botulism, diarrhea, brucellosis, campylobacteriosis, candidiasis, cholera, coccidioidomycosis, cryptococcosis, diphtheria, foodborne infections, glanders (Burkholderia mallei), influenza, leprosy, histoplasmosis, legionellosis, leptospirosis, listeriosis, melioidosis, nocardiosis, nontuberculous mycobacterial disease, peptic ulcer disease, whooping cough, pneumonia, psittacosis, salmonella enteritidis, shigellosis, sporotrichosis, strep throat, toxic shock syndrome, trachoma, typhoid fever, urinary tract infections, Lyme disease, and the like. As described below, the present invention also relates to methods for diagnosing any of the above diseases. Examples of human cells include fibroblast cells, skeletal muscle cells, neutrophil white blood cells, lymphocyte white blood cells, erythroblast red blood cells, osteoblast bone cells, chondrocyte cartilage cells, basophil white blood cells, eosinophil white blood cells, adipocyte fat cells, invertebrate neurons (Helix aspersa), mammalian neurons, adrenomedullary cells, melanocytes, epithelial cells, and endothelial cells; tumor cells of all types (particularly melanoma, myeloid leukemia, and carcinomas of the lung, breast, ovaries, colon, kidney, prostate, pancreas and testes); cardiomyocytes, endothelial cells, epithelial cells, lymphocytes (T cells and B cells), mast cells, eosinophils, basal intimal cells, hepatocytes, leukocytes including mononuclear leukocytes; stem cells such as hemopoietic, neural, skin, lung, kidney, liver and myocyte stem cells; osteoblasts, chondrocytes and other connective tissue cells; keratinocytes, melanocytes, liver cells, kidney cells, and adipocytes. Examples of research cells include transformed cells, Jurkat T cells, NIH3T3 cells, CHO cells, COS cells, etc.
A useful source of cell lines and other biological material can be found in the ATCC catalogs of cell lines and hybridomas; bacteria and bacteriophages; yeasts; mycology and botany; and protists (algae and protozoa), and others available from the American Type Culture Collection (Rockville, Md.), all of which are incorporated herein by reference. These are non-limiting examples, as an endless list of cells and other biological materials could be recited. The identification or classification of the biological material in some cases can lead to the diagnosis of disease. Thus, the present invention also provides improved diagnostic systems and methods. For example, the present invention also provides methods for the detection and characterization of medical pathologies such as cancer; pathologies of the musculoskeletal system, digestive system, reproductive system, and alimentary canal; in addition to atherosclerosis, angiogenesis, arteriosclerosis, inflammation, atherosclerotic heart disease, myocardial infarction, trauma to arterial or venous walls, neurodegenerative diseases, and cardiopulmonary diseases. The present invention also provides methods for the detection and characterization of viral and bacterial infections. The present invention also allows the evaluation of the effects of various physiological agents or activities on biological materials, in both in vitro and in vivo systems. For example, the present invention allows the assessment of the effect of a physiological agent, such as a drug, on a population of cells or tissue grown in culture. The biological material imaging system of the present invention allows computerized or automated process control to obtain data from samples of biological material. In this connection, a computer or processor coupled with the biological material imaging system contains, or is coupled to, a memory or database containing images of biological material, such as diseased cells of various types.
In this context, automatic designation of normal and abnormal biological material can be made. The biological material imaging system acquires images of a given sample of biological material, and the images are compared with images in memory, such as images of diseased cells. In a sense, the computer/processor performs a comparison analysis of collected image data and stored image data and, based on the results of the analysis, formulates a determination of the identity of a given biological material; of the classification of a given biological material (normal/abnormal, cancerous/non-cancerous, benign/malignant, infected/non-infected, and the like); and/or of a condition (diagnosis). If the computer/processor determines that a sufficient degree of similarity is present between particular images of a sample of biological material and stored images (such as of diseased cells or of the same biological material), then the image is saved and data associated with the image can be generated. If the computer/processor determines that a sufficient degree of similarity is not present between the particular image of a sample of biological material and the stored images of diseased cells/particular biological material, then the sample of biological material is repositioned and additional images are compared with the images in memory. It will be appreciated that statistical methods can be applied by the computer/processor to aid in the determination that a sufficient degree of similarity is present between particular images of a sample of biological material and the stored images of biological material. Any suitable correlation means, memory, operating system, analytical component, and software/hardware can be used by the computer/processor.
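The comparison-and-reposition workflow above can be sketched as a simple image-similarity check. The normalized cross-correlation metric and the 0.8 acceptance threshold below are illustrative assumptions only; the patent does not specify a particular statistical method.

```python
import numpy as np

def similarity(captured, stored):
    """Normalized cross-correlation between two equal-size grayscale images
    (1.0 = identical up to brightness/contrast, -1.0 = fully inverted)."""
    a = captured - captured.mean()
    b = stored - stored.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(captured, stored_images, threshold=0.8):
    """Return the index of the best-matching stored image, or None when no
    stored image reaches the threshold (i.e. the sample is repositioned
    and additional images are captured)."""
    scores = [similarity(captured, s) for s in stored_images]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```

A real system would use far more robust descriptors, but the accept-or-reposition decision structure matches the workflow described above.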
With reference to Figure 17, an exemplary automated biological material imaging system 1600 in accordance with one aspect of the present invention is shown, which allows computer control or automated process control in obtaining data from samples of biological material. An imaging system 1602 as described/configured in conjunction with Figures 1-16 above can be used to capture an image of a biological material 1604. The imaging system 1602 is coupled to a processor 1606 and/or computer that reads the image generated by the imaging system 1602 and compares the image with a variety of images in the data store 1608. The processor 1606 contains an analysis component to make the comparison. Some of the many algorithms used in image processing include convolution (on which many others are based), the FFT, the DCT, thinning (or skeletonization), edge detection, and contrast enhancement. They are usually implemented in software but can also use special-purpose hardware for speed. The FFT (fast Fourier transform) is an algorithm for calculating the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. An essentially identical inverse operation reconstructs the signal from its frequency data. The DCT (discrete cosine transform) is a technique for expressing a waveform as a weighted sum of cosines. There are several languages designed for image processing, for example CELIP (cellular language for image processing) and VPL (visual programming language). The data store 1608 contains one or more sets of predetermined images. The images may include normal images of various biological materials and/or abnormal images of various biological materials (diseased, mutated, physically decomposed, and the like).
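As a concrete illustration of the FFT behavior just described (component frequencies out, an essentially identical inverse back), the following sketch samples a pure tone and recovers its frequency; the 5 Hz tone and 64-sample window are arbitrary choices, not values from the patent.

```python
import numpy as np

# Sample a 5 Hz sine wave at 64 samples per second for one second.
fs = 64
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t)

# The FFT expresses the data in terms of its component frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
peak_hz = float(freqs[np.argmax(np.abs(spectrum))])  # dominant component

# The essentially identical inverse operation reconstructs the samples.
reconstructed = np.fft.irfft(spectrum, n=fs)
```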
The images stored in the data store 1608 provide a basis for determining whether or not a given captured image is similar (or the degree of similarity) to the stored images. In one aspect, the automated biological material imaging system 1600 can be used to determine whether a sample of biological material is normal or abnormal. For example, the automated biological material imaging system 1600 can identify the presence of diseased cells, such as cancer cells, in a sample of biological material, thereby facilitating the diagnosis of a given disease or condition. In another aspect, the automated biological material imaging system 1600 can diagnose the diseases/conditions listed above by identifying the presence of a biological material that causes the disease (such as a disease-causing bacterium described in the foregoing), and/or by determining that a given biological material is infected with a disease-causing entity such as a bacterium, or by determining that a given biological material is abnormal (cancerous). In yet another aspect, the automated biological material imaging system 1600 can be employed to determine the identity of a biological material of unknown origin. For example, the automated biological material imaging system 1600 can identify a white powder as containing anthrax. The automated biological material imaging system 1600 can also facilitate the processing of biological material, such as by performing white blood cell or red blood cell counts in blood samples, for example. The computer/processor 1606 may be coupled to a controller that controls a servomotor or other means to move the sample of biological material within a target plane so as to facilitate remote, hands-free imaging. That is, motors, adjusters and/or other mechanical means can be used to move the slide bearing the sample of biological material within the objective field of view.
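The blood cell counting mentioned above reduces, in its simplest form, to counting connected regions in a thresholded image. The sketch below is a plain-Python connected-component count over a hypothetical binary mask; a real system would add size filtering and segmentation of touching cells.

```python
from collections import deque

def count_cells(mask):
    """Count 4-connected foreground blobs in a binary image, where each
    blob stands for one (non-touching) cell in a thresholded smear image."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new blob found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill the blob
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count
```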
Further, since the images of the biological material examination process are optimized for viewing on a computer screen, television, and/or closed-circuit monitor, remote and network-based viewing and control can be implemented. Real-time imaging facilitates at least rapid diagnosis, data collection/generation, and the like. In another aspect, the biological material imaging system is directed to a portion of a human (such as a lesion on an arm, haze in the cornea, and the like), and the images formed are fed to a computer/processor (or through a network such as the Internet), which is employed to identify the possible presence of a particular type of diseased cell (an image of which is stored in memory). When a diseased cell is identified, the computer/processor instructs the system to remove/destroy the diseased cell, for example using a laser, liquid nitrogen, a cutting instrument, and/or the like. Figure 18 represents a high-level machine vision system 1800 according to the subject invention. System 1800 includes an imaging system 10 (Figure 1) according to the subject invention. The imaging system 10 is described in substantial detail above, and thus further discussion regarding details related thereto is omitted for brevity. The imaging system 10 can be used to collect data in relation to a product or process 1810, and provide the image information to a controller 1820 that can regulate the product or process 1810, for example with respect to production, process control, quality control, testing, inspection, etc. The imaging system 10, as noted above, provides collection of image data at a granularity not available in many conventional systems. As a result, the fine-grained image data provided by the subject imaging system 10 can offer highly effective machine vision inspection of the product or process 1810.
For example, tiny product defects that typically cannot be detected by conventional machine vision systems can be detected by the subject system 1800 as a result of the image data collected by the imaging system 10. The controller 1820 may be any suitable controller or control system employed in conjunction with a manufacturing scheme, for example. The controller 1820 can use the collected image data to reject a defective product or process, revise a process or product, accept a product or process, etc., as is common for machine vision-based control systems. It will be appreciated that system 1800 can be employed in any suitable machine vision-based environment, and all such applications of the subject invention are intended to fall within the scope of the claims appended hereto. For example, the subject system 1800 can be used in conjunction with semiconductor fabrication, where device and/or process tolerances are critical to the manufacture of reliable, consistent semiconductor-based products. In this way, the product 1810 can represent a semiconductor wafer, for example, and the system 1800 can be used to collect data (e.g. critical dimensions, thicknesses, potential defects, other physical aspects ...) in relation to devices being formed on the wafer. The controller 1820 may employ the collected data to reject the wafer due to various defects, modify a process in connection with manufacturing devices on the wafer, accept the wafer, etc. What has been described above are preferred aspects of the present invention. Of course, it is not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one skilled in the art will recognize that many additional combinations and permutations of the present invention are possible.
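The accept/reject/modify decision made by the controller can be sketched as a tolerance check over measured critical dimensions. The dimension names and the tolerance value below are hypothetical illustrations, not values from the patent.

```python
def inspect(measurements, nominal, tolerance):
    """Compare measured critical dimensions against nominal values and
    return ('accept', {}) or ('reject', {name: measured_value, ...})."""
    defects = {name: value
               for name, value in measurements.items()
               if abs(value - nominal[name]) > tolerance}
    return ("accept", defects) if not defects else ("reject", defects)
```

A controller would route "reject" results to rework or scrap, much as described for the wafer example above.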
Accordingly, the present invention is intended to encompass all alterations, modifications or variations that fall within the spirit and scope of the appended claims.
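The reciprocal space design underlying the description and claims maps each sensor pixel to roughly one diffraction-limited spot. As a back-of-the-envelope sketch, using the standard Rayleigh/Airy estimate d ≈ 1.22 λ/NA (a textbook formula, not one quoted from the patent), one can compute the magnification at which a given pixel pitch achieves that unit mapping:

```python
def spot_diameter_nm(wavelength_nm, numerical_aperture):
    """Approximate diffraction-limited (Airy) spot diameter in the object
    plane, using the Rayleigh criterion d = 1.22 * lambda / NA."""
    return 1.22 * wavelength_nm / numerical_aperture

def unit_mapping_magnification(pixel_pitch_um, wavelength_nm, numerical_aperture):
    """Magnification that projects one sensor pixel onto about one
    diffraction-limited spot (the 'pitch scaled to spot size' idea)."""
    spot_um = spot_diameter_nm(wavelength_nm, numerical_aperture) / 1000.0
    return pixel_pitch_um / spot_um
```

For example, with 550 nm light, an NA of 1.22 (an oil-immersion objective) and a 5.5 µm pixel pitch, the unit-mapping magnification comes out to about 10x; the numbers are chosen only to make the arithmetic transparent.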

Claims (12)

CLAIMS
1. An imaging system, comprising: a sensor having one or more receivers, the receivers having a receiver size parameter; and an image transfer medium having a resolution parameter that is correlated with the receiver size parameter.
2. The system according to claim 1, wherein the image transfer medium provides a k-space filter that correlates a pitch associated with the one or more receivers with a diffraction-limited spot within the image transfer medium.
3. The system according to claim 2, wherein the pitch is unit-mapped to approximately the size of the diffraction-limited spot within the image transfer medium.
4. The system according to claim 1, wherein the image transfer medium further comprises at least one of a spherical lens, a multiple lens configuration, a fiber optic guide, an image conduit, and a holographic optical element.
5. The system according to claim 1, wherein the sensor further comprises an M-by-N array of pixels associated with the one or more receivers, M and N being integers representing rows and columns respectively, the sensor further comprising at least one of a digital sensor, an analog sensor, a Charge-Coupled Device (CCD) sensor, a CMOS sensor, a Charge Injection Device (CID) sensor, an array sensor, and a linear scan sensor.
6. The system according to claim 1, further comprising a computer and a memory to receive a sensor output, the computer at least one of storing the output in the memory, performing automated analysis of the output in the memory, and rendering the memory contents on a display to allow manual analysis of an image.
7. The system according to claim 1, further comprising an illumination source for illuminating one or more non-luminous objects within a target field of view, the illumination source comprising at least one of a Light Emitting Diode, specific-wavelength illumination, broadband illumination, continuous illumination, strobe illumination, Kohler illumination, Abbe illumination, phase contrast illumination, dark field illumination, bright field illumination, epi-illumination, coherent illumination, non-coherent illumination, visible light, and non-visible light, the non-visible light being suitably correlated to a sensor adapted to non-visible light.
8. The system according to claim 7, wherein the non-visible light further comprises at least one of infrared and ultraviolet wavelengths.
9. The system according to claim 1, further comprising an associated application, the application including at least one of imaging, control, inspection, microscopy, automated analysis, biomedical analysis, cell colony counting, histology, frozen section analysis, cellular cytology, hematology, pathology, oncology, fluorescence, interference, phase analysis, analysis of biological materials, particle sizing applications, thin film analysis, air quality monitoring, airborne particle measurement, optical defect analysis, metallurgy, semiconductor inspection and analysis, automated vision systems, 3-D imaging, cameras, copiers, facsimile machines, and medical systems applications.
10. A method for producing an image, comprising: determining a pitch size between adjacent pixels in a sensor; determining a resolvable object size in an optical medium; and scaling the pitch size through the optical medium to correspond with the resolvable object size.
11.
A machine vision system, comprising: an imaging system for collecting image data of a product or process, comprising: a sensor having one or more receivers, the receivers having a receiver size parameter; and at least one optical device for directing light from a target field of view to the one or more receivers of the sensor, the at least one optical device providing a mapping of the receiver size parameter to approximately a size of a diffraction-limited object appearing in the target field of view, the object having a resolution parameter that is correlated with the receiver size parameter; and a controller that receives the image data and employs the image data in connection with manufacturing or controlling the product or process.
12. The machine vision system according to claim 11, which is employed in a semiconductor-based processing system.
MXPA04000167A 2001-07-06 2002-07-03 Imaging system and methodology employing reciprocal space optical design. MXPA04000167A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/900,218 US6664528B1 (en) 2001-07-06 2001-07-06 Imaging system and methodology employing reciprocal space optical design
US10/166,137 US6884983B2 (en) 2002-06-10 2002-06-10 Imaging system for examining biological material
US10/189,326 US7132636B1 (en) 2001-07-06 2002-07-02 Imaging system and methodology employing reciprocal space optical design
PCT/US2002/021392 WO2003005446A1 (en) 2001-07-06 2002-07-03 Imaging system and methodology employing reciprocal space optical design

Publications (1)

Publication Number Publication Date
MXPA04000167A true MXPA04000167A (en) 2005-06-06

Family

ID=31998992

Family Applications (1)

Application Number Title Priority Date Filing Date
MXPA04000167A MXPA04000167A (en) 2001-07-06 2002-07-03 Imaging system and methodology employing reciprocal space optical design.

Country Status (10)

Country Link
EP (1) EP1405346A4 (en)
JP (2) JP2005534946A (en)
KR (1) KR100941062B1 (en)
CN (1) CN100477734C (en)
AU (1) AU2002322410B8 (en)
BR (1) BR0210852A (en)
CA (1) CA2453049C (en)
IL (2) IL159700A0 (en)
MX (1) MXPA04000167A (en)
NZ (1) NZ530988A (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58102538A (en) * 1981-12-14 1983-06-18 Fujitsu Ltd Manufacture of semiconductor device
CA2388312C (en) 1999-11-05 2010-01-12 Raisio Benecol Oy Edible fat blends
US7630065B2 (en) 2005-02-21 2009-12-08 Olympus Corporation Low-light specimen image pickup unit and low-light specimen image pickup apparatus
JP2009043139A (en) 2007-08-10 2009-02-26 Mitsubishi Electric Corp Position detecting device
US7782452B2 (en) * 2007-08-31 2010-08-24 Kla-Tencor Technologies Corp. Systems and method for simultaneously inspecting a specimen with two distinct channels
EP2219700B1 (en) * 2007-11-14 2024-03-20 Biosensors International Group, Ltd. Automated coating apparatus and method
JP2010117705A (en) * 2008-10-14 2010-05-27 Olympus Corp Microscope for virtual-slide creating system
EP2491366B1 (en) * 2009-10-20 2016-12-28 The Regents of The University of California Incoherent lensfree cell holography and microscopy on a chip
CN102053051A (en) * 2009-10-30 2011-05-11 西门子公司 Body fluid analysis system as well as image processing device and method for body fluid analysis
WO2012017431A1 (en) * 2010-08-05 2012-02-09 Orbotech Ltd. Lighting system
JP5784393B2 (en) * 2011-07-11 2015-09-24 オリンパス株式会社 Sample observation equipment
DE102011055945A1 (en) * 2011-12-01 2013-06-06 Leica Microsystems Cms Gmbh Method and device for examining a sample
CN102661715A (en) * 2012-06-08 2012-09-12 苏州富鑫林光电科技有限公司 CCD (charge coupled device) type clearance measurement system and method
JP7079244B2 (en) 2016-10-04 2022-06-01 ザ リージェンツ オブ ザ ユニバーシティ オブ カリフォルニア Multi-frequency harmonic acoustics for target identification and boundary detection
EP3320829A1 (en) * 2016-11-10 2018-05-16 E-Health Technical Solutions, S.L. System for integrally measuring clinical parameters of visual function
WO2018180249A1 (en) * 2017-03-28 2018-10-04 富士フイルム株式会社 Measurement support device, endoscopic system, and processor
JP6666519B2 (en) 2017-03-28 2020-03-13 富士フイルム株式会社 Measurement support device, endoscope system, and processor
KR101887523B1 (en) * 2017-04-05 2018-08-10 경북대학교 산학협력단 System for spectrum measuring of small area using microscope
KR101887527B1 (en) * 2017-04-05 2018-08-10 경북대학교 산학협력단 Apparatus for spectrum measuring of full-color hologram and Method thereof
JP2020525055A (en) 2017-06-29 2020-08-27 ソニー株式会社 Medical imaging system, method and computer program
US11380438B2 (en) 2017-09-27 2022-07-05 Honeywell International Inc. Respiration-vocalization data collection system for air quality determination
CN111433603B (en) * 2017-11-01 2022-08-02 加利福尼亚大学董事会 Imaging method and system for intraoperative surgical margin assessment
US20190200906A1 (en) * 2017-12-28 2019-07-04 Ethicon Llc Dual cmos array imaging
CN108197560B (en) * 2017-12-28 2022-06-07 努比亚技术有限公司 Face image recognition method, mobile terminal and computer-readable storage medium
CN111198192B (en) * 2018-11-20 2022-02-15 深圳中科飞测科技股份有限公司 Detection device and detection method
CN111988499B (en) * 2019-05-22 2022-03-15 印象认知(北京)科技有限公司 Imaging layer, imaging device, electronic apparatus, wave zone plate structure and photosensitive pixel
FR3098930B1 (en) * 2019-07-18 2023-04-28 Univ Versailles Saint Quentin En Yvelines DEVICE FOR OBSERVING A CELL OR A SET OF LIVING CELLS
CN110440853B (en) * 2019-07-24 2024-05-17 沈阳工程学院 Monitoring dust removing system
CN112782175A (en) * 2019-11-11 2021-05-11 深圳中科飞测科技股份有限公司 Detection equipment and detection method
US11391613B2 (en) 2020-02-14 2022-07-19 Honeywell International Inc. Fluid composition sensor device and method of using the same
US11835432B2 (en) 2020-10-26 2023-12-05 Honeywell International Inc. Fluid composition sensor device and method of using the same
CN113155755B (en) * 2021-03-31 2022-05-24 中国科学院长春光学精密机械与物理研究所 On-line calibration method for micro-lens array type imaging spectrometer
CN114184590A (en) * 2021-12-15 2022-03-15 复旦大学 Testing arrangement of atmospheric fine particles oxidation potential
CN115032871B (en) * 2022-07-06 2024-06-21 大爱全息(北京)科技有限公司 Holographic free diffraction multilayer image display method, device and system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4410804A (en) * 1981-07-13 1983-10-18 Honeywell Inc. Two dimensional image panel with range measurement capability
US4806774A (en) * 1987-06-08 1989-02-21 Insystems, Inc. Inspection system for array of microcircuit dies having redundant circuit patterns
JPH01154016A (en) * 1987-12-10 1989-06-16 Nikon Corp Microscope
JP3245882B2 (en) * 1990-10-24 2002-01-15 株式会社日立製作所 Pattern forming method and projection exposure apparatus
JPH0695001A (en) * 1992-09-11 1994-04-08 Nikon Corp Microscopic device
JPH0772377A (en) * 1993-06-14 1995-03-17 Nikon Corp Autofocusing device for microscope
JPH08160303A (en) * 1994-12-02 1996-06-21 Olympus Optical Co Ltd Object observing device
JP3123457B2 (en) * 1996-05-13 2001-01-09 株式会社ニコン microscope
US6016210A (en) * 1997-12-15 2000-01-18 Northrop Grumman Corporation Scatter noise reduction in holographic storage systems by speckle averaging
JP2003504745A (en) * 1999-07-09 2003-02-04 セラビジョン アクチボラゲット Microscope filter that automatically increases contrast
EP1096295A1 (en) * 1999-10-28 2001-05-02 Itt Manufacturing Enterprises, Inc. Apparatus and method for providing optical sensors with improved resolution
TWI240249B (en) * 2004-03-03 2005-09-21 Asustek Comp Inc Disc drive with tilt-preventing tray

Also Published As

Publication number Publication date
AU2002322410B2 (en) 2008-02-07
EP1405346A1 (en) 2004-04-07
IL159700A (en) 2010-12-30
NZ530988A (en) 2006-09-29
CA2453049A1 (en) 2003-01-16
CN100477734C (en) 2009-04-08
IL159700A0 (en) 2004-06-20
EP1405346A4 (en) 2008-11-05
AU2002322410B8 (en) 2008-05-01
CA2453049C (en) 2011-10-25
JP2009258746A (en) 2009-11-05
KR100941062B1 (en) 2010-02-05
JP2005534946A (en) 2005-11-17
BR0210852A (en) 2004-08-24
CN1550039A (en) 2004-11-24
KR20040031769A (en) 2004-04-13

Similar Documents

Publication Publication Date Title
CA2453049C (en) Imaging system and methodology employing reciprocal space optical design
US7692131B2 (en) Imaging system and methodology with projected pixels mapped to the diffraction limited spot
US7863552B2 (en) Digital images and related methodologies
US7338168B2 (en) Particle analyzing system and methodology
AU2002322410A1 (en) Imaging system and methodology employing reciprocal space optical design
US7248716B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design
ZA200400961B (en) Imaging system and methodology employing reciprocal space optical design.
US6998596B2 (en) Imaging system for examining biological material
US7109464B2 (en) Semiconductor imaging system and related methodology
US9830501B2 (en) High throughput partial wave spectroscopic microscopy and associated systems and methods
JP2012506060A (en) Automated scanning cytometry using chromatic aberration for multi-plane image acquisition.
US7439478B2 (en) Imaging system, methodology, and applications employing reciprocal space optical design having at least one pixel being scaled to about a size of a diffraction-limited spot defined by a microscopic optical system
US7132636B1 (en) Imaging system and methodology employing reciprocal space optical design
US8634067B2 (en) Method and apparatus for detecting microscopic objects
US6815659B2 (en) Optical system and method of making same
WO2003005446A1 (en) Imaging system and methodology employing reciprocal space optical design
US20170019575A1 (en) Optical Methods and Devices For Enhancing Image Contrast In the Presence of Bright Background
JP2022517107A (en) Equipment, Systems and Methods for Solid Immersion Meniscus Lenses
US10866398B2 (en) Method for observing a sample in two spectral bands, and at two magnifications

Legal Events

Date Code Title Description
FG Grant or registration