US20170357083A1 - Maskless imaging of dense samples using multi-height lensfree microscope - Google Patents
- Publication number
- US20170357083A1
- Authority
- US
- United States
- Prior art keywords
- sample
- image sensor
- image
- images
- adjusting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G02B21/365 — Control or image processing arrangements for digital or video microscopes
- G02B21/367 — Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
- G03H1/0005 — Adaptation of holography to specific applications
- G03H1/0443 — Digital holography, i.e. recording holograms with digital recording means
- G03H1/0866 — Digital holographic imaging, i.e. synthesizing holobjects from holograms
- G06T3/0068 — Geometric image transformation in the plane of the image for image registration, e.g. elastic snapping
- G06T3/14
- G03H2001/005 — Adaptation of holography to specific applications in microscopy, e.g. digital holographic microscope [DHM]
- G03H2001/0447 — In-line recording arrangement
- G03H2001/0454 — Arrangement for recovering hologram complex amplitude
- G03H2001/2655 — Time multiplexing, i.e. consecutive records wherein the period between records is pertinent per se
- G03H2210/12 — Phase modulating object, e.g. living cell
- G03H2210/55 — Having particular size, e.g. irresolvable by the eye
- G03H2240/56 — Resolution
Definitions
- The field of the invention generally relates to imaging systems and methods, and more particularly to imaging systems with particular application in the imaging and analysis of small particles such as cells, organelles, cellular particles and the like.
- The pixel size now starts to be a limiting factor for spatial resolution since the recorded holographic fringes are no longer magnified.
- The detection NA approaches ~1.
- The finite pixel size at the sensor chip can unfortunately record holographic oscillations corresponding to only an effective NA of ~0.1-0.2, which limits the spatial resolution to ~2 μm. While, in principle, a higher spatial density of pixels could be achieved by reducing the pixel size at the sensor to, e.g., ~1 μm, this poses obvious technological challenges for use over a large FOV.
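As a sanity check on the numbers above, the ~0.1-0.2 effective NA and the ~2 μm resolution limit are consistent with a standard λ/(2·NA) estimate. The wavelength below is an assumed illustrative value, not taken from the text:

```python
# Hypothetical numbers: the wavelength is assumed; the NA is the midpoint
# of the ~0.1-0.2 effective-NA range quoted above.
wavelength_um = 0.5   # assumed green illumination, in micrometers
na_eff = 0.125        # midpoint of the quoted effective-NA range

# Abbe-type resolution estimate: d = lambda / (2 * NA)
resolution_um = wavelength_um / (2 * na_eff)
print(resolution_um)  # -> 2.0, i.e. the ~2 um limit stated above
```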
- The microscope works based on partially-coherent lensfree digital in-line holography using multiple light sources (e.g., light-emitting diodes—LEDs) placed ~3-6 cm away from the sample plane such that at a given time only a single source illuminates the objects, projecting in-line holograms of the specimens onto a CMOS sensor-chip. Since the objects are placed very close to the sensor chip (e.g., ~1-2 mm), the entire active area of the sensor becomes the imaging field-of-view, and the fringe magnification is unity.
- These holographic diffraction signatures are unfortunately under-sampled due to the limited pixel size at the CMOS chip (e.g., ~2-3 μm).
- several lensfree holograms of the same static scene are recorded as different LEDs are turned on and off, which creates sub-pixel shifted holograms of the specimens.
- These sub-pixel shifted, under-sampled holograms can be digitally put together to synthesize an effective pixel size of, e.g., ~300-400 nm, which can now resolve/sample a much larger portion of the higher spatial frequency oscillations within the lensfree object hologram.
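The shift-and-add idea behind pixel super-resolution can be sketched as follows. This is a minimal illustration, not the patent's actual reconstruction code: the function name and the nearest-neighbor binning onto the fine grid are my own simplifications (practical implementations typically solve a regularized least-squares problem instead):

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor):
    """Naive shift-and-add pixel super-resolution (illustrative sketch).

    frames : list of (H, W) low-resolution intensity images
    shifts : list of (dy, dx) sub-pixel shifts in low-res pixel units
    factor : integer up-sampling factor of the synthesized grid
    """
    h, w = frames[0].shape
    hi_sum = np.zeros((h * factor, w * factor))
    hi_cnt = np.zeros_like(hi_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-res bin for each low-res sample, offset by its sub-pixel shift
        ry = np.round(np.arange(h) * factor + dy * factor).astype(int) % (h * factor)
        rx = np.round(np.arange(w) * factor + dx * factor).astype(int) % (w * factor)
        hi_sum[np.ix_(ry, rx)] += frame
        hi_cnt[np.ix_(ry, rx)] += 1
    # Average where samples landed; untouched bins stay zero
    return hi_sum / np.maximum(hi_cnt, 1)
```

For example, two 2×2 frames shifted by half a pixel, combined with `factor=2`, interleave onto a 4×4 grid whose effective pixel pitch is half the native one.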
- This super-resolved (SR) in-line hologram, however, still suffers from the twin-image artifact, which is common to all in-line hologram recording geometries.
- A twin-image elimination method requires as input estimates of the objects' locations within the imaging field-of-view.
- a threshold or a segmentation algorithm can be used to automatically estimate the objects' locations (creating the object support) for relatively sparse samples.
- For dense samples, this object support is difficult to estimate, which can create challenges in removing the twin-image artifact.
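The origin of the twin image can be made explicit with standard in-line holography algebra (not quoted from the text): writing the field at the sensor as a reference wave $r$ plus the scattered object wave $o$, the recorded intensity is

```latex
I = |r + o|^{2} = |r|^{2} + |o|^{2} + r^{*} o + r\, o^{*}
```

Back-propagating $I$ to the object plane focuses the $r^{*}o$ term into the object image, while the conjugate term $r\,o^{*}$ propagates in the opposite sense and appears as a defocused "twin" superimposed on it; suppressing that term is what requires the object-support estimate discussed above.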
- a new approach is needed for pixel super-resolution holographic microscopy.
- A new approach for pixel super-resolution holographic microscopy uses multiple (e.g., 2-5) lensfree intensity measurements, each captured at a different height (i.e., z 2 as seen in FIG. 1A ) from the detector array.
- Each lensfree super-resolved hologram is synthesized with a ~10-100 μm change in the relative height of the object with respect to the detector-chip surface (although other distances are contemplated), after which the super-resolved holograms are digitally registered and aligned to each other to take into account possible rotations and shifts.
- A method of imaging a sample includes illuminating a sample spaced apart from an image sensor at a first distance with an illumination source; obtaining an image of the sample from the image sensor with the sample spaced apart from the image sensor at the first distance; illuminating the sample spaced apart from the image sensor at one or more distances different from the first distance with the illumination source; obtaining an image of the sample from the image sensor with the sample spaced apart from the image sensor at the one or more distances different from the first distance; registering the images of the sample obtained at the one or more distances and the image of the sample obtained at the first distance; iteratively recovering lost phase information from the registered images; and reconstructing an amplitude or phase image of the sample based at least in part on the recovered lost phase information.
- a method of imaging a sample includes illuminating a sample spaced apart from an image sensor at a first distance with an illumination source at a plurality of different locations.
- Lower resolution image frames of the sample are obtained from the image sensor at a plurality of different locations with the sample spaced apart from the image sensor at the first distance.
- a higher resolution image of the sample at the first distance is recovered based at least in part on the plurality of lower resolution image frames obtained at the first distance.
- The sample is illuminated at one or more sample-to-sensor distances different from the first distance with an illumination source at a plurality of different locations.
- Lower resolution image frames of the sample are obtained from the image sensor at a plurality of different locations with the sample spaced apart from the image sensor at the one or more distances different from the first distance.
- a higher resolution image of the sample is recovered at the one or more distances based at least in part on the plurality of lower resolution image frames obtained at the one or more distances.
- the higher resolution images of the sample obtained at the one or more distances and the higher resolution image of the sample obtained at the first distance are registered with one another. Lost phase information from the registered higher resolution images is iteratively recovered. Amplitude or phase images of the sample are reconstructed based at least in part on the recovered lost phase information.
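The iterative recovery of the lost phase from amplitude measurements at several heights can be sketched as below. This is my own illustrative implementation using angular-spectrum propagation; the function names, the zero-phase initialization, and the dropping of evanescent components are assumptions, not details taken from the text:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field over a distance z (angular-spectrum method).
    Negative z propagates backward; evanescent components are discarded."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    fx2, fy2 = np.meshgrid(fx**2, fy**2)  # shapes match fft2 layout (rows=y)
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - fx2 - fy2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def multi_height_phase_recovery(amplitudes, heights, wavelength, dx, n_iter=20):
    """Cycle a field estimate up and down through the measurement planes,
    keeping the current phase but replacing the amplitude with the
    measured one at every plane."""
    field = amplitudes[0].astype(complex)  # zero-phase initial guess
    for _ in range(n_iter):
        for k in range(1, len(heights)):                   # forward pass
            field = angular_spectrum(field, wavelength, dx, heights[k] - heights[k - 1])
            field = amplitudes[k] * np.exp(1j * np.angle(field))
        for k in range(len(heights) - 2, -1, -1):          # backward pass
            field = angular_spectrum(field, wavelength, dx, heights[k] - heights[k + 1])
            field = amplitudes[k] * np.exp(1j * np.angle(field))
    return field  # complex field at the first measurement plane
```

After convergence, back-propagating the returned field by the first sample-to-sensor distance (again via `angular_spectrum` with a negative z) yields amplitude and phase images of the objects.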
- A system for imaging objects within a sample includes an image sensor; an illumination source configured to scan along a surface relative to the image sensor and illuminate the sample at a plurality of different locations; a sample interposed between the image sensor and the illumination source; means for adjusting an actual or effective distance between the sample and the image sensor; and at least one processor configured to reconstruct an image of the sample based on the images obtained from the illumination source at the plurality of different scan positions.
- A system for imaging a sample includes an image sensor; one or more illumination sources coupled to an array of optical waveguides, wherein each optical waveguide of the array terminates at a different spatial location in three-dimensional space; a sample holder interposed between the image sensor and the one or more illumination sources; and means for adjusting an actual or effective distance between the sample holder and the image sensor.
- FIG. 1A schematically illustrates a system for imaging densely distributed objects contained in a sample according to one embodiment.
- FIG. 1B illustrates a sample containing densely distributed objects disposed on a sample holder.
- FIG. 2A illustrates a moveable z-adjust stage according to one embodiment.
- FIG. 2B illustrates a moveable z-adjust stage according to another embodiment.
- FIGS. 3A and 3B illustrate a portable microscope for imaging densely distributed objects contained in a sample according to another embodiment.
- FIGS. 4A and 4B illustrate a method used to reconstruct phase and amplitude images of object(s) contained in a sample according to one embodiment.
- FIG. 4C illustrates a method used to reconstruct phase and amplitude images of object(s) contained in a sample according to another embodiment.
- FIG. 5A shows a full FOV (~24 mm²) low resolution (LR) lensfree hologram of a blood smear as captured by the image sensor.
- The dashed rectangle highlights a rather dense area.
- FIG. 5B illustrates a multi-height (5 different heights) PSR lensfree amplitude image of the rectangular region of FIG. 5A .
- FIG. 5C illustrates a 10× microscope objective comparison image.
- FIG. 5D illustrates a single height back propagated PSR amplitude image taken from the dashed rectangle in FIG. 5B .
- FIG. 5E illustrates multi-height (5 different heights) PSR lensfree amplitude image of the rectangular region of FIG. 5B .
- FIG. 5F illustrates a 20× objective lens (0.4 NA) microscope image of the rectangular region of FIG. 5B (and FIG. 5C ).
- FIG. 6B illustrates a single height back propagated PSR phase image.
- FIG. 6C illustrates multi-height based PSR lensfree amplitude image of the leftmost rectangle of FIG. 6A .
- FIG. 6D illustrates multi-height based PSR lensfree phase image of the leftmost rectangle of FIG. 6A .
- FIG. 6E illustrates a comparative 40× objective lens (0.65 NA) microscope image of the leftmost rectangle of FIG. 6A .
- FIG. 6F illustrates the single height back propagated PSR phase image of the leftmost rectangle of FIG. 6B .
- FIG. 6G illustrates the single height back propagated PSR amplitude image of the leftmost rectangle of FIG. 6B .
- FIG. 6H illustrates multi-height based PSR lensfree amplitude image of the rightmost rectangle of FIG. 6A .
- FIG. 6I illustrates multi-height based PSR lensfree phase image of the rightmost rectangle of FIG. 6A .
- FIG. 6J illustrates a comparative 40× objective lens (0.65 NA) microscope image of the rightmost rectangle of FIG. 6A .
- FIG. 6K illustrates the single height back propagated PSR phase image of the rightmost rectangle of FIG. 6B .
- FIG. 6L illustrates the single height back propagated PSR amplitude image of the rightmost rectangle of FIG. 6B .
- FIG. 7A illustrates a back propagated phase image of a Pap smear from one PSR lensfree hologram (1 height).
- FIG. 7B illustrates a multi-height (2 heights) based PSR lensfree phase image of the same Pap smear sample.
- FIG. 7C illustrates a multi-height (3 heights) based PSR lensfree phase image of the same Pap smear sample.
- FIG. 7D illustrates a multi-height (4 heights) based PSR lensfree phase image of the same Pap smear sample.
- FIG. 7E illustrates a multi-height (5 heights) based PSR lensfree phase image of the same Pap smear sample.
- FIG. 7F illustrates a comparative 10× objective lens (0.25 NA) microscope image of the same field of view.
- FIG. 8A illustrates a single height based back propagated PSR amplitude image of a UCLA pattern etched on a glass slide.
- the letters “U” and “C” have a spacing of about 1
- FIG. 8C illustrates a multi-height (3 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A . Thirty (30) iterations were used in the reconstruction.
- FIG. 8D illustrates a multi-height (4 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A . Twenty (20) iterations were used in the reconstruction.
- FIG. 8E illustrates a multi-height (5 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A . Fifteen (15) iterations were used in the reconstruction.
- FIG. 8F illustrates a microscope comparison image of the same UCLA pattern using a 40× objective lens (0.65 NA).
- FIG. 9A illustrates a full FOV (~30 mm²) holographic image of a SUREPREP Pap smear sample acquired using the field-portable microscope of FIGS. 3A and 3B .
- FIG. 9B illustrates the multi-height (5 heights) based PSR lensfree amplitude image of Zone 1 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used.
- FIG. 9C illustrates the multi-height (5 heights) based PSR lensfree amplitude image of Zone 2 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used.
- FIG. 9D illustrates the multi-height (5 heights) based PSR lensfree phase image of Zone 1 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used.
- FIG. 9E illustrates the multi-height (5 heights) based PSR lensfree phase image of Zone 2 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used.
- FIG. 9F illustrates a microscope image (40×, 0.65 NA) of Zone 1 from FIG. 9A .
- FIG. 9G illustrates a microscope image (40×, 0.65 NA) of Zone 2 from FIG. 9A .
- FIG. 10A illustrates a multi-height (5 heights) based PSR lensfree amplitude image of a THINPREP Pap smear sample.
- FIG. 10B illustrates a multi-height (5 heights) based PSR lensfree phase image of a THINPREP Pap smear sample.
- FIG. 10C illustrates a conventional 40× microscope image (0.65 NA) of the same THINPREP Pap smear sample of FIGS. 10A and 10B .
- FIG. 10D illustrates a multi-height (5 heights) based PSR lensfree amplitude image of a THINPREP Pap smear sample.
- FIG. 10E illustrates a multi-height (5 heights) based PSR lensfree phase image of a THINPREP Pap smear sample.
- FIG. 10F illustrates a conventional 40× microscope image (0.65 NA) of the same THINPREP Pap smear sample of FIGS. 10D and 10E .
- FIG. 10G illustrates a multi-height (5 heights) based PSR lensfree amplitude image of a THINPREP Pap smear sample.
- FIG. 10H illustrates a multi-height (5 heights) based PSR lensfree phase image of a THINPREP Pap smear sample.
- FIG. 10I illustrates a conventional 40× microscope image (0.65 NA) of the same THINPREP Pap smear sample of FIGS. 10G and 10H .
- FIG. 11A illustrates the single height back propagated PSR phase image of a SUREPATH Pap smear sample.
- FIG. 11B illustrates a multi-height (2 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample.
- FIG. 11C illustrates a multi-height (3 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample.
- FIG. 11D illustrates a multi-height (4 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample.
- FIG. 11E illustrates a multi-height (5 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample.
- FIG. 11F illustrates a conventional 10× microscope image (0.25 NA) of the same SUREPATH Pap smear sample.
- FIG. 1A illustrates a system 10 for imaging of an object 12 (or more preferably multiple objects 12 ) within a sample 14 (best seen in FIG. 1B ).
- the object 12 may include a cell or biological component or constituent (e.g., a cellular organelle or substructure).
- the object 12 may even include a multicellular organism or the like.
- the object 12 may be a particle or other object.
- The methods and systems described herein have particular applicability for samples 14 that contain objects 12 that are densely populated amongst or within the sample 14 .
- FIG. 1A illustrates objects 12 in the form of red blood cells (RBCs) to be imaged that are disposed some distance z 2 above an image sensor 16 .
- This distance z 2 is adjustable as illustrated by the Δz in the inset of FIG. 1A .
- The sample 14 containing one or more objects 12 is typically placed on an optically transparent sample holder 18 such as a glass or plastic slide, coverslip, or the like, as seen in FIG. 1B .
- The Δz may be changed by moving the sample holder 18 relative to the image sensor 16 or, alternatively, by moving the image sensor 16 relative to the sample holder 18 .
- Δz may be changed by moving both the sample holder 18 and the image sensor 16 .
- the sample holder 18 may be placed on a movable stage 19 that is able to adjust the z 2 distance relative to the image sensor 16 .
- the moveable stage 19 may move the image sensor 16 relative to a stationary sample holder 18 .
- the moveable stage 19 may be coupled to a nut or the like (not shown) that moves in the z direction in response to rotation of a shaft or screw (not shown) coupled to the nut.
- The movable stage 19 preferably moves in increments ranging from about 1 μm to about 1.0 cm, and more preferably between about 10 μm and about 100 μm.
- the moveable stage 19 may be actuated manually by a user using a knob, dial, or the like.
- the moveable stage 19 may be, in some embodiments, automatically controlled using a small actuator such as a motor, linear actuator, or the like.
- Multiple different-sized inserts (e.g., glass slides or the like) can be inserted between the sample holder 18 and the image sensor 16 to adjust the z 2 height.
- the actual distance between the image sensor 16 and the sample 14 is not changed. Rather, the effective distance is altered by, for example, changing the wavelength of the illumination source 20 as described below or changing the refractive index of the medium or media located between the image sensor 16 and the sample 14 .
- the surface of image sensor 16 may be in contact with or close proximity to the sample holder 18 .
- the image sensor 16 may include, for example, a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) device.
- the image sensor 16 may be monochromatic or color.
- The image sensor 16 generally has a small pixel size, less than 9.0 μm and, more particularly, smaller than 5.0 μm (e.g., 2.2 μm or smaller). Generally, image sensors 16 having a smaller pixel size will produce higher resolutions.
- Sub-pixel resolution can be obtained by capturing and processing multiple lower-resolution holograms that are spatially shifted with respect to each other by sub-pixel pitch distances.
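The shifts between such frames must be known or estimated. One common approach (an illustrative sketch, not necessarily the method used here) is phase/cross-correlation, shown below at integer-pixel precision; sub-pixel refinement would typically follow by upsampling or fitting around the correlation peak:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the (row, col) cyclic shift of `img` relative to `ref`
    by locating the peak of their cross-correlation (computed via FFT)."""
    xc = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref)))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    # Map peaks in the upper half of each axis to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))
```

For example, a frame that is a copy of the reference cyclically rolled by (2, 3) pixels yields an estimated shift of (2, 3).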
- the system 10 includes an illumination source 20 that is configured to illuminate a first side (top side as seen in FIG. 1A ) of the sample holder 18 .
- the illumination source 20 is preferably a spatially coherent or a partially coherent light source but may also include an incoherent light source.
- Light-emitting diodes (LEDs) are one example of an illumination source 20 . LEDs are relatively inexpensive, durable, and have generally low power requirements. Of course, other light sources may also be used, such as a Xenon lamp with a filter. A light bulb is also an option as the illumination source 20 .
- a coherent beam of light such as a laser may also be used (e.g., laser diode).
- The illumination source 20 preferably has a spectral bandwidth between about 0.1 nm and about 100 nm, although the spectral bandwidth may be even smaller or larger. Further, the illumination source 20 may include at least partially coherent light having a spatial coherence diameter between about 0.1 and 10,000 μm.
- the illumination source 20 may be coupled to an optical fiber as seen in FIG. 1A or another optical waveguide. If the illumination source 20 is a lamp or light bulb, it may be used in connection with an aperture (not shown) or multiple apertures in the case of an array which acts as a spatial filter that is interposed between the illumination source 20 and the sample.
- the term optical waveguide as used herein refers to optical fibers, fiber-optic cables, integrated chip-scale waveguides, an array of apertures and the like. With respect to the optical fiber, the fiber includes an inner core with a higher refractive index than the outer surface so that light is guided therein. The optical fiber itself operates as a spatial filter.
- The core of the optical fiber may have a diameter within the range of about 50 μm to about 100 μm.
- the distal end of the fiber optic cable illumination source 20 is located at a distance z 1 from the sample holder 18 .
- the imaging plane of the image sensor 16 is located at a distance z 2 from the sample holder 18 .
- z 2 ≪ z 1 .
- The distance z 1 may be on the order of around 1 cm to around 10 cm. In other embodiments, the range may be narrower, for example, between around 5 cm and around 10 cm.
- the distance z 2 may be on the order of around 0.05 mm to 2 cm, however, in other embodiments this distance z 2 may be between around 1 mm to 2 mm.
- The z 2 distance is adjustable in increments ranging from about 1 μm to about 1.0 cm, although a larger range, such as between 0.1 μm and about 10.0 cm, is also contemplated.
- the incremental z 2 adjustment is within the range of about 10 μm to about 100 μm. The particular amount of the increase or decrease does not need to be known in advance.
- the propagation distance z 1 is such that it allows for spatial coherence to develop at the plane of the object(s) 12 , and light scattered by the object(s) 12 interferes with background light to form a lensfree in-line hologram on the image sensor 16 .
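- Digital propagation between the hologram plane and the object plane underlies every reconstruction step described below. The following is a minimal angular-spectrum sketch of such a propagator, not the actual implementation used in the system; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def propagate(field, wavelength, dz, dx):
    """Propagate a complex optical field by a distance dz (meters) using
    the angular spectrum method; dx is the pixel pitch of the sampling
    grid. A negative dz performs back propagation."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)            # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Keep only propagating plane-wave components; suppress evanescent ones.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Because the transfer function is unitary over the propagating band, propagating by dz and then by −dz recovers the original field, which is the property the iterative phase recovery described below relies on.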
- the system 10 includes a computer 30 such as a laptop, desktop, tablet, mobile communication device, personal digital assistant (PDA) or the like that is operatively connected to the system 10 such that lower resolution images (e.g., lower resolution or raw image frames) are transferred from the image sensor 16 to the computer 30 for data acquisition and image processing.
- the computer 30 includes one or more processors 32 that, as described herein in more detail, runs or executes software that takes multiple, sub-pixel (low resolution) images taken at different scan positions (e.g., x and y positions as seen in inset of FIG. 1A ) and creates a single, high resolution projection hologram image of the objects 12 .
- the software creates additional high resolution projection hologram images of the objects 12 at each different z 2 distance.
- the multiple, high resolution images obtained at different heights are registered with respect to one another using the software.
- the software also digitally reconstructs complex projection images of the objects 12 through an iterative phase recovery process that rapidly merges all the captured holographic information to recover the lost optical phase of each lensfree hologram without the need for any spatial masking, filtering, or prior assumptions regarding the samples. After a number of iterations (typically between 1 and 75), the phase of each lensfree hologram (captured at different heights) is recovered, and one of the pixel super-resolved holograms is back propagated to the object plane to create phase and amplitude images of the objects 12 .
- the reconstructed images can be displayed to the user on, for example, a display 34 or the like.
- the user may, for example, interface with the computer 30 via an input device 36 such as a keyboard or mouse to select different imaging planes.
- FIG. 1A illustrates that in order to generate super-resolved images, a plurality of different lower resolution images are taken as the illumination source 20 is moved in small increments generally in the x and y directions.
- the x and y directions are generally in a plane parallel with the surface of the image sensor 16 .
- the illumination source 20 may be moved along a surface that may be three-dimensional (e.g., a sphere or other 3D surface in the x, y, and z dimensions).
- the surface may be planar or three-dimensional.
- the illumination source 20 has the ability to move in the x and y directions as indicated by the arrows x and y in the inset of FIG. 1A .
- FIG. 1A illustrates a moveable stage 40 that is able to move the illumination source 20 in small displacements in both the x and y directions.
- the moveable stage 40 can move in sub-micron increments thereby permitting images to be taken of the objects 12 at slight x and y displacements.
- the moveable stage 40 may be controlled in an automated (or even manual) manner by the computer 30 or a separate dedicated controller.
- the moveable stage 40 may move in three dimensions (x, y, and z or angled relative to image sensor 16 ), thereby permitting images to be taken of objects 12 at slight x, y, and z/angled displacements.
- a system 50 uses a plurality of spaced apart illumination sources that can be selectively actuated to achieve the same result without having to physically move the illumination source 20 or image sensor 16 .
- the illumination source 20 is able to make relatively small displacement jogs (e.g., less than about 1 μm).
- the small discrete shifts parallel to the image sensor 16 are used to generate a single, high resolution image (e.g., pixel super-resolution).
- a system 50 uses a lens-less, on-chip compact microscope 52 that can achieve <1 μm resolution over a wide field-of-view of ~30 mm 2 .
- This compact lens-less, on-chip microscope 52 weighs ~122 grams with dimensions of 4 cm×4 cm×15 cm and is based on partially-coherent digital in-line holography.
- the microscope 52 includes a base 54 that includes an image sensor 56 that may take the form of a CMOS or CCD chip (e.g., CMOS sensor-chip Aptina, MT9J003, 1.67 μm pixel size).
- the base 54 includes a sample tray 58 that is moveable into and out of the base 54 .
- the sample tray 58 is moveable out of the base 54 to load a slide, slip or other sample holder 18 into the same.
- the sample tray 58 can then be closed, whereby the slide, slip, or other sample holder 18 (as seen in FIGS. 1A and 1B ) containing the sample 14 is placed atop the image sensor 56 .
- the microscope 52 also includes an interface 60 ( FIG. 3A ) that can function both for power as well as data transmission.
- the interface 60 may include a standard USB interface.
- the USB interface 60 can provide power to both the image sensor 56 as well as the illumination sources discussed below.
- the microscope 52 includes an elongate portion 62 extending from the base 54 .
- the elongate portion 62 includes a stand-off 64 that includes a hollow, interior portion through which light passes toward the sample positioned above the image sensor 56 .
- the stand-off 64 may be a tubular member as illustrated in FIGS. 3A and 3B .
- the stand-off 64 aids in providing the separation distance z 1 ( FIG. 1A ) between the sample 14 and the illumination source 20 .
- a housing 66 forms part of the elongate portion 62 and includes therein a plurality of illumination sources 70 .
- the illumination sources 70 may include light emitting diodes (LEDs), laser diodes, or the like.
- FIG. 3B illustrates an illumination source 70 that includes an array of LEDs.
- a processor 72 is operatively coupled directly or indirectly to the illumination sources 70 to selectively actuate individual illumination sources 70 .
- Each illumination source 70 is coupled to an optical fiber 74 or other waveguide that terminates into an illumination array 76 that provides for different illumination locations (i.e., different x and/or y locations).
- a series of fibers 74 that are stationary yet placed at different locations relative to the image sensor 56 can be used to effectively provide the sub-pixel shift.
- the individual fibers 74 may be selected digitally or mechanically.
- a processor may select individual light sources coupled to selected fibers 74 to provide digital switching functionality.
- Adjacent optical fibers 74 or waveguides are separated from one another by a distance within the range of about 0.001 mm to about 500 mm.
- the multiple fiber-optic waveguides 74 are butt-coupled to light emitting diodes 70 , which are controlled by a low-cost micro-controller 72 (Atmel ATmega8515) to sequentially illuminate the sample.
- twenty four (24) LEDs are used (OSRAM TOPLED, SuperRed, 633 nm).
- the resulting lensfree holograms are then captured by a digital image sensor 56 and are rapidly processed using a pixel super-resolution algorithm to generate much higher resolution holographic images of the objects 12 .
- the base 54 of the microscope 52 includes a z-shift stage 78 that is used to manually adjust the distance z 2 between the sample and the image sensor 56 .
- the z-shift stage 78 includes a dial or knob 80 that is manually moved to adjust the distance z 2 by a discrete distance.
- the dial or knob 80 translates lateral or rotational movement of the same into vertical movement of a stage element that is coupled to the image sensor 56 .
- this distance z 2 is adjusted in small increments ranging from about 10 μm to about 100 μm.
- the exact amount of incremental increase or decrease in z 2 does not need to be known in advance and may be random. This parameter is digitally determined after image acquisition using an autofocus algorithm.
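- The autofocus step can be illustrated with the variance-of-Laplacian focus measure of the kind described in the Pech-Pacheco reference cited below. In this sketch, the hologram is assumed to have already been back propagated to each candidate z 2 by a separate propagation routine, and the sharpest resulting amplitude image is selected; the function names are illustrative, not from the patent.

```python
import numpy as np

def lapv(img):
    """Variance-of-Laplacian sharpness score (higher = sharper),
    computed with the standard 5-point discrete Laplacian stencil."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

def autofocus(refocused_amplitudes, z_candidates):
    """Return the candidate z2 whose refocused amplitude image scores
    highest on the focus measure."""
    scores = [lapv(a) for a in refocused_amplitudes]
    return z_candidates[int(np.argmax(scores))]
```

Scanning z 2 candidates in, say, 10 μm steps and keeping the sharpest plane is what makes the exact mechanical increment irrelevant at acquisition time.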
- Movement of the dial or knob 80 moves the image sensor 56 relative to a stationary sample holder (e.g., sample holder 18 of FIGS. 1A and 1B ) that is held within the sample tray 58 .
- the microscope 52 includes an optional lock 82 that is used to lock the position of the sample 14 /sample holder 18 .
- FIGS. 4A and 4B illustrate a method used to reconstruct images of the object(s) 12 from the plurality of lower resolution images taken at different positions.
- a plurality of lower resolution images are obtained of the sample 14 containing the object(s) 12 while the illumination source 20 and/or the image sensor 16 are moved relative to one another at a plurality of different locations (e.g., x, y locations) to create the sub-pixel image shifts.
- the number of lower resolution images may vary but generally includes between about 2 and 250 images.
- the sample 14 is disposed from the image sensor 16 at a first distance (d 1 ).
- a pixel super-resolved (PSR) hologram is synthesized based upon the plurality of lower resolution images obtained in operation 1000 .
- the details of digitally converting a plurality of lower resolution images into a single, higher resolution pixel SR image may be found in Bishara et al., Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution, Optics Express 18:11181-11191 (2010), which is incorporated by reference.
- This pixel super-resolution step takes lower resolution holographic shadows of the object(s) 12 (e.g., captured at ~10 million pixels each in the case of the microscope 52 of FIGS. 3A and 3B ) and then creates a higher resolution lensfree hologram that now contains >300 million pixels over the same 30 mm 2 field-of-view with an effective pixel size of ~300 nm.
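- A toy version of the shift-and-add idea behind pixel super-resolution can be sketched as follows. The actual algorithm of Bishara et al. is an iterative optimization; this naive sketch, an illustrative assumption rather than the patented method, simply bins each low-resolution frame onto a finer grid according to its (assumed known) sub-pixel shift.

```python
import numpy as np

def shift_and_add_sr(low_res_frames, shifts, factor):
    """Naive shift-and-add pixel super-resolution.

    low_res_frames: list of 2D arrays captured with known sub-pixel
    shifts; each shift is a (dy, dx) offset in high-res pixel units.
    factor: grid upsampling factor (e.g. 4 would take an effective
    2.2 um pixel toward ~0.55 um)."""
    h, w = low_res_frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        # Each low-res sample lands on the fine grid at its shifted spot.
        ys = np.arange(h) * factor + dy
        xs = np.arange(w) * factor + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1  # leave never-sampled fine-grid pixels at zero
    return acc / cnt
```

When the shifts tile the fine grid completely, this recovers the high-resolution sampling exactly; in practice the shifts are estimated from the raw frames and a regularized reconstruction is used instead.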
- the distance between the sample 14 and the image sensor 16 is adjusted to a different distance (d n ).
- a plurality of lower resolution images are obtained of the sample 14 containing the object(s) 12 while the illumination source 20 and/or the image sensor 16 are moved relative to one another at a plurality of different locations (e.g., x, y locations) to create the sub-pixel image shifts.
- the plurality of lower resolution images are obtained while the sample 14 and the image sensor 16 are located at the new or different distance (d n ).
- a pixel super-resolved hologram (at the different distance (d 2 )) is synthesized based upon the plurality of lower resolution images obtained in operation 1200 .
- The process is repeated for different sample-to-sensor distances. Generally, the process repeats such that a pixel super-resolved hologram is created at between 2-20 different distances although this number may vary.
- the plurality of pixel super-resolved holograms obtained at the different heights are then registered with respect to each other as seen in operation 1600 .
- the subsequent iterative phase recovery requires that these pixel super-resolved holograms are accurately registered to each other.
- lateral translation and rotation of the objects 12 between holograms of different heights are unavoidable.
- three control points from three different corners of the image are selected in one of the holograms (which is deemed the reference hologram).
- One preferred control point could be a small isolated dust particle at a corner since its hologram is circularly symmetric.
- a special alignment marker(s) can also be placed at the corners of the sample holder/substrate. Therefore, normalized correlations between lensfree holograms can be used to find the matching points in each image captured at a different height. After selection of the control points, a small area (e.g., ~30×30 μm) around each control point is cropped and digitally interpolated (~4-6 times) to serve as a normalized correlation template. Furthermore, for accurately finding the coordinate shift of each control point among M images, the lensfree holographic images have to be positioned at the same z 2 -distance.
- the difference in the z 2 -distance between lensfree holograms acquired at different heights is evaluated by an auto-focus algorithm, such as that disclosed in J. L. Pech-Pacheco et al., “Diatom Autofocusing in Brightfield Microscopy: a Comparative Study,” in Pattern Recognition, International Conference On (IEEE Computer Society, 2000), Vol. 3, p. 3318, incorporated herein by reference, which permits one to digitally propagate the selected correlation templates to the same z 2 -distance, where normalized correlations are calculated to find the coordinate shifts between the control points in each image.
- An affine transformation is used to register the super-resolved holograms of different heights to the reference hologram.
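- The control-point matching and affine registration just described can be sketched as follows. This is an illustrative brute-force version (the function names are assumptions): normalized cross-correlation locates each control-point template in the target hologram, and a least-squares fit over the matched points yields the affine transform. A production implementation would use FFT-based correlation rather than explicit loops.

```python
import numpy as np

def ncc_locate(template, search):
    """Find the (row, col) where `template` best matches inside `search`
    by exhaustive normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            win = search[i:i + th, j:j + tw]
            s = win.std()
            if s == 0:
                continue  # flat region: correlation undefined
            score = np.mean(t * (win - win.mean()) / s)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

def affine_from_points(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst
    points (three or more (x, y) pairs)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T  # apply as A @ [x, y, 1]
```

With three well-separated control points, the fitted transform captures the lateral translation and rotation between holograms of different heights.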
- operations 1700 , 1800 , 1900 , and 2000 illustrate the iterative phase recovery process that is used to recover the lost optical phase. Additional details regarding the iterative phase recovery process may be found in L. J. Allen and M. P. Oxley, Optics Communications, 2001, 199, 65-75, which is incorporated herein by reference.
- the square roots of these resulting M registered holograms are then used as amplitude constraints in the iterative phase recovery algorithm comprising steps 1700 through 2000 .
- the initial phase is assumed to be zero, after which the iterative phase recovery algorithm uses the free space propagation function to digitally propagate back and forth among these multiple heights.
- at each plane, the amplitude constraint (i.e., the measurement) is enforced, while the phase is kept from the previous digital propagation step.
- a zero-phase is assigned to the object intensity measurement.
- Intensity measurement # 1 (step 1700 ) is forward propagated (with zero initial phase) to the plane of intensity measurement # 2 (step 1800 ). Then, the amplitude constraint in measurement # 2 (step 1800 ) is enforced while the calculated phase resulting from forward propagation remains unchanged. The resulting complex field is then forward propagated to the plane of intensity measurement # 3 (step 1900 ), where once again the amplitude constraint in measurement # 3 is enforced while the calculated phase resulting from forward propagation remains unchanged. This process continues until reaching the plane of intensity measurement #M (step 2000 ).
- The complex field of plane #M (step 2000 ) is back propagated to the plane of intensity measurement #M−1. Then, the amplitude constraint in measurement #M−1 is enforced while the resulting phase remains unchanged. The same iteration continues until we reach the plane of intensity measurement # 1 (step 1700 ). When one complete iteration is achieved (by reaching back to the plane of intensity measurement # 1 ), the complex field that is derived in the last step will serve as the input to the next iteration. Typically, between 1-1,000 iterations, and more typically between 1-70 iterations, are required for satisfactory results.
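- The forward and backward sweeps of steps 1700 through 2000 can be sketched as follows. This is a simplified illustration under idealized assumptions (known z 2 values, monochromatic plane-wave illumination, registered same-size holograms), not the patented implementation; a basic angular-spectrum propagator is included so the sketch is self-contained.

```python
import numpy as np

def propagate(field, wavelength, dz, dx):
    """Angular spectrum propagation by dz (negative dz = back propagation)."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz) * (arg > 0))

def multi_height_phase_recovery(amplitudes, z2s, wavelength, dx, n_iter=20):
    """Recover phase from M amplitude measurements at heights z2s.

    Each sweep enforces the measured amplitude at every plane while
    keeping the phase produced by the previous propagation step."""
    field = amplitudes[0].astype(complex)          # zero initial phase
    for _ in range(n_iter):
        for k in range(1, len(z2s)):               # forward sweep
            field = propagate(field, wavelength, z2s[k] - z2s[k - 1], dx)
            field = amplitudes[k] * np.exp(1j * np.angle(field))
        for k in range(len(z2s) - 2, -1, -1):      # backward sweep
            field = propagate(field, wavelength, z2s[k] - z2s[k + 1], dx)
            field = amplitudes[k] * np.exp(1j * np.angle(field))
    # Back propagate the converged field to the object plane (z = 0).
    return propagate(field, wavelength, -z2s[0], dx)
```

The absolute value and argument of the returned complex field correspond to the amplitude and phase images retrieved at the object plane.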
- the acquired complex field of any one of the measurement planes is selected and is back propagated to the object plane to retrieve both phase image 2200 and amplitude image 2300 of the object(s) 12 within the sample 14 .
- FIGS. 4A and 4B illustrate a preferred method to reconstruct images of the object(s) 12 from the plurality of lower resolution images taken at different positions.
- the method can be employed without recovering higher resolution pixel super-resolved holograms or images.
- a plurality of lower resolution images are obtained at the same illumination positions (e.g., no x or y shifting) at different sample-to-sensor distances (d n ).
- FIG. 4C illustrates a preferred method according to this embodiment.
- a lower resolution image at a first sample-to-sensor distance (d 1 ) is obtained.
- the sample-to-sensor distance (d n ) is then adjusted as seen by operation 3100 by any of the methods described herein.
- a lower resolution image at the new sample-to-sensor distance is then obtained as seen in operation 3200 .
- the adjustment of the sample-to-sensor distance and image acquisition takes place for a number of cycles. Generally, the process repeats at between 2-20 different distances although this number may vary.
- the plurality of lower resolution images obtained at the different sample-to-sensor distances are then registered with one another as seen in operation 3400 .
- Operations 3500 , 3600 , and 3700 illustrate the iterative phase recovery process that is used to recover the lost optical phase.
- Back propagation is conducted as in the prior embodiment and is seen by respective arrows A and B. These operations may be the same as those described above with respect to the method illustrated in FIG. 4B .
- the acquired complex field of any one of the measurement planes is selected and is back propagated to the object plane to retrieve both phase image 3900 and amplitude image 4000 of the object(s) 12 within the sample 14 .
- the embodiment of FIG. 4C may be simpler, faster, and more cost-effective in some instances.
- a device similar to the setup disclosed in FIG. 1A was tested with both blood smear samples and Papanicolaou smear (Pap smear) samples.
- the set-up is composed of a partially coherent light source (~5 nm bandwidth centered at 550 nm), glass cover slips with different thicknesses (to adjust z 2 ) and a CMOS detector-array.
- Blood smear samples were prepared using whole blood (UCLA Blood Bank, USA), where the samples were diluted (2×) with RPMI (Thermo Scientific, Catalog #: SH3002701) at room temperature. Then 5 μL of the diluted blood was dropped on a type-one glass cover slip (Fisher Scientific Catalog #12-548-A).
- the blood droplet was then smeared by a second cover slip by applying a constant force.
- the sample was then left to dry in air for ~10 minutes before being fixed and stained by HEMA 3 Wright-Giemsa staining kit (Fisher Diagnostics).
- the Papanicolaou smear was prepared using a standard SUREPATH (BD Inc.) procedure.
- the z 2 distance was controlled by placing glass cover slips with different thicknesses between the sample and the image sensor (e.g., different inserts).
- the thicknesses of the glass cover slips varied between 50 μm and 250 μm, hence the corresponding z 2 distances varied between ~0.7 mm and ~1 mm.
- Each lensfree intensity measurement is sampled by the CMOS image sensor with 2.2 μm pixel size. This relatively large pixel size can cause undersampling issues; therefore, the PSR method was applied in order to effectively decrease the detector pixel size.
- FIGS. 5A-5F illustrate the benefits of using the above outlined multi-height lensfree imaging approach for a blood smear sample.
- FIG. 5A shows a full FOV (~24 mm 2 ) low resolution (LR) lensfree hologram as captured by the CMOS sensor. The dashed rectangle focuses on an area that is rather dense; however, the blood cells are still organized as a mono-layer, suitable for imaging. The reconstruction results of this dense blood smear using five different z 2 -distances (711 μm, 767 μm, 821 μm, 876 μm and 946 μm) are shown in FIG. 5B .
- FIGS. 5D, 5E, and 5F provide images of zoomed areas (taken from the dashed rectangle in FIG. 5B ) of single height back propagation image, multi-height (5 heights) reconstruction image, and a 20× microscope objective comparison image, respectively.
- the back propagated single height image as seen in FIG. 5D has lower contrast, and it is hard to evaluate the locations of the RBCs for spatial masking purposes. Therefore, support-based phase recovery would not be effective in this case.
- the multi-height amplitude image, as seen in FIG. 5E , shows significantly improved contrast over the single height result.
- FIGS. 6A, 6B, 6C, 6D, 6F, 6G, 6H, 6I, 6K, and 6L illustrate imaged Pap smears (based on SUREPATH automated slide preparation) using the same multi-height imaging set-up.
- FIGS. 6E and 6J illustrate comparative 40× objective lens (0.65 NA) microscope images. Because of the density of the specimen, the reconstruction of this image is a challenging task for any phase recovery method.
- FIG. 6A shows the multi-height phase image, which is recovered using lensfree measurements from five different heights (754 μm, 769 μm, 857 μm, 906 μm and 996 μm). The z 2 -distances were automatically determined using an auto-focus algorithm.
- FIGS. 6C and 6H show zoomed images of the same Pap smear sample for the amplitude channel.
- FIGS. 6D and 6I show zoomed images of the same Pap smear sample for the phase channel.
- the cell morphology is clear, and the cell boundaries can clearly be seen and separated from the background. Moreover, minor overlaps among the cells do not constitute a limitation for this method.
- FIG. 6B depicts a single height back propagated phase image corresponding to one of the z 2 measurements (the FOV is the same as in FIG. 6A ).
- FIGS. 6F and 6K show zoomed images of the same Pap smear sample for the phase channel calculated using back propagation of a single height image.
- FIGS. 6G and 6L show zoomed images of the same Pap smear sample for the amplitude channel calculated using back propagation of a single height image.
- the single height back propagated images show significant spatial distortion due to the density of the cells.
- FIGS. 6E and 6J also provide 40 ⁇ objective lens (0.65 NA) microscope comparison images for the same zoomed regions, clearly providing a good match to the multi-height reconstruction results shown in FIGS. 6D, 6I and 6C, 6H .
- This complementary set of information that is conveyed by the amplitude and phase images might facilitate detection of abnormal cells within a Pap test that are characterized for instance by a high nuclear-cytoplasmic ratio.
- FIG. 7A shows a single height back propagated phase image.
- FIG. 7B shows the recovered phase image after 72 iterations.
- the recovered phase image of FIG. 7B is significantly better than the single height phase image of FIG. 7A .
- a further improvement in image quality is achieved by adding a third intensity measurement to the multi-height phase recovery process as seen in FIG. 7C .
- FIG. 7C shows a microscope comparison image (10×, 0.25 NA) for the same region of interest.
- a spatial resolution of ~490 nm was achieved in these reconstructions.
- the number of Fourier transform pairs was kept constant in each case, as a result of which each reconstruction used a different number of iterations (60, 30, 20 and 15 iterations, respectively).
- the letters ‘U’ and ‘C’ are clearly separated in all of these images, which is an indication of our success in cross registration of different height super-resolved holograms to each other, so that spatial smearing effects due to possible inconsistencies among different z 2 lensless holograms are minimized.
- a microscope comparison image of the same “UCLA” pattern can also be seen in FIG. 8F , acquired using a 40× objective lens (0.65 NA).
- The performance of the portable microscope 52 of FIGS. 3A and 3B was experimentally validated by imaging Papanicolaou smears (Pap smears). Two FDA-approved liquid-based Pap smear preparation techniques (SUREPATH and THINPREP) were used to test the imaging performance of the portable microscope 52 .
- FIG. 9A illustrates a full FOV (~30 mm 2 ) hologram of a Pap smear sample prepared by the SUREPATH technique. In this Pap smear, the cells form a dense and confluent two-dimensional layer on a glass slide. For such a dense sample, estimating an object support for spatial masking is impractical, making the multi-height approach particularly useful.
- FIGS. 9B and 9C show the reconstructed amplitude images of Zones 1 and 2 respectively.
- FIGS. 9D and 9E show the reconstructed phase images of Zones 1 and 2 respectively.
- FIGS. 9F and 9G illustrate microscope images (40× objective, 0.65 NA) of the same Zone 1 and Zone 2 regions, respectively.
- NC ratio: Nuclear-Cytoplasmic ratio.
- The imaging results for a THINPREP Pap smear sample are summarized in FIGS. 10A, 10B, 10D, 10E, 10G, and 10H , where phase and amplitude images of the cells are reconstructed without the use of any spatial masks or filtering operations that would normally be required for a single height lensfree measurement.
- Corresponding microscope images (10×, 0.25 NA) are shown in FIGS. 10C, 10F, and 10I .
- FIGS. 10A, 10B, 10D, 10E, 10G, and 10H were reconstructed using five different heights (936 μm, 993 μm, 1036 μm, 1064 μm and 1080 μm, each of which was automatically determined using an auto-focus algorithm) and ten iterations were used as part of the phase recovery process.
- the cells and their inner morphology were successfully reconstructed in FIGS. 10A, 10B, 10D, 10E, 10G, and 10H and are in good agreement with the microscope comparison images provided in FIGS. 10C, 10F, and 10I .
- the phase images provide more information/contrast regarding the cell boundaries while the amplitude images provide more information regarding the cells' inner structures, including nuclei.
- FIGS. 11A-11E provide reconstructed phase images of Pap smear samples prepared using the SUREPATH technique at one, two, three, four, and five heights, respectively. The heights were located 933 μm, 1004 μm, 1041 μm, 1065 μm and 1126 μm from the detector plane. To provide a fair comparison among the reconstructions, the same number of Fourier transform pairs was used (i.e., 96 pairs) in each case. It is apparent that the back-propagated image ( FIG. 11A ) corresponding to a single height hologram does not provide useful information.
- A significant image quality improvement is demonstrated in FIG. 11B when two heights are used for phase recovery. A further improvement in image quality is also noticeable when using three heights instead of two (from FIG. 11B to 11C ). Adding the fourth and fifth heights yields further, though incremental, improvements in image quality.
Description
- This Application claims priority to U.S. Provisional Patent Application No. 61/556,697, filed on Nov. 7, 2011, which is hereby incorporated by reference in its entirety. Priority is claimed pursuant to 35 U.S.C. §119.
- This invention was made with Government support under Grant No. OD006427, awarded by the National Institutes of Health; Grant Nos. 0754880 & 0930501 awarded by the National Science Foundation; Grant No. N00014-09-1-0858 awarded by the United States Navy, Office of Naval Research. The Government has certain rights in this invention.
- The field of the invention generally relates to imaging systems and methods and more particularly imaging systems that have particular application in the imaging and analysis of small particles such as cells, organelles, cellular particles and the like.
- Digital holography has been experiencing a rapid growth over the last several years, together with the availability of cheaper and better digital components as well as more robust and faster reconstruction algorithms, to provide new microscopy modalities that improve various aspects of conventional optical microscopes. In an effort to achieve wide-field on-chip microscopy, the use of unit fringe magnification (F˜1) in lensfree in-line digital holography to claim an FOV of ˜24 mm2 with a spatial resolution of <2 μm and an NA of ˜0.1-0.2 has been demonstrated. See Oh C. et al. On-chip differential interference contrast microscopy using lens-less digital holography. Opt Express.; 18(5):4717-4726 (2010) and Isikman et al., Lensfree Cell Holography On a Chip: From Holographic Cell Signatures to Microscopic Reconstruction, Proceedings of IEEE Photonics Society Annual Fall Meeting, pp. 404-405 (2009), both of which are incorporated herein by reference. This recent work used a spatially incoherent light source that is filtered by an unusually large aperture (˜50-100 μm diameter); and unlike most other lens-less in-line holography approaches, the sample plane was placed much closer to the detector chip rather than the aperture plane, i.e., z1>>z2. This unique hologram recording geometry enables the entire active area of the sensor to act as the imaging FOV of the holographic microscope since F˜1. More importantly, there is no longer a direct Fourier transform relationship between the sample and the detector planes since the spatial coherence diameter at the object plane is much smaller than the imaging FOV. At the same time, the large aperture of the illumination source is now geometrically de-magnified by a factor that is proportional to M=z1/z2 which is typically 100-200. Together with a large FOV, these unique features also bring simplification to the set-up since a large aperture (˜50 μm) is much easier to couple light to and align.
- However, a significant trade-off is made in this recent approach. To wit, the pixel size now starts to be a limiting factor for spatial resolution since the recorded holographic fringes are no longer magnified. Because the object plane is now much closer to the detector plane (e.g., z2˜1 mm), the detection NA approaches ˜1. However, the finite pixel size at the sensor chip can unfortunately record holographic oscillations corresponding to only an effective NA of ˜0.1-0.2, which limits the spatial resolution to <2 μm. While, in principle, a higher spatial density of pixels could be achieved by reducing the pixel size at the sensor to e.g., <1 μm, this poses obvious technological challenges for a large FOV.
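- The numbers in this geometry can be sanity-checked with a few lines of arithmetic. The wavelength, pixel pitch, and distances below are illustrative assumptions chosen to match the ranges quoted in the text; the Nyquist-limited effective NA and the de-magnified aperture size then fall in the stated ranges.

```python
wavelength = 550e-9   # illumination wavelength (m), illustrative
pixel = 2.2e-6        # sensor pixel pitch (m), illustrative

# The sensor resolves fringe periods down to ~2 pixels (Nyquist), which
# corresponds to an effective numerical aperture of about lambda/(2*pixel).
na_eff = wavelength / (2.0 * pixel)

z1, z2 = 5e-2, 0.5e-3  # source-sample and sample-sensor distances (m)
M = z1 / z2            # aperture demagnification factor, typically 100-200
aperture = 50e-6       # illumination aperture diameter (m)
effective_aperture = aperture / M

print(na_eff, M, effective_aperture)  # ~0.125, ~100, ~0.5 um
```

The effective NA of ~0.125 sits inside the ~0.1-0.2 range given above, and the ~50 μm aperture is geometrically de-magnified to sub-micron scale, which is why the large, easy-to-align aperture does not limit resolution.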
- More recently, a lensfree super-resolution holographic microscope has been proposed which achieves sub-micron spatial resolution over a large field-of-view of e.g., ˜24 mm2. See Bishara et al., “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276 (2011), which is incorporated herein by reference. The microscope works based on partially-coherent lensfree digital in-line holography using multiple light sources (e.g., light-emitting diodes—LEDs) placed at ˜3-6 cm away from the sample plane such that at a given time only a single source illuminates the objects, projecting in-line holograms of the specimens onto a CMOS sensor-chip. Since the objects are placed very close to the sensor chip (e.g., ˜1-2 mm) the entire active area of the sensor becomes the imaging field-of-view, and the fringe magnification is unity. As a result of this, these holographic diffraction signatures are unfortunately under-sampled due to the limited pixel size at the CMOS chip (e.g., ˜2-3 μm). To mitigate this pixel size limitation on spatial resolution, several lensfree holograms of the same static scene are recorded as different LEDs are turned on and off, which creates sub-pixel shifted holograms of the specimens. By using pixel super-resolution techniques, these sub-pixel shifted under-sampled holograms can be digitally put together to synthesize an effective pixel size of e.g., ˜300-400 nm, which can now resolve/sample a much larger portion of the higher spatial frequency oscillations within the lensfree object hologram.
- This super-resolved (SR) in-line hologram, however, still suffers from twin-image artifact, which is common to all in-line hologram recording geometries. In earlier work, the use of an iterative object-support based phase recovery method has been demonstrated to eliminate this twin-image artifact creating wide-field microscopic images of samples. This twin-image elimination method, however, requires as input the location estimations of the objects within the imaging field-of-view. To this end, a threshold or a segmentation algorithm can be used to automatically estimate the objects' locations (creating the object support) for relatively sparse samples. However, in denser specimens, this object support is difficult to estimate which can create challenges in removal of the twin-image artifact. For such dense specimens, a new approach is needed for pixel super-resolution holographic microscopy.
- In one aspect of the invention, to overcome these object-support related imaging challenges for dense and connected specimens, a new approach for pixel super-resolution holographic microscopy uses multiple (e.g., 2-5) lensfree intensity measurements that are each captured at a different height (i.e., z2 as seen in
FIG. 1A) from the detector-array. Generally, each lensfree super-resolved hologram is synthesized with a ˜10-100 μm change in the relative height of the object with respect to the detector-chip surface (although other distances are contemplated), after which the super-resolved holograms are digitally registered and aligned to each other to take into account possible rotations and shifts. The co-registered super-resolved holograms, corresponding to different object heights, are then iteratively processed to recover the missing optical phase so that microscopic images of the specimens can be automatically reconstructed without the need for any spatial masking steps. Therefore, this multi-height holographic approach eliminates the need to estimate the object-support at the sample plane, cleaning up the twin-image artifacts even for dense and connected specimens. - In one embodiment, a method of imaging a sample includes illuminating a sample spaced apart from an image sensor at a first distance with an illumination source; obtaining an image of the sample from the image sensor with the sample spaced apart from the image sensor at the first distance; illuminating the sample spaced apart from the image sensor at one or more distances different from the first distance with the illumination source; obtaining an image of the sample from the image sensor with the sample spaced apart from the image sensor at the one or more distances different from the first distance; registering the images of the sample obtained at the one or more distances and the image of the sample obtained at the first distance; iteratively recovering lost phase information from the registered images; and reconstructing an amplitude or phase image of the sample based at least in part on the recovered lost phase information.
- In another embodiment, a method of imaging a sample includes illuminating a sample spaced apart from an image sensor at a first distance with an illumination source at a plurality of different locations. Lower resolution image frames of the sample are obtained from the image sensor at the plurality of different locations with the sample spaced apart from the image sensor at the first distance. A higher resolution image of the sample at the first distance is recovered based at least in part on the plurality of lower resolution image frames obtained at the first distance. The sample is illuminated at one or more sample-to-sensor distances different from the first distance with the illumination source at a plurality of different locations. Lower resolution image frames of the sample are obtained from the image sensor at the plurality of different locations with the sample spaced apart from the image sensor at the one or more distances different from the first distance. A higher resolution image of the sample is recovered at the one or more distances based at least in part on the plurality of lower resolution image frames obtained at the one or more distances. The higher resolution images of the sample obtained at the one or more distances and the higher resolution image of the sample obtained at the first distance are registered with one another. Lost phase information from the registered higher resolution images is iteratively recovered. Amplitude or phase images of the sample are reconstructed based at least in part on the recovered lost phase information.
- In another embodiment, a system for imaging objects within a sample includes an image sensor; an illumination source configured to scan along a surface relative to the image sensor and illuminate the sample at a plurality of different locations; a sample interposed between the image sensor and the illumination source; means for adjusting an actual or effective distance between the sample and the image sensor; and at least one processor configured to reconstruct an image of the sample based on the images obtained from the illumination source at the plurality of different scan positions.
- In another embodiment, a system for imaging a sample includes an image sensor; one or more illumination sources coupled to an array of optical waveguides, wherein each optical waveguide of the array terminates at a different spatial location in three dimensional space; a sample holder interposed between the image sensor and the one or more illumination sources; and means for adjusting an actual or effective distance between the sample holder and the image sensor.
-
FIG. 1A schematically illustrates a system for imaging densely distributed objects contained in a sample according to one embodiment. -
FIG. 1B illustrates a sample containing densely distributed objects disposed on a sample holder. -
FIG. 2A illustrates a moveable z-adjust stage according to one embodiment. -
FIG. 2B illustrates a moveable z-adjust stage according to another embodiment. -
FIGS. 3A and 3B illustrate a portable microscope for imaging densely distributed objects contained in a sample according to another embodiment. -
FIGS. 4A and 4B illustrate a method used to reconstruct phase and amplitude images of object(s) contained in a sample according to one embodiment. -
FIG. 4C illustrates a method used to reconstruct phase and amplitude images of object(s) contained in a sample according to another embodiment. -
FIG. 5A shows a full FOV (˜24 mm2) low resolution (LR) lensfree hologram of a blood smear as captured by the image sensor. The dashed rectangle focuses on an area that is rather dense. -
FIG. 5B illustrates a multi-height (5 different heights) PSR lensfree amplitude image of the rectangular region of FIG. 5A. -
FIG. 5C illustrates a 10× microscope objective comparison image. -
FIG. 5D illustrates a single height back propagated PSR amplitude image taken from the dashed rectangle in FIG. 5B. -
FIG. 5E illustrates a multi-height (5 different heights) PSR lensfree amplitude image of the rectangular region of FIG. 5B. -
FIG. 5F illustrates a 20× objective lens (0.4 NA) microscope image of the rectangular region of FIG. 5B (and FIG. 5C). -
FIG. 6A illustrates a multi-height based PSR lensfree phase image of a Pap smear. This image was reconstructed using five heights. Fifty (50) iterations were used during phase recovery (λ=550 nm). -
FIG. 6B illustrates a single height back propagated PSR phase image. -
FIG. 6C illustrates a multi-height based PSR lensfree amplitude image of the leftmost rectangle of FIG. 6A. -
FIG. 6D illustrates a multi-height based PSR lensfree phase image of the leftmost rectangle of FIG. 6A. -
FIG. 6E illustrates a comparative 40× objective lens (0.65 NA) microscope image of the leftmost rectangle of FIG. 6A. -
FIG. 6F illustrates the single height back propagated PSR phase image of the leftmost rectangle of FIG. 6B. -
FIG. 6G illustrates the single height back propagated PSR amplitude image of the leftmost rectangle of FIG. 6B. -
FIG. 6H illustrates a multi-height based PSR lensfree amplitude image of the rightmost rectangle of FIG. 6A. -
FIG. 6I illustrates a multi-height based PSR lensfree phase image of the rightmost rectangle of FIG. 6A. -
FIG. 6J illustrates a comparative 40× objective lens (0.65 NA) microscope image of the rightmost rectangle of FIG. 6A. -
FIG. 6K illustrates the single height back propagated PSR phase image of the rightmost rectangle of FIG. 6B. -
FIG. 6L illustrates the single height back propagated PSR amplitude image of the rightmost rectangle of FIG. 6B. -
FIG. 7A illustrates a back propagated phase image of a Pap smear from one PSR lensfree hologram (1 height). -
FIG. 7B illustrates a multi-height (2 heights) based PSR lensfree phase image of the same Pap smear sample. -
FIG. 7C illustrates a multi-height (3 heights) based PSR lensfree phase image of the same Pap smear sample. -
FIG. 7D illustrates a multi-height (4 heights) based PSR lensfree phase image of the same Pap smear sample. -
FIG. 7E illustrates a multi-height (5 heights) based PSR lensfree phase image of the same Pap smear sample. -
FIG. 7F illustrates a comparative 10× objective lens (0.25 NA) microscope image of the same field of view. -
FIG. 8A illustrates a single height based back propagated PSR amplitude image of a UCLA pattern etched on a glass slide. The letters “U” and “C” have a spacing of about 1 -
FIG. 8B illustrates a multi-height (2 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A (λ=490 nm). Sixty (60) iterations were used in the reconstruction. -
FIG. 8C illustrates a multi-height (3 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A. Thirty (30) iterations were used in the reconstruction. -
FIG. 8D illustrates a multi-height (4 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A. Twenty (20) iterations were used in the reconstruction. -
FIG. 8E illustrates a multi-height (5 heights) based PSR lensfree amplitude image of the same UCLA pattern of FIG. 8A. Fifteen (15) iterations were used in the reconstruction. -
FIG. 8F illustrates a microscope comparison image of the same UCLA pattern using a 40× objective lens (0.65 NA). -
FIG. 9A illustrates a full FOV (˜30 mm2) holographic image of a SUREPREP Pap smear sample acquired using the field-portable microscope of FIGS. 3A and 3B. -
FIG. 9B illustrates the multi-height (5 heights) based PSR lensfree amplitude image of Zone 1 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used. -
FIG. 9C illustrates the multi-height (5 heights) based PSR lensfree amplitude image of Zone 2 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used. -
FIG. 9D illustrates the multi-height (5 heights) based PSR lensfree phase image of Zone 1 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used. -
FIG. 9E illustrates the multi-height (5 heights) based PSR lensfree phase image of Zone 2 from FIG. 9A of the SUREPREP Pap smear sample. A total of ten iterations were used. -
FIG. 9F illustrates a microscope image (40×, 0.65 NA) of Zone 1 from FIG. 9A. -
FIG. 9G illustrates a microscope image (40×, 0.65 NA) of Zone 2 from FIG. 9A. -
FIG. 10A illustrates a multi-height (5 heights) based PSR lensfree amplitude image of a THINPREP Pap smear sample. -
FIG. 10B illustrates a multi-height (5 heights) based PSR lensfree phase image of a THINPREP Pap smear sample. -
FIG. 10C illustrates a conventional 40× microscope image (0.65 NA) of the same THINPREP Pap smear sample of FIGS. 10A and 10B. -
FIG. 10D illustrates a multi-height (5 heights) based PSR lensfree amplitude image of a THINPREP Pap smear sample. -
FIG. 10E illustrates a multi-height (5 heights) based PSR lensfree phase image of a THINPREP Pap smear sample. -
FIG. 10F illustrates a conventional 40× microscope image (0.65 NA) of the same THINPREP Pap smear sample of FIGS. 10D and 10E. -
FIG. 10G illustrates a multi-height (5 heights) based PSR lensfree amplitude image of a THINPREP Pap smear sample. -
FIG. 10H illustrates a multi-height (5 heights) based PSR lensfree phase image of a THINPREP Pap smear sample. -
FIG. 10I illustrates a conventional 40× microscope image (0.65 NA) of the same THINPREP Pap smear sample of FIGS. 10G and 10H. -
FIG. 11A illustrates the single height back propagated PSR phase image of a SUREPATH Pap smear sample. -
FIG. 11B illustrates a multi-height (2 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample. -
FIG. 11C illustrates a multi-height (3 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample. -
FIG. 11D illustrates a multi-height (4 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample. -
FIG. 11E illustrates a multi-height (5 heights) based PSR lensfree phase image of the SUREPATH Pap smear sample. -
FIG. 11F illustrates a conventional 10× microscope image (0.25 NA) of the same SUREPATH Pap smear sample. -
FIG. 1A illustrates a system 10 for imaging of an object 12 (or more preferably multiple objects 12) within a sample 14 (best seen in FIG. 1B). The object 12 may include a cell or biological component or constituent (e.g., a cellular organelle or substructure). The object 12 may even include a multicellular organism or the like. Alternatively, the object 12 may be a particle or other object. The methods and systems described herein have particular applicability for samples 14 that contain objects 12 that are densely populated amongst or within the sample 14. FIG. 1A illustrates objects 12 in the form of red blood cells (RBCs) to be imaged that are disposed some distance z2 above an image sensor 16. As explained herein, this distance z2 is adjustable as illustrated by the Δz in the inset of FIG. 1A. The sample 14 containing one or more objects 12 is typically placed on an optically transparent sample holder 18 such as a glass or plastic slide, coverslip, or the like as seen in FIG. 1B. The Δz may be changed by moving the sample holder 18 relative to the image sensor 16 or, alternatively, by moving the image sensor 16 relative to the sample holder 18. In yet another alternative, Δz may be changed by moving both the sample holder 18 and the image sensor 16. For example, as seen in FIG. 2A, the sample holder 18 may be placed on a movable stage 19 that is able to adjust the z2 distance relative to the image sensor 16. Alternatively, as seen in FIG. 2B, the moveable stage 19 may move the image sensor 16 relative to a stationary sample holder 18. For example, the moveable stage 19 may be coupled to a nut or the like (not shown) that moves in the z direction in response to rotation of a shaft or screw (not shown) coupled to the nut. The movable stage 19 preferably moves in increments ranging from about 1 μm to about 1.0 cm and more preferably between about 10 μm to about 100 μm. The moveable stage 19 may be actuated manually by a user using a knob, dial, or the like. 
Alternatively, the moveable stage 19 may be, in some embodiments, automatically controlled using a small actuator such as a motor, linear actuator, or the like. In yet another alternative, different sized (i.e., different thickness) inserts (e.g., glass slides or the like) can be manually inserted between the sample holder 18 and the image sensor 16 to adjust the sample-to-sensor distance. Likewise, multiple inserts can be inserted between the sample holder 18 and the image sensor 16 to adjust the z2 height. In other alternative embodiments, the actual distance between the image sensor 16 and the sample 14 is not changed. Rather, the effective distance is altered by, for example, changing the wavelength of the illumination source 20 as described below or changing the refractive index of the medium or media located between the image sensor 16 and the sample 14. - Regardless, the surface of
image sensor 16 may be in contact with or in close proximity to the sample holder 18. Generally, the objects 12 within the sample 14 are located within several millimeters of the active surface of the image sensor 16. The image sensor 16 may include, for example, a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) device. The image sensor 16 may be monochromatic or color. The image sensor 16 generally has a small pixel size which is less than 9.0 μm in size and more particularly, smaller than 5.0 μm in size (e.g., 2.2 μm or smaller). Generally, image sensors 16 having smaller pixel size will produce higher resolution. As explained herein, sub-pixel resolution can be obtained by capturing and processing multiple lower-resolution holograms that are spatially shifted with respect to each other by sub-pixel pitch distances. - Still referring to
FIG. 1A, the system 10 includes an illumination source 20 that is configured to illuminate a first side (top side as seen in FIG. 1A) of the sample holder 18. The illumination source 20 is preferably a spatially coherent or a partially coherent light source but may also include an incoherent light source. Light emitting diodes (LEDs) are one example of an illumination source 20. LEDs are relatively inexpensive, durable, and have generally low power requirements. Of course, other light sources may also be used such as a Xenon lamp with a filter. A light bulb is also an option as the illumination source 20. A coherent beam of light such as a laser may also be used (e.g., laser diode). The illumination source 20 preferably has a spectral bandwidth that is between about 0.1 and about 100 nm, although the spectral bandwidth may be even smaller or larger. Further, the illumination source 20 may include at least partially coherent light having a spatial coherence diameter between about 0.1 and 10,000 μm. - The
illumination source 20 may be coupled to an optical fiber as seen in FIG. 1A or another optical waveguide. If the illumination source 20 is a lamp or light bulb, it may be used in connection with an aperture (not shown), or multiple apertures in the case of an array, which acts as a spatial filter interposed between the illumination source 20 and the sample. The term optical waveguide as used herein refers to optical fibers, fiber-optic cables, integrated chip-scale waveguides, an array of apertures, and the like. With respect to the optical fiber, the fiber includes an inner core with a higher refractive index than the outer surface so that light is guided therein. The optical fiber itself operates as a spatial filter. In this embodiment, the core of the optical fiber may have a diameter within the range of about 50 μm to about 100 μm. As seen in FIG. 1A, the distal end of the fiber-optic cable of the illumination source 20 is located at a distance z1 from the sample holder 18. The imaging plane of the image sensor 16 is located at a distance z2 from the sample holder 18. In the system 10 described herein, z2<<z1. For example, the distance z1 may be on the order of around 1 cm to around 10 cm. In other embodiments, the range may be smaller, for example, between around 5 cm to around 10 cm. The distance z2 may be on the order of around 0.05 mm to 2 cm; however, in other embodiments this distance z2 may be between around 1 mm to 2 mm. Of course, as described herein, the z2 distance is adjustable in increments ranging from about 1 μm to about 1.0 cm, although a larger range such as between 0.1 μm to about 10.0 cm is also contemplated. In other embodiments, the incremental z2 adjustment is within the range of about 10 μm to about 100 μm. The particular amount of the increase or decrease does not need to be known in advance. 
In the system 10, the propagation distance z1 is such that it allows for spatial coherence to develop at the plane of the object(s) 12, and light scattered by the object(s) 12 interferes with background light to form a lensfree in-line hologram on the image sensor 16. - Still referring to
FIG. 1A, the system 10 includes a computer 30 such as a laptop, desktop, tablet, mobile communication device, personal digital assistant (PDA) or the like that is operatively connected to the system 10 such that lower resolution images (e.g., lower resolution or raw image frames) are transferred from the image sensor 16 to the computer 30 for data acquisition and image processing. The computer 30 includes one or more processors 32 that, as described herein in more detail, run or execute software that takes multiple, sub-pixel (low resolution) images taken at different scan positions (e.g., x and y positions as seen in the inset of FIG. 1A) and creates a single, high resolution projection hologram image of the objects 12. The software creates additional high resolution projection hologram images of the objects 12 at each different z2 distance. The multiple, high resolution images obtained at different heights are registered with respect to one another using the software. The software also digitally reconstructs complex projection images of the objects 12 through an iterative phase recovery process that rapidly merges all the captured holographic information to recover the lost optical phase of each lensfree hologram without the need for any spatial masking, filtering, or prior assumptions regarding the samples. After a number of iterations (typically between 1 and 75), the phase of each lensfree hologram (captured at different heights) is recovered and one of the pixel super-resolved holograms is back propagated to the object plane to create phase and amplitude images of the objects 12. The reconstructed images can be displayed to the user on, for example, a display 34 or the like. The user may, for example, interface with the computer 30 via an input device 36 such as a keyboard or mouse to select different imaging planes. -
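The digital propagation and multi-height phase recovery performed in software can be sketched as follows. This is a minimal illustration assuming the angular spectrum method for free-space propagation and the forward/backward sweep order described in connection with FIG. 4B; function names, parameters, and the fixed iteration count are illustrative, not the actual implementation.

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Free-space propagation of a complex field by dz (meters) using
    the angular spectrum method; a negative dz back-propagates."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_height_phase_recovery(amplitudes, heights, wavelength, dx, n_iter=20):
    """Recover the lost optical phase from M co-registered holograms whose
    square-root intensities (`amplitudes`) were recorded at sensor
    distances `heights`. Returns the complex field at measurement plane #1."""
    field = amplitudes[0].astype(complex)  # zero initial phase
    M = len(amplitudes)
    for _ in range(n_iter):
        for m in range(1, M):               # forward sweep: plane #1 -> #M
            field = propagate(field, heights[m] - heights[m - 1], wavelength, dx)
            # Enforce the measured amplitude, keep the propagated phase.
            field = amplitudes[m] * np.exp(1j * np.angle(field))
        for m in range(M - 2, -1, -1):      # backward sweep: plane #M -> #1
            field = propagate(field, heights[m] - heights[m + 1], wavelength, dx)
            field = amplitudes[m] * np.exp(1j * np.angle(field))
    return field
```

The recovered field at any measurement plane can then be back propagated to the object plane, e.g. `propagate(field, -heights[0], wavelength, dx)`, with `np.abs` and `np.angle` giving the amplitude and phase images.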
FIG. 1A illustrates that in order to generate super-resolved images, a plurality of different lower resolution images are taken as the illumination source 20 is moved in small increments generally in the x and y directions. The x and y directions are generally in a plane parallel with the surface of the image sensor 16. Alternatively, the illumination source 20 may be moved along a surface that may be three-dimensional (e.g., a sphere or other 3D surface in the x, y, and z dimensions). Thus, the surface may be planar or three-dimensional. In one aspect of the invention, the illumination source 20 has the ability to move in the x and y directions as indicated by the arrows x and y in the inset of FIG. 1A. Any number of mechanical actuators may be used including, for example, a stepper motor, moveable stage, piezoelectric element, or solenoid. FIG. 1A illustrates a moveable stage 40 that is able to move the illumination source 20 in small displacements in both the x and y directions. Preferably, the moveable stage 40 can move in sub-micron increments, thereby permitting images to be taken of the objects 12 at slight x and y displacements. The moveable stage 40 may be controlled in an automated (or even manual) manner by the computer 30 or a separate dedicated controller. In one alternative embodiment, the moveable stage 40 may move in three dimensions (x, y, and z or angled relative to the image sensor 16), thereby permitting images to be taken of objects 12 at slight x, y, and z/angled displacements. - In still an alternative embodiment, as illustrated in
FIGS. 3A and 3B, rather than moving the illumination source 20 in the x and y directions, a system 50 is disclosed that uses a plurality of spaced apart illumination sources that can be selectively actuated to achieve the same result without having to physically move the illumination source 20 or image sensor 16. In this manner, the illumination source 20 is able to make relatively small displacement jogs (e.g., less than about 1 μm). The small discrete shifts parallel to the image sensor 16 are used to generate a single, high resolution image (e.g., pixel super-resolution). - As seen in
FIGS. 3A and 3B, a system 50 is disclosed that uses a lens-less, on-chip compact microscope 52 that can achieve <1 μm resolution over a wide field-of-view of ˜30 mm2. This compact lens-less, on-chip microscope 52 weighs ˜122 grams with dimensions of 4 cm×4 cm×15 cm and is based on partially-coherent digital in-line holography. The microscope 52 includes a base 54 that includes an image sensor 56 that may take the form of a CMOS or CCD chip (e.g., an Aptina MT9J003 CMOS sensor-chip with 1.67 μm pixel size). The base 54 includes a sample tray 58 that is moveable into and out of the base 54. For example, the sample tray 58 is moveable out of the base 54 to load a slide, slip or other sample holder 18 into the same. The sample tray 58 can then be closed, whereby the slide, slip, or other sample holder 18 (as seen in FIGS. 1A and 1B) containing the sample 14 is placed atop the image sensor 56. The microscope 52 also includes an interface 60 (FIG. 3A) that can function both for power as well as data transmission. For example, the interface 60 may include a standard USB interface. The USB interface 60 can provide power to both the image sensor 56 as well as the illumination sources discussed below. - The
microscope 52 includes an elongate portion 62 extending from the base 54. The elongate portion 62 includes a stand-off 64 that includes a hollow, interior portion through which light passes toward the sample positioned above the image sensor 56. The stand-off 64 may be a tubular member as illustrated in FIGS. 3A and 3B. The stand-off 64 aids in providing the separation distance z1 (FIG. 1A) between the sample 14 and the illumination source 20. A housing 66 forms part of the elongate portion 62 and includes therein a plurality of illumination sources 70. The illumination sources 70 may include light emitting diodes (LEDs), laser diodes, or the like. FIG. 3B illustrates an illumination source 70 that includes an array of LEDs. A processor 72 is operatively coupled directly or indirectly to the illumination sources 70 to selectively actuate individual illumination sources 70. Each illumination source 70 is coupled to an optical fiber 74 or other waveguide that terminates in an illumination array 76 that provides for different illumination locations (i.e., different x and/or y locations). In this regard, rather than having a moving illumination source 20 that is driven via a mechanical stage or other moving component, a series of fibers 74 that are stationary yet placed at different locations relative to the image sensor 56 can be used to effectively provide the sub-pixel shift. The individual fibers 74 may be selected digitally or mechanically. For example, a processor may select individual light sources coupled to selected fibers 74 to provide digital switching functionality. Adjacent optical fibers 74 or waveguides are separated from one another by a distance within the range of about 0.001 mm to about 500 mm. - The multiple fiber-
optic waveguides 74 are butt-coupled to light emitting diodes 70, which are controlled by a low-cost micro-controller 72 (Atmel ATmega8515) to sequentially illuminate the sample. In the microscope 52 of FIG. 3B, twenty four (24) LEDs are used (OSRAM TOPLED, SuperRed, 633 nm). The resulting lensfree holograms are then captured by a digital image sensor 56 and are rapidly processed using a pixel super-resolution algorithm to generate much higher resolution holographic images of the objects 12. - Still referring to
FIG. 3B, the base 54 of the microscope 52 includes a z-shift stage 78 that is used to manually adjust the distance z2 between the sample and the image sensor 56. The z-shift stage 78 includes a dial or knob 80 that is manually moved to adjust the distance z2 by a discrete distance. The dial or knob 80 translates lateral or rotational movement of the same into vertical movement of a stage element that is coupled to the image sensor 56. Generally, this distance z2 is adjusted in small increments ranging from about 10 μm to about 100 μm. The exact amount of incremental increase or decrease in z2 does not need to be known in advance and may be random. This parameter is digitally determined after image acquisition using an autofocus algorithm. Movement of the dial or knob 80 moves the image sensor 56 relative to a stationary sample holder (e.g., sample holder 18 of FIGS. 1A and 1B) that is held within the sample tray 58. The microscope 52 includes an optional lock 82 that is used to lock the position of the sample 14/sample holder 18. -
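The autofocus determination of z2 mentioned above can be sketched with a sharpness criterion such as the variance of the Laplacian, one of the focus measures studied in the Pech-Pacheco et al. comparison cited elsewhere herein: the hologram is back propagated to a set of trial distances and the distance whose reconstruction scores sharpest is kept. The helper names below are illustrative, and periodic boundaries are assumed for brevity.

```python
import numpy as np

def variance_of_laplacian(img):
    """Focus measure: variance of the Laplacian-filtered image.
    Sharper (in-focus) reconstructions score higher. np.roll gives
    periodic boundary handling, which suffices for a sketch."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def autofocus(candidates, distances):
    """Pick the trial z2 whose reconstruction is sharpest.
    `candidates` are the (complex or real) images obtained by back
    propagating the hologram to each distance in `distances`."""
    scores = [variance_of_laplacian(np.abs(c)) for c in candidates]
    return distances[int(np.argmax(scores))]
```

In use, the candidate images would be generated by back propagating the measured hologram over a sweep of trial z2 values; only the scoring step is shown here.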
FIGS. 4A and 4B illustrate a method used to reconstruct images of the object(s) 12 from the plurality of lower resolution images taken at different positions. Referring to FIG. 4A, in step 1000, a plurality of lower resolution images are obtained of the sample 14 containing the object(s) 12 while the illumination source 20 and/or the image sensor 16 are moved relative to one another at a plurality of different locations (e.g., x, y locations) to create the sub-pixel image shifts. The number of lower resolution images may vary but generally includes between about 2 and 250 images. During step 1000, the sample 14 is disposed from the image sensor 16 at a first distance (d1). Next, as seen in step 1100, a pixel super-resolved (PSR) hologram is synthesized based upon the plurality of lower resolution images obtained in operation 1000. The details of digitally converting a plurality of lower resolution images into a single, higher resolution pixel SR image may be found in Bishara et al., “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Optics Express 18:11181-11191 (2010), which is incorporated by reference. This pixel super-resolution step takes lower resolution holographic shadows of the object(s) 12 (e.g., captured at ˜10 million pixels each in the case of the microscope 52 of FIGS. 3A and 3B) and then creates a higher resolution lensfree hologram that now contains >300 million pixels over the same 30 mm2 field-of-view with an effective pixel size of ˜300 nm. - Next, as seen in
operation 1200, the distance between the sample 14 and the image sensor 16 is adjusted to a different distance (dn). At this new distance (dn), as seen in operation 1300, a plurality of lower resolution images are obtained of the sample 14 containing the object(s) 12 while the illumination source 20 and/or the image sensor 16 are moved relative to one another at a plurality of different locations (e.g., x, y locations) to create the sub-pixel image shifts. The plurality of lower resolution images are obtained while the sample 14 and the image sensor 16 are located at the new or different distance (dn). After the lower resolution images are obtained, as seen in operation 1400, a pixel super-resolved hologram (at the different distance (dn)) is synthesized based upon the plurality of lower resolution images obtained in operation 1300. As seen by arrow 1500, the process is repeated for different sample-to-sensor distances. Generally, the process repeats such that a pixel super-resolved hologram is created at between 2-20 different distances, although this number may vary. -
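The registration of the super-resolved holograms across heights, described next, relies on locating control points by normalized correlation and fitting an affine transformation to the matched coordinates. A minimal sketch under those assumptions follows; the brute-force search and all helper names are illustrative, not the actual implementation.

```python
import numpy as np

def locate_template(image, template):
    """Find the (row, col) top-left corner where `template` best matches
    `image`, using normalized correlation (brute-force, for clarity)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

def affine_from_points(src, dst):
    """Least-squares 2-D affine transform mapping control points `src`
    to `dst` (three or more pairs); returns a 2x3 matrix A such that
    dst ~ A @ [x, y, 1]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    M = np.hstack([src, np.ones((len(src), 1))])
    X, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return X.T
```

With three control points matched in each height's hologram, the fitted affine transform can be used to resample every hologram onto the reference hologram's grid.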
FIG. 4B , the plurality of pixel super-resolved holograms obtained at the different heights are then registered with respect to each other as seen inoperation 1600. The subsequent iterative phase recovery requires that these pixel super-resolved holograms are accurately registered to each other. During the image acquisition step, lateral translation and rotation of theobjects 12 between holograms of different heights are unavoidable. To accurately register these pixel super-resolved holograms to each other, three-control points from three different corners of the image are selected in one of the holograms (which is deemed the reference hologram). One preferred control point could be a small isolated dust particle at a corner since its hologram is circularly symmetric. If need be, a special alignment marker(s) can also be placed at the corners of the sample holder/substrate. Therefore, normalized correlations between lensfree holograms can be used to find the matching points in each image captured at a different height. After selection of the control points, a small area (e.g., ˜30×30 μm) around each control point is cropped and digitally interpolated (˜4-6 times) to serve as a normalized correlation template. Furthermore, for accurately finding the coordinate shift of each control point among M images, lensfree holographic images have to be positioned in the same z2-distance. Therefore, the difference in the z2-distance between lensfree holograms acquired at different heights is evaluated by an auto-focus algorithm, such as that disclosed in J. L. Pech-Pacheco et al., “Diatom Autofocusing in Brightfield Microscopy: a Comparative Study,” in Pattern Recognition, International Conference On (IEEE Computer Society, 2000), Vol. 3, p. 
3318, incorporated herein by reference, which permits one to digitally propagate the selected correlation templates to the same z2-distance, where normalized correlations are calculated to find the coordinate shifts between the control points in each image. An affine transformation is used to register the super-resolved holograms of different heights to the reference hologram. - Still referring to
FIG. 4B, the iterative phase recovery process is carried out in operations 1700 through 2000. At the beginning of the algorithm, as seen in operation 1700, the initial phase is assumed to be zero, after which the iterative phase recovery algorithm uses the free space propagation function to digitally propagate back and forth among these multiple heights. At each height, the amplitude constraint (i.e., the measurement) is enforced while the phase is kept from the previous digital propagation step. - To initiate the phase recovery process, a zero-phase is assigned to the object intensity measurement. One iteration of this phase-recovery process can be described as follows: Intensity measurement #1 (step 1700) is forward propagated (with zero initial phase) to the plane of intensity measurement #2 (step 1800). Then, the amplitude constraint in measurement #2 (step 1800) is enforced while the calculated phase resulting from forward propagation remains unchanged. The resulting complex field is then forward propagated to the plane of intensity measurement #3 (step 1900), where once again the amplitude constraint in
measurement #3 is enforced while the calculated phase resulting from forward propagation remains unchanged. This process continues until reaching the plane of intensity measurement #M (step 2000). Then, instead of forward propagating the fields of the previous stages, back propagation is used, as seen by respective arrows A, B, and C. The complex field of plane #M (step 2000) is back propagated to the plane of intensity measurement #M−1. Then, the amplitude constraint in measurement #M−1 is enforced while the resulting phase remains unchanged. The same iteration continues until reaching the plane of intensity measurement #1 (step 1700). When one complete iteration is achieved (by reaching back to the plane of intensity measurement #1), the complex field derived in the last step serves as the input to the next iteration. Typically, between 1-1,000 iterations, and more typically between 1-70 iterations, are required for satisfactory results. After the phase recovery iterations are complete, as seen in operation 2100, the acquired complex field of any one of the measurement planes is selected and is back propagated to the object plane to retrieve both the phase image 2200 and the amplitude image 2300 of the object(s) 12 within the sample 14. - While
FIGS. 4A and 4B illustrate a preferred method to reconstruct images of the object(s) 12 from the plurality of lower resolution images taken at different positions, in another alternative embodiment, the method can be employed without recovering higher resolution pixel super-resolved holograms or images. In this alternative embodiment, a plurality of lower resolution images are obtained at the same illumination positions (e.g., no x or y shifting) at different sample-to-sensor distances (dn). FIG. 4C illustrates a preferred method according to this embodiment. - As seen in
FIG. 4C, in operation 3000 a lower resolution image at a first sample-to-sensor distance (d1) is obtained. The sample-to-sensor distance (dn) is then adjusted as seen in operation 3100 by any of the methods described herein. A lower resolution image at the new sample-to-sensor distance is then obtained as seen in operation 3200. As seen by return arrow 3300, the adjustment of the sample-to-sensor distance and image acquisition takes place for a number of cycles. Generally, the process repeats at between 2-20 different distances, although this number may vary. Still referring to FIG. 4C, the plurality of lower resolution images obtained at the different sample-to-sensor distances are then registered with one another as seen in operation 3400. The subsequent iterative phase recovery operations then proceed in the same manner as described above in connection with FIG. 4B. After the phase recovery iterations are complete, as seen in operation 3800, the acquired complex field of any one of the measurement planes is selected and is back propagated to the object plane to retrieve both the phase image 3900 and the amplitude image 4000 of the object(s) 12 within the sample 14. The embodiment of FIG. 4C may be simpler, faster, and more cost-effective in some instances. - A device similar to the setup disclosed in
FIG. 1A was tested with both blood smear samples and Papanicolaou smear (Pap smear) samples. The set-up is composed of a partially coherent light source (˜5 nm bandwidth centered at 550 nm), glass cover slips with different thicknesses (to adjust z2), and a CMOS detector array. Blood smear samples were prepared using whole blood (UCLA Blood Bank, USA), where the samples were diluted (×2) with RPMI (Thermo Scientific, Catalog #: SH3002701) at room temperature. Then 5 μL of the diluted blood was dropped on a type-one glass cover slip (Fisher Scientific Catalog #12-548-A). The blood droplet was then smeared by a second cover slip by applying a constant force. The sample was then left to dry in air for ˜10 minutes before being fixed and stained by a HEMA 3 Wright-Giemsa staining kit (Fisher Diagnostics). The Papanicolaou smear (Pap smear) was prepared using a standard SUREPATH (BD Inc.) procedure. - The z2 distance was controlled by placing glass cover slips with different thicknesses between the sample and the image sensor (e.g., different inserts). The thicknesses of the glass cover slips varied between 50 μm and 250 μm; hence the corresponding z2 distances varied between ˜0.7 mm and ˜1 mm. Each lensfree intensity measurement is sampled by the CMOS image sensor with a 2.2 μm pixel size. This relatively large pixel size can cause undersampling issues; therefore, the PSR method was applied in order to effectively decrease the detector pixel size. For each z2-distance a lower-resolution (LR) image stack is captured, where each image in this stack is sub-pixel shifted with respect to the other images in the stack. These sub-pixel shifts are achieved by a slight translation of the fiber-tip position between two sequential images.
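The sub-pixel-shifted LR stack described above can be combined on a finer pixel grid. A minimal shift-and-add sketch is given below; it assumes the sub-pixel shifts are already known (e.g., estimated from the fiber-tip translations), and the upsampling factor is an illustrative assumption. The actual PSR synthesis used in this work is not specified here and may use a different (e.g., optimization-based) formulation.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, up=4):
    """Naive shift-and-add pixel super-resolution sketch: each low-resolution
    frame is placed on an `up`-times finer grid at its known sub-pixel shift
    (in LR-pixel units), and overlapping contributions are averaged."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * up, w * up))
    cnt = np.zeros_like(acc)
    for frame, (sy, sx) in zip(lr_frames, shifts):
        iy = int(round(sy * up)) % up     # sub-pixel shift mapped onto the HR grid
        ix = int(round(sx * up)) % up
        acc[iy::up, ix::up] += frame
        cnt[iy::up, ix::up] += 1
    cnt[cnt == 0] = 1                     # unsampled HR pixels stay zero
    return acc / cnt
```

With `up=4` and shifts spanning a ˜4×4 sub-pixel lattice, the effective pitch of a 2.2 μm detector pixel is reduced to ˜0.55 μm on the synthesized grid.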
-
FIGS. 5A-5F illustrate the benefits of using the above-outlined multi-height lensfree imaging approach for a blood smear sample. FIG. 5A shows a full FOV (˜24 mm2) low resolution (LR) lensfree hologram as captured by the CMOS sensor. The dashed rectangle focuses on an area that is rather dense; however, the blood cells are still organized as a mono-layer, suitable for imaging. The reconstruction results of this dense blood smear using five different z2-distances (711 μm, 767 μm, 821 μm, 876 μm and 946 μm) are shown in FIG. 5B. These five z2-distances/heights are automatically evaluated by using an auto-focus algorithm. The reconstruction results of FIG. 5B are in good agreement with a 10× microscope objective comparison image shown in FIG. 5C. FIGS. 5D, 5E, and 5F provide images of zoomed areas (taken from the dashed rectangle in FIG. 5B) of the single height back propagation image, the multi-height (5 heights) reconstruction image, and a 20× microscope objective comparison image, respectively. The back propagated single height image as seen in FIG. 5D has lower contrast, and it is hard to evaluate the locations of the RBCs for spatial masking purposes. Therefore, support-based phase recovery would not be effective in this case. On the other hand, the multi-height amplitude image as seen in FIG. 5E has significantly improved contrast, and individual RBCs can be identified and resolved even in dense clusters. It is important to emphasize that the multi-height reconstruction images shown in FIGS. 5B and 5E are obtained without the use of any spatial masking or any other prior information regarding the sample. -
FIGS. 6A, 6B, 6C, 6D, 6F, 6G, 6H, 6I, 6K, and 6L illustrate imaged Pap smears (based on SUREPATH automated slide preparation) using the same multi-height imaging set-up. FIGS. 6E and 6J illustrate comparative 40× objective lens (0.65 NA) microscope images. Because of the density of the specimen, the reconstruction of this image is a challenging task for any phase recovery method. FIG. 6A shows the multi-height phase image, which is recovered using lensfree measurements from five different heights (754 μm, 769 μm, 857 μm, 906 μm and 996 μm). The z2-distances were automatically determined using an auto-focus algorithm. FIGS. 6C and 6H show zoomed images of the same Pap smear sample for the amplitude channel. FIGS. 6D and 6I show zoomed images of the same Pap smear sample for the phase channel. In these reconstructed multi-height images the cell morphology is clear, and the cell boundaries can clearly be seen and separated from the background. Moreover, minor overlaps among the cells do not constitute a limitation in this method. As a comparison, FIG. 6B depicts a single height back propagated phase image corresponding to one of the z2 measurements (the FOV is the same as in FIG. 6A). - It is evident that distinguishing the cells from the background is a difficult task in this dense reconstructed image. To better provide a comparison,
FIGS. 6F and 6K show zoomed images of the same Pap smear sample for the phase channel calculated using back propagation of a single height image. FIGS. 6G and 6L show zoomed images of the same Pap smear sample for the amplitude channel calculated using back propagation of a single height image. Compared to FIGS. 6D, 6I and 6C, 6H, the single height back propagation images show significant spatial distortion due to the density of the cells. FIGS. 6E and 6J also provide 40× objective lens (0.65 NA) microscope comparison images for the same zoomed regions, providing a good match to the multi-height reconstruction results shown in FIGS. 6D, 6I and 6C, 6H. Note the enhanced contrast of the cell boundaries in the phase images (FIGS. 6D and 6I), which is complementary to the spatial information coming from the amplitude images (FIGS. 6C and 6H). This complementary set of information conveyed by the amplitude and phase images might facilitate detection of abnormal cells within a Pap test that are characterized, for instance, by a high nuclear-cytoplasmic ratio. - An investigation was conducted into how the number of intensity measurements used in the iterative reconstruction process affects the image quality. To provide a fair comparison (i.e., to better isolate the source of improvement in image quality), a total of 144 Fourier transform pairs were used in each case, regardless of the number of intensity measurements employed in the multi-height based phase recovery.
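The multi-height iteration being benchmarked here (per FIG. 4B: propagate forward and backward through the measurement planes, enforcing the measured amplitude at each plane while keeping the propagated phase) can be sketched as follows, using the angular spectrum method as the free-space propagator. The grid size, wavelength, and pixel pitch in the example are illustrative assumptions, not parameters taken from this disclosure.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Free-space propagation of a complex field over distance dz
    using the angular spectrum method (square grid, pixel pitch dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_height_phase_recovery(amplitudes, z_list, wavelength, dx, n_iter=20):
    """One-pass-forward/one-pass-backward iterative phase recovery:
    at each plane the measured amplitude is enforced while the phase
    from the previous propagation step is retained (zero initial phase)."""
    field = amplitudes[0].astype(complex)            # zero-phase start at plane #1
    for _ in range(n_iter):
        for m in range(1, len(z_list)):              # forward pass
            field = angular_spectrum(field, z_list[m] - z_list[m - 1], wavelength, dx)
            field = amplitudes[m] * np.exp(1j * np.angle(field))
        for m in range(len(z_list) - 2, -1, -1):     # backward pass
            field = angular_spectrum(field, z_list[m] - z_list[m + 1], wavelength, dx)
            field = amplitudes[m] * np.exp(1j * np.angle(field))
    return field                                     # complex field at plane #1
```

Each `angular_spectrum` call costs one FFT/IFFT pair, which is the unit used in the fair comparisons described above.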
FIG. 7A shows a single height back propagated phase image. When a second intensity measurement is added, the multi-height based iterative phase recovery approach can be utilized. Consequently, the recovered phase image after 72 iterations (FIG. 7B) looks significantly better than the phase image of FIG. 7A. A further improvement in image quality is achieved by adding a third intensity measurement to the multi-height phase recovery process, as seen in FIG. 7C. After 36 iterations (i.e., corresponding to a total of 144 Fourier transform pairs as before), the cells that were hidden in the noisy background are now visible, as seen by the arrows in FIG. 7C. A moderate improvement in image contrast is noticed when adding more intensity measurements, as can be seen in the reconstructed multi-height phase images from four and five heights, respectively (FIGS. 7D and 7E). Note that in these two cases, 24 and 16 iterations were used, respectively, so that the total number of Fourier transform operations remains the same in all reconstructions shown in FIGS. 7B-7E, which helps to isolate the source of this image quality improvement and relate it to the multiple height measurements rather than the number of back-and-forth digital propagation operations. FIG. 7F shows a microscope comparison image (10×, 0.25 NA) for the same region of interest. - After validating the usefulness of the pixel super-resolved multi-height based phase recovery approach with dense blood smears and Pap tests, an isolated ‘UCLA’ pattern was also imaged that was etched on a glass slide using focused ion beam (FIB) milling, where the letters ‘U’ and ‘C’ are ˜1 μm apart. The single height back propagated super-resolved holographic image is shown in
FIG. 8A, where the letters ‘U’ and ‘C’ are clearly separated. The ‘UCLA’ pattern is spatially isolated from nearby objects, and therefore for this small isolated FOV phase recovery is not necessary. FIGS. 8B, 8C, 8D, and 8E illustrate multi-height based reconstructed amplitude images for two, three, four and five different heights, respectively (λ=490 nm). For a fair comparison among these recoveries, once again the number of Fourier transform pairs was kept constant in each case, as a result of which each reconstruction used a different number of iterations (60, 30, 20 and 15 iterations, respectively). It is evident that the letters ‘U’ and ‘C’ are clearly separated in all of these images, which indicates successful cross-registration of the different-height super-resolved holograms to each other, so that spatial smearing effects due to possible inconsistencies among different z2 lensless holograms are minimized. A microscope comparison image of the same “UCLA” pattern can also be seen in FIG. 8F, acquired using a 40× objective lens (0.65 NA). - The performance of the
portable microscope 52 of FIGS. 3A and 3B was experimentally validated by imaging Papanicolaou smears (Pap smears). Two FDA-approved liquid-based Pap smear preparation techniques (SUREPATH and THINPREP) were used to test the imaging performance of the portable microscope 52. FIG. 9A illustrates a full FOV (˜30 mm2) hologram of a Pap smear sample prepared by the SUREPATH technique. In this Pap smear, the cells form a dense and confluent two-dimensional layer on a glass slide. In such a dense sample as shown in FIG. 9A, a simple back-propagation of the hologram to the object plane will result in a distorted image for which the object-support (required for phase recovery based on a single height) would be rather difficult to estimate. However, the multi-height phase recovery approach utilized herein requires no prior information or spatial masking/filtering step for proper image reconstruction. FIGS. 9B and 9C show the reconstructed amplitude images of Zones 1 and 2, respectively. FIGS. 9D and 9E show the reconstructed phase images of Zones 1 and 2, respectively. FIGS. 9F and 9G illustrate microscope images (40× objective, 0.65 NA) of the same Zone 1 and Zone 2 regions, respectively. - In the reconstructed amplitude images (
FIGS. 9B and 9C), the cells' cytoplasm is relatively difficult to see, while the nuclei are quite visible with good contrast. Meanwhile, in the corresponding phase images (FIGS. 9D and 9E) the cells' cytoplasm is much more visible compared to the amplitude images. This complementary set of spatial information coming from the phase and amplitude channels of the microscope is rather valuable and might become useful especially for automated estimation of the nuclear-cytoplasmic ratio (NC ratio) of each cell on the smear. For instance, a large NC ratio is considered an indicator of an abnormal cell that might be cancerous. - An alternative liquid-based Pap smear preparation method is THINPREP. In this method, the sample is sparser compared to the SUREPATH preparation. Nevertheless, cells still overlap with each other, forming clusters. The imaging results for a THINPREP Pap smear sample are summarized in
FIGS. 10A, 10B, 10D, 10E, 10G, and 10H, where phase and amplitude images of the cells are reconstructed without the use of any spatial masks or filtering operations that would normally have been required for a single height lensfree measurement. Corresponding microscope images (10×, 0.25 NA) are shown in FIGS. 10C, 10F, and 10I. The holographic images shown in FIGS. 10A, 10B, 10D, 10E, 10G, and 10H were reconstructed using five different heights (936 μm, 993 μm, 1036 μm, 1064 μm and 1080 μm, each of which was automatically determined using an auto-focus algorithm), and ten iterations were used as part of the phase recovery process. The cells and their inner morphology were successfully reconstructed in FIGS. 10A, 10B, 10D, 10E, 10G, and 10H and are in good agreement with the microscope comparison images provided in FIGS. 10C, 10F, and 10I. Once again, the phase images provide more information/contrast regarding the cell boundaries, while the amplitude images provide more information regarding the cells' inner structures, including nuclei. - An investigation was also conducted into how the number of hologram heights (i.e., M) affects the image quality of the
microscope 52. FIGS. 11A-11E provide reconstructed phase images of Pap smear samples prepared using the SUREPATH technique at one, two, three, four, and five heights, respectively. The heights were located 933 μm, 1004 μm, 1041 μm, 1065 μm and 1126 μm from the detector plane. To provide a fair comparison among the reconstructions, the same number of Fourier transform pairs was used (i.e., 96 pairs) in each case. It is apparent that the back-propagated image (FIG. 11A) corresponding to a single height hologram does not provide useful information. A significant image quality improvement is demonstrated in FIG. 11B when two heights are used for phase recovery. A further improvement in image quality is also noticeable when using three heights instead of two (from FIG. 11B to 11C). When adding the fourth and fifth heights, the improvement in image quality is incremental. - While the invention described herein has largely been described as a “lens free” imaging platform, it should be understood that various optical components, including lenses, may be combined or utilized in the systems and methods described herein. For instance, the devices described herein may use small lens arrays (e.g., micro-lens arrays) for non-imaging purposes. As one example, a lens array could be used to increase the efficiency of light collection for the sensor array. Such optical components, while not necessary to image the sample and provide useful data and results regarding the same, may still be employed and fall within the scope of the invention. While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. The invention, therefore, should not be limited, except by the following claims and their equivalents.
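The fair-comparison bookkeeping used in these experiments can be made explicit. Counting one FFT/IFFT pair per free-space propagation, and 2(M−1) propagations per iteration (M−1 forward, M−1 backward), a fixed Fourier-transform budget determines the iteration count for each number of heights M. This counting rule is an inference from the quoted iteration counts rather than a formula stated in the text; it reproduces the 60, 30, 20, and 15 iterations reported for the ‘UCLA’ target.

```python
def iterations_for_budget(n_heights, fft_pair_budget):
    """Iterations affordable under a fixed FFT-pair budget, assuming each
    multi-height iteration costs 2*(M-1) FFT/IFFT pairs (one pair per
    forward or backward propagation between adjacent measurement planes)."""
    return fft_pair_budget // (2 * (n_heights - 1))

# A 120-pair budget across M = 2..5 heights yields 60, 30, 20, 15 iterations,
# matching the counts quoted for the reconstructions in FIGS. 8B-8E.
```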
Claims (16)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/624,624 US20170357083A1 (en) | 2011-11-07 | 2017-06-15 | Maskless imaging of dense samples using multi-height lensfree microscope |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161556697P | 2011-11-07 | 2011-11-07 | |
PCT/US2012/047725 WO2013070287A1 (en) | 2011-11-07 | 2012-07-20 | Maskless imaging of dense samples using multi-height lensfree microscope |
US201414356131A | 2014-05-02 | 2014-05-02 | |
US15/624,624 US20170357083A1 (en) | 2011-11-07 | 2017-06-15 | Maskless imaging of dense samples using multi-height lensfree microscope |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2012/047725 Division WO2013070287A1 (en) | 2011-11-07 | 2012-07-20 | Maskless imaging of dense samples using multi-height lensfree microscope |
US14/356,131 Division US9715099B2 (en) | 2011-11-07 | 2012-07-20 | Maskless imaging of dense samples using multi-height lensfree microscope |
Publications (1)
Publication Number | Publication Date |
---|---|
US20170357083A1 true US20170357083A1 (en) | 2017-12-14 |
Family
ID=48290438
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/356,131 Active 2033-09-12 US9715099B2 (en) | 2011-11-07 | 2012-07-20 | Maskless imaging of dense samples using multi-height lensfree microscope |
US15/624,624 Abandoned US20170357083A1 (en) | 2011-11-07 | 2017-06-15 | Maskless imaging of dense samples using multi-height lensfree microscope |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/356,131 Active 2033-09-12 US9715099B2 (en) | 2011-11-07 | 2012-07-20 | Maskless imaging of dense samples using multi-height lensfree microscope |
Country Status (2)
Country | Link |
---|---|
US (2) | US9715099B2 (en) |
WO (1) | WO2013070287A1 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10248838B2 (en) | 2015-12-04 | 2019-04-02 | The Regents Of The University Of California | Method and device for single molecule imaging |
DE102018129981A1 (en) | 2018-11-27 | 2020-05-28 | Basler Ag | Processing of a digital intensity image: method, storage medium and imaging system |
US10795315B2 (en) | 2016-05-11 | 2020-10-06 | The Regents Of The University Of California | Method and system for pixel super-resolution of multiplexed holographic color images |
US10838192B2 (en) | 2016-05-10 | 2020-11-17 | The Regents Of The University Of California | Method and device for high-resolution color imaging using merged images from holographic and lens-based devices |
US11262286B2 (en) | 2019-04-24 | 2022-03-01 | The Regents Of The University Of California | Label-free bio-aerosol sensing using mobile microscopy and deep learning |
US11320362B2 (en) | 2016-09-23 | 2022-05-03 | The Regents Of The University Of California | System and method for determining yeast cell viability and concentration |
US11460395B2 (en) | 2019-06-13 | 2022-10-04 | The Regents Of The University Of California | System and method for measuring serum phosphate levels using portable reader device |
US11501544B2 (en) | 2018-06-04 | 2022-11-15 | The Regents Of The University Of California | Deep learning-enabled portable imaging flow cytometer for label-free analysis of water samples |
US11598699B2 (en) * | 2013-02-06 | 2023-03-07 | Alentic Microscience Inc. | Sample processing improvements for quantitative microscopy |
US11874452B2 (en) | 2013-06-26 | 2024-01-16 | Alentic Microscience Inc. | Sample processing improvements for microscopy |
US11893779B2 (en) | 2018-10-18 | 2024-02-06 | The Regents Of The University Of California | Device and method for motility-based label-free detection of motile objects in a fluid sample |
Families Citing this family (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9075225B2 (en) | 2009-10-28 | 2015-07-07 | Alentic Microscience Inc. | Microscopy imaging |
US10254279B2 (en) | 2013-03-29 | 2019-04-09 | Nima Labs, Inc. | System and method for detection of target substances |
US10466236B2 (en) | 2013-03-29 | 2019-11-05 | Nima Labs, Inc. | System and method for detecting target substances |
US10249035B2 (en) * | 2013-03-29 | 2019-04-02 | Nima Labs, Inc. | System and method for detecting target substances |
WO2014160861A1 (en) | 2013-03-29 | 2014-10-02 | 6SensorLabs, Inc. | A portable device for detection of harmful substances |
WO2015017046A1 (en) | 2013-07-31 | 2015-02-05 | The Regents Of The University Of California | Fluorescent imaging using a flatbed scanner |
DE102013016368A1 (en) * | 2013-09-30 | 2015-04-02 | Carl Zeiss Microscopy Gmbh | Light microscope and microscopy method for examining a microscopic sample |
CA3036385C (en) | 2013-12-17 | 2022-06-21 | Alentic Microscience Inc. | Dosimeters including lensless imaging systems |
US10871745B2 (en) | 2014-08-01 | 2020-12-22 | The Regents Of The University Of California | Device and method for iterative phase recovery based on pixel super-resolved on-chip holography |
JP6627083B2 (en) * | 2014-08-22 | 2020-01-08 | パナソニックIpマネジメント株式会社 | Image acquisition device and image forming system |
US10430933B2 (en) * | 2015-02-27 | 2019-10-01 | The Bringham and Women's Hospital, Inc. | Imaging systems and methods of using the same |
JP6661947B2 (en) * | 2015-10-01 | 2020-03-11 | 凸版印刷株式会社 | Fluorescent image capturing apparatus and fluorescent image capturing method |
JP6750033B2 (en) * | 2016-04-08 | 2020-09-02 | アレンティック マイクロサイエンス インコーポレイテッド | Sample processing for microscopy |
WO2017221330A1 (en) * | 2016-06-21 | 2017-12-28 | オリンパス株式会社 | Digital holographic image pickup device and sample holder |
CN106406064B (en) * | 2016-09-12 | 2019-08-02 | 芜湖能盟信息技术有限公司 | Holography display automatic device |
WO2018059469A1 (en) * | 2016-09-28 | 2018-04-05 | Versitech Limited | Recovery of pixel resolution in scanning imaging |
FR3059113B1 (en) * | 2016-11-24 | 2022-11-04 | Commissariat Energie Atomique | METHOD FOR FORMING A HIGH RESOLUTION IMAGE BY LENSLESS IMAGING |
JP6760477B2 (en) * | 2017-02-28 | 2020-09-23 | 株式会社島津製作所 | Cell observation device |
JP6860064B2 (en) * | 2017-03-03 | 2021-04-14 | 株式会社島津製作所 | Cell observation device |
CN110383044A (en) * | 2017-03-03 | 2019-10-25 | 株式会社岛津制作所 | Cell observation device |
EP3593111A4 (en) | 2017-03-10 | 2020-02-26 | The Regents of the University of California, A California Corporation | Mobile microscopy system for air quality monitoring |
US11380438B2 (en) | 2017-09-27 | 2022-07-05 | Honeywell International Inc. | Respiration-vocalization data collection system for air quality determination |
WO2019170757A2 (en) * | 2018-03-07 | 2019-09-12 | LIFE TECHNOLOGIES GmbH | Imaging apparatuses, systems and methods |
JP6950813B2 (en) * | 2018-03-20 | 2021-10-13 | 株式会社島津製作所 | Cell observation device |
JP7036192B2 (en) * | 2018-03-20 | 2022-03-15 | 株式会社島津製作所 | Cell observation device |
US11514325B2 (en) | 2018-03-21 | 2022-11-29 | The Regents Of The University Of California | Method and system for phase recovery and holographic image reconstruction using a neural network |
CN108508588B (en) * | 2018-04-23 | 2019-11-15 | 南京大学 | A kind of multiple constraint information without lens holographic microphotography phase recovery method and its device |
US11222415B2 (en) | 2018-04-26 | 2022-01-11 | The Regents Of The University Of California | Systems and methods for deep learning microscopy |
CN108896331B (en) * | 2018-05-11 | 2020-02-18 | 中国汽车工业工程有限公司 | Method for online detection of grease cleaning efficiency of cleaning equipment before coating |
FR3082944A1 (en) * | 2018-06-20 | 2019-12-27 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | METHOD FOR OBSERVING A SAMPLE WITH LENS-FREE IMAGING, TAKING INTO ACCOUNT A SPATIAL DISPERSION IN THE SAMPLE |
CN109270670A (en) * | 2018-10-31 | 2019-01-25 | 上海理鑫光学科技有限公司 | LED array light source, without lens microscope and image processing method |
US20200319217A1 (en) * | 2019-04-08 | 2020-10-08 | Molecular Devices, Llc | Incubation System and Method for Automated Cell Culture and Testing |
US20220206434A1 (en) * | 2019-04-22 | 2022-06-30 | The Regents Of The University Of California | System and method for deep learning-based color holographic microscopy |
US10876949B2 (en) | 2019-04-26 | 2020-12-29 | Honeywell International Inc. | Flow device and associated method and system |
US10794810B1 (en) | 2019-08-02 | 2020-10-06 | Honeywell International Inc. | Fluid composition sensor device and method of using the same |
CN110426397B (en) * | 2019-08-14 | 2022-03-25 | 深圳市麓邦技术有限公司 | Optical detection system, device and method |
KR20210068890A (en) | 2019-12-02 | 2021-06-10 | 삼성전자주식회사 | Inspection apparatus and method based on CDI(Coherent Diffraction Imaging) |
US11221288B2 (en) | 2020-01-21 | 2022-01-11 | Honeywell International Inc. | Fluid composition sensor device and method of using the same |
US11333593B2 (en) | 2020-02-14 | 2022-05-17 | Honeywell International Inc. | Fluid composition sensor device and method of using the same |
US11391613B2 (en) | 2020-02-14 | 2022-07-19 | Honeywell International Inc. | Fluid composition sensor device and method of using the same |
US11181456B2 (en) | 2020-02-14 | 2021-11-23 | Honeywell International Inc. | Fluid composition sensor device and method of using the same |
US11835432B2 (en) | 2020-10-26 | 2023-12-05 | Honeywell International Inc. | Fluid composition sensor device and method of using the same |
US20220364973A1 (en) * | 2021-05-13 | 2022-11-17 | Honeywell International Inc. | In situ fluid sampling device and method of using the same |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6876474B2 (en) | 2002-11-27 | 2005-04-05 | Dalhousie University | Method for tracking particles and life forms in three dimensions and in time |
US20070230814A1 (en) * | 2004-05-24 | 2007-10-04 | Miki Haseyama | Image Reconstructing Method |
US7729049B2 (en) | 2007-05-26 | 2010-06-01 | Zeta Instruments, Inc. | 3-d optical microscope |
US9007433B2 (en) | 2009-10-20 | 2015-04-14 | The Regents Of The University Of California | Incoherent lensfree cell holography and microscopy on a chip |
JP5498129B2 (en) * | 2009-11-09 | 2014-05-21 | オリンパス株式会社 | Virtual microscope system |
CA2797566A1 (en) | 2010-05-03 | 2011-11-10 | The Regents Of The University Of California | Wide-field lensless fluorescent imaging on a chip |
WO2012054351A2 (en) | 2010-10-18 | 2012-04-26 | The Regents Of The University Of California | Microscopy method and system incorporating nanofeatures |
US9426429B2 (en) | 2010-10-26 | 2016-08-23 | California Institute Of Technology | Scanning projective lensless microscope system |
WO2012082776A2 (en) | 2010-12-14 | 2012-06-21 | The Regents Of The University Of California | Method and device for holographic opto-fluidic microscopy |
US9057702B2 (en) | 2010-12-21 | 2015-06-16 | The Regents Of The University Of California | Compact wide-field fluorescent imaging on a mobile device |
US20120157160A1 (en) | 2010-12-21 | 2012-06-21 | The Regents Of The University Of California | Compact wide-field fluorescent imaging on a mobile device |
EP2661603A4 (en) | 2011-01-06 | 2014-07-23 | Univ California | Lens-free tomographic imaging devices and methods |
US8866063B2 (en) | 2011-03-31 | 2014-10-21 | The Regents Of The University Of California | Lens-free wide-field super-resolution imaging device |
US20140160236A1 (en) | 2011-07-29 | 2014-06-12 | The Regents Of The University Of California | Lensfree holographic microscopy using wetting films |
PL2812675T3 (en) | 2012-02-06 | 2021-12-13 | The Regents Of The University Of California | Portable rapid diagnostic test reader |
WO2013184835A1 (en) | 2012-06-07 | 2013-12-12 | The Regents Of The University Of California | Wide-field microscopy using self-assembled liquid lenses |
WO2014012031A1 (en) | 2012-07-13 | 2014-01-16 | The Regents Of The University Of California | High throughput lens-free three-dimensional tracking of sperm |
US20140120563A1 (en) | 2012-10-29 | 2014-05-01 | The Regents Of The University Of California | Allergen testing platform for use with mobile electronic devices |
US20160070092A1 (en) | 2014-09-04 | 2016-03-10 | The Regents Of The University Of California | Method and device for fluorescent imaging of single nano-particles and viruses |
-
2012
- 2012-07-20 WO PCT/US2012/047725 patent/WO2013070287A1/en active Application Filing
- 2012-07-20 US US14/356,131 patent/US9715099B2/en active Active
-
2017
- 2017-06-15 US US15/624,624 patent/US20170357083A1/en not_active Abandoned
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11598699B2 (en) * | 2013-02-06 | 2023-03-07 | Alentic Microscience Inc. | Sample processing improvements for quantitative microscopy |
US11874452B2 (en) | 2013-06-26 | 2024-01-16 | Alentic Microscience Inc. | Sample processing improvements for microscopy |
US10248838B2 (en) | 2015-12-04 | 2019-04-02 | The Regents Of The University Of California | Method and device for single molecule imaging |
US10838192B2 (en) | 2016-05-10 | 2020-11-17 | The Regents Of The University Of California | Method and device for high-resolution color imaging using merged images from holographic and lens-based devices |
US10795315B2 (en) | 2016-05-11 | 2020-10-06 | The Regents Of The University Of California | Method and system for pixel super-resolution of multiplexed holographic color images |
US11397405B2 (en) | 2016-05-11 | 2022-07-26 | The Regents Of The University Of California | Method and system for pixel super-resolution of multiplexed holographic color images |
US11320362B2 (en) | 2016-09-23 | 2022-05-03 | The Regents Of The University Of California | System and method for determining yeast cell viability and concentration |
US11501544B2 (en) | 2018-06-04 | 2022-11-15 | The Regents Of The University Of California | Deep learning-enabled portable imaging flow cytometer for label-free analysis of water samples |
US11893779B2 (en) | 2018-10-18 | 2024-02-06 | The Regents Of The University Of California | Device and method for motility-based label-free detection of motile objects in a fluid sample |
DE102018129981A1 (en) | 2018-11-27 | 2020-05-28 | Basler Ag | Processing of a digital intensity image: method, storage medium and imaging system |
US11262286B2 (en) | 2019-04-24 | 2022-03-01 | The Regents Of The University Of California | Label-free bio-aerosol sensing using mobile microscopy and deep learning |
US11460395B2 (en) | 2019-06-13 | 2022-10-04 | The Regents Of The University Of California | System and method for measuring serum phosphate levels using portable reader device |
Also Published As
Publication number | Publication date |
---|---|
US9715099B2 (en) | 2017-07-25 |
WO2013070287A1 (en) | 2013-05-16 |
US20140300696A1 (en) | 2014-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170357083A1 (en) | Maskless imaging of dense samples using multi-height lensfree microscope | |
US20220254538A1 (en) | Fourier ptychographic imaging systems, devices, and methods | |
US8866063B2 (en) | Lens-free wide-field super-resolution imaging device | |
US20210181673A1 (en) | Device and method for iterative phase recovery based on pixel super-resolved on-chip holography | |
Wu et al. | Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring | |
US20140160236A1 (en) | Lensfree holographic microscopy using wetting films | |
Luo et al. | Synthetic aperture-based on-chip microscopy | |
US9605941B2 (en) | Lens-free tomographic imaging devices and methods | |
US9426429B2 (en) | Scanning projective lensless microscope system | |
US20200186705A1 (en) | Variable-illumination fourier ptychographic imaging devices, systems, and methods | |
US20140071452A1 (en) | Fluid channels for computational imaging in optofluidic microscopes | |
US10018818B2 (en) | Structured standing wave microscope | |
Prajapati et al. | Muscope: a miniature on-chip lensless microscope | |
Liao | Imaging Innovations for Whole-Slide and Hyperspectral Microscopy | |
Moreno et al. | A nano-illumination microscope with 7 mm2 extended field-of-view and resolution below 1µm | |
Grinbaum | Multi-Height-Based Lensfree On-Chip Microscopy for Biomedical Applications | |
Greenbaum et al. | 2012-07-01 | Computational Imaging: Lens-free on-chip microscope is field-portable. A field-portable on-chip holographic microscope images dense and blended biological samples using multi-height lens-free imaging. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
| AS | Assignment | Owner name: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIFORNIA. Free format text: CONFIRMATORY ASSIGNMENT;ASSIGNOR:GRINBAUM, ALON;REEL/FRAME:054305/0750. Effective date: 20201102 |
| AS | Assignment | Owner name: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIFORNIA. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE PCT NUMBER PREVIOUSLY RECORDED ON REEL 054305 FRAME 0750. ASSIGNOR(S) HEREBY CONFIRMS THE CONFIRMATORY ASSIGNMENT;ASSIGNOR:GRINBAUM, ALON;REEL/FRAME:054766/0314. Effective date: 20201102 |