WO2021119201A1 - Methods and systems for three-dimensional lightsheet imaging - Google Patents

Methods and systems for three-dimensional lightsheet imaging

Info

Publication number
WO2021119201A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
transparent side
sample holder
samples
laser beam
Prior art date
Application number
PCT/US2020/064117
Other languages
French (fr)
Inventor
Janice Lai
David Stern
Original Assignee
Enumerix, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Enumerix, Inc. filed Critical Enumerix, Inc.
Priority to EP20899785.8A priority Critical patent/EP4073573A4/en
Priority to CN202080096198.1A priority patent/CN115516367A/en
Publication of WO2021119201A1 publication Critical patent/WO2021119201A1/en
Priority to US17/836,737 priority patent/US20230029710A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0032Optical details of illumination, e.g. light-sources, pinholes, beam splitters, slits, fibers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N15/1429Signal processing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N15/1434Optical arrangements
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64Fluorescence; Phosphorescence
    • G01N21/645Specially adapted constructive features of fluorimeters
    • G01N21/6456Spatial resolved fluorescence measurements; Imaging
    • G01N21/6458Fluorescence microscopy
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0036Scanning details, e.g. scanning stages
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/0052Optical details of the image generation
    • G02B21/0076Optical details of the image generation arrangements using fluorescence or luminescence
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/002Scanning microscopes
    • G02B21/0024Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
    • G02B21/008Details of detection or image processing, including general computer control
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/24Base structure
    • G02B21/26Stages; Adjusting means therefor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N15/1434Optical arrangements
    • G01N2015/144Imaging characterised by its optical setup
    • G01N2015/1445Three-dimensional imaging, imaging in different image planes, e.g. under different angles or at different depths, e.g. by a relative motion of sample and detector, for instance by tomography
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Optical investigation techniques, e.g. flow cytometry
    • G01N2015/1497Particle shape
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64Fluorescence; Phosphorescence
    • G01N21/645Specially adapted constructive features of fluorimeters
    • G01N2021/6482Sample cells, cuvettes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N21/64Fluorescence; Phosphorescence
    • G01N21/645Specially adapted constructive features of fluorimeters
    • G01N21/6452Individual samples arranged in a regular 2D-array, e.g. multiwell plates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2201/00Features of devices classified in G01N21/00
    • G01N2201/12Circuits of general importance; Signal processing
    • G01N2201/129Using chemometrical methods
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/30Collimators
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/02Simple or compound lenses with non-spherical faces
    • G02B3/06Simple or compound lenses with non-spherical faces with cylindrical or toric faces
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/02Simple or compound lenses with non-spherical faces
    • G02B3/08Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens

Definitions

  • Light sheet imaging is used to illuminate thin sections of a sample.
  • Light sheet imaging techniques have faster image capture rates than comparable point scanning methods such as laser confocal scanning microscopy.
  • the present disclosure provides a light sheet imaging system comprising: a collimated illumination source configured to emit a collimated beam along a beam path; a Powell lens positioned in the beam path after the collimated illumination source and configured to expand the collimated beam along a single axis; a first cylindrical lens positioned in the beam path after the Powell lens and configured to re-collimate the beam along a single axis, thereby producing a collimated beam having a short axis orthogonal to a direction of propagation and a long axis orthogonal to the short axis and the direction of propagation; and a second cylindrical lens positioned in the beam path after the first cylindrical lens and configured to focus the beam along the short axis to a focal plane.
  • a three-dimensional sample is positioned at the focal plane.
  • the three-dimensional sample comprises fluorescent droplets and/or fluorescently labeled cells.
  • the light sheet imaging system further comprises a camera comprising an imaging path orthogonal to the beam path, wherein the camera is positioned such that the imaging path intersects the beam path at the focal plane.
  • the present disclosure provides a sample holder comprising: a fluid reservoir configured to contain a fluid, the fluid reservoir comprising a base, a first transparent side, a second transparent side connected to the first transparent side at a right angle, and a third transparent side opposite the first transparent side and connected to the first transparent side and the second transparent side; a tube holder configured to hold a row of a plurality of tubes partially submerged in the fluid; and a translating stage supporting the tube holder and configured to translate the tube holder along a translation axis, wherein the translation axis is oriented at a 45° angle relative to the first transparent side and the second transparent side and parallel to the row of the plurality of tubes.
  • the sample holder further comprises a lid configured to fit on top of the first transparent side, the second transparent side, and the third transparent side, and configured to retain the fluid.
  • the lid further comprises an O-ring positioned at the contact point between the lid and the first transparent side, the second transparent side, and the third transparent side, wherein the O-ring is configured to prevent fluid leaking.
  • the lid comprises a hole configured to accommodate the tube holder and the translating stage.
  • the sample holder further comprises a first opaque side connecting the first transparent side to the third transparent side, and a second opaque side connecting the second transparent side and the third transparent side.
  • the present disclosure provides a sample holder comprising: a fluid reservoir configured to contain a fluid, the fluid reservoir comprising a base, a first transparent side, a second transparent side connected to the first transparent side at a right angle, and a third transparent side opposite the first transparent side and connected to the first transparent side and the second transparent side; a tube holder configured to hold a row of a plurality of tubes partially submerged in the fluid; and a translating stage supporting the fluid reservoir and the tube holder and configured to translate the fluid reservoir and the tube holder along a first translation axis and a second translation axis, wherein the first translation axis is oriented parallel to the first transparent side and perpendicular to the second transparent side, and wherein the second translation axis is perpendicular to the first transparent side and parallel to the second transparent side, and wherein the first translation axis and the second translation axis are oriented at a 45° angle relative to the row of the plurality of tubes.
  • the sample holder further comprises a fourth transparent side connected to the second transparent side at a right angle and parallel to the first transparent side, and a fifth transparent side connected to the fourth transparent side at a right angle and parallel to the second transparent side.
  • the sample holder further comprises a plurality of transparent sides connected in series at right angles to the second transparent side wherein alternating transparent sides of the plurality of transparent sides are parallel to the first transparent side or parallel to the second transparent side.
  • the sample holder further comprises a lid configured to fit on top of the first transparent side, the second transparent side, and the third transparent side, and configured to retain the fluid.
  • the lid further comprises an O-ring positioned at the contact point between the lid and the first transparent side, the second transparent side, and the third transparent side, wherein the O-ring is configured to prevent fluid leaking.
  • the fluid comprises an index matching fluid.
  • the sample holder comprises a polymer.
  • the plurality of tubes comprises PCR tubes.
  • the plurality of tubes comprises microcentrifuge tubes.
  • the sample holder is configured to accommodate up to 12 tubes.
  • the sample holder is configured to accommodate up to 8 tubes.
  • the sample holder is configured to accommodate up to 4 tubes.
  • the present disclosure provides a method of image processing comprising: collecting a three-dimensional image data set comprising a plurality of cross-section images; identifying a plurality of substantially spherical fluorescent particles in the three-dimensional image data set; dividing one or more cross-section images of the plurality of cross-section images into a plurality of image grids; applying an intensity threshold independently to one or more image grids of the plurality of image grids; and identifying one or more signal positive particles from the plurality of substantially spherical fluorescent particles, wherein the one or more signal positive particles have intensities greater than the intensity threshold.
  • the method further comprises applying a smoothing filter to one or more cross-section images of the plurality of cross-section images prior to identifying the plurality of substantially spherical fluorescent particles.
  • the method further comprises applying a median filter to one or more cross-section images of the plurality of cross-section images prior to identifying the plurality of substantially spherical fluorescent particles.
  • identifying the plurality of substantially spherical fluorescent particles comprises performing a three-dimensional convolution using a three-dimensional template.
  • the method further comprises determining an intensity of one or more substantially spherical fluorescent particles of the plurality of substantially spherical fluorescent particles by a three-dimensional local maxima analysis. In some aspects, the method further comprises performing a second three-dimensional maxima analysis to remove one or more asymmetric particles from the plurality of substantially spherical fluorescent particles.
  • the intensity threshold is determined by a median plus or minus a standard deviation of a histogram of a log10 of the intensities of the image grid. In some aspects, the intensity threshold maximizes the separation between one or more intensities of the signal positive particles and the intensity threshold.
  • the method further comprises repeating the applying the intensity threshold for one or more image grids corresponding to a ratio outlier and determining a refined ratio for one or more image grids corresponding to a ratio outlier after applying the intensity threshold.
  • the method further comprises selecting a region of interest in one or more cross-section images of the plurality of cross-section images, wherein the region of interest comprises a sample area.
  • the method further comprises identifying a background intensity and subtracting the background intensity from an intensity of one or more cross-section images of the plurality of cross-section images.
  • the method further comprises selecting a subset of the plurality of substantially spherical fluorescent particles, wherein the substantially spherical fluorescent particles comprise a size larger than a minimum size threshold and smaller than a maximum size threshold. In some aspects, the method further comprises identifying a plurality of signal positive particles from the plurality of substantially spherical fluorescent particles comprising an intensity above a minimum threshold value.
  • the method further comprises determining if a low signal region is present in a cross-section image, and, if a low signal region is present, identifying the low signal region and a high signal region within the region of interest of one or more cross-section images and designating a low signal image grid comprising the low signal region, or, if no low signal region is present, designating the region of interest as a high signal region.
  • the method further comprises determining a ratio of a number of signal positive particles to a number of the plurality of spherical fluorescent particles for one or more image grids of the plurality of image grids.
  • the method further comprises identifying ratio outliers comprising ratios that are higher or lower than a median of the ratios of the plurality of image grids. In some aspects, the method further comprises removing the ratio outliers. In some aspects, the method is automated. In some aspects, the method is performed without user intervention. In some embodiments, the method further comprises performing cluster removal to eliminate overlapping fluorescent droplets. In some embodiments, the cluster removal comprises eliminating droplets having a peak density below a set threshold. In some embodiments, the peak density is determined based on a droplet size. In some embodiments, the droplet size is calculated based on a full width half maximum. In some embodiments, the cluster removal comprises close neighbor/volume exclusion/non-maximum suppression.
  • cluster removal comprises determining the number of close droplet neighbors for each droplet. In some embodiments, cluster removal further comprises generating a histogram of close droplet neighbors. In some embodiments, cluster removal further comprises eliminating outlier droplets having a determined quantity of neighbors over a set threshold. In some embodiments, the set threshold is based on a probability that the merged cluster of droplets is detected as a false positive.
  • Further provided herein is a method of analyzing a plurality of samples, the method comprising automatically subjecting each sample of the plurality of samples to a laser beam, one sample at a time, thereby generating emission light in each sample of the plurality of samples, and wherein the plurality of samples comprises at least 4 samples.
  • the plurality of samples are located in a sample holder.
  • the sample holder is moved in a direction that is perpendicular to the illumination path of the laser beam to move a scanned first sample out of the laser beam and move a second sample into the beam for scanning.
  • the method further comprises, while subjecting each sample of the plurality of samples to the laser beam, moving the laser beam relative to the sample.
  • moving the laser beam relative to the sample comprises pivoting movements of the laser beam.
  • moving the laser beam relative to the sample comprises translating movements of the laser beam.
  • the pivoting movements or translating movements reduce artifacts in the resulting image data compared to a method without pivoting movements or translating movements.
  • the laser beam moves relative to the sample with a frequency of at least about 0.1 kHz. In some aspects, the laser beam moves relative to the sample with a frequency of from about 0.1 kHz to about 20 kHz. In some aspects, the laser beam moves relative to the sample with a frequency of at least about 1 kHz. In some aspects, the laser beam is a sheet of laser light. In some aspects, the plurality of samples comprises at least 8 samples. In some aspects, the plurality of samples are subjected to a second laser beam. In some aspects, the second laser beam is configured to illuminate a site of a sample of the plurality of samples that is opposite to the site illuminated by the laser beam. In some aspects, each sample of the plurality of samples is subjected to the laser beam and the second laser beam simultaneously.
  • FIG. 1 shows a schematic of a light sheet imaging system (left) and positioning of a sample within an imaging field of view at different cross-section depths (right, (i), (ii), and (iii));
  • FIG. 2 shows an exemplary sample holder design for imaging multiple tubes sequentially without user intervention using light sheet imaging.
  • FIG. 2A shows an isometric view of the sample holder.
  • FIG. 2B shows an image of the sample holder and sample tubes positioned within a light sheet imaging system;
  • FIG. 3 shows an exemplary sample holder design for imaging multiple tubes sequentially without user intervention using light sheet imaging.
  • FIG. 3A shows an isometric view of the sample holder.
  • FIG. 3B shows a top view of the sample holder within a light sheet imaging system;
  • FIG. 4 shows exemplary optical configurations to produce a focused light sheet from a collimated laser source.
  • FIG. 4A shows an optical configuration comprising three cylindrical lenses and an aspheric lens.
  • FIG. 4B shows an optical configuration comprising a Powell lens and two cylindrical lenses;
  • FIGS. 5A, 5B, 5C, 5D, 5E, 5F, and 5G show a method of image processing to identify and count signal positive fluorescent particles and signal negative fluorescent particles.
  • FIG. 5A shows region of interest (ROI) extraction to identify regions of interest comprising a sample area.
  • FIG. 5B shows local background subtraction from a region of interest.
  • FIG. 5C shows three-dimensional (3D) convolution with a spherical template to identify substantially spherical fluorescent particles.
  • FIG. 5D shows local maxima and symmetry checks using the three-dimensional convolution to eliminate asymmetric particles.
  • FIG. 5E shows detection of a low intensity region (“murky layer”) within a region of interest.
  • FIG. 5F shows local brightness histograms and auto-thresholding for individual image grids within a region of interest.
  • FIG. 5G shows global correction to remove grid outliers from two distinct clusters of positive particles having a ratio of signal positive particles to total particles that is significantly higher or significantly lower than the median ratio for all grids;
  • FIG. 6 shows a setup of an imaging system and a method for analyzing 8 or more samples using light sheet imaging.
  • FIG. 6A shows analyte samples (here a 96-well plate) being loaded onto a sample loading block of the imaging system.
  • FIG. 6B shows an 8-sample strip of the 96-well plate being picked up to image 8 samples at a time;
  • FIG. 6C shows an 8-sample strip that is located within the imaging chamber and scanned using an optical configuration to produce a focused light sheet from a collimated laser source;
  • FIG. 7 shows different methods for improving image quality and reducing artifacts for analyzing samples using light sheet imaging.
  • FIG. 7A shows that pivoting along the focal plane of the sample relative to the laser beam can reduce artifacts in the resulting image.
  • FIG. 7B shows that translating along the focal plane of the sample relative to the laser beam can reduce artifacts in the resulting image.
  • FIG. 7C shows that dual sided illumination of a sample can also reduce strip and/or shadow artifacts in the resulting image.
  • FIG. 8A shows an exemplary image of droplet candidate extraction.
  • FIG. 8B shows an exemplary diagram of a convolutional neural network (CNN).
  • FIG. 8C shows an exemplary graph of non-maximum suppression and cluster removal.
  • FIG. 9 shows a non-limiting example of a computing device; in this case, a device with one or more processors, memory, storage, and a network interface.
  • Three-dimensional imaging may be used to image three-dimensional samples, such as an organism, a tissue, a cell, or a liquid sample. Three-dimensional imaging may enable higher throughput imaging than comparable one-dimensional or two-dimensional imaging techniques by imaging larger sample volumes than one-dimensional or two-dimensional imaging techniques. Three-dimensional imaging techniques, such as light sheet imaging, may be used to image fluorescent samples containing a plurality of fluorescent particles present in different cross-sectional planes of the sample.
  • a fluorescent sample may comprise a sample tube containing fluorescent particles or droplets, a cell with labeled proteins or nucleic acids, an immunofluorescent tissue sample, or a fluorescently labeled organism.
  • a three-dimensional fluorescent sample may be a digital polymerase chain reaction (dPCR) sample containing signal positive and signal negative fluorescent droplets.
  • Compared to one-dimensional and two-dimensional imaging techniques, three-dimensional imaging techniques have many advantages, such as higher throughput imaging methods and sequential imaging of multiple three-dimensional samples without user intervention. However, compared to one-dimensional and two-dimensional imaging techniques, three-dimensional imaging techniques may have additional complications due to the nature of handling three-dimensional samples and processing three-dimensional image data sets, such as uneven background intensities, uneven illumination intensity, or signal variability at different cross-sectional planes or positions within a cross-sectional plane.
  • a light sheet imaging system may comprise an illumination system configured to produce a sheet of light focused in a sample.
  • the sample may be a three-dimensional (3D) sample, such as a sample in a tube or vial.
  • the sample may be a two-dimensional (2D) sample, such as a sample in a planar array or plate.
  • the sample may be a one-dimensional (1D) sample, such as a sample in a flow channel.
  • sample holder systems for high throughput light sheet imaging of multiple three-dimensional samples may be used without user intervention.
  • Further disclosed herein are automated image processing methods to identify and quantify fluorescent particles within three-dimensional image sets without user intervention or user bias.
  • the three-dimensional image sets may be collected using light sheet imaging of a three-dimensional sample.
  • a fluorescent particle may be a particle such as a nanoparticle or bead with a fluorescent moiety.
  • a fluorescent particle may be a droplet such as a lipid droplet comprising a fluorescent moiety.
  • a fluorescent particle may be a cell comprising a fluorescent moiety.
  • a fluorescent particle may be a molecule or moiety such as an organic fluorophore.
  • Disclosed herein are light sheet illumination systems for uniform light sheet illumination of a sample cross-sectional plane. Also disclosed herein are sample holders for positioning a plurality of three-dimensional samples within an imaging system and re-positioning the samples for sequential imaging of the samples without user intervention.
  • the illumination systems disclosed herein may convert a collimated illumination source into a light sheet to image cross-sectional planes within a sample.
  • the collimated illumination source may be a laser.
  • the light sheet may be focused along a single axis.
  • the illumination system may be configured such that the focal plane is positioned within a sample.
  • the sample may be a one-dimensional sample, a two-dimensional sample, or a three-dimensional sample.
  • Exemplary light sheet illumination configurations are shown in FIG. 4.
  • an illumination source may emit a collimated beam (e.g., a laser beam).
  • the collimated beam may pass through a first cylindrical lens, which causes the beam to converge along a first axis (e.g., a horizontal axis).
  • the beam, diverging after passing through its focus, may pass through an aspheric lens which causes the beam to collimate along the first axis and converge along a second axis perpendicular to the first axis (e.g., a vertical axis).
  • the beam may then pass through a second cylindrical lens having a curved surface oriented perpendicular to the curved surface of the first cylindrical lens.
  • the second cylindrical lens may collimate the beam along the second axis, thereby producing a collimated, elliptical beam that is elongated along the second axis.
  • the resulting beam may have a Gaussian profile.
  • the beam may then pass through a third cylindrical lens having a curved surface oriented parallel to the curved surface of the second cylindrical lens.
  • the third cylindrical lens may focus the elliptical beam along the first axis toward a focal plane.
  • the order of the optical elements may be rearranged.
  • the aspheric lens may be positioned in the beam path before the first cylindrical lens, or the aspheric lens may be positioned after the second cylindrical lens.
  • a collimated illumination source emits a collimated beam (e.g., a laser beam).
  • the collimated beam may pass through a line generating lens.
  • the line generating lens may be a Powell lens or a laser line generator lens.
  • after passing through the line generating lens, the beam may have a linear profile that diverges along a first axis (e.g., a vertical axis).
  • the beam may then pass through a cylindrical lens which may focus the beam along a second axis (e.g., a horizontal axis) toward a focal plane.
  • compared to the Gaussian beam profile produced by the optical configuration shown in FIG. 4A, the linear beam profile produced after the line generating lens shown in FIG. 4B may be longer and have a more uniform intensity across its height. Additionally, the optical configuration shown in FIG. 4B may produce less spherical aberration as compared to the optical configuration shown in FIG. 4A due to the use of fewer optical elements. In both optical configurations, long focal length lenses may be used to reduce spherical aberrations.
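  • By way of illustration only, the thickness of the focused light sheet produced by the final focusing lens can be estimated from Gaussian beam optics, as in the sketch below. This is not part of the disclosure; the wavelength, input beam size, and focal length are assumed values chosen only to show how a longer focal length trades a thicker sheet for a longer usable waist and lower aberration.

```python
import math

def sheet_waist_um(wavelength_nm: float, focal_length_mm: float, input_beam_mm: float) -> float:
    """Approximate 1/e^2 waist radius (in um) of a beam focused along one axis by a
    cylindrical lens: w0 ~= 2 * lambda * f / (pi * D), with D the input beam size."""
    wavelength_m = wavelength_nm * 1e-9
    f_m = focal_length_mm * 1e-3
    d_m = input_beam_mm * 1e-3
    return 2.0 * wavelength_m * f_m / (math.pi * d_m) * 1e6

def rayleigh_range_mm(waist_um: float, wavelength_nm: float) -> float:
    """Rayleigh range z_R = pi * w0^2 / lambda, the distance over which the sheet
    stays close to its minimum thickness."""
    w0_m = waist_um * 1e-6
    return math.pi * w0_m ** 2 / (wavelength_nm * 1e-9) * 1e3

# Assumed, illustrative parameters (not from the disclosure): 488 nm laser,
# 2 mm input beam along the focused axis, 100 mm focal length cylindrical lens.
w0 = sheet_waist_um(488, 100, 2.0)
print(f"waist ~ {w0:.1f} um, usable sheet length ~ {2 * rayleigh_range_mm(w0, 488):.1f} mm")
```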
  • a light sheet imaging system may comprise an illumination system as described herein and a detector.
  • the detector may be a camera.
  • the camera may be a wide field camera.
  • the camera may have a maximum scan area of 20 µL.
  • the camera may have a maximum scan area of 50 µL or more.
  • Exemplary cameras include, but are not limited to, a QHY174 camera with a scan area of 1920 x 1200 pixels, 5.6 x 3.5 mm, and a resolution of 2.93 µm/pixel, and a Basler acA2440-35um camera with a scan area of 2448 x 2048 pixels, 8.4 x 7.1 mm, and a resolution of 3.45 µm/pixel.
  • a camera with a larger scan area can scan an entire PCR tube cross section without the need for staggering.
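  • As a rough illustration of the staggering consideration (not part of the disclosure), the number of image tiles needed per cross-section can be estimated by comparing the camera field of view with the tube diameter; the ~5.5 mm inner diameter used below is an assumed, typical value for a small PCR tube.

```python
import math

def tiles_needed(fov_mm: tuple, tube_id_mm: float) -> int:
    """Number of staggered tiles needed to cover a square bounding the tube
    cross-section with a camera field of view given as (width_mm, height_mm)."""
    return math.ceil(tube_id_mm / fov_mm[0]) * math.ceil(tube_id_mm / fov_mm[1])

print(tiles_needed((5.6, 3.5), 5.5))  # smaller FOV -> 2 tiles (staggering needed)
print(tiles_needed((8.4, 7.1), 5.5))  # larger FOV  -> 1 tile (no staggering)
```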
  • the camera may have an imaging path.
  • the camera may be positioned within the light sheet imaging system such that the imaging path is orthogonal to the illumination beam path and intersects the illumination beam path at the focal plane of the light sheet.
  • a sample may be positioned at the intersection of the imaging path and the illumination beam path.
  • a light sheet imaging system may be configured to allow scanning of 1, 2, 4, 6, 8, 10, 12, 20, or more samples at a time and without user intervention. Such imaging systems may allow for high throughput scanning of a plurality of samples. Such samples may comprise digital PCR samples.
  • a high throughput imaging system may comprise multiple components, including a sample loading block, a sample holder system, an operator arm configured to move a certain number of samples from the sample loading block to the sample holder system, a laser source configured to provide a light sheet, and a detector configured to detect radiation emitted from the sample (e.g., fluorescent light).
  • the sample loading block may be loaded with a plurality of samples (e.g., digital PCR samples), e.g., as shown in FIG. 6A.
  • the plurality of samples are located in a well plate, e.g., a 96-well plate.
  • An operator arm of the light sheet imaging system may be configured to place a certain number of sample tubes, e.g., about 1, 2, 4, 6, 8, 10, 12, 20, or more sample tubes, into a sample holder system as described herein, and as illustrated for an 8-sample strip in FIG. 6B.
  • the sample holder system comprising the samples may be configured to move along an axis such that each sample located in the holder may be scanned by the light source and its emitted radiation detected in the detector unit.
  • the sample holder system may be configured to provide additional movement to a sample, e.g., a pivoting and/or translating movement relative to the light sheet.
  • the light sheet may be moved while a sample is scanned. In such instances, the sample may remain stationary while being scanned, and the light sheet is moving relative to the sample. Such movements of the light sheet can include pivoting and/or translating movements.
  • the movement of the light sheet may be faster than the exposure time of the sample, i.e., the light sheet may complete at least one movement cycle within a single exposure. In such cases, the light sheet may move with a frequency of at least about 0.1, 0.5, 0.7, 1.0, 1.5, 2, 2.5, 5, 10, 15, 20 kilohertz (kHz, 10³ s⁻¹), or higher, or any frequency therebetween.
  • the light sheet may move with a frequency of at least about 1 kHz, or higher.
  • the light sheet may move with a frequency within a range of from 0.1 to 20 kilohertz, 0.1 to 30 kilohertz, from 0.5 to 20 kilohertz, from 0.7 to 20 kilohertz, from 1 to 20 kilohertz, from 2 to 20 kilohertz, from 5 to 20 kilohertz, from 10 to 20 kilohertz, from 15 to 20 kilohertz, from 0.1 to 1 kilohertz, from 0.1 to 2 kilohertz, from 0.1 to 5 kilohertz, from 0.1 to 10 kilohertz, from 0.1 to 15 kilohertz, from 0.5 to 5 kilohertz, from 1 to 10 kilohertz, or from 0.5 to 2 kilohertz.
  • such rapid pivoting of a light sheet can be achieved by using, e.g., a resonance mirror or a galvo-resonant scanner.
  • Such additional movements of the light sheet may reduce artifacts in the resulting images and thus increase image quality.
  • Such artifacts may be caused by light scattering and absorption events during sample scanning using the light sheet.
  • an imaging system is configured to provide a dual side illumination to a sample, as illustrated in FIG. 7C.
  • Such a setup may also improve image quality by reducing artifacts, since the light of each light source only has to travel through half of the sample in order to illuminate the entire sample cross section.
  • the sample holder systems described herein may be used to hold a plurality of three- dimensional samples for sequential imaging of the samples without user intervention.
  • the sample holder may comprise an open chamber or a closed chamber.
  • the three-dimensional samples may be tubes containing liquid, aqueous, or gel samples.
  • the three-dimensional sample may be a tissue sample in a tissue sample holder.
  • the three-dimensional sample may be an organism.
  • the sample holder designs disclosed herein may allow multiple samples (e.g., tubes arranged in a strip tube format) to be imaged automatically without user intervention.
  • the samples may be imaged using a three-dimensional scan imaging method.
  • the samples may be imaged using light sheet imaging, as disclosed herein, or the samples may be imaged using confocal imaging.
  • the sample holder designs may also allow samples with a large cross-section (e.g., a 50 µL PCR tube) to be scanned once without the need to stagger multiple images at each cross-section.
  • the sample holder may be configured to accommodate up to 2, up to 3, up to 4, up to 5, up to 6, up to 7, up to 8, up to 9, up to 10, up to 12, up to 14, up to 15, up to 16, up to 18, up to 20, up to 24 or more three-dimensional samples to be imaged without user intervention.
  • a first sample holder design may comprise a fluid reservoir.
  • the fluid reservoir may be elongated, as shown in FIG. 2A, to accommodate translation of a sample partially submerged within the fluid in the fluid reservoir.
  • the fluid reservoir may comprise three transparent sides (“Clear surfaces” denoted by arrows). Two of the transparent sides, a first transparent side and a second transparent side, may be connected at a right angle.
  • the first transparent side may be configured to transmit an illumination light (e.g., a light sheet).
  • the second transparent side may be configured to transmit an emission light (e.g., a fluorescent emission) toward a detector (e.g., a camera).
  • the third transparent side, shown behind the first transparent side and the second transparent side in FIG. 2A, may be configured to transmit the illumination light out of the fluid reservoir after the illumination light passes through a sample, to reduce reflections within the sample chamber.
  • the sample, such as a plurality of tubes, may be positioned in a sample holder, as shown in FIG. 2B.
  • the sample holder may be supported by a translating stage.
  • the translating stage may be configured to translate the samples along a translation axis while the samples are partially submerged in the fluid.
  • the translation axis may be parallel to the long axis of the fluid reservoir.
  • the translation axis may be at a 45° angle relative to the surfaces of the first transparent side and the second transparent side.
  • the translation axis may be parallel to the surface of the third transparent side.
  • the translating stage may be configured to translate the sample holder along a first translation axis and a second translation axis.
  • the first translation axis may be parallel to the surface of the first transparent side.
  • the second translation axis may be parallel to the surface of the second transparent side.
  • the first translation axis and the second translation axis may be parallel to the surface of the third transparent side.
  • the fluid reservoir may contain an index matching fluid.
  • the index matching fluid may match the index of refraction of the sample.
  • the fluid reservoir and the sample holder may be positioned within a three-dimensional imaging system.
  • the fluid reservoir and the sample holder may be positioned within a light sheet imaging system as described herein.
  • the sample holder may be positioned such that a first sample is positioned at the intersection of the illumination beam path and the imaging path, and translation of the sample along the translation axis enables imaging of sequential cross-sectional planes within the first sample, as shown in FIG. 1.
  • Cross-sectional image planes within the sample may shift relative to the camera field of view depending on the position of the cross-sectional plane within the sample, as illustrated in panels (i), (ii), and (iii) of FIG. 1.
  • a second sample holder design may comprise a fluid reservoir.
  • the fluid reservoir may be configured to accommodate a sample holder holding a plurality of samples, as shown in FIG. 3A, partially submerged within the fluid in the fluid reservoir.
  • the fluid reservoir may comprise a plurality of transparent sides (“Clear surfaces” denoted by arrows). Pairs of the transparent sides may be connected at a right angle, such that the plurality of transparent sides are arranged in a saw tooth pattern.
  • the number of pairs of transparent sides may correspond to the number of samples that the sample holder may hold.
  • the first transparent side of each pair may be configured to transmit an illumination light (e.g., a light sheet).
  • the second transparent side of each pair may be configured to transmit an emission light (e.g., a fluorescent emission) toward a detector (e.g., a camera).
  • An additional transparent side, shown behind the plurality of transparent sides in FIG. 3A, may be configured to transmit the illumination light out of the fluid reservoir after the illumination light passes through a sample to reduce reflections within the sample chamber.
  • the remaining sides of the fluid reservoir may be opaque or partially opaque to reduce reflections within the fluid reservoir.
  • the sample, such as a plurality of tubes, may be positioned in a sample holder, as shown in FIG. 2B.
  • the sample may be positioned such that the illumination beam is not obstructed as it enters and exits the sample.
  • the sample holder and the fluid reservoir may be supported by a translating stage.
  • the translating stage may be configured to translate the samples and the fluid reservoir along a first translation axis and a second translation axis.
  • the first translation axis may be parallel to the surface of the first transparent side of each pair.
  • the second translation axis may be parallel to the surface of the second transparent side of each pair.
  • the first translation axis and the second translation axis may be parallel to the surface of the additional transparent side.
  • the translating stage may be configured to translate the samples and the fluid reservoir along a translation axis while the samples are partially submerged in the fluid.
  • the translation axis may be at a 45° angle relative to the surfaces of the first transparent side of each pair and the second transparent side of each pair.
  • the translation axis may be parallel to the surface of the additional transparent side.
  • the fluid reservoir may contain an index matching fluid.
  • the index matching fluid may match the index of refraction of the sample.
  • the fluid reservoir may further comprise a lid.
  • the lid may comprise an O-ring to prevent leaking of the fluid.
  • the fluid reservoir and the sample holder may be positioned within a three-dimensional imaging system.
  • the fluid reservoir and the sample holder may be positioned within a light sheet imaging system as described herein.
  • the sample holder may be positioned such that a first sample is positioned at the intersection of the illumination beam path and the imaging path and the illumination beam path passes through the first transparent side of a first pair and the imaging path passes through the second transparent side of the first pair.
  • Translation of the sample along the first translation axis enables imaging of sequential cross-sectional planes within the first sample, as shown in FIG. 3B.
  • Translation along the second translation axis may position a second sample at the intersection of the illumination beam path and the imaging path as well as position the second pair of transparent sides within the illumination beam path and imaging path.
  • the second sample holder design may reduce evaporation and wicking of the fluid, provide a smaller image and instrument footprint, and reduce image analysis complexity.
  • the image processing methods described herein may be used to identify and quantify fluorescent particles within a three-dimensional image data set.
  • the fluorescent particles may be droplets (e.g., droplets comprising nucleic acid molecules, protein molecules, cells).
  • the fluorescent particles may be particles or cells suspended in solution.
  • the fluorescent particles may be particles, molecules, or cells suspended in a gel matrix.
  • the resulting cross-sectional plane images may be combined to produce a three-dimensional image data set for each sample.
  • a region of interest within each cross-section may be selected, outlining the confines of the sample (FIG. 5A). This may ensure that only the fluorescent particles inside the tube cross-section are considered.
  • a local background subtraction may be performed to suppress background fluorescence (FIG. 5B).
  • a smoothing filter may be applied to smooth images, and a median filter may be applied to remove hot pixels.
  • the smoothing filter may be a Gaussian filter.
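  • A minimal sketch of this pre-processing step is shown below, assuming Python with NumPy and SciPy; the filter sizes are illustrative placeholders rather than values specified by the disclosure, and the large-sigma Gaussian estimate of the local background is one plausible implementation of the background subtraction.

```python
import numpy as np
from scipy import ndimage as ndi

def preprocess_slice(img: np.ndarray,
                     smooth_sigma: float = 1.0,
                     hot_pixel_size: int = 3,
                     background_sigma: float = 50.0) -> np.ndarray:
    """Denoise one cross-section image and subtract a slowly varying local background."""
    img = img.astype(np.float32)
    img = ndi.median_filter(img, size=hot_pixel_size)    # remove isolated hot pixels
    img = ndi.gaussian_filter(img, sigma=smooth_sigma)   # smooth the image
    background = ndi.gaussian_filter(img, sigma=background_sigma)  # local background estimate
    return np.clip(img - background, 0.0, None)          # local background subtraction
```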
  • a three-dimensional (3D) convolution may be performed using a three-dimensional template to identify fluorescent particles (e.g., corresponding to fluorescent lipid droplets) substantially conforming to a desired shape and within an expected size range and eliminate fluorescent particles that do not conform to the expected size and shape parameters (FIG. 5C).
  • the three-dimensional template may be a spherical template, an elliptical template, or any other three-dimensional shape.
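  • The following is a minimal sketch of such a template convolution, assuming NumPy/SciPy and a spherical template of the expected droplet radius; the normalization choices are illustrative and not mandated by the disclosure.

```python
import numpy as np
from scipy import ndimage as ndi

def spherical_template(radius_px: int) -> np.ndarray:
    """Zero-mean, unit-norm spherical kernel so the convolution responds most
    strongly to sphere-shaped intensity patterns of roughly this radius."""
    r = radius_px
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = (x ** 2 + y ** 2 + z ** 2 <= r ** 2).astype(np.float32)
    sphere -= sphere.mean()
    return sphere / np.linalg.norm(sphere)

def sphere_response(volume: np.ndarray, radius_px: int) -> np.ndarray:
    """Correlate the 3D image stack with the spherical template; particles that
    conform to the expected size and shape give the highest responses."""
    return ndi.convolve(volume.astype(np.float32), spherical_template(radius_px), mode="nearest")
```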
  • a 3D local maxima analysis may be performed to identify potential fluorescent particle candidates having a positive fluorescent signal (FIG. 5D).
  • an additional 3D local maxima analysis may be performed to filter out asymmetric particle candidates (e.g., non-spherical particle candidates).
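  • A minimal sketch of the local maxima and symmetry checks is shown below, assuming the convolution response from the previous step; the neighborhood size, tolerance, and the half-cube comparison used as a symmetry test are illustrative choices, not the specific criteria of the disclosure.

```python
import numpy as np
from scipy import ndimage as ndi

def local_maxima_3d(response: np.ndarray, window: int = 5, min_value: float = 0.0) -> np.ndarray:
    """Return (z, y, x) coordinates of voxels that are the maximum of their
    window**3 neighborhood in the convolution response (candidate particles)."""
    peaks = (response == ndi.maximum_filter(response, size=window)) & (response > min_value)
    return np.argwhere(peaks)

def is_symmetric(volume: np.ndarray, center, radius: int = 3, tol: float = 0.3) -> bool:
    """Crude symmetry check: compare the mean intensity of opposite half-cubes
    around a candidate along each axis; strongly asymmetric candidates are rejected."""
    z, y, x = (int(c) for c in center)
    if min(z, y, x) < radius:
        return False  # too close to the stack border to evaluate
    cube = volume[z - radius:z + radius + 1,
                  y - radius:y + radius + 1,
                  x - radius:x + radius + 1].astype(np.float32)
    if cube.shape != (2 * radius + 1,) * 3:
        return False
    for axis in range(3):
        moved = np.moveaxis(cube, axis, 0)
        lo, hi = moved[:radius].mean(), moved[radius + 1:].mean()
        if abs(lo - hi) > tol * max(lo, hi, 1e-6):
            return False
    return True
```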
  • a boundary may be defined between the bulk sample and a top layer of the sample (referred to as the “murky layer”) with lower signal than the bulk sample due to refractive index mismatch at the air-sample interface (FIG. 5E).
  • Regions above the murky layer boundary may be grouped into a grid for local histogram analysis separate from grids containing images of the bulk sample.
  • Each tube cross-section may be divided into multiple grids for separate thresholding (FIG. 5F). For each grid, a histogram of the log10 of the intensity (log10(intensity)) of each fluorescent particle candidate may be plotted.
  • a threshold log10(intensity) value for each grid may be automatically set to differentiate between noise and signal.
  • the threshold may be set based on the assumptions that either (A) the majority of the particles belong to one population with some outliers (e.g., a population of lipid droplets where most are negative for signal, or a population of lipid droplets where most are positive for signal), or (B) there are two populations of particles.
  • a median plus or minus the standard deviation threshold may be used to automatically separate the signal positive and signal negative particles.
  • an optimization algorithm may be used to select a threshold that maximizes the separation between the signal positive population and the threshold, as measured by number of standard deviations of the positive population above the threshold.
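  • The grid-wise auto-threshold under assumption (A) can be sketched as follows, assuming most particles in a grid are signal negative; the number of standard deviations is an illustrative parameter, and the optimization variant that maximizes separation from the positive population is not shown.

```python
import numpy as np

def grid_threshold(intensities: np.ndarray, n_std: float = 3.0) -> float:
    """Auto-threshold for one image grid: the median of the grid's log10-intensity
    histogram plus n_std standard deviations (assumes most particles are negative)."""
    log_i = np.log10(np.clip(intensities, 1e-12, None))
    return float(np.median(log_i) + n_std * np.std(log_i))

def count_positives(intensities: np.ndarray, n_std: float = 3.0):
    """Return (signal positive count, total count) for one grid."""
    log_i = np.log10(np.clip(intensities, 1e-12, None))
    thr = grid_threshold(intensities, n_std)
    return int(np.sum(log_i > thr)), int(intensities.size)
```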
  • the ratio of signal positive particles to signal negative particles may be determined for each grid. It may be expected that the ratios of signal positive and signal negative particles should not vary substantially between grids despite variations in brightness. Large deviation of the ratio in a given grid from the median ratio may be indicative of improper threshold selection. Grids with ratios substantially higher or lower than the median ratio may be identified as outliers and may be automatically selected to undergo an additional round of refinement and threshold detection for one or more (e.g. two) distinct clusters of positive particles that are counted (FIG. 5G). Following additional refinement, remaining outliers may be removed, and the global positive particle count may be determined.
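  • A minimal sketch of the ratio-based grid consistency check is given below; the absolute deviation tolerance and the example counts are illustrative placeholders, and flagged grids would then be re-thresholded or removed as described above.

```python
import numpy as np

def flag_ratio_outliers(positives: np.ndarray, totals: np.ndarray, tol: float = 0.25) -> np.ndarray:
    """Flag grids whose positive/total ratio deviates from the median ratio of all
    grids by more than `tol`; such grids likely received an improper threshold."""
    ratios = positives / np.maximum(totals, 1)
    return np.abs(ratios - np.median(ratios)) > tol

# Illustrative per-grid counts (made up): the fourth grid is flagged as an outlier.
pos = np.array([12, 14, 11, 45, 13])
tot = np.array([100, 105, 98, 102, 99])
print(flag_ratio_outliers(pos, tot))  # [False False False  True False]
```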
  • the noise and signal are differentiated using a convolutional neural network (CNN) in addition to, or in place of, differentiation based on the intensity-based cutoff.
  • the CNN learns to distinguish positive and negative droplets based on the 3D intensity profile of each fluorescent droplet as a whole, which includes information about the shape and size of the droplet, instead of using the maximum pixel intensity of each droplet and intensity-based histogram cutoffs.
  • the CNN determines fluorescent droplet candidates based on the 3D local maxima analysis.
  • the CNN is trained with synthetic data, an actual dataset, or both, at various positive droplet occupancies and signal-to-noise ratios (SNR).
  • the CNN is additionally or alternatively trained with hard negative data. In some embodiments, training with such hard negative data improves the CNN's ability to identify ambiguous fluorescent droplet candidates.
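  • The disclosure does not specify a network architecture; the following is a minimal sketch of a 3D CNN classifier, assuming PyTorch and a 16-voxel cubic intensity patch extracted around each droplet candidate from the 3D local maxima analysis.

```python
import torch
import torch.nn as nn

class DropletCNN(nn.Module):
    """Small 3D CNN that classifies a voxel patch centered on a droplet candidate as
    signal positive or negative from its full 3D intensity profile (shape and size),
    rather than from a single peak-intensity histogram cutoff."""

    def __init__(self, patch: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (patch // 4) ** 3, 32), nn.ReLU(),
            nn.Linear(32, 2),  # logits: [negative, positive]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, patch, patch, patch) intensity patches
        return self.classifier(self.features(x))

# Forward pass on a random batch of candidate patches (illustration only).
logits = DropletCNN(patch=16)(torch.randn(4, 1, 16, 16, 16))
print(logits.shape)  # torch.Size([4, 2])
```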
  • the imaging processing methods herein further comprise cluster removal to eliminate overlapping fluorescent droplets.
  • cluster removal comprises eliminating droplets having a peak density below a set threshold.
  • the peak density is determined based on a droplet size.
  • the droplet size is calculated based on a full width half maximum.
  • cluster removal comprises close neighbor/volume exclusion/non-maximum suppression.
  • cluster removal comprises determining a number of close droplet neighbors for each droplet.
  • cluster removal further comprises generating a histogram of close droplet neighbors.
  • cluster removal further comprises eliminating outlier droplets having a determined quantity of neighbors over a set threshold.
  • the set threshold is based on a probability that the merged cluster of droplets is detected as a false positive. In some embodiments, such cluster removal assumes that all droplets are randomly distributed, and that density is relatively homogeneous across the entire tube volume. In some embodiments, the imaging processing methods herein further comprise calculating the SNR for each droplet individually and eliminating droplets having an SNR below a set threshold.
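  • One plausible implementation of the close-neighbor counting step is sketched below, assuming droplet center coordinates in micrometers and a k-d tree for the neighbor search; the search radius and neighbor cutoff are illustrative and would in practice be derived from the expected droplet size and density.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_clustered_droplets(centers_um: np.ndarray,
                              close_radius_um: float = 20.0,
                              max_neighbors: int = 4) -> np.ndarray:
    """Count close neighbors for each detected droplet and drop droplets whose count
    exceeds the set threshold, on the assumption that randomly distributed droplets
    of homogeneous density rarely have that many near neighbors (merged clusters do)."""
    tree = cKDTree(centers_um)
    neighbors = tree.query_ball_point(centers_um, r=close_radius_um)
    counts = np.array([len(idx) - 1 for idx in neighbors])  # each droplet is its own neighbor
    return centers_um[counts <= max_neighbors]
```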
  • the image processing methods herein enable improved fluorescent particle candidate detection by mitigating the low signal to noise ratios, high signal and background variation, and murky interfaces inherent in many 3D images of PCR droplets in a tube.
  • machine learning algorithms are utilized to aid in detecting fluorescent droplet candidates.
  • the machine learning algorithms for detecting fluorescent droplet candidates employ one or more forms of labels including but not limited to human annotated labels and semi-supervised labels.
  • the human annotated labels can be provided by a hand-crafted heuristic.
  • the semi-supervised labels can be determined using a clustering technique to find fluorescent droplet candidates similar to those flagged by previous human annotated labels and previous semi-supervised labels.
  • the semi-supervised labels can employ an XGBoost model, a neural network, or both.
  • the training set is enlarged by generating synthetic 3D images.
  • the synthetic 3D images are generated by approximating droplets as spheres or blobs with different 3D aspect ratios.
  • the semi-supervised label comprises an augmentation applied to the synthetic image, real data, or both.
  • the augmentation comprises a signal intensity, a signal variation, an SNR, a polydispersity, a transformation, or any combination thereof.
  • the transformation comprises a dilation, expansion, reflection, rotation, shear, stretch, translation, or any combination thereof.
  • the SNR augmentation adds noise (e.g., salt-and-pepper noise, striations) to the image to make the model more robust to noise.
  • the noise-augmented images are annotated with ground truth based on the exact locations of generated droplets, augmentations, or both.
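  • A minimal sketch of such synthetic training data generation is given below, assuming droplets approximated as Gaussian blobs placed at random positions; the volume size, droplet count, radius, SNR, and salt-and-pepper fraction are illustrative placeholders, and the returned droplet centers serve as the ground-truth annotations.

```python
import numpy as np

def synthetic_volume(shape=(64, 64, 64), n_droplets=30, radius_px=3.0, snr=5.0, seed=0):
    """Generate a synthetic 3D stack of spherical 'droplets' plus Gaussian and
    salt-and-pepper noise; returns the volume and the ground-truth droplet centers."""
    rng = np.random.default_rng(seed)
    vol = np.zeros(shape, dtype=np.float32)
    margin = int(radius_px) + 1
    centers = rng.integers(margin, np.array(shape) - margin, size=(n_droplets, 3))
    zz, yy, xx = np.indices(shape)
    for cz, cy, cx in centers:
        d2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        vol += np.exp(-d2 / (2.0 * radius_px ** 2))           # spherical blob
    vol += rng.normal(0.0, vol.max() / snr, shape)            # Gaussian noise (SNR augmentation)
    vol[rng.random(shape) < 0.001] = vol.max()                # salt-and-pepper noise
    return vol, centers
```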
  • increased training set size and diversity improves the robustness of the machine learning algorithms herein.
  • the machine learning algorithms for detecting fluorescent droplet candidates employ a distant supervision method.
  • the distant supervision method can create a large training set seeded by a small hand-annotated training set.
  • the distant supervision method can comprise positive-unlabeled learning with the training set as the ‘positive’ class.
  • the distant supervision method can employ a logistic regression model, a recurrent neural network, or both.
  • Examples of machine learning algorithms can include a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, deep learning, or other supervised learning algorithm or unsupervised learning algorithm for classification and regression.
  • the machine learning algorithms can be trained using one or more training datasets.
  • the machine learning algorithm utilizes regression modeling, wherein relationships between predictor variables and dependent variables are determined and weighted.
  • the fluorescent droplet candidate can be a dependent variable that is derived from an intensity-based histogram.
  • Ai (A1, A2, A3, A4, A5, A6, A7, ...) are “weights” or coefficients found during the regression modeling; and
  • Xi (X1, X2, X3, X4, X5, X6, X7, ...) are data collected from the User. Any number of Ai and Xi variables can be included in the model.
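  • For illustration only, the regression form described above (a dependent variable modeled as a weighted sum of predictor variables Xi with coefficients Ai) can be fit by least squares as sketched below; the numbers are placeholders, not data from the disclosure.

```python
import numpy as np

# Rows are observations; columns are predictor variables X1, X2, X3.
X = np.array([[1.0, 0.2, 3.1],
              [1.0, 0.5, 2.7],
              [1.0, 0.9, 1.9],
              [1.0, 1.4, 1.2]])
y = np.array([0.8, 1.1, 1.6, 2.2])          # dependent variable (e.g., a droplet call score)

A, *_ = np.linalg.lstsq(X, y, rcond=None)   # A holds the fitted weights A1, A2, A3
y_hat = X @ A                               # prediction Y = A1*X1 + A2*X2 + A3*X3
print(A, y_hat)
```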
  • Referring to FIG. 9, a block diagram is shown depicting an exemplary machine that includes a computer system 1300 (e.g., a processing or computing system) within which a set of instructions can execute for causing a device to perform or execute any one or more of the aspects and/or methodologies for static code scheduling of the present disclosure.
  • the components in FIG. 9 are examples only and do not limit the scope of use or functionality of any hardware, software, embedded logic component, or a combination of two or more such components implementing particular embodiments.
  • Computer system 1300 may include one or more processors 1301, a memory 1303, and a storage 1308 that communicate with each other, and with other components, via a bus 1340.
  • the bus 1340 may also link a display 1332, one or more input devices 1333 (which may, for example, include a keypad, a keyboard, a mouse, a stylus, etc.), one or more output devices 1334, one or more storage devices 1335, and various tangible storage media 1336. All of these elements may interface directly or via one or more interfaces or adaptors to the bus 1340.
  • the various tangible storage media 1336 can interface with the bus 1340 via storage medium interface 1326.
  • Computer system 1300 may have any suitable physical form, including but not limited to one or more integrated circuits (ICs), printed circuit boards (PCBs), mobile handheld devices (such as mobile telephones or PDAs), laptop or notebook computers, distributed computer systems, computing grids, or servers.
  • Computer system 1300 includes one or more processor(s) 1301 (e.g., central processing units (CPUs) or general purpose graphics processing units (GPGPUs)) that carry out functions.
  • processor(s) 1301 optionally contains a cache memory unit 1302 for temporary local storage of instructions, data, or computer addresses.
  • Processor(s) 1301 are configured to assist in execution of computer readable instructions.
  • Computer system 1300 may provide functionality for the components depicted in FIG. 9 as a result of the processor(s) 1301 executing non-transitory, processor-executable instructions embodied in one or more tangible computer-readable storage media, such as memory 1303, storage 1308, storage devices 1335, and/or storage medium 1336.
  • the computer-readable media may store software that implements particular embodiments, and processor(s) 1301 may execute the software.
  • Memory 1303 may read the software from one or more other computer-readable media (such as mass storage device(s) 1335, 1336) or from one or more other sources through a suitable interface, such as network interface 1320.
  • the software may cause processor(s) 1301 to carry out one or more processes or one or more steps of one or more processes described or illustrated herein. Carrying out such processes or steps may include defining data structures stored in memory 1303 and modifying the data structures as directed by the software.
  • the memory 1303 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., RAM 1304) (e.g., static RAM (SRAM), dynamic RAM (DRAM), ferroelectric random access memory (FRAM), phase-change random access memory (PRAM), etc.), a read-only memory component (e.g., ROM 1305), and any combinations thereof.
  • ROM 1305 may act to communicate data and instructions unidirectionally to processor(s) 1301
  • RAM 1304 may act to communicate data and instructions bidirectionally with processor(s) 1301.
  • ROM 1305 and RAM 1304 may include any suitable tangible computer-readable media described below.
  • a basic input/output system 1306 (BIOS) including basic routines that help to transfer information between elements within computer system 1300, such as during start-up, may be stored in the memory 1303.
  • Fixed storage 1308 is connected bidirectionally to processor(s) 1301, optionally through storage control unit 1307.
  • Fixed storage 1308 provides additional data storage capacity and may also include any suitable tangible computer-readable media described herein.
  • Storage 1308 may be used to store operating system 1309, executable(s) 1310, data 1311, applications 1312 (application programs), and the like.
  • Storage 1308 can also include an optical disk drive, a solid-state memory device (e.g., flash-based systems), or a combination of any of the above.
  • Information in storage 1308 may, in appropriate cases, be incorporated as virtual memory in memory 1303.
  • storage device(s) 1335 may be removably interfaced with computer system 1300 (e.g., via an external port connector (not shown)) via a storage device interface 1325.
  • storage device(s) 1335 and an associated machine-readable medium may provide non-volatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for the computer system 1300.
  • software may reside, completely or partially, within a machine-readable medium on storage device(s) 1335.
  • software may reside, completely or partially, within processor(s) 1301.
  • Bus 1340 connects a wide variety of subsystems.
  • reference to a bus may encompass one or more digital signal lines serving a common function, where appropriate.
  • Bus 1340 may be any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
  • such architectures include an Industry Standard Architecture (ISA) bus, an Enhanced ISA (EISA) bus, a Micro Channel Architecture (MCA) bus, a Video Electronics Standards Association local bus (VLB), a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, an Accelerated Graphics Port (AGP) bus, HyperTransport (HTX) bus, serial advanced technology attachment (SATA) bus, and any combinations thereof.
  • Computer system 1300 may also include an input device 1333.
  • a user of computer system 1300 may enter commands and/or other information into computer system 1300 via input device(s) 1333.
  • Examples of input device(s) 1333 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device (e.g., a mouse or touchpad), a touchpad, a touch screen, a multi-touch screen, a joystick, a stylus, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), an optical scanner, a video or still image capture device (e.g., a camera), and any combinations thereof.
  • the input device is a Kinect, Leap Motion, or the like.
  • Input device(s) 1333 may be interfaced to bus 1340 via any of a variety of input interfaces 1323 (e.g., input interface 1323) including, but not limited to, serial, parallel, game port, USB, FIREWIRE, THUNDERBOLT, or any combination of the above.
  • when computer system 1300 is connected to network 1330, computer system 1300 may communicate with other devices, specifically mobile devices and enterprise systems, distributed computing systems, cloud storage systems, cloud computing systems, and the like, connected to network 1330. Communications to and from computer system 1300 may be sent through network interface 1320.
  • network interface 1320 may receive incoming communications (such as requests or responses from other devices) in the form of one or more packets (such as Internet Protocol (IP) packets) from network 1330, and computer system 1300 may store the incoming communications in memory 1303 for processing.
  • Computer system 1300 may similarly store outgoing communications (such as requests or responses to other devices) in the form of one or more packets in memory 1303, which may be communicated to network 1330 from network interface 1320.
  • Processor(s) 1301 may access these communication packets stored in memory 1303 for processing.
  • Examples of the network interface 1320 include, but are not limited to, a network interface card, a modem, and any combination thereof.
  • Examples of a network 1330 or network segment 1330 include, but are not limited to, a distributed computing system, a cloud computing system, a wide area network (WAN) (e.g., the Internet, an enterprise network), a local area network (LAN) (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, a peer-to-peer network, and any combinations thereof.
  • a network, such as network 1330 may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
  • Information and data can be displayed through a display 1332.
  • Examples of a display 1332 include, but are not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light emitting diode (OLED) display such as a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display, a plasma display, and any combinations thereof.
  • the display 1332 can interface to the processor(s) 1301, memory 1303, and fixed storage 1308, as well as other devices, such as input device(s) 1333, via the bus 1340.
  • the display 1332 is linked to the bus 1340 via a video interface 1322, and transport of data between the display 1332 and the bus 1340 can be controlled via the graphics control 1321.
  • the display is a video projector.
  • the display is a head-mounted display (HMD) such as a VR headset.
  • suitable VR headsets include, by way of non-limiting examples, HTC Vive, and the like.
  • the display is a combination of devices such as those disclosed herein.
  • computer system 1300 may include one or more other peripheral output devices 1334 including, but not limited to, an audio speaker, a printer, a storage device, and any combinations thereof.
  • peripheral output devices may be connected to the bus 1340 via an output interface 1324.
  • Examples of an output interface 1324 include, but are not limited to, a serial port, a parallel connection, a USB port, a FIREWIRE port, a THUNDERBOLT port, and any combinations thereof.
  • computer system 1300 may provide functionality as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to execute one or more processes or one or more steps of one or more processes described or illustrated herein.
  • Reference to software in this disclosure may encompass logic, and reference to logic may encompass software.
  • reference to a computer- readable medium may encompass a circuit (such as an IC) storing software for execution, a circuit embodying logic for execution, or both, where appropriate.
  • the present disclosure encompasses any suitable combination of hardware, software, or both.
  • the various illustrative logical blocks and modules described herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may be integral to the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a user terminal.
  • the processor and the storage medium may reside as discrete components in a user terminal.
  • suitable computing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles.
  • Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art.
  • the computing device includes an operating system configured to perform executable instructions.
  • the operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications.
  • suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®.
  • suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®.
  • the operating system is provided by cloud computing.
  • suitable mobile smartphone operating systems include, by way of non-limiting examples, Nokia® Symbian®
  • suitable media streaming device operating systems include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®.
  • suitable video game console operating systems include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.
  • Non-transitory computer readable storage medium
  • the platforms, systems, media, and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked computing device.
  • a computer readable storage medium is a tangible component of a computing device.
  • a computer readable storage medium is optionally removable from a computing device.
  • a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, distributed computing systems including cloud computing systems and services, and the like.
  • the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
  • the platforms, systems, media, and methods disclosed herein include at least one computer program, or use of the same.
  • a computer program includes a sequence of instructions, executable by one or more processor(s) of the computing device's CPU, written to perform a specified task.
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), computing data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages.
  • The functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • a computer program comprises one sequence of instructions.
  • a computer program comprises a plurality of sequences of instructions.
  • a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations.
  • a computer program includes one or more software modules.
  • a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
  • the platforms, systems, media, and methods disclosed herein include software, server, and/or database modules, or use of the same.
  • software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art.
  • the software modules disclosed herein are implemented in a multitude of ways.
  • a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof.
  • a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof.
  • the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application.
  • software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on a distributed computing platform such as a cloud computing platform. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
  • Databases
  • the platforms, systems, media, and methods disclosed herein include one or more databases, or use of the same.
  • suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase.
  • a database is internet-based.
  • a database is web-based.
  • a database is cloud computing-based.
  • a database is a distributed database.
  • a database is based on one or more local computer storage devices.
  • EXAMPLE 1: Imaging Multiple Tubes in Three Dimensions Using Light Sheet Imaging. This example describes imaging multiple tubes using light sheet imaging.
  • In a first assay, four PCR tubes arranged in a line, containing suspensions of fluorescent and non-fluorescent lipid droplets, were placed on a translating stage.
  • the translating stage was positioned to hold one of the four PCR tubes at the intersection of a light sheet illumination path and a camera imaging path oriented at a 90° angle relative to the illumination path, as illustrated in FIG. 1.
  • the translating stage was configured to translate along a single axis at a 45° angle with respect to the illumination path and the imaging path.
  • the tubes were partially submerged in an open sample chamber, as illustrated in FIG. 2A and shown in FIG. 2B.
  • the sample chamber had three flat, clear surfaces and was positioned in the imaging path so that the first clear surface was positioned orthogonal to the imaging path (“Clear surface 1”), and the second clear surface was positioned orthogonal to the light sheet illumination path (“Clear surface 2”), as shown in FIG. 1.
  • the third clear surface (“Clear surface 3”) was positioned in the light sheet illumination path for the light sheet illumination to exit the sample chamber. While imaging, the tubes were translated along the axis of translation such that sequential cross-sectional planes of each tube were illuminated and imaged. The position of the tube cross-sectional plane image shifted within the camera field of view as the tube was translated along the 45° translation axis, as illustrated in FIG. 1 panels (i), (ii), and (iii).
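Because the translation axis in this first assay is at 45° to both the illumination path and the imaging path, each stage step simultaneously advances the illuminated cross-section in depth and shifts its image laterally in the camera field of view. A minimal Python sketch of that geometry follows; the 10 µm step size is an assumed value for illustration, and the 2.93 µm/pixel sampling is taken from the example camera described later in this disclosure.

import numpy as np

step_um = 10.0                      # assumed stage step along the 45 degree translation axis
theta = np.deg2rad(45.0)            # angle between the translation axis and the imaging path

depth_step_um = step_um * np.cos(theta)      # advance of the illuminated plane along the imaging axis
lateral_shift_um = step_um * np.sin(theta)   # lateral shift of the cross-section image in the field of view

px_um = 2.93                        # example camera sampling from this disclosure (um/pixel)
print(f"depth step: {depth_step_um:.2f} um, image shift: {lateral_shift_um / px_um:.2f} pixels per step")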
  • EXAMPLE 2. This example describes imaging multiple tubes in three dimensions in an enclosed sample holder using light sheet imaging.
  • In this assay, four PCR tubes arranged in a line, containing suspensions of fluorescent and non-fluorescent lipid droplets, are placed in a sample chamber positioned on a translating stage, as illustrated in FIG. 3A.
  • the translating stage is positioned to hold one of the four PCR tubes at the intersection of a light sheet illumination path and a camera imaging path oriented at a 90° angle relative to the illumination path, as illustrated in FIG. 3B.
  • the translating stage is configured to translate along two orthogonal axes, a y axis parallel to the illumination path, and a z axis parallel to the imaging axis.
  • the tubes are partially submerged in a sample chamber, as illustrated in FIG. 3A.
  • the sample chamber has two flat, clear surfaces for each tube and an additional flat, clear surface.
  • the sample chamber is positioned in the imaging path so that the first clear surface corresponding to a first sample tube (“First clear surface”) is positioned perpendicular to the imaging path, and the second clear surface corresponding to the first sample tube (“Second clear surface”) is positioned perpendicular to the light sheet illumination path.
  • the additional clear surface (“Additional clear surface”) is positioned in the light sheet illumination path for the light sheet illumination to exit the sample chamber.
  • While imaging, the tubes and sample chamber are translated along the z axis such that sequential cross-sectional planes of each tube are illuminated and imaged. In contrast to the first assay, the position of the tube cross-sectional plane image remains centered within the camera field of view during translation. Once a tube has been scanned, the tubes and sample chamber are translated along the y axis to move a new tube into position to be imaged.
  • the tubes and sample chamber are translated along both the y axis and the z axis, at a 45° angle with respect to the illumination path and the imaging path, such that sequential cross-sectional planes of each tube are illuminated and imaged, as described in EXAMPLE 1.
  • the tubes and sample chamber are further translated along the y axis and the z axis to move a new tube into position to be imaged.
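The scan sequence for the enclosed sample holder (acquire a z-stack for one tube, then step along y to the next tube) can be summarized with the following Python sketch. The stage and camera objects, their method names, and all motion parameters are hypothetical placeholders; the disclosure does not specify a motion-control or camera API.

# Hypothetical scan loop for the enclosed sample holder: acquire a z-stack of
# cross-sectional images for each tube, then step along y to the next tube.
def scan_tubes(stage, camera, n_tubes=4, n_planes=200, z_step_um=5.0, tube_pitch_um=9000.0):
    stacks = []
    for tube in range(n_tubes):
        planes = []
        for _ in range(n_planes):
            planes.append(camera.acquire())      # image the current cross-sectional plane
            stage.move_relative(z=z_step_um)     # advance one plane along the imaging axis
        stacks.append(planes)
        stage.move_relative(z=-n_planes * z_step_um)  # return to the first plane
        stage.move_relative(y=tube_pitch_um)          # move the next tube into the beam
    return stacks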
  • EXAMPLE 3. This example describes automated detection and quantification of fluorescently labeled droplets imaged using light sheet imaging.
  • a sample containing fluorescently labeled droplets was imaged as described in EXAMPLE 1 and illustrated in FIGS. 5A-5G.
  • the resulting cross-sectional plane images were combined to produce a three-dimensional image data set for each sample.
  • a region of interest within each cross-section was selected outlining the confines of the sample (FIG. 5A). This was to ensure that only the fluorescent particles inside the tube cross-section were considered.
  • a local background subtraction was performed to suppress background fluorescence (FIG. 5B).
  • a Gaussian filter was applied to smooth images, and a median filter was applied to remove hot pixels.
  • a three-dimensional (3D) convolution was performed using a spherical template to identify substantially spherical fluorescent particles (corresponding to fluorescent lipid droplets) within an expected size range and eliminate fluorescent particles that do not conform to the expected size and shape parameters (FIG. 5C).
  • a 3D local maxima analysis was performed to identify potential fluorescent particle candidates having a positive fluorescent signal (FIG. 5D).
  • An additional 3D local maxima analysis was performed to filter out asymmetric particle candidates (e.g., non-spherical particle candidates).
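A minimal Python sketch of the candidate-detection steps above (Gaussian smoothing, median filtering for hot pixels, 3D convolution with a spherical template, and 3D local-maxima detection), using SciPy, is shown below; the kernel sizes, particle radius, and score cutoff are illustrative assumptions rather than parameters from the disclosure.

import numpy as np
from scipy import ndimage

def find_droplet_candidates(stack, radius_px=4, sigma=1.0):
    """stack: 3D array of cross-sectional images (z, y, x)."""
    # Smooth images and remove hot pixels.
    img = ndimage.gaussian_filter(stack.astype(float), sigma=sigma)
    img = ndimage.median_filter(img, size=3)

    # 3D convolution with a spherical template of the expected droplet radius.
    r = radius_px
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    sphere = (zz ** 2 + yy ** 2 + xx ** 2 <= r ** 2).astype(float)
    sphere /= sphere.sum()
    score = ndimage.convolve(img, sphere, mode="nearest")

    # 3D local maxima: voxels that equal the maximum of their neighborhood
    # and exceed a simple global cutoff (illustrative only).
    local_max = score == ndimage.maximum_filter(score, size=2 * r + 1)
    coords = np.argwhere(local_max & (score > score.mean()))
    return coords, score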
  • a boundary was defined between the bulk sample and a top layer of the sample (referred to as the “murky layer”) with lower signal than the bulk sample due to refractive index mismatch at the air-sample interface (FIG. 5E). Regions above the murky layer boundary were grouped into a grid for local histogram analysis separate from grids containing images of the bulk sample. Each tube cross-section was divided into multiple grids for separate thresholding (FIG. 5F). For each grid, a histogram of the log10 of the intensity (log10(intensity)) of each fluorescent particle candidate was plotted. A threshold log10(intensity) value for each grid was automatically set to differentiate between noise and signal.
  • the threshold was set based on the assumptions that either (A) the majority of the particles belong to one population with some outliers (e.g., a population of lipid droplets where most are negative for signal, or a population of lipid droplets where most are positive for signal), or (B) there are two populations of particles.
  • In case (A), a median plus or minus the standard deviation threshold was used to automatically separate the signal positive and signal negative particles.
  • In case (B), an optimization algorithm was used to select a threshold that maximizes the separation between the signal positive population and the threshold, as measured by the number of standard deviations of the positive population above the threshold.
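The per-grid auto-thresholding logic above can be sketched in Python as follows; the choice of k standard deviations for case (A) and the brute-force threshold search for case (B) are illustrative assumptions.

import numpy as np

def grid_threshold(intensities, k=3.0, two_populations=False):
    """intensities: fluorescent particle candidate intensities within one image grid."""
    logi = np.log10(np.asarray(intensities, dtype=float))
    if not two_populations:
        # Case (A): one dominant population with some outliers;
        # threshold at the median plus k standard deviations.
        return 10 ** (np.median(logi) + k * np.std(logi))
    # Case (B): two populations; choose the candidate threshold that maximizes the
    # separation of the positive population above it, in units of that population's
    # standard deviation.
    best_t, best_sep = np.median(logi), -np.inf
    for t in np.linspace(logi.min(), logi.max(), 200):
        positives = logi[logi > t]
        if positives.size < 3:
            continue
        sep = (positives.mean() - t) / (positives.std() + 1e-9)
        if sep > best_sep:
            best_t, best_sep = t, sep
    return 10 ** best_t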
  • EXAMPLE 4. This example describes a light sheet imaging setup and a method for using such a setup to image 8 or more digital PCR samples automatically without user interference.
  • the setup described herein is designed to allow for the imaging of 8 PCR samples using an 8-sample strip. However, it should be noted that the setup can be reconfigured to allow the analysis of fewer or more than 8 samples at a time.
  • FIG. 6 illustrates the procedural setup and steps for sample loading and scanning for this high throughput digital PCR analysis method.
  • the 96 PCR samples are loaded onto the sample loading block using, e.g., a 96-well plate (shown as arrow #1), from which the samples can be picked up in an 8-sample strip, i.e., 8 samples at a time.
  • an 8-sample strip (shown as arrow #2) is picked up from the sample plate and moved onto a translation stage (shown as arrow #3) for scanning of the samples.
  • the PCR samples are then scanned using a focused light sheet from a collimated laser source (shown as white arrow #4) to generate light sheet images for each of the 8 samples of the sample strip.
  • the light sheet is configured to pivot and/or translate at a frequency of about 1 kHz while the sample is scanned to increase image quality by reducing artifacts as described in EXAMPLE 5 below.
  • the 8-sample strip is moved back to the sample loading block, and a second 8-sample strip is picked up and imaged.
  • EXAMPLE 5. Strip and shadow artifacts in light sheet images can be caused by light scattering and light absorption during scanning.
  • the imaging setup is configured (e.g., by using a resonant mirror) such that the light source (e.g., laser light sheet) is capable of undergoing certain movements relative to the sample during scanning, thereby eliminating or “averaging out” some (or most) of the scattering and absorption events.
  • FIG. 7A shows that a pivoting light sheet improves image quality as shown in the “average” image on the far right of FIG. 7A.
  • FIG. 7B shows that a translating light sheet improves image quality as shown in the “average” image on the far right of FIG. 7B.
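Because the sheet pivots or translates faster than the camera exposure, the averaging shown in FIG. 7A and FIG. 7B happens optically within a single frame; the short Python sketch below simply emulates the same effect numerically for a set of discrete, already-registered sheet positions (an assumption for illustration).

import numpy as np

def average_sheet_frames(frames):
    """frames: sequence of registered 2D exposures acquired with the light sheet
    at different pivot angles or lateral offsets."""
    return np.mean(np.stack(frames, axis=0), axis=0)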
  • Another method to reduce artifacts is using dual side illumination, as illustrated in FIG. 7C.
  • In this configuration, a sample (e.g., a PCR sample tube) is illuminated from two opposite sides, and each light source only has to travel through half of the sample in order to illuminate the entire sample cross section.
  • EXAMPLE 6. This example describes automated detection and quantification of fluorescently labeled droplets imaged using light sheet imaging.
  • a sample containing fluorescently labeled droplets was imaged as described in EXAMPLE 1 and illustrated in FIGS. 5A-5G.
  • the resulting cross-sectional plane images were combined to produce a three-dimensional image data set for each sample.
  • a region of interest within each cross-section was selected outlining the confines of the sample (FIG. 5A). This was to ensure that only the fluorescent particles inside the tube cross-section were considered.
  • a local background subtraction was performed to suppress background fluorescence (FIG. 5B).
  • a Gaussian filter was applied to smooth images, and a median filter was applied to remove hot pixels.
  • a three-dimensional (3D) convolution was performed using a spherical template to identify substantially spherical fluorescent particles (corresponding to fluorescent lipid droplets) within an expected size range and eliminate fluorescent particles that do not conform to the expected size and shape parameters (FIG. 5C).
  • a 3D local maxima analysis was performed to identify potential fluorescent particle candidates having a positive fluorescent signal (FIG. 5D).
  • An additional 3D local maxima analysis was performed to filter out asymmetric particle candidates (e.g., non-spherical particle candidates).
  • Fluorescent droplet candidates were determined based on the 3D local maxima analysis (FIG. 8A).
  • the noise and signal were differentiated using a convolutional neural network (CNN) by determining shape and size data of the fluorescent droplets as a whole (FIG. 8B).
  • Clusters were then removed to eliminate overlapping fluorescent droplets having a peak density below a set threshold. The peak density was determined based on a droplet size, and the droplet size was calculated based on a full width at half maximum.
  • Clusters were removed based on close neighbor/volume exclusion/non-maximum suppression and by determining a number of close droplet neighbors for each droplet.
  • a histogram of close droplet neighbors was generated (FIG. 8C), whereby outlier droplets having a quantity of neighbors over a set threshold were eliminated.
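The close-neighbor/volume-exclusion step above can be sketched in Python with a k-d tree; the 10 µm neighbor radius and the neighbor-count cutoff are illustrative assumptions, not values from the disclosure.

import numpy as np
from scipy.spatial import cKDTree

def remove_clustered_droplets(centers, radius_um=10.0, max_neighbors=3):
    """centers: (N, 3) array of droplet centroid coordinates in micrometers.
    Droplets with more than max_neighbors other droplets within radius_um are
    treated as merged clusters / likely false positives and removed."""
    centers = np.asarray(centers, dtype=float)
    tree = cKDTree(centers)
    # Neighbor count within the radius, excluding the droplet itself.
    n_neighbors = np.array([len(tree.query_ball_point(c, r=radius_um)) - 1
                            for c in centers])
    keep = n_neighbors <= max_neighbors
    # n_neighbors can also be histogrammed, as in FIG. 8C, to pick the cutoff.
    return centers[keep], n_neighbors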

Landscapes

  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Dispersion Chemistry (AREA)
  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

Disclosed herein are light sheet imaging systems for imaging fluorescent samples. Also disclosed herein are sample holder systems for high throughput light sheet imaging of multiple three-dimensional samples without user intervention. Further disclosed herein are automated image processing methods to identify and quantify fluorescent particles within three-dimensional image sets without user intervention or user bias.

Description

METHODS AND SYSTEMS FOR THREE-DIMENSIONAL LIGHTSHEET IMAGING
CROSS-REFERENCE
[0001] This application claims the benefit of U.S. Provisional Application No. 62/946,373, filed December 10, 2019, and U.S. Provisional Application No. 63/082,219, filed September 23, 2020, which are hereby incorporated by reference in their entirety herein.
BACKGROUND
[0002] Light sheet imaging is used to illuminate thin sections of a sample. Light sheet imaging techniques have faster image capture rates than comparable point scanning methods such as laser confocal scanning microscopy. There is a need for improved light sheet imaging techniques to increase sample throughput and reduce the need for user intervention.
SUMMARY
[0003] In various aspects, the present disclosure provides a light sheet imaging system comprising: a collimated illumination source configured to emit a collimated beam along a beam path; a Powell lens positioned in the beam path after the collimated illumination source and configured to expand the collimated beam along a single axis; a first cylindrical lens positioned in the beam path after the Powell lens and configured to re-collimate the beam along a single axis, thereby producing a collimated beam having a short axis orthogonal to a direction of propagation and a long axis orthogonal to the short axis and the direction of propagation; and a second cylindrical lens positioned in the beam path after the first cylindrical lens and configured to focus the beam along the short axis to a focal plane.
[0004] In some aspects, a three-dimensional sample is positioned at the focal plane. In some aspects, the three-dimensional sample comprises fluorescent droplets and/or fluorescently labeled cells. In some aspects, the light sheet imaging system further comprises a camera comprising an imaging path orthogonal to the beam path, wherein the camera is positioned such that the imaging path intersects the beam path at the focal plane.
[0005] In various aspects, the present disclosure provides a sample holder comprising: a fluid reservoir configured to contain a fluid, the fluid reservoir comprising a base, a first transparent side, a second transparent side connected to the first transparent side at a right angle, and a third transparent side opposite the first transparent side and connected to the first transparent side and the second transparent side; a tube holder configured to hold a row of a plurality of tubes partially submerged in the fluid, a translating stage supporting the tube holder and configured to translate the tube holder along a translation axis, wherein the translation axis is oriented at a 45° angle relative to the first transparent side and the second transparent side and parallel to the row of the plurality of tubes.
[0006] In some aspects, the sample holder further comprises a lid configured to fit on top of the first transparent side, the second transparent side, and the third transparent side and configured to retain the fluid. In some aspects, the lid further comprises an O-ring positioned at the contact point between the lid and the first transparent side, the second transparent side, and the third transparent side, wherein the O-ring is configured to prevent fluid leakage. In some aspects, the lid comprises a hole configured to accommodate the tube holder and the translating stage. In some aspects, the sample holder further comprises a first opaque side connecting the first transparent side to the third transparent side, and a second opaque side connecting the second transparent side and the third transparent side.
[0007] In various aspects, the present disclosure provides a sample holder comprising: a fluid reservoir configured to contain a fluid, the fluid reservoir comprising a base, a first transparent side, a second transparent side connected to the first transparent side at a right angle, and a third transparent side opposite the first transparent side and connected to the first transparent side and the second transparent side; a tube holder configured to hold a row of a plurality of tubes partially submerged in the fluid, a translating stage supporting the fluid reservoir and the tube holder and configured to translate the fluid reservoir and the tube holder along a first translation axis and a second translation axis, wherein the first translation axis is oriented parallel to the first transparent side and perpendicular to the second transparent side, and wherein the second translation axis is perpendicular to the first transparent side and parallel to the second transparent side, and wherein the first translation axis and the second translation axis are oriented at a 45° angle relative to the row of the plurality of tubes.
[0008] In some aspects, the sample holder further comprises a fourth transparent side connected to the second transparent side at a right angle and parallel to the first transparent side, and a fifth transparent side connected to the fourth transparent side at a right angle and parallel to the second transparent side. In some aspects, the sample holder further comprises a plurality of transparent sides connected in series at right angles to the second transparent side, wherein alternating transparent sides of the plurality of transparent sides are parallel to the first transparent side or parallel to the second transparent side. In some aspects, the sample holder further comprises a lid configured to fit on top of the first transparent side, the second transparent side, and the third transparent side and configured to retain the fluid. In some aspects, the lid further comprises an O-ring positioned at the contact point between the lid and the first transparent side, the second transparent side, and the third transparent side, wherein the O-ring is configured to prevent fluid leakage.
[0009] In some aspects, the fluid comprises an index matching fluid. In some aspects, the sample holder comprises a polymer. In some aspects, the plurality of tubes comprises PCR tubes. In some aspects, the plurality of tubes comprises microcentrifuge tubes. In some aspects, the sample holder is configured to accommodate up to 12 tubes. In some aspects, the sample holder is configured to accommodate up to 8 tubes. In some aspects, the sample holder is configured to accommodate up to 4 tubes.
[0010] In various aspects, the present disclosure provides a method of image processing comprising: collecting a three-dimensional image data set comprising a plurality of cross-section images; identifying a plurality of substantially spherical fluorescent particles in the three- dimensional image data set; dividing one or more cross-section images of the plurality of cross- section images into a plurality of image grids; applying an intensity threshold independently to one or more image grids of the plurality of image grids; and identifying one or more signal positive particles from the plurality of substantially spherical fluorescent particles, wherein the one or more signal positive particles have intensities greater than the intensity threshold.
[0011] In some aspects, the method further comprises applying a smoothing filter to one or more cross-section images of the plurality of cross-section images prior to identifying the plurality of substantially spherical fluorescent particles. In some aspects, the method further comprises applying a median filter to one or more cross-section images of the plurality of cross-section images prior to identifying the plurality of substantially spherical fluorescent particles. In some aspects, identifying the plurality of substantially spherical fluorescent particles comprises performing a three-dimensional convolution using a three-dimensional template.
[0012] In some aspects, the method further comprises determining an intensity of one or more substantially spherical fluorescent particles of the plurality of substantially spherical fluorescent particles by a three-dimensional local maxima analysis. In some aspects, the method further comprises performing a second three-dimensional maxima analysis to remove one or more asymmetric particles from the plurality of substantially spherical fluorescent particles. In some aspects, the intensity threshold is determined by a median plus or minus a standard deviation of a histogram of a log10 of the intensities of the image grid. In some aspects, the intensity threshold maximizes the separation between one or more intensities of the signal positive particles and the intensity threshold. [0013] In some aspects, the method further comprises repeating the applying the intensity threshold for one or more image grids corresponding to a ratio outlier and determining a refined ratio for one or more image grids corresponding to a ratio outlier after applying the intensity threshold. In some aspects, the method further comprises selecting a region of interest in one or more cross-section images of the plurality of cross-section images, wherein the region of interest comprises a sample area. In some aspects, the method further comprises identifying a background intensity and subtracting the background intensity from an intensity of one or more cross-section images of the plurality of cross-section images. In some aspects, the method further comprises selecting a subset of the plurality of substantially spherical fluorescent particles, wherein the substantially spherical fluorescent particles comprise a size larger than a minimum size threshold and smaller than a maximum size threshold. In some aspects, the method further comprises identifying a plurality of signal positive particles from the plurality of substantially spherical fluorescent particles comprising an intensity above a minimum threshold value.
[0014] In some aspects, the method further comprises determining if a low signal region is present in a cross-section image, and, if a low signal region is present, identifying the low signal region and a high signal region within the region of interest of one or more cross-section images and designating a low signal image grid comprising the low signal region, or, if no low signal region is present, designating the region of interest as a high signal region. In some aspects, the method further comprises determining a ratio of a number of signal positive particles to a number of the plurality of spherical fluorescent particles for one or more image grids of the plurality of image grids. In some aspects, the method further comprises identifying ratio outliers comprising ratios that are higher or lower than a median of the ratios of the plurality of image grids. In some aspects, the method further comprises removing the ratio outliers. In some aspects, the method is automated. In some aspects, the method is performed without user intervention. In some embodiments, the method further comprises performing cluster removal to eliminate overlapping fluorescent droplets. In some embodiments, the cluster removal comprises eliminating droplets having a peak density below a set threshold. In some embodiments, the peak density is determined based on a droplet size. In some embodiments, the droplet size is calculated based on a full width at half maximum. In some embodiments, the cluster removal comprises close neighbor/volume exclusion/non-maximum suppression. In some embodiments, cluster removal comprises determining the number of close droplet neighbors for each droplet. In some embodiments, cluster removal further comprises generating a histogram of close droplet neighbors. In some embodiments, cluster removal further comprises eliminating outlier droplets having a quantity of neighbors over a set threshold. In some embodiments, the set threshold is based on a probability that the merged cluster of droplets is detected as a false positive. [0015] Further provided herein is a method of analyzing a plurality of samples, the method comprising automatically subjecting each sample of the plurality of samples to a laser beam, one sample at a time, thereby generating emission light in each sample of the plurality of samples, and wherein the plurality of samples comprises at least 4 samples. In some aspects, the plurality of samples are located in a sample holder. In some aspects, the sample holder is moved in a direction that is perpendicular to the illumination path of the laser beam to move a scanned first sample out of the laser beam and move a second sample into the beam for scanning. In some aspects, the method further comprises, while subjecting each sample of the plurality of samples to the laser beam, moving the laser beam relative to the sample. In some aspects, moving the laser beam relative to the sample comprises pivoting movements of the laser beam. In some aspects, moving the laser beam relative to the sample comprises translating movements of the laser beam. In some aspects, the pivoting movements or translating movements reduce artifacts in the resulting image data compared to a method without pivoting movements or translating movements. In some aspects, the laser beam moves relative to the sample with a frequency of at least about 0.1 kHz. In some aspects, the laser beam moves relative to the sample with a frequency of from about 0.1 kHz to about 20 kHz.
In some aspects, the laser beam moves relative to the sample with a frequency of at least about 1 kHz. In some aspects, the laser beam is a sheet of laser light. In some aspects, the plurality of samples comprises at least 8 samples. In some aspects, the plurality of samples are subjected to a second laser beam. In some aspects, the second laser beam is configured to illuminate a site of a sample of the plurality of samples that is opposite to the site illuminated by the laser beam. In some aspects, each sample of the plurality of samples is subjected to the laser beam and the second laser beam simultaneously.
INCORPORATION BY REFERENCE
[0016] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee. The novel features of the disclosure are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the disclosure are utilized, and the accompanying drawings of which:
[0018] FIG. 1 shows a schematic of a light sheet imaging system (left) and positioning of a sample within an imaging field of view at different cross-section depths (right, (i), (ii), and (iii)); [0019] FIG. 2 shows an exemplary sample holder design for imaging multiple tubes sequentially without user intervention using light sheet imaging. FIG. 2A shows an isometric view of the sample holder. FIG. 2B shows an image of the sample holder and sample tubes positioned within a light sheet imaging system;
[0020] FIG. 3 shows an exemplary sample holder design for imaging multiple tubes sequentially without user intervention using light sheet imaging. FIG. 3A shows an isometric view of the sample holder. FIG. 3B shows a top view of the sample holder within a light sheet imaging system;
[0021] FIG. 4 shows exemplary optical configurations to produce a focused light sheet from a collimated laser source. FIG. 4A shows an optical configuration comprising three cylindrical lenses and an aspheric lens. FIG. 4B shows an optical configuration comprising a Powell lens and two cylindrical lenses;
[0022] FIGS. 5A, 5B, 5C, 5D, 5E, 5F, and 5G show a method of image processing to identify and count signal positive fluorescent particles and signal negative fluorescent particles. FIG. 5A shows region of interest (ROI) extraction to identify regions of interest comprising a sample area. FIG. 5B shows local background subtraction from a region of interest. FIG. 5C shows three-dimensional (3D) convolution with a spherical template to identify substantially spherical fluorescent particles. FIG. 5D shows local maxima and symmetry checks using the three-dimensional convolution to eliminate asymmetric particles. FIG. 5E shows detection of a low intensity region (“murky layer”) within a region of interest. FIG. 5F shows local brightness histograms and auto-thresholding for individual image grids within a region of interest. FIG. 5G shows global correction to remove grid outliers from two distinct clusters of positive particles having a ratio of signal positive particles to total particles that is significantly higher or significantly lower than the median ratio for all grids;
[0023] FIG. 6 shows a setup of an imaging system and a method for analyzing 8 or more samples using light sheet imaging;
[0024] FIG. 6A shows analyte samples (here a 96-well plate) being loaded onto a sample loading block of the imaging system. [0025] FIG. 6B shows an 8-sample strip of the 96-well plate being picked up to image 8 samples at a time;
[0026] FIG. 6C shows an 8-sample strip that is located within the imaging chamber and scanned using an optical configuration to produce a focused light sheet from a collimated laser source; [0027] FIG. 7 shows different methods for improving image quality and reducing artifacts for analyzing samples using light sheet imaging.
[0028] FIG. 7A shows that pivoting along the focal plane of the sample relative to the laser beam can reduce artifacts in the resulting image;
[0029] FIG. 7B shows that translating along the focal plane of the sample relative to the laser beam can reduce artifacts in the resulting image;
[0030] FIG. 7C shows that dual sided illumination of a sample can also reduce strip and/or shadow artifacts in the resulting image.
[0031] FIG. 8A shows an exemplary image of droplet candidate extraction;
[0032] FIG. 8B shows an exemplary diagram of a convolutional neural network (CNN);
[0033] FIG. 8C shows an exemplary graph of non-maximum suppression and cluster removal; and
[0034] FIG. 9 shows a non-limiting example of a computing device; in this case, a device with one or more processors, memory, storage, and a network interface.
DETAILED DESCRIPTION
[0035] Three-dimensional imaging may be used to image three-dimensional samples, such as an organism, a tissue, a cell, or a liquid sample. Three-dimensional imaging may enable higher throughput imaging than comparable one-dimensional or two-dimensional imaging techniques by imaging larger sample volumes than one-dimensional or two-dimensional imaging techniques. Three-dimensional imaging techniques, such as light sheet imaging, may be used to image fluorescent samples containing a plurality of fluorescent particles present in different cross-sectional planes of the sample. In some embodiments, a fluorescent sample may comprise a sample tube containing fluorescent particles or droplets, a cell with labeled proteins or nucleic acids, an immunofluorescent tissue sample, or fluorescently labeled organism. For example, a three-dimensional fluorescent sample may be a digital polymerase chain reaction (dPCR) sample containing signal positive and signal negative fluorescent droplets.
[0036] Compared to one-dimensional and two-dimensional imaging techniques, three-dimensional imaging techniques have many advantages, such as higher throughput imaging methods and sequential imaging of multiple three-dimensional samples without user intervention. However, compared to one-dimensional and two-dimensional imaging techniques, three-dimensional imaging techniques may have additional complications due to the nature of handling three-dimensional samples and processing three-dimensional image data sets, such as uneven background intensities, uneven illumination intensity, or signal variability at different cross-sectional planes or positions within a cross-sectional plane.
[0037] Disclosed herein are light sheet imaging systems for imaging fluorescent samples. A light sheet imaging system may comprise an illumination system configured to produce a sheet of light focused in a sample. The sample may be a three-dimensional (3D) sample, such as a sample in a tube or vial. The sample may be a two-dimensional (2D) sample, such as a sample in a planar array or plate. The sample may be a one-dimensional (ID) sample, such as a sample in a flow channel. Also disclosed herein are sample holder systems for high throughput light sheet imaging of multiple three-dimensional samples. These sample holder systems may be used without user intervention. Further disclosed herein are automated image processing methods to identify and quantify fluorescent particles within three-dimensional image sets without user intervention or user bias. In some embodiments, the three-dimensional image sets may be collected using light sheet imaging of a three-dimensional sample. As used herein, a fluorescent particle may be a particle such as a nanoparticle or bead with a fluorescent moiety, a fluorescent particle may be a droplet such as a lipid droplet comprising a fluorescent moiety, a fluorescent particle may be a cell comprising a fluorescent moiety, or a fluorescent particle may be a molecule or moiety such as an organic fluorophore.
Systems for Light Sheet Imaging
[0038] Disclosed herein are light sheet illumination systems for uniform light sheet illumination of a sample cross-sectional plane. Also disclosed herein are sample holders for positioning a plurality of three-dimensional samples within an imaging system and re-positioning the samples for sequential imaging of the samples without user intervention.
Light Sheet Illumination Systems
[0039] The illumination systems disclosed herein may convert a collimated illumination source into a light sheet to image cross-sectional planes within a sample. The collimated illumination source may be a laser. In some embodiments, the light sheet may be focused along a single axis. The illumination system may be configured such that the focal plane is positioned within a sample. The sample may be a one-dimensional sample, a two-dimensional sample, or a three-dimensional sample. [0040] Exemplary light sheet illumination configurations are shown in FIG. 4. In a first configuration, shown in FIG. 4A, an illumination source (e.g., a laser beam) is passed through a collimator to produce a collimated beam. The collimated beam may pass through a first cylindrical lens, which causes the beam to converge along a first axis (e.g., a horizontal axis).
The diverging beam may pass through an aspheric lens, which causes the beam to collimate along the first axis and converge along a second axis perpendicular to the first axis (e.g., a vertical axis). The beam may then pass through a second cylindrical lens having a curved surface oriented perpendicular to the curved surface of the first cylindrical lens. The second cylindrical lens may collimate the beam along the second axis, thereby producing a collimated, elliptical beam that is elongated along the second axis. The resulting beam may have a Gaussian profile. The beam may then pass through a third cylindrical lens having a curved surface oriented parallel to the curved surface of the second cylindrical lens. The third cylindrical lens may focus the elliptical beam along the first axis toward a focal plane. In some embodiments, the order of the optical elements may be rearranged. For example, the aspheric lens may be positioned in the beam path before the first cylindrical lens, or the aspheric lens may be positioned after the second cylindrical lens.
[0041] In a second configuration, shown in FIG. 4B, a collimated illumination source emits a collimated beam (e.g., a laser beam). The collimated beam may pass through a line generating lens. In some embodiments, the line generating lens may be a Powell lens or a laser line generator lens. After passing through the line generating lens, the beam may have a linear profile that diverges along a first axis (e.g., a vertical axis). The beam may then pass through a cylindrical lens which may focus the beam along a second axis (e.g., a horizontal axis) toward a focal plane.
[0042] As compared to the elliptical beam generated after the first cylindrical lens, the aspheric lens, and the third cylindrical lens shown in FIG. 4A, the linear beam profile produced after the line generating lens shown in FIG. 4B may be longer and have a more uniform intensity across its height. Additionally, the optical configuration shown in FIG. 4B may produce less spherical aberration as compared to the optical configuration shown in FIG. 4A due to the use of fewer optical elements. In both optical configurations, long focal length lenses may be used to reduce spherical aberrations.
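To make the trade-off behind the long-focal-length recommendation concrete, the following Python sketch applies textbook Gaussian-beam formulas to estimate the focused sheet thickness and its usable length; the wavelength, input beam half-width, and focal length are assumed values for illustration only and do not come from the disclosure.

import numpy as np

wavelength_um = 0.488          # assumed excitation wavelength (488 nm)
focal_length_mm = 100.0        # assumed focal length of the focusing cylindrical lens
input_half_width_mm = 0.5      # assumed 1/e^2 half-width of the collimated beam (short axis)

# Gaussian-beam estimates: focused waist (half the sheet thickness) and Rayleigh range.
w0_um = wavelength_um * focal_length_mm / (np.pi * input_half_width_mm)
zr_um = np.pi * w0_um ** 2 / wavelength_um

print(f"sheet waist ~ {w0_um:.1f} um, usable sheet length ~ {2 * zr_um / 1000:.1f} mm")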
[0043] A light sheet imaging system may comprise an illumination system as described herein and a detector. The detector may be a camera. For example, the camera may be a wide field camera. The camera may have a maximum scan area of 20 µL. The camera may have a maximum scan area of 50 µL or more. Exemplary cameras include, but are not limited to, a QHY174 camera with a scan area of 1920 x 1200 pixels, 5.6 x 3.5 mm, and a resolution of 2.93 µm/pixel, and a Basler acA2440-35um camera with a scan area of 2448 x 2048 pixels, 8.4 x 7.1 mm, and a resolution of 3.45 µm/pixel. A camera with a larger scan area can scan an entire PCR tube cross section without the need for staggering. The camera may have an imaging path. The camera may be positioned within the light sheet imaging system such that the imaging path is orthogonal to the illumination beam path and intersects the illumination beam path at the focal plane of the light sheet. A sample may be positioned at the intersection of the imaging path and the illumination beam path.
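As a quick arithmetic check of the example camera figures above, the pixel pitch implied by each sensor size and pixel count can be recomputed and compared against a representative PCR tube cross-section; the 5.5 mm tube diameter below is an assumption for illustration.

cameras = {
    "QHY174": (1920, 1200, 5.6, 3.5),               # pixels_x, pixels_y, width_mm, height_mm
    "Basler acA2440-35um": (2448, 2048, 8.4, 7.1),
}
tube_diameter_mm = 5.5  # assumed cross-section of a ~50 uL PCR tube

for name, (nx, ny, w_mm, h_mm) in cameras.items():
    um_per_px = 1000.0 * w_mm / nx
    covers = w_mm >= tube_diameter_mm and h_mm >= tube_diameter_mm
    print(f"{name}: {um_per_px:.2f} um/pixel, covers a full tube cross-section: {covers}")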
[0044] A light sheet imaging system may be configured to allow scanning of 1, 2, 4, 6, 8, 10, 12, 20, or more samples at a time and without user intervention. Such imaging systems may allow for high throughput scanning of a plurality of samples. Such samples may comprise digital PCR samples. A high throughput imaging system may comprise multiple components, including a sample loading block, a sample holder system, an operator arm configured to move a certain number of samples from the sample loading block to the sample holder system, a laser source configured to provide a light sheet, and a detector configured to detect radiation emitted from the sample (e.g., fluorescent light). The sample loading block may be loaded with a plurality of samples (e.g., digital PCR samples), e.g., as shown in FIG. 6A. In some instances, the plurality of samples are located in a well plate, e.g., a 96-well plate. An operator arm of the light sheet imaging system may be configured to place a certain number of sample tubes, e.g., about 1, 2, 4, 6, 8, 10, 12, 20, or more sample tubes, into a sample holder system as described herein, and as illustrated for an 8-sample strip in FIG. 6B. The sample holder system comprising the samples may be configured to move along an axis such that each sample located in the holder may be scanned by the light source and its emitted radiation detected in the detector unit.
[0045] In some instances, the sample holder system may be configured to provide additional movement to a sample, e.g., a pivoting and/or translating movement relative to the light sheet. In various embodiments, the light sheet may be moved while a sample is scanned. In such instances, the sample may remain stationary while being scanned, and the light sheet is moving relative to the sample. Such movements of the light sheet can include pivoting and/or translating movements. The frequency of the movement of the light sheet may be faster than the exposure time of the sample. In such cases, the light sheet may move with a frequency of at least about 0.1, 0.5, 0.7, 1.0, 1.5, 2, 2.5, 5, 10, 15, 20 kilohertz (kHz, 10³ s⁻¹), or higher, or any frequency therebetween. In such instances, the light sheet may move with a frequency of at least about 1 kHz, or higher. In such cases, the light sheet may move with a frequency within a range of from 0.1 to 20 kilohertz, 0.1 to 30 kilohertz, from 0.5 to 20 kilohertz, from 0.7 to 20 kilohertz, from 1 to 20 kilohertz, from 2 to 20 kilohertz, from 5 to 20 kilohertz, from 10 to 20 kilohertz, from 15 to 20 kilohertz, from 0.1 to 1 kilohertz, from 0.1 to 2 kilohertz, from 0.1 to 5 kilohertz, from 0.1 to 10 kilohertz, from 0.1 to 15 kilohertz, from 0.5 to 5 kilohertz, from 1 to 10 kilohertz, or from 0.5 to 2 kilohertz. In some cases, such rapid pivoting of a light sheet can be achieved by using, e.g., a resonance mirror or a galvo-resonant scanner. Such additional movements of the light sheet, as shown in FIGs. 7A-7B, may reduce artifacts in the resulting images and thus increase image quality. Such artifacts may be caused by light scattering and absorption events during sample scanning using the light sheet.
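As a worked example of the frequency requirement above: if the sheet pivots at 1 kHz and the camera exposure lasts 10 ms (the exposure time here is an assumed value, not one from the disclosure), roughly ten full sheet sweeps are averaged within every frame. The one-line Python check below makes that arithmetic explicit.

pivot_hz = 1000.0     # light sheet pivot/translation frequency (1 kHz)
exposure_s = 0.010    # assumed camera exposure time (10 ms)

sweeps_per_exposure = pivot_hz * exposure_s
print(f"{sweeps_per_exposure:.0f} sheet sweeps averaged within each exposure")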
[0046] In some embodiments, an imaging system is configured to provide dual side illumination to a sample, as illustrated in FIG. 7C. Such a setup may also improve image quality by reducing artifacts, since the light from each light source only has to travel through half of the sample in order to illuminate the entire sample cross section.
Sample Holder Systems
[0047] The sample holder systems described herein may be used to hold a plurality of three-dimensional samples for sequential imaging of the samples without user intervention. The sample holder may comprise an open chamber or a closed chamber. The three-dimensional samples may be tubes containing liquid, aqueous, or gel samples. The three-dimensional sample may be a tissue sample in a tissue sample holder. The three-dimensional sample may be an organism.
[0048] The sample holder designs disclosed herein may allow multiple samples (e.g., tubes arranged in a strip tube format) to be imaged automatically without user intervention. In some embodiments, the samples may be imaged using a three-dimensional scan imaging method. For example, the samples may be imaged using light sheet imaging, as disclosed herein, or the samples may be imaged using confocal imaging. The sample holder designs may also allow samples with a large cross-section (e.g., a 50 µL PCR tube) to be scanned once without the need to stagger multiple images at each cross-section. The sample holder may be configured to accommodate up to 2, up to 3, up to 4, up to 5, up to 6, up to 7, up to 8, up to 9, up to 10, up to 12, up to 14, up to 15, up to 16, up to 18, up to 20, up to 24 or more three-dimensional samples to be imaged without user intervention.
[0049] A first sample holder design, shown in FIG. 1 and FIG. 2, may comprise a fluid reservoir. The fluid reservoir may be elongated, as shown in FIG. 2A, to accommodate translation of a sample partially submerged within the fluid in the fluid reservoir. The fluid reservoir may comprise three transparent sides (“Clear surfaces” denoted by arrows). Two of the transparent sides, a first transparent side and a second transparent side, may be connected at a right angle. The first transparent side may be configured to transmit an illumination light (e.g., a light sheet). The second transparent side may be configured to transmit an emission light (e.g., a fluorescent emission) toward a detector (e.g., a camera). The third transparent side, shown behind the first transparent side and the second transparent side in FIG. 2A, may be configured to transmit the illumination light out of the fluid reservoir after the illumination light passes through a sample to reduce reflections within the sample chamber. The remaining sides of the fluid reservoir may be opaque or partially opaque to reduce reflections within the fluid reservoir.
[0050] The sample, such as a plurality of tubes, may be positioned in a sample holder, as shown in FIG. 2B. The sample holder may be supported by a translating stage. The translating stage may be configured to translate the samples along a translation axis while the samples are partially submerged in the fluid. The translation axis may be parallel to the long axis of the fluid reservoir. The translation axis may be at a 45° angle relative to the surfaces of the first transparent side and the second transparent side. Optionally, the translation axis may be parallel to the surface of the third transparent side. In some embodiments, the translating stage may be configured to translate the sample holder along a first translation axis and a second translation axis. The first translation axis may be parallel to the surface of the first transparent side. The second translation axis may be parallel to the surface of the second transparent side. Optionally, the first translation axis and the second translation axis may be parallel to the surface of the third transparent side. The fluid reservoir may contain an index matching fluid. The index matching fluid may match the index of refraction of the sample.
[0051] The fluid reservoir and the sample holder may be positioned within a three-dimensional imaging system. For example, the fluid reservoir and the sample holder may be positioned within a light sheet imaging system as described herein. The sample holder may be positioned such that a first sample is positioned at the intersection of the illumination beam path and the imaging path, and translation of the sample along the translation axis enables imaging of sequential cross-sectional planes within the first sample, as shown in FIG. 1. Cross-sectional image planes within the sample may shift relative to the camera field of view depending on the position of the cross-sectional plane within the sample, as illustrated in panels (i), (ii), and (iii) of FIG. 1. In some embodiments, the cross-sectional image planes within the sample do not shift relative to the camera field of view depending on the position of the cross-sectional plane within the sample. Further translation along the translation axis may position a second sample at the intersection of the illumination beam path and the imaging path.
[0052] A second sample holder design, shown in FIG. 3, may comprise a fluid reservoir. The fluid reservoir may be configured to accommodate a sample holder holding a plurality of samples, as shown in FIG. 3A, partially submerged within the fluid in the fluid reservoir. The fluid reservoir may comprise a plurality of transparent sides (“Clear surfaces” denoted by arrows). Pairs of the transparent sides may be connected at a right angle, such that the plurality of transparent sides are arranged in a sawtooth pattern. The number of pairs of transparent sides may correspond to the number of samples that the sample holder may hold. The first transparent side of each pair may be configured to transmit an illumination light (e.g., a light sheet). The second transparent side of each pair may be configured to transmit an emission light (e.g., a fluorescent emission) toward a detector (e.g., a camera). An additional transparent side, shown behind the plurality of transparent sides in FIG. 3A, may be configured to transmit the illumination light out of the fluid reservoir after the illumination light passes through a sample to reduce reflections within the sample chamber. The remaining sides of the fluid reservoir may be opaque or partially opaque to reduce reflections within the fluid reservoir.
[0053] The sample, such as a plurality of tubes, may be positioned in a sample holder, as shown in FIG. 2B. The sample may be positioned such that the illumination beam is not obstructed as it enters and exits the sample. The sample holder and the fluid reservoir may be supported by a translating stage. The translating stage may be configured to translate the samples and the fluid reservoir along a first translation axis and a second translation axis. The first translation axis may be parallel to the surface of the first transparent side of each pair. The second translation axis may be parallel to the surface of the second transparent side of each pair. Optionally, the first translation axis and the second translation axis may be parallel to the surface of the additional transparent side. The translating stage may be configured to translate the samples and the fluid reservoir along a translation axis while the samples are partially submerged in the fluid. The translation axis may be at a 45° angle relative to the surfaces of the first transparent side of each pair and the second transparent side of each pair. Optionally, the translation axis may be parallel to the surface of the additional transparent side. The fluid reservoir may contain an index matching fluid. The index matching fluid may match the index of refraction of the sample. The fluid reservoir may further comprise a lid. The lid may comprise an O-ring to prevent leaking of the fluid.
[0054] The fluid reservoir and the sample holder may be positioned within a three-dimensional imaging system. For example, the fluid reservoir and the sample holder may be positioned within a light sheet imaging system as described herein. The sample holder may be positioned such that a first sample is positioned at the intersection of the illumination beam path and the imaging path, the illumination beam path passes through the first transparent side of a first pair, and the imaging path passes through the second transparent side of the first pair. Translation of the sample along the first translation axis enables imaging of sequential cross-sectional planes within the first sample, as shown in FIG. 3B. Translation along the second translation axis may position a second sample at the intersection of the illumination beam path and the imaging path as well as position the second pair of transparent sides within the illumination beam path and imaging path. As compared to the first sample holder design, the second sample holder design may reduce evaporation and wicking of the fluid, provide a smaller image and instrument footprint, and reduce image analysis complexity.
Image Processing Methods for Fluorescent Particle Quantification
[0055] The image processing methods described herein may be used to identify and quantify fluorescent particles within a three-dimensional image data set. The fluorescent particles may be droplets (e.g., droplets comprising nucleic acid molecules, protein molecules, cells). The fluorescent particles may be particles or cells suspended in solution. The fluorescent particles may be particles, molecules, or cells suspended in a gel matrix.
[0056] The resulting cross-sectional plane images may be combined to produce a three-dimensional image data set for each sample. A region of interest within each cross-section may be selected outlining the confines of the sample (FIG. 5A). This may ensure that only the fluorescent particles inside the tube cross-section are considered. A local background subtraction may be performed to suppress background fluorescence (FIG. 5B). A smoothing filter may be applied to smooth images, and a median filter may be applied to remove hot pixels. In some embodiments, the smoothing filter may be a Gaussian filter.
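A minimal sketch of the preprocessing steps described in this paragraph, written in Python with NumPy and SciPy. The function name, the filter sizes, and the Gaussian-blur-based local background estimate are illustrative assumptions rather than the exact pipeline used here.

```python
import numpy as np
from scipy import ndimage

def preprocess_stack(stack, roi_mask, bg_sigma=10.0, smooth_sigma=1.0, median_size=3):
    """Apply ROI masking, local background subtraction, smoothing, and hot-pixel removal
    to a 3D image stack (z, y, x). Parameter values are illustrative only."""
    stack = stack.astype(np.float32) * roi_mask                    # keep only voxels inside the tube cross-section
    background = ndimage.gaussian_filter(stack, sigma=bg_sigma)    # coarse local background estimate
    stack = np.clip(stack - background, 0, None)                   # suppress background fluorescence
    stack = ndimage.gaussian_filter(stack, sigma=smooth_sigma)     # smoothing (Gaussian) filter
    stack = ndimage.median_filter(stack, size=median_size)         # remove hot pixels
    return stack
```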
[0057] A three-dimensional (3D) convolution may be performed using a three-dimensional template to identify fluorescent particles (e.g., corresponding to fluorescent lipid droplets) substantially conforming to a desired shape and within an expected size range and eliminate fluorescent particles that do not conform to the expected size and shape parameters (FIG. 5C). In some embodiments, the three-dimensional template may be a spherical template, an elliptical template, or any other three-dimensional shape. A 3D local maxima analysis may be performed to identify potential fluorescent particle candidates having a positive fluorescent signal (FIG. 5D). Optionally, an additional 3D local maxima analysis may be performed to filter out asymmetric particle candidates (e.g., non-spherical particle candidates).
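The following sketch illustrates one way to implement the 3D template convolution and local-maxima candidate detection described above, assuming a spherical template; the voxel radius, neighborhood size, and response floor are illustrative parameters, not values taken from the disclosure.

```python
import numpy as np
from scipy import ndimage

def spherical_template(radius):
    """Normalized binary spherical kernel with the expected droplet radius (in voxels)."""
    r = int(np.ceil(radius))
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    kernel = (zz**2 + yy**2 + xx**2 <= radius**2).astype(np.float32)
    return kernel / kernel.sum()

def find_candidates(stack, radius=3, min_response=0.0):
    """3D template matching followed by local-maxima detection of particle candidates."""
    response = ndimage.convolve(stack, spherical_template(radius), mode="constant")
    # A voxel is a candidate if it equals the maximum of its neighborhood and exceeds a floor.
    local_max = response == ndimage.maximum_filter(response, size=2 * radius + 1)
    candidates = np.argwhere(local_max & (response > min_response))   # (z, y, x) coordinates
    return candidates, response
```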
[0058] A boundary may be defined between the bulk sample and a top layer of the sample (referred to as the “murky layer”) with lower signal than the bulk sample due to refractive index mismatch at the air-sample interface (FIG. 5E). Regions above the murky layer boundary may be grouped into a grid for local histogram analysis separate from grids containing images of the bulk sample. Each tube cross-section may be divided into multiple grids for separate thresholding (FIG. 5F). For each grid, a histogram of the log₁₀ of the intensity (log₁₀(intensity)) of each fluorescent particle candidate may be plotted. A threshold log₁₀(intensity) value for each grid may be automatically set to differentiate between noise and signal. The threshold may be set based on the assumptions that either (A) the majority of the particles belong to one population with some outliers (e.g., a population of lipid droplets where most are negative for signal, or a population of lipid droplets where most are positive for signal), or (B) there are two populations of particles. For case (A), a median plus or minus the standard deviation threshold may be used to automatically separate the signal positive and signal negative particles. For case (B), an optimization algorithm may be used to select a threshold that maximizes the separation between the signal positive population and the threshold, as measured by number of standard deviations of the positive population above the threshold.
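As an illustration of the per-grid thresholding for case (A) (a single dominant population with outliers), the sketch below applies a median plus a multiple of the standard deviation of log₁₀(intensity). The number of standard deviations and the direction of the cutoff are assumptions, and the case (B) optimization is omitted for brevity.

```python
import numpy as np

def grid_threshold_case_a(candidate_intensities, n_std=3.0):
    """Case (A): one dominant population with outliers.
    Threshold = median + n_std * std of log10(intensity) when the bulk population is
    signal-negative; use median - n_std * std when the bulk population is positive."""
    log_i = np.log10(candidate_intensities)
    threshold = np.median(log_i) + n_std * np.std(log_i)
    positive = log_i > threshold
    return threshold, positive

# Per-grid use: each grid of a tube cross-section is thresholded independently, e.g.
# for grid_id, intensities in grids.items():
#     thr, pos = grid_threshold_case_a(intensities)
```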
[0059] The ratio of signal positive particles to signal negative particles may be determined for each grid. It may be expected that the ratios of signal positive and signal negative particles should not vary substantially between grids despite variations in brightness. Large deviation of the ratio in a given grid from the median ratio may be indicative of improper threshold selection. Grids with ratios substantially higher or lower than the median ratio may be identified as outliers and may be automatically selected to undergo an additional round of refinement and threshold detection for one or more (e.g. two) distinct clusters of positive particles that are counted (FIG. 5G). Following additional refinement, remaining outliers may be removed, and the global positive particle count may be determined.
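A sketch of the ratio-based outlier check described above; the fold-change cutoff used to flag grids for an additional round of thresholding is an illustrative assumption.

```python
import numpy as np

def flag_outlier_grids(pos_counts, neg_counts, max_fold_deviation=3.0):
    """Flag grids whose positive/negative particle ratio deviates strongly from the
    median ratio across grids, indicating likely improper threshold selection."""
    ratios = np.asarray(pos_counts, dtype=float) / np.maximum(np.asarray(neg_counts, dtype=float), 1.0)
    median_ratio = max(np.median(ratios), 1e-9)
    fold = np.maximum(ratios, 1e-9) / median_ratio
    outliers = (fold > max_fold_deviation) | (fold < 1.0 / max_fold_deviation)
    return np.flatnonzero(outliers)   # indices of grids sent back for re-thresholding
```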
[0060] In some embodiments, as shown in FIGs. 8A to 8C, the noise and signal are differentiated using a convolutional neural network (CNN) in addition to, or in place of, differentiation based on the intensity-based cutoff. In some embodiments, the CNN learns to distinguish positive and negative droplets based on the 3D intensity profile of the fluorescent droplets as a whole, which includes information on the shape and size of the droplet, instead of using the maximum pixel intensity of each droplet and intensity-based histogram cutoffs.
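A minimal example of the kind of 3D CNN classifier described here, written in PyTorch. The crop size (16³ voxels around each candidate), the layer widths, and the two-class output head are illustrative assumptions; the disclosure does not specify a particular architecture.

```python
import torch
import torch.nn as nn

class DropletCNN(nn.Module):
    """Classifies a small 3D crop around each droplet candidate as positive or negative,
    using the full 3D intensity profile rather than a single peak-intensity cutoff."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),   # 16^3 -> 8^3
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),  # 8^3 -> 4^3
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 4 * 4 * 4, 2))

    def forward(self, x):            # x: (batch, 1, 16, 16, 16) intensity crops
        return self.classifier(self.features(x))

# crops = torch.randn(32, 1, 16, 16, 16)      # candidate crops extracted from the 3D stack
# logits = DropletCNN()(crops)                # per-candidate positive/negative scores
```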
[0061] In some embodiments, per FIG. 8A, the CNN determines fluorescent droplet candidates based on the 3D local maxima analysis. In some embodiments, the CNN is trained with synthetic data, actual data sets, or both, at various positive droplet occupancies and signal-to-noise ratios (SNR). In some embodiments, the CNN is additionally or alternatively trained with hard negative data. In some embodiments, training with such hard negative data improves the CNN’s ability to identify ambiguous fluorescent droplet candidates.
[0062] In some embodiments, the image processing methods herein further comprise cluster removal to eliminate overlapping fluorescent droplets. In some embodiments, cluster removal comprises eliminating droplets having a peak density below a set threshold. In some embodiments, the peak density is determined based on a droplet size. In some embodiments, the droplet size is calculated based on a full width half maximum. In some embodiments, cluster removal comprises close neighbor/volume exclusion/non-maximum suppression. In some embodiments, cluster removal comprises determining a number of close droplet neighbors for each droplet. In some embodiments, cluster removal further comprises generating a histogram of close droplet neighbors. In some embodiments, cluster removal further comprises eliminating outlier droplets having a determined quantity of neighbors over a set threshold. In some embodiments, the set threshold is based on a probability that the merged cluster of droplets is detected as a false positive. In some embodiments, such cluster removal assumes that all droplets are randomly distributed, and that density is relatively homogeneous across the entire tube volume. In some embodiments, the image processing methods herein further comprise calculating the SNR for each droplet individually, and eliminating droplets having an SNR below a set threshold.
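One possible implementation of the close-neighbor exclusion step is sketched below using a k-d tree over droplet centers; the neighbor radius and count threshold are illustrative placeholders for the probability-derived threshold described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_clustered_droplets(centers, neighbor_radius, max_neighbors):
    """Close-neighbor exclusion: count neighbors of each droplet center within a radius
    and drop droplets whose neighbor count exceeds a set threshold (likely merged clusters).
    Assumes droplets are randomly and roughly homogeneously distributed in the tube volume."""
    centers = np.asarray(centers, dtype=float)
    tree = cKDTree(centers)
    neighbor_counts = np.array(
        [len(tree.query_ball_point(c, r=neighbor_radius)) - 1 for c in centers]  # -1 excludes the droplet itself
    )
    keep = neighbor_counts <= max_neighbors
    return centers[keep], neighbor_counts
```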
[0063] In some embodiments, the image processing methods herein enable improved fluorescent particle candidate detection by mitigating the low signal-to-noise ratios, high signal and background variation, and murky interfaces inherent in many 3D images of PCR droplets in a tube.
Machine Learning
[0064] In some embodiments, machine learning algorithms are utilized to aid in detecting fluorescent droplet candidates. In some embodiments, the machine learning algorithms for detecting fluorescent droplet candidates employ one or more forms of labels including but not limited to human annotated labels and semi-supervised labels. The human annotated labels can be provided by a hand-crafted heuristic. The semi-supervised labels can be determined using a clustering technique to find fluorescent droplet candidates similar to those flagged by previous human annotated labels and previous semi-supervised labels. The semi-supervised labels can employ XGBoost, a neural network, or both.
[0065] In some embodiments, the training set is enlarged by generating synthetic 3D images. In some embodiments, the synthetic 3D images are generated by approximating droplets as spheres or blobs with different 3D aspect ratios.
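A sketch of how synthetic 3D training volumes of roughly spherical droplets with varying aspect ratios might be generated, as described above. The volume size, droplet count, radius range, and soft "blob" intensity profile are illustrative assumptions; the returned centers serve as ground-truth labels.

```python
import numpy as np

def synthetic_droplet_volume(shape=(64, 64, 64), n_droplets=50, radius_range=(2, 4),
                             aspect_range=(0.8, 1.2), rng=None):
    """Generate a synthetic 3D volume of roughly spherical droplets with varying
    3D aspect ratios; returned centers serve as ground-truth annotations."""
    rng = rng or np.random.default_rng(0)
    volume = np.zeros(shape, dtype=np.float32)
    zz, yy, xx = np.indices(shape)
    centers = rng.uniform([0, 0, 0], shape, size=(n_droplets, 3))
    for cz, cy, cx in centers:
        r = rng.uniform(*radius_range)
        az, ay, ax = rng.uniform(*aspect_range, size=3)     # per-axis aspect ratios
        d2 = ((zz - cz) / (r * az))**2 + ((yy - cy) / (r * ay))**2 + ((xx - cx) / (r * ax))**2
        volume += np.exp(-d2)                               # soft "blob" intensity profile
    return volume, centers
```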
[0066] In some embodiments, the semi-supervised label comprises an augmentation applied to the synthetic image, real data, or both. In some embodiments, the augmentation comprises a signal intensity, a signal variation, an SNR, a polydispersity, a transformation, or any combination thereof. In some embodiments, the transformation comprises a dilation, expansion, reflection, rotation, shear, stretch, translation, or any combination thereof. In some embodiments, the SNR augmentation adds noise (e.g., salt-and-pepper noise, striations) to the image to make the model more robust to noise. In some embodiments, such noise augmentation is annotated with ground truth based on the exact locations of generated droplets, augmentations, or both. In some embodiments, increased training set size and diversity improve the robustness of the machine learning algorithms herein.
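The sketch below illustrates a few of the augmentations mentioned above (intensity variation, a small rotation, and salt-and-pepper noise). The parameter ranges and the specific combination of augmentations are illustrative assumptions only.

```python
import numpy as np
from scipy import ndimage

def augment_volume(volume, rng=None, salt_pepper_frac=0.001, max_rotation_deg=15.0):
    """Illustrative augmentations: intensity scaling, small rotation about the z axis,
    and salt-and-pepper noise to make the model more robust to acquisition noise."""
    rng = rng or np.random.default_rng()
    out = volume * rng.uniform(0.7, 1.3)                              # signal-intensity variation
    angle = rng.uniform(-max_rotation_deg, max_rotation_deg)
    out = ndimage.rotate(out, angle, axes=(1, 2), reshape=False)      # rotation in the y-x plane
    n_noise = int(salt_pepper_frac * out.size)
    idx = tuple(rng.integers(0, s, n_noise) for s in out.shape)
    out[idx] = rng.choice([out.min(), out.max()], size=n_noise)       # salt-and-pepper noise
    return out
```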
[0067] In some embodiments, the machine learning algorithms for detecting fluorescent droplet candidates employ a distant supervision method. The distant supervision method can create a large training set seeded by a small hand-annotated training set. The distant supervision method can comprise positive-unlabeled learning with the training set as the ‘positive’ class. The distant supervision method can employ a logistic regression model, a recurrent neural network, or both.
[0068] Examples of machine learning algorithms can include a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, deep learning, or other supervised or unsupervised learning algorithms for classification and regression. The machine learning algorithms can be trained using one or more training datasets. In some embodiments, the machine learning algorithm utilizes regression modeling, wherein relationships between predictor variables and dependent variables are determined and weighted. In one embodiment, for example, the fluorescent droplet candidate can be a dependent variable derived from an intensity-based histogram.
[0069] A non-limiting example of a multi-variate linear regression model algorithm is seen below: probability = A0 + A1(X1) + A2(X2) + A3(X3) + A4(X4) + A5(X5) + A6(X6) + A7(X7)..., wherein Ai (A1, A2, A3, A4, A5, A6, A7, ...) are “weights” or coefficients found during the regression modeling; and Xi (X1, X2, X3, X4, X5, X6, X7, ...) are data collected from the user. Any number of Ai and Xi variables can be included in the model.
Computing system
[0070] Referring to FIG. 9, a block diagram is shown depicting an exemplary machine that includes a computer system 1300 (e.g., a processing or computing system) within which a set of instructions can execute for causing a device to perform or execute any one or more of the aspects and/or methodologies for static code scheduling of the present disclosure. The components in FIG. 9 are examples only and do not limit the scope of use or functionality of any hardware, software, embedded logic component, or a combination of two or more such components implementing particular embodiments.
[0071] Computer system 1300 may include one or more processors 1301, a memory 1303, and a storage 1308 that communicate with each other, and with other components, via a bus 1340. The bus 1340 may also link a display 1332, one or more input devices 1333 (which may, for example, include a keypad, a keyboard, a mouse, a stylus, etc.), one or more output devices 1334, one or more storage devices 1335, and various tangible storage media 1336. All of these elements may interface directly or via one or more interfaces or adaptors to the bus 1340. For instance, the various tangible storage media 1336 can interface with the bus 1340 via storage medium interface 1326. Computer system 1300 may have any suitable physical form, including but not limited to one or more integrated circuits (ICs), printed circuit boards (PCBs), mobile handheld devices (such as mobile telephones or PDAs), laptop or notebook computers, distributed computer systems, computing grids, or servers.
[0072] Computer system 1300 includes one or more processor(s) 1301 (e.g., central processing units (CPUs) or general purpose graphics processing units (GPGPUs)) that carry out functions. Processor(s) 1301 optionally contains a cache memory unit 1302 for temporary local storage of instructions, data, or computer addresses. Processor(s) 1301 are configured to assist in execution of computer readable instructions. Computer system 1300 may provide functionality for the components depicted in FIG. 9 as a result of the processor(s) 1301 executing non-transitory, processor-executable instructions embodied in one or more tangible computer-readable storage media, such as memory 1303, storage 1308, storage devices 1335, and/or storage medium 1336. The computer-readable media may store software that implements particular embodiments, and processor(s) 1301 may execute the software. Memory 1303 may read the software from one or more other computer-readable media (such as mass storage device(s) 1335, 1336) or from one or more other sources through a suitable interface, such as network interface 1320. The software may cause processor(s) 1301 to carry out one or more processes or one or more steps of one or more processes described or illustrated herein. Carrying out such processes or steps may include defining data structures stored in memory 1303 and modifying the data structures as directed by the software.
[0073] The memory 1303 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., RAM 1304) (e.g., static RAM (SRAM), dynamic RAM (DRAM), ferroelectric random access memory (FRAM), phase-change random access memory (PRAM), etc.), a read-only memory component (e.g., ROM 1305), and any combinations thereof. ROM 1305 may act to communicate data and instructions unidirectionally to processor(s) 1301, and RAM 1304 may act to communicate data and instructions bidirectionally with processor(s) 1301. ROM 1305 and RAM 1304 may include any suitable tangible computer-readable media described below. In one example, a basic input/output system 1306 (BIOS), including basic routines that help to transfer information between elements within computer system 1300, such as during start-up, may be stored in the memory 1303.
[0074] Fixed storage 1308 is connected bidirectionally to processor(s) 1301, optionally through storage control unit 1307. Fixed storage 1308 provides additional data storage capacity and may also include any suitable tangible computer-readable media described herein. Storage 1308 may be used to store operating system 1309, executable(s) 1310, data 1311, applications 1312 (application programs), and the like. Storage 1308 can also include an optical disk drive, a solid-state memory device (e.g., flash-based systems), or a combination of any of the above. Information in storage 1308 may, in appropriate cases, be incorporated as virtual memory in memory 1303.
[0075] In one example, storage device(s) 1335 may be removably interfaced with computer system 1300 (e.g., via an external port connector (not shown)) via a storage device interface 1325. Particularly, storage device(s) 1335 and an associated machine-readable medium may provide non-volatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for the computer system 1300. In one example, software may reside, completely or partially, within a machine-readable medium on storage device(s) 1335. In another example, software may reside, completely or partially, within processor(s) 1301.
[0076] Bus 1340 connects a wide variety of subsystems. Herein, reference to a bus may encompass one or more digital signal lines serving a common function, where appropriate. Bus 1340 may be any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. As an example and not by way of limitation, such architectures include an Industry Standard Architecture (ISA) bus, an Enhanced ISA (EISA) bus, a Micro Channel Architecture (MCA) bus, a Video Electronics Standards Association local bus (VLB), a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, an Accelerated Graphics Port (AGP) bus, HyperTransport (HTX) bus, serial advanced technology attachment (SATA) bus, and any combinations thereof.
[0077] Computer system 1300 may also include an input device 1333. In one example, a user of computer system 1300 may enter commands and/or other information into computer system 1300 via input device(s) 1333. Examples of input device(s) 1333 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device (e.g., a mouse or touchpad), a touchpad, a touch screen, a multi-touch screen, a joystick, a stylus, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), an optical scanner, a video or still image capture device (e.g., a camera), and any combinations thereof. In some embodiments, the input device is a Kinect, Leap Motion, or the like. Input device(s) 1333 may be interfaced to bus 1340 via any of a variety of input interfaces 1323 (e.g., input interface 1323) including, but not limited to, serial, parallel, game port, USB, FIREWIRE, THUNDERBOLT, or any combination of the above.
[0078] In particular embodiments, when computer system 1300 is connected to network 1330, computer system 1300 may communicate with other devices, specifically mobile devices and enterprise systems, distributed computing systems, cloud storage systems, cloud computing systems, and the like, connected to network 1330. Communications to and from computer system 1300 may be sent through network interface 1320. For example, network interface 1320 may receive incoming communications (such as requests or responses from other devices) in the form of one or more packets (such as Internet Protocol (IP) packets) from network 1330, and computer system 1300 may store the incoming communications in memory 1303 for processing. Computer system 1300 may similarly store outgoing communications (such as requests or responses to other devices) in the form of one or more packets in memory 1303 and communicate them to network 1330 from network interface 1320. Processor(s) 1301 may access these communication packets stored in memory 1303 for processing.
[0079] Examples of the network interface 1320 include, but are not limited to, a network interface card, a modem, and any combination thereof. Examples of a network 1330 or network segment 1330 include, but are not limited to, a distributed computing system, a cloud computing system, a wide area network (WAN) (e.g., the Internet, an enterprise network), a local area network (LAN) (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a direct connection between two computing devices, a peer-to-peer network, and any combinations thereof. A network, such as network 1330, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
[0080] Information and data can be displayed through a display 1332. Examples of a display 1332 include, but are not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display such as a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display, a plasma display, and any combinations thereof. The display 1332 can interface to the processor(s) 1301, memory 1303, and fixed storage 1308, as well as other devices, such as input device(s) 1333, via the bus 1340. The display 1332 is linked to the bus 1340 via a video interface 1322, and transport of data between the display 1332 and the bus 1340 can be controlled via the graphics control 1321. In some embodiments, the display is a video projector. In some embodiments, the display is a head-mounted display (HMD) such as a VR headset. In further embodiments, suitable VR headsets include, by way of non-limiting examples, HTC Vive,
Oculus Rift, Samsung Gear VR, Microsoft HoloLens, Razer OSVR, FOVE VR, Zeiss VR One, Avegant Glyph, Freefly VR headset, and the like. In still further embodiments, the display is a combination of devices such as those disclosed herein.
[0081] In addition to a display 1332, computer system 1300 may include one or more other peripheral output devices 1334 including, but not limited to, an audio speaker, a printer, a storage device, and any combinations thereof. Such peripheral output devices may be connected to the bus 1340 via an output interface 1324. Examples of an output interface 1324 include, but are not limited to, a serial port, a parallel connection, a USB port, a FIREWIRE port, a THUNDERBOLT port, and any combinations thereof.
[0082] In addition or as an alternative, computer system 1300 may provide functionality as a result of logic hardwired or otherwise embodied in a circuit, which may operate in place of or together with software to execute one or more processes or one or more steps of one or more processes described or illustrated herein. Reference to software in this disclosure may encompass logic, and reference to logic may encompass software. Moreover, reference to a computer- readable medium may encompass a circuit (such as an IC) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware, software, or both.
[0083] Those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
[0084] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0085] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by one or more processor(s), or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
[0086] In accordance with the description herein, suitable computing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, media streaming devices, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will also recognize that select televisions, video players, and digital music players with optional computer network connectivity are suitable for use in the system described herein. Suitable tablet computers, in various embodiments, include those with booklet, slate, and convertible configurations, known to those of skill in the art.
[0087] In some embodiments, the computing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the device’s hardware and provides services for execution of applications. Those of skill in the art will recognize that suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Those of skill in the art will recognize that suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Those of skill in the art will also recognize that suitable mobile smartphone operating systems include, by way of non-limiting examples, Nokia® Symbian®
OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®. Those of skill in the art will also recognize that suitable media streaming device operating systems include, by way of non-limiting examples, Apple TV®, Roku®, Boxee®, Google TV®, Google Chromecast®, Amazon Fire®, and Samsung® HomeSync®. Those of skill in the art will also recognize that suitable video game console operating systems include, by way of non-limiting examples, Sony® PS3®, Sony® PS4®, Microsoft® Xbox 360®, Microsoft Xbox One, Nintendo® Wii®, Nintendo® Wii U®, and Ouya®.
Non-transitory computer readable storage medium
[0088] In some embodiments, the platforms, systems, media, and methods disclosed herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked computing device. In further embodiments, a computer readable storage medium is a tangible component of a computing device. In still further embodiments, a computer readable storage medium is optionally removable from a computing device. In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid-state memory, magnetic disk drives, magnetic tape drives, optical disk drives, distributed computing systems including cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
Computer program
[0089] In some embodiments, the platforms, systems, media, and methods disclosed herein include at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable by one or more processor(s) of the computing device’s
CPU, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), computing data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages.
[0090] The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof.
Software Modules
[0091] In some embodiments, the platforms, systems, media, and methods disclosed herein include software, server, and/or database modules, or use of the same. In view of the disclosure provided herein, software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on a distributed computing platform such as a cloud computing platform. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location.
Databases
[0092] In some embodiments, the platforms, systems, media, and methods disclosed herein include one or more databases, or use of the same. In view of the disclosure provided herein, those of skill in the art will recognize that many databases are suitable for storage and retrieval of three-dimensional images and fluorescent droplet candidates. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. Further non-limiting examples include SQL, PostgreSQL, MySQL, Oracle, DB2, and Sybase. In some embodiments, a database is internet-based. In further embodiments, a database is web-based. In still further embodiments, a database is cloud computing-based. In a particular embodiment, a database is a distributed database. In other embodiments, a database is based on one or more local computer storage devices.
Terms and Definitions
[0093] Unless otherwise defined, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Any reference to “or” herein is intended to encompass “and/or” unless otherwise stated.
[0094] Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.
[0095] Whenever the term “no more than,” “less than,” “less than or equal to,” or “at most” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than” or “less than or equal to,” or “at most” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.
[0096] Where values are described as ranges, it will be understood that such disclosure includes the disclosure of all possible sub-ranges within such ranges, as well as specific numerical values that fall within such ranges irrespective of whether a specific numerical value or specific sub-range is expressly stated.
EXAMPLES
[0097] The following examples are illustrative and non-limiting to the scope of the devices, systems, fluidic devices, kits, and methods described herein.
EXAMPLE 1
Imaging Multiple Tubes in Three Dimensions Using Light Sheet Imaging
[0098] This example describes imaging multiple tubes using light sheet imaging. In a first assay, four PCR tubes arranged in a line containing suspensions of fluorescent and non-fluorescent lipid droplets were placed on a translating stage. The translating stage was positioned to hold one of the four PCR tubes at the intersection of a light sheet illumination path and a camera imaging path oriented at a 90° angle relative to the illumination path, as illustrated in FIG. 1. The translating stage was configured to translate along a single axis at a 45° angle with respect to the illumination path and the imaging path. The tubes were partially submerged in an open sample chamber, as illustrated in FIG. 2A and shown in FIG. 2B, containing index matching fluid matched to the lipid droplet suspension. The sample chamber had three flat, clear surfaces and was positioned in the imaging path so that the first clear surface was positioned orthogonal to the imaging path (“Clear surface 1”), and the second clear surface was positioned orthogonal to the light sheet illumination path (“Clear surface 2”), as shown in FIG. 1. The third clear surface (“Clear surface 3”) was positioned in the light sheet illumination path for the light sheet illumination to exit the sample chamber. While imaging, the tubes were translated along the axis of translation such that sequential cross-sectional planes of each tube were illuminated and imaged. The position of the tube cross-sectional plane image shifted within the camera field of view as the tube was translated along the 45° translation axis, as illustrated in FIG. 1 panels (i),
(ii), and (iii) showing a cross-sectional plane from a front edge of a tube (i), a cross-sectional plane from the middle of the tube (ii), and a cross-sectional plane from a back edge of the tube
(iii). The tubes were further translated along the 45° translation axis to move subsequent tubes into position to be imaged.
EXAMPLE 2
Imaging Multiple Tubes in Three Dimensions in an Enclosed Sample Holder Using Light Sheet Imaging
[0099] This example describes imaging multiple tubes in three dimensions in an enclosed sample holder using light sheet imaging.
[0100] In this assay, four PCR tubes arranged in a line containing suspensions of fluorescent and non-fluorescent lipid droplets are placed in a sample chamber positioned on a translating stage, as illustrated in FIG. 3A. The translating stage is positioned to hold one of the four PCR tubes at the intersection of a light sheet illumination path and a camera imaging path oriented at a 90° angle relative to the illumination path, as illustrated in FIG. 3B. The translating stage is configured to translate along two orthogonal axes, a y axis parallel to the illumination path, and a z axis parallel to the imaging axis. The tubes are partially submerged in a sample chamber, as illustrated in FIG. 3A, containing index matching fluid matched to the lipid droplet suspension and fitted with a lid with an O-ring seal. The sample chamber has two flat, clear surfaces for each tube and an additional flat, clear surface. The sample chamber is positioned in the imaging path so that the first clear surface corresponding to a first sample tube (“First clear surface”) is positioned perpendicular to the imaging path, and the second clear surface corresponding to the first sample tube (“Second clear surface”) is positioned perpendicular to the light sheet illumination path. The additional clear surface (“Additional clear surface”) is positioned in the light sheet illumination path for the light sheet illumination to exit the sample chamber. While imaging, the tubes and sample chamber are translated along the z axis such that sequential cross-sectional planes of each tube are illuminated and imaged. In contrast to the first assay, the position of the tube cross-sectional plane image remains centered within the camera field of view during translation. Once a tube has been scanned, the tubes and sample chamber are translated along the y axis to move a new tube into position to be imaged.
[0101] In an alternative configuration, the tubes and sample chamber are translated along both the y axis and z axis, at a 45° angle with respect to the illumination path and the imaging path, such that sequential cross-sectional planes of each tube are illuminated and imaged, as described in EXAMPLE 1. Once a tube has been scanned, the tubes and sample chamber are further translated along the y axis and the z axis to move a new tube into position to be imaged.
EXAMPLE 3
Automated Detection and Quantification of Fluorescently Labeled Droplets Imaged Using
Light Sheet Imaging
[0102] This example describes automated detection and quantification of fluorescently labeled droplets imaged using light sheet imaging. A sample containing fluorescently labeled droplets was imaged as described in EXAMPLE 1 and illustrated in FIGS. 5A-5G. The resulting cross-sectional plane images were combined to produce a three-dimensional image data set for each sample. A region of interest within each cross-section was selected outlining the confines of the sample (FIG. 5A). This was to ensure that only the fluorescent particles inside the tube cross-section were considered. A local background subtraction was performed to suppress background fluorescence (FIG. 5B). A Gaussian filter was applied to smooth images, and a median filter was applied to remove hot pixels.
[0103] A three-dimensional (3D) convolution was performed using a spherical template to identify substantially spherical fluorescent particles (corresponding to fluorescent lipid droplets) within an expected size range and eliminate fluorescent particles that do not conform to the expected size and shape parameters (FIG. 5C). A 3D local maxima analysis was performed to identify potential fluorescent particle candidates having a positive fluorescent signal (FIG. 5D). An additional 3D local maxima analysis was performed to filter out asymmetric particle candidates (e.g., non-spherical particle candidates).
[0104] A boundary was defined between the bulk sample and a top layer of the sample (referred to as the “murky layer”) with lower signal than the bulk sample due to refractive index mismatch at the air-sample interface (FIG. 5E). Regions above the murky layer boundary were grouped into a grid for local histogram analysis separate from grids containing images of the bulk sample. Each tube cross-section was divided into multiple grids for separate thresholding (FIG. 5F). For each grid, a histogram of the log₁₀ of the intensity (log₁₀(intensity)) of each fluorescent particle candidate was plotted. A threshold log₁₀(intensity) value for each grid was automatically set to differentiate between noise and signal. The threshold was set based on the assumptions that either (A) the majority of the particles belong to one population with some outliers (e.g., a population of lipid droplets where most are negative for signal, or a population of lipid droplets where most are positive for signal), or (B) there are two populations of particles. For case (A), a median plus or minus the standard deviation threshold was used to automatically separate the signal positive and signal negative particles. For case (B), an optimization algorithm was used to select a threshold that maximizes the separation between the signal positive population and the threshold, as measured by number of standard deviations of the positive population above the threshold.
[0105] The ratio of signal positive particles to signal negative particles was determined for each grid. It was expected that the ratios of signal positive and signal negative particles should not vary substantially between grids despite variations in brightness. Large deviation of the ratio in a given grid from the median ratio would be indicative of improper threshold selection. Grids with ratios substantially higher or lower than the median ratio were identified as outliers and were automatically selected to undergo an additional round of refinement and threshold detection (FIG. 5G). Following additional refinement, remaining outliers were removed, and the global positive particle count was determined.
EXAMPLE 4
High Sample Throughput Digital PCR System
[0106] This example describes a light sheet imaging setup and a method for using such a setup to image 8 or more digital PCR samples automatically without user intervention. The setup described herein is designed to allow for the imaging of 8 PCR samples using an 8-sample strip. However, it should be noted that the setup can be reconfigured to allow the analysis of fewer or more than 8 samples at a time.
[0107] FIG. 6 illustrates the procedural setup and steps for sample loading and scanning for this high throughput digital PCR analysis method. As shown in FIG. 6A, the 96 PCR samples are loaded onto the sample loading block using, e.g., a 96-well plate (shown as arrow #1), from which the samples can be picked up in an 8-sample strip, i.e., 8 samples at a time. Next, as shown in FIG. 6B, an 8-sample strip (shown as arrow #2) is picked up from the sample plate and moved on a translation stage (shown as arrow #3) for scanning of the sample. The PCR samples are then scanned using a focused light sheet from a collimated laser source (shown as white arrow #4) to generate light sheet images for each of the 8 samples of the sample strip. Using a resonant mirror, the light sheet is configured to pivot and/or translate at a frequency of about 1 kHz while the sample is scanned to increase image quality by reducing artifacts as described in EXAMPLE 5 below. After scanning is complete, the 8-sample strip is moved back to the sample loading block, and a second 8-sample strip is picked up and imaged.
EXAMPLE 5
Increased Image Quality through Reduction of Artifacts
[0108] This example describes methods for improving the image quality of light sheet imaging experiments. Strip and shadow artifacts in light sheet images can be caused by light scattering and light absorption during scanning. In order to reduce these artifacts, the imaging setup is configured (e.g., by using a resonant mirror) such that the light source (e.g., laser light sheet) is capable of undergoing certain movements relative to the sample during scanning, thereby eliminating or “averaging out” some (or most) of the scattering and absorption events. Such movements of the light source relative to the sample (e.g., digital PCR sample) are in the kHz range (e.g., from about 0.1 kHz to about 20 kHz) and are shown in FIG. 7. FIG. 7A shows that a pivoting light sheet improves image quality as shown in the “average” image on the far right of FIG. 7A. FIG. 7B shows that a translating light sheet improves image quality as shown in the “average” image on the far right of FIG. 7B.
[0109] Another method to reduce artifacts is using a dual side illumination, as illustrated in FIG. 7C. Here, a sample (e.g., a PCR sample tube) is illuminated on both sides such that the light from each light source only has to travel through half of the sample in order to illuminate the entire sample cross section.
EXAMPLE 6
Increased Image Quality through Reduction of Artifacts
[0110] This example describes automated detection and quantification of fluorescently labeled droplets imaged using light sheet imaging. A sample containing fluorescently labeled droplets was imaged as described in EXAMPLE 1 and illustrated in FIGS. 5A-5G. The resulting cross-sectional plane images were combined to produce a three-dimensional image data set for each sample. A region of interest within each cross-section was selected outlining the confines of the sample (FIG. 5A). This was to ensure that only the fluorescent particles inside the tube cross-section were considered. A local background subtraction was performed to suppress background fluorescence (FIG. 5B). A Gaussian filter was applied to smooth images, and a median filter was applied to remove hot pixels.
[0111] A three-dimensional (3D) convolution was performed using a spherical template to identify substantially spherical fluorescent particles (corresponding to fluorescent lipid droplets) within an expected size range and eliminate fluorescent particles that do not conform to the expected size and shape parameters (FIG. 5C). A 3D local maxima analysis was performed to identify potential fluorescent particle candidates having a positive fluorescent signal (FIG. 5D). An additional 3D local maxima analysis was performed to filter out asymmetric particle candidates (e.g., non-spherical particle candidates).
[0112] Fluorescent droplet candidates were determined based on the 3D local maxima analysis (FIG. 8A). The noise and signal were differentiated using a convolutional neural network (CNN) by determining shape and size data of the fluorescent droplets as a whole (FIG. 8B). Clusters were then removed to eliminate overlapping fluorescent droplets having a peak density below a set threshold. The peak density was determined based on a droplet size, and the droplet size was calculated based on a full width half maximum. Clusters were removed based on close neighbor/volume exclusion/non-maximum suppression and by determining a number of close droplet neighbors for each droplet. A histogram of close droplet neighbors was generated (FIG. 8C), and outlier droplets having a determined quantity of neighbors over a set threshold were eliminated.
[0113] While preferred embodiments of the present disclosure have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the disclosure. It should be understood that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure. It is intended that the following claims define the scope of the disclosure and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims

CLAIMS
WHAT IS CLAIMED IS:
1. A light sheet imaging system comprising: a collimated illumination source configured to emit a collimated beam along a beam path; a Powell lens positioned in the beam path after the collimated illumination source and configured to expand the collimated beam along a single axis; a first cylindrical lens positioned in the beam path after the Powell lens and configured to re-collimate the beam along a single axis, thereby producing a collimated beam having a short axis orthogonal to a direction of propagation and a long axis orthogonal to the short axis and the direction of propagation; and a second cylindrical lens positioned in the beam path after the first cylindrical lens and configured to focus the beam along the short axis to a focal plane.
2. The light sheet imaging system of claim 1, wherein a three-dimensional sample is positioned at the focal plane.
3. The light sheet imaging system of claim 2, wherein the three-dimensional sample comprises fluorescent droplets or fluorescently labeled cells.
4. The light sheet imaging system of claim 1, further comprising a camera comprising an imaging path orthogonal to the beam path, wherein the camera is positioned such that the imaging path intersects the beam path at the focal plane.
5. A sample holder comprising: a fluid reservoir configured to contain a fluid, the fluid reservoir comprising a base, a first transparent side, a second transparent side connected to the first transparent side at a right angle, and a third transparent side opposite the first transparent side and connected to the first transparent side and the second transparent side; a tube holder configured to hold a row of a plurality of tubes partially submerged in the fluid; and a translating stage supporting the tube holder and configured to translate the tube holder along a translation axis, wherein the translation axis is oriented at a 45° angle relative to the first transparent side and the second transparent side and parallel to the row of the plurality of tubes.
6. The sample holder of claim 5, further comprising a lid configured to fit on top of the first transparent side, the second transparent side, and the third transparent side, and configured to retain the fluid.
7. The sample holder of claim 6, wherein the lid further comprises an O-ring positioned at the contact point between the lid and the first transparent side, the second transparent side, and the third transparent side, wherein the O-ring is configured to prevent fluid leakage.
8. The sample holder of claim 6, wherein the lid comprises a hole configured to accommodate the tube holder and the translating stage.
9. The sample holder of any one of claims 5-8, further comprising a first opaque side connecting the first transparent side to the third transparent side, and a second opaque side connecting the second transparent side and the third transparent side.
10. A sample holder comprising: a fluid reservoir configured to contain a fluid, the fluid reservoir comprising a base, a first transparent side, a second transparent side connected to the first transparent side at a right angle, and a third transparent side opposite the first transparent side and connected to the first transparent side and the second transparent side; a tube holder configured to hold a row of a plurality of tubes partially submerged in the fluid; and a translating stage supporting the fluid reservoir and the tube holder and configured to translate the fluid reservoir and the tube holder along a first translation axis and a second translation axis, wherein the first translation axis is oriented parallel to the first transparent side and perpendicular to the second transparent side, and wherein the second translation axis is perpendicular to the first transparent side and parallel to the second transparent side, and wherein the first translation axis and the second translation axis are oriented at a 45° angle relative to the row of the plurality of tubes.
11. The sample holder of claim 10, further comprising a fourth transparent side connected to the second transparent side at a right angle and parallel to the first transparent side, and a fifth transparent side connected to the fourth transparent side at a right angle and parallel to the second transparent side.
12. The sample holder of claim 10, further comprising a plurality of transparent sides connected in series at right angles to the second transparent side wherein alternating transparent sides of the plurality of transparent sides are parallel to the first transparent side or parallel to the second transparent side.
13. The sample holder of any one of claims 10-12, further comprising a lid configured to fit on top of the first transparent side, the second transparent side, and the third transparent side, and configured to retain the fluid.
14. The sample holder of claim 13, wherein the lid further comprises an O-ring positioned at the contact point between the lid and the first transparent side, the second transparent side, and the third transparent side, wherein the O-ring is configured to prevent fluid leakage.
15. The sample holder of any one of claims 5-14, wherein the fluid comprises an index matching fluid.
16. The sample holder of any one of claims 5-15, wherein the sample holder comprises a polymer.
17. The sample holder of any one of claims 5-16, wherein the plurality of tubes comprises PCR tubes.
18. The sample holder of any one of claims 5-17, wherein the plurality of tubes comprises microcentrifuge tubes.
19. The sample holder of any one of claims 5-18, wherein the sample holder is configured to accommodate up to 12 tubes.
20. The sample holder of any one of claims 5-19, wherein the sample holder is configured to accommodate up to 8 tubes.
21. The sample holder of any one of claims 5-20, wherein the sample holder is configured to accommodate up to 4 tubes.
22. A method of image processing comprising: collecting a three-dimensional image data set comprising a plurality of cross-section images; identifying a plurality of substantially spherical fluorescent particles in the three- dimensional image data set; dividing one or more cross-section images of the plurality of cross-section images into a plurality of image grids; applying an intensity threshold independently to one or more image grids of the plurality of image grids; and identifying one or more signal positive particles from the plurality of substantially spherical fluorescent particles, wherein the one or more signal positive particles have intensities greater than the intensity threshold.
23. The method of claim 22, further comprising applying a smoothing filter to one or more cross-section images of the plurality of cross-section images prior to identifying the plurality of substantially spherical fluorescent particles.
24. The method of any one of claims 22-23, further comprising applying a median filter to one or more cross-section images of the plurality of cross-section images prior to identifying the plurality of substantially spherical fluorescent particles.
25. The method of any one of claims 22-24, wherein identifying the plurality of substantially spherical fluorescent particles comprises performing a three-dimensional convolution using a three-dimensional template.
26. The method of any one of claims 22-25, further comprising determining an intensity of one or more substantially spherical fluorescent particles of the plurality of substantially spherical fluorescent particles by a three-dimensional local maxima analysis.
27. The method of claim 26, further comprising performing a second three-dimensional maxima analysis to remove one or more asymmetric particles from the plurality of substantially spherical fluorescent particles.
28. The method of any one of claims 22-27, wherein the intensity threshold is determined by a median plus or minus a standard deviation of a histogram of a log10 of the intensities of the image grid.
29. The method of any one of claims 22-27, wherein the intensity threshold maximizes the separation between one or more intensities of the signal positive particles and the intensity threshold.
30. The method of any one of claims 22-29, further comprising repeating the applying the intensity threshold for one or more image grids corresponding to a ratio outlier and determining a refined ratio for one or more image grids corresponding to a ratio outlier after applying the intensity threshold.
31. The method of any one of claims 22-30, further comprising selecting a region of interest in one or more cross-section images of the plurality of cross-section images, wherein the region of interest comprises a sample area.
32. The method of any one of claims 22-31, further comprising identifying a background intensity and subtracting the background intensity from an intensity of one or more cross- section images of the plurality of cross-section images.
33. The method of any one of claims 22-32, further comprising selecting a subset of the plurality of substantially spherical fluorescent particles, wherein the substantially spherical fluorescent particles comprise a size larger than a minimum size threshold and smaller than a maximum size threshold.
34. The method of any one of claims 22-33, further comprising identifying a plurality of signal positive particles from the plurality of substantially spherical fluorescent particles comprising an intensity above a minimum threshold value.
35. The method of any one of claims 22-34, further comprising determining if a low signal region is present in a cross-section image, and, if a low signal region is present, identifying the low signal region and a high signal region within the region of interest of one or more cross-section images and designating a low signal image grid comprising the low signal region, or, if no low signal region is present, designating the region of interest as a high signal region.
36. The method of any one of claims 22-35, further comprising determining a ratio of a number of signal positive particles to a number of the plurality of spherical fluorescent particles for one or more image grids of the plurality of image grids.
37. The method of any one of claims 22-35, further comprising identifying ratio outliers comprising ratios that are higher or lower than a median of the ratios of the plurality of image grids.
38. The method of claim 37, further comprising removing the ratio outliers.
39. The method of any one of claims 22-38, wherein the method is automated.
40. The method of any one of claims 22-39, wherein the method is performed without user intervention.
41. The method of any one of claims 22-40, further comprising performing cluster removal to eliminate overlapping fluorescent droplets.
42. The method of any one of claims 22-41, wherein the cluster removal comprises eliminating droplets having a peak density below a set threshold.
43. The method of claim 42, wherein the peak density is determined based on a droplet size.
44. The method of claim 43, wherein the droplet size is calculated based on a full width half maximum.
45. The method of any one of claims 22-41, wherein the cluster removal comprises close neighbor/volume exclusion/non-maximum suppression.
46. The method of any one of claims 22-41, wherein cluster removal comprises determining a number of close droplet neighbors for each droplet.
47. The method of claim 46, wherein cluster removal further comprises generating a histogram of close droplet neighbors.
48. The method of claim 46 or 47, wherein cluster removal further comprises eliminating outlier droplets having a determined quantity of neighbors over a set threshold.
49. The method of claim 48, wherein the set threshold is based on a probability that a merged cluster of droplets is detected as a false positive.
50. A method of analyzing a plurality of samples, the method comprising automatically subjecting each sample of the plurality of samples to a laser beam, one sample at a time, thereby generating emission light in each sample of the plurality of samples, and wherein the plurality of samples comprises at least 4 samples.
51. The method of claim 50, wherein the plurality of samples are located in a sample holder.
52. The method of claim 51, wherein the sample holder is moved in a direction that is perpendicular to the illumination path of the laser beam to move a scanned first sample out of the laser beam and move a second sample into the beam for scanning.
53. The method of claim 52, further comprising, while subjecting each sample of the plurality of samples to the laser beam, moving the laser beam relative to the sample.
54. The method of claim 53, wherein moving the laser beam relative to the sample comprises pivoting movements of the laser beam.
55. The method of claim 53, wherein moving the laser beam relative to the sample comprises translating movements of the laser beam.
56. The method of claim 54 or 55, wherein the pivoting movements or translating movements reduce artifacts in the resulting image data compared to a method without pivoting movements or translating movements.
57. The method of any one of claims 54-56, wherein the laser beam moves relative to the sample with a frequency of at least about 0.1 kHz.
58. The method of claim 57, wherein the laser beam moves relative to the sample with a frequency of from about 0.1 kHz to about 20 kHz.
59. The method of claim 57, wherein the laser beam moves relative to the sample with a frequency of at least about 1 kHz.
60. The method of claim 50, wherein the laser beam is a sheet of laser light.
61. The method of claim 50, wherein the plurality of samples comprises at least 8 samples.
62. The method of claim 50, wherein the plurality of samples are subjected to a second laser beam.
63. The method of claim 62, wherein the second laser beam is configured to illuminate a site of a sample of the plurality of samples that is opposite to the site illuminated by the laser beam.
64. The method of claim 63, wherein each sample of the plurality of samples is subjected to the laser beam and the second laser beam simultaneously.
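By way of illustration of the grid-wise adaptive thresholding recited in claims 22 and 28 above, a minimal Python sketch is shown below. The grid dimensions and the multiplier k applied to the standard deviation are assumptions made for illustration, and the thresholds are computed directly from the log10 intensities rather than from an explicit histogram; a particle is then treated as signal positive if its intensity exceeds the threshold of the grid region containing it.

```python
import numpy as np

def grid_adaptive_thresholds(plane, n_grid=8, k=1.0):
    """Divide one cross-section image into an n_grid x n_grid grid and compute
    an independent intensity threshold for each region as the median plus
    k standard deviations of the log10 intensities in that region."""
    h, w = plane.shape
    thresholds = np.zeros((n_grid, n_grid))
    for i in range(n_grid):
        for j in range(n_grid):
            tile = plane[i * h // n_grid:(i + 1) * h // n_grid,
                         j * w // n_grid:(j + 1) * w // n_grid]
            vals = np.log10(tile[tile > 0])   # log10 of positive intensities
            if vals.size == 0:
                thresholds[i, j] = np.inf     # empty region: nothing passes
                continue
            thresholds[i, j] = np.median(vals) + k * np.std(vals)
    return 10.0 ** thresholds                 # back to linear intensity units
```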

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20899785.8A EP4073573A4 (en) 2019-12-10 2020-12-09 Methods and systems for three-dimensional lightsheet imaging
CN202080096198.1A CN115516367A (en) 2019-12-10 2020-12-09 Method and system for three-dimensional light sheet imaging
US17/836,737 US20230029710A1 (en) 2019-12-10 2022-06-09 Methods and systems for three-dimensional lightsheet imaging

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201962946373P 2019-12-10 2019-12-10
US62/946,373 2019-12-10
US202063082219P 2020-09-23 2020-09-23
US63/082,219 2020-09-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/836,737 Continuation US20230029710A1 (en) 2019-12-10 2022-06-09 Methods and systems for three-dimensional lightsheet imaging

Publications (1)

Publication Number Publication Date
WO2021119201A1 true WO2021119201A1 (en) 2021-06-17

Family

ID=76330751

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/064117 WO2021119201A1 (en) 2019-12-10 2020-12-09 Methods and systems for three-dimensional lightsheet imaging

Country Status (4)

Country Link
US (1) US20230029710A1 (en)
EP (1) EP4073573A4 (en)
CN (1) CN115516367A (en)
WO (1) WO2021119201A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023167877A1 (en) * 2022-03-01 2023-09-07 University Of Washington Apparatuses systems and methods for three dimensional microdissection of samples

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180284411A1 (en) * 2017-03-30 2018-10-04 Olympus Corporation Microscope apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013107298A1 (en) * 2013-07-10 2015-01-15 Carl Zeiss Microscopy Gmbh Arrangement for light-sheet microscopy
US10310248B2 (en) * 2016-08-18 2019-06-04 Olympus Corporation Microscope including a medium container containing an immersion medium in which a specimen container containing an immersion medium and a sample is immersed
CN109964163B (en) * 2016-09-16 2022-07-26 纽约市哥伦比亚大学理事会 Three-dimensional imaging using swept confocal alignment plane excitation and custom image separator
JP2018128604A (en) * 2017-02-09 2018-08-16 オリンパス株式会社 Microscope device
CN109060736A (en) * 2018-06-27 2018-12-21 北京天天极因科技有限公司 Mating plate fluorescent microscopic imaging device and detection method for the imaging of transparence drop

Also Published As

Publication number Publication date
CN115516367A (en) 2022-12-23
EP4073573A1 (en) 2022-10-19
EP4073573A4 (en) 2024-04-10
US20230029710A1 (en) 2023-02-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20899785
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2020899785
    Country of ref document: EP
    Effective date: 20220711