CN111465882A - Optical device, apparatus and system for assays


Info

Publication number
CN111465882A
Authority
CN
China
Prior art keywords
sample
optical
optical assembly
light
imaging
Prior art date
Legal status
Granted
Application number
CN201880020973.8A
Other languages
Chinese (zh)
Other versions
CN111465882B (en)
Inventor
斯蒂芬·Y·周
丁惟
戚骥
田军
张玙璠
董玮
Current Assignee
Shanghai Yisheng Biotechnology Co ltd
Yewei Co.,Ltd.
Original Assignee
Essenlix Corp
Priority date
Filing date
Publication date
Application filed by Essenlix Corp filed Critical Essenlix Corp
Priority to CN202310507084.7A (published as CN116794819A)
Publication of CN111465882A
Application granted
Publication of CN111465882B
Legal status: Active
Anticipated expiration

Classifications

    • G01N 21/8483: Investigating reagent band (investigating or analysing materials by the use of optical means)
    • B01L 3/502715: Containers for retaining a material to be analysed, with fluid transport by integrated microfluidic structures (lab-on-a-chip), characterised by interfacing components, e.g. fluidic, electrical, optical or mechanical interfaces
    • B01L 9/52: Supports specially adapted for flat sample carriers, e.g. for plates, slides, chips
    • G01N 21/6458: Fluorescence microscopy (spatially resolved fluorescence measurements; imaging)
    • G01N 21/78: Systems in which material is subjected to a chemical reaction, the progress or result of the reaction being investigated by observing the effect on a chemical indicator producing a change of colour
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 33/483: Physical analysis of biological material
    • G02B 21/0008: Microscopes having a simple construction, e.g. portable microscopes
    • G02B 21/0076: Optical details of the image generation arrangements using fluorescence or luminescence (confocal scanning microscopes)
    • G02B 21/12: Condensers affording bright-field illumination
    • G02B 21/16: Microscopes adapted for ultraviolet illumination; fluorescence microscopes
    • G02B 21/24: Base structure
    • G02B 7/023: Mountings, adjusting means, or light-tight connections for lenses, permitting adjustment
    • H04N 23/51: Housings (cameras or camera modules comprising electronic image sensors)
    • H04N 23/55: Optical parts specially adapted for electronic image sensors; mounting thereof
    • H04N 23/56: Cameras or camera modules provided with illuminating means
    • H04N 23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N 23/67: Focus control based on electronic image sensor signals
    • B01L 2200/025: Align devices or objects to ensure defined positions relative to each other

Landscapes

  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Optics & Photonics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • Molecular Biology (AREA)
  • Clinical Laboratory Science (AREA)
  • Biomedical Technology (AREA)
  • Hematology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Plasma & Fusion (AREA)
  • Dispersion Chemistry (AREA)
  • Biophysics (AREA)
  • Urology & Nephrology (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Microscopes, Condensers (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The present invention provides, among other things, devices and methods for simple, rapid, and sensitive assays.

Description

Optical device, apparatus and system for assays
Cross reference to related applications
This application claims priority to U.S. provisional patent applications Serial No. 62/456,590, filed February 8, 2017; Serial No. 62/459,554, filed February 15, 2017; Serial No. 62/460,075, filed February 16, 2017; Serial No. 62/456,504 (ESX045PRV), filed February 8, 2017; Serial No. 62/460,062 (ESX045PRV2), filed February 16, 2017; and Serial No. 62/457,133 (ESX046PRV), filed February 9, 2017, the entire contents of which are incorporated herein by reference for all purposes.
Technical Field
The present invention relates, among other things, to apparatus and methods for performing biological and chemical assays and computational imaging.
Background
In biological and chemical assays (e.g., diagnostic tests), there is often a need for simple, rapid, and sensitive assays (including imaging). The present invention provides, inter alia, devices and methods for simple, rapid and sensitive assays, including imaging.
Drawings
Those skilled in the art will appreciate that the drawings described below are for illustration purposes only. The drawings are not intended to limit the scope of the present invention in any way. The figures are not drawn to scale. In figures that present experimental data points, the lines connecting the data points are intended only to guide the eye and have no other significance.
FIGS. 1-A, 1-B, and 1-C are schematic illustrations of a system testing a sample in fluorescence illumination mode according to some embodiments of the present invention.
FIGS. 2-A, 2-B, and 2-C are schematic illustrations of a system testing a sample in bright field illumination mode according to some embodiments of the present invention.
Fig. 3 is a schematic exploded view of the optical adapter device in system 19 and system 20 according to some embodiments of the present invention.
Fig. 4 is a schematic cross-sectional view showing details of a system testing a sample in bright field illumination mode, particularly details of the device, according to some embodiments of the present invention.
Fig. 5 is a schematic cross-sectional view showing details of a system testing a sample in fluorescence illumination mode, particularly details of the device, according to some embodiments of the present invention.
FIGS. 6-A and 6-B are schematic cross-sectional views illustrating a design for stopping the control lever at a predetermined position when it is pulled outward from the device, according to some embodiments of the present invention.
Fig. 7 is a schematic diagram of the structure of a sample slide holding a QMAX apparatus according to some embodiments of the present invention.
Fig. 8 is a schematic view of a movable arm that switches between two predetermined rest positions according to some embodiments of the present invention.
Fig. 9 is a schematic diagram of how a slider may indicate whether a QMAX device is inserted in the correct orientation according to some embodiments of the present invention.
FIGS. 10-A, 10-B, and 10-C are schematic diagrams of a system for a smartphone colorimetric reader according to some embodiments of the present invention.
Fig. 11 is a schematic exploded view of an optical adapter device in a system according to some embodiments of the invention.
Figure 12 is a schematic cross-sectional view showing details of a system for reading a color chart, particularly details of the apparatus, according to some embodiments of the present invention.
FIGS. 13-A, 13-B, and 13-C are schematic diagrams of a system for a smartphone colorimetric reader according to some embodiments of the present invention.
Fig. 14 is a schematic exploded view of an optical adapter device in a system according to some embodiments of the invention.
FIGS. 15-A, 15-B, and 15-C are schematic diagrams showing details of a system for reading a color chart, and in particular of the device, according to some embodiments of the present invention.
Fig. 16-a shows a tomography apparatus consisting of an imaging sensor, a lens, and a QMAX structure, according to some embodiments of the invention.
Fig. 16-B shows an embodiment of a pillar array pattern for the letter E.
Fig. 16-C shows a thin lens model, which explains the effect of focal length on the captured image.
FIG. 16-D shows an image of the exemplary pillar array of FIG. 16-B taken by an imaging sensor.
Fig. 16-E shows a diagram of a scheme based on phase image retrieval.
Fig. 17-A illustrates an analyte detection and localization workflow comprising two phases, training and prediction, according to some embodiments of the invention.
FIG. 17-B illustrates a process of removing an item from an ordered list according to some embodiments of the invention.
Fig. 18-a shows an embodiment of a QMAX apparatus for cell imaging.
Detailed description of illustrative embodiments
The following detailed description illustrates some embodiments of the invention by way of example and not by way of limitation. The section headings and any sub-headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described in any way. The contents under the chapter title and/or the sub-title are not limited to the chapter title and/or the sub-title but are applicable to the entire description of the present invention.
The citation of any publication is for its disclosure prior to the filing date and should not be construed as an admission that the claims are not entitled to antedate such publication by virtue of prior invention. Further, the dates of publication provided may be different from the actual publication dates which may need to be independently confirmed.
Seven exemplary embodiments are shown below: one embodiment of an optical adapter for bright field and fluorescence microscopy imaging attached to a smartphone; one embodiment of an optical adapter for colorimetric measurements attached to a smartphone using a tilted fiber-optic endface as a light source; one embodiment of an optical adapter for colorimetric measurement attached to a smartphone using side illumination of a looped optical fiber as a light source; one embodiment of a tomography apparatus and method; one embodiment of machine learning assisted analysis and imaging; one embodiment of an apparatus and method for tissue staining and cell imaging; one embodiment of a dual lens imaging system.
A. Optical adapter for bright field and fluorescent microscopes attached to a smartphone
Bright field and fluorescence microscopy are very powerful techniques for testing certain properties of a sample and have wide applications in health monitoring, disease diagnosis, scientific education, and the like. Conventionally, however, taking microscopic images requires an expensive microscope and experienced personnel, which puts the technique beyond the reach of ordinary users. Although some recent inventions can turn a smartphone into a bright field microscope, bright field images alone give only very limited sample information.
The invention described herein addresses this problem by providing a system comprising an optical adapter and a smartphone. The optical adapter device is mounted on a smartphone, converting it into a microscope that can take fluorescent and bright field images of a sample. The system can be operated conveniently and reliably by ordinary personnel at any location. The optical adapter takes advantage of the existing resources of the smartphone, including the camera, light source, processor and display screen, which provides a low cost solution for bright field and fluorescent microscopy to users.
In the present invention, the optical adapter device includes a holder frame that fits over the upper portion of the smartphone and an optical box attached to the holder, the optical box having a sample receiver slot and illumination optics. In some prior art (U.S. Patent Publication Nos. 2004/029091 and 2011/0292198), the optical adapter is designed as a unitary piece that combines the clip-on mechanical components and the functional optical elements mounted on the smartphone. A problem with such designs is that the integral optical adapter must be redesigned for each particular model of smartphone. In the present invention, however, the optical adapter is divided into a holder frame, which only mounts onto the smartphone, and a universal optical box that contains all the functional components. For smartphones of different sizes, as long as the relative positions of the camera and the light source are the same, only the holder frame needs to be redesigned, which saves a large amount of design and manufacturing cost.
The optical box of the optical adapter comprises: a receiver slot that receives the sample slide and positions the sample within the field of view and focal range of the smartphone camera; bright field illumination optics for capturing bright field microscopic images of the sample; fluorescence illumination optics for capturing fluorescence microscopic images of the sample; and a lever that switches between the bright field illumination optics and the fluorescence illumination optics by sliding inward and outward in the optical box.
The receiver slot has a rubber door attached to it that can completely cover the slot to prevent ambient light from entering the optics box and being collected by the camera. In the prior art (U.S. Patent Publication No. 2016/0290916), the sample slot is always exposed to ambient light, which does not cause many problems because only bright field microscopy is performed. The present invention uses such a rubber door because, when performing fluorescence microscopy, ambient light can contribute considerable noise to the camera's image sensor.
In order to capture a good fluorescence microscopy image, it is desirable that little excitation light enter the camera and that the camera collect only the fluorescence emitted by the sample. However, for all common smartphones, an optical filter placed in front of the camera does not block the undesired wavelength range of the light emitted from the smartphone's light source well, because the beam emitted by the light source has a large divergence angle and the filter does not work well for non-collimated beams. Collimating optics could be designed to collimate the light emitted by the smartphone light source to address this issue, but this approach increases the size and cost of the adapter. In contrast, in the present invention, the fluorescence illumination optics direct the excitation light onto the sample at a large oblique angle of incidence, partly through the waveguide inside the sample slide and partly from the rear side of the sample slide, so that the excitation light is hardly collected by the camera, reducing the noise signal entering the camera.
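A rough, illustrative way to see why oblique excitation is rejected (the numerical aperture below is an assumed value, not a parameter given in this disclosure): the camera lens only collects rays within an acceptance half-angle set by its numerical aperture, so excitation reaching the sample far outside that cone mostly misses the sensor.

```latex
% Illustrative estimate only; NA = 0.25 is an assumption, not a disclosed parameter.
\[
  \theta_{\max} \approx \arcsin(\mathrm{NA}), \qquad
  \mathrm{NA} = 0.25 \;\Rightarrow\; \theta_{\max} \approx 14.5^{\circ}.
\]
\[
  \theta_{\mathrm{excitation}} \gg \theta_{\max}
  \;\Rightarrow\;
  \text{directly transmitted or specularly reflected excitation falls outside the collection cone.}
\]
```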
Bright field illumination optics in the adapter receive and redirect the light beam emitted by the light source to back-illuminate the sample at normal incidence.
Typically, the optics box also contains a lens mounted therein that is aligned with the camera of the smartphone, which magnifies the image captured by the camera. The images taken by the camera may be further processed by the processor of the smartphone and the analysis results output on the screen of the smartphone.
To implement bright field illumination and fluorescence illumination optics in the same optical adapter, a slidable lever is used in the present invention. The elements of the fluorescence illumination optics are mounted on the lever; when the lever is fully slid into the optics box, these elements block the optical path of the bright field illumination optics and switch the adapter to fluorescence illumination. When the lever slides out, the fluorescence illumination elements mounted on it move out of the optical path and the adapter switches to bright field illumination. This lever design allows the optical adapter to operate in both bright field and fluorescence illumination modes without the need to design two different single-mode optical boxes.
The control lever comprises two planes at different levels.
In some embodiments, the two planes may be joined together with a vertical rod and moved together into or out of the optical box. In some embodiments, the two planes may be separated, and each plane may be moved individually into or out of the optical box.
The upper control rod plane contains at least one optical element, which may be, but is not limited to, a filter. The upper control rod plane moves below the light source and the preferred distance between the upper control rod plane and the light source is in the range of 0 to 5 mm.
A portion of the bottom lever plane is not parallel to the image plane. The surface of this non-parallel portion of the bottom lever plane has a high mirror finish, with a reflectivity greater than 95%. The non-parallel portion of the bottom lever plane moves under the light source and deflects the light emitted from the light source to back-illuminate the sample area directly below the camera. The preferred tilt angle of the non-parallel portion of the bottom lever plane is in the range of 45 to 65 degrees, where the tilt angle is defined as the angle between the non-parallel bottom plane and the vertical plane.
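A minimal geometric sketch of how this tilt range maps to the illumination angle, assuming the beam from the phone's light source travels vertically downward and the non-parallel portion acts as a flat mirror (both are assumptions, not statements from the text):

```latex
% theta: mirror tilt from the vertical plane; alpha: angle of incidence on the sample,
% measured from the sample normal. Assumes a vertically downward incident beam.
\[
  \alpha = 180^{\circ} - 2\theta, \qquad
  45^{\circ} \le \theta \le 65^{\circ} \;\Rightarrow\; 50^{\circ} \le \alpha \le 90^{\circ}.
\]
```

Under these assumptions the stated tilt range yields only large oblique angles, consistent with the oblique rear illumination (and, at the grazing limit, edge coupling into the sample slide) described above.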
A portion of the bottom lever plane is parallel to the image plane and is located 1 mm to 10 mm below the sample. The surface of this parallel portion of the bottom lever plane is highly light absorbing, with a light absorption greater than 95%. The absorbing surface eliminates reflected light that would otherwise back-illuminate the sample at small angles of incidence.
Because the control rod slides in and out to switch the illumination optics, a stop design is used that includes a ball plunger and a groove on the control rod to stop the control rod at a predetermined position when it is pulled outward from the adapter. This allows the user to pull the lever with any force, while the lever stops at a fixed position where the operating mode of the optical adapter switches to bright field illumination.
A sample slide is mounted within the receiver slot to receive the QMAX device and position the sample in the QMAX device within the field of view and focus range of the smartphone camera.
The sample slide includes a fixed rail frame and a movable arm:
the frame rail is fixedly mounted in the receptacle slot of the optics box. The rail frame has a slide groove matching the width and thickness of the QMAX device so that the QMAX device can slide along the rail. The width and height of the track groove are carefully configured such that the QMAX device moves less than 0.5mm in a direction perpendicular to the sliding direction in the sliding plane and moves less than 0.2mm in the thickness direction of the QMAX device.
The frame rail has an open window under the field of view of the camera of the cell phone to allow light to back illuminate the sample.
The movable arm is pre-disposed in a slide groove of the rail frame and moves with the QMAX device to guide the motion of the QMAX device in the rail frame.
The movable arm is equipped with a stop mechanism having two predetermined stop positions. For one position, the arm will stop the QMAX device at a position where the fixed sample area on the QMAX device is just below the camera of the smartphone. For the other position, the arm will stop the QMAX device at a position where the sampling area on the QMAX device is outside the field of view of the smartphone, and the QMAX device can be easily removed from the track slot.
The movable arm is switched between the two stop positions by pressing the QMAX device and the movable arm together to the end of the rail groove and then releasing.
The movable arm may indicate whether the QMAX device is inserted in the correct orientation. The shape of one corner of the QMAX device is configured to be different from the other three right angles. The shape of the movable arm is matched to the specific shape of the one corner so that the QMAX device can only be slid in the correct direction to the correct position in the track groove.
FIGS. 1-A, 1-B, and 1-C are schematic diagrams of a system 19 for testing a sample in fluorescence illumination mode. In particular, FIGS. 1-B and 1-C are exploded views of system 19 shown from the front and back sides, respectively. The system 19 comprises a smartphone 1; an optical adapter device 18 mounted on the upper part of the smartphone 1; and a sample slide 5 inserted into the receiver slot 4 of the device 18 such that the sample on the sample slide 5 is located within the field of view and focal range of the camera module 1C in the smartphone 1. The control rod 8 is fully pushed into the device 18 so that the system 19 operates in fluorescence illumination mode. After the sample slide 5 is inserted, a rubber door 16 attached to the device 18 covers the receiver slot 4 to prevent ambient light from entering the receiver slot 4 and affecting the test.
Software (not shown) installed in the smartphone 1 analyzes the image collected by the camera module 1C while the light source 1L in the smartphone 1 emits light, so as to obtain some attributes of the sample, and outputs the result to the display screen 1f in the smartphone 1.
Fig. 2-a, 2-B, and 2-C are schematic diagrams of a system 20 for testing samples in bright field illumination mode. Specifically, fig. 2-B and 2-C are exploded views of the system 20 shown from the front and back sides, respectively. The system 20 comprises a smartphone 1; an optical adapter device 18 mounted on the upper part of the smartphone 1; a sample slide 5 inserted into the receiver slot 4 of the device 18 such that the sample on the sample slide 5 is located within the field of view and focus of the camera module 1C in the smartphone 1. The control rod 8 is pulled outwardly from the device 18 and stopped by a stopper (not shown) at a predetermined position in the device 18 so that the system 20 operates in a brightfield lighting mode.
FIG. 3 is a schematic exploded view of the optical adapter device 18 in system 19 and system 20. The device 18 includes a holder housing 2 mounted on the upper portion of the smartphone 1 and an optics box 3 attached to the housing 2, which includes a receiver slot 4, an optics compartment 3C, rails 6B and 6t that allow the control rod 8 to slide in and out, and a rubber door 16 inserted into slot 4s to cover the receiver slot 4. An optics insert 7 fits into the top of the optics compartment 3C, with an exit aperture 7L and an entrance aperture 7C aligned with the light source 1L and camera 1C in the smartphone 1 (as shown in Fig. 2-B). A lens 11 is mounted in the entrance aperture 7C of the optics insert 7 and is configured such that a sample inserted into the receiver slot 4 is within the working distance of the camera 1C; the lens 11 magnifies the image of the sample captured by the camera 1C (as shown in Figs. 2-B and 1-B). Mirrors 13 and 14 mounted in the optics box 3 form the bright field illumination optics, while the band pass filter 15, the light absorber 9, and the tilted mirror 10 carried on the control rod 8 form the fluorescence illumination optics; sliding the control rod 8 along the rails 6B and 6t switches the adapter between the two illumination modes (see Figs. 4 and 5). A ball plunger 17 mounted in the rail acts as a stop that holds the control rod 8 at a predetermined position (see Figs. 6-A and 6-B).
FIG. 4 is a schematic cross-sectional view showing details of system 20 testing a sample in bright field illumination mode, particularly details of the device 18. This figure shows the function of the elements described above with reference to FIG. 3. The control rod 8 is pulled outward from the device 18 and stopped by the stop 17 (shown in FIG. 3) at a predetermined position, such that mirror 13 and mirror 14 are exposed to and aligned with the camera 1C and the light source 1L. The light source 1L emits beam BB1 away from the smartphone 1. Beam BB1 is deflected by mirror 14 by 90 degrees into beam BB2, which is further deflected by mirror 13 by 90 degrees into beam BB3. Beam BB3 back-illuminates the sample in the sample slide 5 at a normal angle of incidence. The lens 11 produces a magnified image of the sample on the image sensor plane of the camera 1C. The smartphone 1 captures and processes the image to obtain certain properties of the sample.
FIG. 5 is a schematic cross-sectional view showing details of system 19 testing a sample in fluorescence illumination mode, particularly details of the device 18. This figure shows the function of the elements described above with reference to FIG. 3. The control rod 8 (shown in FIG. 3) is fully inserted into the device 18 so that the light absorber 9 and the tilted mirror 10 lie in the field of view of the camera 1C and the light source 1L and block the light path between the light source 1L and the mirror pair 13 and 14. The band pass filter 15 is located directly below the light source 1L. The light source 1L emits a beam BF1 away from the smartphone 1, and the filter 15 passes only the wavelength range of BF1 that matches the excitation wavelength of the fluorescent sample in the sample slide 5. A portion of the filtered beam impinges on the edge of the transparent sample slide 5 and couples into a beam BF3 that travels inside the sample slide as in a waveguide and illuminates the sample area below the lens 11. Another portion of the filtered beam impinges on the tilted mirror 10, which deflects it so that it back-illuminates the same sample area at a large oblique angle of incidence; light that would otherwise back-illuminate the sample at small angles of incidence is absorbed by the light absorber 9. Because the excitation light reaches the sample only at large oblique angles, it is hardly collected by the camera 1C, and the camera mainly captures the fluorescence emitted by the sample. The smartphone 1 then processes the image to obtain certain properties of the sample.
Figs. 6-A and 6-B are schematic cross-sectional views illustrating a design for stopping the control rod 8 at a predetermined position when the control rod 8 is pulled outward from the device 18. The ball plunger 17 is mounted in the side wall of the track groove 6t, and a groove 8g is drilled in the side wall of the control rod 8, the shape of the groove 8g matching the shape of the ball in the ball plunger 17. When the control rod 8 is pulled outward from the device 18 and has not yet reached the predetermined position shown in Fig. 2, the ball in the ball plunger 17 is pressed into its body by the side wall of the control rod 8 (as shown in Fig. 6-A), so that the control rod 8 can slide along the rail 6t. When the groove 8g on the control rod 8 reaches the position of the ball plunger 17, the ball in the ball plunger 17 jumps into the groove 8g and stops the control rod 8, as shown in Fig. 6-B.
Fig. 7 is a schematic diagram of the structure of a sample slide holding a QMAX device. The sample slide comprises: a rail frame having a rail groove to slide the QMAX device therealong; and a movable arm which is provided in the track groove in advance, and moves together with the QMAX device to guide the movement thereof. The movable arm is equipped with a stop mechanism to stop the QMAX device at two predetermined stop positions. The width and height of the track groove are carefully configured such that the QMAX device moves less than 0.5mm in a direction perpendicular to the sliding direction in the sliding plane and moves less than 0.2mm in the thickness direction of the QMAX device.
Fig. 8 is a schematic view of the movable arm switching between two predetermined rest positions. By pressing the QMAX device and the movable arm together to the end of the rail slot and then releasing, the QMAX card can stop at position 1, where the sample area is out of the field of view of the smartphone camera for easily removing the QMAX device from the slider, or at position 2, where the sample area is just below the field of view of the smartphone camera for capturing images.
Fig. 9 is a schematic diagram of how the slider indicates whether the QMAX device is inserted in the correct orientation. The shape of one corner of the QMAX device is configured to be different from the other three right-angled corners. The shape of the movable arm is matched to that specially shaped corner so that the QMAX device can only slide in the correct orientation to the correct position in the track groove. If the QMAX device is flipped or inserted from the wrong side, the portion of the QMAX device left outside the slider is longer than when the QMAX device is correctly inserted.
When both fluorescence and bright field images are available, knowledge of the fluorescence image can be used to process the bright field image, or knowledge of the bright field image can be used to process the fluorescence image, or both images can be processed together. The fields of view of the fluorescence image and the bright field image may be different; thus, the two images are not spatially aligned pixel-to-pixel.
To account for the misalignment between the fluorescence image and the bright field image, image registration may be applied to the two images. Image registration finds a geometric transformation that relates spatial locations in one image to those in the other. Various image registration algorithms may be used to align the fluorescence and bright field images, including but not limited to feature-point-based, cross-correlation-based, and Fourier-alignment-based methods. The image registration outputs a geometric transformation that maps the spatial locations (coordinates) of one image onto the other image.
After the fluorescent and bright field images are aligned, information from both images can be used to improve the processing of one image, or both images can be processed together.
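A minimal sketch of one feature-point-based registration of the two images, written with OpenCV; the OpenCV calls are standard, but the choice of ORB features, the parameter values, and the file names are illustrative assumptions rather than part of this disclosure.

```python
# Hypothetical sketch: estimate a homography mapping fluorescence-image coordinates
# to bright-field-image coordinates using ORB features and RANSAC.
import cv2
import numpy as np

def register_images(fluorescence, brightfield):
    """Return a 3x3 homography mapping fluorescence pixel coordinates to bright-field ones."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(fluorescence, None)
    kp2, des2 = orb.detectAndCompute(brightfield, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched feature pairs before fitting the transformation
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Example usage (file names are placeholders):
# fluo = cv2.imread("fluorescence.png", cv2.IMREAD_GRAYSCALE)
# bf = cv2.imread("brightfield.png", cv2.IMREAD_GRAYSCALE)
# H = register_images(fluo, bf)
# aligned = cv2.warpPerspective(fluo, H, (bf.shape[1], bf.shape[0]))
```

Cross-correlation-based or Fourier-based alignment would follow the same pattern, differing only in how the geometric transformation is estimated.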
Examples:
A1. an optical adapter, comprising:
i. a support frame, and
ii. an optics box removably attached to the support frame,
wherein the support frame is configured to removably mount on a mobile device and align the optics box with a camera and illumination source integrated in the mobile device;
wherein the optics box comprises a sample receiver slot and illumination optics.
B1. An optical system, comprising:
i. the optical adapter of embodiment A1; and
ii. a QMAX card comprising a first plate and a second plate, wherein the first plate and the second plate compress a liquid sample into a layer of uniform thickness of less than 200 μm; and
iii. a slider configured to receive the QMAX card and to be inserted into the optics box.
C1. The adapter or system of any preceding implementation, wherein the mobile device is a smartphone.
C2. The adapter or system of any preceding embodiment wherein the holder frame comprises a holder housing configured to be replaceable with other holder housings having different dimensions for different mobile devices.
C3. The adapter or system of any preceding implementation, wherein the holder frame is sized to removably fit the optical adapter to an upper portion of a mobile device.
C4. The adapter or system of any preceding embodiment, wherein the optics box of the optical adapter comprises:
i. a receiver slot configured to receive a QMAX card in a sample slide and position it within the field of view and focal range of the camera;
ii. bright field illumination optics configured to capture bright field microscopic images of the sample;
iii. fluorescence illumination optics configured to capture fluorescence microscopic images of the sample; and
iv. a control rod configured to switch between the bright field illumination optics and the fluorescence illumination optics by sliding inward and outward in the optics box.
C5. The adapter or system of any preceding implementation, wherein the receiver slot comprises a rubber door that can completely cover the slot to prevent ambient light from entering the light box to be collected by the camera.
C6. The adapter or system of any preceding embodiment, wherein the bright field illumination optics in the adapter are configured to receive and redirect the light beam emitted by the light source to backlight the sample at a normal angle of incidence.
C7. The adapter or system of any preceding embodiment wherein the optics box further comprises a lens mounted therein and aligned with a camera of the mobile device, the lens magnifying an image captured by the camera.
C8. The adapter or system of any preceding implementation, wherein the image captured by the camera is further processed by a processor of the mobile device and the analysis results are output on a screen of the mobile device.
C9. The adapter or system of any preceding embodiment, wherein the control rod is slidable and configured to implement both bright field illumination and fluorescent illumination optics in the same optical adapter.
C10. An adapter or system according to any preceding embodiment, wherein the optics of the fluorescence illumination optics are mounted on a lever.
C11. The adapter or system of any preceding embodiment, wherein, when the lever is fully slid into the optics box, the control rod with the fluorescence illumination optics blocks the optical path of the bright field illumination optics and switches the illumination optics to the fluorescence illumination optics.
C12. The adapter or system of any preceding embodiment, wherein when the lever slides out, the fluorescent illumination optics mounted on the lever moves out of the optical path and switches the illumination optics to bright field illumination optics.
C13. The adapter or system of any preceding embodiment, wherein the lever comprises two planes at different heights.
C14. An adaptor or system according to any preceding embodiment, wherein the two planes are connected together by a vertical rod and moved together into or out of the optics box.
C15. The adapter or system of any preceding embodiment wherein the two planes are separable and each plane is individually movable into and out of the optics box.
C16. An adapter or system according to any preceding embodiment wherein the upper control rod plane comprises at least one optical element which may be, but is not limited to, a filter.
C17. An adaptor or system according to any preceding embodiment, wherein the upper rod plane moves below the light source and the preferred distance between the upper rod plane and the light source is in the range 0 to 5 mm.
C18. The adapter or system of any preceding embodiment, wherein a portion of the bottom lever plane is not parallel to the image plane.
C19. An adapter or system as in any preceding embodiment, wherein the surface of the non-parallel portion of the bottom control rod plane has a mirror finish with a reflectivity of greater than 95%.
C20. The adapter or system of any preceding embodiment, wherein the non-parallel portion of the bottom lever plane moves under the light source and deflects light emitted from the light source to reverse illuminate the sample area directly under the camera.
C21. An adapter or system according to any preceding embodiment, wherein the preferred angle of inclination of the non-parallel portion of the bottom lever plane is in the range of 45 degrees to 65 degrees, and the angle of inclination is defined as the angle between the non-parallel bottom plane and the vertical plane.
C22. The adapter or system of any preceding embodiment, wherein a portion of the bottom lever plane is parallel to the image plane and is located 1mm to 10mm below and away from the sample.
C23. An adapter or system as in any preceding embodiment, wherein the surface of the parallel portion of the bottom lever plane is highly light absorbing, having a light absorption of greater than 95%.
C24. An adaptor or system according to any preceding embodiment, wherein the absorbing surface is for eliminating reflected light that is back-illuminated on the sample at small angles of incidence.
C25. The adapter or system of any preceding embodiment, wherein the lever comprises a stop configured to stop the lever.
C26. An adaptor or system according to any preceding embodiment, wherein the stop comprises a ball plunger and a groove on the control rod, which stop the control rod at a predetermined position when it is pulled outward from the adapter.
C27. An adapter or system according to any preceding embodiment, wherein the stop is configured to allow a user to pull the lever with any force, but to stop the lever at a fixed position where the mode of operation of the optical adapter is switched to brightfield illumination.
C28. The adapter or system of any preceding embodiment, wherein the sample slide is mounted within the receiver slot to receive the QMAX device and position the sample in the QMAX device within the field of view and focus range of the smartphone camera.
C29. An adapter or system according to any preceding embodiment wherein the movable arm is switched between the two stop positions by pressing the QMAX device and the movable arm together to the end of the track slot and then releasing.
C30. The adapter or system of any preceding embodiment, wherein the movable arm may indicate whether the QMAX device is inserted in the correct orientation.
C31. An adapter or system according to any preceding embodiment, wherein one corner of the QMAX device is shaped differently from the other three right-angled corners, and wherein the shape of the movable arm is matched to that specially shaped corner so that the QMAX device can only slide in the correct orientation to the correct position in the track groove.
C32. The adapter or system of any preceding embodiment, wherein the sample slide comprises a fixed track frame and a movable arm:
C33. the adapter or system of any preceding embodiment, wherein the frame rail is fixedly mounted in a receiver slot of the optics box; the rail frame has a slide groove matching the width and thickness of the QMAX device so that the QMAX device can slide along the rail. The width and height of the track groove are carefully configured such that the QMAX device moves less than 0.5mm in a direction perpendicular to the sliding direction in the sliding plane and moves less than 0.2mm in the thickness direction of the QMAX device.
C34. The adapter or system of any preceding embodiment, wherein the frame rail has an open window below the field of view of the camera of the smartphone to allow light to backlight the sample.
C35. The adapter or system of any preceding embodiment wherein the movable arm is pre-built into a slide slot of the track frame and moves with the QMAX device to guide the movement of the QMAX device in the track frame.
C36. An adaptor or system according to any preceding embodiment, wherein the moveable arm is provided with a stop mechanism having two predetermined stop positions.
B. An optical adapter (angled fiber end illumination) for connecting the colorimetric reader to the smartphone.
Colorimetric assay is a very powerful technique and has wide applications in health monitoring, disease diagnosis, chemical assays, and the like. The key factor in obtaining accurate colorimetric assay results is accurate quantification of color change. Typically, the colorimetric test strip is analyzed for color change by comparing the color change to a standard color card. However, this comparison is done by the human eye and is susceptible to ambient light conditions, which limits the accuracy of quantifying color changes.
The invention described herein addresses this problem by providing a system that includes an optical adapter and a cell phone. The optical adapter device is mounted on the cell phone, converting it into a colorimetric reader that can provide consistent and uniform illumination of the front surface of the colorimetric test card and capture an image of the sample to analyze the color change. The system can be operated conveniently and reliably by ordinary personnel at any location. The optical adapter utilizes the existing resources of the smartphone, including the camera, light source, processor, and display screen, which provides a low-cost solution for accurately quantifying colorimetric color changes.
In the present invention, the optical adapter device includes a holder frame that fits over the upper portion of the phone and an optical box attached to the holder, the optical box having a sample receiver slot and illumination optics. In some prior art adapters that attach to cell phones, the adapter is designed as a unitary piece that combines the clamping mechanical parts and the functional elements mounted on the phone. A problem with such designs is that the integral adapter must be redesigned for each particular model of smartphone. In the present invention, however, the optical adapter is divided into a holder frame, which only mounts onto the smartphone, and a universal optical box that contains all the functional components. For smartphones of different sizes, as long as the relative positions of the camera and the light source are the same, only the holder frame needs to be redesigned, which saves a large amount of design and manufacturing cost.
The optical box of the optical adapter includes: a receiver slot to receive and position the colorimetric sample within the field of view and focal range of the smartphone camera; and illumination and imaging optics that produce uniform illumination across the sample and capture an image of the sample independent of any external conditions.
In order to capture an image of the sample that accurately represents the color change, it is desirable that the area of the sample under the camera be uniformly illuminated. However, for all common smartphones there is always a distance between the light source and the camera. When the sample is placed very close to the camera of the smartphone, the area that is uniformly front-illuminated by the light source lies directly below the light source, but not within the field of view of the camera, unless additional illumination optics are used. To solve this problem, in the present invention, a tilted large-core optical fiber is used to redirect the light beam emitted from the light source so as to uniformly illuminate the sample region directly below the camera.
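As a rough aiming estimate (the dimensions are assumed for illustration and are not given in this disclosure), the fiber tilt needed to steer light from the off-axis source onto the area imaged by the camera is set by the lateral offset between the light source and the camera and by the height of the fiber exit face above the test card:

```latex
% s: lateral LED-to-camera offset; h: height of the fiber exit face above the card.
% Both numbers are assumptions used only to illustrate the geometry.
\[
  \varphi \approx \arctan\frac{s}{h}, \qquad
  s = 10\,\mathrm{mm},\; h = 15\,\mathrm{mm} \;\Rightarrow\; \varphi \approx 34^{\circ}.
\]
```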
In addition, in the present invention, both end faces of the optical fiber are given a matte finish so that they act as diffusers; the end face facing the sample then serves as a surface light source and produces more uniform illumination on the sample.
Typically, the optics box also contains a lens mounted therein that is aligned with the camera of the smartphone, which brings the sample within the focal range of the camera. The image captured by the camera will be further processed by the processor of the smartphone to analyze the color change and output the analysis result on the screen of the smartphone.
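A minimal sketch of the color-change analysis step described above, assuming the software compares the mean color of the sample region with that of a reference patch in an approximately perceptual color space; the region coordinates, file name, and use of the simple CIE76 difference are illustrative assumptions, not part of this disclosure.

```python
# Hypothetical sketch: quantify a colorimetric color change as a Lab-space distance
# between the sample region and a reference patch on the same card.
import cv2
import numpy as np

def mean_lab(image_bgr, roi):
    """Mean color (OpenCV 8-bit Lab) inside a rectangular region of interest (x, y, w, h)."""
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    lab = cv2.cvtColor(patch, cv2.COLOR_BGR2LAB).astype(np.float32)
    return lab.reshape(-1, 3).mean(axis=0)

def color_change(image_bgr, sample_roi, reference_roi):
    """CIE76-style distance; larger values indicate a stronger color change."""
    return float(np.linalg.norm(mean_lab(image_bgr, sample_roi) - mean_lab(image_bgr, reference_roi)))

# Example usage (file name and ROIs are placeholders):
# img = cv2.imread("colorimetric_card.jpg")
# change = color_change(img, sample_roi=(400, 300, 50, 50), reference_roi=(100, 300, 50, 50))
```

Note that OpenCV's 8-bit Lab conversion is only an approximation of true CIE Lab, which is acceptable here because only relative changes are compared.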
The sample slide is mounted within the receiver slot to receive the QMAX device and position the sample in the QMAX device within the field of view and focus range of the smartphone camera.
The sample slide includes a fixed rail frame and a movable arm:
the frame rail is fixedly mounted in the receiver slot of the optics box. The rail frame has a slide groove matching the width and thickness of the QMAX device so that the QMAX device can slide along the rail. The width and height of the track groove are carefully configured such that the QMAX device moves less than 0.5mm in a direction perpendicular to the sliding direction in the sliding plane and moves less than 0.2mm in the thickness direction of the QMAX device.
The frame rail has an open window under the field of view of the camera of the cell phone to allow light to back illuminate the sample.
The movable arm is pre-disposed in a slide groove of the rail frame and moves with the QMAX device to guide the motion of the QMAX device in the rail frame.
The movable arm is equipped with a stop mechanism having two predetermined stop positions. For one position, the arm will stop the QMAX device at a position where the fixed sample area on the QMAX device is just below the camera of the smartphone. For the other position, the arm will stop the QMAX device at a position where the sampling area on the QMAX device is outside the field of view of the smartphone, and the QMAX device can be easily removed from the track slot.
The movable arm is switched between the two stop positions by pressing the QMAX device and the movable arm together to the end of the rail groove and then releasing.
The movable arm may indicate whether the QMAX device is inserted in the correct orientation. The shape of one corner of the QMAX device is configured to be different from the other three right-angled corners. The shape of the movable arm is matched to that specially shaped corner so that the QMAX device can only slide in the correct orientation to the correct position in the track groove.
FIGS. 10-A, 10-B, and 10-C are schematic diagrams of a system 10 for a smartphone colorimetric reader. Specifically, FIGS. 10-B and 10-C are exploded views of the system 10 shown from the front and back sides, respectively. The system 10 includes a smartphone 1; an optical adapter device 13 mounted on the upper portion of the smartphone 1; and a colorimetric test card 137 inserted into a receiver slot 136 of the device 13 such that the sample area on the test card 137 is within the field of view and focal range of the camera module 1C in the smartphone 1. Software (not shown) installed in the smartphone 1 analyzes the images collected by the camera module 1C while the light source 1L in the smartphone 1 emits light, so as to analyze the color change of the colorimetric test, and outputs the results to the display screen 1f of the smartphone 1.
Fig. 11 is a schematic exploded view of the optical adapter device 13 in the system 10. The device 13 includes a holder housing 131 mounted on the upper portion of the smartphone 1 and an optical box 132 attached to the housing 131, which contains a receiver slot 136 and an optical chamber 132C. An optical insert 134 fits into the top of the optical chamber 132C, with an exit aperture 134L and an entrance aperture 134C aligned with the light source 1L and the camera 1C of the smartphone 1 (shown in Fig. 10-B). A lens 133 is mounted in the entrance aperture 134C of the optical insert 134 and is configured so that the sample area on the colorimetric test card 137 inserted into the receiver slot 136 is within the working distance of the camera 1C (as shown in Fig. 10-B). A large-core optical fiber 135, with a matte finish on both end faces, is mounted in the exit aperture 134L at an oblique angle; its operation as the illumination optics of the device 13 is described below with reference to Fig. 12.
FIG. 12 is a schematic cross-sectional view showing details of the system 10 reading a colorimetric card, particularly details of the device 13, and shows the function of the elements described above with reference to FIG. 11. The light source 1L emits a light beam B1 away from the smartphone 1. Beam B1 is coupled into the optical fiber 135 through its first end face, travels along the fiber 135, and exits from the second end face as light beam B2. Beam B2 front-illuminates the sample area of the colorimetric test card 137 directly below the camera 1C, producing uniform illumination.
C. An optical adapter (fiber ring illumination) for connecting the colorimetric reader to the smartphone.
Colorimetric assay is a very powerful technique and has wide applications in health monitoring, disease diagnosis, chemical assays, and the like. The key factor in obtaining accurate colorimetric assay results is accurate quantification of color change. Typically, the colorimetric test strip is analyzed for color change by comparing the color change to a standard color card. However, this comparison is done by the human eye and is susceptible to ambient light conditions, which limits the accuracy of quantifying color changes.
The invention described herein addresses this problem by providing a system that includes an optical adapter and a cell phone. The optical adapter device is mounted on the cell phone, converting it into a colorimetric reader that can provide consistent and uniform illumination of the front surface of the colorimetric test card and capture an image of the sample to analyze the color change. The system can be operated conveniently and reliably by ordinary personnel at any location. The optical adapter utilizes the existing resources of the smartphone, including the camera, light source, processor, and display screen, which provides a low-cost solution for accurately quantifying colorimetric color changes.
In the present invention, the optical adapter device includes a holder frame that fits over the upper portion of the phone and an optical box attached to the holder, the optical box having a sample receiver slot and illumination optics. In some prior art adapters that attach to cell phones, the adapter is designed as a unitary piece that combines the clamping mechanical parts and the functional elements mounted on the phone. A problem with such designs is that the integral adapter must be redesigned for each particular model of smartphone. In the present invention, however, the optical adapter is divided into a holder frame, which only mounts onto the smartphone, and a universal optical box that contains all the functional components. For smartphones of different sizes, as long as the relative positions of the camera and the light source are the same, only the holder frame needs to be redesigned, which saves a large amount of design and manufacturing cost.
The optical box of the optical adapter includes: a receiver slot that receives and positions the colorimetric sample card within the field of view and focus range of the smartphone camera; and illumination and imaging optics that produce uniform illumination across the sample and capture an image of the sample independent of any external conditions.
In order for the captured image to accurately represent the color change, the sample area under the camera should be uniformly illuminated. In common smartphones, however, the light source is a point source mounted a short distance from the camera, which means that the light source is not centrosymmetric with respect to the camera. This leads to the following problem: when the sample is placed very close to the smartphone camera, the illumination pattern on the front surface of the sample within the camera's field of view has a gradient intensity variation along one direction unless additional illumination optics are used. It is therefore desirable to produce a light source with a large emission area that is symmetric about the camera center. To achieve this, in the present invention a plastic side-emitting optical fiber ring is placed around the smartphone camera so that the fiber ring is centered on the camera. The two end faces of the fiber ring are mounted facing the light source of the smartphone. This converts the original single point source into a large number of small light sources of nearly equal intensity distributed on a circle equidistant from the smartphone camera. The light emitted from the side wall of the ring fiber further passes through a diffusion film to increase the emission area and make the illumination more uniform. With this side-emitting fiber-ring illumination optics, the sample area directly below the camera is uniformly front-illuminated.
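As a purely illustrative sketch (not part of the disclosed hardware), the following Python/numpy simulation compares the irradiance produced on a nearby sample plane by a single off-axis point source with that produced by the same total power redistributed over a ring of sources centered on the camera axis; the geometry (10 mm ring radius, 10 mm working distance, 8 mm flash offset) is an assumption for illustration only.

import numpy as np

# Assumed, illustrative geometry: sample plane 10 mm below the sources,
# 10 mm x 10 mm field of view, fiber ring of radius 10 mm centered on the camera.
z = 10.0                      # source-to-sample distance (mm)
x = np.linspace(-5, 5, 101)   # sample-plane coordinates (mm)
X, Y = np.meshgrid(x, x)

def irradiance(sx, sy, power=1.0):
    # Inverse-square irradiance on the sample plane from one point source.
    return power / ((X - sx) ** 2 + (Y - sy) ** 2 + z ** 2)

# Case 1: a single point source offset 8 mm from the camera (typical flash position).
single = irradiance(8.0, 0.0)

# Case 2: the same total power split over 36 sources on a 10 mm radius ring.
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
ring = sum(irradiance(10.0 * np.cos(a), 10.0 * np.sin(a), power=1.0 / 36) for a in angles)

for name, field in (("single source", single), ("fiber ring", ring)):
    print(name, "peak-to-valley / mean =", round((field.max() - field.min()) / field.mean(), 2))

Under these assumed numbers the ring arrangement shows a much smaller peak-to-valley variation over the field of view, which is the effect the side-emitting fiber ring is designed to exploit.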
Since the perceived color of the colorimetric sample depends largely on the lighting conditions, it is important to keep the illumination inside the optics box uniform and independent of any external light. To this end, a rubber door is attached to the receiver slot; it completely covers the slot to prevent ambient light from entering the optics box and changing the lighting conditions.
Typically, the optics box also contains a lens mounted therein that is aligned with the camera of the smartphone, which brings the sample within the focal range of the camera. The image captured by the camera will be further processed by the processor of the smartphone to analyze the color change and output the analysis result on the screen of the smartphone.
The sample slider is mounted within the receiver slot to receive the QMAX device and position the sample in the QMAX device within the field of view and focus range of the smartphone camera.
The sample slider includes a rail frame and a movable arm:
The rail frame is fixedly mounted in the receiver slot of the optics box. It has a slide groove matching the width and thickness of the QMAX device so that the QMAX device can slide along the rail. The width and height of the track groove are configured such that the QMAX device moves less than 0.5 mm in the sliding plane perpendicular to the sliding direction and less than 0.2 mm in the thickness direction of the QMAX device.
The rail frame has an open window below the field of view of the phone camera to allow light to back-illuminate the sample.
The movable arm is pre-disposed in a slide groove of the rail frame and moves with the QMAX device to guide the motion of the QMAX device in the rail frame.
The movable arm is equipped with a stop mechanism having two predetermined stop positions. For one position, the arm will stop the QMAX device at a position where the fixed sample area on the QMAX device is just below the camera of the smartphone. For the other position, the arm will stop the QMAX device at a position where the sampling area on the QMAX device is outside the field of view of the smartphone, and the QMAX device can be easily removed from the track slot.
The movable arm is switched between the two stop positions by pressing the QMAX device and the movable arm together to the end of the rail groove and then releasing.
The movable arm can indicate whether the QMAX device is inserted in the correct orientation. One corner of the QMAX device is shaped differently from the other three right-angled corners. The end of the movable arm is shaped to match this special corner, so that the QMAX device can slide to the correct position in the track groove only when inserted in the correct direction.
Some embodiments
1. Optical fiber ring illuminator
In some embodiments of the optical assembly, wherein
a. The radius of the side light-emitting optical fiber is 10 mm;
b. the diameter of the looped optical fiber may be at least 5mm, 10mm, 15mm, 20mm, 25mm, 30mm, 40mm, 50mm, 60mm, 80mm, or 100mm, or a range between any two values;
c. the diameter of the cross-section of the annular optical fiber may be at least 0.5mm, 1.0mm, 1.5mm, 2.0mm, 2.5mm, 3mm, 4mm, 5mm, 6mm, 8mm, or 10mm, or within a range between any two values.
In some embodiments of the optical assembly, wherein
d. The diameter of the external imager lens is 6 mm;
e. the imager lens may have a diameter of at least 2mm, 3mm, 4mm, 5mm, 10mm, 15mm, 20mm, 25mm, 30mm, 40mm, or 50mm, or a range between any two values.
In some embodiments of the optical assembly, wherein the annular optical fiber may be used in conjunction with or replaced by a microlens array;
in some embodiments of the optical assembly, wherein the optical assembly comprises a light diffuser plate between the sample and the looped optical fiber, wherein the light diffuser plate has an aperture configured to align with the camera.
In some embodiments of the optical assembly, wherein a side of the light diffuser plate may have a length of at least 5mm, 10mm, 15mm, 20mm, 25mm, 30mm, 40mm, 50mm, 100mm, 150mm, or 200mm, or in a range between any two values, wherein a thickness of the diffuser plate may be at least 2mm, 3mm, 4mm, 5mm, 10mm, 15mm, or 20mm, or in a range between any two values.
In some embodiments of the optical assembly, wherein the distance between the light diffusing plate and the upper annular optical fiber can be at least 1mm, 10mm, 15mm, 20mm, 25mm, 30mm, 40mm, 50mm, 100mm, or within a range between any two values.
The optical assembly of claim 2, wherein the distance between the sample and the looped fiber can be at least 2mm, 10mm, 15mm, 20mm, 25mm, 30mm, 40mm, 50mm, 100mm, 150mm, 200mm, or a range between any two values.
2. Movable arm (lever):
1. the optical assembly of claim 3, wherein a distance between the first plane on the movable arm and the light source can be at least 0.5mm, 2mm, 4mm, 8mm, 10mm, 20mm, 50mm, 100mm, or within a range between any two values.
2. The optical assembly of claim 3, wherein the distance between the first plane and the second plane of the movable arm can be at least 5mm, 10mm, 15mm, 20mm, 40mm, 100mm, 200mm, or within a range between any two values.
3. The optical assembly of claim 5, wherein the distance that the movable arm needs to move to switch between different positions may be at least 1mm, 5mm, 15mm, 20mm, 40mm, 100mm, or within a range between any two values.
4. The optical assembly of claim 3, wherein the second plane is connected to a tilted plane, and wherein a mirror is mounted on the tilted plane.
5. The optical assembly of claim 4, wherein the tilt angle of the tilted plane may be at least 10 degrees, 30 degrees, 60 degrees, 80 degrees, or within a range between any two of these values, the tilt angle being defined as the angle between the second plane and the tilted plane.
FIGS. 13-A, 13-B, and 13-C are schematic diagrams of a system 10 for a smartphone colorimetric reader; in particular, FIGS. 13-B and 13-C are exploded views of the system 10 seen from the front and back sides, respectively. The system 10 includes a smartphone 1 and an optical adapter device 13 mounted on the upper portion of the smartphone 1. A colorimetric sample card 138 is inserted into a receiver slot 137 of the device 13 so that the sample area on the sample card 138 is within the field of view and focus range of the camera module 1C in the smartphone 1. A rubber door 139 attached to the device 13 covers the receiver slot 137 after the sample card 138 is inserted, to prevent ambient light from entering the optical adapter 13 and affecting the test. Software (not shown) installed in the smartphone 1 analyzes images collected by the camera module 1C while the light source 1L in the smartphone 1 is emitting light, analyzes the color change of the colorimetric test, and outputs the results to the display screen 1F of the smartphone 1.
Fig. 14 is a schematic exploded view of the optical adapter device 13 in the system 10. The device 13 includes a holder housing 131 mounted on the upper portion of the smartphone 1. An optical box 132 attached to the housing 131 includes a receiver slot 137, an optical chamber 132C, and a rubber door 139 that is inserted into a groove 137s to cover the receiver slot 137. An optical insert 134 fits into the top of the optical chamber 132C, with an exit aperture 134L and an entrance aperture 134C aligned with the light source 1L and the camera 1C in the smartphone 1 (shown in FIG. 13-B). A lens 133 is mounted in the entrance aperture 134C of the optical insert 134 and configured so that the sample area on a colorimetric sample card 138 inserted into the receiver slot 137 lies within the working distance of the camera 1C (as shown in FIG. 13-B). A side-emitting fiber ring 135 is mounted in the optical insert 134, which is configured so that the camera 1C is located at the center of the fiber ring 135, and both end faces of the fiber ring 135 are mounted in the exit aperture 134L facing the light source 1L. A diffuser film 136 with an open aperture aligned with the camera 1C is placed below the fiber ring 135 in the optical insert 134; the operation of these components as the illumination optics of the device 13 is described below with reference to FIGS. 15-A to 15-C.
FIGS. 15-A, 15-B, and 15-C show schematic views of the system 10 reading the colorimetric card, and in particular details of the device 13. FIG. 15-A is a cross-sectional view showing details of the device 13, and FIGS. 15-B and 15-C are schematic diagrams showing only the configuration of the optical elements in the device 13; they illustrate the functions of the elements described above with reference to FIG. 14. Light emitted from the light source 1L is coupled into the side-emitting fiber ring 135 through both end faces of the fiber ring 135 and propagates along the ring. Light beam B1 exits from the side wall of the fiber ring and passes through the diffuser film 136. The beam B1 illuminates the sample area of the colorimetric sample card 138 directly below the camera 1C from the front side to produce uniform illumination. The illuminated sample area absorbs part of the beam B1 and reflects the remainder as light beam B2. The beam B2 is collected by the lens 133 and enters the camera 1C. The lens 133 forms an image of the sample area on the image sensor plane of the camera 1C. The captured image is then processed by the smartphone 1 to analyze the colorimetric information and determine the color change.
D. Tomography device and system
D-1.QMAX structure tomography device
A tomography apparatus is disclosed for reconstructing a sectionable virtual three-dimensional copy of a biological sample with resolution down to the nanometer scale. The apparatus includes an imaging sensor, a lens, and a QMAX device, as shown in Fig. 16-A.
The QMAX device has a periodic array of columns. The biological sample is contained in a QMAX device. An index matching liquid may be used to reduce scattering of light and reduce non-uniformity of the refractive index across the sample. The QMAX structure enhances the detection sensitivity by six (or more) orders of magnitude.
D-2 QMAX structure-based calibration
The pillar array has a metal disk on top of each pillar. The metal disk provides a calibration signal for spatial and height calibration of images captured by the imaging sensor. The shape of the metal disc may be designed to facilitate rapid calibration. For example, the metal disc may be shaped like the letter E; such an array of pillars is shown in FIG. 16-B.
When the imaging sensor captures an image with or without a biological sample on the QMAX structure, the captured image can be spatially calibrated and the focal length of the camera can also be quantitatively calibrated.
For spatial calibration, the captured image is subjected to object detection. The object detection scheme may be template matching, optical character recognition, shape detection, or other schemes used in the art. Object detection retrieves the orientation of the detected pattern, which in the embodiment of Fig. 16-B is the letter E. Spatial calibration is then achieved using the orientation parameters and a two-dimensional geometric transformation.
We disclose quantitative calibration of the focal length using the pillar array. The effect of the focal length on the captured image can be explained by a thin-lens model, as shown in Fig. 16-C. If the image sensor is displaced from the focal plane, a point Q is projected onto a circle of diameter kσ and its radiance spreads over that circle, i.e., Q is defocused. The position v of the focal plane depends on the focal length f of the lens and the distance u to the object. The relationship between these three variables is given by the well-known Gaussian lens law, or thin-lens equation:
1/u + 1/v = 1/f
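As a minimal sketch of this relation (the numbers are assumptions for illustration, not parameters of the disclosed imager):

def image_distance(f_mm, u_mm):
    # Solve 1/u + 1/v = 1/f for the in-focus image distance v.
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

def blur_circle_diameter(f_mm, u_mm, sensor_mm, aperture_mm):
    # Diameter of the defocus blur circle when the sensor sits at sensor_mm
    # instead of the in-focus distance v (simple similar-triangles model).
    v = image_distance(f_mm, u_mm)
    return aperture_mm * abs(sensor_mm - v) / v

# Example with assumed values: f = 4 mm, object 12 mm away -> v = 6 mm;
# a sensor placed at 6.5 mm sees a blur circle of about 0.17 mm for a 2 mm aperture.
print(image_distance(4.0, 12.0), blur_circle_diameter(4.0, 12.0, 6.5, 2.0))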
We measure the degree of focus of the captured image and derive the focal-plane position from it. The degree of focus measures the focus level of the entire image or of each image pixel. Various algorithms and operators have been proposed in the literature to measure the degree of focus, such as gradient-based, Laplacian-based, wavelet-based, statistics-based, and cosine/Fourier-transform-based operators.
The degree of focus of the pillar array captured at different focal planes can be measured in advance and stored in a look-up table. When the imaging sensor captures a new image of the pillar array (for example, Fig. 16-D shows a captured image of the example pillar array of Fig. 16-B), we calculate the degree of focus of the newly captured image, look it up in the table, and find the corresponding focal-plane position.
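The following is a hedged sketch of one possible focus measure and look-up-table comparison; the Laplacian-variance operator and the table values are illustrative assumptions, not the specific operator or calibration of the disclosed device.

import numpy as np
from scipy import ndimage

def degree_of_focus(image):
    # Laplacian-based focus measure: variance of the Laplacian response.
    return ndimage.laplace(image.astype(float)).var()

# Pre-measured look-up table: focal-plane position (um) -> degree of focus.
# These values are placeholders; in practice they are measured in advance
# from images of the pillar array captured at known focal-plane positions.
lookup_table = {0: 12.1, 5: 30.4, 10: 85.7, 15: 31.0, 20: 11.8}

def estimate_focal_plane(image):
    score = degree_of_focus(image)
    # Return the table entry whose stored focus score is closest to the measured one.
    return min(lookup_table, key=lambda pos: abs(lookup_table[pos] - score))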
D-3 tomography system
The purpose of tomography is to reconstruct a three-dimensional volume of a biological sample from a plurality of projections of the sample. The end-to-end tomography system includes a light source, an imaging section, and a three-dimensional reconstruction section.
Light source
The light captured by the imaging sensor may be refracted from the sample, emitted from the sample, and the like.
Imaging
The imaging section captures a projection on the imaging sensor. The projections may be captured at different focal lengths, different angles, from different illuminations, etc.
Several images can be captured at different focal lengths. The lens is moved toward or away from the QMAX structure in one or more steps. The step size and the lens movement can be controlled by hardware, or by software through an application program interface. The image sensor records the captured images.
Several images may be captured at different angles. The sample is rotated and an optical image projected approximately straight through it is captured. The sample is rotated to a series of angular positions and an image is captured at each orientation. The apparatus is carefully aligned to ensure that the axis of rotation is perpendicular to the optical axis so that projection data is collected by the imaging sensor for each plane. The focal plane may be located midway between the rotation axis and the QMAX card closest to the lens. This means that each image contains both in-focus data from the first half of the sample (the half closest to the lens) and out-of-focus data from the second half of the sample. The in-focus data will be used for three-dimensional volumetric reconstruction while the out-of-focus data will not be used. A band pass filter may be provided to select the focus data.
Optical projection tomography is performed using standard tomography algorithms. Due to the position of the focal plane with respect to the rotation axis, two images taken 180 degrees apart from each other will be focused on different parts of the sample. Limiting the backprojection to the area corresponding to the focused portion of the sample improves the quality of the result. When accumulating data for various orientations through the sample, the half-disk mask used as a band-pass filter can be rotated to ensure that only the focused data is back-projected.
Several images may be captured under different illuminations. Quantitative phase images can be obtained from time-dependent interference patterns caused by frequency shifting of the reference beam relative to the sample beam. A galvanometer mounted tilting mirror can be used to change the illumination angle. The laser beam passes through two acousto-optic modulators that shift the frequency of the laser beam. The second beam splitter recombines the sample and reference laser beams to form an interference pattern captured at the imaging sensor. The phase image is then calculated by applying phase-shifting interferometry. For near plane wave illumination of thin samples with small refractive index contrast, the phase of the transmitted field is well approximated by the line integral of the refractive index along the beam propagation path. Thus, the phase image can be simply interpreted as a projection of the refractive index.
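As a hedged sketch of the phase calculation step, the following assumes the standard four-step phase-shifting scheme, i.e. four interferograms captured at reference-beam phase shifts of 0, π/2, π, and 3π/2; the disclosure does not specify which phase-shifting variant is used.

import numpy as np

def phase_from_four_steps(i0, i1, i2, i3):
    # Standard four-step formula for frames at phase shifts 0, pi/2, pi, 3*pi/2:
    # tan(phi) = (I3 - I1) / (I0 - I2), returned wrapped into (-pi, pi].
    return np.arctan2(i3 - i1, i0 - i2)

def unwrap_rows(wrapped_phase):
    # Simple 1-D unwrapping along each row; practical systems use 2-D unwrapping.
    return np.unwrap(wrapped_phase, axis=1)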
In addition to band pass filters, various imaging filters may be used during image capture for purposes including (but not limited to):
(1) signal selection, thereby selecting a portion of a captured image;
(2) signal enhancement, thereby enhancing part or all of the captured image;
(3) signal transformation whereby part or all of the captured image is transformed into another representation, such as a frequency representation, a multi-scale representation, or the like;
(4) signal copying whereby a portion of the captured image is replaced by another portion of the captured image, or by a representation of another portion of the captured image;
(5) or any combination of (1) - (4).
Filtering operations such as contrast enhancement, color enhancement, and noise reduction applied to the acquired image can, for example, increase the dynamic range of pixel brightness, adjust the color temperature, and improve the signal-to-noise ratio.
The captured image may be converted to another representation that may be more suitable for three-dimensional reconstruction. It may be converted to different formats (8-bit to 16-bit, integer to floating point, etc.), different color spaces (RGB to HSV, etc.), different domains (spatial domain to frequency domain, etc.), etc.
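As an illustrative sketch of such conversions using common libraries (the file name is a placeholder and the choices of format, color space, and transform are examples only):

import numpy as np
import cv2

img8 = cv2.imread("capture.png")                        # 8-bit BGR image (placeholder path)
img16 = img8.astype(np.uint16) * 257                    # 8-bit -> 16-bit (255 maps to 65535)
imgf = img8.astype(np.float32) / 255.0                  # integer -> floating point
hsv = cv2.cvtColor(img8, cv2.COLOR_BGR2HSV)             # color-space conversion
spectrum = np.fft.fftshift(np.fft.fft2(imgf[..., 1]))   # spatial -> frequency domain (one channel)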
A portion of the captured image may be replaced by another portion (or a transformation of another portion) of the captured image. For example, a spatial region may be replaced by a transform of another region (such as a reflection extension around a boundary), or a frequency subband may be replaced by a transform of another subband (such as a high-frequency subband replaced by an estimate derived from a low-frequency subband).
Three-dimensional reconstruction
Reconstructing a three-dimensional volume of a biological sample from projections of the biological sample is an inverse problem. The three-dimensional volume reconstruction may employ a phase image retrieval scheme, a backprojection scheme, a non-linear approximation scheme, an optimization scheme, and the like.
When several images are captured at different focus distances, we calculate the degree of focus of each image and assemble these degrees of focus into a vector. We then compare the vector against a look-up table and find its corresponding focal-plane distance. The correspondence may be based on distance, correlation, or other criteria for selecting the best match.
Fig. 16-E shows a diagram of a scheme based on phase image retrieval. It consists of four components:
focal length calculation
Phase image retrieval
Height estimation
Three-dimensional volume reconstruction
The second component, phase retrieval, uses a quantitative phase imaging technique based on the transport-of-intensity equation (TIE). The TIE states

∇⊥ · ( I(x, y) ∇⊥ φ(x, y) ) = −k ∂I(x, y)/∂z,

where ∂I/∂z is the axial intensity gradient, which can be calculated from the multi-focal images, k is the wave number, and φ(x, y) is the sample phase distribution.
The TIE can be solved by fast Fourier transform or discrete cosine transform; see, for example, "Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform", C. Zuo, Q. Chen, and A. Asundi, Optics Express, vol. 22, no. 8, April 2014. Solving the TIE retrieves the phase image φ(x, y).
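The following is a hedged sketch of an FFT-based TIE solution under the simplifying assumption of a nearly uniform in-focus intensity I0, in which case the TIE reduces to a Poisson equation for the phase; it is not the DCT-based method of the cited paper.

import numpy as np

def tie_phase(i_minus, i_focus, i_plus, dz, wavelength, pixel):
    # i_minus, i_focus, i_plus: images captured at focal planes -dz, 0, +dz.
    # Assumes nearly uniform intensity I0, so I0 * laplacian(phi) = -k * dI/dz.
    k = 2.0 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2.0 * dz)        # axial intensity derivative
    i0 = max(float(i_focus.mean()), 1e-9)

    ny, nx = i_focus.shape
    fx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel)
    fy = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel)
    kx, ky = np.meshgrid(fx, fy)
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                # avoid division by zero at DC

    rhs = np.fft.fft2(didz) * (k / i0)
    rhs[0, 0] = 0.0                               # the mean phase is undetermined
    return np.fft.ifft2(rhs / k2).real            # phase distribution phi(x, y)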
Given the phase image, we estimate the height (thickness) of the biological sample. Recall that for a sample of thickness t and refractive index n, the corresponding optical path length Lp is Lp = t × n.
The height of the biological sample can be calculated using the known refractive index.
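A minimal sketch of this height estimate, assuming the recovered phase is measured relative to the surrounding medium, so that the optical path difference is (n_sample − n_medium) × t; the refractive indices below are illustrative assumptions.

import numpy as np

def thickness_from_phase(phase, wavelength, n_sample, n_medium=1.33):
    # phase = (2*pi / wavelength) * (n_sample - n_medium) * t, solved for t.
    return phase * wavelength / (2.0 * np.pi * (n_sample - n_medium))

# Example with assumed values: 0.532 um illumination, cell index 1.38 in saline.
# height_map = thickness_from_phase(phi, 0.532, 1.38)   # phi in radians, result in um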
Further, a three-dimensional volume of the biological sample may be reconstructed.
Backprojection algorithms are commonly used for three-dimensional reconstruction in tomography. These include Fourier-transform-based algorithms, filtered backprojection, backprojection followed by filtering, and iterative algorithms.
Because of the position of the focal plane with respect to the rotation axis, two images taken 180 degrees apart will be focused on different parts of the sample. To compensate, a half-plane-adjusted backprojection algorithm may be employed: limiting the backprojection to the region corresponding to the focused portion of the sample improves the quality of the result. When accumulating data over the various orientations through the sample, the half-disk mask can be rotated to ensure that only the in-focus data is backprojected.
As another embodiment of the backprojection algorithm, a process based on the filtered backprojection method may be applied. A discrete inverse Radon transform is applied to each x-slice along the direction of beam rotation, where x is the coordinate along the tilt direction, i.e., the direction of the relative angle between the laser beam and the objective lens optical axis. To compensate for the angle between the imaging and illumination directions, the x values are divided accordingly. To reduce the impact of missing projections, an iterative constraint method may be applied.
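As an illustrative sketch of this step using the inverse Radon transform from scikit-image (which applies a ramp filter by default); the slice geometry is simplified and the angle compensation described above is omitted.

import numpy as np
from skimage.transform import iradon

def reconstruct_slice(sinogram, angles_deg):
    # sinogram: one column per projection angle, for a single slice
    # perpendicular to the rotation axis; the default ramp filter is applied.
    return iradon(sinogram, theta=angles_deg, circle=False)

def reconstruct_volume(projections, angles_deg):
    # projections: array of shape (n_angles, height, width); the volume is
    # reconstructed one slice at a time along the rotation axis.
    n_angles, height, width = projections.shape
    slices = []
    for row in range(height):
        sinogram = projections[:, row, :].T      # shape (width, n_angles)
        slices.append(reconstruct_slice(sinogram, angles_deg))
    return np.stack(slices)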
For the inverse problem of reconstructing a three-dimensional volume from its projections, the resulting volume may be blurred. A ramp filter may be used to eliminate or reduce the blurring.
In addition to deblurring filters, various imaging filters can be used for three-dimensional volumetric reconstruction, such as (including but not limited to):
(1) signal selection, wherein a portion of an image or image volume is selected;
(2) signal enhancement, wherein a portion or all of an image or image volume is enhanced;
(3) signal transformation whereby part or all of the captured image is transformed into another representation, such as a frequency representation, a multi-scale representation, or the like;
(4) signal copying whereby a portion of the captured image is replaced by another portion of the captured image, or by a representation of another portion of the captured image;
(5) or any combination of (1) - (4).
D-4 examples of the invention
Da1. an apparatus for imaging a sample comprising a QMAX apparatus and an imager, wherein:
(1) the QMAX device comprises:
a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate comprises on its inner surface a sample contacting area for contacting a fluid sample;
one or both plates comprise a spacer secured to the respective plate;
v. the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
wherein one of these configurations is an open configuration, wherein: the two plates are separated, the spacing between the plates is not adjusted by the spacers, and samples are deposited on one or both of the plates; and
wherein another of the configurations is a closed configuration configured in an open configuration after deposition of the sample; and in the closed configuration: at least a portion of the sample is compressed by the two plates into a layer of uniform thickness, wherein the uniform thickness of the layer is bounded by the inner surfaces of the two plates and is accommodated by the plates and the spacer.
(2) The imager is configured to capture an image of a signal emanating from at least a portion of the uniform thickness layer.
Db1. a system for tomography comprising a QMAX device, an imager, a holder, and a control device, wherein:
(1) the QMAX apparatus includes: a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate comprises on its inner surface a sample contacting area for contacting a fluid sample;
one or both plates comprise a spacer secured to the respective plate;
v. the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
wherein one of these configurations is an open configuration, wherein: the two plates are separated, the spacing between the plates is not adjusted by the spacers, and the sample is deposited on one or both of the plates; and
wherein another of the configurations is a closed configuration configured in an open configuration after deposition of the sample; and in the closed configuration: at least a portion of the sample is compressed by the two plates into a layer of uniform thickness, wherein the uniform thickness of the layer is bounded by the inner surfaces of the two plates and is accommodated by the plates and the spacer.
(2) The imager includes an image sensor and a lens, wherein:
i. the lens is configured to focus a signal emitted from at least a portion of the layer of uniform thickness and project the focused signal onto the image sensor, and
an image sensor is configured to capture an image of the focus signal;
(3) the holder is configured to adjust a relative position between the QMAX device and the imager; and
(4) the control device comprises hardware and software for controlling and/or deriving the positional adjustments made by the holder, and for receiving the images and reconstructing them into a three-dimensional volume.
Dbb1. a system for tomography comprising a QMAX device, an imager, a holder, and a control device, wherein:
(1) the QMAX apparatus includes: a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate comprises on its inner surface a sample contacting area for contacting a fluid sample;
one or both plates comprise a spacer secured to the respective plate;
v. the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
wherein one of these configurations is an open configuration, wherein: the two plates are separated, the spacing between the plates is not adjusted by the spacers, and the sample is deposited on one or both of the plates; and
wherein the other of the configurations is a closed configuration that is configured after the sample is deposited in the open configuration; and in the closed configuration: at least a portion of the sample is compressed by the two plates into a layer of uniform thickness, wherein the uniform thickness of the layer is defined by the inner surfaces of the two plates and is accommodated by the plates and the spacers.
(2) The imager is capable of changing a focal plane, comprising an image sensor and a lens, wherein:
i. the lens is configured for focusing a signal emanating from at least a portion of the layer of uniform thickness and projecting the focused signal onto the image sensor, and
the image sensor is configured to capture an image of the focus signal;
the lens is a single lens or a compound lens consisting of a plurality of lenses;
at least one element of the lens is movable to change the distance from the image sensor, thereby changing the focal plane of the imager; and
v. the movable lens may be driven by a stepper motor and/or electromagnetic force, under computerized or manual control; and
(4) the control device comprises hardware and software for controlling and/or deriving the positional adjustments made by the holder, and for receiving the images and reconstructing them into a three-dimensional volume.
Dc1. a tomography method comprising the steps of:
(a) depositing the sample onto a QMAX device in any of the foregoing device or system embodiments;
(b) after (a), using the two plates to compress at least a portion of the sample into a substantially uniform thickness layer, the layer bounded by the sample contacting surfaces of the plates, wherein the uniform thickness of the layer is accommodated by the spacers and the plates, wherein the compressing comprises:
bringing the two plates together; and
pressing, in parallel or sequentially, a region of at least one of the plates to bring the plates together into the closed configuration, wherein the conformable pressing produces substantially uniform pressure on the plates over at least a portion of the sample and spreads at least a portion of the sample laterally between the sample contacting surfaces of the plates, and wherein the closed configuration is a configuration in which the spacing between the plates in the uniform-thickness region is regulated by the spacers;
(c) capturing an image of a signal emanating from at least a portion of the uniformly thick layer using the imager of any of the foregoing apparatus or system embodiments;
(d) adjusting the relative position of the QMAX device and the imager, and repeating step (c); and
(e) after a series of steps (c), reconstructing the captured images into a three-dimensional volume of at least a portion of the layer of uniform thickness,
wherein conformal compression is a method of making the pressure applied over an area substantially constant regardless of changes in the shape of the outer surface of the plate; and
wherein the parallel pressing simultaneously applies the pressure on the desired area, and the sequential pressing applies the pressure on a part of the predetermined area and gradually moves to other areas.
Dcc1. a method of taking images at different focal planes, comprising the steps of:
(a) computer-controlled or manually-controlled movement of a movable lens in an imager to an initial position;
(b) corresponding the movable lens position to a position of a focal plane;
(c) capturing an image using an image sensor in the imager and recording the position of the focal plane;
(d) computerized or manually adding a stepping displacement to move the movable lens to a next position;
(e) repeating steps (b) through (d);
(f) after a series of steps (e), several images at different focal planes are captured.
A device according to any of the preceding embodiments, wherein the QMAX device further comprises a dry reagent coated on one or both plates, which dry reagent stains the sample.
The apparatus of any preceding embodiment, wherein:
i. one or both of the plate sample contact areas comprises one or more binding sites, each binding site binding and immobilizing a respective analyte; or
ii. one or both plate sample contacting areas comprise one or more storage sites, each storage site storing one or more reagents, wherein the reagents dissolve and disperse in the sample during or after step (c), and wherein the sample contains one or more analytes; or
iii. one or both plate sample contacting areas comprise one or more amplification sites, each of which is capable of amplifying a signal from an analyte or a label of an analyte when the analyte or label is within 500 nm of the amplification site; or
iv. any combination of i to iii.
The apparatus of any preceding embodiment, wherein the imager further comprises a light source providing light for illumination or excitation of a uniform thickness layer for imaging.
DA24. The device of embodiment DA23, wherein the light source is selected from the group consisting of LED, laser, incandescent light, and any combination thereof.
Db2. the system of embodiment DB1, wherein the signal comprises an optical signal selected from the group consisting of: light reflection, light refraction, light transmission, luminescent signal, and any combination thereof.
The system of any of the preceding embodiments, wherein the imager further comprises a light source providing light illuminating the uniform thickness layer for imaging, wherein the light source is selected from the group consisting of incandescent light, LED, CFL, laser, and any combination thereof.
The system of any preceding embodiment, wherein the imager further comprises a light source providing excitation light that excites fluorescence emission from the layer of uniform thickness for imaging, wherein the light source is an LED and/or a laser.
The system of any preceding embodiment, wherein the holder is capable of adjusting the relative position of the lens and the QMAX device along the optical axis of the QMAX device to change the focal plane position of the lens.
The system of any preceding embodiment, wherein the holder is capable of adjusting a relative position between the lens and the QMAX device to change an imaging angle, wherein the imaging angle is an angle between a focal plane of the lens and the uniform thickness layer.
The system of any of the preceding embodiments, wherein the imager further comprises a light source that provides illumination light for imaging, and wherein the holder is capable of adjusting the relative position between the light source and the QMAX device to change the angle of incidence of the illumination light, wherein the angle of incidence is the angle between the illumination light and a line perpendicular to the uniform thickness layer.
The system of any preceding embodiment, wherein the control device comprises hardware and software for sending commands defining position adjustments to the holder, and wherein the holder is configured to receive the commands and adjust with a deviation of no more than 10%.
The system of any preceding embodiment, wherein the control means comprises hardware and software for sending commands defining position adjustments to a holder, and wherein the holder is configured to receive the commands and adjust with a deviation of no more than 1%.
Db10. the system according to any of the embodiments DB8-DB9, wherein the control means comprises hardware and software for receiving input defining position adjustments and converting the input into commands for the holder to make adjustments.
The system of any preceding embodiment, wherein the system further comprises a plurality of calibration columns, and wherein:
(1) a plurality of calibration posts located between the sample contacting areas of the two plates in a closed configuration and having different heights from each other, each less than the uniform height of the spacer;
(2) capturing the images at different focal planes along a common optical axis; and
(3) the control means comprises hardware and software for: (a) calculating a focus score for each of the images; and (b) inferring a focus plane position for each of the captured images by comparing the focus scores to a lookup table, wherein the focus scores are a matrix of degrees of focus calculated for each pixel of the captured image, wherein the lookup table is predetermined and includes a row of predetermined focus plane positions along the common optical axis and calibration focus scores for the respective row, each calibration focus score calculated based on an image of the calibration pillar captured at the respective predetermined focus plane.
The system of any preceding embodiment, wherein the images are captured at different focal planes along a common optical axis, and wherein the control means comprises hardware and software for: (a) generating a phase image for a biological entity in at least a portion of the layer, wherein the phase image is a phase distribution calculated based on a wavelength of illumination light used for imaging, at least a portion of an image containing a signal from the biological entity, and focal plane positions of the respective captured images; and (b) estimating a thickness of the biological entity based on the phase image and the refractive index of the sample, wherein the biological entity is part or all of at least a portion of the layer.
Db13. the system according to embodiment DB8, wherein the control means comprises hardware and software for reconstructing the at least part of the image into a three-dimensional volume of the biological entity based on the estimated thickness.
The system of any preceding embodiment, wherein the images are captured at different imaging angles, wherein the control means comprises hardware and software for: (1) knowing or deriving an imaging angle for each of said images; and (2) reconstructing the image into a three-dimensional volume using a backprojection algorithm based on a known/derived imaging angle, and wherein the imaging angle is the angle between the focal plane of the lens and the layer of uniform thickness.
The system of any preceding embodiment, wherein the images are captured at different angles of incidence of the illumination light, wherein the control means comprises hardware and software for: (1) knowing or deducing the angle of incidence for each image; and (2) reconstructing the image into a three-dimensional volume using a backprojection algorithm based on a known/derived angle of incidence, and wherein the angle of incidence of the illumination light is the angle between the illumination light and a line normal to the uniform thickness layer.
The system of any of the embodiments DB14-DB15, wherein the backprojection algorithm is selected from the group consisting of: fourier transform basis algorithms, filtered backprojection algorithms, backprojection and filtering algorithms, iterative algorithms, and any combination thereof.
The system according to any of the preceding embodiments, wherein the imager is equipped with an imaging filter, and wherein the captured image is filtered by the imaging filter and/or software of the control means for: (1) signal selection, thereby selecting a portion of a captured image; (2) signal enhancement, thereby enhancing some or all of the captured image; (3) signal transformation, which transforms part or all of the captured image into another representation, such as a frequency representation, a multi-scale representation, etc.; (4) signal copying whereby a portion of the captured image is replaced by another portion of the captured image, or by a representation of another portion of the captured image; or any combination of (1) - (4).
The system of any preceding implementation, wherein the control device further comprises hardware and software for reconstructing at least a portion of the image into a three-dimensional volume, wherein during three-dimensional volume reconstruction the image and the three-dimensional volume are filtered by the software for: (1) signal selection, wherein a portion of an image or image volume is selected; (2) signal enhancement, wherein a portion or all of an image or image volume is enhanced; (3) signal transformation, in which part or all of an image or image volume is transformed into another representation, such as a frequency representation, a multi-scale representation, or the like; (4) signal replication in which a portion of an image or image volume is replaced by another portion of the captured image, or a representation of another portion of the captured image; or any combination of (1) - (4).
Dc2. the method of embodiment DC1, further comprising: staining the sample with a stain prior to step (c).
The method of any preceding method embodiment, wherein during step (b), conformal compression is performed by a human hand.
The method of any preceding method embodiment, wherein the conformable press of step (d) is provided by a pressurized liquid, a pressurized gas, or a conformable material.
The method of any preceding method implementation, wherein the adjusting step (d) comprises adjusting a relative position of the lens and the QMAX device along an optical axis of the QMAX device to change a focal plane position of the lens.
The method of any preceding method implementation, wherein the adjusting step (d) comprises adjusting a relative position between the lens and the QMAX device to change an imaging angle, wherein the imaging angle is an angle between a focal plane and the uniform thickness layer.
The method of any preceding method embodiment, wherein the imager further comprises a light source providing illumination light for imaging, and wherein the adjusting step (d) comprises adjusting the relative position of the light source and the QMAX device to change the angle of incidence of the illumination light, wherein the angle of incidence is the angle between the illumination light and a line normal to the uniform thickness layer.
The method of any preceding method embodiment, wherein the adjusting step (d) is performed manually.
The method of any preceding method embodiment, wherein the adjusting step (d) is performed by a control device operably coupled to the holder, wherein the control device includes hardware and software for receiving input defining the position adjustment and sending a command to the holder, and wherein the holder is configured to receive the command and cause the adjustment to have a deviation of no greater than 10%.
The method of any preceding method embodiment, wherein the adjusting step (d) is performed by a control device operably coupled to the holder, wherein the control device includes hardware and software for receiving input defining the position adjustment and sending a command to the holder, and wherein the holder is configured to receive the command and cause the adjustment to have a deviation of no greater than 1%.
The method of any preceding method embodiment, wherein the images are captured at different focal planes along a common optical axis, and wherein the reconstructing step (e) comprises: (i) calculating a focus score for each of the images; and (ii) inferring a focal plane location at which each of the images was captured by comparing the focus scores to a look-up table, wherein a focus score is a matrix of degrees of focus calculated for each pixel of the captured image, wherein the look-up table is predetermined and comprises a row of predetermined focal plane locations along the common optical axis and calibration focus scores for respective rows, each calibration focus score being calculated based on an image of a calibration column captured at a respective predetermined focal plane.
The method of any preceding method embodiment, wherein the images are captured at different focal planes along a common optical axis, and wherein the reconstructing step (e) comprises: (i) generating a phase image for a biological entity in the at least a portion of the layer, wherein the phase image is a phase distribution calculated based on a wavelength of illumination light used for imaging, at least a portion of the image containing signals from the biological entity, and a focal plane location at which the image was captured; and (ii) estimating a thickness of the biological entity based on the phase image and the refractive index of the sample, wherein the biological entity is part or all of at least a portion of the layer.
Dc13. the method of embodiment DC12, wherein reconstructing step (e) further comprises reconstructing at least a portion of the image into a three-dimensional volume of a biological entity based on the estimated thickness.
The method of any preceding method embodiment, wherein the images are captured at different imaging angles, wherein the reconstructing step (e) comprises: (i) knowing or deriving an imaging angle for each of said images; and (ii) reconstructing the image into a three-dimensional volume using a backprojection algorithm based on a known/derived imaging angle, and wherein the imaging angle is the angle between the focal plane of the lens and the layer of uniform thickness.
The method of any preceding method embodiment, wherein the images are captured at different angles of incidence of the illumination light, wherein the reconstructing step (e) comprises: (i) knowing or deriving an angle of incidence for each of said images; and (ii) reconstructing the image into a three-dimensional volume using a backprojection algorithm based on a known/derived angle of incidence, and wherein the angle of incidence of the illumination light is the angle between the illumination light and a line normal to the uniform thickness layer.
The method of any of embodiments DC14-DC15, wherein the back projection algorithm is selected from the group consisting of: fourier transform basis algorithms, filtered backprojection algorithms, backprojection and filtering algorithms, iterative algorithms, and any combination thereof.
The method of any of the preceding method embodiments, wherein the sample is a biological sample selected from the group consisting of cells, tissue, bodily fluids, stool, and any combination thereof.
The method of any preceding method embodiment, wherein the sample is an environmental sample from an environmental source selected from the group consisting of rivers, lakes, ponds, oceans, glaciers, icebergs, rain, snow, sewage, reservoirs, tap water, drinking water, and the like; or a solid sample selected from the group consisting of soil, compost, sand, rock, concrete, wood, brick, sewage, air, underwater vents, industrial exhaust gas, vehicle exhaust gas, and any combination thereof.
The method according to any preceding method embodiment, wherein the sample is a food sample selected from the group consisting of: raw materials, cooked foods, plant and animal food sources, pre-treated foods, partially or fully treated foods, and any combination thereof.
The method according to any one of the preceding method embodiments, wherein the sample is blood and the biological entities are red blood cells, white blood cells and/or platelets.
Dc21. the method of embodiment DC20, further comprising:
(f) calculating the volumes of the red blood cells, white blood cells, and/or platelets based on their respective reconstructed three-dimensional volumes.
Dc22. a method according to embodiment DC21, further comprising:
(g) based on the calculated volume, determining a blood test reading selected from the group consisting of: mean Corpuscular Volume (MCV), hematocrit, red blood cell distribution width (RDW), Mean Platelet Volume (MPV), Platelet Distribution Width (PDW), Immature Platelet Fraction (IPF), and any combination thereof.
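As a purely illustrative sketch (the formulas below use one common definition of each index and assume the per-cell volumes from the reconstruction are available in femtolitres, with a known imaged sample volume):

import numpy as np

def blood_indices(rbc_volumes_fl, plt_volumes_fl, sample_volume_fl):
    # rbc_volumes_fl, plt_volumes_fl: per-cell volumes from the 3-D reconstruction.
    mcv = float(np.mean(rbc_volumes_fl))                    # mean corpuscular volume (fL)
    hct = float(np.sum(rbc_volumes_fl)) / sample_volume_fl  # hematocrit (volume fraction)
    rdw = 100.0 * float(np.std(rbc_volumes_fl)) / mcv       # RDW as coefficient of variation (%)
    mpv = float(np.mean(plt_volumes_fl))                    # mean platelet volume (fL)
    pdw = 100.0 * float(np.std(plt_volumes_fl)) / mpv       # PDW as coefficient of variation (%)
    return {"MCV": mcv, "HCT": hct, "RDW": rdw, "MPV": mpv, "PDW": pdw}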
E. Machine learning assisted determination and imaging
E-1. QMAX device for analysis and imaging
An apparatus for biological analyte detection and localization is disclosed, comprising a QMAX device, an imager, and a computing unit. The biological sample is assayed in the QMAX device. The counts and locations of the analytes contained in the sample are obtained by the present disclosure.
The imager captures an image of the biological sample. The image is submitted to a computing unit. The computing unit may be physically connected directly to the imager, through a network connection, or indirectly through image transfer.
E-2. workflow
The disclosed analyte detection and localization employs machine learning, and in particular deep learning. A machine learning algorithm is an algorithm that can learn from data. A more rigorous definition of machine learning is: a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome the limitation of following strictly static program instructions by building a model from sample inputs in order to make data-driven predictions or decisions.
Deep learning is a particular kind of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. In a simple case, there may be two groups of neurons: one group that receives an input signal and one group that sends an output signal. When the input layer receives an input, it passes a modified version of the input to the next layer. In a deep network there are many layers between the input and the output (the layers are not made of neurons, but it can help to think of them that way), allowing the algorithm to use multiple processing layers composed of multiple linear and non-linear transformations.
The disclosed analyte detection and localization workflow includes two phases, training and prediction, as shown in fig. 17-a. We will describe the training and prediction phases in the following paragraphs.
Training. In the training phase, annotated training data are fed into a convolutional neural network. A convolutional neural network is a specialized neural network for processing data that has a known grid-like topology. Examples include time-series data, which can be considered as a 1-D grid sampled at regular time intervals, and image data, which can be considered as a 2-D grid of pixels. Convolutional networks have been very successful in practical applications. The name "convolutional neural network" indicates that the network employs a mathematical operation called convolution. Convolution is a special kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
The training data is annotated to the analyte to be detected. The annotation indicates whether the analyte is present in the training data. The annotation may be in the form of a bounding box that completely contains the analyte or the central location of the analyte. In the latter case, the central position is further translated into a ring covering the analyte.
When the size of the training data is large, two challenges arise: annotation (typically done by a human) is time-consuming, and training is computationally expensive. To overcome these challenges, the training data can be divided into patches of small size, and these patches, or portions of them, can then be annotated and used for training.
The annotated training data are fed to a convolutional neural network for model training. The output is a model that can be used for pixel-level prediction on an image. We use the Caffe library with a fully convolutional network (FCN). Other convolutional neural network frameworks, such as TensorFlow, can also be used.
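The following is a hedged, minimal sketch of a fully convolutional network for pixel-level heat-map prediction, written with the Keras API of TensorFlow; the layer sizes are illustrative and it does not reproduce the exact Caffe FCN architecture used in the disclosure.

import tensorflow as tf

def build_fcn(input_channels=3):
    # Every layer is convolutional, so the model accepts images of arbitrary
    # size and outputs a same-size heat map with per-pixel values in [0, 1].
    inputs = tf.keras.Input(shape=(None, None, input_channels))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    heatmap = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, heatmap)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Training on annotated patches (images and binary masks marking analyte locations):
# model = build_fcn(); model.fit(patches, masks, epochs=10, batch_size=8)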
The training phase generates a model to be used in the prediction phase. The model can be reused in the prediction phase of the input image. Thus, the computational unit only needs to access the generated model. It does not require access to training data nor does it require a training phase to be run on the computing unit.
Prediction
In the prediction phase, a detection component is applied to the input image, followed by a localization component. The output of the prediction phase is the count of the analytes contained in the sample and the location of each analyte.
In the detection component, the input image is fed into the convolutional neural network together with the model generated in the training phase. The output of the detection stage is a pixel-level prediction in the form of a heat map. The heat map may be the same size as the input image, or it may be a reduced version of the input image. Each pixel in the heat map has a value between 0 and 1, which can be interpreted as the likelihood (belief) that the pixel belongs to an analyte. The higher the value, the greater the chance that it belongs to the analyte.
The heat map is the input to the localization component. We disclose an algorithm for locating the centers of the analytes. The main idea is to iteratively detect local peaks in the heat map. After a peak is found, we calculate the local region around the peak where the values are smaller. We remove this region from the heat map and find the next peak among the remaining pixels. This process is repeated until all pixels have been removed from the heat map.
One embodiment of the localization algorithm is to sort the heat map values from highest value to lowest value into a one-dimensional ordered list. The pixel with the highest value is then selected and removed from the list along with its neighbors. This process is repeated to select the pixel in the list with the highest value until all pixels are deleted from the list.
Algorithm GlobalSearch(heatmap)
Input:
  heatmap
Output:
  loci
loci ← {}
sort(heatmap)
while (heatmap is not empty) {
  s ← pop(heatmap)
  D ← {disk with center s and radius R}
  heatmap ← heatmap \ D    // remove D from the heatmap
  add s to loci
}
After sorting, the heatmap is a one-dimensional ordered list in which the heat-map values are ordered from highest to lowest. Each heat-map value is associated with its corresponding pixel coordinates. The first item in the heatmap is the one with the highest value, which is the output of the pop(heatmap) function. A disk is then created, centered at the pixel coordinates of that highest-value item. All heat-map values whose pixel coordinates lie within the disk are removed from the heatmap. The algorithm repeatedly pops the highest value in the current heatmap and removes the disk around it, until all items have been removed from the heatmap.
In the ordered list heatmap, each item knows its preceding item and its following item. When an item is deleted from the ordered list, we make the following changes, as shown in FIG. 17-B:
Assume the deleted item is x_r, its preceding item is x_p, and its following item is x_f.
For the preceding item x_p, redefine its following item as the following item of the deleted item; the following item of x_p is now x_f.
For the deleted item x_r, its preceding and following items become undefined, and it is removed from the ordered list.
For the following item x_f, redefine its preceding item as the preceding item of the deleted item; the preceding item of x_f is now x_p.
After all items have been deleted from the ordered list, the localization algorithm is complete. The number of elements in the set loci is the count of the analytes, and the location information is the pixel coordinates of each s in the set loci.
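A hedged Python/numpy sketch of this sorted-list global search is given below; the suppression radius R and the value threshold are assumptions, and the implementation uses a boolean mask instead of an explicit linked list.

import numpy as np

def global_search(heatmap, radius=5, threshold=0.0):
    # Pop the highest remaining heat-map value, record it as a locus, and
    # suppress all pixels within `radius` of it; repeat until nothing remains.
    ny, nx = heatmap.shape
    order = np.argsort(heatmap, axis=None)[::-1]   # flat indices, highest value first
    removed = np.zeros_like(heatmap, dtype=bool)
    loci = []
    for flat in order:
        y, x = np.unravel_index(flat, heatmap.shape)
        if removed[y, x] or heatmap[y, x] <= threshold:
            continue
        loci.append((int(y), int(x)))
        y0, y1 = max(0, y - radius), min(ny, y + radius + 1)
        x0, x1 = max(0, x - radius), min(nx, x + radius + 1)
        yy, xx = np.ogrid[y0:y1, x0:x1]
        removed[y0:y1, x0:x1] |= (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    return loci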
Another embodiment searches for local peaks, which need not be the peak with the highest heat-map value. To detect each local peak, we start from a random starting point and search for the local maximum. After the peak is found, the local region around the peak with smaller values is calculated. This region is removed from the heat map, and the next peak is found among the remaining pixels. This process is repeated until all pixels have been removed from the heat map.
Algorithm LocalSearch(s, heatmap)
Input:
  s: starting position (x, y)
  heatmap
Output:
  s: the position of the local peak.
Only pixels with values > 0 are considered.
Algorithm Cover(s, heatmap)
Input:
  s: the position of the local peak.
  heatmap
Output:
  cover: the set of pixels covered by the peak.
This is a breadth-first search starting from s with a modified visiting condition: a neighbor p of the current position q is added to the cover only when heatmap[p] > 0 and heatmap[p] <= heatmap[q]. Thus every covered pixel has a non-descending path leading to the local peak s.
Algorithm locate (heatmap) input:
heatmap
And (3) outputting:
loci
loci←{}
pixel ← { all pixels in heat map }
While the pixel is not an empty
s ← any pixel from the pixels
s ← local search (s, heatmap)// s are now local peaks
Detecting a local region of radius R around s to obtain a better local peak
r ← overlay (s, heatmap)
Pixel ← pixel \ r// delete all pixels covered
Addition of s to loci
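The local-search embodiment can likewise be sketched in Python. The snippet below is a simplified illustration under a few assumptions: 8-connected neighborhoods, a plain NumPy array as the heat map, and omission of the optional radius-R refinement step mentioned in the pseudocode; the function names mirror the pseudocode but are otherwise hypothetical.

from collections import deque
import numpy as np

def local_search(s, heatmap):
    # Hill-climb from starting pixel s to a local peak, considering only
    # pixels with value > 0 (the starting pixel is assumed to be positive).
    r, c = s
    h, w = heatmap.shape
    while True:
        best = (r, c)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and heatmap[nr, nc] > heatmap[best]:
                    best = (nr, nc)
        if best == (r, c):
            return (r, c)                  # no higher neighbor: local peak
        r, c = best

def cover(s, heatmap):
    # Breadth-first search from peak s: a neighbor p of the current pixel q is
    # added only if heatmap[p] > 0 and heatmap[p] <= heatmap[q], so every
    # covered pixel has a non-decreasing path leading to the peak s.
    h, w = heatmap.shape
    covered, queue = {s}, deque([s])
    while queue:
        qr, qc = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                p = (qr + dr, qc + dc)
                if (0 <= p[0] < h and 0 <= p[1] < w and p not in covered
                        and 0 < heatmap[p] <= heatmap[qr, qc]):
                    covered.add(p)
                    queue.append(p)
    return covered

def locate(heatmap):
    # Repeat local_search + cover until every positive pixel has been consumed.
    pixels = {tuple(p) for p in np.argwhere(heatmap > 0)}
    loci = []
    while pixels:
        s = local_search(next(iter(pixels)), heatmap)
        loci.append(s)
        pixels -= cover(s, heatmap)
    return loci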
E-3. examples of the invention
Ea1. A machine learning method for data analysis, comprising:
(a) receiving an image of a test sample, wherein the sample is loaded into a QMAX device and the image is taken by an imager connected to the QMAX device, wherein the image comprises a detectable signal from an analyte in the test sample;
(b) analyzing the image with a detection model and generating a 2-D data array of the image, wherein the 2-D data array includes probability data for the analyte at each location in the image, and the detection model is established by a training process comprising:
i. feeding an annotated data set to the convolutional neural network, wherein the annotated data set is from a sample of the same type as the test sample and for the same analyte;
ii. training and building the detection model by convolution;
(c) analyzing the 2-D data array to detect local signal peaks by:
i. a signal list process, or
ii. a local search process; and
(d) calculating the amount of the analyte based on the local signal peak information.
Eb1. a system for data analysis, comprising:
QMAX apparatus, imager and calculation unit, wherein:
(a) the QMAX device is configured to compress at least a portion of the test sample into a highly uniform thickness layer;
(b) the imager is configured to generate an image of the sample at the layer of uniform thickness, wherein the image includes a detectable signal from an analyte in the test sample;
(c) the computing unit is configured to:
i. receiving an image from an imager;
ii. analyzing the image with the detection model and generating a 2-D data array of the image, wherein the 2-D data array includes probability data for the analyte at each location in the image, and the detection model is built by a training process that includes:
feeding an annotation data set to the convolutional neural network, wherein the annotation data set is from a sample of the same type as the test sample and for the same analyte;
training and establishing the detection model through convolution;
iii. analyzing the 2-D data array using a signal list process or a local search process to detect local signal peaks; and
iv. calculating the amount of the analyte based on the local signal peak information.
The method of embodiment EA1, wherein the signal list process comprises:
i. establishing a signal list by iteratively detecting local peaks in the 2-D data array, calculating a local area around each detected peak, and sequentially removing the detected peaks and their local area data from the data array into the signal list; and
ii. repeatedly removing the highest signal from the signal list, together with the signals around it, thereby detecting local signal peaks.
The method according to any of the preceding EA embodiments, wherein the local search process comprises:
i. searching for local maxima in the 2-D data array starting from random points;
ii. calculating a local area of smaller values around each peak;
iii. removing the local maxima and their surrounding smaller values from the 2-D data array; and
iv. repeating steps i-iii to detect local signal peaks.
The method according to any of the preceding EA embodiments, wherein the data set is partitioned prior to annotation.
Eb2. The system of embodiment EB1, wherein the imager comprises a camera.
Eb3. The system of embodiment EB2, wherein the camera is part of a mobile communication device.
Eb4. The system according to any of the preceding EB embodiments, wherein the computing unit is part of a mobile communication device.
F. Apparatus and method for tissue staining and cell imaging
F-1. embodiment of QMAX device for tissue staining and cell imaging
Fig. 18 shows an embodiment of a generic QMAX device, with or without a hinge, where Q: quantification; M: magnification; A: adding reagents; X: acceleration; also known as a Compression Regulated Open Flow (CROF) device. The generic QMAX device comprises a first plate 10 and a second plate 20. In particular, panel (A) shows a perspective view of the first plate 10 and the second plate 20, wherein the first plate has spacers. It should be noted, however, that the spacers may also be fixed on the second plate 20 (not shown) or on both the first plate 10 and the second plate 20 (not shown). Panel (B) shows a perspective view and a cross-sectional view of a sample 90 deposited on the first plate 10 in the open configuration; it should be noted, however, that the sample 90 may also be deposited on the second plate 20 (not shown), or on both the first plate 10 and the second plate 20 (not shown). Panel (C) shows (i) spreading of the sample 90 (the sample flows between the inner surfaces of the plates) and reduction of the sample thickness using the first plate 10 and the second plate 20, and (ii) regulation of the sample thickness by the spacers and the plates in the closed configuration of the QMAX device. The inner surface of each plate has one or more binding sites and/or storage sites (not shown).
In some embodiments, the spacers 40 have a predetermined uniform height and a predetermined uniform spacer spacing. In the closed configuration, as shown in FIG. 1(C), the spacing between the plates, and thus the thickness of the sample 90, is regulated by the spacers 40. In some embodiments, the uniform thickness of the sample 90 is substantially similar to the uniform height of the spacers 40. It should be noted that, although FIG. 18-A shows the spacers 40 fixed on one of the plates, in some embodiments the spacers are not fixed.
Fig. 18-a shows an embodiment of a QMAX apparatus for cell imaging. As shown, the device includes a first plate 10, a second plate 20, and a spacer 40. The plates are movable relative to each other into different configurations, one or both of the plates being flexible. Each plate has a sample contact area (not shown) on its respective inner surface for contacting the staining solution 910 and/or the tissue sample 90 suspected of containing the analyte of interest. The second plate 20 comprises spacers 40 fixed to its inner surface 21; the spacers 40 have a predetermined substantially uniform height and a predetermined spacer spacing, and at least one spacer is located within the sample contact area.
Fig. 18-A, panels (A) and (B), show one of the configurations, the open configuration. As shown, in the open configuration the two plates are partially or completely separated, the spacing 102 between the plates is not regulated by the spacers 40, and the staining solution 910 and the sample 90 are deposited on the first plate 10. It should be noted that the staining solution 910 and the sample 90 may also be deposited on the second plate 20 or on both plates.
Fig. 18-A, panel (C), shows another configuration of the two plates, the closed configuration, which is configured after the staining solution 910 and the sample 90 are deposited in the open configuration shown in panel (B). In the closed configuration, at least a portion of the sample 90 is between the two plates, and a layer of at least a portion of the staining solution 910 is between at least a portion of the sample 90 and the second plate 20, wherein the thickness of at least a portion of the staining solution layer is regulated by the plates, the sample 90 and the spacers 40, and the average distance between the sample surface and the second plate surface is equal to or less than 250 μm with small variation.
In some embodiments, the sample may be dried thereon in an open configuration, and wherein the sample comprises a bodily fluid selected from the group consisting of: amniotic fluid, aqueous humor, vitreous humor, blood (e.g., whole blood, fractionated blood, plasma, or serum), breast milk, cerebrospinal fluid (CSF), cerumen (cerumen), chyle, chyme, endolymph, perilymph, stool, breath, gastric acid, gastric juice, lymph, mucus (including nasal drainage and sputum), pericardial fluid, peritoneal fluid, pleural fluid, pus, rheumatism, saliva, exhaled breath condensate, sebum, semen, sputum, sweat, synovial fluid, tears, vomit, urine, and any combination thereof.
In some embodiments, the sample contacting area of the one or two plates is configured such that the sample can be dried thereon in an open configuration, and the sample comprises a blood smear and is dried on one or two plates.
In some embodiments, the sample is a solid tissue section having a thickness of 1-200 μm, and the sample contact area of one or both plates is adhered to the sample. In some embodiments, the sample is paraffin embedded. In some embodiments, the sample is blood.
In some embodiments, the staining solution is a pure buffer solution that does not specifically contain components that are capable of altering the properties of the sample. In some embodiments, the staining solution comprises a fixative capable of fixing the sample. In some embodiments, the staining solution comprises a blocking agent, wherein the blocking agent is configured to render non-specific endogenous species in the sample unreactive with a detection agent for specifically labeling the target analyte. In some embodiments, the staining solution comprises a dewaxing agent capable of removing paraffin from the sample. In some embodiments, the staining solution comprises a permeabilization reagent capable of permeabilizing cells in a tissue sample containing the analyte of interest. In some embodiments, the staining solution comprises an antigen retrieval agent capable of promoting antigen retrieval. In some embodiments, the staining solution comprises a detection reagent that specifically labels the analyte of interest in the sample. In some embodiments, the sample contacting region of one or both plates comprises a storage site comprising a capping agent, wherein the capping agent is configured to render non-specific endogenous species in the sample unreactive with a detection agent for specifically labeling the target analyte. In some embodiments, the sample contacting area of one or both plates comprises a storage site comprising a dewaxing agent capable of removing paraffin from the sample. In some embodiments, the sample contacting region of one or both plates comprises a storage site comprising a permeabilization reagent capable of permeabilizing a cell in a tissue sample comprising an analyte of interest. In some embodiments, the sample contacting region of one or both plates comprises a storage site comprising an antigen retrieval agent capable of promoting antigen retrieval. In some embodiments, the sample contacting region of one or both plates comprises a storage site containing a detection reagent that specifically labels a target analyte in the sample. In some embodiments, the sample contacting region of one or both plates comprises a binding site comprising a capture agent, wherein the capture agent is configured to bind to a target analyte on the surface of a cell in the sample and immobilize the cell.
In some embodiments, the detection agent comprises a stain selected from the group consisting of: acid fuchsin, alcian blue 8GX, alizarin red S, aniline blue WS, auramine O, azocarmine B, azocarmine G, azure A, azure B, azure C, basic fuchsin, Bismarck brown Y, brilliant cresyl blue, brilliant green, carmine, chlorazol black E, Congo red, cresyl violet, crystal violet, darrow red, eosin B, eosin Y, erythrosin, ethyl eosin, ethyl green, fast green FCF, fluorescein isothiocyanate, Giemsa stain, hematoxylin and eosin, indigo carmine, Janus green B, Jenner stain 1899, light green SF, malachite green, Martius yellow, methyl orange, methyl violet 2B, methylene blue, methylene violet, neutral red, nigrosin, Nile blue A, nuclear fast red, oil red O, orange G, orange II, orcein, pararosaniline, phloxine B, pyronine, resazurin, rose bengal, safranin O, Sudan black B, Sudan III, Sudan IV, tetrachrome stain, thionine, toluidine blue, Weigert's stain, Wright's stain, and any combination thereof.
In some embodiments, the detection agent comprises an antibody configured to specifically bind to a protein analyte in the sample.
In some embodiments, the detection agent comprises an oligonucleotide probe configured to specifically bind to DNA and/or RNA in the sample.
In some embodiments, the detection reagent is labeled with a reporter molecule, wherein the reporter molecule is configured to provide a detectable signal to be read and analyzed.
In some embodiments, the signal is selected from the group consisting of:
i. luminescence selected from photoluminescence, electroluminescence, and electrochemiluminescence;
light absorption, reflection, transmission, diffraction, scattering or diffusion;
surface raman scattering;
an electrical impedance selected from the group consisting of resistance, capacitance, and inductance;
magnetic relaxivity;
and any combination of i-v.
F-2 immunohistochemistry
In some embodiments, the devices and methods of the present invention may be used to immunohistochemically assay a sample.
In an Immunohistochemical (IHC) staining method, a tissue sample is fixed (e.g., in paraformaldehyde), optionally embedded in wax, cut into thin sections less than 100 μm thick (e.g., 2 μm to 6 μm thick), and then fixed on a support such as a glass slide. Once fixed, the tissue sections may be dehydrated using an increased concentration of alcohol wash and clarified using a detergent such as xylene.
In most IHC procedures, a primary and a secondary antibody are used. In these methods, the primary antibody binds to an antigen of interest (e.g., a biomarker) and is unlabeled. The secondary antibody binds to the primary antibody and is conjugated either directly to a reporter molecule or to a linker molecule (e.g., biotin) that can capture a reporter molecule in solution. Alternatively, the primary antibody itself may be conjugated directly to a reporter molecule or to a linker molecule (e.g., biotin) that can capture a reporter molecule in solution. Reporter molecules include fluorophores (e.g., FITC, TRITC, AMCA, fluorescein, and rhodamine) and enzymes such as alkaline phosphatase (AP) and horseradish peroxidase (HRP), for which a variety of fluorogenic, chromogenic, and chemiluminescent substrates, such as DAB or BCIP/NBT, are available.
In the direct method, tissue sections are incubated with a labeled primary antibody (e.g., a FITC-conjugated antibody) in a binding buffer. The primary antibody binds directly to the antigen in the tissue section, and after washing the tissue section to remove any unbound primary antibody, the section is analyzed by microscopy.
In the indirect method, the tissue section is incubated with an unlabeled primary antibody that binds the target antigen in the tissue. After washing the tissue sections to remove unbound primary antibody, the tissue sections are incubated with labeled secondary antibody that binds the primary antibody.
Following immunohistochemical staining of the antigen, the tissue sample may be stained with another stain, such as hematoxylin, Hoechst stain, and DAPI, to provide contrast and/or to identify other features.
The device may be used for immunohistochemical (IHC) staining of a tissue sample. In these embodiments, the device may comprise a first plate and a second plate, wherein: the plates are movable relative to each other into different configurations; one or both plates are flexible; each plate has a sample contacting area on its respective surface for contacting a tissue sample or an IHC staining solution; the sample contacting area of the first plate is smooth and flat; and the sample contacting area of the second plate comprises spacers that are fixed on the surface and have a predetermined substantially uniform height and a predetermined constant inter-spacer distance in the range of 7 μm to 200 μm;
wherein one of the configurations is an open configuration, in which the two plates are completely or partially separated and the spacing between the plates is not regulated by the spacers; and wherein another of the configurations is a closed configuration, which is configured after the sample and the IHC staining solution are deposited in the open configuration; and in the closed configuration: at least a portion of the sample is between the two plates, and a layer of at least a portion of the staining solution is between at least a portion of the sample and the second plate, wherein the thickness of at least a portion of the staining solution layer is regulated by the plates, the sample and the spacers, and the average distance between the surface of the sample and the surface of the second plate is equal to or less than 250 μm with small variation.
In some embodiments, the device may comprise a dried IHC stain coated on the sample contacting area of one or both plates. In some embodiments, the device may comprise a dried IHC stain coated on the sample contacting area of the second plate, and the IHC staining solution comprises a liquid that dissolves the dried IHC stain. In some embodiments, the thickness of the sample is 2 μm to 6 μm.
F-3. H&E and special stains
In some embodiments, the devices and methods of the present invention can be used to perform H & E staining and special staining.
Hematoxylin and eosin staining (H&E staining or HE staining) is one of the principal stains in histology. It is the most widely used stain in medical diagnostics and is often the gold standard; for example, when a pathologist examines a biopsy of a suspected cancer, the histological section is typically H&E stained and referred to as an "H&E section," "H+E section," or "HE section." The combination of hematoxylin and eosin produces blue, violet and red colors.
In diagnostic pathology, the term "special staining" is most commonly used in the clinical setting and refers only to any technique for imparting color to a sample other than the H & E method. This also includes immunohistochemistry and in situ hybridization staining. On the other hand, H & E staining is the most commonly used staining method in histology and medical diagnostic laboratories.
In any embodiment, the dried binding site may comprise a capture agent, such as an antibody or a nucleic acid. In some embodiments, the releasable dry reagent may be a labeled reagent, such as a fluorescently labeled reagent, e.g., a fluorescently labeled antibody, or a cell stain such as a Romanowsky stain, a Leishman stain, a May-Grünwald stain, a Giemsa stain, a Jenner's stain, a Wright's stain, or any combination thereof (e.g., a Wright-Giemsa mixed stain). Such a stain may comprise eosin Y or eosin B together with methylene blue. In certain embodiments, the stain may be a basic stain, such as hematoxylin.
In some embodiments, special stains include, but are not limited to, acid fuchsin, alcian blue 8GX, alizarin red S, aniline blue WS, auramine O, azocarmine B, azocarmine G, azure A, azure B, azure C, basic fuchsin, Bismarck brown Y, brilliant cresyl blue, brilliant green, carmine, chlorazol black E, Congo red, cresyl violet, crystal violet, darrow red, eosin B, eosin Y, erythrosin, ethyl eosin, ethyl green, fast green FCF, fluorescein isothiocyanate, Giemsa stain, hematoxylin and eosin, indigo carmine, Janus green B, Jenner stain 1899, light green SF, malachite green, Martius yellow, methyl orange, methyl violet 2B, methylene blue, methylene violet, neutral red, nigrosin, Nile blue A, nuclear fast red, oil red O, orange G, orange II, orcein, pararosaniline, phloxine B, pyronine, resazurin, rose bengal, safranin O, Sudan black B, Sudan III, Sudan IV, tetrachrome stain, thionine, toluidine blue, Weigert's stain, Wright's stain, and any combination thereof.
F-4 in situ hybridization technique
In some embodiments, the devices and methods of the present invention can be used to perform In Situ Hybridization (ISH) on a tissue sample.
In situ hybridization (ISH) is a type of hybridization that uses a labeled complementary DNA, RNA, or modified nucleic acid strand (i.e., a probe) to localize a specific DNA or RNA sequence in a portion or section of tissue (in situ), or, if the tissue is small enough (e.g., plant seeds, Drosophila embryos), in the entire tissue (whole-mount ISH), in cells, and in circulating tumor cells (CTCs).
In situ hybridization is used to reveal the location of specific nucleic acid sequences on chromosomes or in tissue, a key step in understanding the organization, regulation, and function of genes. Key techniques currently in use include: in situ hybridization to mRNA with oligonucleotide and RNA probes (both radiolabeled and hapten-labeled); analysis by light and electron microscopy; whole-mount in situ hybridization; double detection of RNAs and of RNA plus protein; and fluorescence in situ hybridization to detect chromosomal sequences. DNA ISH can be used to determine the structure of chromosomes. Fluorescent DNA ISH (FISH) can be used, for example, in medical diagnostics to assess chromosomal integrity. RNA ISH (RNA in situ hybridization) is used to measure and localize RNAs (mRNA, lncRNA, and miRNA) within tissue sections, cells, whole mounts, and circulating tumor cells (CTCs).
In some embodiments, the detection agent comprises a nucleic acid probe for in situ hybridization staining. Nucleic acid probes include, but are not limited to, oligonucleotide probes configured to specifically bind DNA and/or RNA in a sample.
F-5. System and method for tissue staining and cell imaging
There is also provided a system for rapid staining and analysis of tissue samples using a mobile phone, comprising:
(a) the sample, the staining solution and the device as described above,
(b) a mobile communication device comprising:
i. one or more cameras for detecting and/or imaging a sample;
electronics, signal processors, hardware and software for receiving and/or processing the detected signals and/or images of the sample and for remote communication;
(c) a light source from a mobile communication device or an external source.
Also provided is a method for rapid staining and analysis of a tissue sample using a mobile phone, comprising:
(a) depositing a tissue sample and a staining solution on the device of the system and placing the two plates in a closed configuration;
(b) acquiring a mobile phone with hardware and software for imaging, data processing and communication;
(c) performing an assay on the tissue sample deposited on the CROF device by the mobile phone to produce a result; and
(d) the results are transmitted from the mobile phone to a location remote from the mobile phone.
Also provided is a method for staining a tissue sample, comprising:
(a) obtaining a tissue sample;
(b) obtaining a dyeing solution;
(c) a first plate and a second plate, wherein:
the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate having a sample contacting area on its respective surface for contacting a tissue sample or an IHC staining solution;
the sample contact area in the first plate is smooth and flat;
the sample contacting area in the second plate comprises spacers fixed on the surface and having a predetermined substantially uniform height and a predetermined constant spacer pitch, the spacer pitch being in the range of 7 μm to 200 μm;
(c) depositing the sample on one or both of the plates when the plates are in an open configuration, wherein the open configuration is one in which the two plates are partially or completely separated and the spacing between the plates is not regulated by the spacers;
(d) after (c), compressing at least a portion of the tissue sample and at least a portion of the staining solution into a closed configuration with two plates;
wherein in the closed configuration: at least a part of the sample is between the two plates and at least a part of the layer of staining solution is between at least a part of the sample and the second plate, wherein at least a part of the layer of staining solution is adjusted by the plates, the sample and the spacer, and the average distance between the surface of the sample and the surface of the second plate is equal to or less than 250 μm with little variation.
All of the benefits and advantages described for other embodiments (e.g., accelerated reaction, faster results, etc.) can be applied to the devices, systems, and methods.
Moreover, all of the parameters described above in the context of other embodiments (e.g., the size, spacing, and shape of the spacers, the flexibility of the spacers and plates, and how the devices and systems may be used, etc.) may be incorporated into IHC implementations described in this section.
For example, in some embodiments, the spacers that regulate the layer of uniform thickness (i.e., the spacers that space the plates apart across the layer) have a "fill factor" of at least 1%, such as at least 2% or at least 5%, where the fill factor is the ratio of the spacer area in contact with the layer of uniform thickness to the total plate area in contact with the layer of uniform thickness. In some embodiments, for the spacers that regulate the layer of uniform thickness, the Young's modulus of the spacers multiplied by the fill factor of the spacers is equal to or greater than 10 MPa, e.g., at least 15 MPa or at least 20 MPa, where the fill factor is the ratio of the spacer area in contact with the layer of uniform thickness to the total plate area in contact with the layer of uniform thickness. In some embodiments, the thickness of the flexible plate times the Young's modulus of the flexible plate is in the range of 60 to 300 GPa·μm. In some embodiments, for a flexible plate, the fourth power of the inter-spacer distance (ISD) divided by the thickness (h) of the flexible plate and by the Young's modulus (E) of the flexible plate, ISD⁴/(hE), is equal to or less than 10⁶ μm³/GPa, e.g., less than 10⁵ μm³/GPa, less than 10⁴ μm³/GPa, or less than 10³ μm³/GPa.
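As a hedged numerical illustration of the ISD⁴/(hE) criterion above, the short Python sketch below evaluates it for hypothetical values chosen only to fall inside the ranges quoted in the text; none of the numbers are taken from the patent.

def isd_criterion(isd_um, h_um, e_gpa):
    # Return ISD^4 / (h * E) in um^3/GPa; a smaller value means the flexible
    # plate sags less between spacers, keeping the sample layer more uniform.
    return isd_um ** 4 / (h_um * e_gpa)

# Hypothetical example: 100 um inter-spacer distance, 100 um thick plate, E = 2 GPa,
# so h*E = 200 GPa*um (within the 60-300 GPa*um window quoted above).
value = isd_criterion(isd_um=100.0, h_um=100.0, e_gpa=2.0)
print(f"ISD^4/(hE) = {value:.3g} um^3/GPa")   # 5e+05, below the 1e6 um^3/GPa bound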
In some embodiments, one or both plates comprise position markers on or within the surface of the plate that provide information on the position of the plate, e.g., the position to be analyzed or the position on which a zone should be deposited. In some cases, one or both plates may include graduation markings on the surface or inside the plate that provide information on the lateral dimensions of the sections and/or the structure of the plate. In some embodiments, one or both plates comprise imaging indicia on the surface or within the plate that facilitates imaging of the sample. For example, the imaging markers may help focus the imaging device or guide the imaging device to a location on the device. In some embodiments, the spacer can be used as a position marker, a scale marker, an imaging marker, or any combination thereof.
In some embodiments, the inter-spacer distance may be substantially periodic. In some cases, the spacers may be arranged in a regular pattern and the spacing between adjacent spacers may be substantially the same. In some embodiments, the spacers are pillars having a cross-sectional shape selected from round, polygonal, circular, square, rectangular, oval, elliptical, or any combination thereof, and in some embodiments the spacers may have a substantially flat top surface, wherein, for each spacer, the ratio of the lateral dimension of the spacer to its height is at least 1. In some embodiments, the smallest lateral dimension of the spacers is less than or substantially equal to the smallest dimension of the analyte in the fluid sample. In some embodiments, the smallest lateral dimension of the spacers is in the range of 0.5 μm to 100 μm, for example in the range of 2 μm to 50 μm or 0.5 μm to 10 μm.
In some embodiments, the spacers have a pillar shape and the sidewall corners of the spacers have a rounded shape with a radius of curvature of at least 1 μm, such as at least 1.2 μm, at least 1.5 μm, or at least 2.0 μm. The spacers may have any convenient density, for example a density of at least 1,000/mm², at least 2,000/mm², at least 5,000/mm², or at least 10,000/mm².
In the device, at least one plate may be transparent, allowing optical reading of the assay. Likewise, at least one of the plates may be made of a flexible polymer, allowing the sample to be spread efficiently by compressing the plates together. In some embodiments, the spacers are not compressible under the pressure used to compress the plates, and/or, independently, only one of the plates is flexible. The flexible plate may have a thickness in the range of 20 μm to 200 μm, for example 50 μm to 150 μm. As mentioned above, in the closed configuration the thickness of the uniform thickness layer may have a small variation.
In some embodiments, the variation may be less than 10%, less than 5%, or less than 2%, meaning that the thickness of the region is no more than +/-10%, 5%, or 2% of the average thickness.
In some embodiments, the first and second plates are connected, and the device can be changed from the open configuration to the closed configuration by folding the plates. In some embodiments, the first and second plates may be connected by a hinge, and the device may be changed from the open configuration to the closed configuration by folding the plates so that the device bends along the hinge. The hinge may be a separate material attached to the plates or, in some cases, may be integrally formed with the plates.
In some embodiments, the device is capable of analyzing the section very quickly. In some cases, the analysis may be performed in 60 seconds or less, e.g., in 30 seconds, 20 seconds, 15 seconds, or 10 seconds or less.
In some embodiments, the system may further include (d) a housing configured to hold the sample and mounted to the mobile communication device. The housing may contain optics for facilitating imaging and/or signal processing of the sample by the mobile communication device, and a mount configured to hold the optics on the mobile communication device. In some cases, an optical element of the device (e.g., a lens, filter, mirror, prism, or beam splitter) may be movable such that the sample may be imaged in at least two channels.
In some embodiments, the mobile communication device may be configured to communicate the test results to medical personnel (e.g., MD), a medical facility (e.g., hospital or testing laboratory), or an insurance company. Further, the mobile communication device may be configured to communicate information about the subject (e.g., the subject's age, gender, weight, address, name, previous test results, previous medical history, etc.) with a medical professional, medical institution or insurance company. In certain embodiments, the mobile communication device may be configured to receive prescriptions, diagnoses, or suggestions from medical personnel. For example, in some embodiments, the mobile communication device may transmit the assay results to a remote location where medical personnel give a diagnosis. The diagnosis may be communicated to the subject via the mobile communication device.
In some embodiments, the mobile communication device may include hardware and software that allows it to (a) capture an image of the sample; (b) analyzing the test position and the control position in the image; and (c) comparing the value obtained from the analysis of the test location with a threshold value characterizing a rapid diagnostic test. In some cases, the mobile communication device communicates with the remote location via a wireless or cellular network.
In either embodiment, the mobile communication device may be a mobile phone.
The system may be used in a method comprising (a) a sample on a device of the system; (b) assaying the sample deposited on the device to produce a result; and (c) communicating the results from the mobile communication device to a location remote from the mobile communication device. The method may include analyzing the results at the remote location to provide analysis results; and transmitting the analysis results from the remote location to the mobile communication device. As described above, the analysis may be performed by medical personnel at a remote location. Also, in some embodiments, the mobile communication device may receive prescriptions, diagnoses, or recommendations from medical personnel at a remote location.
A method for analyzing a tissue section is also provided. In some embodiments, the method may comprise obtaining a device as described above; depositing the section onto one or both plates of the device; placing the plates in the closed configuration and applying an external force on at least part of the plates; and analyzing the sample in the layer of uniform thickness while the plates are in the closed configuration.
In some embodiments, the method comprises:
(a) obtaining a tissue slice;
(b) obtaining a first plate and a second plate that are movable relative to each other into different configurations, wherein each plate has a substantially flat sample contacting surface, one or both plates are flexible, and the one or both plates contain a spacer that is fixed with the respective sample contacting surface, and wherein the spacer has:
i. a predetermined substantially uniform height;
ii. the shape of a pillar with a substantially uniform cross-section and a flat top surface;
iii. a width-to-height ratio equal to or greater than 1;
iv. a predetermined constant inter-spacer distance in the range of 10 μm to 200 μm;
v. a fill factor equal to or greater than 1%;
(c) depositing the sample on one or both of the plates when the plates are in an open configuration, wherein the open configuration is one in which the two plates are partially or completely separated and the spacing between the plates is not regulated by the spacers;
(d) after (c), using the two plates to compress at least part of the section into a layer of substantially uniform thickness bounded by the sample contacting surfaces of the plates, wherein the uniform thickness of the layer is regulated by the spacers and the plates and has an average value in the range of 1.8 μm to 3 μm with a variation of less than 10%, wherein the compressing comprises:
bringing the two plates together; and
conformably pressing, in parallel or sequentially, an area of at least one of the plates to press the plates together into a closed configuration, wherein the conformable pressing produces a substantially uniform pressure on the plates over at least part of the sample, and the pressing spreads at least part of the sample laterally between the sample contacting surfaces of the plates, and wherein the closed configuration is a configuration in which the spacing between the plates in the region of the layer of uniform thickness is regulated by the spacers;
(e) analyzing the section in the layer of uniform thickness while the plates are in the closed configuration;
wherein the fill factor is a ratio of the spacer contact area to the total plate area;
wherein conformal compression is a method of making the pressure applied over an area substantially constant regardless of changes in the shape of the outer surface of the plate; and
wherein the parallel pressing simultaneously applies the pressures on the intended area, and the sequential pressing applies the pressure on a part of the intended area and gradually moves to other areas.
In some embodiments, the method may comprise: removing the external force after the panels are in the closed configuration; when the panel is in the closed configuration, portions of the layer of uniform thickness are imaged. As described above, in these embodiments, the spacer pitch may be in the range of 20 μm to 200 μm or 5 μm to 20 μm. In these embodiments, the product of the fill factor and the young's modulus of the spacer is 2MPa or greater. In some embodiments, the surface variation is less than 30 nm.
In any of these embodiments, the imaging and counting may be accomplished by: i. irradiating a cross section in the uniform thickness layer; capturing one or more images of the portion using a CCD or CMOS sensor.
In some embodiments, the external force may be provided by the human hand, for example by pressing down with a finger such as a thumb, or by pinching between a thumb and another finger, such as the index finger, of the same hand.
In some embodiments, one or more plates may comprise a dried reagent (e.g., a binding agent, a staining agent, a detection agent, or an assay reactant) coated on one or both plates.
In some embodiments, the thickness uniformity of the uniform thickness sample layer may be up to +/-5%, such as up to +/-2% or up to +/-1%.
In some embodiments, the spacers are pillars having a cross-sectional shape selected from the group consisting of round, polygonal, circular, square, rectangular, oval, elliptical, and any combination thereof.
F-6 examples of the invention
A device for analyzing a tissue sample, comprising:
a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate having on its respective inner surface a sample contacting area for contacting a staining solution and/or a tissue sample suspected of containing a target analyte;
one or both plates comprise a spacer secured to the respective plate;
v. the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
wherein one of these configurations is an open configuration, wherein: the two plates are partially or completely separated, the spacing between the plates is not adjusted by the spacers, and the sample is deposited on one or both of the plates;
wherein another of the configurations is a closed configuration, which is configured after the staining solution and the sample are deposited in the open configuration; and in the closed configuration: at least a portion of the sample is between the two plates, and a layer of at least a portion of the staining solution is between at least a portion of the sample and the second plate, wherein the thickness of at least a portion of the staining solution layer is regulated by the plates, the sample and the spacers, and the average distance between the surface of the sample and the surface of the second plate is equal to or less than 250 μm with small variation.
Faa1. a device for analyzing a tissue sample, comprising:
a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate has a sample contacting area on its respective inner surface for contacting a transfer solution and/or a tissue sample suspected of containing a target analyte;
one or both plates contain a staining agent that dries on the respective sample-contacting region and is configured to dissolve in the transfer solution and stain the tissue sample upon contacting the transfer solution;
v. one or both plates comprise a spacer secured to the respective plate;
the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
wherein one of these configurations is an open configuration, wherein: the two plates are partially or completely separated, the spacing between the plates is not adjusted by the spacers, and the sample is deposited on one or both of the plates;
wherein another of the configurations is a closed configuration, which is configured in the open configuration after deposition of the staining solution and the sample, and in the closed configuration: at least part of the sample is between the two plates and at least part of the layer of transfer solution is between at least part of the sample and the second plate, wherein the thickness of at least part of the layer of transfer solution is adjusted by the plates, the sample and the spacer, and the average distance between the surface of the sample and the surface of the second plate is equal to or less than 250 μm with little variation.
Fb1. a method for analyzing a tissue sample, comprising the steps of:
(a) obtaining a tissue sample suspected of containing a target analyte and a staining solution;
(b) obtaining a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate has a sample contacting area on its respective inner surface for contacting a staining solution and/or a tissue sample;
one or both plates comprise a spacer secured to the respective plate;
v. the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
(c) depositing a staining solution and a tissue sample on one or both plates when the plates are in an open configuration,
wherein the open configuration is one in which the two plates are partially or completely separated, the spacing between the two plates is not adjusted by the spacer, and the sample and staining solution are deposited on one or both plates;
(d) after (c), bonding the two panels together and pressing the panels into a closed configuration,
wherein pressing comprises concurrently or sequentially conforming a region of at least one of the plates to press the plates together into a closed configuration, wherein the conforming pressing produces a substantially uniform pressure on the plate over at least a portion of the sample, and the pressing spreads at least a portion of the sample laterally between the inner surfaces of the plates;
wherein the other of the configurations is a closed configuration, which is configured after the staining solution and the sample are deposited in the open configuration; and in the closed configuration: at least part of the sample is between the two plates, and a layer of at least part of the staining solution is between at least part of the sample and the second plate, wherein the thickness of at least part of the staining solution layer is regulated by the plates, the sample and the spacers, and the average distance between the surface of the sample and the surface of the second plate is equal to or less than 250 μm with small variation;
and
(e) the target analyte is analyzed when the plate is in the closed configuration.
Fbb1. a method for analyzing a tissue sample comprising the steps of:
(a) obtaining a tissue sample suspected of containing a target analyte and a transfer solution;
(b) obtaining a first plate, a second plate, and a spacer, wherein:
i. the plates are movable relative to each other into different configurations;
one or both plates are flexible;
each plate having on its respective inner surface a sample contacting area for contacting a staining solution and/or a tissue sample suspected of containing a target analyte;
one or both plates contain a staining agent coated on the respective sample contacting region and configured to dissolve in the transfer solution and stain the tissue sample upon contacting the transfer solution;
v. one or both plates comprise a spacer secured to the respective plate;
the spacers have a predetermined substantially uniform height and a predetermined spacer pitch, and
at least one spacer is located inside the sample contacting area;
(c) depositing a staining solution and a tissue sample on one or both plates when the plates are in an open configuration,
wherein the open configuration is one in which the two plates are partially or completely separated, the spacing between the two plates is not adjusted by the spacer, and the sample and staining solution are deposited on one or both plates;
(d) after (c), bonding the two panels together and pressing the panels into a closed configuration,
wherein pressing comprises concurrently or sequentially conforming a region of at least one of the plates to press the plates together into a closed configuration, wherein the conforming pressing produces a substantially uniform pressure on the plate over at least a portion of the sample, and the pressing spreads at least a portion of the sample laterally between the inner surfaces of the plates;
wherein the other of the configurations is a closed configuration, which is configured after the staining solution and the sample are deposited in the open configuration; and in the closed configuration: at least part of the sample is between the two plates, and a layer of at least part of the staining solution is between at least part of the sample and the second plate, wherein the thickness of at least part of the staining solution layer is regulated by the plates, the sample and the spacers, and the average distance between the surface of the sample and the surface of the second plate is equal to or less than 250 μm with small variation;
and
(e) the target analyte is analyzed when the plate is in the closed configuration.
In some embodiments, the sample may be dried thereon in an open configuration, and wherein the sample comprises a bodily fluid selected from the group consisting of: amniotic fluid, aqueous humor, vitreous humor, blood (e.g., whole blood, fractionated blood, plasma, or serum), breast milk, cerebrospinal fluid (CSF), cerumen (cerumen), chyle, chyme, endolymph, perilymph, stool, breath, gastric acid, gastric juice, lymph, mucus (including nasal drainage and sputum), pericardial fluid, peritoneal fluid, pleural fluid, pus, rheumatism, saliva, exhaled breath condensate, sebum, semen, sputum, sweat, synovial fluid, tears, vomit, urine, and any combination thereof.
Faa2. The apparatus of any preceding embodiment, wherein the staining solution has a viscosity in the range of 0.1 mPa·s to 3.5 mPa·s.
The device of any of the preceding embodiments, wherein the sample contacting area of the one or two plates is configured such that the sample can be dried thereon in the open configuration, and wherein the sample comprises a blood smear and is dried on one or two plates.
The device of any of the preceding embodiments, wherein the sample contacting area of the one or both plates is adhered to the sample, and wherein the sample is a tissue section having a thickness in the range of 1-200 μm.
The device according to embodiment FA4, wherein the sample is paraffin embedded.
The device according to any of the preceding embodiments, wherein the sample is fixed.
The device of any preceding embodiment, wherein the staining solution comprises a fixative capable of fixing the sample.
The device of any preceding embodiment, wherein the staining solution comprises a blocking agent, wherein the blocking agent is configured to disable non-specific endogenous species in the sample from reacting with a detection agent for specifically labeling the target analyte.
The apparatus of any preceding embodiment, wherein the staining solution comprises a deparaffinizing agent capable of removing paraffin from the sample.
The device according to any of the preceding embodiments, wherein the staining solution comprises a permeabilization reagent capable of permeabilizing cells in a tissue sample containing the analyte of interest.
The device according to any of the preceding embodiments, wherein the staining solution comprises an antigen retrieval agent capable of promoting antigen retrieval.
The device of any one of the preceding embodiments, wherein the staining solution comprises a detection agent that specifically labels the target analyte in the sample.
The device of any of the preceding embodiments, wherein the sample contacting region of one or both plates comprises a storage location comprising a blocking agent, wherein the blocking agent is configured to disable non-specific endogenous species in the sample from reacting with a detection agent for specifically labeling a target analyte.
The device of any preceding embodiment, wherein the sample contacting area of one or both plates comprises a storage location containing a dewaxing agent capable of removing paraffin from the sample.
A device according to any of the preceding embodiments, wherein the sample contacting area of one or both plates comprises a storage site containing a permeabilization reagent capable of permeabilizing cells in a tissue sample containing a target analyte.
The device according to any of the preceding embodiments, wherein the sample contacting area of one or both plates comprises a storage site comprising an antigen retrieval agent capable of promoting antigen retrieval.
A device according to any of the preceding embodiments, wherein the sample contacting area of one or both plates comprises a storage location containing a detection agent that specifically labels a target analyte in the sample.
The device of any preceding embodiment, wherein the detection agent comprises a stain selected from the group consisting of: acid fuchsin, alcian blue 8GX, alizarin red S, aniline blue WS, auramine O, azocarmine B, azocarmine G, azure A, azure B, azure C, basic fuchsin, Bismarck brown Y, brilliant cresyl blue, brilliant green, carmine, chlorazol black E, Congo red, cresyl violet, crystal violet, darrow red, eosin B, eosin Y, erythrosin, ethyl eosin, ethyl green, fast green FCF, fluorescein isothiocyanate, Giemsa stain, hematoxylin and eosin, indigo carmine, Janus green B, Jenner stain 1899, light green SF, malachite green, Martius yellow, methyl orange, methyl violet 2B, methylene blue, methylene violet, neutral red, nigrosin, Nile blue A, nuclear fast red, oil red O, orange G, orange II, orcein, pararosaniline, phloxine B, pyronine, resazurin, rose bengal, safranin O, Sudan black B, Sudan III, Sudan IV, tetrachrome stain, thionine, toluidine blue, Weigert's stain, Wright's stain, and any combination thereof.
The device of any one of the preceding embodiments, wherein the detection agent comprises an antibody configured to specifically bind to a protein analyte in the sample.
The device of any one of the preceding embodiments, wherein the detection agent comprises an oligonucleotide probe configured to specifically bind to DNA and/or RNA in the sample.
The device of any of the preceding embodiments, wherein the detection agent is labeled with a reporter molecule, wherein the reporter molecule is configured to provide a detectable signal to be read and analyzed.
Apparatus according to embodiment FA21, wherein the signal is selected from the group consisting of:
i. luminescence selected from photoluminescence, electroluminescence, and electrochemiluminescence;
light absorption, reflection, transmission, diffraction, scattering or diffusion;
surface raman scattering;
an electrical impedance selected from the group consisting of resistance, capacitance, and inductance;
magnetic relaxivity;
and any combination of i-v.
The device of any of the preceding embodiments, wherein the sample contacting region of one or both plates comprises a binding site comprising a capture agent, wherein the capture agent is configured to bind to a target analyte on the surface of a cell in the sample and immobilize the cell.
Fb2. the method according to embodiment FB1 wherein the depositing step (c) comprises depositing and drying the sample on one or both plates before depositing the remaining part of the staining solution on top of the dried sample, wherein the sample can be dried thereon in an open configuration, and wherein the sample comprises a body fluid selected from the group consisting of: amniotic fluid, aqueous humor, vitreous humor, blood (e.g., whole blood, fractionated blood, plasma, or serum), breast milk, cerebrospinal fluid (CSF), cerumen (cerumen), chyle, chyme, endolymph, perilymph, stool, breath, gastric acid, gastric juice, lymph, mucus (including nasal drainage and sputum), pericardial fluid, peritoneal fluid, pleural fluid, pus, rheumatism, saliva, exhaled breath condensate, sebum, semen, sputum, sweat, synovial fluid, tears, vomit, urine, and any combination thereof.
Fbb2. The method of any preceding embodiment, wherein the staining solution has a viscosity in the range of 0.1 mPa·s to 3.5 mPa·s.
Fb3. the method of any preceding embodiment, wherein depositing step (c) comprises depositing and drying the sample on one or two plates before depositing the remainder of the staining solution on top of the dried sample, and wherein the sample comprises a blood smear and is dried on one or two plates.
Fb4. The method of any preceding embodiment, wherein the depositing step (c) comprises depositing and attaching the sample to one or both plates prior to depositing the staining solution on top of the sample, wherein the sample contacting area of one or both plates is adhered to the sample, and wherein the sample is a tissue section having a thickness in the range of 1-200 μm.
Fb5. The method according to embodiment FB4, wherein the sample is paraffin embedded.
The method of any preceding embodiment, wherein the sample is fixed.
The method of any preceding embodiment, wherein the staining solution comprises a fixative capable of fixing the sample.
The method of any one of the preceding embodiments, wherein the staining solution comprises a blocking agent, wherein the blocking agent is configured to disable non-specific endogenous species in the sample from reacting with a detection agent for specifically labeling the target analyte.
The method of any preceding embodiment, wherein the staining solution comprises a deparaffinizing agent capable of removing paraffin from the sample.
Fb10. The method according to any one of the preceding embodiments, wherein the staining solution comprises a permeabilization reagent capable of permeabilizing cells in the tissue sample containing the analyte of interest.
A method according to any preceding embodiment, wherein the staining solution comprises an antigen retrieval agent capable of promoting antigen retrieval.
The method of any one of the preceding embodiments, wherein the staining solution comprises a detection agent that specifically labels the target analyte in the sample.
The method of any preceding embodiment, wherein the sample contacting region of one or both plates comprises a storage site comprising a blocking agent, wherein the blocking agent is configured to disable non-specific endogenous species in the sample from reacting with a detection agent for specifically labeling a target analyte.
A method according to any preceding embodiment, wherein the sample contacting region of one or both plates comprises a storage site containing a dewaxing agent capable of removing paraffin from the sample.
A method according to any of the preceding embodiments, wherein the sample contacting region of one or both plates comprises a storage site containing a permeabilizing reagent capable of permeabilizing a cell in a tissue sample containing an analyte of interest.
A method according to any preceding embodiment, wherein the sample contacting region of one or both plates comprises a storage site comprising an antigen retrieval agent capable of promoting antigen retrieval.
A method according to any of the preceding embodiments, wherein the sample contacting region of one or both plates comprises a storage site containing a detection agent that specifically labels a target analyte in the sample.
The method of any preceding embodiment, wherein the detection agent comprises a stain selected from the group consisting of: acid fuchsin, alcian blue 8GX, alizarin red S, aniline blue WS, auramine O, azocarmine B, azocarmine G, azure A, azure B, azure C, basic fuchsin, Bismarck brown Y, brilliant cresyl blue, brilliant green, carmine, chlorazol black E, Congo red, cresyl violet, crystal violet, darrow red, eosin B, eosin Y, erythrosin, ethyl eosin, ethyl green, fast green FCF, fluorescein isothiocyanate, Giemsa stain, hematoxylin and eosin, indigo carmine, Janus green B, Jenner stain 1899, light green SF, malachite green, Martius yellow, methyl orange, methyl violet 2B, methylene blue, methylene violet, neutral red, nigrosin, Nile blue A, nuclear fast red, oil red O, orange G, orange II, orcein, pararosaniline, phloxine B, pyronine, resazurin, rose bengal, safranin O, Sudan black B, Sudan III, Sudan IV, tetrachrome stain, thionine, toluidine blue, Weigert's stain, Wright's stain, and any combination thereof.
Fb19. The method of any preceding embodiment, wherein the detection agent comprises an antibody configured to specifically bind to a protein analyte in the sample.
Fb20. the method of any preceding embodiment, wherein the detection agent comprises an oligonucleotide probe configured to specifically bind to DNA and/or RNA in the sample.
Fb21. The method of any preceding embodiment, wherein the detection agent is labeled with a reporter molecule, wherein the reporter molecule is configured to provide a detectable signal that can be read and analyzed.
Fb22. The method of embodiment Fb21, wherein the signal is selected from the group consisting of:
i. luminescence selected from photoluminescence, electroluminescence, and electrochemiluminescence;
ii. light absorption, reflection, transmission, diffraction, scattering, or diffusion;
iii. surface Raman scattering;
iv. electrical impedance selected from the group consisting of resistance, capacitance, and inductance;
v. magnetic relaxivity; and
vi. any combination of i-v.
Fb23. The method of any one of the preceding embodiments, wherein the sample contacting region of one or both plates comprises a binding site comprising a capture agent, wherein the capture agent is configured to bind to a target analyte on the surface of a cell in the sample and immobilize the cell.
Fb24. the method according to any of the preceding embodiments, further comprising, prior to step (e): the sample is incubated in the closed configuration for a period of time longer than the time it takes for the detection agent to diffuse through the uniformly thick layer and the sample.
Fb25. the method of any preceding embodiment, further comprising, prior to step (e): incubating the sample in a closed configuration at a predetermined temperature in the range of 30-75 ℃.
Fb26. the method of any preceding embodiment, wherein the staining solution comprises a transfer solution.
G. Dual lens imaging system
Dual cameras are now common on state-of-the-art smartphones, which opens up more possibilities for smartphone-based imaging. With two cameras, two different areas of the sample can be imaged simultaneously, effectively providing a much larger field of view. In addition, each camera can be used for microscopic imaging at a different resolution. For example, one camera can perform microscopy at a lower resolution but with a larger field of view to image large objects in the sample, while the other camera performs microscopy at a higher resolution but with a smaller field of view to image small objects. This is useful when the sample to be imaged contains a mixture of small and large objects. It is therefore highly desirable to provide users with a dual-camera-based smartphone imaging system.
Dual camera imaging system
Fig. 19-A is a schematic diagram of a dual camera imaging system. The dual camera imaging system comprises a mobile computing device (e.g., a smartphone) with two built-in camera modules, two external lenses, a QMAX device, and a light source. Each camera module has an internal lens and an image sensor. The QMAX device is located below both camera modules. Each external lens is placed at a suitable height between the QMAX device and its corresponding internal lens, such that the sample in the QMAX device can be clearly focused onto the image sensor. Each external lens is aligned with its corresponding internal lens. The light captured by each image sensor may be light refracted by the sample, light emitted by the sample, and the like. The light source emits light covering visible wavelengths and can illuminate the sample in the QMAX device from the back side or the top side at a normal or oblique angle of incidence.
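As a rough illustration of this "suitable height" condition, the following thin-lens sketch (in Python; the focal length and distances are assumed values, and the single-thin-lens treatment is a simplification rather than a specification taken from this disclosure) shows that when the sample sits approximately at the focal plane of an external lens, the light leaving that lens is nearly collimated, so an internal lens focused at infinity can form a sharp image of the sample on its image sensor.

```python
# Thin-lens sketch of the external/internal lens arrangement.  Placing the
# sample near the focal plane of the external lens makes the light between
# the two lenses nearly collimated, so the internal lens (focused at
# infinity) forms a sharp image on the sensor.  All numbers are assumptions.

def image_distance_mm(focal_length_mm, object_distance_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

f_external = 8.0         # assumed external-lens focal length (mm)
sample_distance = 8.05   # sample placed essentially at the focal plane (mm)

# A very large image distance means the rays leaving the external lens are
# nearly parallel, which is the "suitable height" condition described above.
print(image_distance_mm(f_external, sample_distance))  # ~1288 mm
```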
Dual camera imaging system for large field of view imaging
One embodiment is a dual camera imaging system for large FOV imaging. In this embodiment, the images taken by the two cameras have the same scale, or optical magnification. For this purpose, the focal length fE1 of outer lens 1, the focal length fN1 of inner lens 1, the focal length fE2 of outer lens 2, and the focal length fN2 of inner lens 2 satisfy the following relationship:
fE1/fN1 = fE2/fN2
The distance between the two cameras is chosen such that the FOVs of the two cameras overlap. As shown in fig. 19-B, the letter "a" represents the sample; because the FOVs of the two cameras overlap, a portion of the letter "a" appears in both the FOV of camera 1 and the FOV of camera 2.
An image processing step is then used to merge the two images into one large image by matching the features shared by the two images taken by camera 1 and camera 2, as illustrated in the sketch below.
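A merging step of this kind could, for example, be implemented as in the following sketch, which assumes the OpenCV and NumPy libraries are available; the function name, feature detector, and parameter values are illustrative choices, not details taken from this disclosure. It matches features shared by the overlapping regions of the two images, estimates a homography, and warps one image into the coordinate frame of the other to form a single large-FOV image.

```python
# Minimal sketch of merging two overlapping camera images into one
# large-field-of-view image.  Assumes OpenCV (cv2) and NumPy are installed;
# all names are illustrative, since the patent does not specify an algorithm.
import cv2
import numpy as np

def merge_overlapping_images(img1, img2, max_features=2000):
    """Stitch img2 onto img1 using features shared in the overlap region."""
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography that maps camera-2 coordinates into camera-1 coordinates.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    canvas[:h1, :w1] = img1  # overlay the reference (camera-1) image
    return canvas
```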
Dual camera imaging system for dual resolution imaging
Lens-based imaging systems have an inherent disadvantage: there is a trade-off between the size of the FOV and the resolution. To obtain a large FOV, the resolution of the imaging system must be sacrificed. This problem becomes more serious when a sample contains a mixture of small and large objects of significantly different sizes. To image a sufficient number of large objects, the FOV needs to be large enough, but this sacrifices the resolution needed to capture the details of the small objects. To address this issue, in this embodiment a dual camera imaging system is used to achieve dual-resolution imaging of the same sample, with camera 1 (or 2) used for low resolution, large FOV imaging and camera 2 (or 1) used for high resolution, small FOV imaging.
The resolution of the imaging system depends on the optical magnification, and the optical magnification is equal to the ratio of the focal length of the outer lens to the focal length of the inner lens. For example, in the present embodiment, camera 1 is used for low resolution imaging and camera 2 is used for high resolution imaging, and the focal length fE1 of outer lens 1, the focal length fN1 of inner lens 1, the focal length fE2 of outer lens 2, and the focal length fN2 of inner lens 2 satisfy the following relationship:
fE1/fN1 < fE2/fN2
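A minimal numeric sketch of the two focal-length relationships (equal ratios for same-scale, large-FOV imaging; unequal ratios for dual-resolution imaging) is given below, using the convention stated above that the optical magnification equals the ratio of the outer-lens focal length to the inner-lens focal length. All focal-length values are illustrative assumptions, not parameters of the disclosed device.

```python
# Illustrative check of the focal-length relationships, using the document's
# convention M = f_outer / f_inner.  All focal lengths (mm) are assumed
# values for illustration only.

def magnification(f_outer_mm, f_inner_mm):
    return f_outer_mm / f_inner_mm

# Large-FOV configuration: both channels share the same magnification,
# i.e. fE1 / fN1 == fE2 / fN2, so the two images have the same scale.
f_E1, f_N1 = 8.0, 4.0
f_E2, f_N2 = 6.0, 3.0
assert abs(magnification(f_E1, f_N1) - magnification(f_E2, f_N2)) < 1e-9

# Dual-resolution configuration: camera 1 has the lower magnification
# (larger FOV, lower resolution), i.e. fE1 / fN1 < fE2 / fN2.
f_E1, f_N1 = 4.0, 4.0    # M1 = 1.0 -> low resolution, large FOV
f_E2, f_N2 = 12.0, 4.0   # M2 = 3.0 -> high resolution, small FOV
assert magnification(f_E1, f_N1) < magnification(f_E2, f_N2)
```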
the FOVs of the two cameras may or may not overlap.
As shown in fig. 19-C, the sample image taken by camera 1 covers a larger FOV and contains more objects in a single FOV, but cannot resolve the details of small objects. The image taken by camera 2, in contrast, covers a smaller FOV and contains fewer objects in a single FOV, but its higher resolution can resolve the details of small objects.
Examples of the invention
A1. A dual lens imaging apparatus, comprising:
a first external lens, a second external lens, a housing unit, and a card unit, wherein:
i. a housing unit configured to accommodate the first and second external lenses and the card unit, and to connect the dual-lens imaging device with the mobile device;
ii. the first and second external lenses are configured to align with two internal lenses in the mobile device, respectively; and
iii. the card unit is configured to receive a sample card, the sample card containing a sample,
wherein the card unit is located between the external lenses and the internal lenses;
wherein each external lens is configured to focus light refracted from or emitted by the sample card onto an image sensor in the mobile device, thereby allowing the image sensor to capture an image of the sample.
B1. A dual lens imaging system, comprising:
(a) the dual-lens imaging device of embodiment A1, and
(b) a mobile device containing hardware and software for capturing and processing images of the sample through the dual-lens imaging device.
C1. The device or system of any preceding embodiment, wherein the sample card is a QMAX card.
C2. The device or system of any preceding embodiment, wherein the mobile device is a mobile communication device.
C3. The device or system of any preceding embodiment, wherein the mobile device is a smartphone.
C4. The device or system of any preceding embodiment, wherein the mobile device includes a light source that provides light to the sample card.
C5. The device or system of any preceding embodiment, wherein the two outer lenses are configured to capture images that at least partially overlap.
C6. The device or system of embodiment C5, wherein the overlapping images have the same resolution.
C7. The device or system of embodiment C6, wherein the software is configured to process the overlapping images to generate a combined image of the sample.
C8. The device or system of embodiment C5, wherein the overlapping images have different resolutions.
C9. The device or system of embodiment C8, wherein the software is configured to process the overlapping images to display a particular portion of the lower-resolution image at the higher resolution.
C10. The device or system of any preceding embodiment, wherein the two outer lenses are configured to image two different locations of the sample area of the Q card.
C11. The device or system of any preceding embodiment, wherein the two outer lenses are configured to have different sized fields of view.
C12. The device or system of any preceding embodiment, wherein the two outer lenses are configured to have fields of view (FoV) of different sizes, and wherein the ratio of the two different FoVs is 1.1, 1.2, 1.5, 2, 5, 10, 15, 20, 30, 50, 100, 200, 1000, or within a range between any two of these values. Preferred ratios are 1.2, 1.5, 2, 5, 10, 20, or within a range between any two of these values.
C13. The device or system of any preceding embodiment, wherein the overlap of the FoVs of the two outer lenses is configured to be about 1%, 5%, 10%, 20%, 50%, 60%, 70%, 80%, 90%, or within a range between any two of these values.
C14. The device or system of any preceding embodiment, wherein the two outer lenses are optically coupled to different filters and/or polarizers.
Other embodiments
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing; and
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode.
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a lens arranged to provide a field of view to the camera;
a cavity within the housing for receiving a sample and positioning the sample within a field of view of the camera, wherein the lens is positioned to receive light refracted by or emitted by the sample when within the field of view of the camera; and
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode.
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing for receiving and positioning the sample within a field of view of the camera; and
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode, and
wherein the control rod comprises a first planar area extending along a first plane and a second planar area laterally displaced from the first planar area along a first direction and extending along a second plane, the first plane being arranged at a different height from the second plane along a second direction, the second direction being perpendicular to the first direction.
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing for receiving and positioning the sample within a field of view of the camera; and
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode, and
wherein the control rod comprises a first planar area extending along a first plane and a second planar area laterally displaced from the first planar area along a first direction and extending along a second plane, the first plane being arranged at a different height from the second plane along a second direction, the second direction being perpendicular to the first direction, and
wherein the first planar region contains at least one optical element and the second planar region contains at least one optical element.
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing; and
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between at least three different positions, wherein (i) in a first position the imaging device is capable of imaging the sample in bright field mode, (ii) in a second position the imaging device is capable of imaging the sample in fluorescence excitation mode, and (iii) in a third position the imaging device is capable of measuring the light absorption of the sample.
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a lens configured to provide a field of view of the camera;
a cavity within the housing for receiving and positioning the sample within a field of view of the camera;
an aperture within the housing, wherein the aperture is arranged to receive source light from the light source for illuminating the sample; and
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode, wherein in the fluorescence excitation mode the lens is arranged to receive light emitted by the sample when the sample is illuminated by the light source.
An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a lens configured to provide a field of view of the camera;
a cavity within the housing for receiving and positioning the sample within a field of view of the camera;
a control rod within the cavity,
wherein the control rod comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode.
An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a housing;
a cavity within the housing;
a lens configured to provide a microscopic field of view to the camera; and
a movable arm within the cavity, wherein the movable arm is configured to be switchable between a first position and a second position, wherein when the movable arm is in the first position, the optical assembly is in a bright field mode, and when the movable arm is in the second position, the optical assembly is in a fluorescence excitation mode.
The optical assembly of any embodiment, wherein the housing comprises:
a sample receiving area within the cavity; and
a slot on one side of the housing, wherein the slot is arranged for receiving a sample substrate within the sample receiving area and positioning the sample within a field of view of the camera.
The optical assembly of any preceding embodiment, further comprising a first set of one or more optical elements arranged to receive light entering from a first aperture in the housing corresponding to the light source and redirect the light entering from the first aperture along a first path to a second aperture in the housing corresponding to the camera, to provide bright field illumination of the sample when the movable arm is in the first position.
The optical assembly of any preceding embodiment, wherein the first set of one or more optical elements comprises a first right angle mirror and a second right angle mirror, wherein the first right angle mirror and the second right angle mirror are in the first path and are arranged to reflect light from the light source for normal incidence into the camera.
The optical assembly of any preceding embodiment, wherein the light source is a point source, enabling interferometric imaging of a transparent sample by illuminating the sample with the same wavefront.
The optical assembly of any preceding embodiment, further comprising a second set of one or more optical elements mechanically coupled to the movable arm and arranged to receive light entering from the first aperture and redirect the light entering from the first aperture along a second path to obliquely illuminate the sample, to provide fluorescence illumination of the sample when the movable arm is in the second position.
The optical assembly of any preceding embodiment, wherein the oblique angle is greater than the collection angle of the lens configured to provide the field of view of the camera.
The optical assembly of any preceding embodiment, wherein the second set of one or more optical elements comprises a mirror and an optical absorber, wherein the mirror reflects light to obliquely illuminate the sample and the optical absorber absorbs stray light from the first aperture that would otherwise pass through the second aperture of the housing and reach the camera in fluorescence excitation mode.
The optical assembly of any preceding embodiment, wherein the optical absorber absorbs light that, after passing through the first aperture, is not incident on the mirror, and wherein the optical absorber is a thin-film light absorber.
The optical assembly of any preceding embodiment, further comprising a third set of one or more optical elements arranged to receive light entering from the first aperture and redirect it through the aperture in the movable arm and along the first path toward the light diffuser on the movable arm, so that the sample is illuminated in the normal direction and the light absorption of the sample can be measured.
The optical assembly of embodiments, wherein the third set of one or more optical elements comprises a light diffuser, a first angled mirror, and a second angled mirror, wherein the first angled mirror and the second angled mirror are located in the first path and are arranged to reflect light from the light source to the light diffuser before being incident perpendicularly into the camera;
the optical assembly of embodiments, wherein the light diffuser is a translucent diffuser having an opacity in the range of 10% to 90%.
The optical assembly of an embodiment, further comprising a rubber door to cover the sample receiver to prevent ambient light from entering the cavity.
The optical assembly of any preceding embodiment, wherein the light source and the camera are positioned at a fixed distance from each other on the same side of the handheld electronic device.
A system, comprising: the optical assembly of any preceding embodiment, and a mobile phone accessory comprising a first side configured to be coupled to the optical assembly and a second, opposite side configured to be coupled to a handheld electronic device, wherein the handheld electronic device is a mobile phone.
The system of any embodiment, wherein the mobile phone accessory is replaceable to provide accessories for different sized mobile phones.
The system of any embodiment, wherein the size of the mobile phone accessory is adjustable.
An optical assembly for a handheld mobile electronic device, the optical assembly comprising:
a housing;
a cavity within the housing;
a plurality of optical elements within the cavity, wherein the plurality of optical elements are arranged to receive light entering from a first aperture in the housing and redirect the light entering from the first aperture along a first path toward a second aperture in the housing;
a movable arm configurable in at least three different positions within the housing,
wherein the movable arm comprises a light reflector portion for reflecting light,
wherein the movable arm contains a light diffuser to homogenize the light and disrupt the coherence of the light,
wherein the moveable arm includes an aperture aligned with the inlet aperture in the housing,
wherein when the movable arm is in a first position within the housing, the light reflector portion is positioned between the inlet aperture and the plurality of optical elements in the housing such that the light reflector portion blocks light entering from the first opening from being incident on the plurality of optical elements, and
wherein when the movable arm is in the second position within the housing, light entering from the first opening is incident on the plurality of optical elements, and wherein when the movable arm is in the third position within the housing, light entering from the first opening passes through an aperture on the movable arm and is then incident on the light diffuser;
the optical assembly of any embodiment, comprising a slot on one side of the housing, wherein the slot is arranged to receive a sample substrate such that:
the first path intersects the sample substrate when the sample substrate is fully inserted into the slot and the moveable arm is in the second position within the housing; and
when the sample substrate is fully inserted into the slot and the movable arm is in the first position within the housing, light reflected by the light reflector portion is redirected to the sample substrate; and
when the sample substrate is fully inserted into the slot and the moveable arm is in a third position within the housing, the light travels along a first path toward the light diffuser and then impinges on the sample substrate.
The optical assembly of any implementation, wherein the movable arm includes a light absorbing portion to absorb light that is not incident on the mirror after passing through the first aperture.
The optical assembly of any embodiment, wherein the movable arm comprises:
a first receptacle positioned above the light reflector portion;
a first optical filter disposed in the first receptacle; a second receptacle positioned above the aperture portion; and a second optical filter disposed in the second receptacle.
The optical assembly of any embodiment, wherein when the movable arm is in the first position, the first optical filter is positioned to receive light entering from the first aperture in the housing; and when the movable arm is in the third position, the second optical filter is positioned to receive light entering from the first aperture in the housing.
The optical assembly of any embodiment, wherein when the movable arm is in the first position, the first optical filter overlaps an area where a portion of the sample substrate is located when the sample substrate is fully inserted into the slot.
A system, comprising:
the optical assembly of any embodiment; and
a mobile phone accessory including a first side configured to couple to the optical assembly and including a second opposite side configured to couple to a mobile phone, wherein a size of the mobile phone accessory is adjustable.
An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a lens configured to provide a microscopic field of view to the camera;
a receiver for receiving and positioning the sample within the microscopic field of view;
an optical fiber configured to receive light from the light source and illuminate the receiver.
The optical component of any implementation, wherein the lens and camera define an optical axis when the optical component is attached to a handheld electronic device, and wherein the optical fiber surrounds the optical axis.
The optical assembly of any embodiment, wherein the optical fiber is ring-shaped.
The optical assembly of any embodiment, wherein the optical fiber is a side-emitting optical fiber.
The optical assembly of any embodiment, wherein the optical assembly comprises a housing defining the receiver, wherein the looped optical fiber is positioned in a groove of the housing, wherein the housing comprises an aperture configured to align with the light source, and both end faces of the looped optical fiber are positioned at the aperture to receive light from the light source.
The optical assembly of any embodiment, wherein light is emitted from a side of the looped optical fiber to illuminate a sample area on the optical axis directly below the camera.
The optical assembly of any embodiment, wherein the optical assembly comprises a housing defining a receiver, wherein the housing comprises a first aperture configured to be aligned with the light source, and the first end face of the optical fiber is positioned in the first aperture to receive light from the light source.
The optical assembly of any embodiment, wherein the housing comprises a second aperture configured to be aligned with the camera, and wherein the optical fiber comprises a first end positioned in the first aperture and comprises a second end positioned in the second aperture.
The optical assembly of any embodiment, wherein at least one of the first end face of the optical fiber and the second end face of the optical fiber is angled.
The optical assembly of any embodiment, wherein the optical fiber is tilted with respect to the light source when the optical assembly is attached to the handheld electronic device, and
wherein the second end face of the optical fiber is arranged for illuminating a region of the sample directly below the lens.
The optical assembly of any embodiment, wherein the optical assembly comprises a housing defining a receiver, the housing comprising a groove having the optical fiber disposed therein.
An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a lens configured to provide a microscopic field of view to the camera;
a receiver for receiving and positioning the sample within the microscopic field of view;
a mirror offset from the optical axis of the lens and positioned to reflect light from the light source and illuminate the sample within a range of tilt angles relative to the optical axis; and
a wavelength filter positioned between the sample and the camera to pass fluorescence emitted by the sample in response to the oblique illumination.
The optical assembly of any embodiment, wherein the lens is positioned on a front side of the sample and the mirror is positioned to illuminate the sample obliquely from a back side of the sample, wherein the oblique angle is greater than a collection angle of the lens.
The optical assembly of any embodiment, further comprising an optical absorber positioned on the optical axis adjacent to the mirror to absorb light from the light source that is not reflected by the mirror.
The optical assembly of any embodiment, wherein the mirror and the optical absorber are mounted on a common structure and tilted with respect to each other.
The optical assembly of any embodiment, further comprising a second wavelength filter positioned in the path of the illumination light between the light source and the mirror to select certain wavelengths for illuminating the sample.
The optical assembly of any preceding embodiment, wherein the sample is supported by a sample holder comprising a planar structure, and wherein the receiver is configured to position the planar structure to extend partially into the path of the illumination light from the light source to couple the illumination light into the planar structure.
The optical assembly of embodiment 6, wherein the receiver is configured to position the planar structure such that the illumination light path is incident on an edge of the planar structure, wherein the edge extends along a plane perpendicular to a plane containing the field of view.
The optical assembly of any embodiment, wherein the mirror is arranged to reflect light to partially illuminate the sample obliquely from the back side of the planar structure and to partially illuminate the edge of the planar structure to couple the illumination light into the planar structure.
The optical assembly of any embodiment, further comprising a rubber door to cover the sample receiver to prevent ambient light from entering the optical assembly and entering the camera.
The optical assembly of any embodiment, wherein the planar structure is configured to guide the coupled illumination light to the sample to illuminate the sample and cause the sample to emit fluorescent light.
The optical assembly of any embodiment, further comprising a sample holder.
the optical assembly of any embodiment, wherein the sample is a liquid sample and the sample holder comprises a first plate and a second plate that hold the liquid sample.
The optical assembly of any preceding implementation, wherein the lens, receiver, mirror, and wavelength filter are supported in a common optical box, and the optical assembly further comprises a replaceable holder frame for attaching the optical box to the handheld electronic device.
The optical assembly of any embodiment, wherein the light source and the camera are positioned on the same side of the handheld electronic device and at a fixed distance from each other.
The optical assembly of any embodiment, wherein the handheld electronic device is a cell phone.
A device comprising an optical assembly as in any preceding embodiment and a handheld electronic device.
An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a lens configured to provide a microscopic field of view to the camera;
a receiver for receiving and positioning a sample within a microscope field of view,
wherein the sample is supported by a sample holder comprising a planar structure, and wherein the receiver is configured to position the planar structure to extend partially into a path of illumination light from the light source to couple the illumination light into the planar structure and cause the sample to emit fluorescence; and
a wavelength filter positioned between the sample and the camera to pass fluorescence emitted by the sample in response to the illumination.
The optical assembly of any embodiment further comprises a rubber door covering the sample receiver to prevent ambient light from entering the optical assembly through the receiver.
The optical assembly of any embodiment, wherein the planar structure is configured to guide a coupled illumination light to the sample to illuminate the sample and cause the sample to emit fluorescent light.
The optical assembly of any embodiment, further comprising a sample holder,
the optical assembly of any embodiment, wherein the sample is a liquid sample and the sample holder comprises a first plate and a second plate that hold the liquid sample.
The optical assembly of any embodiment, further comprising a second wavelength filter positioned in the path of the illumination light between the light source and the portion of the sample holder that extends partially into the light path.
The optical assembly of any preceding implementation, wherein the lens, receiver, and wavelength filter are supported in a common optical box, and the optical assembly further comprises a replaceable holder frame for attaching the optical box to the handheld electronic device.
The optical assembly of any embodiment, wherein the light source and the camera are positioned at a fixed distance from each other on the same side of the handheld electronic device.
The optical assembly of any embodiment, wherein the handheld electronic device is a cell phone.
A device comprising an optical assembly as in any preceding embodiment and a handheld electronic device.
An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a first assembly lens configured to provide a first microscopic field of view to a first camera module;
a second assembly lens configured to provide a second microscopic field of view to a second camera module; and
a receiver for receiving and positioning the sample within the first microscopic field of view and within the second microscopic field of view.
The optical assembly of any embodiment, wherein the first camera module comprises a first inner lens and the second camera module comprises a second inner lens, wherein a first optical power provided by the first assembly lens and the first inner lens is the same as a second optical power provided by the second assembly lens and the second inner lens.
The optical assembly of any embodiment, wherein a first ratio of a focal length of the first assembled lens to a focal length of the first inner lens is equal to a second ratio of a focal length of the second assembled lens to a focal length of the second inner lens.
The optical assembly of any embodiment, wherein a first image resolution provided by the first camera module and the first assembled lens is the same as a second image resolution provided by the second camera module and the second assembled lens.
The optical assembly of any embodiment, wherein the first camera module comprises a first inner lens and the second camera module comprises a second inner lens, wherein a first optical power provided by the first assembly lens and the first inner lens is different from a second optical power provided by the second assembly lens and the second inner lens.
The optical assembly of any embodiment, wherein a first ratio of the focal length of the first assembled lens to the focal length of the first inner lens is less than a second ratio of the focal length of the second assembled lens to the focal length of the second inner lens.
The optical assembly of any embodiment, wherein a first image resolution provided by the first camera module and the first assembled lens is less than a second image resolution provided by the second camera module and the second assembled lens.
The optical assembly of any preceding embodiment, wherein the first microscopic field of view overlaps the second microscopic field of view.
The optical assembly of any embodiment, wherein the first microscopic field of view overlaps the second microscopic field of view by an amount between 1% and 90%.
The optical assembly of any embodiment, wherein the first microscopic field of view does not overlap the second microscopic field of view.
The optical assembly of any preceding embodiment, wherein each of the first and second assembly lenses is arranged to receive light scattered by or emitted by the sample.
The optical assembly of any preceding embodiment, wherein the first microscopic field of view is smaller than the second microscopic field of view.
The optical assembly of any preceding embodiment, wherein the angular field of view of the first assembly lens is less than the angular field of view of the second assembly lens.
The optical assembly of any embodiment, wherein a ratio of the angular field of view of the first assembled lens to the angular field of view of the second assembled lens is between 1.1 and 1000.
The optical assembly of any embodiment, comprising:
a first filter disposed in a first illumination path to or from the first assembly lens; and
a second filter disposed in a second illumination path to or from the second assembly lens.
The optical assembly of any embodiment, wherein the first filter is configured to filter a first wavelength range, the second filter is configured to filter a second wavelength range, and the first wavelength range is different from the second wavelength range.
The optical assembly of any embodiment, comprising:
a first polarizer disposed in a first illumination path to or from the first assembly lens; and
a second polarizer disposed in a second illumination path to or from the second assembly lens.
The optical assembly of any embodiment, wherein the first polarizer and the second polarizer have different polarization-dependent light transmission and blocking characteristics.
A device comprising an optical assembly as in any preceding embodiment and a handheld electronic device.
The optical assembly of any embodiment, wherein the handheld electronic device is a cell phone.
The device of any embodiment, wherein the handheld electronic device is configured to computationally merge a first image obtained from the first camera module with a second image obtained from the second camera module.
An imaging method, comprising:
compressing the sample between two plates, wherein the two plates are separated from each other by an array of spacers, at least one having a reference mark;
acquiring a plurality of images of the sample using an imaging system comprising a camera and at least one lens, wherein each image corresponds to a different object plane within the thickness of the sample;
computationally analyzing each image to determine information about the respective object plane based on one or more reference markers; and
computationally constructing a three-dimensional image of the sample based on the plurality of images and the information about the respective object planes.
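One way the per-plane analysis of such a method could be sketched is shown below (Python, assuming OpenCV and NumPy are available). The sharpness metric, the calibration curve, and all names are illustrative assumptions rather than the algorithm prescribed by this disclosure: the defocus of each reference mark is scored with a variance-of-Laplacian measure, and a pre-measured blur-versus-depth calibration converts the score into an estimated depth.

```python
# Sketch: estimate the depth of each acquired object plane from the defocus
# of reference marks on the spacers.  Assumes OpenCV and NumPy; the metric,
# the calibration data, and all names are illustrative assumptions.
import cv2
import numpy as np

def defocus_score(image, mark_xy, half_window=15):
    """Lower variance of the Laplacian = more defocused reference mark."""
    x, y = mark_xy
    patch = image[y - half_window:y + half_window, x - half_window:x + half_window]
    return cv2.Laplacian(patch, cv2.CV_64F).var()

def depth_from_score(score, calib_scores, calib_depths_um):
    """Interpolate a pre-measured blur-vs-depth calibration curve."""
    order = np.argsort(calib_scores)
    return np.interp(score, np.asarray(calib_scores)[order],
                     np.asarray(calib_depths_um)[order])

def marker_depths_per_image(image_stack, mark_positions, calib_scores, calib_depths_um):
    """For each image in the stack, estimate the depth of every reference mark."""
    return [
        [depth_from_score(defocus_score(img, xy), calib_scores, calib_depths_um)
         for xy in mark_positions]
        for img in image_stack
    ]
```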
The imaging method of any preceding embodiment, wherein the determined information about the respective object plane comprises a depth of the object plane relative to the imaging system.
The imaging method of any preceding embodiment, wherein at least some of the spacers each have a reference mark.
The imaging method of any preceding embodiment, wherein the determined information about the respective object plane comprises a depth and an orientation of the object plane relative to the imaging system.
The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises determining a degree of defocus of one or more reference marks.
The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises: determining a depth of each reference mark of the plurality of reference marks based on the defocus degree of each reference mark; and determining a depth and orientation of the respective object plane relative to the imaging system based on the determined depth of the reference marker.
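Continuing the illustrative sketch above, once a depth has been estimated for each reference mark, the depth and orientation of the corresponding object plane could be obtained by a least-squares plane fit, for example as follows (NumPy assumed; names and units are illustrative assumptions).

```python
# Sketch: fit a plane z = a*x + b*y + c to the estimated per-marker depths,
# giving the depth (c) and tilt (a, b) of the object plane relative to the
# imaging system.  For physical tilt angles, the pixel coordinates would
# first be converted to the same physical units as the depths.
import numpy as np

def fit_object_plane(marker_xy, marker_depths_um):
    xy = np.asarray(marker_xy, dtype=float)        # shape (N, 2), marker positions
    z = np.asarray(marker_depths_um, dtype=float)  # shape (N,), estimated depths
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(z))])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return c, a, b  # plane offset and the two slope components
```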
The imaging method of any preceding embodiment, wherein the reference mark is rotationally asymmetric with respect to an axis perpendicular to at least one of the plates.
The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises determining a rotational orientation of one or more reference markers relative to the axis of the imaging system.
The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises comparing image information about a reference marker with a priori knowledge about the reference marker.
The imaging method of any preceding embodiment, wherein the a priori knowledge about the reference markers is based on one or more of a shape of each reference marker and a position of each reference marker relative to the plate.
The method of imaging as in any preceding embodiment, wherein the spacer is a cylinder.
The imaging method of any preceding embodiment, wherein acquiring a plurality of images comprises moving one or more components of an imaging system relative to a plate holding a sample.
The imaging method of any preceding embodiment, wherein the computational construction of the three-dimensional image comprises processing each acquired image to remove defocused features.
The imaging method of any preceding embodiment, wherein processing each acquired image to remove defocus features comprises using a band pass filter.
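The band-pass filtering mentioned in the preceding embodiments could, for example, be implemented as a difference of Gaussians, as in the sketch below (OpenCV and NumPy assumed; the sigma values are illustrative assumptions). Low spatial frequencies contributed by strongly defocused features are suppressed while in-focus detail is retained.

```python
# Sketch: suppress defocused (low-spatial-frequency) content in each
# acquired image with a difference-of-Gaussians band-pass filter.
import cv2
import numpy as np

def bandpass_difference_of_gaussians(image, sigma_low=1.0, sigma_high=8.0):
    img = image.astype(np.float32)
    fine = cv2.GaussianBlur(img, (0, 0), sigma_low)     # keeps in-focus detail
    coarse = cv2.GaussianBlur(img, (0, 0), sigma_high)  # mostly defocused blur
    return fine - coarse
```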
The imaging method of any preceding embodiment, wherein the acquired images correspond to interference images formed by combining light from the sample with reference light that is not directed onto the sample.
An imaging device, comprising:
an imaging system comprising a camera and at least one lens;
a sample holder for supporting a sample cartridge relative to an imaging system, the sample cartridge comprising two plates separated from each other by an array of spacers, at least one having a reference mark, wherein a sample to be imaged is configured to be compressed between the two plates; and
a processing and control system coupled to the sample holder and the camera and configured to acquire a plurality of images of the sample using the imaging system, wherein each image corresponds to a different object plane within the thickness of the sample, and
wherein the processing and control system is further configured to:
computationally analyzing each image based on one or more reference markers to determine information about a corresponding object plane; and
a three-dimensional image of the sample is computationally constructed based on the plurality of images and information about the respective object planes.
The imaging device of any preceding embodiment, wherein the determined information about the respective object plane comprises a depth of the object plane relative to the imaging system.
The imaging device of any preceding embodiment, wherein at least some of the spacers each have a reference mark.
The imaging device of any preceding embodiment, wherein the determined information about the respective object plane comprises a depth and an orientation of the object plane relative to the imaging system.
The device of any preceding implementation, wherein computational analysis of each image comprises determining a degree of defocus for one or more of the reference markers.
The apparatus of embodiment 20, wherein the computational analysis of each image comprises: determining a depth of each reference mark of the plurality of reference marks based on the defocus degree of each reference mark; and determining a depth and orientation of the respective object plane relative to the imaging system based on the determined depth of the reference marker.
The device of any preceding embodiment, wherein the reference mark is rotationally asymmetric with respect to an axis perpendicular to at least one of the plates.
The device of any implementation, wherein computational analysis of each image includes determining a rotational orientation of one or more of the reference markers relative to the axis of the imaging system.
The device of any preceding embodiment, wherein the computational analysis of each image comprises comparing image information about a reference marker with a priori knowledge about the reference marker.
The device of any of the preceding embodiments, wherein the a priori knowledge about the reference markers is based on one or more of a shape of each reference marker and a position of each reference marker relative to the plate.
The device of any preceding embodiment, wherein the spacer is a cylinder.
The device of any preceding embodiment, wherein the control system is configured to move one or more components of the imaging system relative to the plate holding the sample to acquire the plurality of images.
The device of any preceding implementation, wherein computational construction of the three-dimensional image comprises processing each acquired image to remove defocused features.
The device of any implementation, wherein processing each acquired image to remove defocus features comprises using a band-pass filter.
The device of any preceding embodiment, wherein the acquired images correspond to interference images formed by combining light from the sample with reference light that is not directed onto the sample.
Still other embodiments
The present invention includes a variety of embodiments that can be combined in multiple ways as long as the various components are not mutually contradictory. The embodiments should be considered as a single invention document: each application has the other applications as references and each is also incorporated by reference in its entirety for all purposes, rather than being treated as a discrete, stand-alone document. These embodiments include not only the disclosure in the current document, but also the documents that are referenced or incorporated herein or to which priority is claimed herein.
(1) Definitions
The terms used in describing the devices, systems, and methods disclosed herein are defined in the present application or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
The terms "CROF card (or card)", "COF card", "QMAX card", "Q card", "CROF device", "COF device", "QMAX device", "CROF plate", "COF plate", and "QMAX plate" are interchangeable, except that in some embodiments the COF card does not include spacers; these terms refer to a device that comprises a first plate and a second plate that are movable relative to each other into different configurations (including an open configuration and a closed configuration), and that comprises spacers (except in some embodiments of the COF) that regulate the spacing between the plates. The term "X-plate" refers to the one of the two plates in a CROF card on which the spacers are fixed. Further descriptions of the COF card, CROF card, and X-plate are given in US Provisional Application No. 62/456065, filed on February 7, 2017, which is incorporated herein in its entirety for all purposes.
(2) Q-card, spacers and uniform sample thickness
The devices, systems, and methods disclosed herein may include or use Q-cards, spacers, and uniform-sample-thickness embodiments for sample detection, analysis, and quantification. In some embodiments, the Q-card comprises spacers that help make at least a portion of the sample into a layer of highly uniform thickness. The structure, materials, functions, variations, and dimensions of the spacers, as well as the uniformity of the spacers and the sample layer, are listed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
(3) Hinges, notches, recesses and sliders
The devices, systems, and methods disclosed herein may include or use a Q-card for sample detection, analysis, and quantification. In some embodiments, the Q-card comprises hinges, notches, recesses, and sliders that facilitate the handling of the Q-card and the measurement of the sample. The structure, materials, functions, variations, and dimensions of the hinges, notches, recesses, and sliders are listed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
(4) Q-card, slider and smartphone detection system
The devices, systems, and methods disclosed herein may include or use a Q-card for sample detection, analysis, and quantification. In some embodiments, the Q-card is used with a slider that allows the card to be read by a smartphone detection system. The structure, materials, functions, variations, dimensions, and connections of the Q-card, the slider, and the smartphone detection system are listed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
In some embodiments of QMAX, the sample contacting region of one or both plates comprises a compressed open flow monitoring surface structure (MSS) configured to monitor how much flow has occurred after the compressed open flow (COF). For example, in some embodiments, the MSS comprises a shallow square array that causes friction against components in the sample (e.g., blood cells in blood). By examining the distribution of certain components of the sample, information can be obtained about the flow of the sample and its components during the COF.
The depth of the MSS may be 1/1000, 1/100, 1/10, 1/5, or 1/2 of the spacer height, or within a range between any two of these values, and the MSS may be in the shape of a protrusion or a hole.
(5) Detection methods
The devices, systems, and methods disclosed herein may include or be used with various types of detection methods. The detection methods are disclosed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
(6) Labels
The devices, systems, and methods disclosed herein may employ various types of labels for analyte detection. The labels are listed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
(7) Analytes
The devices, systems, and methods disclosed herein can be used to manipulate and detect various types of analytes, including biomarkers. The analytes are listed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
(8) Applications (fields and samples)
The devices, systems, and methods disclosed herein may be used in a variety of applications (fields and samples). The applications are disclosed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
(9) Cloud
The devices, systems, and methods disclosed herein may employ cloud technology for data transmission, storage, and/or analysis. The relevant cloud technologies are listed, described, and summarized herein or in PCT Application Nos. PCT/US2016/045437 and PCT/US2016/051775 (designating the US), both filed in 2016, US Provisional Application No. 62/456065 filed on February 7, 2017, US Provisional Application No. 62/426065 filed on February 8, 2017, and US Provisional Application No. 62/456504 filed on February 8, 2017, all of which are incorporated herein by reference in their entirety for all purposes.
Other notes
Other embodiments of the inventive subject matter according to the present disclosure are described in the paragraphs listed below.
It must be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise, e.g., when the word "single" is used. For example, reference to "an analyte" includes a single analyte and a plurality of analytes, reference to "a capture agent" includes a single capture agent and a plurality of capture agents, reference to "a detection agent" includes a single detection agent and a plurality of detection agents, and reference to "a reagent" includes a single reagent and a plurality of reagents.
As used herein, the terms "adapted" and "constructed" mean that an element, component, or other subject matter is designed and/or intended to perform a given function. Thus, the use of the terms "adapted" and "constructed" should not be construed to mean that a given element, component, or other subject matter is simply "capable" of performing a given function. Similarly, subject matter recited as being configured to perform a particular function may additionally or alternatively be described as being operable to perform that function.
As used herein, the use of the phrases "for example," as an example, "and/or simply the terms" example "and" exemplary "when referring to one or more components, features, details, structures, embodiments, and/or methods in accordance with the present disclosure is intended to convey that the described components, features, details, structures, embodiments, and/or methods are illustrative, non-exclusive examples of components, features, details, structures, embodiments, and/or methods in accordance with the present disclosure. Thus, the described components, features, details, structures, embodiments, and/or methods are not intended to be limiting, required, or exclusive/exhaustive; and other components, features, details, structures, embodiments, and/or methods, including structurally and/or functionally similar and/or equivalent components, features, details, structures, embodiments, and/or methods, are also within the scope of the present disclosure.
As used herein, the phrases "at least one" and "one or more" in reference to a list of more than one entity refer to any one or more of the entities in the list of entities and are not limited to at least one of each and every entity specifically listed in the list of entities. For example, "at least one of A and B" (or, equivalently, "at least one of A or B", or, equivalently, "at least one of A and/or B") can refer to A alone, B alone, or the combination of A and B.
As used herein, the term "and/or" disposed between a first entity and a second entity refers to one of (1) the first entity, (2) the second entity, and (3) the first entity and the second entity. The use of "and/or" listed plural entities should be construed in the same way, i.e., "one or more" of the entities so combined. In addition to the entities specifically identified by the "and/or" clause, other entities, whether related or unrelated to those specifically identified, may optionally be present.
When numerical ranges are mentioned herein, the invention includes embodiments in which both endpoints are included, embodiments in which both endpoints are excluded, and embodiments in which one endpoint is included and the other endpoint is excluded. Both endpoints should be assumed to be included unless otherwise stated, or unless the contrary is indicated by the context and the understanding of one of ordinary skill in the art.
In the event that any patent, patent application, or other reference is incorporated by reference herein and (1) defines a term in a manner inconsistent with either the unincorporated portion of the present disclosure or another incorporated reference, and/or (2) is otherwise inconsistent with the unincorporated portion of the present disclosure or another incorporated reference, the unincorporated portion of the present disclosure shall control, and the term or the incorporated disclosure shall control only with respect to the reference in which the term was originally defined or the incorporated disclosure originally appeared.
Other embodiments
It is to be understood that while the invention has been described in conjunction with the detailed description thereof, the foregoing description is intended to illustrate and not limit the scope of the invention, which is defined by the scope of the appended claims.

Claims (119)

1. An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, comprising:
a housing;
a cavity within the housing; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode.
2. An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a lens arranged to provide a field of view to the camera;
a cavity within the housing for receiving the sample and positioning the sample within a field of view of the camera, wherein the lens is positioned to receive light refracted by or emitted by the sample when within the field of view of the camera; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode.
3. An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing for receiving and positioning a sample within a field of view of a camera; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode, and
wherein the lever includes a first planar region extending along a first plane and a second planar region laterally displaced from the first planar region along a first direction and extending along a second plane, the first plane being disposed at a different height from the second plane along a second direction, the second direction being perpendicular to the first direction.
4. An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing for receiving and positioning a sample within a field of view of a camera; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode,
wherein the lever comprises a first planar area extending along a first plane and a second planar area laterally displaced from the first planar area along a first direction and extending along a second plane, the first plane being arranged at a different height from the second plane along a second direction, the second direction being perpendicular to the first direction, and
wherein the first planar region contains at least one optical element and the second planar region contains at least one optical element.
5. An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a cavity within the housing; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between at least three different positions, wherein (i) in a first position the imaging device is capable of imaging the sample in bright field mode, (ii) in a second position the imaging device is capable of imaging the sample in fluorescence excitation mode, and (iii) in a third position the imaging device is capable of measuring the light absorption of the sample.
6. An optical adapter for imaging a sample using a handheld imaging device having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a lens configured to provide a field of view of the camera;
a cavity within the housing for receiving and positioning a sample within a field of view of the camera;
an aperture within the housing, wherein the aperture is arranged to receive source light from the light source for illuminating the sample; and
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the imaging device is capable of imaging the sample in bright field mode and (ii) in the second position the imaging device is capable of imaging the sample in fluorescence excitation mode, wherein in the fluorescence excitation mode the lens is arranged to receive light emitted by the sample when illuminated by the light source.
7. An optical adapter for imaging a sample using a smartphone having a light source, a single camera, and a computer processor, the optical adapter comprising:
a housing;
a lens configured to provide a field of view of the camera;
a cavity within the housing for receiving and positioning the sample within a field of view of the camera;
a lever within the cavity,
wherein the lever comprises at least one optical element and is configured to be movable between a first position and a second position, wherein (i) in the first position the smartphone is capable of imaging the sample in bright field mode and (ii) in the second position the smartphone is capable of imaging the sample in fluorescence excitation mode.
8. An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera with light from the light source illuminating the sample, the optical assembly comprising:
a housing;
a cavity within the housing;
a lens configured to provide a microscopic field of view to the camera; and
a movable arm within the cavity, wherein the movable arm is configured to be switchable between a first position and a second position, wherein when the movable arm is in the first position, the optical assembly is in a bright field mode, and when the movable arm is in the second position, the optical assembly is in a fluorescence excitation mode.
9. The optical assembly of any implementation, wherein the housing includes:
a sample receiving area within the cavity; and
a slot on a side of the housing, wherein the slot is arranged for receiving a sample substrate within the sample receiving area and positioning the sample within a field of view of the camera.
10. The optical assembly of an embodiment, further comprising a first set of one or more optical elements arranged to receive light entering from a first aperture in the housing corresponding to the light source and redirect light entering from the first aperture along a first path to a second aperture in the housing corresponding to a camera to provide bright field illumination of the sample when the movable arm is in a first position.
11. The optical assembly of an embodiment, wherein the first set of one or more optical elements comprises a first right angle mirror and a second right angle mirror, wherein the first right angle mirror and the second right angle mirror are in the first path and are arranged to reflect light from the light source so that it is perpendicularly incident into the camera.
12. The optical assembly of an embodiment, wherein the light source is a point source to enable interferometric imaging of a transparent sample by illuminating the sample with the same wavefront.
13. The optical assembly of an embodiment, further comprising a second set of one or more optical elements mechanically coupled to the movable arm and arranged to receive light entering from the first aperture and redirect light entering from the first aperture along a second path to obliquely illuminate the sample so as to provide fluorescent illumination of the sample when the movable arm is in the second position.
14. The optical assembly of an embodiment, wherein the angle of tilt is greater than a collection angle of the lens configured to provide a field of view of the camera.
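As a geometric aside on embodiment 14 (a sketch under the usual thin-lens convention; the numerical aperture NA and refractive index n below are not recited in the claims): a lens of numerical aperture NA working in a medium of refractive index n collects light over a half-angle

    \theta_{collect} = \arcsin\left(\frac{NA}{n}\right),

so choosing the oblique illumination angle larger than \theta_{collect} keeps un-scattered excitation light outside the lens aperture, and essentially only fluorescence emitted by the sample (plus residual light passed by the filters) reaches the camera.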
15. The optical assembly of an embodiment, wherein the second set of one or more optical elements comprises a mirror and an optical absorber, wherein the mirror reflects light to obliquely illuminate the sample and the optical absorber absorbs extraneous light from the first aperture that would otherwise pass through the second aperture of the housing and overwhelm the camera in the fluorescence excitation mode.
16. The optical assembly of any embodiment, wherein the optical absorber absorbs light that is not incident on the mirror after passing through the first aperture, wherein the optical absorber is a thin-film light absorber.
17. The optical assembly of any embodiment, further comprising a third set of one or more optical elements arranged to receive light entering from the first aperture and redirect it through a second aperture in the movable arm and along the first path toward a light diffuser on the movable arm, so as to illuminate the sample in a normal direction for measuring light absorption of the sample.
18. The optical assembly of any embodiment, wherein the third set of one or more optical elements comprises a light diffuser, a first right angle mirror, and a second right angle mirror, wherein the first right angle mirror and the second right angle mirror are located in the first path and are arranged to reflect light from the light source to the light diffuser before the light is perpendicularly incident into the camera.
19. The optical assembly of any embodiment, wherein the light diffuser is a translucent diffuser having an opacity in the range of 10% to 90%.
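As one possible way to use the diffuser-illuminated, normal-incidence images described in embodiments 17 to 19 (a minimal sketch only; the claims do not prescribe any algorithm, and the function and variable names are illustrative), per-pixel absorbance can be estimated from a frame taken with the sample in place and a blank frame taken without it, using the Beer-Lambert relation A = -log10(I_sample / I_blank):

import numpy as np

def absorbance_map(sample_img, blank_img, dark_img=None):
    """Per-pixel absorbance A = -log10(I_sample / I_blank).

    sample_img, blank_img: grayscale frames captured with identical exposure,
    with and without the sample in the optical path.
    dark_img: optional dark frame used to remove the sensor offset.
    """
    sample = sample_img.astype(np.float64)
    blank = blank_img.astype(np.float64)
    if dark_img is not None:
        sample = sample - dark_img.astype(np.float64)
        blank = blank - dark_img.astype(np.float64)
    eps = 1e-6  # avoid division by zero and log of non-positive values
    transmittance = np.clip(sample, eps, None) / np.clip(blank, eps, None)
    return -np.log10(transmittance)

# Example: a sample transmitting about 50% of the light gives A close to 0.30.
blank = np.full((100, 100), 200.0)
sample = 0.5 * blank
print(round(float(absorbance_map(sample, blank).mean()), 3))  # 0.301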
20. The optical assembly of an embodiment, further comprising a rubber door to cover the sample receiver to prevent ambient light from entering the cavity.
21. The optical assembly of any preceding implementation, wherein the light source and the camera are positioned at a fixed distance from each other on the same side of the handheld electronic device.
22. A system, comprising:
an optical assembly according to any of the preceding embodiments; and
a mobile phone accessory including a first side configured to couple to the optical assembly and a second, opposite side configured to couple to the handheld electronic device, wherein the handheld electronic device is a mobile phone.
23. The system of any embodiment, wherein the mobile phone accessory is replaceable to provide accessories for different sized mobile phones.
24. The system of any embodiment, wherein the size of the mobile phone accessory is adjustable.
25. An optical assembly for a handheld mobile electronic device, the optical assembly comprising:
a housing;
a cavity within the housing;
a plurality of optical elements within the cavity, wherein the plurality of optical elements are arranged to receive light entering from a first aperture in the housing and redirect light entering from the first aperture along a first path toward a second aperture in the housing;
a movable arm configurable in at least three different positions within the housing,
wherein the movable arm comprises a light reflector portion for reflecting light,
wherein the movable arm comprises a light diffuser to homogenize the light and disrupt the coherence of the light,
wherein the movable arm includes an aperture aligned with the first aperture in the housing,
wherein when the movable arm is in a first position within the housing, the light reflector portion is positioned between the first aperture in the housing and the plurality of optical elements such that the light reflector portion blocks light entering from the first aperture from being incident on the plurality of optical elements, and
wherein light entering from the first aperture is incident on the plurality of optical elements when the movable arm is in the second position within the housing, and wherein light entering from the first aperture passes through the aperture on the movable arm and is then incident on the light diffuser when the movable arm is in the third position within the housing.
26. The optical assembly of any embodiment, comprising a slot on a side of the housing, wherein the slot is arranged to receive a sample substrate such that:
the first path intersects the sample substrate when the sample substrate is fully inserted within the slot and the movable arm is in the second position within the housing;
when the sample substrate is fully inserted within the slot and the movable arm is in a first position within the housing, light reflected by the light reflector portion is redirected to the sample substrate; and
when the sample substrate is fully inserted into the slot and the movable arm is in a third position within the housing, light travels along the first path toward the light diffuser and then impinges on the sample substrate.
27. The optical assembly of any implementation, wherein the movable arm includes a light absorbing portion to absorb light that is not incident on the light reflector portion after passing through the first aperture.
28. The optical assembly of any implementation, wherein the movable arm comprises:
a first receiver positioned above the light reflector portion;
an optical filter disposed in the first receiver;
a second receiver positioned above the aperture portion; and
an optical filter disposed in the second receiver.
29. The optical assembly of any implementation, wherein the optical filter disposed in the first receiver is positioned to receive light entering from the first aperture in the housing when the movable arm is in the first position, and the optical filter disposed in the second receiver is positioned to receive light entering from the first aperture in the housing when the movable arm is in the third position.
30. The optical assembly of any embodiment, wherein when the movable arm is in the first position, the optical filter disposed in the first receiver overlaps the area where a portion of the sample substrate is located when the sample substrate is fully inserted into the slot.
31. A system, comprising:
the optical assembly of any embodiment; and
a mobile phone accessory including a first side configured to couple to the optical assembly and including a second opposite side configured to couple to the mobile phone, wherein a size of the mobile phone accessory is adjustable.
32. An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured so as to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a lens configured to provide a microscopic field of view to the camera;
a receiver for receiving the sample and positioning the sample within the microscopic field of view;
an optical fiber configured to receive light from the light source and illuminate the receiver.
33. The optical assembly of any implementation, wherein the lens and the camera define an optical axis when the optical assembly is attached to the handheld electronic device, and wherein the optical fiber surrounds the optical axis.
34. The optical assembly of any embodiment, wherein the optical fiber is annular.
35. The optical assembly of any embodiment, wherein the optical fiber is a side-emitting optical fiber.
36. The optical assembly of any embodiment, wherein the optical assembly comprises a housing defining the receiver, wherein the annular optical fiber is located in a groove of the housing, and wherein the housing comprises an aperture configured to align with both the light source and an end face of the annular optical fiber so as to receive light from the light source.
37. The optical assembly of any embodiment, wherein light is emitted from the side of the annular optical fiber to illuminate a sample area on the optical axis directly below the camera.
38. The optical assembly of any embodiment, wherein the optical assembly comprises a housing defining the receiver, wherein the housing comprises a first aperture configured to be aligned with the light source, and a first end face of the optical fiber is positioned in the first aperture to receive light from the light source.
39. The optical assembly of any embodiment, wherein the housing includes a second aperture configured to align with a camera, and wherein the optical fiber includes a first end positioned in the first aperture and includes a second end positioned in the second aperture.
40. The optical assembly of any embodiment, wherein at least one of the first end face of the optical fiber and the second end face of the optical fiber is angled.
41. The optical assembly of any embodiment, wherein the optical fiber is tilted with respect to the light source when the optical assembly is attached to the handheld electronic device, and
wherein the second end face of the optical fiber is arranged to illuminate a region of the sample directly below the lens.
42. The optical assembly of any embodiment, wherein the optical assembly includes a housing defining the receiver, the housing including a groove, and the optical fiber is disposed in the groove.
43. An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured so as to enable microscopic imaging of a sample by the camera with illumination of the sample by light from the light source, the optical assembly comprising:
a lens configured to provide a microscopic field of view to the camera;
a receiver for receiving a sample and positioning the sample within the microscopic field of view;
a mirror offset from an optical axis of the lens and positioned to reflect light from the light source and illuminate the sample over a range of tilt angles relative to the optical axis; and
a wavelength filter positioned between the sample and a camera to pass fluorescence emitted by the sample in response to the oblique illumination.
44. The optical assembly of any embodiment, wherein the lens is positioned on a front side of the sample and the mirror is positioned to illuminate the sample obliquely from a back side of the sample, wherein the oblique angle is greater than a collection angle of the lens.
45. The optical assembly of any implementation, further comprising an optical absorber positioned on an optical axis adjacent the mirror to absorb light from the light source that is not reflected by the mirror.
46. The optical assembly of any implementation, wherein the mirror and the optical absorber are mounted on a common structure and tilted with respect to each other.
47. The optical assembly of any embodiment, further comprising a second wavelength filter positioned in the path of the illumination light between the light source and the mirror to select certain wavelengths for illuminating the sample.
48. The optical assembly of any preceding embodiment, wherein the sample is supported by a sample holder comprising a planar structure, and wherein the receiver is configured to position the planar structure to extend partially into the path of the illumination light from the light source to couple the illumination light into the planar structure.
49. The optical assembly of any embodiment, wherein the receiver is configured to position the planar structure such that the illumination light is incident on an edge of the planar structure, wherein the edge extends along a plane perpendicular to a plane containing the field of view.
50. The optical assembly of any implementation, wherein the mirror is arranged to reflect light to partially obliquely illuminate the sample from a back side of a planar structure and to partially illuminate an edge of the planar structure to couple illumination light into the planar structure.
51. The optical assembly of any implementation, further comprising a rubber door to cover the sample receiver to prevent ambient light from entering the optical assembly and entering the camera.
52. The optical assembly of any embodiment, wherein the planar structure is configured to waveguide the coupled illumination light to the sample to illuminate the sample and cause the sample to emit fluorescence.
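The guiding behavior recited in embodiment 52 can be illustrated with the standard total-internal-reflection condition (an illustration only; the refractive indices below are assumptions, not claim limitations): for a planar plate of index n_1 surrounded by a medium of lower index n_2, light striking the plate surface at an internal angle \theta from the surface normal satisfying

    \theta \ge \theta_c = \arcsin\left(\frac{n_2}{n_1}\right)

is totally internally reflected and guided along the plate toward the sample region, where contact with the sample frustrates the guiding and the escaping light excites fluorescence.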
53. The optical assembly of any embodiment, further comprising the sample holder.
54. The optical assembly of any embodiment, wherein the sample is a liquid sample and the sample holder comprises a first plate and a second plate that sandwich the liquid sample.
55. The optical assembly of any preceding implementation, wherein the lens, the receiver, the mirror, and the wavelength filter are housed in a common optical box, and further comprising a replaceable holder frame for attaching the optical box to the handheld electronic device.
56. The optical assembly of any embodiment, wherein the light source and camera are positioned on the same side of the handheld electronic device and at a fixed distance from each other.
57. The optical assembly of any implementation, wherein the handheld electronic device is a cell phone.
58. A device comprising the optical assembly of any preceding implementation and the handheld electronic device.
59. An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, the optical assembly configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a lens configured to provide a microscopic field of view to the camera;
a receiver for receiving the sample and positioning the sample within the microscopic field of view,
wherein the sample is supported by a sample holder comprising a planar structure, and wherein the receiver is configured to position the planar structure to extend partially into a path of illumination light from the light source to couple illumination light into the planar structure and cause the sample to emit fluorescence; and
a wavelength filter positioned between the sample and the camera to pass fluorescence emitted by the sample in response to the illumination.
60. The optical assembly of any embodiment, further comprising a rubber door covering the sample receiver to prevent ambient light from entering the optical assembly through the receiver.
61. The optical assembly of any embodiment, wherein the planar structure is configured to guide the coupled illumination light to the sample to illuminate the sample and cause the sample to emit the fluorescent light.
62. The optical assembly of any embodiment, further comprising the sample holder.
63. The optical assembly of any embodiment, wherein the sample is a liquid sample and the sample holder comprises a first plate and a second plate that sandwich the liquid sample.
64. The optical assembly of any embodiment, further comprising a second wavelength filter located in the path of the illumination light between the light source and the portion of the sample holder extending partially into the light path.
65. The optical assembly of any preceding implementation, wherein the lens, the receiver, and the wavelength filter are housed in a common optical box, and further comprising a replaceable holder frame for attaching the optical box to the handheld electronic device.
66. The optical assembly of any implementation, wherein the light source and the camera are positioned at a fixed distance from each other on the same side of the handheld electronic device.
67. The optical assembly of any implementation, wherein the handheld electronic device is a cell phone.
68. A device comprising the optical assembly of any preceding implementation and the handheld electronic device.
69. An optical assembly attachable to a handheld electronic device having a light source, a camera, and a computer processor, wherein the optical assembly is configured to enable microscopic imaging of a sample by the camera while the sample is illuminated by light from the light source, the optical assembly comprising:
a first assembly lens configured to provide a first microscopic field of view to a first camera module;
a second assembly lens configured to provide a second microscopic field of view to a second camera module; and
a receiver for receiving the sample and positioning the sample within the first microscopic field of view and within the second microscopic field of view.
70. The optical assembly of any implementation, wherein the first camera module includes a first inner lens and the second camera module includes a second inner lens, wherein a first optical magnification provided by the first assembly lens and the first inner lens is the same as a second optical magnification provided by the second assembly lens and the second inner lens.
71. The optical assembly of any embodiment, wherein a first ratio of a focal length of the first assembly lens to a focal length of the first inner lens is equal to a second ratio of a focal length of the second assembly lens to a focal length of the second inner lens.
72. The optical assembly of any embodiment, wherein a first image resolution provided by the first camera module and the first assembly lens is the same as a second image resolution provided by the second camera module and the second assembly lens.
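Under a common thin-lens approximation for such two-lens arrangements (a sketch consistent with the convention implied by embodiments 71, 74, and 75, not a definition used by the claims), the optical magnification of each camera module scales with the recited focal-length ratio,

    M_i \propto \frac{f_{assembly,i}}{f_{inner,i}},

so equal ratios give equal magnification (embodiment 71), and a smaller ratio gives lower magnification and correspondingly lower image resolution (embodiments 74 and 75).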
73. The optical assembly of any implementation, wherein the first camera module includes a first inner lens and the second camera module includes a second inner lens, wherein a first optical magnification provided by the first assembly lens and the first inner lens is different than a second optical magnification provided by the second assembly lens and the second inner lens.
74. The optical assembly of any embodiment, wherein a first ratio of a focal length of the first assembly lens to a focal length of the first inner lens is less than a second ratio of a focal length of the second assembly lens to a focal length of the second inner lens.
75. The optical assembly of any embodiment, wherein a first image resolution provided by the first camera module and the first assembly lens is less than a second image resolution provided by the second camera module and the second assembly lens.
76. The optical assembly of any preceding embodiment, wherein the first microscopic field of view overlaps the second microscopic field of view.
77. The optical assembly of any embodiment, wherein the first microscopic field of view overlaps the second microscopic field of view by an amount between 1% and 90%.
78. The optical assembly of any embodiment, wherein the first microscopic field of view does not overlap the second microscopic field of view.
79. The optical assembly of any preceding embodiment, wherein each of the first and second assembly lenses is arranged to receive light scattered or emitted by the sample.
80. The optical assembly of any preceding embodiment, wherein the first microscopic field of view is smaller than the second microscopic field of view.
81. The optical assembly of any preceding embodiment, wherein the angular field of view of the first assembly lens is less than the angular field of view of the second assembly lens.
82. The optical assembly of any embodiment, wherein a ratio of the angular field of view of the first assembly lens to the angular field of view of the second assembly lens is between 1.1 and 1000.
83. The optical assembly of any implementation, comprising:
a first optical filter disposed in a first illumination path to or from the first assembly lens; and
a second optical filter disposed in a second illumination path to or from the second assembly lens.
84. The optical assembly of any embodiment, wherein the first optical filter is configured to filter a first wavelength range, the second optical filter is configured to filter a second wavelength range, and the first wavelength range is different from the second wavelength range.
85. The optical assembly of any implementation, comprising:
a first polarizer disposed in a first illumination path to or from the first assembly lens; and
a second polarizer disposed in a second illumination path to or from the second assembly lens.
86. The optical assembly of any embodiment, wherein the first and second polarizers have different polarization-dependent light transmission and blocking characteristics.
87. A device comprising the optical assembly of any preceding implementation and the handheld electronic device.
88. The optical assembly of any implementation, wherein the handheld electronic device is a cell phone.
89. The device of any implementation, wherein the handheld electronic device is configured to computationally merge a first image obtained from the first camera module with a second image obtained from the second camera module.
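One way the handheld electronic device of embodiment 89 could merge the two images computationally (a sketch under stated assumptions, not the claimed method: it assumes the two views have already been resampled to a common pixel scale, that the detailed view lies inside the wide view, and that all names are illustrative) is to estimate the translation between the overlapping fields by phase correlation and then composite the higher-resolution view into the wider view:

import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (row, col) translation of `mov` relative to `ref`
    by phase correlation. Both images must have the same shape and overlap."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the half-size correspond to negative (wrapped-around) shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

def paste(wide, detail, row, col):
    """Composite the higher-magnification `detail` view into the `wide` view
    at offset (row, col); both are assumed to share the same pixel scale."""
    merged = wide.astype(np.float64).copy()
    r1 = min(row + detail.shape[0], merged.shape[0])
    c1 = min(col + detail.shape[1], merged.shape[1])
    merged[row:r1, col:c1] = detail[: r1 - row, : c1 - col]
    return merged

# Usage sketch: locate the detailed view inside the wide view using a
# same-sized overlapping crop, then composite it.
# row, col = estimate_shift(wide[:detail.shape[0], :detail.shape[1]], detail)
# merged = paste(wide, detail, row, col)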
90. An imaging method, comprising:
pressing a sample between two plates, wherein the two plates are separated from each other by an array of spacers, at least one of which has a reference mark;
acquiring a plurality of images of the sample using an imaging system comprising a camera and at least one lens, wherein each image corresponds to a different object plane within the thickness of the sample;
computationally analyzing each image to determine information about the respective object plane based on one or more reference markers; and
computationally constructing a three-dimensional image of the sample based on the plurality of images and information of the corresponding object planes.
91. The imaging method of any preceding implementation, wherein the determined information about the respective object plane includes a depth of the object plane relative to an imaging system.
92. The imaging method according to any embodiment, wherein at least some of the spacers each have a reference mark.
93. The imaging method according to any preceding embodiment, wherein the determined information about the respective object plane comprises a depth and an orientation of the object plane relative to an imaging system.
94. The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises determining a degree of defocus of one or more reference marks.
95. The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises: determining a depth of each reference mark of the plurality of reference marks based on the degree of defocus of each reference mark; and determining a depth and orientation of the respective object plane relative to the imaging system based on the determined depths of the reference marks.
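As a concrete illustration of the two determining steps in embodiment 95 (a sketch only; it assumes a pre-calibrated defocus-to-depth relation has already converted each reference mark's degree of defocus into a depth value, and all names are illustrative), the depth and orientation of the object plane can be recovered from the per-mark depths by a least-squares plane fit:

import numpy as np

def fit_object_plane(xs, ys, depths):
    """Least-squares fit of z = a*x + b*y + c to per-mark depth estimates.

    xs, ys : reference-mark positions in the image (pixels).
    depths : depth estimated for each mark from its degree of defocus,
             using a pre-calibrated defocus-to-depth relation.
    Returns (a, b, c): slopes in x and y (the plane orientation) and the
    depth at the image origin, i.e. the depth and tilt of the object plane.
    """
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(depths, dtype=float), rcond=None)
    return tuple(coeffs)

# Example with synthetic marks lying on a slightly tilted plane.
xs = np.array([10.0, 200.0, 400.0, 50.0, 350.0])
ys = np.array([20.0, 30.0, 250.0, 400.0, 380.0])
depths = 0.002 * xs - 0.001 * ys + 5.0          # e.g. micrometres
a, b, c = fit_object_plane(xs, ys, depths)
print(round(a, 4), round(b, 4), round(c, 2))    # 0.002 -0.001 5.0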
96. A method of imaging according to any preceding embodiment, wherein the reference mark is rotationally asymmetric with respect to an axis perpendicular to at least one of the plates.
97. The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises determining a rotational orientation of one or more reference markers relative to an axis of the imaging system.
98. The imaging method of any preceding embodiment, wherein the computational analysis of each image comprises comparing image information about the reference marker with a priori knowledge about the reference marker.
99. The imaging method according to any preceding embodiment, wherein the a priori knowledge about the reference markers is based on one or more of a shape of each reference marker and a position of each reference marker relative to the plate.
100. The imaging method according to any preceding embodiment, wherein the spacers are cylinders.
101. The imaging method of any preceding implementation, wherein acquiring the plurality of images includes moving one or more components of the imaging system relative to the plate holding the sample.
102. The imaging method of any preceding implementation, wherein the computational construction of the three-dimensional image includes processing each acquired image to remove defocused features.
103. The imaging method of any preceding embodiment, wherein processing each acquired image to remove defocused features comprises using a band-pass filter.
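A band-pass filter of the kind recited in embodiment 103 is commonly realized as a difference of Gaussians, which suppresses both pixel-level noise and the slowly varying background contributed by defocused features (a sketch only; the cutoff values are illustrative, not taken from the disclosure):

import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_dog(image, sigma_low=1.0, sigma_high=15.0):
    """Difference-of-Gaussians band-pass filter.

    sigma_low  : blur that suppresses pixel-level noise (high frequencies).
    sigma_high : blur that estimates the smooth background contributed by
                 defocused features (low frequencies).
    The difference keeps the intermediate spatial frequencies, i.e. the
    in-focus structure of the current object plane.
    """
    img = image.astype(np.float64)
    denoised = gaussian_filter(img, sigma_low)
    background = gaussian_filter(img, sigma_high)
    return denoised - background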
104. The imaging method of any preceding embodiment, wherein the acquired image corresponds to an interference image formed by combining light from the sample with reference light of the sample that is not directed onto a camera.
105. An imaging device, comprising:
an imaging system comprising a camera and at least one lens;
a sample holder for supporting a sample cartridge relative to the imaging system, the sample cartridge comprising two plates separated from each other by an array of spacers, at least one of which has a reference mark, wherein a sample to be imaged is configured to be pressed between the two plates; and
a processing and control system coupled to the sample holder and the camera and configured to acquire a plurality of images of the sample using the imaging system, wherein each image corresponds to a different object plane within a thickness of the sample,
wherein the processing and control system is further configured to:
computationally analyze each image based on one or more reference markers to determine information about a corresponding object plane; and
computationally construct a three-dimensional image of the sample based on the plurality of images and the information about the respective object planes.
106. The imaging device of any preceding implementation, wherein the determined information about the respective object plane includes a depth of the object plane relative to the imaging system.
107. The imaging device of any implementation, wherein at least some of the spacers each have a reference mark.
108. The imaging device of any preceding embodiment, wherein the determined information about the respective object plane comprises a depth and an orientation of the object plane relative to the imaging system.
109. The device of any preceding implementation, wherein the computational analysis of each image includes determining a degree of defocus for one or more of the reference markers.
110. The device of any embodiment, wherein the computational analysis of each image comprises: determining a depth of each reference mark of the plurality of reference marks based on the degree of defocus of each reference mark; and determining a depth and orientation of the respective object plane relative to the imaging system based on the determined depths of the reference marks.
111. The device of any preceding implementation, wherein the reference mark is rotationally asymmetric with respect to an axis perpendicular to at least one of the plates.
112. The device of any implementation, wherein the computational analysis of each image includes determining a rotational orientation of one or more of the reference markers relative to the axis of the imaging system.
113. The device of any preceding embodiment, wherein the computational analysis of each image comprises comparing image information about the reference marker with a priori knowledge about the reference marker.
114. The device of any preceding embodiment, wherein the a priori knowledge about the reference markers is based on one or more of a shape of each reference marker and a position of each reference marker relative to the plate.
115. The device of any preceding embodiment, wherein the spacers are cylinders.
116. The device of any preceding embodiment, wherein the control system is configured to facilitate movement of one or more components of the imaging system relative to the plate holding the sample to acquire the plurality of images.
117. The device of any preceding implementation, wherein the computational construction of the three-dimensional image includes processing each acquired image to remove defocused features.
118. The device of any implementation, wherein processing each acquired image to remove defocused features includes using a band pass filter.
119. The device of any preceding embodiment, wherein the acquired image corresponds to an interference image formed by combining light from the sample with reference light of the sample that is not directed onto the camera.
CN201880020973.8A 2017-02-08 2018-02-08 Optical device, apparatus and system for assay Active CN111465882B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310507084.7A CN116794819A (en) 2017-02-08 2018-02-08 Optical device, apparatus and system for assay

Applications Claiming Priority (17)

Application Number Priority Date Filing Date Title
US201762456504P 2017-02-08 2017-02-08
US201762456598P 2017-02-08 2017-02-08
US201762456590P 2017-02-08 2017-02-08
US62/456,504 2017-02-08
US62/456,598 2017-02-08
US62/456,590 2017-02-08
US201762457133P 2017-02-09 2017-02-09
US201762456904P 2017-02-09 2017-02-09
US62/457,133 2017-02-09
US62/456,904 2017-02-09
US201762459554P 2017-02-15 2017-02-15
US62/459,554 2017-02-15
US201762460075P 2017-02-16 2017-02-16
US201762460062P 2017-02-16 2017-02-16
US62/460,062 2017-02-16
US62/460,075 2017-02-16
PCT/US2018/017504 WO2018148471A2 (en) 2017-02-08 2018-02-08 Optics, device, and system for assaying

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310507084.7A Division CN116794819A (en) 2017-02-08 2018-02-08 Optical device, apparatus and system for assay

Publications (2)

Publication Number Publication Date
CN111465882A true CN111465882A (en) 2020-07-28
CN111465882B CN111465882B (en) 2023-05-26

Family

ID=63107057

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310507084.7A Pending CN116794819A (en) 2017-02-08 2018-02-08 Optical device, apparatus and system for assay
CN201880020973.8A Active CN111465882B (en) 2017-02-08 2018-02-08 Optical device, apparatus and system for assay

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310507084.7A Pending CN116794819A (en) 2017-02-08 2018-02-08 Optical device, apparatus and system for assay

Country Status (5)

Country Link
EP (1) EP3580597A4 (en)
JP (1) JP7177073B2 (en)
CN (2) CN116794819A (en)
CA (1) CA3053009A1 (en)
WO (1) WO2018148471A2 (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK3901281T3 (en) 2015-04-10 2023-01-23 Spatial Transcriptomics Ab SPATIALLY SEPARATE, MULTIPLEX NUCLEIC ACID ANALYSIS OF BIOLOGICAL SAMPLES
USD897555S1 (en) 2018-11-15 2020-09-29 Essenlix Corporation Assay card
USD898221S1 (en) 2018-11-15 2020-10-06 Essenlix Corporation Assay plate
USD898224S1 (en) 2018-11-15 2020-10-06 Essenlix Corporation Assay plate with sample landing mark
USD898939S1 (en) 2018-11-20 2020-10-13 Essenlix Corporation Assay plate with sample landing mark
USD893469S1 (en) 2018-11-21 2020-08-18 Essenlix Corporation Phone holder
USD910202S1 (en) 2018-11-21 2021-02-09 Essenlix Corporation Assay plate with sample landing mark
USD910203S1 (en) 2018-11-27 2021-02-09 Essenlix Corporation Assay plate with sample landing mark
USD893470S1 (en) 2018-11-28 2020-08-18 Essenlix Corporation Phone holder
USD912842S1 (en) 2018-11-29 2021-03-09 Essenlix Corporation Assay plate
WO2020123320A2 (en) 2018-12-10 2020-06-18 10X Genomics, Inc. Imaging system hardware
USD898222S1 (en) 2019-01-18 2020-10-06 Essenlix Corporation Assay card
CN110297322A (en) * 2019-01-21 2019-10-01 福鼎市一雄光学仪器有限公司 The mobile phone high definition close-shot enlarging lens device of built-in fiber illumination
CN111289443A (en) * 2019-07-03 2020-06-16 无锡市人民医院 Drainage liquid colorimetric card convenient for recycling and colorimetric method thereof
AU2020309098A1 (en) * 2019-07-11 2022-03-10 Sensibility Pty Ltd Machine learning based phone imaging system and analysis method
US20220276235A1 (en) * 2019-07-18 2022-09-01 Essenlix Corporation Imaging based homogeneous assay
EP3798614A1 (en) * 2019-09-30 2021-03-31 IAssay, Inc. Modular multiplex analysis devices and platforms
GB201918948D0 (en) 2019-12-20 2020-02-05 Oxford Immune Algorithmics Ltd Portable device for imaging biological sample
CN113155555B (en) * 2020-01-23 2023-04-11 天津市政工程设计研究总院有限公司 Manufacturing method of magnesium alloy model for simulating concrete pipe gallery
US11768175B1 (en) 2020-03-04 2023-09-26 10X Genomics, Inc. Electrophoretic methods for spatial analysis
WO2021236929A1 (en) 2020-05-22 2021-11-25 10X Genomics, Inc. Simultaneous spatio-temporal measurement of gene expression and cellular activity
WO2021252499A1 (en) 2020-06-08 2021-12-16 10X Genomics, Inc. Methods of determining a surgical margin and methods of use thereof
EP4164796A4 (en) * 2020-06-10 2024-03-06 10X Genomics Inc Fluid delivery methods
WO2022061152A2 (en) * 2020-09-18 2022-03-24 10X Genomics, Inc. Sample handling apparatus and fluid delivery methods
CN112276370B (en) * 2020-11-27 2021-10-08 华中科技大学 Three-dimensional code laser marking method and system based on spatial light modulator
WO2022119459A1 (en) * 2020-12-03 2022-06-09 Pictor Limited Device and method of analyte detection
EP4121555A1 (en) 2020-12-21 2023-01-25 10X Genomics, Inc. Methods, compositions, and systems for capturing probes and/or barcodes
JP2024502159A (en) * 2021-01-06 2024-01-17 スコップゲンクス・プライベート・リミテッド Compact, portable multimode microscope
WO2022178054A1 (en) * 2021-02-18 2022-08-25 Peek Technologies, Inc. Configurable diagnostic platform systems and methods for performing chemical test assays
CN113984759B (en) * 2021-09-24 2023-12-12 暨南大学 Optical instant detection system and body fluid slide crystallization and detection method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7372985B2 (en) * 2003-08-15 2008-05-13 Massachusetts Institute Of Technology Systems and methods for volumetric tissue scanning microscopy
JP2005099786A (en) * 2003-08-29 2005-04-14 Olympus Corp Optical element switching device of microscope
DE102004034975A1 (en) * 2004-07-16 2006-02-16 Carl Zeiss Jena Gmbh Method for acquiring images of a sample with a microscope
JP5080186B2 (en) * 2007-09-26 2012-11-21 富士フイルム株式会社 Molecular analysis photodetection method, molecular analysis photodetection device used therefor, and sample plate
CN101952762B (en) * 2008-01-02 2012-11-28 加利福尼亚大学董事会 High numerical aperture telemicroscopy apparatus
JP6382309B2 (en) * 2013-07-12 2018-08-29 カルロバッツ,ネベン General purpose rapid diagnostic test reader with transvisual sensitivity
TWI533025B (en) * 2014-07-07 2016-05-11 億觀生物科技股份有限公司 Portable microscope
ES2894912T3 (en) * 2014-07-24 2022-02-16 Univ Health Network Collection and analysis of data for diagnostic purposes
US10068145B2 (en) * 2015-07-30 2018-09-04 Fuji Xerox Co., Ltd. Photographing system configured to hold a pill

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103181154A (en) * 2010-10-29 2013-06-26 加利福尼亚大学董事会 Cellscope apparatus and methods for imaging
US20120157160A1 (en) * 2010-12-21 2012-06-21 The Regents Of The University Of California Compact wide-field fluorescent imaging on a mobile device
CN105793694A (en) * 2013-12-12 2016-07-20 梅斯医疗电子系统有限公司 Home testing device
TW201536255A (en) * 2014-03-21 2015-10-01 Ind Tech Res Inst Portable analytical device and system
JP2016161550A (en) * 2015-03-05 2016-09-05 アークレイ株式会社 Colorimetric measuring adapter
CN204439554U (en) * 2015-03-27 2015-07-01 华南师范大学 Smart mobile phone wide field fluoroscope imager

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10991185B1 (en) 2020-07-20 2021-04-27 Abbott Laboratories Digital pass verification systems and methods
US10991190B1 (en) 2020-07-20 2021-04-27 Abbott Laboratories Digital pass verification systems and methods
US11514737B2 (en) 2020-07-20 2022-11-29 Abbott Laboratories Digital pass verification systems and methods
US11514738B2 (en) 2020-07-20 2022-11-29 Abbott Laboratories Digital pass verification systems and methods
US11574514B2 (en) 2020-07-20 2023-02-07 Abbott Laboratories Digital pass verification systems and methods
EP4199810A4 (en) * 2020-08-23 2024-03-13 My Or Diagnostics Ltd Apparatus and method for determining hemoglobin levels

Also Published As

Publication number Publication date
EP3580597A2 (en) 2019-12-18
WO2018148471A2 (en) 2018-08-16
WO2018148471A3 (en) 2018-10-04
JP2020509403A (en) 2020-03-26
EP3580597A4 (en) 2021-05-05
CN111465882B (en) 2023-05-26
JP7177073B2 (en) 2022-11-22
CA3053009A1 (en) 2018-08-16
CN116794819A (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN111465882B (en) Optical device, apparatus and system for assay
US11463608B2 (en) Image-based assay using mark-assisted machine learning
JP7084959B2 (en) Devices and systems for analyzing samples, especially blood, and how to use them.
US11526984B2 (en) Method of computing tumor spatial and inter-marker heterogeneity
US11842483B2 (en) Systems for cell shape estimation
KR101982330B1 (en) Apparatus and systems for collecting and analyzing vapor condensate, in particular condensate, and methods of using the apparatus and system
EP3882851B1 (en) Method for co-expression analysis
US20220407988A1 (en) Image-Based Assay Using Mark-Assisted Machine Learning
US11326989B2 (en) Devices and methods for tissue and cell staining
Cregger et al. Immunohistochemistry and quantitative analysis of protein expression
Liu et al. Pocket MUSE: an affordable, versatile and high-performance fluorescence microscope using a smartphone
JP5496906B2 (en) Method and system for removing autofluorescence from images
US11885952B2 (en) Optics, device, and system for assaying and imaging
US8673650B2 (en) Optical molecular detection
CN105247348A (en) Digitally enhanced microscopy for multiplexed histology
Kwon et al. Automated measurement of multiple cancer biomarkers using quantum-dot-based microfluidic immunohistochemistry
Deshmukh et al. A confirmatory test for sperm in sexual assault samples using a microfluidic-integrated cell phone imaging system
CN110824165B (en) Lung cancer tumor marker detection device and method based on micro-fluidic chip and mobile phone
Reilly et al. Advances in confocal microscopy and selected applications
Dolled-Filhart et al. Automated analysis of tissue microarrays
CN113424051A (en) System and method for remote evaluation of sample analysis for disease diagnosis
Williams et al. Fourier ptychographic microscopy for rapid, high-resolution imaging of circulating tumor cells enriched by microfiltration
Gordon et al. Low cost microscope for malarial parasitemia quantification in microfluidically generated blood smears
Liu System and Process Optimization for Biomedical Optical Imaging
Burke et al. High Content Analysis for Tissue Samples

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231101

Address after: Room 409, 4th Floor, Building 2, No. 1508 Kunyang Road, Minhang District, Shanghai

Patentee after: Shanghai Yisheng Biotechnology Co.,Ltd.

Address before: Unit B, 14th Floor, Wing Cheung Commercial Building, 19-25 Suhang Street, Sheung Wan, Hong Kong, China

Patentee before: Yewei Co.,Ltd.

Effective date of registration: 20231101

Address after: Unit B, 14th Floor, Wing Cheung Commercial Building, 19-25 Suhang Street, Sheung Wan, Hong Kong, China

Patentee after: Yewei Co.,Ltd.

Address before: Monmouth Junction, New Jersey, USA

Patentee before: Essenlix