WO2017053592A1 - Deep learning in label-free cell classification and machine vision extraction of particles - Google Patents

Deep learning in label-free cell classification and machine vision extraction of particles

Info

Publication number: WO2017053592A1
Authority: WO (WIPO, PCT)
Application number: PCT/US2016/053153
Prior art keywords: recited, optical, features, particle, cell
Other languages: English (en)
Inventors: Bahram Jalali, Ata Mahjoubfar, Lifan Chen
Original assignee: The Regents of The University of California
Application filed by The Regents of The University of California
Publication of WO2017053592A1
Priority to US15/928,992 (published as US10593039B2)


Classifications

    • G16B 40/20: Supervised data analysis
    • G16B 40/00: ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding
    • G01N 15/1429: Optical investigation techniques, e.g. flow cytometry; Signal processing
    • G01N 15/1433: Signal processing using image recognition
    • G01N 15/1434: Optical arrangements
    • G01N 15/147: Optical investigation with spatial resolution of the texture or inner structure of the particle, the analysis being performed on a sample stream
    • G01N 15/149: Optical investigation specially adapted for sorting particles, e.g. by their size or optical properties
    • G01N 2015/1006: Investigating individual particles for cytology
    • G01N 2015/144: Imaging characterised by its optical setup
    • G01N 2015/1445: Three-dimensional imaging, imaging in different image planes, e.g. under different angles or at different depths, e.g. by a relative motion of sample and detector, for instance by tomography
    • G01N 2015/1454: Optical arrangements using phase shift or interference, e.g. for improving contrast
    • G06N 3/04: Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/086: Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06T 7/0012: Biomedical image inspection
    • G06V 10/454: Local feature extraction with biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 20/69: Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695: Preprocessing, e.g. image segmentation
    • G06V 20/698: Matching; Classification

Definitions

  • the technology of this disclosure pertains generally to label-free cell analysis, and more particularly to label-free cell classification with deep learning.
  • Flow cytometry is a powerful tool for large-scale cell analysis due to its ability to measure anisotropic elastic light scattering of millions of individual cells as well as emission of fluorescent labels conjugated to cells.
  • However, each cell is represented by a single value per detection channel (forward scatter, side scatter, and emission bands), and acceptable classification accuracy often requires labeling with specific biomarkers.
  • Imaging flow cytometry captures images of cells, revealing significantly more information about the cells. For example, it can distinguish clusters and debris that would otherwise result in false positive identification in a conventional flow cytometer based on light scattering.
  • The present disclosure describes a label-free imaging flow-cytometry technique based on a coherent optical implementation of the photonic time stretch concept.
  • This instrument overcomes the trade-off between sensitivity and speed by using an amplified time-stretch dispersive Fourier transform (TS-DFT).
  • Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (optical pulse width) and the drop in peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low noise-figure Raman amplifier within the dispersive device that performs time stretching. Moreover, a warped stretch transform is utilized in the disclosed time stretch imaging to achieve optical image compression and non-uniform spatial resolution over the field-of-view (FOV). In the coherent version of the instrument, the time stretch imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real- time and at high throughput.
  • The coherent time stretch imaging system in this work measures both the quantitative optical phase shift and the optical loss of individual cells as a high-speed imaging flow cytometer, for example capturing 36 million images per second at flow rates as high as 10 meters per second, reaching up to 100,000 cells per second throughput.
  • FIG.1 is a block diagram of a time stretch quantitative phase imaging (TS-QPI) system according to an embodiment of the present disclosure.
  • FIG.2A through FIG.2C are detailed block diagrams of the time stretch quantitative phase imaging (TS-QPI) system shown in FIG.1 according to an embodiment of the present disclosure.
  • FIG.3A and FIG.3B are optical phase and optical loss images for a white blood cell line (OT-II) in FIG.3A and a colon cancer cell line (SW-480) in FIG.3B, captured according to an embodiment of the present disclosure.
  • FIG.4A and FIG.4B are a heat map of pairwise correlation and a ranking, respectively, of sixteen biophysical features according to an embodiment of the present disclosure.
  • FIG.5 is a neural stage diagram of a machine learning pipeline according to an embodiment of the present disclosure.
  • FIG.6 is a 3D scatter plot of three features for OT-II and SW-480 models plotted in relation to size, protein concentration, and attenuation according to an embodiment of the presently disclosed time stretch quantitative phase imaging (TS-QPI) system.
  • FIG.7A through FIG.7C are plots of balanced accuracy, sensitivity and specificity, and principal component analysis according to an embodiment of the present disclosure.
  • FIG.8A and FIG.8B are plots comparing features obtained for an example algal population according to an embodiment of the present disclosure.
  • FIG.9A and FIG.9B are plots of testing and training averages with respect to number of points (FIG.9A) and a comparison of balanced accuracy for an embodiment of the present disclosure compared with other techniques (FIG.9B).
  • FIG.10A and FIG.10B are plots of spectral interference fringes and associated spectral interferograms as determined according to an embodiment of the present disclosure.
  • FIG.11A and FIG.11B are plots indicating the variance explained by the principal classification parameters and the increase in accuracy with respect to the number of classification parameters utilized according to an embodiment of the present disclosure.
  • FIG.12A and FIG.12B are a diagram and plot of training and testing during tuning of a neural network utilized according to an embodiment of the present disclosure.
  • FIG.13A and FIG.13B are an image and 2D outline of a PDMS microfluidic channel according to an embodiment of the present disclosure.
  • Deep learning extracts patterns and knowledge from rich multi-dimensional datasets. While the deep learning paradigm is widely used for image recognition and speech processing, its application to label-free classification of cells had not previously been exploited.
  • the present disclosure illustrates a method and apparatus which benefits from deep learning.
  • FIG.1 illustrates an example embodiment 10 of a time stretch quantitative phase imaging (TS-QPI) system. The system generally comprises an optical beam system 12 coupled to an optical circulator 14, which is coupled to a quantitative phase imaging system 16 for interacting with target 18.
  • Output from quantitative phase imaging system 16 is received at the optical circulator 14 and directed to an amplified time stretch module 20, with output to a detection and conversion module 22, which is coupled to a data analytics module 24.
  • the following figures explain these elements with increased particularity.
  • FIG.2A through FIG.2C illustrate the time stretch quantitative phase imaging (TS-QPI) and analytics system separated into three figures to show additional details.
  • FIG.2A shows optical beam system 12 coupled to an optical circulator 14, with output from the optical circulator coupled to an amplified time stretch system 20, then to detection and conversion module 22.
  • the optical beam system is depicted with a mode-locked laser 26 followed by a nonlinear fiber 28, a dispersion compensating fiber 30, an erbium doped fiber amplifier (EDFA) 32, followed by a wavelength-division multiplexing (WDM) filter 34 which generates and shapes a train of broadband optical pulses directed into optical circulator 14.
  • In FIG.2B is seen quantitative phase imaging system 16 directed at target 18.
  • the pulse train from optical beam system 12, passing through optical circulator 14 of FIG.2A, is received at coupler 50, which outputs light through wave plates 52 to a mirror 54 and diffraction gratings 56, then through a spherical lens 58 to an objective lens 60.
  • Output from objective lens 60 is directed to a beam splitter 62, to/from target 18 and mirror 64 forming the reference arm of the interferometer.
  • target 18 is shown on a stage 66, or other structure as desired.
  • the light pulses of the input pulse train are thus spatially dispersed into a train of rainbow flashes illuminating the target as line scans.
  • the spatial features of the target are encoded into the spectrum of the broadband optical pulses, each representing a one-dimensional frame.
  • the ultra-short optical pulse illumination freezes the motion of cells during high speed flow to achieve blur-free imaging with a throughput of 100,000 cells/second.
  • the phase shift and intensity loss at each location within the field of view (FOV) is thus embedded into the spectral interference patterns using this Michelson interferometer.
  • In FIG.2C, depicting amplified time stretch module 20, first WDM 36 receives a first input from optical circulator 14 and a second input from a Raman pump laser 38. Output from first WDM 36 is coupled to a dispersion compensating fiber 40 and received at a second WDM 42, which performs the converse of first WDM 36. Second WDM 42 is coupled to a Raman pump laser 44 and outputs pixel waveform patterns to detection and conversion module 22.
  • the interferogram pulses are stretched in time in module 20 so that spatial information is mapped into time through time-stretch dispersive Fourier transform (TS-DFT), allowing them to be captured by a detection and conversion module 22 exemplified as comprising single pixel photodetector 46 and an analog-to-digital converter (ADC) 48.
  • pulse synchronization is performed in which the time-domain signal carrying serially captured rainbow pulses is transformed into a series of one-dimensional spatial maps, which are used for forming line images.
  • biomass density of a cell leads to a spatially varying optical phase shift.
  • Hilbert transformation and phase unwrapping are used to extract the spatial phase shift.
  • feature extraction is performed by decoding phase shift in each pulse at each wavelength and remapping it into a pixel to reveal the protein concentration distribution within cells.
  • the optical loss induced by the cells, embedded in the pulse intensity variations, is obtained from the amplitude of the slowly varying envelope of the spectral interferograms.
  • quantitative optical phase shift and intensity loss images are captured simultaneously. Both images are calibrated based on the regions where the cells are absent.
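  • By way of example and not limitation, the following minimal Python sketch illustrates the demodulation described above; it assumes numpy/scipy are available, and the function name and filter parameters (extract_phase_and_loss, fs, f_fringe, bw) are hypothetical placeholders rather than the disclosed implementation.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def extract_phase_and_loss(waveform, fs, f_fringe, bw):
    """Split a time-stretched interferogram line scan into a slowly varying
    envelope (optical loss) and a fringe phase (optical path difference)."""
    # Low-frequency part: pulse envelope deviations -> optical loss profile.
    b, a = butter(4, (f_fringe - bw / 2) / (fs / 2), btype="low")
    loss = filtfilt(b, a, np.abs(waveform))

    # Band-pass part: the fast oscillating interference fringe (carrier).
    lo, hi = (f_fringe - bw / 2) / (fs / 2), (f_fringe + bw / 2) / (fs / 2)
    b, a = butter(4, [lo, hi], btype="band")
    fringe = filtfilt(b, a, waveform)

    # Hilbert transform -> analytic signal -> unwrapped instantaneous phase.
    phase = np.unwrap(np.angle(hilbert(fringe)))

    # Remove the linear carrier term (background phase set by the arm
    # length mismatch), leaving only the cell-induced phase shift.
    t = np.arange(len(waveform)) / fs
    phase -= np.polyval(np.polyfit(t, phase, 1), t)
    return phase, loss
```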
  • Cell features describing morphology, granularity, biomass, and so forth are extracted from the images. In block 74 these biophysical features are utilized in a machine learning algorithm for high-accuracy label-free classification of the cells.
  • the information of the quantitative optical loss and phase images is fused into expert-designed features, leading to record label-free classification accuracy when combined with deep learning.
  • Image mining techniques are applied, for the first time, to time stretch quantitative phase imaging to measure biophysical attributes including protein concentration, optical loss, and morphological features of single cells at an ultrahigh flow rate and in a label-free fashion. These attributes differ widely among cells and their variations reflect important information of genotypes and physiological stimuli.
  • the multiplexed biophysical features thus lead to information-rich hyper-dimensional representation of the cells for label-free classification with high statistical precision.
  • the present disclosure also provides improved accuracy, repeatability, and balance between sensitivity and specificity of label-free cell classification by a novel machine learning pipeline, which harnesses the advantages of multivariate supervised learning, as well as unique training by evolutionary global optimization of the receiver operating characteristic (ROC). This approach is demonstrated both on white blood cells versus cancer cells and on Chlamydomonas reinhardtii algal cells.
  • Broadband optical pulses from a mode-locked laser were first conditioned in fiber optics and then spatially dispersed in free-space optics with a pair of reflection diffraction gratings, creating 1-D "rainbow flashes" (refer to FIG.2A through FIG.2C).
  • Each of the rainbow flashes was composed of all the wavelength components distributed laterally over the field of view. These flashes illuminated the target as in traditional photography, but in addition, rainbow flashes targeted different spatial points with distinct colors of light, resulting in space-to-spectrum encoding.
  • Rainbow pulses were then split into the two arms of a Michelson interferometer.
  • An amplified time-stretch system utilizing a low-noise distributed Raman amplifier within the dispersive fiber provided net optical gain while the pulses were stretched in time.
  • An ultrafast single-pixel photodetector transformed instantaneous optical power into an electrical signal, and subsequently an analog-to-digital converter (ADC) sampled and quantized the signal.
  • Acquired data are passed down to processing stages for 'big data analytics'. It should be appreciated that the term 'big data analytics' generally refers to the process of examining large data sets to uncover hidden patterns, unknown correlations, and other useful information.
  • the photodetected time-stretched pulses are converted to analytic signals using Hilbert transformation and the intensity and phase components are extracted.
  • the phase component is a fast oscillating fringe (carrier frequency), caused by the interference of the linearly chirped pulses arriving from the reference and sample arms in the Michelson interferometer. Acting as a radio-frequency (RF) carrier whose frequency is set by the user-adjusted arm length mismatch, the fringe frequency is modulated when the optical path length in the sample arm is changed by the arrival of a cell. This frequency shift and the accompanying phase change are used to measure the optical path length of the cell (see method: Coherent Detection and Phase Extraction).
  • the phase profile contains the phase shift induced by the cell plus a term increasing linearly with time, corresponding to the fringe (beat) frequency.
  • the second component in the waveform is a lower frequency envelope corresponding to temporal shape of the optical pulse. The amplitude of this envelope provides information about optical loss caused by transparency, surface roughness, and inner organelle complexity (see later section on Cell Transmittance Extraction).
  • FIG.3A and FIG.3B illustrate optical phase and optical loss images for a white blood cell line (OT-II) in FIG.3A and a colon cancer cell line (SW-480) in FIG.3B.
  • the optical phase image is shown across the top, under which is the optical loss image.
  • the scale on these images is shown with a bar at the left side of each, representing 100 μm.
  • the decomposed components of sequential line scans form pairs of spatial maps, namely, optical phase and loss images as created according to the description herein of Image Reconstruction. These images are utilized to obtain biophysical fingerprints of the cells.
  • raw images are fused and transformed into a suitable set of biophysical features, as described below, which the deep learning model further converts into learned features for improved classification.
  • Diameter-RB: Diameter along the interrogation rainbow. Unlike Diameter-FL, it is not sensitive to flow rate fluctuations.
  • Diameter-FL: Diameter along the flow direction. It is sensitive to flow rate fluctuations, but can be a candidate parameter for monitoring flow speed and channel condition.
  • Tight Area: Total number of pixels in the segmented region in the phase image.
  • Perimeter: Total number of pixels around the boundary of each segmented region.
  • Circularity: 4π × Area / Perimeter², the standard circularity measure (computed in the sketch following this list).
  • Major Axis: Treating the cell as elliptical in the lateral imaging plane, the length of the major axis of the ellipse having the same normalized second central moment as the cell.
  • Orientation: Angle between the flow direction and the major axis of the cell's elliptical shape.
  • Loose Area: Total number of pixels in the expanded segmented region used for measurement of the pixel intensities.
  • Median Radius: The median distance from any pixel in the object to the closest pixel outside of the object.
  • OPD-1: Integrated optical path length difference within the entire segmented area (cell), calibrated by the power distribution within different wavelength components of the incident laser pulses.
  • OPD-2: Integrated optical path length difference within the entire segmented area (cell). In addition to the calibration of OPD-1, it is calibrated by the pulse-to-pulse fluctuations within the detection window.
  • Refractive index: The mean refractive index difference between the object and the surrounding liquid (buffer solution), calculated based on OPD-2 and the size measurement (see details in the Methods section). The refractive index difference for cells is proportional to their protein concentration.
  • Absorption-1: Mean absorption coefficient within the entire segmented area (cell). It is calibrated by the power distribution within different wavelength components of the incident laser pulses and by the pulse-to-pulse fluctuations within the detection window. This parameter corresponds to an absorption-dominant model for the cell.
  • Absorption-2: Mean absolute absorption coefficient within the entire segmented area (cell). It is calibrated in the same manner as Absorption-1 and likewise corresponds to an absorption-dominant model for the cell.
  • Scattering-1: Mean optical loss within the entire segmented area (cell). It is calibrated by the power distribution within different wavelength components of the incident laser pulses and by the pulse-to-pulse fluctuations within the detection window. This parameter corresponds to a scattering-dominant model for the cell.
  • Scattering-2: Mean absolute optical loss within the entire segmented area (cell). It is calibrated in the same manner as Scattering-1 and likewise corresponds to a scattering-dominant model for the cell.
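  • By way of illustration, several of the morphological features above can be computed from a binary segmentation mask; this is a minimal sketch, and the function name (morphological_features) and the choice of scikit-image are assumptions rather than the disclosed pipeline.

```python
import numpy as np
from skimage import measure

def morphological_features(mask):
    """Compute a few of the listed features (Tight Area, Perimeter,
    Circularity, Major Axis, Orientation) for one segmented cell."""
    props = measure.regionprops(mask.astype(int))[0]
    area = props.area            # Tight Area: pixel count of the region
    perimeter = props.perimeter  # boundary length estimate in pixels
    return {
        "tight_area": area,
        "perimeter": perimeter,
        "circularity": 4.0 * np.pi * area / perimeter**2,  # 4*pi*A/P^2
        "major_axis": props.major_axis_length,  # ellipse with same moments
        "orientation": props.orientation,       # angle to image axis (rad)
    }
```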
  • the optical loss images of the cells are affected by the attenuation of multiplexed wavelength components passing through the cells.
  • the attenuation itself is governed by the absorption of the light in cells as well as the scattering from the surface of the cells and from the internal cell organelles.
  • the optical loss image is derived from the low frequency component of the pulse interferograms.
  • the optical phase image is extracted from the analytic form of the high frequency component of the pulse interferograms using Hilbert Transformation, followed by a phase unwrapping algorithm. Details of these derivations are described in a later section.
  • the feature extraction operates on optical phase and loss images simultaneously, including object detection, segmentation, and feature measurement, as well as clump identification, noise suppression, and so forth.
  • the average refractive index, used as a measure of protein concentration, is obtained by dividing the integral of the optical path length by the cell volume. Since cells in suspension relax to a spherical shape (due to surface tension), an independent measure of cell diameter can be obtained from its lateral dimension for volume estimation.
  • the large data set captured by TS-QPI provides sufficient statistical characteristics for performing cell analysis based on biophysical features. Since cells from even the same line or tissue exhibit variations in size, structure, and protein expression levels, high accuracy classification can only be achieved by a model tolerant to these intrinsic variations. On the other hand, the feature extractor must reflect the intricate and tangled characteristics caused by extrinsic variations, for example in drug screening applications.
  • a total of sixteen features are chosen (although a different number or selection can be used without departing from the invention) among the features extracted from fusion of optical phase and loss images of each cell. Features that are highly correlated do not provide unique information.
  • FIG.4A and FIG.4B illustrate a heat map of the pairwise correlation matrix among these features in FIG.4A, and with a ranking of sixteen biophysical features based on their area-under-curve (AUC) in single- feature classification as seen in FIG.4B.
  • a 'heat map', or 'heatmap', is a graphical representation of data (2D/3D) in which individual values are represented as colors. Colors are not visible in these images due to the limitations of the patent application process.
  • the diagonal elements of the matrix indicate a correlation of each feature with itself, that is to say an autocorrelation.
  • there is a box toward the upper left marked as box "1", with a box "2" and box "3" shown on the diagonal from box "1".
  • the subset of features in box 1 indicates high correlation among morphological features.
  • the subset of features in box 2 and box 3 are correlated as they are mainly related to optical phase shift and optical loss, respectively.
  • the bars in FIG.4B show the performance of the listed morphological parameters, arranged in descending order of single-feature classification accuracy: diameter along the interrogation rainbow, tight cell area, loose cell area, perimeter, major axis length, median radius, diameter along the flow direction, orientation, and circularity.
  • morphology parameters contain the bulk of the information, but other biophysical features can contribute to improved performance of label-free cell classification.
  • Another group of bars show optical phase shift features, including optical path length differences (OPD- 1, OPD-2) and refractive index differences.
  • Another group of bars indicate optical loss features representing absorption (Absorption-2, Absorption-1) and scattering (Scattering-1, Scattering-2) by the cell.
  • the best performing features (depicted as Diameter-RB, OPD-1, and Absorption-2) are drawn one from each of the three feature categories.
  • the features are coded (e.g., color coded in the original image) into three categories: morphology, optical phase, and optical loss, to describe the main type of information provided by each.
  • the figure provides valuable insight into the relative importance of each category of cell features and suggests that morphological features carry the most information about cells, but at the same time, significant additional information is contained in optical phase and loss measurements.
  • FIG.5 illustrates an example machine learning pipeline embodiment 130 as a neural network.
  • the neural network maps input features by a chain of weighted sum and nonlinear activation functions into learned feature space, as convenient for classification.
  • This deep neural network (DNN) is globally trained utilizing area under the curve (AUC) of the receiver operating characteristics (ROC).
  • Each ROC curve corresponds to a set of weights for connections to an output node, generated by scanning the weight of the bias node.
  • the training process maximizes AUC, pushing the ROC curve toward the upper left corner, which equates to improved sensitivity and specificity in classification.
  • Information from the quantitative optical phase and loss images is fused 132 to extract multivariate biophysical features of each cell, which are fed into a fully-connected neural network.
  • an image fusion 132 is mapped into a first neural net layer of biophysical feature space 134, which is coupled to multiple hidden layers 136, followed by a decision making layer 138 interoperating with AUC-based global training 140.
  • neural networks are a flexible and powerful bioinspired learning model, which perform layers of nonlinear feature transformations, learned from the training data.
  • the transformations morph the input data with weighted sums and nonlinear activation functions into feature spaces more suitable for classification.
  • the unique feedforward neural network learning model as seen in the figure is globally trained by the objective of improving receiver operating characteristic (ROC).
  • the learning algorithm introduced here maximizes the area under ROC curve (AUC), which is a global indicator of the classifier performance on the entire training dataset.
  • For the purpose of end-to-end supervised learning with AUC, whose gradient is not well-behaved, a heuristic genetic algorithm (GA) is employed by way of example and not limitation, which is resilient to discontinuities of the cost function and to being trapped in local minima during optimization. A minimal sketch of such an AUC-driven GA follows.
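The sketch below assumes numpy and scikit-learn; the function names (ga_maximize_auc, predict) and all hyperparameters (population size, generations, mutation scale) are illustrative assumptions, not the disclosed settings.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ga_maximize_auc(predict, n_weights, X, y, pop=64, gens=200, sigma=0.1, seed=0):
    """Evolve a flat weight vector for a scoring function `predict(w, X)`
    so that the AUC over the whole training set is maximized; no gradient
    of the AUC is ever required."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, size=(pop, n_weights))
    for _ in range(gens):
        fitness = np.array([roc_auc_score(y, predict(w, X)) for w in population])
        elite = population[np.argsort(fitness)[-pop // 4:]]        # selection
        parents = elite[rng.integers(0, len(elite), size=(pop, 2))]
        mix = rng.random((pop, n_weights)) < 0.5                   # uniform crossover
        population = np.where(mix, parents[:, 0], parents[:, 1])
        population += sigma * rng.normal(size=population.shape)    # mutation
    fitness = np.array([roc_auc_score(y, predict(w, X)) for w in population])
    return population[np.argmax(fitness)]
```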
  • the network is composed of multiple hidden layers, each of which performs a linear combination on its inputs from the previous layer and applies a nonlinear function to the weighted sums.
  • the output of node j in layer l, denoted a_j^(l), is generated from the outputs of the previous layer as a_j^(l) = φ(Σ_i w_ij^(l) a_i^(l-1) + b_j^(l)), where φ is the activation function, w_ij^(l) are the connection weights, and b_j^(l) is the bias of the node.
  • examples of the nonlinear activation function φ include the logistic sigmoid function σ(a) = 1/(1 + e^(-a)), the hyperbolic tangent function tanh(a), and, as commonly used in deep learning, the rectified linear unit (ReLU), φ(a) = max(0, a).
  • ReLU typically speeds up the supervised learning process of deep neural networks by inducing sparsity and preventing vanishing gradient problems.
  • the only exception is the output node, where the logistic sigmoid function is utilized as the activation function.
  • the deep neural networks (DNNs) in these tests are described, by way of example and not limitation, as having three hidden layers with 48 ReLUs in each, but the DNNs may be configured with differing numbers of layers and ReLUs in each. A sketch of this forward pass follows.
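The sketch below assumes numpy; the layer sizes follow the example configuration (16 input features, three hidden layers of 48 ReLUs, one sigmoid output), and the function name (forward) and flat weight layout are assumptions chosen to pair with the genetic-algorithm sketch above.

```python
import numpy as np

def forward(weights, X, layers=(16, 48, 48, 48, 1)):
    """Feedforward pass: weighted sums plus nonlinear activations, with
    ReLU hidden units and a logistic sigmoid on the single output node.
    `weights` is one flat vector of length sum((n_in + 1) * n_out)."""
    a, idx = X, 0
    for i in range(len(layers) - 1):
        n_in, n_out = layers[i], layers[i + 1]
        W = weights[idx:idx + n_in * n_out].reshape(n_in, n_out)
        idx += n_in * n_out
        b = weights[idx:idx + n_out]
        idx += n_out
        z = a @ W + b                      # weighted sum for this layer
        if i < len(layers) - 2:
            a = np.maximum(0.0, z)         # ReLU hidden activation
        else:
            a = 1.0 / (1.0 + np.exp(-z))   # sigmoid output node
    return a.ravel()
```

With the default layer sizes, the flat weight vector passed as n_weights to the GA sketch above would have (16+1)·48 + (48+1)·48 + (48+1)·48 + (48+1)·1 = 5569 entries.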
  • the AUC parameter serves as an effective analysis metric for finding the best classifier collection and has been proven to be advantageous over the mean square error for evaluating learning algorithms.
  • OT-II hybridoma T cells were utilized as a model for normal white blood cells, and SW-480 epithelial colon cancer cells as the cancer cell model. The features previously described were measured by the disclosed TS-QPI system for the aforementioned cells.
  • FIG.6 illustrates three of the previously described features for OT-II and SW-480 models, in a three-dimensional (3D) scatter plot, attributed to size, protein concentration, and attenuation, determined from TS-QPI.
  • the 2D projections on the three orthogonal planes are also shown.
  • the cell diameter along the rainbow (Diameter-RB) is used as a size indicator, while the cell protein concentration corresponds to the mean refractive index difference of the cell (Refractive index feature).
  • the attenuation is a feature describing the optical intensity loss caused by cell absorption (Absorption-1 feature).
  • Comparison of the 3D and 2D scatter plots reveals that additional biophysical features serve to classify the cell types more accurately; it is clear that additional dimensions improve the ability to distinguish between the cell types.
  • FIG.7A through FIG.7C show progress in label-free classification depicted by balanced accuracy in FIG.7A, sensitivity and specificity in FIG.7B, and principal component analysis in FIG.7C.
  • a five-fold cross- validation methodology is applied on the dataset to split data points into training, validation, and test subsets.
  • In FIG.7A the progress in label-free classification is depicted by balanced accuracy as the learning model evolves over genetic algorithm (GA) generations.
  • a training process of the neural network leads to improvements in classification accuracy over generations of the genetic algorithm.
  • the multivariate analysis uses all 16 biophysical features extracted from the TS-QPI quantitative images (uppermost set of curves). It will be appreciated that additional, or fewer, biophysical features, along with different selections and/or definitions of these biophysical features, can be similarly utilized without departing from the teachings of the present disclosure.
  • Also shown is the training process using three single features, seen in the lower curve sets: morphology (Diameter-RB), optical phase (OPD-1), and optical loss (Absorption-2). The values represent average balanced accuracy among training datasets at the end of optimization.
  • the final achievable accuracy by multivariate classification is considerably higher than that of single features.
  • the balanced accuracy curves are shown in the figure based on several single features: cell diameter for morphology is depicted in the second set of curves seen near the 80% scale marking, integral of a cell’s optical path difference for optical phase information in the third set down adjacent the 70% scale marking, and cellular absorption for optical loss in near-infrared window seen with the curves at the bottom of that figure.
  • receiver operating characteristic (ROC) curves for each fold are generated based on the test subsets, and reveal the superior sensitivity and specificity of the multivariate classifier. Also, the small variations of the ROC curves among different folds show the consistency of the classification performance across different test datasets.
  • the upper set of curves here indicates Multivariate Features, with the next highest peaking set being Morphology, followed by Optical Phase, and then Optical loss. For comparison purposes a diagonal is also seen in the figure which statistically represents results from a random guess classification.
  • FIG.7C illustrates a visualization of the hyperspace decision boundary, with OT-II and SW-480 data points shown in the first and second principal component analysis (PCA) components.
  • Microalgae are considered one of the most promising feedstocks for biofuels.
  • the productivity of these photosynthetic microorganisms in converting carbon dioxide into useful carbon-rich lipids greatly exceeds that of agricultural crops.
  • Worldwide, research and demonstration programs are being carried out to develop the technology needed to expand algal lipid production as a major industrial process. Selecting high-yield microalgae with fast growth rates is essential in the biofuel production industry. Because algae differ greatly in size and structure, cell size alone provides insufficient information for cell classification.
  • FIG.8A depicts a 3D scatter plot showing the three principal biophysical features for the two algal populations, distinguished by their lipid content, as determined by TS-QPI.
  • the optical loss category of features plays a dominant role in label-free classification.
  • This three-dimensional scatter plot, based on size, protein concentration, and attenuation of the cells measured by TS-QPI, is shown with 2D projections for every combination of two features.
  • the inset shows that conventional label-free flow cytometry using forward scattering and side scattering cannot distinguish between high-lipid-content and low-lipid-content algal cells.
  • the disclosed TS-QPI method is significantly more effective in separating the two algae populations.
  • FIG.8B depicts ROC curves for the binary classification of these two algal populations.
  • FIG.9A depicts a learning curve, obtained during tumor cell detection, with the test average across folds shown above the training average across folds, and cross entropy plotted against the number of training points. As the number of training data points increases, the cross entropy of the test dataset decreases and the model performance improves, meaning the classifier is performing more accurately. The trend is opposite for the training dataset: the training error increases with a larger number of training examples, because it is more difficult for the learning model to fit many training data points than a few, so the fitting error accumulates.
  • the discrepancy between the training and test errors is the generalization error of the learning model. Notice that beyond a sufficient number of training points the generalization error does not decrease further, and the learning curves converge to their ultimate performances. In other words, only a finite number of training data points is required to reach the target achievable performance for the deep learning model used in this example.
  • FIG.9B illustrates a comparison between multiple machine learning classification techniques based on the biophysical features extracted from the label-free cell images captured by TS-QPI.
  • the disclosed AUC-based deep learning model (DNN + AUC) has both the highest accuracy and consistency compared with support vector machine (SVM) with Gaussian kernel, logistic regression (LR), naive Bayes (Bayes), and conventional deep neural network trained by cross entropy and backpropagation (DNN).
  • the interquartile range of the balanced accuracy (shown with box plot) is the smallest for the regularized AUC-based deep learning model, which confirms its consistency and repeatability are the best among learning methods.
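  • By way of example and not limitation, the baseline portion of this comparison can be sketched with scikit-learn as follows; the hyperparameters are illustrative assumptions, and the AUC-trained DNN is omitted here (it can be evaluated the same way using the earlier forward-pass and GA sketches).

```python
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y):
    """Balanced accuracy of baseline classifiers on the 16 biophysical
    features, under five-fold cross-validation."""
    models = {
        "SVM (Gaussian kernel)": SVC(kernel="rbf"),
        "Logistic regression": LogisticRegression(max_iter=1000),
        "Naive Bayes": GaussianNB(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```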
  • Time-stretch quantitative phase imaging (TS-QPI) is capable of high-throughput, blur-free, label-free imaging of cells in flow. TS-QPI relies on spectral multiplexing to simultaneously capture both phase and intensity quantitative images in a single measurement, generating a wealth of information on each individual cell and eliminating the need for labeling with biomarkers.
  • the information content of these images has been summarized in a set of 16 features for each cell, and classification performed in the hyperdimensional space composed of these features.
  • Applications of various learning algorithms were demonstrated, including deep neural networks, support vector machines, logistic regression, naive Bayes, as well as a new training method based on the area under the ROC curve. The results from two experimental demonstrations, one on detection of cancerous cells among white blood cells, and another on classification of algal cells by lipid content, confirm the performance of the disclosed approach.
  • the disclosed system opens the way to cellular phenotypic analysis as well as data-driven diagnostics, and thus, is a valuable tool for high-throughput label-free cell screening in medical, biotechnological, and research applications.
  • Free-space laser pulses were linearly polarized with quarter- and half-wave plates, and then spatially dispersed with a pair of reflection diffraction gratings, so that each wavelength component of the collimated beam was positioned at a different lateral point similar to a line flash rainbow.
  • a beam reducer shrank the rainbow beam six times with a pair of 90 degree off-axis parabolic gold-coated mirrors with reflected focal lengths of 152.4 mm and 25.4 mm, respectively.
  • a 15 degree off-axis parabolic gold-coated mirror with 635 mm reflected focal length and a long working-distance objective lens with 0.4 numerical aperture further shrank the rainbow to about 130 μm in width, i.e., the field of view (FOV).
  • Reflective optics with parabolic gold-coated mirrors are utilized in these experimental demonstrations to minimize loss, aberration, and polarization sensitivity.
  • the rainbow pulses pass through the cells and are reflected by the reflective substrate of the microfluidic device.
  • in the reference arm, a dielectric mirror reflected the rainbow with a length mismatch relative to the sample arm, causing spectral interference fringes (seen in FIG.10A).
  • Cells are hydrodynamically focused at the center of the channel flow at a velocity of 1.3 m/s.
  • the reflected pulses from the reference and sample arms were recombined at the beam splitter, compressed by the two diffraction gratings, and coupled back into the fiber. These return pulses were spectrally encoded by the spatial information of the interrogation field of view. They were then redirected by the optical circulator to a Raman-amplified time-stretch dispersive Fourier Transform (TS-DFT) system followed by a 10 Gb/s photodetector.
  • the coherent detection in the disclosed time stretch system uses an unmodulated copy of the original optical input, which is a broadband optical pulse train.
  • the complex field at any specific spatial location within the field of view is a narrowband optical wave.
  • the complex envelope of the input electric field is split into the two arms of the Michelson interferometer at the beam splitter. Here ω is the optical frequency of the input signal, which corresponds to the spatial location x being interrogated by the optical wave at this frequency (i.e., spectral encoding of the object image). The recombined field at the output of the Michelson interferometer can be expressed as E_out(ω) = (E_in(ω)/2)[e^(i2βL) + e^(i2β(L+ΔL))], where β = ω/c is the free-space propagation constant, L is the length of the reference arm, and ΔL is the arm length mismatch between the two arms.
  • after propagation through the dispersive fiber, each frequency component ω, or equivalently each wavelength λ, is one-to-one mapped into the time domain. The relative time delay of λ compared to the central wavelength λ0 is defined as τ(λ) = D · z · (λ - λ0), usually called the intra-pulse time delay, where D is the group velocity dispersion, that is, the temporal pulse spreading per unit bandwidth per unit distance traveled, and z is the length of the dispersive fiber. With this mapping, Eq. 8 can be simplified accordingly.
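  • As a worked example of this mapping (with an illustrative dispersion value and fiber length, not the disclosed parameters), a component 1 nm away from the central wavelength arrives about 1 ns earlier or later per 10 km of fiber at 100 ps/nm/km:

```python
# Intra-pulse time delay tau(lambda) = D * z * (lambda - lambda0).
D = -100e-12 / (1e-9 * 1e3)  # GVD of -100 ps per nm per km, in s/(m*m)
z = 10e3                     # 10 km of dispersion compensating fiber
d_lambda = 1e-9              # component 1 nm from the central wavelength

tau = D * z * d_lambda       # relative group delay of that component
print(f"relative delay: {tau * 1e9:.1f} ns")  # prints: -1.0 ns
```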
  • the temporal samples of the energy flux absorbed at the photodetector are the intra-pulse concatenation of spectral samples followed by the inter-pulse concatenation of pulse waveforms.
  • the time-stretched temporal waveform corresponding to each line-scan image consists of two components. The first is a temporal envelope of the time-stretched optical pulse at baseband frequencies, corresponding to the temporal shape of the optical pulse and its deviations caused by the object transmission, as in brightfield microscopy; it provides information about optical loss, for instance light absorption and scattering caused by surface roughness, granularity, and inner cell organelle complexity. The second is a fast oscillating fringe caused by the spectral interference of the recombined pulses from the sample and the reference arms in the Michelson interferometer; this term can be separated by a bandpass filter, and its envelope can be derived by a nonlinear envelope detection technique.
  • a moving minimum/maximum filter was utilized to extract the envelope. After normalization to the envelope, the cosine component of the interference fringe remains.
  • the fringe frequency in our setup is about 4.7 GHz, determined by the optical path length mismatch between the interferometer arms.
  • the instantaneous phase of the fringe can be readily retrieved from the argument of its analytic signal, where arg denotes the argument of a complex number.
  • a one-dimensional phase unwrapping algorithm followed by background phase removal gives the object phase shift, where the background phase corresponds to an empty pulse when no cell is in the field of view.
  • the unwrapping algorithm used in the disclosed processing acts when the absolute phase difference between two consecutive samples of the signal is greater than or equal to π radians, and adds multiples of ±2π to the following samples in order to bring the consecutive sample phase differences into the acceptable range of -π to π.
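  • A minimal sketch of exactly this unwrapping rule (assuming numpy; equivalent in effect to numpy.unwrap) is:

```python
import numpy as np

def unwrap_phase(phi):
    """Whenever two consecutive samples differ by >= pi radians, add the
    appropriate multiple of 2*pi to all following samples so consecutive
    differences fall back into the range -pi to pi."""
    out = np.asarray(phi, dtype=float).copy()
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        if abs(d) >= np.pi:
            out[i:] -= 2.0 * np.pi * np.round(d / (2.0 * np.pi))
    return out
```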
  • phase derived by Hilbert transformation should be corrected to eliminate the artifacts caused by the intensity variations induced by the passing cells.
  • Most cells of interest in clinical or industrial applications have a diameter of about 3 to 40 μm when suspended in fluid.
  • the time duration of the instantaneous intensity change induced by the single cells in each laser pulse is approximately 0.6 ns to 8.3 ns, which will generate baseband frequency components up to about 1.6 GHz.
  • the frequency of intensity variations is small (less than about 1.6 GHz), and in this scenario, the disclosed method remains robust to separate the two electrical spectral components for optical loss and phase.
  • One of the advantages of TS-QPI is its ability to extract the cell transmittance without prior knowledge of the transmittance of the solution, that of the beam-splitter, or the reflectance of the substrate of the microfluidic channel. During measurements when there is no cell in the field of view, Eq. 11 simplifies accordingly.
  • the signal from only the reference arm can be recorded by blocking the sample arm.
  • the disclosed method provides for reconstructing both quantitative brightfield and phase-contrast images simultaneously from single-shot frequency-multiplexed interferometric measurements.
  • the envelope and phase of the time-domain signal were first mapped into a series of spatial information, forming line-scan bright-field and phase-contrast images, each illuminated by one optical pulse. This is because within each optical pulse the spatial information is mapped one-to-one into the spectral domain, and the spectrum is in turn stretched in time, with each frequency component delayed by its relative group delay with respect to the central wavelength.
  • These line-scan images from successive pulses were then cascaded into a two-dimensional image, where the second dimension is the spatial mapping of time lapse based on the object flow speed.
  • the optical path length difference image can be calculated from the phase shift image.
  • an average refractive index contrast for the cell, which corresponds to the average protein concentration of the cell, can be derived by dividing the integrated optical path length difference by the volume of the cell, V = π d³ / 6, obtained from its lateral diameter d under the assumption of a spherical shape.
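  • A minimal sketch of this calculation (assuming SI units throughout; the function name and arguments are illustrative) is:

```python
import numpy as np

def refractive_index_contrast(opd_image, pixel_area, diameter):
    """Mean refractive index difference (a protein-concentration proxy):
    integrated optical path length difference divided by the volume of a
    sphere of the cell's lateral diameter."""
    integrated_opd = opd_image.sum() * pixel_area  # integral of OPD over cell
    volume = np.pi * diameter**3 / 6.0             # V = pi * d^3 / 6
    return integrated_opd / volume                 # dimensionless delta-n
```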
  • the capability to identify clumped cells from single large cells greatly reduces the misclassification rate in imaging flow cytometry compared to traditional flow cytometry. Intensity peaks of pixel brightness within each object are used to distinguish clumped objects. The object centers are defined as local intensity maxima in the smoothed image. Retaining outlines of the identified objects helps validate and visualize the algorithm. In the next step, the objects touching the borders of the image, such as the edges of the field of view and data acquisition time window, are discarded. However, the chance of cells showing up at the edges is very low due to hydrodynamic focusing.
  • the disclosed technology is also capable of excluding dust, noise, and debris by neglecting the objects that are too small or have extreme aspect ratios.
  • polystyrene beads had a distribution with a 5.06 μm expected mean and 0.5 μm standard deviation.
  • the broadened standard deviation was within the range of the optical resolution limit and was caused mainly by performing object recognition on resolution-limited images. Due to the limited optical resolution of the setup, the edges of a bead or cell are blurred, generating a distribution of point spread functions in the optical phase and loss images outside of the cell boundaries. In order to maximize the accuracy in morphological, phase, and loss measurements, after object segmentation we expanded the object boundaries by 2.5 μm (the optical resolution of the setup measured by the knife-edge method), which serve as loose boundaries, indicating the area within which the pixel intensities are measured and integrated in phase and loss images.
  • Data cleaning includes two steps. Firstly, Hotelling's T-squared statistic, computed on the Principal Component Analysis (PCA) components, is used to detect and remove outliers.
  • FIG.10A and FIG.10B depict a comparison of the spectral interference pattern (FIG.10A) and the corresponding time-domain waveform (FIG.10B).
  • spectral interference fringes are shown as captured by the TS-QPI apparatus, i.e., the optical spectrum after quantitative phase imaging and before it enters the amplified time-stretch system.
  • FIG.10B shows the output after the spectral interferogram is linearly mapped into time by the disclosed technique. It will be noted that the baseband intensity envelope is slightly modified by the wavelength-dependent gain profile of the Raman amplifier.
  • the insets of FIG.10A and FIG.10B show zoomed-in spectrum and waveform, respectively.
  • the single-shot interferogram measured by Raman-amplified time-stretch dispersive Fourier Transform has a higher signal-to-noise ratio compared to that captured by optical spectrum analyzer.
  • FIG.11A shows the percent of the variance in data explained by each component in the lower portion of the chart.
  • the key observation is that most of the variance can be accounted for by the first two principal components.
  • the upper portion of the plot shows the accuracy for binary classification using each of the principle components.
  • the first component with the highest explained variance is not necessarily the most important component for classification. Therefore, a priori intuition about the physical significance of the features, as in the present case, is superior to PCA in eliminating dimensions that do not provide high value in classification.
  • PCA achieves data compression via dimensionality reduction.
  • PCA components act as the input features for the classification
  • the value at each data point corresponds to the number of PCA components retained in order to achieve that total explained variance.
  • a subset of the PCA components can be used for classification.
  • the classification accuracy improves as the total variance retained in the subset of PCA components increases. Nearly 90% accuracy can be achieved with the first three PCA components.
  • the small deviations among accuracies of data points with the same number of PCA components are due to variations in random data partitioning.
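  • By way of illustration, this accuracy-versus-retained-variance experiment can be sketched as follows; logistic regression is used here as a stand-in classifier (the document does not tie this curve to a specific model), and scikit-learn is assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def accuracy_vs_pca_components(X, y, max_components=16):
    """Classify on the first k principal components of the biophysical
    features and report balanced accuracy as retained variance grows."""
    pca = PCA(n_components=max_components).fit(X)
    Z = pca.transform(X)
    cum_var = np.cumsum(pca.explained_variance_ratio_)
    for k in range(1, max_components + 1):
        acc = cross_val_score(LogisticRegression(max_iter=1000), Z[:, :k], y,
                              cv=5, scoring="balanced_accuracy").mean()
        print(f"{k:2d} components, {100 * cum_var[k - 1]:5.1f}% variance: "
              f"balanced accuracy {acc:.3f}")
```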
  • FIG.12A and FIG.12B illustrate training, test, and validation subsets across a number of iterations and folds (FIG.12A), and network error changes in response to changes in regularization parameter (FIG.12B).
  • the k-fold cross-validation implemented here splits data points into training, validation, and test subsets as seen in FIG.12A.
  • one fold is used for fine tuning the learning model (validation dataset)
  • another fold is used for evaluation of the final results (test dataset), while the rest of the data points are used for training (training dataset).
  • one fold is used as test data and one for validation, while the other folds are used during the training process.
  • the performance of the network is evaluated on the validation data to fine-tune the neural network architecture and the regularization parameter.
  • the final reported results are an aggregate of the outcomes from the test datasets.
  • too little or too much regularization increases network error due to overfitting or underfitting, respectively.
  • a suitable regularization parameter balances the trade-off between overfitting (variance) and underfitting (bias) and minimizes the cross entropy of the validation dataset. Therefore, there is a suitable range of regularization parameter for each learning model.
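A minimal sketch of this splitting and tuning scheme follows: in each iteration one fold serves as the test set, the next fold as the validation set used to choose the regularization parameter by minimizing validation cross entropy, and the remaining folds are used for training. The logistic-regression stand-in for the neural network and the candidate regularization values are illustrative assumptions.

    # Hypothetical sketch of k-fold rotation with train/validation/test roles
    # and selection of the regularization parameter on the validation fold.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss

    def rotated_kfold(X, y, k=10, lambdas=(0.01, 0.1, 1.0, 10.0)):
        folds = list(KFold(n_splits=k, shuffle=True).split(X))
        test_scores = []
        for i in range(k):
            test_idx = folds[i][1]
            val_idx = folds[(i + 1) % k][1]            # next fold -> validation
            train_idx = np.setdiff1d(np.arange(len(y)),
                                     np.r_[test_idx, val_idx])
            # Pick the regularization strength that minimizes validation
            # cross entropy (log loss); C in scikit-learn is 1/lambda.
            best = min(lambdas, key=lambda lam: log_loss(
                y[val_idx],
                LogisticRegression(C=1.0 / lam, max_iter=1000)
                .fit(X[train_idx], y[train_idx])
                .predict_proba(X[val_idx])))
            model = LogisticRegression(C=1.0 / best, max_iter=1000)
            model.fit(X[train_idx], y[train_idx])
            test_scores.append(model.score(X[test_idx], y[test_idx]))
        return float(np.mean(test_scores))             # aggregate over test folds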
  • the disclosed deep learning technique uses the area under the ROC curve (AUC) as the cost function.
  • because the AUC is calculated based on the entire dataset rather than sample by sample, a genetic algorithm is employed as a global optimization method.
  • the disclosed technique has inherently higher accuracy and repeatability compared to conventional deep learning and other machine learning techniques.
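Because the AUC is a rank statistic computed over the whole dataset, it does not decompose into per-sample gradients, which is why a derivative-free global optimizer such as a genetic algorithm is a natural fit. The following is a deliberately simple, hypothetical sketch of GA weight search with AUC as the fitness value; the population size, mutation scale, and single-layer linear model are illustrative assumptions rather than the disclosed architecture.

    # Hypothetical sketch: genetic-algorithm search over classifier weights
    # with area under the ROC curve (AUC) as the fitness function.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def ga_train(X, y, pop=50, gens=200, mut=0.1, elite=10, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[1] + 1                         # weights plus bias
        P = rng.normal(size=(pop, n))              # initial population

        def fitness(pop_matrix):
            scores = X @ pop_matrix[:, :-1].T + pop_matrix[:, -1]
            return np.array([roc_auc_score(y, s) for s in scores.T])

        for _ in range(gens):
            order = np.argsort(fitness(P))[::-1]
            parents = P[order[:elite]]             # keep the fittest
            # Crossover: mix random parent pairs gene-by-gene.
            pairs = rng.integers(0, elite, size=(pop - elite, 2))
            mask = rng.random((pop - elite, n)) < 0.5
            children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
            children += rng.normal(scale=mut, size=children.shape)  # mutation
            P = np.vstack([parents, children])
        return P[np.argmax(fitness(P))]            # best weight vector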
  • Field of view is the area covered by the interrogation rainbow when the rainbow pulses hit the imaging plane.
  • the rainbow pulse width is determined by the optical bandwidth selected from the laser source, the magnification factor of the objective lens, the focal lengths of the other lenses and parabolic mirrors, and the dimensions and blaze angles of the diffraction gratings.
  • the resolution of phase measurement along the axial direction is determined by the effective number of bits (ENOB) of the digitizer and is affected by the noise of the laser source. Since pulse-to-pulse intensity and phase fluctuations are small, noise from the laser source is not the limiting factor in these phase measurements. Supposing the ENOB of the digitizer is N, the minimum detectable optical path length difference, δL_min, can be estimated as δL_min ≈ λc / (2π · 2^N), where λc is the center wavelength.
  • the ENOB of the analog-to-digital converter is five, by way of example and not limitation.
  • the OPD resolution along the axial direction is then about 8.0 nm, corresponding to a refractive index difference down to the order of 0.001 for cellular-level measurements.
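Assuming the reconstructed estimate δL_min ≈ λc / (2π · 2^N) above, the quoted figure can be checked directly; with N = 5 and a center wavelength near 1580 nm (an assumed value for this setup), the estimate lands at roughly 8 nm:

    # Worked check of the OPD resolution estimate; lambda_c is assumed.
    import math

    lambda_c = 1580e-9      # assumed center wavelength (m)
    enob = 5                # effective number of bits of the digitizer
    delta_L = lambda_c / (2 * math.pi * 2 ** enob)
    print(f"minimum detectable OPD ~ {delta_L * 1e9:.1f} nm")  # ~7.9 nm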
  • the polydimethylsiloxane (PDMS) microfluidic channel according to at least one example embodiment is custom-designed to fit into the reflective optics design. Cells are hydrodynamically focused at the center of the channel, flowing at a velocity of 1.3 m/s.
  • the dimensions of the channel were chosen as 200 μm (width) × 25 μm (height) so that the cells are imaged within the depth of focus with a narrow lateral distribution.
  • the size of the entire PDMS channel in the example embodiment is optimized to fit on a 2-inch diameter dielectric mirror with sufficient space at the edges to achieve strong bonding.
  • the thickness of the channel top layer is optimized for stabilizing the PEEK tubes and ensuring reliable performance.
  • FIG.13A and FIG.13B illustrate an example embodiment of a PDMS microfluidic channel fabricated using standard soft lithography.
  • in FIG.13A, the PDMS microfluidic channel is seen mounted on a highly reflective surface with a near-infrared dielectric coating.
  • the microfluidic device consists of a hydrodynamic focusing region and an imaging region targeted by the interrogation rainbow flashes in TS-QPI system.
  • a sample solution is seen with suspended cells fed into the channel through the sample inlet, and deionized water as the sheath fluid injected through the sheath inlet.
  • the sheath pressure focuses the sample at the center of the channel by narrowing its flow width from 200 μm to about 40 μm with a sheath-to-sample volume ratio of 3:1.
  • in FIG.13B, the pattern of the mask is shown that was utilized to imprint the microfluidic channel design on a photoresist-coated silicon wafer.
  • the circles are inlet and outlet reservoirs.
  • the mask was designed in
  • a 4-inch silicon wafer was spin-coated with a 75 μm thick layer of negative photoresist (SU-8, MicroChem®) and exposed under the mask using an aligner. After post-exposure baking, the wafer was developed at room temperature, rinsed with isopropyl alcohol (IPA), and placed in a petri dish.
  • IPA isopropyl alcohol
  • a PDMS mixture (Sylgard® 184 Silicone Elastomer, Dow Corning®) was poured onto the patterned wafer, degassed in a vacuum chamber for 30 min, and cured at 80 °C for one hour. Once cured, the PDMS channel was cut out and peeled off the master wafer.
  • a 1.25 mm diameter hollow needle was utilized to punch the inlet and outlet holes.
  • the punched PDMS channel was then cleaned with a nitrogen gun and Magic™ tape (3M), treated with oxygen plasma (Enercon® Dyne-A-Mite™ 3D Treater) for 2 min, and bonded to a 2-inch diameter broadband dielectric mirror (Thorlabs® BB2-E04) to obtain high reflectance from the channel substrate in the near-infrared spectral window.
  • microtubes PE-50 tubing, 0.023 in × 0.038 in
  • steel catheter couplers Instech®, 22 ga × 15 mm
  • the Chlamydomonas reinhardtii strains used were cw15 (nit1 NIT2 mt+) and sta6 (cw15 nit1 NIT2 arg7-7 sta6-1::ARG7 mt+), available as CC-4568 and CC-4348, respectively, from the Chlamydomonas Resource Center (CRC).
  • CRC Chlamydomonas Resource Center
  • TAP tris-acetate-phosphate
  • Embodiments of the present technology may be described herein with reference to flowchart illustrations of methods and systems according to embodiments of the technology, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products.
  • each block or step of a flowchart, and combinations of blocks (and/or steps) in a flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code.
  • any such computer program instructions may be executed by one or more computer processors, including without limitation a general purpose computer or special purpose computer, or other programmable processing apparatus to produce a machine, such that the computer program instructions which execute on the computer processor(s) or other programmable processing apparatus create means for implementing the functions specified in the block(s) of the flowchart(s).
  • blocks of the flowcharts, and procedures, algorithms, steps, operations, formulae, or computational depictions described herein support combinations of means for performing the specified function(s), combinations of steps for performing the specified function(s), and computer program instructions, such as embodied in computer-readable program code logic means, for performing the specified function(s).
  • each block of the flowchart illustrations, as well as any procedures, algorithms, steps, operations, formulae, or computational depictions and combinations thereof described herein can be implemented by special purpose hardware-based computer systems which perform the specified function(s) or step(s), or combinations of special purpose hardware and computer-readable program code.
  • these computer program instructions, as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s).
  • the computer program instructions may also be executed by a computer processor or other programmable processing apparatus to cause a series of operational steps to be performed on the computer processor or other programmable processing apparatus to produce a computer-implemented process such that the instructions which execute on the computer processor or other programmable processing apparatus provide steps for implementing the functions specified in the block(s) of the flowchart(s), procedure(s), algorithm(s), step(s), operation(s), formula(e), or computational depiction(s).
  • the terms program and program executable refer to one or more instructions that can be executed by one or more computer processors to perform one or more functions as described herein.
  • the instructions can be embodied in software, in firmware, or in a combination of software and firmware.
  • the instructions can be stored local to the device in non-transitory media, or can be stored remotely such as on a server, or all or a portion of the instructions can be stored locally and remotely. Instructions stored remotely can be downloaded (pushed) to the device by user initiation, or automatically based on one or more factors.
  • the terms processor, computer processor, central processing unit (CPU), and computer are used synonymously to denote a device capable of executing instructions and communicating with input/output interfaces and/or peripheral devices, and these terms are intended to encompass single or multiple devices, single-core and multi-core devices, and variations thereof.
  • present disclosure encompasses multiple embodiments which include, but are not limited to, the following:
  • a computer-based automated method of sorting particles for flow cytometry comprising: (a) training, by a training device, an artificial neural network (ANN) according to parameters derived from digitally observed features of known particles that belong to at least two known particle classes; (b) tuning, by the training device, weights of the ANN as a function of at least one property of receiver operating characteristics (ROC) of the ANN.
  • ANN artificial neural network
  • ROC receiver operating characteristics
  • the step of heuristically optimizing the weights of the ANN includes calculating new weights via implementations of genetic algorithms that model the weights of the ANN, where the new weights are selected based on a fitness value derived from the property of the ROC.
  • the step of tuning the weights of the ANN includes calculating new weights via implementations of genetic algorithms that model the weights of the ANN, where the new weights are selected based on a fitness value derived from the property of the ROC.
  • the at least two known particle classes include at least two of the following: a healthy cell, a diseased cell, a target tissue cell, a type of cancer cell, a circulating tumor cell (CTC), and a blood cell.
  • the step of configuring the sorting device with the ANN includes loading the ANN into a non-transitory memory of the sorting device.
  • the step of configuring the sorting device with the ANN includes programming a programmable gate array of the sorting device.
  • the programmable gate array includes at least one of the following: a PLA and an FPGA.
  • observed features include a measured interference.
  • observed features include at least one of the following morphology features: a shape, a diameter transverse to flow, a diameter parallel to flow, a perimeter, a circularity, a major axis, an orientation, a loose area, and a median radius.
  • observed features include at least one of the following optical phase features: an optical path distance with respect to power distribution, an optical path distance with respect to pulse-to-pulse fluctuation, and a refractive index.
  • observed features include at least one of the following optical loss features: an absorption with respect to power distribution, an absorption with respect to pulse-to-pulse fluctuation, a mean scattering optical loss, and a mean absolute scattering optical loss.
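Taken together, the morphology, optical phase, and optical loss features enumerated in the preceding bullets form the hyperdimensional feature space on which the classifier operates. A hypothetical container for one particle's extracted features might look like the following; the field names are illustrative, not the patent's identifiers.

    # Hypothetical per-cell feature record, grouped as in the bullets above.
    from dataclasses import dataclass

    @dataclass
    class ParticleFeatures:
        # Morphology
        diameter_transverse: float    # diameter transverse to flow
        diameter_parallel: float      # diameter parallel to flow
        perimeter: float
        circularity: float
        major_axis: float
        orientation: float
        loose_area: float             # area within the expanded loose boundary
        median_radius: float
        # Optical phase
        opd_power: float              # optical path distance vs. power distribution
        opd_fluctuation: float        # optical path distance vs. pulse-to-pulse fluctuation
        refractive_index: float
        # Optical loss
        absorption_power: float       # absorption vs. power distribution
        absorption_fluctuation: float # absorption vs. pulse-to-pulse fluctuation
        scattering_mean: float        # mean scattering optical loss
        scattering_abs_mean: float    # mean absolute scattering optical loss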
  • observed particles comprise nanoparticles.
  • observed features are derived from time-stretch quantitative phase imaging of the observed particles.
  • a particle sorting apparatus comprising: (a) an optical path having a plurality of optical elements toward a particle sorting field of view and characterized as having different path lengths for different wavelengths of light; (b) an optical pulse generator that directs a plurality of optical pulses along the optical path, each pulse comprising multiple wavelengths of light and incident on target particles within the particle sorting field of view; (c) a time-stretch amplifier that converts return optical pulses from the target particles into time-stretched optical pulses according to wavelengths of light in the return optical pulse; (d) an optical sensor that converts the time-stretched optical pulses into a digital interferogram that includes optical phase and information loss associated with at least one of the target particles; (e) a particle feature extractor that extracts particle features from the digital interferogram; and (f) a classification module that classifies the target particles according to known classes defined based on known particle features that are derived based on known optical phase and known information loss, and as a function of the extracted particle features.
  • particle features include at least one of the following morphology features derived from the interferogram: a shape, a diameter transverse to flow, a diameter parallel to flow, a perimeter, a circularity, a major axis, an orientation, a loose area, and a median radius.
  • particle features include at least one of the following optical phase features derived from the interferogram: an optical path distance with respect to power distribution, an optical path distance with respect to pulse-to-pulse fluctuation, and a refractive index.
  • particle features include at least one of the following optical loss features derived from the interferogram: an absorption with respect to power distribution, an absorption with respect to pulse-to-pulse fluctuation, a mean scattering optical loss, and a mean absolute scattering optical loss.
  • target particle features further include morphological features of the target cell derived from the interferogram.
  • optical pulse comprises wavelengths of light in the range from 800 nm to 1700 nm.
  • optical pulse comprises a duration of no more than 2 ns.
  • interferogram comprises a duration of no more than 60 ns.
  • interferogram comprises a duration of no more than 30 ns.
  • time-stretch amplifier comprises a Raman amplifier coupled with an optical fiber spool.
  • classification module is coupled with a classification database storing the known particle features correlated with the known classes.
  • classification module includes a neural network trained according to the known particle features, including the known optical phases and the known information loss of classes of known particles.
  • optical phase images and loss images are reconstructed simultaneously from the interferogram.
  • optical phase images and loss images are reconstructed from serial line images of the at least one target particle.
  • particle feature extraction module extracts the target particle features from the reconstructed optical phase images and loss images.
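Since each interrogation pulse encodes one line image, the two-dimensional phase and loss images are assembled by stacking consecutive digitized pulses. A minimal numpy sketch, assuming an integer number of digitizer samples per pulse period, is:

    # Hypothetical sketch: assemble 2D line-scan images from the digitized
    # time-stretched pulse train; samples_per_pulse is an assumed constant.
    import numpy as np

    def stack_line_images(waveform, samples_per_pulse):
        n_lines = len(waveform) // samples_per_pulse
        trimmed = waveform[: n_lines * samples_per_pulse]
        # Each row is one rainbow flash; columns map wavelength to
        # lateral position across the flow channel.
        return trimmed.reshape(n_lines, samples_per_pulse)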
  • biological cell is selected from the group consisting of: a healthy cell, a tumor cell, a blood cell, a liver cell, a pancreatic cell, and a breast cell.
  • particle feature extraction module is calibrated according to fluid characteristics of a fluid flowing through the fluid transport conduit.
  • the image sensor comprises at least one of the following: a photodetector, a CCD sensor, and a CMOS sensor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Dispersion Chemistry (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biotechnology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Bioethics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)

Abstract

This invention concerns a method and apparatus that use deep learning techniques for label-free cell classification and machine vision extraction of particles. A time-stretch quantitative phase imaging (TS-QPI) system enabling high-throughput quantitative imaging using photonic time stretch is described. In at least one embodiment, TS-QPI is integrated with deep learning to achieve record-high accuracy in label-free cell classification. The system captures quantitative optical phase and intensity images and extracts multiple biophysical features from individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is applied to cell classification. The system lends itself particularly well to computer-assisted phenotypic diagnosis and enables an improved understanding of heterogeneous gene expression in cells.
PCT/US2016/053153 2015-09-23 2016-09-22 Apprentissage profond dans le domaine de la classification des cellules non labellisées et extraction de particules par vision par ordinateur WO2017053592A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/928,992 US10593039B2 (en) 2015-09-23 2018-03-22 Deep learning in label-free cell classification and machine vision extraction of particles

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201562222720P 2015-09-23 2015-09-23
US201562222714P 2015-09-23 2015-09-23
US62/222,714 2015-09-23
US62/222,720 2015-09-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/928,992 Continuation US10593039B2 (en) 2015-09-23 2018-03-22 Deep learning in label-free cell classification and machine vision extraction of particles

Publications (1)

Publication Number Publication Date
WO2017053592A1 true WO2017053592A1 (fr) 2017-03-30

Family

ID=58387340

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/053153 WO2017053592A1 (fr) 2015-09-23 2016-09-22 Apprentissage profond dans le domaine de la classification des cellules non labellisées et extraction de particules par vision par ordinateur

Country Status (2)

Country Link
US (1) US10593039B2 (fr)
WO (1) WO2017053592A1 (fr)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194082A (zh) * 2017-05-26 2017-09-22 西北工业大学 基于转基因理论的工业产品智能设计方法
CN108362628A (zh) * 2018-01-11 2018-08-03 天津大学 基于偏振衍射成像流式细胞仪的无标记细胞流式分选方法
CN108802008A (zh) * 2018-09-12 2018-11-13 珠海彩晶光谱科技有限公司 一种利用受激拉曼光谱检测血液中肿瘤细胞的方法和装置
WO2018226863A1 (fr) * 2017-06-06 2018-12-13 University Of Maryland, College Park Analyse de phénotypage mécanique de cellules individuelles pour la détection métastatique
CN109472265A (zh) * 2017-09-07 2019-03-15 彭文伟 一种图像特征提取及描述方法
CN109884657A (zh) * 2019-02-25 2019-06-14 北京化工大学 一种基于光学时间拉伸的高速高通量微粒测速系统
CN110071767A (zh) * 2019-04-03 2019-07-30 电子科技大学 一种基于有限时间拉伸下变频微波信号测频方法及装置
US10386288B2 (en) 2015-12-22 2019-08-20 Canon U.S. Life Sciences, Inc. System and method of label-free cytometry based on Brillouin light scattering
CN110427457A (zh) * 2019-06-28 2019-11-08 厦门美域中央信息科技有限公司 一种基于ann的数据库文本分类中的特征选择方法
WO2019236569A1 (fr) * 2018-06-04 2019-12-12 The Regents Of The University Of California Cytomètre de flux d'imagerie portable à apprentissage profond pour analyse sans marqueur d'échantillons d'eau
WO2020018154A1 (fr) * 2018-07-19 2020-01-23 The Regents Of The University Of California Procédé et système de coloration numérique d'images en phase sans étiquette à l'aide d'un apprentissage profond
WO2020028313A1 (fr) * 2018-07-31 2020-02-06 The Regents Of The University Of Colorado A Body Corporate Systèmes et procédés d'application d'apprentissage automatique pour analyser des images de microcopie dans des systèmes à haut débit
WO2020056118A1 (fr) * 2018-09-12 2020-03-19 Molecular Devices, Llc Système et procédé d'identification sans marqueur et de classification d'échantillons biologiques
US10598594B2 (en) 2015-12-22 2020-03-24 University Of Maryland Cell classification based on mechanical signature of nucleus
WO2020081812A1 (fr) 2018-10-17 2020-04-23 Georgia Tech Research Corporation Systèmes et procédés pour le décodage de signaux coulter multiplexés par code au moyen d'apprentissage automatique
US10732092B2 (en) 2015-12-22 2020-08-04 University Of Maryland, College Park Analysis of single cell mechanical phenotyping for metastatic detection
EP3659065A4 (fr) * 2017-07-28 2020-08-19 Siemens Healthcare Diagnostics Inc. Procédés et appareil de quantification de volume d'apprentissage profond
WO2020183231A1 (fr) 2019-03-13 2020-09-17 Tomocube, Inc. Identification de micro-organismes à l'aide d'une imagerie de phase quantitative tridimensionnelle
WO2021141764A1 (fr) * 2020-01-06 2021-07-15 Becton, Dickinson And Company Procédé d'apprentissage profond dans l'aide à un diagnostic de patient et une identification de population cellulaire aberrante en cytométrie de flux
JP2021522503A (ja) * 2018-04-27 2021-08-30 ナノスティックス インコーポレイテッド マイクロフローサイトメトリーを使用して疾患を診断する方法
CN113607628A (zh) * 2021-09-02 2021-11-05 清华大学 神经形态计算驱动图像流式细胞仪对细胞图像流处理方法
US11232344B2 (en) 2017-10-31 2022-01-25 General Electric Company Multi-task feature selection neural networks
US20220272124A1 (en) * 2021-02-19 2022-08-25 Intuit Inc. Using machine learning for detecting solicitation of personally identifiable information (pii)
EP4120103A1 (fr) * 2021-07-16 2023-01-18 Malvern Panalytical Limited Procédé et appareil de caractérisation de particules
EP3997439A4 (fr) * 2019-07-10 2023-07-19 Becton, Dickinson and Company Circuits intégrés reconfigurables pour ajuster une classification de tri de cellules
US11893739B2 (en) 2018-03-30 2024-02-06 The Regents Of The University Of California Method and system for digital staining of label-free fluorescence images using deep learning
WO2024030346A1 (fr) * 2022-08-02 2024-02-08 Visiongate, Inc. Procédé et système de classification de cellules à base d'ia
US11940369B2 (en) 2015-10-13 2024-03-26 Becton, Dickinson And Company Multi-modal fluorescence imaging flow cytometry system
US11946851B2 (en) 2014-03-18 2024-04-02 The Regents Of The University Of California Parallel flow cytometer using radiofrequency multiplexing
US12001940B2 (en) 2022-09-23 2024-06-04 Tomocube, Inc. Identifying microorganisms using three-dimensional quantitative phase imaging

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017053592A1 (fr) * 2015-09-23 2017-03-30 The Regents Of The University Of California Apprentissage profond dans le domaine de la classification des cellules non labellisées et extraction de particules par vision par ordinateur
US10539495B2 (en) * 2015-10-14 2020-01-21 The University Of Tokyo Systems and methods for generating an image of an inspection object using an attenuated beam
US20170270406A1 (en) * 2016-03-18 2017-09-21 Qualcomm Incorporated Cloud-based processing using local device provided sensor data and labels
US11182804B2 (en) * 2016-11-17 2021-11-23 Adobe Inc. Segment valuation in a digital medium environment
US11834696B2 (en) * 2017-04-05 2023-12-05 Arizona Board Of Regents On Behalf Of Arizona State University Antimicrobial susceptibility testing with large-volume light scattering imaging and deep learning video microscopy
US11650149B2 (en) * 2018-03-26 2023-05-16 Georgia Tech Research Corporation Cell imaging systems and methods
JP6627069B2 (ja) * 2018-06-01 2020-01-08 株式会社フロンティアファーマ 画像処理方法、薬剤感受性試験方法および画像処理装置
JP2022506135A (ja) 2018-10-30 2022-01-17 アレン インスティテュート ヒトの寄与を組み込む反復的深層学習フローを使用した顕微鏡画像内の3d細胞間構造のセグメント化
US11562225B2 (en) * 2018-11-26 2023-01-24 International Business Machines Corporation Automatic monitoring and adjustment of machine learning model training
EP3903092A4 (fr) * 2018-12-26 2022-02-16 The Regents of the University of California Systèmes et procédés de propagation bidimensionnelle d'ondes de fluorescence sur des surfaces à l'aide d'un apprentissage profond
JP7352365B2 (ja) * 2019-03-22 2023-09-28 シスメックス株式会社 細胞の分析方法、深層学習アルゴリズムの訓練方法、細胞分析装置、深層学習アルゴリズムの訓練装置、細胞の分析プログラム及び深層学習アルゴリズムの訓練プログラム
CN110245562A (zh) * 2019-05-13 2019-09-17 中国水产科学研究院东海水产研究所 基于深度学习的海洋产毒微藻种类自动识别方法
JP7367343B2 (ja) * 2019-05-28 2023-10-24 ソニーグループ株式会社 分取システム、及び分取方法
CN110455747B (zh) * 2019-07-19 2021-09-28 浙江师范大学 一种基于深度学习的无光晕效应白光相位成像方法及系统
PT116012B (pt) * 2019-12-18 2022-05-25 Inst Superior Tecnico Método de deteção e classificação de sinais não periódicos e respetivo sistema que o implementa
CN111242231B (zh) * 2020-01-17 2023-05-23 西安建筑科技大学 一种基于P-LinkNet网络的露天矿道路模型构建方法
WO2022197921A1 (fr) * 2021-03-18 2022-09-22 Brown University Prédiction de la vélocimétrie à l'aide de modèles d'apprentissage automatique
CN112750098B (zh) * 2021-04-06 2021-07-06 杭州蓝芯科技有限公司 深度图优化方法及装置、系统、电子设备、存储介质
WO2023003993A1 (fr) * 2021-07-21 2023-01-26 Coriell Institute For Medical Research Classification sans étiquette de cellules par analyse d'image et apprentissage automatique
WO2023096642A1 (fr) * 2021-11-24 2023-06-01 Bluware, Inc. Marquage direct qualitatif-quantitatif interactif pour apprentissage profond par intelligence artificielle
US20230196539A1 (en) * 2021-12-18 2023-06-22 Imageprovision Technology Private Limited Artificial intelligence based method for detection and analysis of image quality and particles viewed through a microscope
CN114511851B (zh) * 2022-01-30 2023-04-04 中国南水北调集团中线有限公司 一种基于显微镜图像的游丝藻类细胞统计方法
WO2023235895A1 (fr) * 2022-06-03 2023-12-07 The Regents Of The University Of California Systèmes et procédés de tri de particules activées par image sur la base d'un portillonnage d'ia
CN116393188B (zh) * 2023-06-08 2024-02-27 杭州华得森生物技术有限公司 适用于循环肿瘤细胞捕捉的微流控芯片及其方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6463438B1 (en) * 1994-06-03 2002-10-08 Urocor, Inc. Neural network for cell image analysis for identification of abnormal cells
US20050266395A1 (en) * 2003-09-10 2005-12-01 Bioimagene, Inc. Method and system for morphology based mitosis identification and classification of digital images
US20080082468A1 (en) * 2004-11-11 2008-04-03 The Trustees Of Columbia University In The City Of New York Methods and systems for identifying and localizing objects based on features of the objects that are mapped to a vector
WO2015021332A1 (fr) * 2013-08-07 2015-02-12 The Regents Of The University Of California Criblage de cellules en flux, en temps réel, sans marqueur et à haut rendement
WO2015069827A2 (fr) * 2013-11-06 2015-05-14 H. Lee Moffitt Cancer Center And Research Institute, Inc. Examen et analyse d'un cas pathologique et prédiction associée

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5862304A (en) * 1990-05-21 1999-01-19 Board Of Regents, The University Of Texas System Method for predicting the future occurrence of clinically occult or non-existent medical conditions
US7058616B1 (en) * 2000-06-08 2006-06-06 Virco Bvba Method and system for predicting resistance of a disease to a therapeutic agent using a neural network
US8140300B2 (en) * 2008-05-15 2012-03-20 Becton, Dickinson And Company High throughput flow cytometer operation with data quality assessment and control
CN102037343B (zh) * 2008-06-12 2013-10-02 东卡莱罗纳大学 用于三维衍射成像的流式细胞仪系统及方法
WO2017053592A1 (fr) * 2015-09-23 2017-03-30 The Regents Of The University Of California Apprentissage profond dans le domaine de la classification des cellules non labellisées et extraction de particules par vision par ordinateur
US9934364B1 (en) * 2017-02-28 2018-04-03 Anixa Diagnostics Corporation Methods for using artificial neural network analysis on flow cytometry data for cancer diagnosis

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6463438B1 (en) * 1994-06-03 2002-10-08 Urocor, Inc. Neural network for cell image analysis for identification of abnormal cells
US20050266395A1 (en) * 2003-09-10 2005-12-01 Bioimagene, Inc. Method and system for morphology based mitosis identification and classification of digital images
US20080082468A1 (en) * 2004-11-11 2008-04-03 The Trustees Of Columbia University In The City Of New York Methods and systems for identifying and localizing objects based on features of the objects that are mapped to a vector
WO2015021332A1 (fr) * 2013-08-07 2015-02-12 The Regents Of The University Of California Criblage de cellules en flux, en temps réel, sans marqueur et à haut rendement
WO2015069827A2 (fr) * 2013-11-06 2015-05-14 H. Lee Moffitt Cancer Center And Research Institute, Inc. Examen et analyse d'un cas pathologique et prédiction associée

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LAU, KSA ET AL.: "Interferometric time-stretch microscopy for ultrafast quantitative cellular and tissue imaging at 1 µm", JOURNAL OF BIOMEDICAL OPTICS, vol. 19, no. 7, 2014, pages 076001.1 - 076001.7, XP060047489 *

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11946851B2 (en) 2014-03-18 2024-04-02 The Regents Of The University Of California Parallel flow cytometer using radiofrequency multiplexing
US11940369B2 (en) 2015-10-13 2024-03-26 Becton, Dickinson And Company Multi-modal fluorescence imaging flow cytometry system
US10732092B2 (en) 2015-12-22 2020-08-04 University Of Maryland, College Park Analysis of single cell mechanical phenotyping for metastatic detection
US10670511B2 (en) 2015-12-22 2020-06-02 Canon U.S.A., Inc. System and method of label-free cytometry based on Brillouin light scattering
US10386288B2 (en) 2015-12-22 2019-08-20 Canon U.S. Life Sciences, Inc. System and method of label-free cytometry based on Brillouin light scattering
US10598594B2 (en) 2015-12-22 2020-03-24 University Of Maryland Cell classification based on mechanical signature of nucleus
CN107194082B (zh) * 2017-05-26 2020-09-22 西北工业大学 基于转基因理论的工业产品智能设计方法
CN107194082A (zh) * 2017-05-26 2017-09-22 西北工业大学 基于转基因理论的工业产品智能设计方法
WO2018226863A1 (fr) * 2017-06-06 2018-12-13 University Of Maryland, College Park Analyse de phénotypage mécanique de cellules individuelles pour la détection métastatique
US11657593B2 (en) 2017-07-28 2023-05-23 Siemens Healthcare Diagnostics Inc. Deep learning volume quantifying methods and apparatus
EP3659065A4 (fr) * 2017-07-28 2020-08-19 Siemens Healthcare Diagnostics Inc. Procédés et appareil de quantification de volume d'apprentissage profond
CN109472265A (zh) * 2017-09-07 2019-03-15 彭文伟 一种图像特征提取及描述方法
US11232344B2 (en) 2017-10-31 2022-01-25 General Electric Company Multi-task feature selection neural networks
CN108362628A (zh) * 2018-01-11 2018-08-03 天津大学 基于偏振衍射成像流式细胞仪的无标记细胞流式分选方法
US11893739B2 (en) 2018-03-30 2024-02-06 The Regents Of The University Of California Method and system for digital staining of label-free fluorescence images using deep learning
JP2021522503A (ja) * 2018-04-27 2021-08-30 ナノスティックス インコーポレイテッド マイクロフローサイトメトリーを使用して疾患を診断する方法
EP3785014A4 (fr) * 2018-04-27 2022-01-26 Nanostics Inc. Méthodes de diagnostic d'une maladie par cytométrie en microflux
JP7451424B2 (ja) 2018-04-27 2024-03-18 ナノスティックス インコーポレイテッド マイクロフローサイトメトリーを使用して疾患を診断する方法
WO2019236569A1 (fr) * 2018-06-04 2019-12-12 The Regents Of The University Of California Cytomètre de flux d'imagerie portable à apprentissage profond pour analyse sans marqueur d'échantillons d'eau
WO2020018154A1 (fr) * 2018-07-19 2020-01-23 The Regents Of The University Of California Procédé et système de coloration numérique d'images en phase sans étiquette à l'aide d'un apprentissage profond
WO2020028313A1 (fr) * 2018-07-31 2020-02-06 The Regents Of The University Of Colorado A Body Corporate Systèmes et procédés d'application d'apprentissage automatique pour analyser des images de microcopie dans des systèmes à haut débit
CN108802008A (zh) * 2018-09-12 2018-11-13 珠海彩晶光谱科技有限公司 一种利用受激拉曼光谱检测血液中肿瘤细胞的方法和装置
CN112689756A (zh) * 2018-09-12 2021-04-20 分子装置有限公司 用于生物样品的无标记识别和分类的系统和方法
WO2020056118A1 (fr) * 2018-09-12 2020-03-19 Molecular Devices, Llc Système et procédé d'identification sans marqueur et de classification d'échantillons biologiques
WO2020081812A1 (fr) 2018-10-17 2020-04-23 Georgia Tech Research Corporation Systèmes et procédés pour le décodage de signaux coulter multiplexés par code au moyen d'apprentissage automatique
EP3867624A4 (fr) * 2018-10-17 2022-06-22 Georgia Tech Research Corporation Systèmes et procédés pour le décodage de signaux coulter multiplexés par code au moyen d'apprentissage automatique
US11392831B2 (en) 2018-10-17 2022-07-19 Georgia Tech Research Corporation Systems and methods for decoding code-multiplexed coulter signals using machine learning
CN109884657A (zh) * 2019-02-25 2019-06-14 北京化工大学 一种基于光学时间拉伸的高速高通量微粒测速系统
WO2020183231A1 (fr) 2019-03-13 2020-09-17 Tomocube, Inc. Identification de micro-organismes à l'aide d'une imagerie de phase quantitative tridimensionnelle
EP3906526A4 (fr) * 2019-03-13 2022-03-02 Tomocube, Inc. Identification de micro-organismes à l'aide d'une imagerie de phase quantitative tridimensionnelle
CN110071767B (zh) * 2019-04-03 2020-09-04 电子科技大学 一种基于有限时间拉伸下变频微波信号测频方法及装置
CN110071767A (zh) * 2019-04-03 2019-07-30 电子科技大学 一种基于有限时间拉伸下变频微波信号测频方法及装置
CN110427457A (zh) * 2019-06-28 2019-11-08 厦门美域中央信息科技有限公司 一种基于ann的数据库文本分类中的特征选择方法
US11978269B2 (en) 2019-07-10 2024-05-07 Becton, Dickinson And Company Reconfigurable integrated circuits for adjusting cell sorting classification
EP3997439A4 (fr) * 2019-07-10 2023-07-19 Becton, Dickinson and Company Circuits intégrés reconfigurables pour ajuster une classification de tri de cellules
WO2021141764A1 (fr) * 2020-01-06 2021-07-15 Becton, Dickinson And Company Procédé d'apprentissage profond dans l'aide à un diagnostic de patient et une identification de population cellulaire aberrante en cytométrie de flux
US11662295B2 (en) 2020-01-06 2023-05-30 Becton, Dickinson And Company Deep learning method in aiding patient diagnosis and aberrant cell population identification in flow cytometry
US20220272124A1 (en) * 2021-02-19 2022-08-25 Intuit Inc. Using machine learning for detecting solicitation of personally identifiable information (pii)
WO2023285841A1 (fr) * 2021-07-16 2023-01-19 Malvern Panalytical Limited Procédé et appareil de caractérisation de particules
EP4120103A1 (fr) * 2021-07-16 2023-01-18 Malvern Panalytical Limited Procédé et appareil de caractérisation de particules
CN113607628B (zh) * 2021-09-02 2023-02-10 清华大学 神经形态计算驱动图像流式细胞仪对细胞图像流处理方法
CN113607628A (zh) * 2021-09-02 2021-11-05 清华大学 神经形态计算驱动图像流式细胞仪对细胞图像流处理方法
WO2024030346A1 (fr) * 2022-08-02 2024-02-08 Visiongate, Inc. Procédé et système de classification de cellules à base d'ia
US12001940B2 (en) 2022-09-23 2024-06-04 Tomocube, Inc. Identifying microorganisms using three-dimensional quantitative phase imaging

Also Published As

Publication number Publication date
US10593039B2 (en) 2020-03-17
US20180286038A1 (en) 2018-10-04

Similar Documents

Publication Publication Date Title
US10593039B2 (en) Deep learning in label-free cell classification and machine vision extraction of particles
Chen et al. Deep learning in label-free cell classification
Li et al. Deep cytometry: deep learning with real-time inference in cell sorting and flow cytometry
US11668641B2 (en) Image-based cell sorting systems and methods
US11861889B2 (en) Analysis device
Isozaki et al. AI on a chip
US10481076B2 (en) Method for determining the state of a cell
US10162161B2 (en) Ptychography imaging systems and methods with convex relaxation
US20200400563A1 (en) Organism identification
CN113960908B (zh) 用于表征样本中的颗粒的全息方法
Maguire et al. Competitive evaluation of data mining algorithms for use in classification of leukocyte subtypes with Raman microspectroscopy
Mahjoubfar et al. Artificial Intelligence in Label-free Microscopy
Mukhopadhyay et al. Tissue multifractality and hidden Markov model based integrated framework for optimum precancer detection
CN103903015B (zh) 一种细胞有丝分裂检测方法
Ribeiro et al. Analysis of the influence of color normalization in the classification of non-hodgkin lymphoma images
US10664978B2 (en) Methods, systems, and computer readable media for using synthetically trained deep neural networks for automated tracking of particles in diverse video microscopy data sets
US11530434B2 (en) Cell mass evaluation method and device for analyzing state of cell mass
Fang et al. Recent progress and applications of Raman spectrum denoising algorithms in chemical and biological analyses: A review
Kazemzadeh et al. Deep learning as an improved method of preprocessing biomedical Raman spectroscopy data
Yousuff et al. Deep autoencoder based hybrid dimensionality reduction approach for classification of SERS for melanoma cancer diagnostics
Mahjoubfar et al. Time Stretch Quantitative Phase Imaging
CN117274236B (zh) 基于高光谱图像的尿液成分异常检测方法及系统
Ciaparrone et al. Label-free cell classification in holographic flow cytometry through an unbiased learning strategy
Ryu et al. Deep learning-enabled image quality control in tomographic reconstruction: Robust optical diffraction tomography
Chen Machine Learning in Label-free Phenotypic Screening-High-throughput Image-based Diagnosis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16849612

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16849612

Country of ref document: EP

Kind code of ref document: A1