WO2024098153A1 - Machine-learning processing for photon absorption remote sensing signals - Google Patents

Machine-learning processing for photon absorption remote sensing signals

Info

Publication number
WO2024098153A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pars
signals
sample
radiative
Prior art date
Application number
PCT/CA2023/051497
Other languages
French (fr)
Inventor
Parsin Haji Reza
Benjamin Ryan ECCLESTONE
James Edwin Daniel TWEEL
James Alexander TUMMON SIMMONS
Kristof SUBRYAN
Marian BOKTOR
Ilona Anna URBANIAK
Original Assignee
Illumisonics Inc.
Priority date
Filing date
Publication date
Application filed by Illumisonics Inc. filed Critical Illumisonics Inc.
Publication of WO2024098153A1 publication Critical patent/WO2024098153A1/en


Definitions

  • This relates to the field of optical imaging and, in particular, to machine learning processing for a photon absorption remote sensing (PARS) system for analyzing samples, including biological tissues, in vivo, ex vivo, or in vitro.
  • a computer-implemented method for analyzing a sample may include: receiving, from the sample, a plurality of signals including optical absorption radiative and non-radiative relaxation signals; extracting a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and applying the plurality of features to a machine learning architecture to generate an inference regarding the sample.
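By way of illustration only, a minimal Python sketch of such a pipeline is shown below. The helper names (extract_features, analyze_sample), the specific contrast features, and the classifier interface are assumptions made for this sketch and are not the claimed implementation.

```python
import numpy as np

def extract_features(radiative: np.ndarray, non_radiative: np.ndarray) -> np.ndarray:
    """Summarize each PARS event by simple contrast features (illustrative choices only)."""
    amplitude_nr = non_radiative.max(axis=-1) - non_radiative.min(axis=-1)   # non-radiative modulation amplitude
    amplitude_r = radiative.max(axis=-1)                                     # radiative (e.g., autofluorescence) peak
    qer = (amplitude_r - amplitude_nr) / (amplitude_r + amplitude_nr + 1e-12)  # quantum-efficiency-ratio-like feature
    return np.stack([amplitude_nr, amplitude_r, qer], axis=-1)

def analyze_sample(radiative, non_radiative, classifier):
    """radiative / non_radiative: arrays of shape (n_events, n_time_samples);
    classifier: any fitted scikit-learn-style estimator producing an inference per event."""
    features = extract_features(radiative, non_radiative)
    return classifier.predict(features)   # e.g., tissue class, cancer grade, etc.
```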
  • the radiative and non-radiative signals include radiative and non-radiative absorption relaxation signals.
  • the non-radiative signals include at least one of: a photothermal signal and a photoacoustic signal.
  • the radiative signals include one or more autofluorescence signals.
  • the contrast may include one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
  • processing the plurality of signals may include: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
  • said extracting the plurality of features includes processing both radiative signals and non-radiative signals.
  • the plurality of signals include absorption spectra signals.
  • the plurality of signals include scattering signals.
  • the sample is an in vivo or an in situ sample.
  • the sample is not stained.
  • the sample is stained.
  • the plurality of features is supplemented with at least one feature informative of image data obtained from complementary modalities.
  • the complementary modalities comprise at least one of: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI).
  • image data obtained from the complementary modalities may include photoactive labels for contrasting or highlighting specific regions in the images.
  • the plurality of features is supplemented with at least one feature informative of patient information.
  • said processing includes converting the at least one of the plurality of signals to at least one image.
  • said converting to said at least one image includes applying a simulated stain.
  • the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, Verhoeff Stain, Immunohistochemistry (IHC), a histochemical stain, and In-Situ Hybridization (ISH).
  • the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue.
  • a preserved tissue sample may include a sample preserved using formalin or fixed using alcohol fixatives.
  • said converting to said at least one image includes converting to at least two images, and applying a different simulated stain to each of the images.
  • said converting includes applying a colorization machine learning architecture.
  • the colorization machine learning architecture includes a Generative Adversarial Network (GAN).
  • the colorization machine learning architecture includes a cycle-consistent generative adversarial network (CycleGAN).
  • the colorization machine learning architecture includes a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
  • the inference comprises at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
  • the method may further include generating signals for causing a display device to render a user interface (UI) showing a visualization of the inference.
  • a computer system for analyzing a sample comprising: a processor operating in conjunction with computer memory and non-transitory computer-readable storage, the processor configured to: receive, from the sample, a plurality of signals including radiative and non-radiative signals; extract a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and apply the plurality of features to a machine learning architecture to generate an inference regarding the sample.
  • the radiative and non-radiative signals include radiative and non-radiative absorption relaxation signals.
  • the non-radiative signals include at least one of: a photothermal signal and a photoacoustic signal.
  • the radiative signals include one or more autofluorescence signals.
  • the contrast may include one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
  • a computer system for training a machine learning architecture comprising: a processor operating in conjunction with computer memory and non-transitory computer-readable storage, the processor configured to, in each training iteration: instantiate a machine learning architecture including a neural network having a plurality of nodes and weights stored on a memory device; obtain a true total absorption (TA) image; generate a simulated stained image based on the true TA image; generate a fake TA image based on the generated stained image; compute a first loss based on the generated fake TA image and the true TA image; obtain a labelled and stained image; compute a second loss based on the generated simulated stained image and the labelled and stained image; and update weights of the neural network based on at least one of the first and second losses.
  • a computer-implemented method for training a machine learning architecture for generating a simulated stained image comprising, in each training iteration: obtaining a true total absorption (TA) image; generating a simulated stained image based on the true TA image; generating a fake TA image based on the generated stained image; computing a first loss based on the generated fake TA image and the true TA image; obtaining a labelled and stained image; computing a second loss based on the generated simulated stained image and the labelled and stained image; and updating weights of the neural network based on at least one of the first and second losses.
  • the simulated stained image is generated by a second neural network comprising a second set of nodes and weights, the second set of weights being updated based on at least one of the first and second losses during each iteration.
  • the fake TA image is generated by a third neural network comprising a third set of nodes and weights, the third set of weights being updated based on at least one of the first and second losses during each iteration.
  • computing the second loss based on the generated simulated stained image and the labelled and stained image may include steps of: processing the generated simulated stained image by a first discriminator network; processing the labelled and stained image by a second discriminator network; and computing the second loss based on a respective output from each of the first and second discriminator networks.
  • the method may further include processing the respective output from each of the first and second discriminator networks through a respective classification matrix prior to computing the second loss.
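A simplified training-loop sketch of the steps above, assuming a PyTorch-style implementation with least-squares adversarial losses. The module and optimizer names (g_stain, g_ta, d1, d2, opt_g, opt_d) are hypothetical, and a practical CycleGAN or pix2pix implementation would typically add identity losses, loss weighting, and a more careful alternation of generator and discriminator updates.

```python
import torch
import torch.nn.functional as F

def train_step(true_ta, labelled_stained, g_stain, g_ta, d1, d2, opt_g, opt_d):
    """One simplified training iteration for the virtual-staining generators."""
    # Generate the simulated stained image from the true TA image, then cycle back to a fake TA image.
    simulated_stained = g_stain(true_ta)
    fake_ta = g_ta(simulated_stained)

    # First loss: cycle-consistency between the fake TA image and the true TA image.
    loss_cycle = F.l1_loss(fake_ta, true_ta)

    # Generator update: cycle loss plus an adversarial term that rewards fooling the first discriminator.
    score_fake = d1(simulated_stained)
    loss_g = loss_cycle + F.mse_loss(score_fake, torch.ones_like(score_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Second loss: based on discriminator outputs (classification maps) for the generated
    # stained image (first discriminator) and the labelled stained image (second discriminator).
    score_fake = d1(simulated_stained.detach())
    score_real = d2(labelled_stained)
    loss_d = F.mse_loss(score_fake, torch.zeros_like(score_fake)) \
           + F.mse_loss(score_real, torch.ones_like(score_real))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_cycle.item(), loss_d.item()
```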
  • the machine learning architecture comprises a cycle-consistent generative adversarial network (CycleGAN) machine learning architecture.
  • the machine learning architecture comprises a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
  • the labelled and stained image is a labelled PARS image.
  • the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image.
  • automatically labelling the unlabeled PARS image comprises labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
  • the database is a H&E database.
  • a portion of the interrogation, signal enhancement, excitation, or autofluorescence light from the sample may be collected to form images. These signals may be used to unmix the size, shape, features, dimensions, nature, and composition of the sample.
  • any portion of the light returning from the sample such as the detection, excitation, or thermal enhancement beams may be collected.
  • the returning light may be analyzed based on wavelength, phase, polarization, etc. to capture any absorption-induced signals including, pressure, temperature, and optical emissions.
  • the PARS may simultaneously capture, for example, scattering, autofluorescence, and polarization contrast attributed to each detection, excitation, and thermal enhancement source.
  • the PARS laser sources may be specifically chosen to highlight these different contrast mechanisms.
  • FIG. 1 shows an overview of a PARS system.
  • FIG. 2 shows an overview of a PARS system with PARS excitation and PARS detection.
  • FIG. 3 shows an implementation of PARS being combined with other modalities.
  • FIG. 4 shows a signal processing pathway of PARS signals.
  • FIG. 5 shows exemplary architecture for total absorption (TA) PARS, where an autofluorescence detection system is used as an example.
  • FIG. 6 shows a visualization produced by the autofluorescence sensitive total absorption PARS (TA-PARS) architecture.
  • FIG. 7 shows an exemplary signal evolution of a TA-PARS signal.
  • FIG. 8 shows an example of radiative and non-radiative signals.
  • FIG. 9 shows exemplary architecture using two excitation sources, one detection source, and a plurality of photodiodes.
  • FIG. 10 shows a comparison of non-radiative absorption (view (a)), radiative absorption (view (b)), and scattering (view (c)) provided by a TA-PARS system.
  • FIG. 11 shows examples of TA-PARS imaging.
  • FIG. 12 shows exemplary applications of a quantum efficiency ratio (QER).
  • FIG. 13 shows examples of TA-PARS imaging using a QER acquisition process.
  • FIG. 14 shows comparisons of imaging using a QER acquisition process with traditional stains.
  • FIG. 15 shows an exemplary PARS signal evolution.
  • FIG. 16 shows an example of a lifetime PARS image in resected rattus brain tissues.
  • FIG. 17 shows an exemplary PARS signal evolution in connection with a rapid lifetime extraction technique.
  • FIG. 18 shows exemplary architecture for a multi-pass (MP) PARS system.
  • FIG. 19 compares Multi-Photon PARS with normal PARS.
  • FIGs. 20A and 20B show a reconstructed grayscale PARS image and a corresponding stain.
  • FIGs. 21A and 21B show principal components of a time-domain TD-PARS signal and a synthesized stain based on the principal components.
  • FIG. 22 shows exemplary architecture to analyze TD-PARS signals.
  • FIG. 23 shows a graph of TD-PARS signals and centroids.
  • FIG. 24 shows a visualization using a clustering method.
  • FIG. 25 shows a visualization of three different regions of brain tissues using the clustering method.
  • FIG. 26 shows an exemplary clustering algorithm to analyze the TD-PARS signals and determine an image.
  • FIG. 27 shows a method of determining an image using the clustering algorithm.
  • FIG. 28 exemplifies non-radiative signal extraction.
  • FIG. 29 exemplifies various filtered instances of a PARS signal.
  • FIG. 30 exemplifies expected spatial correlation between adjacent points or signals.
  • FIG. 31 exemplifies two signals with different lifetimes in connection with functional extraction.
  • FIG. 32 shows a comparison of an original image and a denoised image.
  • FIG. 33 shows a chirped-pulse signal and acquisition.
  • FIG. 34 shows an exemplary TD-PARS acquisition by imposing a delay to reconstruct a signal.
  • FIG. 35 shows data compression using digital and/or analog techniques.
  • FIG. 36 shows an exemplary fast acquisition approach.
  • FIG. 37 shows a direct construction of a colorized image.
  • FIGs. 38A and 38B show two example architectures for generating one or more inferences regarding a sample.
  • FIG. 39 shows another example architecture for generating one or more inferences regarding a sample.
  • FIG. 40 shows an example user interface rendering one or more inferences generated by the architecture in FIG. 38A, 38B or 39.
  • FIG. 41 shows an example machine learning architecture that may be used to implement an image generator.
  • FIG. 42 shows an example process for preparing one or more training data for training the image generator.
  • FIG. 43 shows an example neural network that may be used to implement the image generator.
  • FIG. 44 shows examples of contrasts extracted from PARS signals in tissue slides.
  • FIG. 45 shows examples of combinations of contrasts from the combination of PARS signals into unique contrasts.
  • FIG. 46 shows two virtually (simulated) stained PARS images.
  • FIG. 47A shows an example of an unlabeled PARS virtual H&E image.
  • FIG. 47B shows a historical labelled H&E image correlated with the image in FIG. 47A.
  • FIG. 48 shows examples of different tissue types imaged and identified using the machine learning architectures.
  • FIG. 49 shows unique keratin pearl features identified and isolated within an example simulated stained image.
  • FIG. 50 shows biomarkers of localized inflammation and malignancy, identified and encircled based on an example simulated stained image.
  • FIG. 51 shows different cell types and tissue regions, identified and delineated within an example simulated stained image.
  • FIG. 52 shows example of an abnormal tissue region, identified and delineated from an example simulated stained image.
  • FIG. 53 is a schematic diagram of a computing device which may be used to train or execute (at inference time) a machine learning model.
  • FIG. 54 shows a process performed by a processor of an example embodiment of machine learning system or architecture in FIGs. 38A, 38B or 39.
  • FIG. 55 shows an example heat map generated by an example embodiment of machine learning system or architecture in FIGs. 38A, 38B or 39.
  • FIG. 56 shows an example multi-stain image generated by an example embodiment of machine learning system or architecture in FIGs. 38A, 38B or 39.
  • FIG. 57A shows an example embodiment of image generator connected to a PARS system.
  • the image generator may be part of machine learning system or architecture in FIGs. 38A, 38B or 39.
  • FIG. 57B shows another example embodiment of image generator connected to a PARS system.
  • FIG. 58 shows yet another example embodiment of image generator connected to a preprocessing module.
  • FIG. 59 shows an example user interface for analyzing one or more images generated by the architecture in FIG. 38A, 38B or 39.
  • FIG. 60 shows an example user interface for displaying one or more images generated by the architecture in FIG. 38A, 38B or 39.
  • FIG. 61 shows another example user interface for displaying one or more images generated by the architecture in FIG. 38A, 38B or 39.
  • FIG. 62 shows an example user interface for scanning and processing one or more images.
  • FIG. 63 shows an example user interface for displaying an annotated image.
  • FIG. 64 to FIG. 79 illustrate various schematic diagrams of example embodiments of machine learning architectures or processes for generating one or more inferences based on output from a PARS system.
  • FIG. 80 shows an example of raw PARS data in TA-PARS images denoised using a Noise2Void (N2V) framework.
  • FIG. 81 shows an example implementation of an error correction submodule for denoising of PARS images.
  • FIGs. 82A and 82B show example visualizations of the data preparation process and the virtual staining process for images.
  • FIG. 83 shows example denoising results with a denoising process and an error-correction process applied to raw PARS image data.
  • FIG. 84 shows example PARS non-radiative time domain features extracted from PARS events.
  • FIG. 85 shows an example multi-channel virtual staining architecture for signal processing and virtual staining of PARS image data.
  • FIG. 86 shows a comparison of virtual staining results using different combinations of PARS feature images as inputs.
  • FIG. 87 shows an example PARS data vector or feature vector.
  • FIG. 88 shows example PARS virtual multi-staining images based on the same PARS image data.
  • A recently reported photoacoustic technology known as photoacoustic remote sensing (PARS) microscopy (US 2016/0113507 and US 2017/0215738) has solved many of these sensitivity issues through a novel detection mechanism. Rather than detecting acoustic pressures at an outer surface once they have propagated away from their source, PARS enables direct detection of excited photoacoustic regions. This is accomplished by monitoring changes in material optical properties that coincide with the photoacoustic excitation. These changes then encode various salient material properties such as the optical absorption, physical target dimensions, and constituent chromophores to name a few.
  • PARS devices may utilize only two optical beams, which may be in a confocal arrangement.
  • spatial resolution of the imaging technique may be defined as excitation-defined (ED) or interrogation-defined (ID) depending on which of the beams provides a tighter focus at the sample.
  • This aspect also may facilitate imaging deeper targets, beyond the limits of optical resolution devices. This may be accomplished by leveraging a deeply-penetrating (long transport mean-free-path) detection wavelength such as a short-wave infrared (like 1310 nm, 1700 nm or 10 µm) which may provide spatial resolution to a depth superior to that provided by a given excitation (such as 532 nm or 266 nm) within highly scattering media such as biological tissues.
  • Intensity-modulated PARS signals depend not only on optical absorption and incident excitation fluence, but also on detection laser wavelength, fluence, and the temperature of the sample. PARS signals may also arise from other effects such as scatterer position modulation and surface oscillations. A similar analog may exist for PARS devices which take advantage of other modulating optical properties such as intensity, polarization, frequency, phase, fluorescence, non-linear scattering, non-linear absorption, etc. As material properties are dependent on ambient temperature, there is a corresponding temperature dependence in the PARS signal. At some intensity levels, additional saturation effects may also be leveraged.
  • These generated signals may be intentionally controlled or affected by secondary physical effects such as vibration, temperature, stress, surface roughness, and mechanical bending, among others.
  • temperature may be introduced to the sample, which may augment the generated PARS signals as compared to those which would be generated without having introduced this additional temperature.
  • Another example may involve introducing mechanical stress to the sample (such as bending), which may in turn affect the material properties of the sample (e.g., density or local optical properties such as birefringence, refractive index, absorption coefficient, scattering behavior), thereby perturbing the generated PARS signals as compared to those which would have been generated without having introduced this mechanical stress.
  • Additional contrast agents may be added to the sample to boost the generated PARS signals; these include, but are not limited to, dyes, proteins, specially designed cells, liquids, and optical agents or windows. The target may be altered optically to provide optimized results.
  • Some techniques may simply monitor intensity back reflection and may extract the amplitude of these time-domain signals.
  • additional information may be extracted from the time-varying aspects of the signals.
  • some of the scattering, polarization, frequency, and phase content with a PARS signal may be attributed to the size, shape, features, and dimensions of the region which generated that signal. This may encode unique/orthogonal additional information with utility towards improving final image fidelity, classifying sample regions, sizing constituent chromophores, and classifying constituent chromophores to name a few.
  • frequency information may describe the microscopic structures within the sample; this may be combined with conventional PARS, which uses scattering modulation, to highlight regions which are both absorbing and of a specific size.
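As a rough illustration of summarizing the frequency content of a PARS time-domain signal, a band-energy sketch is shown below; the band edges and sampling rate are placeholders rather than values from the disclosure.

```python
import numpy as np

def frequency_features(signal: np.ndarray, fs: float, bands=((0.0, 50e6), (50e6, 200e6))):
    """Fraction of signal energy falling in each frequency band (illustrative bands)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    total = spectrum.sum() + 1e-12
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands]
```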
  • Photon Absorption remote sensing (PARS) microscopy is an all-optical non-contact optical absorption microscopy technique.
  • PARS may use a co-focused excitation and detection laser pair to generate and detect optical absorption contrast in a variety of specimens.
  • the excitation laser may include a pulsed excitation laser, which may be used to deposit optical energy into a sample.
  • the photon energy is captured by the specimen.
  • the absorbed energy may then be dissipated through either optical radiation (radiative) or non-radiative relaxation. During non-radiative relaxation, absorbed optical energy is converted into heat.
  • the generation of heat may cause thermoelastic expansion resulting in photoacoustic pressures and photothermal signals.
  • absorbed optical energy is released through the emission of photons.
  • emitted photons exhibit a different energy level compared to the absorbed photons.
  • an excitation pulse generated by a pulsed excitation laser may be described to be at a particular scale. It is to be appreciated that whenever an excitation pulse is said to be generated at a nanosecond scale, it may similarly be generated at a microsecond or picosecond scale. For example, a picosecond scale pulsed excitation laser may elicit radiative and non-radiative (thermal and pressure) perturbations in a sample.
  • Fig. 1 shows a high-level diagram of a photon absorption remote sensing (PARS) system.
  • This consists of a PARS system (101), an optical combiner (102), and an imaging head (104).
  • the PARS system may further include other systems (e.g., signal enhancement system), and the optical combiner may combine the beams from the PARS system (101) and these other systems.
  • Fig. 2 shows a high-level diagram with the PARS Excitation (202), PARS Detection (204) and Optical Combiner (203) delineated. These could be combined with other systems (e.g., signal enhancement system) and Imaging Head (205).
  • Fig. 3 shows a high-level embodiment of a PARS system combined with other modalities (305).
  • This consists of a PARS system (301), optical combiner (302), and an imaging head (304).
  • These can be combined with a variety of other modalities (305) such as bright-field microscopy, scanning laser ophthalmoscopy, ultrasound imaging, stimulated Raman microscopy, fluorescence microscopy, two-photon and confocal fluorescence microscopy, Coherent-Anti-Raman-Stokes microscopy, Raman microscopy, other PARS, photoacoustic and ultrasound systems, among others.
  • FIG. 4 shows a signal processing pathway. This consists of an optical detector (401), a signal processing unit (402), a digitizer (403), a digital signal processing unit (404) and a signal extraction unit (405).
  • Fig. 5 shows exemplary architecture for a radiative relaxation sensitive PARS.
  • the radiative relaxation may be fluorescent or autofluorescent, but aspects disclosed herein are not limited.
  • the radiative relaxation may include Raman scattering, fluorescence, autofluorescence, multiphoton fluorescence, etc.
  • an autofluorescence sensitive TA-PARS system will be described as an example with reference to FIG. 5.
  • a multi-wavelength fiber excitation laser (5812) is used to generate PARS signals.
  • An excitation beam (5817) passes through a multi-wavelength unit (5840) and a lens system (5842) to adjust its focus on the sample (5818).
  • the optical subsystem used to adjust the focus may be constructed by components known to those skilled in the art including but not limited to beam expanders, adjustable beam expanders, adjustable collimators, adjustable reflective expanders, telescope systems, etc.
  • the signal signatures are interrogated using either a short or long-coherence length probe beam (5816) from a detection laser (5814) that is co-focused and co-aligned with the excitation spots on the sample (5818).
  • the interrogation/probe beam (5816) passes through a lens system (5843), polarizing beam splitter (5844) and quarter wave plate (5856) to guide the reflected light (5820) from the sample (5818) to the photodiode (5846).
  • this architecture is not limited to including a polarizing beam splitter (5844) and quarter wave plate (5856).
  • the aforementioned components may be substituted for fiber-based, equivalent components, e.g., a circulator, coupler, Faraday rotator, electro-optic modulator, WDM, and/or double-clad fiber, that are non-reciprocal elements. Such elements may receive light from a first path, but then redirect said light to a second path.
  • equivalent components e.g., a circulator, coupler, Faraday rotator, electro-optic modulator, WDM, and/or double-clad fiber, that are non-reciprocal elements.
  • Such elements may receive light from a first path, but then redirect said light to a second path.
  • the interrogation beam (5816) is combined with the excitation beam using a beam combiner (5830).
  • the combined beam (5821) is scanned by a scanning unit (5819). This passes through an objective lens (5855) and is focused onto the sample (5818).
  • the reflected beam (5820) returns along the same path.
  • the reflected beam is filtered with a beam combiner/splitter (5831) to separate the detection beam (5816) from any autofluorescence light returned from the sample.
  • the autofluorescence light (5890) passes through a lens system (5845) to adjust its focus onto the autofluorescence sensitive photodetector (5891).
  • the isolated detection beam (5820) is transmitted through the beam splitter (5831) towards the signal collection/analysis pathway.
  • the returned detection light is redirected by the polarized beam splitter (5844).
  • the detection pathway consists of a photodiode (5846), amplifier (5858), fast data acquisition card (5850) and computer (5852).
  • the autofluorescence sensitive photodetector may be any such device including a camera, photodiode, photodiode array etc.
  • the autofluorescence detection pathway may include more beam splitters and photodetectors to further isolate and detect specific wavelengths of light.
  • Fig. 6 shows exemplary visualizations which may potentially be provided by autofluorescence sensitive TA-PARS. Any portion of the light returning from the sample, excluding the detection beam, may be collected and analyzed based on wavelength. By isolating specific wavelengths of light emissions from the sample, specific molecules of interest can be visualized.
  • the autofluorescence sensitive PARS may be applied to imaging tissues.
  • the PARS excitation is selected to capture absorption contrast of nuclei.
  • UV excitation is used to generate pressure and temperature signals attributed to nuclei in tissues.
  • the autofluorescence contrast generated by the PARS excitation is captured. In this case, the non-nuclear regions of the tissues are highly fluorescent.
  • the resulting visualizations may require only a single (or only one or exactly one) excitation wavelength to capture.
  • this method may be used with other radiative relaxation sensitive PARS, and radiative relaxation other than autofluorescence may be generated and captured.
  • the PARS radiative signal could be implemented into a PARS absorption spectrometer to accurately measure all absorption of light by a sample.
  • the radiative relaxation (e.g., autofluorescence in FIG. 5) sensitive PARS can be used to measure the proportion of absorbed energy which is converted to heat and pressure or light, respectively. This may enable sensitive quantum efficiency measurements in a broad range of biological and non-biological samples.
  • the TA-PARS signal may also be collected on a single (only one or exactly one) detector as highlighted in FIG. 7. Given that the salient components of the TA-PARS signal may appear distinct from each other, a single detector may appropriately characterize these components. For example, the initial signal level (Scattering) may be indicative of the unperturbed intensity reflectivity of the detection beam from the sample at the interrogation location encoding the scatter intensity. Then, following excitation by the excitation pulse (at 100 ns in FIG. 7), PARS excitation signals related to non-radiative relaxation (e.g., thermal, temperature), and radiative relaxation (e.g., fluorescence or autofluorescence) may be observed as unique overlapping signals (labeled PA and AF in the diagram).
  • where these excited signals are measurably distinct from each other (e.g., in amplitude or magnitude and/or evolution time), they may be decomposed from the combined signal to extract these magnitudes along with their characteristic lifetimes.
  • This wealth of information may be useful in improving available contrast, providing additional multiplexing capabilities, and providing characteristic molecular signatures of constituent chromophores.
  • such an approach may provide pragmatic benefits in that only a single detector and a single (only one or exactly one) detection path may be required, drastically reducing physical hardware complexity and cost. Capturing signals over time is discussed in more detail in the section covering TD-PARS.
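One way such a decomposition could be sketched is to model the post-excitation portion of the single-detector trace as the sum of two exponential decays and fit their amplitudes and lifetimes. The model form, the SciPy fitting routine, and the initial guesses are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component_model(t, a_pa, tau_pa, a_af, tau_af, baseline):
    """Assumed shape: baseline plus two exponential decays for the PA and AF components."""
    return baseline + a_pa * np.exp(-t / tau_pa) + a_af * np.exp(-t / tau_af)

def decompose_event(t_post, trace_post):
    """Fit amplitudes and characteristic lifetimes of the overlapping post-excitation signals.
    t_post / trace_post: time axis (s) and samples recorded after the excitation pulse."""
    p0 = [trace_post.max(), 50e-9, trace_post.max() / 2, 5e-9, trace_post[-1]]  # rough initial guesses
    params, _ = curve_fit(two_component_model, t_post, trace_post, p0=p0, maxfev=5000)
    a_pa, tau_pa, a_af, tau_af, baseline = params
    return {"PA": (a_pa, tau_pa), "AF": (a_af, tau_af), "baseline": baseline}
```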
  • any given PARS excitation event always generates some fraction of radiative and non-radiative relaxation.
  • TA-PARS facilitates the capture of a chromophore's total-absorption profile. The thermal and pressure perturbations may generate corresponding modulations in the local optical properties.
  • the TA-PARS microscope may capture a chromophore's scattering and total absorption (radiative and non-radiative relaxation) visualizations in a single (only one or exactly one) excitation event.
  • the non-radiative relaxation leads to heat- and pressure-induced modulations, which in turn cause back-reflected intensity variations in the detection beam.
  • PARS signals are denoted as some change in reflectivity multiplied by the incident detection (ΔR · I_det).
  • the radiative absorption pathway captures optical emissions attributed to radiative relaxation such as stimulated Raman scattering, fluorescence, multiphoton fluorescence, etc. Emissions are denoted as some wavelength and energy optical emission (hν_em).
  • the local scattering contrast is captured as the unmodulated backscatter (pre-excitation pulse) of the detection beam.
  • the scattering contrast is denoted as the unperturbed scattering profile multiplied by the incident detection power (σ_s · I_det).
  • the non-radiative relaxation-induced modulations are detected at the excited location by the probe beam.
  • the PARS may then visualize any photothermal heat or photoacoustic pressures which cause modulation in the local optical properties.
  • the TA-PARS leverages an additional detection pathway to capture non-specific optical emissions from the sample (excluding the excitation and detection), regardless of properties such as wavelength, frequency, or polarization. These emissions may then be attributed to any radiative relaxation effects such as stimulated Raman scattering, fluorescence, and multiphoton fluorescence.
  • Using this detection pathway may provide enhanced sensitivity to any range of chromophores.
  • the contrast may not be bound by efficiency factors such as the photothermal conversion efficiency or fluorescence quantum yield.
  • the TA-PARS may capture all or nearly all the optical properties of a chromophore such as the absorption coefficient, scattering coefficient, quantum efficiency, non-linear interaction coefficients, providing simultaneous sensitivity to most chromophores.
  • TA-PARS may yield an absorption metric proposed as the quantum efficiency ratio (QER), which visualizes a biomolecule's proportional radiative and non-radiative absorption response.
  • the TA-PARS may provide label-free visualization of a range of biomolecules enabling convincing analogues to traditional histochemical staining of tissues, effectively providing label-free Hematoxylin and Eosin (H&E)-like visualizations.
  • QER may be defined as a ratio of radiative PARS signals (P_r) to non-radiative PARS signals (P_nr), for example QER = (P_r − P_nr) / (P_r + P_nr).
  • This ratio will be specific to a given chromophore.
  • a biomolecule like collagen will exhibit high radiative contrast and low non-radiative contrast, providing a high QER.
  • DNA will exhibit low radiative contrast and high non-radiative contrast, providing a low QER.
  • Calculating the QER in addition to the radiative and non-radiative absorption may allow for properties such as the chromophore composition, density, and quantity to be extracted in a single (only one or exactly one) event. This may also allow for single-shot functional imaging.
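Using the normalized-difference form written above, a per-pixel QER image could be computed along the following lines; the function below assumes co-registered radiative and non-radiative amplitude images and is illustrative only.

```python
import numpy as np

def qer_image(radiative: np.ndarray, non_radiative: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Per-pixel quantum efficiency ratio, QER = (P_r - P_nr) / (P_r + P_nr)."""
    return (radiative - non_radiative) / (radiative + non_radiative + eps)
```

Under this convention, values near +1 indicate predominantly radiative chromophores (e.g., collagen or elastin) and values near −1 indicate predominantly non-radiative chromophores (e.g., DNA).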
  • a picosecond scale pulsed excitation laser may elicit radiative and nonradiative (thermal and pressure) perturbations in a sample.
  • the thermal and pressure perturbations generate corresponding modulations in the local optical properties.
  • a secondary probe beam co-focused with the excitation may capture the non-radiative absorption induced modulations to the local optical properties as changes in backscattering intensity.
  • these backscatter modulations may be directly correlated to the local non-radiative absorption contrast.
  • the unperturbed backscatter (pre-excitation event) captures the local scattering contrast.
  • the TA-PARS probe may instantaneously detect the induced modulations at the excited location. Therefore, TA-PARS offers non-contact operation, facilitating imaging of delicate, and sensitive samples, which would otherwise be impractical to image with traditional contact-based PAM methods.
  • Since TA-PARS may rely only on the generation of heat and subsequently pressure to provide contrast, the absorption mechanism is non-specific and highly sensitive to small changes in relative absorption. This allows any variety of absorption mechanisms such as vibrational absorption, stimulated Raman absorption, and electronic absorption to be detected with PARS.
  • PARS has demonstrated label-free non-radiative absorption contrast of hemoglobin, DNA, RNA, lipids, and cytochromes, in specimens such as chicken embryo models, resected tissue specimens, and live murine models.
  • a unique secondary detection pathway captures radiative relaxation contrast, in addition to the non-radiative absorption.
  • the radiative absorption pathway was designed to broadly collect all optical emissions at any wavelength of light, excluding the excitation and detection. As a result, the radiative detection pathway captures non-specific optical emissions from the sample regardless of properties such as wavelength, frequency, or polarization.
  • a TA-PARS 900 may include excitation at first and second excitation wavelengths that are different from each other (e.g., 266 nm and 515 nm excitation), providing sensitivity to DNA, heme proteins, NADPH, collagen, elastin, amino acids, and a variety of fluorescent dyes.
  • the TA-PARS may include a specific optical pathway with dichroic filters and an avalanche photodiode to isolate and detect the radiative absorption contrast, as exemplified in FIG. 9.
  • the TA-PARS system may include excitation at the first excitation wavelength (e.g., visible light such as 515 nm visible excitation) from a first excitation source 920 and excitation at the second excitation wavelength (e.g., UV light such as 266 nm UV excitation) from a second excitation source 940.
  • the first excitation source 920 may include a first excitation laser 902, such as a 50 kHz to 2.7 MHz 2 ps pulsed 1030 nm fiber laser (e.g., YLPP-1-150-V-30, IPG Photonics), but aspects disclosed herein are not limited.
  • the second harmonic may be generated with a lithium triborate crystal or LBO 922.
  • the harmonic at the first excitation wavelength (e.g., 515 nm) may be separated via a dichroic mirror 906, then spatially filtered with a pinhole 908 prior to use in the imaging system.
  • the first excitation source 920 may include one or more lenses or plates, such as a half-wave plate or HWP 924 provided between LBO 922 and the first excitation laser 902, a filtering lens, and/or a lens assembly 928.
  • the pinhole 908 may be provided between, as an example, two lenses or lens assemblies 928.
  • the second excitation source 940 may include a second excitation laser 904, such as a 50 kHz 400 ps pulsed diode laser (e.g., Wedge XF 266, RPMC), but aspects disclosed herein are not limited.
  • Output from the second excitation laser 904 may be separated from residual excitation (e.g., 532 nm excitation) using a prism 910, then expanded (e.g., using a variable beam expander or VBE 926) prior to use in the imaging system.
  • the TA-PARS system may include a detection system 950 shared between the first and second excitation sources 920 and 940.
  • the TA-PARS detection system 950 may include a probe beam 912, which may include a 405 nm laser diode such as a 405 nm OBIS-LS laser (OBIS LS 405, Coherent).
  • the detection may be fiber coupled through a circulator 914 into the system, where it may be combined with the excitations via one or more dichroic mirrors 916 and/or guided via mirrors 934.
  • the combined excitation and detection may be co-focused onto the sample using a lens 918, such as a 0.42 NA UV objective lens.
  • Back-reflected detection from the sample may return to the circulator 914 by the same path as forward propagation.
  • the back-reflected detection contains the PARS non-radiative absorption contrast as nanosecond scale intensity modulations which may be captured with a photodiode.
  • the detection system 950 may also include a collimator and/or collimating assembly 936 to collimate the detection light.
  • This probe wavelength provides improved scattering resolution, which improves the confocal overlap between the PARS excitation and detection spots on the sample.
  • the TA-PARS provides improved sensitivity compared to previous implementations.
  • the visible wavelength probe also provides improved compatibility between the visible and UV excitation wavelengths.
  • Radiative relaxation from each of the first and second excitations (266 nm and 515 nm excitation) may be independently captured with different (or first and second) photodiodes 930 and 932.
  • the radiative relaxation induced from the first excitation (515 nm induced radiative relaxation) may be isolated with dichroic mirrors 916, then captured using the first photodiode 930.
  • the radiative relaxation induced from the second excitation (266 nm induced radiative relaxation) may be isolated by redirecting some portion (e.g., 1%-50%) of the total light intensity returned from the sample towards a photodetector and/or second photodiode 932. This light may then be spectrally filtered (e.g., via lens assemblies 936) to remove residual excitation and detection prior to measurement.
  • the excitation sources 920 and 940 may be continuously pulsed (e.g., at 50 kHz), while the stage velocity may be regulated to achieve a desired pixel size (spacing between interrogation events).
  • a collection event may be triggered.
  • a few hundred nanosecond segment may be collected from 4 input signals using a high-speed digitizer (e.g., RZE-004- 200, Gage Applied).
  • These signals may include the laser input reference measurements (excitation and detection), PARS scattering signal, the PARS non-radiative relaxation signal, the PARS radiative relaxation signal, and a positional signal from the stages.
  • the time resolved scattering, absorption, and position signals may then be compressed down to single characteristic features. This serves to substantially reduce the volume of data captured during a collection.
  • the raw data may be fitted to a Cartesian grid based on the location signal at each interrogation.
  • Raw images may then be Gaussian filtered and rescaled based on histogram distribution prior to visualization.
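A SciPy-based sketch of this post-processing chain (grid fitting of scattered event data, Gaussian filtering, and histogram-based rescaling) is shown below; the grid size, filter width, and percentile limits are placeholder values.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def reconstruct_image(x, y, values, grid_shape=(1024, 1024), sigma=1.0, p_lo=1, p_hi=99):
    """Fit scattered PARS event amplitudes onto a Cartesian grid, smooth, and rescale to [0, 1]."""
    xi = np.linspace(x.min(), x.max(), grid_shape[1])
    yi = np.linspace(y.min(), y.max(), grid_shape[0])
    grid = griddata((x, y), values, tuple(np.meshgrid(xi, yi)), method="linear", fill_value=0.0)
    grid = gaussian_filter(grid, sigma=sigma)
    lo, hi = np.percentile(grid, [p_lo, p_hi])          # histogram-distribution-based rescaling
    return np.clip((grid - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```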
  • TA-PARS visualization fidelity is assessed through one-to-one comparison against traditional H&E-stained images.
  • the TA-PARS total-absorption and QER contrast mechanisms are also validated in a series of dye and tissue samples. Results show high correlation between radiative relaxation characteristics and TA-PARS-measured QER in a variety of fluorescent dyes, and tissues.
  • These QER visualizations are used to extract regions of specific biomolecules such as collagen, elastin, and nuclei in tissue samples. This enables realization of a broadly applicable high resolution absorption contrast microscope system.
  • the TA-PARS may provide unprecedented label-free contrast in any variety of biological specimens, providing otherwise inaccessible visualizations.
  • FIG. 10 shows a comparison of three different contrasts (non-radiative absorption in view (a), radiative absorption in view (b), and scattering in view (c)) provided by a TA-PARS system using 266 nm excitation in thin sections of formalin fixed paraffin embedded (FFPE) human brain tissues.
  • the non-radiative relaxation signals were captured based on nanosecond scale pressure- and temperature-induced modulations in the collected backscattered 405 nm detection beam from the sample.
  • the radiative absorption contrast was captured as optical emissions from the sample, excluding the excitation and detection wavelengths which were blocked by optical filters. Concurrently, the unperturbed backscatter of the 405 nm probe captures the local optical scattering from the sample.
  • the non-radiative absorption contrast highlights predominately nuclear structures, while the radiative contrast captures extranuclear features.
  • the optical scattering contrast captures the morphology of the thin tissue section. In resected tissues this scattering contrast becomes less applicable, and hence was not explored in other samples.
  • FIG. 11 shows an example of TA-PARS imaging.
  • TA-PARS captured the epithelial layer at the margin of resected human skin tissues.
  • the stratum corneum layer was captured in the radiative and non-radiative visualizations concurrently.
  • the radiative visualization provides improved contrast in recovering these tissue layers as compared to the non-radiative image.
  • the TA-PARS captures connective tissues, with sparse nuclei, and elongated fibrin features.
  • the disclosed system was also applied to imaging resected unprocessed rattus brain tissues.
  • the TA-PARS acquisition highlights the gray matter layer in the brain revealing dense regions of nuclear structures.
  • the nuclei of the gray matter layer are presented with higher contrast relative to surrounding tissues in the non-radiative image as compared to the radiative representation. Since nuclei do not provide significant radiative contrast the nuclear structures in the radiative image appear as voids or lack of signal within the specimen. While some potential nuclei may be observed, they may not be identified with significant confidence, as compared to those in the TA-PARS non-radiative representation.
  • structures resembling myelinated neurons can be identified surrounding the more sparsely populated nuclei in that area.
  • the QER or the ratio of the non-radiative and radiative absorption fractions is expected to contain further biomolecule-specific information.
  • the local absorption fraction should correlate directly with radiative relaxation properties.
  • Relative radiative and non-radiative signal intensities may be plotted, and QER may be plotted against reported quantum efficiency (QE) values.
  • the TA-PARS was applied to measure a series of fluorescent dyes with varying quantum efficiencies.
  • the 515 nm excitation was used to generate radiative and non-radiative relaxation signals which were captured simultaneously.
  • FIG. 13 exemplifies images from a QER acquisition process applied to imaging of thin sections of FFPE human tissues. Based on the non-radiative and radiative signals, the QER was calculated for each image pixel, generating a QER image. The result represents a dataset encoding chromophore-specific attributes, in addition to the independent absorption fractions. The QER processing helps to further separate otherwise similar tissue types from solely the radiative or non-radiative acquisitions.
  • a colorized version of the QER image shown in Fig. 13 highlights various tissue components.
  • the low QER biomolecules may appear as a first color (e.g., a color having a lower wavelength or a light blue color), while the high QER biomolecules (collagen, elastin, etc.) may appear as a second color and/or a third color different from (e.g., having a higher wavelength than) the first color (e.g., pink and purple).
  • while the terms first color, second color, third color, fourth color, fifth color, and sixth color are used, aspects disclosed herein are not limited to six predetermined colors.
  • the color appearing in the visualization may have a wavelength proportional to the QER. For example, structures with a higher QER may appear as colors with higher wavelengths (e.g., red) and structures with a lower QER may appear as colors with lower wavelengths (e.g., blue).
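One way such a QER-proportional coloring could be realized is sketched below, assuming a standard blue-to-red colormap and brightness modulation by a total-absorption intensity image; both choices are illustrative and not specified by the disclosure.

```python
import numpy as np
import matplotlib.pyplot as plt

def colorize_qer(qer: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Map QER in [-1, 1] to a blue-to-red colormap and scale brightness by intensity in [0, 1]."""
    hue = (np.clip(qer, -1.0, 1.0) + 1.0) / 2.0            # low QER -> 0 (blue), high QER -> 1 (red)
    rgba = plt.get_cmap("jet")(hue)
    rgba[..., :3] *= np.clip(intensity, 0.0, 1.0)[..., None]
    return rgba[..., :3]
```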
  • the TA-PARS mechanism may provide an opportunity to accurately emulate traditional histochemical staining contrast, such as H&E staining, and TA-PARS may provide label-free histological imaging.
  • the non-radiative TA-PARS signal contrast may be analogous to that provided by hematoxylin staining, while the radiative TA-PARS signal contrast may be analogous to that provided by eosin staining.
  • the TA-PARS may capture label-free features such as adipocytes, fibrin, connective tissues, neuron structures, and cell nuclei. Visualizations of intranuclear structures may be captured with sufficient clarity and contrast to identify individual atypical nuclei.
  • FIG. 14 shows an example of label-free histological imaging applied to FFPE human brain tissue.
  • the non-radiative TA-PARS signal contrast is analogous to that provided by the hematoxylin staining of cell nuclei (Fig. 14, view (a)).
  • a section of FFPE human brain tissue was imaged with the non-radiative PARS (Fig. 14, view (a-i)).
  • This non-radiative information was then colored to emulate the contrast of hematoxylin staining (Fig. 14, view (a-ii)).
  • the same tissue section was then stained only with hematoxylin and imaged under a brightfield microscope (Fig. 14, view (a-iii)), providing a direct one-to-one comparison.
  • These visualizations are expected to be highly similar since the primary target of hematoxylin stain and the non-radiative portion of TA-PARS is nuclei, though other chromophores will also contribute to some degree.
  • the disclosed system may provide true H&E-like contrast in a single (only one or exactly one) acquisition.
  • the TA-PARS may provide substantially improved visualizations compared to previous PARS emulated H&E systems which relied on scattering microscopy to estimate eosin-like contrast.
  • the scattering microscopy-based methods are unable to provide clear images in complex scattering samples such as bulk resected human tissues.
  • the TA-PARS can directly measure the extranuclear chromophores via radiative contrast mechanisms, thus providing analogous contrast to H&E regardless of specimen morphology.
  • the different TA-PARS visualizations were combined using a linear color mixture to generate an effective representation of traditional H&E staining within unstained tissues.
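A minimal sketch of one possible linear color mixture follows, assuming the non-radiative image drives a hematoxylin-like color and the radiative image drives an eosin-like color on a white background; the reference colors and normalization are placeholders, not values from the disclosure.

```python
import numpy as np

HEMATOXYLIN_RGB = np.array([0.40, 0.20, 0.60])   # assumed purple-ish hematoxylin reference color
EOSIN_RGB = np.array([0.90, 0.40, 0.55])         # assumed pink-ish eosin reference color

def emulate_he(non_radiative: np.ndarray, radiative: np.ndarray) -> np.ndarray:
    """Linear color mixture: subtract color from a white background in proportion to the
    normalized non-radiative (nuclear) and radiative (extranuclear) channel images."""
    nr = np.clip(non_radiative, 0.0, 1.0)[..., None]
    r = np.clip(radiative, 0.0, 1.0)[..., None]
    white = np.ones((*non_radiative.shape, 3))
    image = white - nr * (1.0 - HEMATOXYLIN_RGB) - r * (1.0 - EOSIN_RGB)
    return np.clip(image, 0.0, 1.0)
```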
  • An example in resected FFPE human brain tissue is shown in Fig. 14, view (c). The wide field image highlights the boundary of cancerous and healthy brain tissues.
  • To qualitatively compare the TA-PARS to traditional H&E images, a series of human breast tissue sections was scanned with the TA-PARS (Fig. 14, view (d-i) and Fig. 14, view (e-i)), then stained with H&E dyes and imaged under a brightfield microscope (Fig. 14, view (d-ii) and Fig. 14, view (e-ii)).
  • the TA-PARS emulated H&E visualizations are effectively identical to the H&E preparations. In both images, clinically relevant features of the metastatic breast lymph node tissues are equally accessible.
  • H&E simulations may be enhanced by extracting time-domain features, which are discussed in more detail in the below section discussing TD-PARS and Feature Extraction Imaging. While the total amplitude of the PARS modulation captures the local absorption of the excitation, the evolution of the pressure and temperature induced modulations will also capture local material properties.
  • FIG. 15 exemplifies a PARS signal evolution over time.
  • Each PARS excitation event will capture the scattering of the detection and excitation sources, the radiative emissions, and the PARS non-radiative relaxation time domain signal.
  • the PARS decay or evolution time is likely tied to metrics such as the thermal and pressure confinement times which govern traditional photoacoustic imaging. This means that properties such as the thermal diffusivity, conductivity, and speed of sound may dictate the PARS relaxation time.
  • the PARS may then provide further chromophore specific information on a specimen. This may enable chromophore unmixing (e.g. detect, separate, or otherwise discretize constituent species and/or subspecies) from a single excitation event, or single shot functional imaging.
  • An example of a lifetime PARS image in resected rattus brain tissues is shown in Fig. 16.
  • the visualization highlights the nuclei (which may appear as a first color such as white), the surrounding gray matter (which may appear as a second color such as green), and the interwoven myelinated neuron structures (which may appear as a third color such as orange). This unmixing is performed based on the PARS lifetime signals.
  • a rapid lifetime extraction technique may be used to greatly improve the PARS collection contrast.
  • PARS amplitude may be calculated as the difference between the average pre-and post-excitation signal. This acquisition is less sensitive to imaging noise compared to alternative extraction techniques.
  • previously, PARS used a min-max acquired signal approach to extract the PARS-specific signals. By capturing the difference between the maximum and minimum of the signal, the PARS may highlight the total amplitude of the PARS modulation. However, this approach is highly susceptible to collection and measurement noise in the PARS signals.
  • One possible signal extraction method can be performed by determining an average pre-excitation signal. Then the average post-excitation signal is calculated from the initial portion of the lifetime signal. The PARS amplitude is then calculated as the difference between the two average signals. This metric for rapid signal extraction provides substantial improvements in signal to noise ratio, and sensitivity when collecting PARS signals. Since the technique relies on average signals, the PARS collection is substantially less sensitive to acquisition noise.
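A short sketch of this rapid extraction follows, assuming a digitized trace and a known excitation sample index; the window lengths are placeholders.

```python
import numpy as np

def pars_amplitude(trace: np.ndarray, excitation_index: int, pre_window: int = 64, post_window: int = 64) -> float:
    """PARS amplitude as the difference between the average post-excitation signal
    (taken from the initial portion of the lifetime signal) and the average pre-excitation signal."""
    pre = trace[max(0, excitation_index - pre_window):excitation_index].mean()
    post = trace[excitation_index:excitation_index + post_window].mean()
    return float(post - pre)
```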
  • the backscattered detection may be captured and subsequently redirected back to the sample where it interacts with the sample again before it is detected. Each time the detection interacts with the sample, it may pick up further information of the PARS modulation.
  • the non-radiative absorption induced perturbations in the optical properties are visualized using a secondary co-focused detection laser.
  • the detection laser is co-focused with the excitation spot such that the absorption induced modulations may be captured as changes in the backscatter intensity of the detection laser.
  • the signals can be approximated based on the following relationship: PARS_pre-ext ∝ I_det · R, where R is the unperturbed reflectivity of the sample.
  • the signal may be approximated as: PARS_post-ext ∝ I_det · (R + ΔR), where the pressure- and temperature-induced change in reflectivity is denoted by ΔR.
  • the total PARS absorption contrast is then approximated as: PARS_sig ∝ PARS_post-ext − PARS_pre-ext.
  • PARS_sig ∝ I_det · (R + ΔR) − I_det · R = I_det · ΔR.
  • the backscattering of the MP-PARS is then approximated based on the following relationship: MP-PARS_pre-ext ∝ (I_det · R)^n, where R is the unperturbed reflectivity of the sample, and n is the number of times the detection interacts with the sample.
  • the signal may be approximated as: MP-PARS_post-ext ∝ (I_det · (R + ΔR))^n, where the pressure- and temperature-induced change in reflectivity is denoted by ΔR.
  • the total MP-PARS absorption contrast is then approximated as: MP-PARS_sig ∝ MP-PARS_post-ext − MP-PARS_pre-ext.
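Under the proportionalities written above, and assuming the reflectivity change is small (ΔR ≪ R), a first-order expansion suggests how the multi-pass signal scales relative to the single-pass case:

```latex
\mathrm{MP\text{-}PARS}_{\mathrm{sig}}
  \;\propto\; \bigl(I_{\mathrm{det}}(R+\Delta R)\bigr)^{n} - \bigl(I_{\mathrm{det}}R\bigr)^{n}
  \;\approx\; n\, I_{\mathrm{det}}^{\,n}\, R^{\,n-1}\, \Delta R,
\qquad
\mathrm{PARS}_{\mathrm{sig}} \;\propto\; I_{\mathrm{det}}\, \Delta R .
```

This approximate n-fold scaling of the detected modulation is consistent with the description below of MP-PARS acting as an optical amplifier for detected PARS signals.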
  • MP-PARS architectures, such as an architecture 1800 exemplified in FIG. 18, may be oriented such that passes consist of reflection or transmission events, which may occur at normal incidence to the sample or at some relevant transmission or reflection angle. For example, if the target features a particularly strong Mie-scattering angle, it may be advantageous to orient the multiple passes along this direction. Multiple passes may occur along a single (only one or exactly one) path (such as a normal-incidence reflection), or along multiple paths such as a normal-incidence transmission architecture, or even architectures with additional (more than two) pathways to take advantage of additional spatial non-linearities.
  • an MP-PARS architecture 1800 may include an excitation source 1802 (e.g., 266 nm excitation source or laser), one or more detection sources 1804 (e.g., a 405 nm detection source or laser), one or more photodiodes or photodetectors 1806, a circulator 1808, a collimator 1810, one or more mirrors 1810 to guide the excitation and/or detection light, a prism 1816, and a variable beam expander 1818.
  • the MP-PARS architecture 1800 may include a pair of alignment mirrors 1820 to align the excitation and/or detection light, and one or more scanners or scanning heads 1822, 1824 arranged at different sides of the sample.
  • the one or more scanners may include a first scanner 1822 to transmit excitation and detection light to the sample, and a second scanner 1824, arrange with mirror 1826, to allow for multiple passes.
  • a computer 1828 may be used to analyze the received signals and/or control the excitation and detection sources 1802 and 1804.
  • MP-PARS can act as an optical amplifier for detected PARS signals. It can be employed in the same way that laser cavity systems or photomultiplier tubes are implemented to further improve the sensitivity of the measured signal. This may result in substantial improvements in PARS imaging fidelity. PARS may be captured with improved sensitivity to any or all of the radiative, non-radiative, or scattering contrast, facilitating acquisitions with lower imaging powers. This may facilitate imaging of lower concentrations of chromophores or chromophores with lower optical absorption, or may reduce sample exposure. These non-linear effects may be leveraged to improve recovered imaging resolution by taking advantage of non-linear spatial dependencies to provide super-resolution imaging.
  • multi-photon PARS may provide several benefits over traditional PARS excitation.
  • a number of photons are absorbed by a target at virtually the same instant and/or in a single (only one or exactly one) event.
  • the energy of these photons is then added together such that the absorbed photons are equivalent to a single (only one or exactly one) higher energy and shorter wavelength photon.
  • two photons with half the energy and twice the wavelength of the single photon excitation event are absorbed by a chromophore providing analogous excitation.
  • As in fluorescence microscopy, non-linear absorption mechanisms may be leveraged in PARS.
  • PARS targets single photon absorption effects, for example the 266 nm UV excitation of DNA.
  • the PARS may also target multiphoton absorption characteristics such as those used in multiphoton fluorescence microscopy.
  • In multiphoton microscopy, a number of photons are absorbed by a target at virtually the same instant. The energy of these photons is then added together such that the absorbed photons are equivalent to a single higher energy and shorter wavelength photon.
  • the excitation wavelength would be selected as double the traditional value. Two photons would then be absorbed simultaneously providing an excitation event equivalent to standard one-photon excitation (Fig. 19).
  • a 532 nm excitation could be used to target the absorption of DNA.
  • the two photon 532 nm absorption is equivalent to a single 266 nm absorption. Aspects disclosed herein are not limited to 532 nm excitation.
  • the wavelength of the excitation may be configured to be double a predetermined excitation wavelength, such as double of a UV wavelength (e.g., double 100-400 nm) or a UVC wavelength (100-280 nm).
  • the multi-photon PARS may provide several benefits over traditional PARS excitation.
  • multiphoton excitation uses longer-wavelength photons, which are lower energy and penetrate more deeply.
  • moving towards longer wavelengths may provide further biological compatibility by avoiding tissue damage. This is especially relevant in the case of in-situ histology, since the PARS UV excitation may not be compatible with imaging deep into the body. It can also improve the safety of the PARS system for in-situ applications.
  • PARS operates by capturing nanosecond-scale optical perturbations generated by photoacoustic pressures or photothermal temperature signals. These time-domain (TD) modulations are usually projected by amplitude to determine absorption magnitude. A single characteristic intensity value may be extracted from each TD signal to visualize the total absorption magnitude at each point. For example, TD amplitude, computed as the difference between the maximum and minimum of the TD signal, is commonly used to represent the absorption magnitude.
  • Time-evolution of PARS signals may be dictated by material properties such as the density, heat capacity, and acoustic impedance.
  • H&E-like visualizations may be generated directly from PARS time domain data by employing machine learning algorithms which bypass the PARS image reconstruction step. This approach is beneficial compared to direct PARS-to-H&E image-to-image translation as it provides additional valuable information which can help to better discriminate between different tissue types in the image.
  • H&E-like representations may be made by the application of Al image-to-image translation algorithms based on deep neural network architectures such as generative adversarial networks (GANs), conditional generative adversarial networks (cGANs) or Cycle-Consistent Adversarial Networks (cycleGans).
  • Imaging modalities may scan, pixel-by-pixel, capturing a signal over time at each pixel. While scanning over time may be continuous, realistically, signals are recorded periodically or discretely using an image acquisition system. Characteristic values may be extracted from each signal, accomplished by either using a Hilbert transform to find an envelope of the signal, from which the difference between maximum and minimum values may be computed, or by directly computing the difference between the maximum and minimum of the raw signal itself.
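  • As a sketch of the amplitude-projection step described above (Python with NumPy/SciPy; the toy scan volume is an assumption), a single characteristic value may be taken per pixel from either the Hilbert-transform envelope or the raw signal:

```python
import numpy as np
from scipy.signal import hilbert

def characteristic_value(td_signal, use_envelope=True):
    """Collapse one per-pixel time-domain signal to a single intensity value,
    either from the Hilbert-transform envelope or from the raw signal."""
    if use_envelope:
        envelope = np.abs(hilbert(td_signal))
        return envelope.max() - envelope.min()
    return td_signal.max() - td_signal.min()

# Applied pixel-by-pixel over a scan, this yields the amplitude-projected image.
scan = np.random.randn(64, 64, 256)                # (y, x, time) toy volume
image = np.apply_along_axis(characteristic_value, -1, scan)
```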
  • methods and techniques disclosed herein may bypass an image reconstruction stage where images are reconstructed by extracting the amplitude of the captured optical absorption signals or averaging their values over time.
  • Methods and techniques disclosed herein may directly use signal representations as input to the artificial intelligence-based colorization algorithm instead of the pixels of the reconstructed image. In this way, additional valuable information on the underlying tissue can be included to create virtual H&E-like images.
  • some compressed representations of the time domain signal can be used. These, for example, may include, but are not limited to: principal linear components of the signal, coefficients of other signal decomposition methods, salient signal points, etc. Such techniques reduce the dimensionality of datasets and increase interpretability while minimizing information loss.
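  • For example, a compressed representation based on principal linear components might be computed as in the following sketch (NumPy; the toy data and the choice of three components are illustrative assumptions):

```python
import numpy as np

# signals: rows are per-pixel time-domain traces (toy data for illustration)
signals = np.random.randn(10_000, 200)

# Centre the data and take the top-k principal directions via SVD.
mean = signals.mean(axis=0)
_, _, vt = np.linalg.svd(signals - mean, full_matrices=False)
k = 3
components = vt[:k]                         # (k, 200) principal directions

coeffs = (signals - mean) @ components.T    # compressed representation, (N, k)
reconstructed = coeffs @ components + mean  # approximate signals from k values
```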
  • An example of creating an H&E-like visualization by applying the Pix2Pix algorithm is shown in FIGs. 21A and 21B.
  • FIG. 21A shows three principal components of the time domain signals.
  • FIG. 21 B shows the corresponding synthesized H&E image. Differences between FIGS. 20A-B and FIGS. 21A-B may not be readily apparent in black and white, and may be better assessed in color form.
  • FIG. 21A may show some coloring
  • FIG. 20A may be black and white and/or grayscale.
  • FIG. 21 B may be less granular and/or show more color than FIG. 20B.
  • An unsupervised clustering method may be used to form colorized, synthetic H&E images without needing to reconstruct a grayscale image.
  • the clustering method may learn TD features which relate to underlying biomolecule characteristics. This technique identifies features related to constituent biomolecules, enabling single-acquisition virtual tissue labelling. Colorized visualizations of tissue are produced, highlighting specific tissue components.
  • the clustering may be performed on any or all of the PARS radiative, non-radiative, and scattering channels.
  • the PARS TD signals may have specific shapes. However, signals from a given target may vary in amplitude (e.g. based on concentration) and may suffer from noise. Clustering signals by shape and learning an associated prototype for each cluster may be used to determine constituent time-domain features that capture the material-specific information of the underlying tissue target, regardless of the noise and amplitude variation present in the TD signals.
  • a modified K-Means clustering method may be used. Measured signals are treated as vectors, where the vector angle is analogous to signal shape. The distance or difference between TD signals is the sine of the subtended angle, such that orthogonal signals have maximal distance and scaled or inverted signals have zero distance. Cluster centroids are then calculated as the first principal component of the union set of each cluster and its negative, causing the learned centroids to be robust to noise. Once the TD features (centroids) are learned, corresponding feature amplitudes are extracted by performing a change-of-basis from the time- to feature-domain.
  • a broadly absorbed UV excitation may target several biomolecules such as collagen, elastin, myelin, DNA, and RNA with a single (only one or exactly one) excitation. Subsequently, the clustering approach may be used to create enhanced absorption contrast visualizations and to extract biomolecule-specific features from the TD signals.
  • UV excitation may be provided by an excitation light source 2202, such as a 50 kHz 266 nm laser (e.g., WEDGE XF 266, Bright Solutions). Excitation may be spectrally filtered with a prism 2204, then expanded (e.g., with a variable beam expander or VBE 2206) before combination with the detection beam. Excitation light may be guided via one or more mirrors 2208.
  • Detection light may be provided by a detection light source 2212, such as a continuous-wave 405 nm OBIS LS laser.
  • the detection may be fiber-coupled through the circulator 2214, collimated (e.g., using collimator 2216), then combined with the excitation beam via a dichroic mirror 2210.
  • Detection light may be guided via one or more mirrors 2218
  • Combined excitation and detection may pass through a pair of alignment mirrors 2200 and be co-focused through a UV-transparent window onto the specimen.
  • Back-reflected light from the sample may return to the collimator 2216 and circulator 2214 by the same path as forward propagation.
  • the circulator 2214 may re-direct backscattered light to a photodiode 2222 capturing the nanosecond-scale intensity modulations.
  • the stages 2226 may raster scan the specimen over the objective lens, while the excitation pulses continuously.
  • Analog photodiode output may be captured for each excitation event using a high-speed digitizer, forming the PARS TD signals.
  • each PARS TD may be then mapped to a pixel in the final image, which may be output on an electronic display and/or a computer 2228.
  • tissue-specific time-domain features are learned.
  • the feature amplitudes at each pixel are extracted by performing a change-of-basis from the time-domain to the feature-domain.
  • the TD signals may be clustered by shape, but not by amplitude.
  • a given pixel (and its corresponding TD signal) may be expressed in terms of characteristic signal shapes of one or more targets and a residual term.
  • TD signals may be vectors in space R^n, where the dimension, n, of the space is simply the number of discrete TD samples. Because TD signals are treated as Cartesian vectors, the signal shape is then analogous to the vector angle. A unit-vector pointing in the direction of the non-noise portion of the given cluster may define a centroid. A union set may be constructed of the cluster and its negated points, and the centroid may be found as the direction of greatest variance (the principal component from a sample covariance), allowing higher amplitude signals to have the greatest influence.
  • a clustering algorithm is reflected in FIG. 26, and a corresponding method 2700 is reflected in FIG. 27.
  • the calculation of cluster centroids is reflected in line 16, and Singular Value Decomposition (SVD) may be used to extract a first principal component.
  • the inputs to the algorithm are a set S of PARS TD signals and the requested number of clusters (identical to the number of learned features), K.
  • the convergence criteria are specified by a minimum number of moves criterion and a difference in mean residual criterion. These are required to ensure convergence.
  • the algorithm may be run several times, and only the best solution (in terms of minimal mean residual) may be returned.
  • the algorithm initializes by randomly selecting K TD signals to act as initial cluster centroids, shown on lines 1-3 and in step 2702.
  • the number of points that move (change cluster membership) is recorded (lines 9-11).
  • In step 2708, the mean residual is evaluated (line 13), as well as the change in the mean residual from the previous iteration (line 14), starting from zero in the case of the first iteration.
  • In step 2710, the “Centroid Update” step, shown on lines 16-21, is performed: centroids are updated and are calculated as the first principal component of the union set of each cluster and its negative. Practically, this is computed via a Singular Value Decomposition (SVD), shown on line 19.
  • In step 2712, centroids are normalized such that they have unit magnitude.
  • In step 2714, the convergence criteria are checked. If the algorithm has not converged (“No” in FIG. 27), the “Membership Update” step, followed by the “Centroid Update” step, are repeated until the convergence criteria are met (“Yes” in FIG. 27).
  • the algorithm returns, in step 2716, as outputs, a set of cluster labels, indicating which cluster each PARS TD signal is associated to, and a set of K cluster centroids, the learned time-domain features.
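  • The following is a minimal sketch of the shape-based clustering loop described above (Python/NumPy). It is not the exact algorithm of FIGs. 26 and 27; the initialization, convergence thresholds, and toy inputs are assumptions, but the sine-of-angle distance and the SVD-based centroid update follow the description.

```python
import numpy as np

def shape_distance(signals, centroids):
    """Sine of the angle between each signal and each unit-norm centroid:
    orthogonal shapes give distance 1; scaled or inverted copies give 0."""
    unit = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    cos = np.abs(unit @ centroids.T)                       # (N, K)
    return np.sqrt(np.clip(1.0 - cos ** 2, 0.0, 1.0))

def shape_kmeans(signals, k, max_iter=100, min_moves=1, tol=1e-6, seed=0):
    """Cluster TD signals by shape; returns labels and unit-norm centroids."""
    rng = np.random.default_rng(seed)
    n = signals.shape[0]
    centroids = signals[rng.choice(n, size=k, replace=False)].astype(float)
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    labels = np.full(n, -1)
    prev_residual = 0.0
    for _ in range(max_iter):
        # Membership update: assign each signal to the nearest centroid.
        dist = shape_distance(signals, centroids)
        new_labels = dist.argmin(axis=1)
        moves = int((new_labels != labels).sum())
        labels = new_labels
        residual = dist[np.arange(n), labels].mean()
        # Centroid update: first principal component of each cluster
        # unioned with its negative (zero-mean by construction), via SVD.
        for j in range(k):
            members = signals[labels == j]
            if len(members):
                union = np.vstack([members, -members])
                _, _, vt = np.linalg.svd(union, full_matrices=False)
                centroids[j] = vt[0] / np.linalg.norm(vt[0])
        # Convergence: few membership moves and a small change in mean residual.
        if moves < min_moves and abs(residual - prev_residual) < tol:
            break
        prev_residual = residual
    return labels, centroids

# Example usage on toy data:
# labels, features = shape_kmeans(np.random.randn(5000, 200), k=3)
```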
  • PARS TD signals may contain sufficient information to identify biomolecules based on their clustered TD features. Such characteristics may be transferrable across images of different tissue specimens. Feature identification may be performed on an initial specimen, then transferred to others, producing similarly convincing results. Moreover, this technique offers unique advantages as the clustering approach requires no prior information, with the exception of the number of clusters. Training may be performed blindly across the signals captured within the specimen of interest. This is especially beneficial in complex specimens such as the resected brain tissues explored here. The challenge is that blindly clustering for a pre-selected number of features does not guarantee that a singular biomolecule/tissue type will be isolated per feature. Each cluster simply targets a unique characteristic of the PARS TD signals, which may be used to highlight distinct tissue components.
  • Biomolecules may be visualized based on their PARS TD characteristics. This method may enable a single (only one or exactly one) broadly absorbed excitation source to provide otherwise inaccessible material specificity, while simultaneously targeting the optical absorption of several biomolecules. This can enhance absorption contrast visualizations, acquired in a fraction of the time compared to analogous multiwavelength approaches. This enables several new avenues for label-free PARS microscopy by adding an additional dimension to the absorption contrast, vastly expanding the potential for biomolecule specificity.
  • nonmodulated scattering may be approximated by using the mean of both pre- and post-modulated regions, from which the PARS amplitude and time-domain information can be extracted. More refined approaches such as partial curve fitting of specific pre- and post-modulated regions can also be envisioned with the same end goal.
  • additional information may also be provided by recording various analog-filtered instances of a single (only one or exactly one) PARS signal.
  • a relatively unfiltered signal may be acquired alongside a highly band-passed signal by splitting the original analog signal from the photo detector and recording it on two separate channels. From these, intelligent methods such as the aforementioned K-means approach may be utilized independently on the various recorded filtered iterations. As these each represent highly independent signal measurements, additional signal fidelity may be extracted from such processes allowing for improved sensitivity.
  • additional information may also be provided by taking advantage of expected spatial correlation between adjacent points.
  • a data volume may be reconstructed with the two traditional lateral image axes, along with a third axis containing each respective time-domain. This may facilitate lateral processing operations prior to time-domain signal extractions.
  • mutually dependent and mutually independent dependencies along the lateral and time axes may be leveraged to approximate a significantly lower-noise central signal.
  • Similar non-intelligent approaches may be performed on any or all of the PARS radiative, non-radiative, and scattering channels.
  • PARS time-domain (TD) signals can be analyzed, when there are multiple absorption events occurring in close proximity or simultaneously in time. This may result in overlapping PARS TD signals.
  • intelligent clustering approaches can be used to extract and isolate the different absorption events, and time resolved signals from one another effectively unmixing the different PARS events, even though they overlap in time.
  • intelligent clustering methods can also be used to extract maximally different signal combinations from the combined PARS time domain signals.
  • two different wavelengths of PARS excitation (e.g., 266 nm and 532 nm) may be applied such that the PARS non-radiative TD signals are blended.
  • Intelligent clustering methods, in this example K-means, are applied to optimally extract information from the blended signals.
  • FIG. 84 shows an example of PARS non-radiative time domain features extracted from overlapping 532 nm and 266 nm excitation events. The three features represent the maximally different absorption combinations, which optimally define the PARS signals based on the difference in absorption contrast at the two wavelengths.
  • Transforming the PARS signals to view them with respect to these clusters may provide enhanced separation of the underlying biomolecules. This is because the signals are represented based on the difference in magnitude between the two absorption events, rather than their direct absorption magnitude at each excitation event.
  • An intelligent clustering method applying K-means clustering with a modified approach to compute cluster centroids is described herein.
  • TD signals are treated as Cartesian vectors in space R^n, where n corresponds to the number of TD samples, and thus the shape of the signal is associated with the angle of the corresponding vector, and the distance between TD signals is quantified by the sine of the angle between them, resulting in a maximum distance for orthogonal signals and zero distance for scaled or inverted signals.
  • Cluster centroids are computed as the principal component of the combined set of each cluster and its negative, ensuring that the learned centroids are resilient to noise.
  • the amplitudes of the learned TD features (centroids) contained within each time domain are extracted by transforming from the time-domain to the feature-domain. This is performed by multiplying each TD signal with the pseudo-inverse of F [2].
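  • A minimal sketch of this change-of-basis follows (NumPy; the matrix shapes and toy data are assumptions, with the learned features F stored as rows):

```python
import numpy as np

# F: learned time-domain features (centroids) as rows, shape (K, n_samples)
# signals: per-pixel TD signals, shape (N, n_samples)
F = np.random.randn(3, 200)
signals = np.random.randn(10_000, 200)

# Feature amplitudes: least-squares coefficients of each signal in the
# feature basis, obtained with the Moore-Penrose pseudo-inverse of F.
amplitudes = signals @ np.linalg.pinv(F)      # shape (N, K)

# Each column of `amplitudes` can be reshaped to the image dimensions to
# form one of the K feature images.
```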
  • Extracted features can then be used for colorization, direct visualization, or further pixel level analysis as discussed in the next section on the PARS Data Vectors.
  • the PARS-TD features can be used to reduce data volumes for colorization. Using only features with the maximal information, the model’s prediction power is improved by eliminating redundant data, increasing contrast between the selected features, and reducing the training volumes and times.
  • FIG. 85 An example a multi-channel virtual staining architecture 8500 is shown in FIG. 85 for signal processing and virtual staining of PARS image data.
  • a feature learning process 8510 of K features takes place using a representative subset of the NR TD signals.
  • a feature extraction process 8520 is then performed on all the TD signals to form K feature images.
  • feature learning process 8510 of K features takes place using a subset (shown in red box) of the NR channel TD signals.
  • feature extraction is performed on all the TD signals of the data in hand forming K feature images.
  • NR images of each excitation wavelength (266 nm and 532 nm in this case) and R images are extracted separately and passed along with the K feature images to the feature selection phase.
  • the selected features are then used as the input data to an example virtual staining machine learning model 8540, which can be a MC-GAN model, and the true H&E image 8550 is used as the model ground truth.
  • using the extracted features may enhance the model’s prediction power by eliminating redundant data and increasing contrast between the selected features.
  • An example of this is shown below in five sets of images 8600 in FIG. 86, which shows that the feature-based colorization implemented using the architecture shown in FIG. 85 outperforms alternative methods.
  • FIG. 86 shows a comparison of virtual staining results using different combinations of PARS feature images as inputs: (a) RGB image of raw PARS data where R: NR532, G: R266, B: NR266 (displayed for visualization), highlighting different parts of a human skin tissue sample; (b)-(d) show the worst, moderate, and best results, respectively, using the feature combination labeled for each; and (e) true H&E image of the same field-of-view.
  • For each PARS event, a PARS feature vector may be formed.
  • An example PARS feature vector is a PARS data vector.
  • the PARS data vector for a given pixel can be thought of as a Euclidean vector in ‘n’ dimensional space, where ‘n’ is the number of PARS features in a given vector.
  • This feature vector or data vector may include primary measurements, e.g., radiative and non-radiative signal amplitudes and energy, radiative and non-radiative signal lifetime or signal features, or may include secondary measurements extracted as different combinations, calculations or ratios from the primary signals.
  • An example of a secondary measurement may include the quantum efficiency ratio (QER), or the total absorption (TA), or ratio of radiative or non-radiative absorptions at different wavelengths.
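  • For illustration, a per-pixel PARS data vector might be assembled as follows (Python/NumPy). The specific QER and TA formulas shown are assumptions made for the sketch, not definitions taken from this disclosure:

```python
import numpy as np

def pars_data_vector(radiative_amp, nonradiative_amp,
                     radiative_lifetime, nonradiative_lifetime,
                     scattering):
    """Assemble primary measurements plus two illustrative secondary
    measurements into a single per-pixel PARS data vector."""
    total_absorption = radiative_amp + nonradiative_amp            # assumed TA
    qer = radiative_amp / total_absorption if total_absorption else 0.0  # assumed QER
    return np.array([radiative_amp, nonradiative_amp,
                     radiative_lifetime, nonradiative_lifetime,
                     scattering, total_absorption, qer])

vec = pars_data_vector(0.8, 1.6, 2.3e-9, 4.1e-9, 0.4)   # one pixel / PARS event
```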
  • An example of a PARS feature vector 8700 is presented in FIG. 87, which shows an example of a PARS data vector 8700.
  • the example PARS feature vector 8700 is not presented as an exhaustive list, only a representation of some of the potential data which is extracted for each image pixel.
  • the PARS feature vector may contain any information which is collected and extracted from each PARS event.
  • the PARS feature vectors can then be processed further for pixel level analysis or may be passed directly into a colorization/ visualization algorithm e.g., an image generator model.
  • the PARS feature vectors may be directly correlated against ground truth tagging such as histochemical, or immunohistochemical staining. This may provide a one-to-one mapping between PARS data vectors, and different histochemical stains, or their underlying biomolecule targets.
  • This process allows for a PARS “signature/fingerprint” or ground truth PARS data vector to be calculated for a given biomolecule, or mixture of biomolecules. For example, this could be used to develop a fingerprint for cells expressing HER2 protein. This could then be used as a ground truth to test if cells were expressing HER2 protein, or not.
  • the same process of developing a “ground truth” PARS data vector could be performed for any biomolecule, or mixture of biomolecules.
  • intelligent blinded methods such as clustering (e.g., k-means, or principal component analysis) may be applied directly to the PARS data vectors to identify unique groups of constituent features. This may provide different representations of the data which better separate underlying biomolecules. These methods may also be used to determine which constituents of the PARS data vector provide optimal identification of specific tissue features of interest. This approach may in turn be used to reduce PARS data volumes, while retaining as much detail of the underlying composition as possible.
  • the signals may be processed in any vector space (e.g., polar, Euclidean, etc.).
  • This can leverage many different vector processing methods. For example, the relative presence of a biomolecule at a given pixel may be calculated by projecting that pixel's PARS vector onto the ground truth vector for the target biomolecule. In Euclidean space, this operation is performed by taking the dot product of the ground truth PARS data vector and the pixel's PARS data vector.
  • This method may be optimal for use in hardware accelerated processing, such as CUDA, or graphics card-based processing.
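  • A minimal sketch of this projection is shown below (NumPy; the fingerprint values are illustrative assumptions). Because it reduces to per-pixel vector arithmetic, it vectorizes readily for GPU-style processing:

```python
import numpy as np

def relative_presence(pixel_vector, ground_truth_vector):
    """Project a pixel's PARS data vector onto a ground-truth 'fingerprint'
    vector; the normalized dot product estimates the relative presence of
    the target biomolecule at that pixel."""
    gt_unit = ground_truth_vector / np.linalg.norm(ground_truth_vector)
    return float(np.dot(pixel_vector, gt_unit))

her2_fingerprint = np.array([0.9, 0.2, 0.4, 0.1])   # illustrative values only
pixel = np.array([0.7, 0.3, 0.5, 0.2])
print(relative_presence(pixel, her2_fingerprint))
```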
  • properties such as the thermal diffusivity, conductivity, and speed of sound may dictate the PARS relaxation time.
  • Features related to temperature, speed of sound, and molecular information may be extracted from time-domain signals.
  • two targets may have a same or similar optical absorption but slightly different other characteristics such as a different speed of sound, which may result in a different decay, evolution, and/or shape of the signals.
  • the decay, evolution, and/or shape of the signals may be used to determine or add novel molecular information to PARS images.
  • the rate at which the signal returns to the background scattering level may be determined by the local thermal diffusivity.
  • regions with, for example, higher thermal diffusivity may feature shorter signal lengths as opposed to regions with lower thermal diffusivity. This may be used to differentiate between cell nuclei and surrounding regions with similar optical absorption.
  • the signal lifetime may also be affected by the local speed of sound.
  • Aluminum and copper will feature different thermal diffusivity and speed of sound facilitating multiplexing by solely measuring signal lifetime.
  • FIG. 31 exemplifies two signals with different lifetimes.
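  • As one hedged illustration of lifetime-based discrimination, a single-exponential decay constant may be fitted to each TD signal (Python with SciPy); the decay model, sample rate, and synthetic targets below are assumptions, not measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, amplitude, tau, offset):
    return amplitude * np.exp(-t / tau) + offset

def signal_lifetime(td_signal, dt):
    """Fit a single-exponential decay to the post-excitation portion of a
    PARS TD signal and return the decay constant (lifetime)."""
    t = np.arange(len(td_signal)) * dt
    p0 = (td_signal.max(), t[-1] / 5, td_signal[-1])   # rough initial guess
    params, _ = curve_fit(decay_model, t, td_signal, p0=p0, maxfev=5000)
    return params[1]

# Two synthetic targets with equal amplitude but different decay behaviour.
dt = 1e-9
t = np.arange(200) * dt
fast = np.exp(-t / 20e-9) + 0.01 * np.random.randn(200)
slow = np.exp(-t / 80e-9) + 0.01 * np.random.randn(200)
print(signal_lifetime(fast, dt), signal_lifetime(slow, dt))
```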
  • As shown in FIG. 32, by acquiring two (or more) unique absorption-based measurements (radiative & non-radiative), local variations in these acquisitions may be used to compensate for excitation pulse energy variations. For example, two acquisitions may be compared for similar local (pixel level) variations which are near- or sub-resolution in spacing. Rapid local variations are unlikely to be caused by spatial variations in the sample, as it is not expected that the system would provide such a level of spatial discrimination. As such, similar variations may be interpreted as similar reconstruction errors between the two visualizations. This interpretation can then be used to provide post-imaging intensity correction providing additional qualitative recovery.
  • While FIG. 32 shows an example of autofluorescence-based compensation, aspects disclosed herein are not limited to autofluorescence and may use other absorption-based measurements.
  • a chirped-pulse (a pulse with varying wavelength along the length of the pulse) may be used for detection, and the various wavelength components, which may now encode time information, may be spatially separated using one or more diffractive or dispersion elements such as prisms or gratings.
  • This process may provide significant improvements in time-resolving capabilities while maintaining high signal fidelity by distributing the detection over a substantial number of detectors.
  • Such an architecture would have clear applications such as combining with a line-scanning architecture where detection is made over a large array such as a camera, where the two spatial coordinates of the camera now encode one spatial dimension and one temporal dimension from the sample.
  • Other methods of streaking the time-axis across a sensor array could also be envisioned, such as the use of a high speed optical scanner.
  • the backscattered detection light carrying the PARS modulation may be distributed across an arrangement of integrating photo-detecting units.
  • a tunable delay may be introduced between the integration start time of each photo-detecting unit (e.g., by using a rolling shutter, predetermined trigger sequence, delayed binning, and/or capturing differently timed sections of the recovered signals). If the delay time is shorter than the photo-detecting unit integration time, it is then possible to reconstruct a signal with a time resolution defined by the imposed delay.
  • PARS time domain information can be extracted by taking the derivative of these time-spaced integration windows and/or by analyzing their common regions when plotted. A visual depiction of this acquisition method is shown in FIG. 34.
  • a time domain signal may be captured leveraging a CCD/CMOS camera sensor.
  • the rows of the CCD/CMOS camera are the photodetecting units which capture the signal in a rolling shutter fashion.
  • a PARS time domain signal can be constructed with a time resolution greater than that of a single integrating sensor.
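  • A minimal sketch of reconstructing a time-resolved signal from delayed integration windows follows (NumPy). It assumes each photo-detecting unit integrates from its staggered start time until well after the signal has decayed, so that adjacent differences recover the signal at the resolution of the imposed delay; the toy signal and delay values are illustrative:

```python
import numpy as np

# Toy PARS modulation sampled on a fine grid.
dt_fine = 0.1e-9
t = np.arange(0, 200e-9, dt_fine)
signal = np.exp(-t / 30e-9)

delay = 2e-9          # delay between integration start times of adjacent units
n_units = 60
starts = np.arange(n_units) * delay

# Each photo-detecting unit integrates from its (staggered) start time onward.
windows = np.array([signal[t >= s].sum() * dt_fine for s in starts])

# Adjacent differences (a discrete derivative) recover the signal at a time
# resolution set by the imposed delay, not by any single unit's speed.
reconstructed = -np.diff(windows) / delay
time_axis = starts[:-1]
```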
  • data may be compressed using digital and/or analog techniques.
  • raw time-domain signals may be appropriately represented by their respective K-means weights. If, for example, three such prototypes were in use on a particular dataset, rather than storing full time domains (~200+ samples), the time-axis may be well compressed to simply three values or floats. Similar such extracted features may be used in lieu of full non-compressed time domains for the purposes of decreased system RAM usage, reduced data bandwidth requirements, reduced systems storage loads, etc.
  • techniques and methods disclosed herein may allow a direct construction of a colorized H&E simulated image, bypassing a grayscale or scalar-amplitude based reconstruction.
  • the colors used may emulate those traditionally used in H&E stains, such as various shades of pink, purple, and/or blue.
  • aspects disclosed herein are not limited to pink, purple, and/or blue colors, and systems and processors may be configured to use other colors.
  • red, green, and blue color channels may be used to represent three extracted K-means prototypes.
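  • For illustration, three extracted prototype (feature) amplitude maps might be mapped to red, green, and blue channels as follows (NumPy); the simple per-channel min-max normalization is an assumed choice, not a prescribed colorization scheme:

```python
import numpy as np

def to_rgb(feature_amplitudes, shape):
    """Map three per-pixel prototype amplitudes onto the red, green, and blue
    channels of an image using per-channel min-max normalization."""
    rgb = feature_amplitudes[:, :3].reshape(*shape, 3).astype(float)
    rgb -= rgb.min(axis=(0, 1), keepdims=True)
    rgb /= rgb.max(axis=(0, 1), keepdims=True) + 1e-12
    return rgb

amps = np.abs(np.random.randn(256 * 256, 3))    # toy per-pixel feature amplitudes
image = to_rgb(amps, (256, 256))                # displayable with e.g. matplotlib
```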
  • these visualizations may be displayed in combination with and/or overlaid with other visualizations on a user interface screen.
  • a bright field image of the sample may form the background of the presented PARS visualizations.
  • Such augmentations may be used to help maintain orientation between the required visualizations and the original sample.
  • FIGs. 38A and 38B show two example architectures 3800, 3850 for generating one or more inferences regarding a sample.
  • the architectures 3800, 3850 may include a PARS system 3801 , which may include one or more of the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems.
  • the PARS system 3801 may be a PARS system from FIG. 5 described above, for example.
  • the PARS system 3801 detects generated signals in the detection beam(s) returning from a given sample. These perturbations may include but are not limited to changes in intensity, polarization, frequency, phase, absorption, nonlinear scattering, and nonlinear absorption and could be brought on by a variety of factors such as pressure, thermal effects, etc.
  • the sample, which may be an unstained sample, may be an in vivo or an in situ sample.
  • it may be tissue underneath the skin of a patient.
  • it may be a tissue on a glass slide.
  • the PARS system 3801 , 3901 may operate by capturing nanosecond-scale (or picosecond scale) optical perturbations generated by photoacoustic pressures or photothermal temperature signals. These time-domain (TD) modulations are usually projected by amplitude to determine absorption magnitude. A single characteristic intensity value may be extracted from each TD signal to visualize the total absorption magnitude at each point. For example, TD amplitude, computed as the difference between the maximum and minimum of the TD signal, is commonly used to represent the absorption magnitude.
  • the PARS system 3801 , 3901 may operate by capturing optical perturbations generated by thermal pressure perturbations, in addition to or as alternative of the optical perturbations generated by photoacoustic pressures and photothermal temperature signals.
  • Signals detected by the PARS system 3801 , 3901 may include, for example, absorption spectra signals, radiative signals, non-radiative signals, scattering signals, or a combination of any of the above mentioned signals.
  • Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a given sample.
  • the sample absorbs energy, i.e., photons, from the radiating field.
  • the intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum.
  • a number of contrasts can be obtained across a broad spectrum of wavelengths to characterize a biomolecule's response across an excitation range (e.g., 190 nm to 20 µm).
  • the signals may be affected by material properties such as: speed of sound, density, compressibility, shear modulus, pressure, stiffness, bulk modulus, viscoelasticity, thermal diffusivity, heat capacity, conductivity, viscosity, absorber size and shape, and temperature.
  • the signals may also be affected by: conductivity, viscosity, temperature, and polarity.
  • the various signals from the PARS system 3801 , 3901 may be processed to extract one or more PARS features 3804, 3904.
  • one or more PARS features 3804, 3904 may represent one or more contrasts.
  • One or more PARS features 3804, 3904 may include one or more PARS diagnostic vectors.
  • the one or more PARS features 3804, 3904 may represent one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
  • a feature vector for a machine learning architecture for making one or more determinations to help with a diagnosis may be constructed to include one or more of: PARS features 3804, 3904, features extracted from time-domain (TD) modulations such as absorption magnitude and intensity value, TD post-excitation average, radiative channel, scattering channel, H-stain, E-stain, Jones' Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain.
  • the extracted PARS features 3804, 3904 may be used to segment nuclei for quantification, which may be required for making a diagnosis.
  • the quantification may be a cancer quantification.
  • the quantification may include, for example, quantification of nucleolus, nuclei, shape, size, and circularity.
  • FIG. 44 shows examples of contrasts 4400 extracted from PARS signals in tissue slides.
  • the examples include for example, non-radiative, radiative, and scattering contrasts.
  • FIG. 45 shows examples of combinations of contrasts 4500, from the combination of PARS signals into unique contrasts.
  • the processing of said signals may include, by the PARS system 3801, 3901: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
  • the excitation beam being focused at or below the sample may include being at or below a surface of the sample.
  • a system, or multiple systems other than the PARS system 3801 may be used to generate the range of absorption spectra signals, radiative signals, non-radiative signals, attenuation signals, scattering signals, or a combination of any of the above mentioned signals that are used to generate the one or more features 3804.
  • These systems may include conventional imaging systems or imaging modalities.
  • the extracted PARS features 3804, 3904 may include features informative of an attenuation contrast provided by the at least one of the plurality of signals.
  • attenuation can be the reduction of the intensity of the excitation beam generated by the PARS system 3801 , 3901 as it traverses matter (e.g., tissue).
  • the contrast between the tissues can be generated by the difference between the beam signal attenuation, which may be influenced by density and atomic number of the respective tissues.
  • a machine learning model 3802 shown in FIG. 38A may be trained and deployed to generate simulated stained image 3806 such as H&E-like stained images.
  • the machine learning model 3802 may also be trained and deployed to generate one or more inferences 3808 that can be displayed at a user interface 4000 of a user application 3825, which may be installed at a user device.
  • a database 3815 may be used to store the one or more simulated stained images 3806, and to transmit one or more simulated stained images 3806 to the user application 3825 for display or further processing.
  • the simulated stained images 3806 may include images stained with, for example, at least one of: Hematoxylin and Eosin (H&E) stain, Jones' Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain.
  • the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain, Immunohistochemistry (IHC), histochemical stain, and In-Situ Hybridization (ISH).
  • the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue.
  • preserved tissue sample may include a sample preserved using formalin, or alcohol fixed using alcohol fixatives.
  • an image generator 3812 may be used to generate simulated stained image 3806 such as H&E-like stained images.
  • a machine learning model 3822 may be trained and deployed to generate one or more inferences 3808 that can be displayed at a user interface 4000 of a user application 3825, which may be installed at a user device.
  • a database 3815 may be used to store the one or more simulated stained images 3806, and to transmit one or more simulated stained images 3806 to the user application 3825 for display or further processing.
  • FIG. 39 shows yet another example machine learning architecture 3900 for generating one or more inferences 3908 based on extracted features 3904 from a sample.
  • the extracted features may be PARS features 3904, and may be generated in a similar manner as the PARS features 3804 from FIGs. 38A and 38B.
  • the features 3904 may be extracted by exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
  • the excitation beam and interrogation beam may be generated by a PARS system 3901 , which is a similar system to PARS system 3801.
  • the extracted features 3904 may be processed by an image generator 3912, which may be similar to the image generator 3812 from FIG. 38B, to generate (or convert the features to) one or more simulated stained images 106 (e.g., H&E-like stained images).
  • a machine learning model or architecture 3922 which may be similar to the machine learning model 3822 from FIG. 38B, may be used to generate one or more inferences 3908 based on the one or more simulated stained images 106.
  • the inferences may be sent to a user application 3925 for display or further processing.
  • the machine learning model 3802 and image generator 3812, 3912 are configured to generate one or more simulated stained images 3806, 106 based on the one or more extracted PARS features 3804, 3904, which are extracted based on one or more PARS signals.
  • the one or more PARS signals may include radiative and non-radiative signals.
  • the non-radiative signals may be processed to generate features representative of amplitude or absorption contrast analogous to that provided by hematoxylin staining, while the radiative signals may be processed to generate features representative of amplitude or absorption contrast analogous to that provided by eosin staining. Therefore, the machine learning model 3802 and image generator 3812, 3912 are trained and configured to generate H&E-like images, as one type of simulated stained images 3806, 106 based on the radiative and non-radiative signals.
  • the radiative and non-radiative signals may be obtained from a PARS system 3801 , 3901.
  • the radiative and non-radiative signals may be obtained from a different system or imaging modality.
  • non-radiative signals may be obtained via photothermal microscopy and photoacoustic microscopy, for example, while radiative signals may be obtained via multi- or single-wavelength autofluorescence microscopy, stimulated or spontaneous Raman spectroscopy, or autofluorescence lifetime microscopy.
  • the non-radiative signals include at least one of: a photothermal signal and a photoacoustic signal.
  • the radiative signals include one or more autofluorescence signals.
  • the image generator 3812, 3912 may include a stain selector 3914 to select one or more stains applicable to an image (e.g., PARS black and white image) generated based on the PARS features 3904.
  • the image generator 3812, 3912 may include a colorization machine learning architecture, such as a generative adversarial network (GAN), which may include, for example, a cycle-consistent generative adversarial network (CycleGAN).
  • the image generator 3812, 3912, or the image generator in the machine learning model 3802, may be implemented using one of: a CycleGAN, a Pix2Pix model (a type of conditional GAN), a Stable Diffusion model, a U-Net model, an encoder-decoder model, a convolutional neural network, a regional convolutional network, or the like.
  • Image segmentation is a process to extract a region of interest (ROI) through a semiautomatic or automatic process. It divides an image into areas based on a specified description, such as segmenting body organs/tissues in the medical applications for border detection, tumor detection/segmentation, and mass detection.
  • Image registration is a process to align two images from the two domains (TA- PARS and H&E) through a semiautomatic or automatic process.
  • the TA-PARS may be an input image
  • the H&E image may be a reference image.
  • the system via for example, the image generator 3812, 3912, may be configured to select points of interest in the two images (e.g., input image and reference image), associate each point of interest in the input image to its corresponding point in the reference image, and transform at least one of the input image and the reference image so that both images are aligned. In some cases, both images are transformed and aligned.
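  • A hedged sketch of such a registration step is shown below (Python with OpenCV). The ORB-keypoint/affine pipeline is one illustrative choice for matching points of interest and aligning the input and reference images, not the specific registration method of this disclosure:

```python
import cv2
import numpy as np

def register(input_img, reference_img):
    """Align a grayscale input image (e.g., a TA-PARS image) to a reference
    image (e.g., an H&E image) via matched keypoints and an affine transform."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(input_img, None)
    k2, d2 = orb.detectAndCompute(reference_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = reference_img.shape[:2]
    return cv2.warpAffine(input_img, M, (w, h))
```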
  • the image generator 3812, 3912 may include one or more machine learning techniques for image segmentation, including for example: (1) traditional methods: threshold segmentation, region growth segmentation; (2) classification and clustering methods: K-nearest neighbors (KNN), kernel principal component analysis (kPCA), fuzzy C-means (FCM), Markov random field models (MRF), dataset-based guided segmentation, expectation maximization (EM), Bayesian methods, support vector machines (SVM), artificial neural networks (ANNs), random forest methods, and convolutional neural networks (CNNs); and (3) deformation model methods: parametric deformation models, geometric deformation models.
  • the stains selectable may include, for example, at least one of (or any combination of):
  • one or more stained images 3806, 106 may be generated from the same sample. Furthermore, additional stains can be generated by generating and combining separate stains.
  • the image generator 3812, 3912 may be configured to virtually generate the constituent stains.
  • Masson’s Trichrome Stain is a three color staining procedure including:
  • Acid dyes (e.g., Biebrich scarlet + acid fuchsin)
  • image generator 3812, 3912, or an image generator within the machine learning model 3802 can be configured to apply each constituent stain of any particular stain, by applying the respective stains to an image (e.g., a black and white PARS image) generated based on the PARS features 3804, 3904.
  • the image generator 3812, 3912, or an image generator within the machine learning model 3802 can be trained to generate, at inference time, an image showing tissue map overlay by processing the PARS features.
  • the overlay can identify at least one salient feature, the at least one salient feature comprising a biomarker, cancer, cancer grade, parasite, toxicity, and/or inflammation.
  • the overlay can suppress nonsalient features.
  • a user through user application 3825, 3925 can switch between different stained images 3806, 106. This is not always possible and is rarely practical after chemically labeling the sample in the traditional manner.
  • the machine learning architectures 3850, 3800, 3900 may provide the same contrast as the constituent stains.
  • the machine learning architectures 3850, 3800, 3900 can mimic these individual stains and mix/match them together digitally to create different combination stains. Because stains are combined digitally (using intrinsic contrast) instead of chemically, new stain combinations may be feasible based on the given sample.
  • the machine learning architectures 3850, 3800, 3900 may generate stains or stain combinations, which may include stains or stain combinations that have not been generated previously, or cannot be achieved via conventional chemical staining method.
  • the machine learning architectures 3850, 3800, 3900 may generate molecular stains, which may not be possible with traditional staining methods.
  • Inferences 3808, 3908 generated by the architectures 3800, 3850, 3900 may include, without limitation: a prediction of a biomarker, a prediction of one or more of survival time, drug response, patient level phenotype/molecular characteristics, mutational burden, tumor molecular characteristics, transcriptomic features, protein expression features, patient clinical outcomes, a resistance index associated with a tumor and surrounding tissue based on one or more PARS signals, a determination of the best tissue sample in a collection of samples for testing, a verification that a chosen tissue sample contains an adequate quantity of tumor tissue for analysis, a determination, among a plurality of PARS signals, of which signals are suspicious or non-suspicious, and generating a report based on identification of suspicious signals, identification of locations of biomarkers in tumor tissue and surrounding margin region, prediction of a treatment outcome or a resistance prediction or treatment recommendation, a cancer qualification and a cancer quantification for a specimen.
  • Inferences 3808, 3908 generated by the architectures 3800, 3850, 3900 may further include, without limitation, at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
  • FIG. 48 shows examples of different tissue types imaged and identified using the machine learning architectures 3850, 3800, 3900, including a skin tissue 4800 and breast tissue 4850.
  • the inference 3808, 3908 may include, for example, the determination that the image on the left contains skin tissue, and the image on the right contains breast tissue.
  • FIG. 49 shows unique keratin pearl features identified and isolated within an example simulated stained image 4900.
  • the inference 3808, 3908 may include, for example, an identification of areas showing keratin pearls.
  • FIG. 50 shows biomarkers of localized inflammation and malignancy, identified and encircled based on an example simulated stained image 5000 including label-free visualizations.
  • the inference 3808, 3908 may include, for example, an identification of an area likely belonging to cancer and an area likely belonging to lymphocytes.
  • the extracted PARS features 3804, 3904 may be used by the system to label unique biomarkers, such as, for example, red blood cells, tissue types, melanin, collagen, different proteins, and so on.
  • the image generator 3812, 3912, or an image generator within the machine learning model 3802 can be trained to generate, at inference time, an image showing tissue map overlay by processing the extracted PARS features 3804, 3904.
  • the overlay can identify at least one salient feature, the at least one salient feature may be a biomarker location and a biomarker value for an identified tissue region on the image.
  • FIG. 51 shows different cell types and tissue regions, identified and delineated within an example simulated stained image 5100.
  • the inference 3808, 3908 may include, for example, an identification of an area likely belonging to one of: hair follicle, sebaceous gland, and epidermis layers.
  • FIG. 52 shows an example of an abnormal tissue region, identified and delineated from an example simulated stained image 5200.
  • the inference 3808, 3908 may include, for example, an identification of an area likely belonging to abnormal tissue.
  • the user application 3825, 3925 may, at execution time, render a user interface (UI) 4000 as shown in FIG. 40.
  • the UI 4000 may include a first area 2510 showing features 3804, 3904 from the PARS system 3801, 3901, a second area 2512 showing a first simulated stained image, and a third area 2516 showing a second simulated stained image.
  • One or more inferences 3808, 3908 may be displayed within area 2517.
  • One or more stain selectors 2520, 2540 may be provided to the user, each with a respective scroll bar 2528, 2538 for zooming in or out of rendered simulated stained images shown in areas 2515, 2516.
  • moving the scroll button within scroll bar 2528 for the first stain selector 2520 may cause the first stained image in area 2515 to zoom in or out.
  • moving the scroll button within scroll bar 2538 for the second stain selector 2540 may cause the second stained image in area 2516 to zoom in or out.
  • the one or more inferences 3808, 3908 displayed within area 2517 may include clinically-significant determinations generated by the machine learning models 3802, 3822, 3922.
  • the UI 4000 can further include visualization to assist a user (e.g., a clinician), such as a report generated by the machine learning models 3802, 3822, 3922.
  • the visualization or report can be interactive.
  • the visualization or report can include a visual overlay that highlights salient features while suppressing or hiding non-salient features.
  • the visualization may be provided in real-time to assist surgeons, for example, by showing a margin of tumor tissue.
  • the plurality of features 3804, 3904 may be supplemented with at least one of features informative of image data obtained from complementary modalities including for example, at least one of: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI).
  • the image data may further include photoactive labels for contrasting or highlighting specific regions in the images.
  • the plurality of features 3804, 3904 may be supplemented with at least one of features informative of one or more of the following information:
  • a user application, which may be the user application 3825, 3925 or a separate user application of the architecture in FIG. 38A, 38B or 39, may be configured to render a user interface 5900 shown in FIG. 59 to select and analyze one or more images generated by the architecture in FIG. 38A, 38B or 39.
  • the UI 5900 may include a first area 5920 showing a plurality of procedures 5930 and a second area 5950 showing corresponding procedure information for one of the plurality of procedures 5930.
  • a user input may be received by the user application to select one of the plurality of procedures 5930.
  • Each procedure may be associated with a set of corresponding procedure information and one or more corresponding images.
  • the image viewing area in UI 5900 may include several subcomponents or subsections which are used to navigate, visualize, or manipulate collected data.
  • the UI 5900 may group procedures by date or relevancy. For example, UI 5900 may group procedures by date, into one of the listed tabs: “In Progress”, “Recent” and “2 months and older”.
  • Each tab in the UI 5900 may be configured to display one or more data sets visualized in a manner deemed applicable and/or appropriate to the user.
  • a user (e.g., a clinician or physician) may select virtual pathological staining procedures such as Hematoxylin and Eosin or Toluidine Blue to highlight specific structures.
  • Such virtual staining may be presented, combined, and overlapped as it might appear through conventional staining and light microscopy techniques.
  • the user application may be configured to likewise display each stain (e.g., Hematoxylin or Eosin from H&E) separately from each other to further elucidate salient morphology.
  • other virtual stains, collected channels, or layers of the datasets may be displayed on its own or in a combination based on user preference.
  • a single image layer may occupy the entire viewing area to show image details with clarity while maintaining a wide field of view, providing additional context to the user.
  • two or more such images or image layers can be arranged in horizontal and/or vertical splits with their own separate viewing areas. The order and orientation of the images may be set or modified by the user through graphical user interface elements.
  • FIGs. 60 and 61 show example user interfaces 6000, 6100 for displaying one or more images generated by the architecture in FIG. 38A, 38B or 39
  • displayed visualizations or images may be represented as a combination of various data layers.
  • individual stains may be presented in an overlapped fashion, or as isolated individual layers.
  • FIG. 60 shows a UI 6000 displaying a virtual H&E image 6020 of a tissue, as well as a single non-radiative image 6050 of the same tissue.
  • Visibility of, and combinations of, different image layers may be toggled, managed and manipulated via GUI elements located, for example, on the top, left-hand side, or right-hand side of the screen area relative to the presented image frames.
  • graphical user interface elements such as dropdown menus 6010, 6030 can receive user input and the UI 6000 can display the selected image, layer or stain based on user input received via the dropdown menus 6010, 6030.
  • image manipulation processes such as scanning, moving, zooming in/out, locating, contrast adjustment, color adjustment, opacity, etc., may be performed on the visualizations or images.
  • a graphical user interface element such as a checkbox for “link image” located at the bottom of the UI 6000 may be clicked by the user to lock the virtual H&E image 6020 and the non-radiative image 6050, such that moving or zooming one of the locked images (e.g., virtual H&E image 6020) will automatically cause the other locked image (e.g., non-radiative image 6050) to have the same field of view and the same display ratio.
  • a transform to a region of a locked image may cause the user application to highlight the same region across multiple locked images. This may provide a mechanism for rapidly assessing constituent chromophore contributions, helping to highlight regions of interest or to aid with raw data imaging artifacts in pathological analysis.
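  • As an illustrative sketch only (matplotlib is assumed here as a stand-in for the application's own viewer; the array names are hypothetical), linked pan and zoom between two image panels can be emulated with shared axes:

```python
# Minimal sketch: two linked image panels whose pan/zoom stay synchronized.
# Assumes hne_rgb and non_radiative are same-sized NumPy arrays.
import matplotlib.pyplot as plt

def show_linked(hne_rgb, non_radiative):
    # sharex/sharey keeps the fields of view of both panels identical, so
    # panning or zooming one panel (in an interactive backend) updates the other.
    fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True, sharey=True)
    ax1.imshow(hne_rgb)
    ax1.set_title("Virtual H&E")
    ax2.imshow(non_radiative, cmap="gray")
    ax2.set_title("Non-radiative")
    for ax in (ax1, ax2):
        ax.set_axis_off()
    plt.show()
```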
  • displaying a H&E stain of a tissue region in a first display area and a separate Toluidine Blue stain of the same tissue region in a second display area located next to the first display area can highlight the unique contrast of each stain in the same region to aid diagnosis.
  • image data visualization tools may facilitate easy and quick comparison between datasets collected based on a given sample, an adjacent sample, another sample on the same patient taken from a different location and/or a different time, or comparisons with other patients or other imaging sessions.
  • FIG. 61 shows a user interface 6100 showing a first image layer 6150 (e.g. virtual H&E image layer), and two additional image layers, “Scattering (405 nm)” and “Radiative (266 nm)”, that can be selected by a user through a GUI (dropdown menu) 6130.
  • the UI 6100 may proceed to show the first image layer 6150 with an overlay of the selected additional image layer, which may be, in this example, “Scattering (405 nm)” or “Radiative (266 nm)”.
  • Layer combinations can be grouped and modified as a group.
  • Grayscale layers or group layers can be colorized to match certain individual stains or combination stains.
  • the PARS radiative absorption layer can be modified and colorized to emulate eosin stain.
  • the PARS non-radiative layer can be colorized to emulate hematoxylin stains. These layers can be viewed separately or combined as a group layer where the stains are overlaid to emulate a combined hematoxylin and eosin stain.
  • Larger collections of datasets, single patient acquisition sessions, multi-patient acquisition sessions, or other projects which account for one or more datasets intended to be grouped together in a collection may be presented as projects in a project viewing area of a user interface of the user application.
  • a project viewing UI may be positioned as a separate sub-region of the primary viewing area or presented on a separate tab or similar separation.
  • a project UI may facilitate the grouping or collection of similar images which may have been collected in a given imaging session or from within a given imaging project. Such imaging projects may be more easily transferable as opposed to large collections of individual datasets. Grouping and presentation of the constituent datasets may be further organized for user convenience by aspects such as location, date of collection, patient ID number, and so on.
  • Collected data may be visualized in a variety of salient forms.
  • any of the collected data channels may be visualized by plotting their respective signal values as grayscale intensity values mapped to their respective locations on a two-dimensional image, which may correspond to their respective locations on the sample.
  • Examples of such collected data channels may include the PARS non-radiative absorption contrast, whose signals may be extracted from collected time domains. When imaging biological tissues, such contrast may highlight regions of high DNA densities such as cell nuclei.
  • Another example of such collected data may include the PARS radiative absorption contrast. Some biological samples such as connective tissues (fibrin, collagen) may be well represented by this contrast. Another example may involve the visualization of linear back-scattered light from the sample highlighting structural morphology. Other similar extractions can be processed, visualized and displayed through a user interface, where various aspects of the time domains are extracted to produce visualizations of similar concepts. In addition, various combinations, products, and ratios of these visualizations may be created to elicit further informative contrast.
  • the radiative and non-radiative contrasts may be summed to produce a measure of total regional absorption, whereas their ratios provide information related to the absorption quantum efficiency within the probed region.
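  • Purely for illustration, such channel combinations might be computed along the following lines (NumPy is assumed; the exact scaling and the definition of the quantum-efficiency ratio are assumptions, not requirements of the system):

```python
# Minimal sketch: combine co-registered radiative and non-radiative channels.
import numpy as np

def combine_channels(radiative: np.ndarray, non_radiative: np.ndarray, eps: float = 1e-9):
    # Sum of the two absorption channels as a measure of total regional absorption.
    total_absorption = radiative + non_radiative
    # One possible ratio related to quantum efficiency: the radiative fraction
    # of the total absorbed energy (definition assumed for illustration).
    qe_ratio = radiative / (total_absorption + eps)
    return total_absorption, qe_ratio
```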
  • Such combinations may provide a user with unique information sets over single-component visualizations.
  • colorization of such combinations may be created either through algorithmic means or through machine learning models 3802, 3822, 3922 to emulate other colorizations known to the respective user’s field.
  • when imaging tissue for pathological analysis, it may be useful to color data to replicate the look and contrast of existing staining procedures such as Hematoxylin & Eosin, or Toluidine Blue.
  • FIG. 62 shows an example user interface 6200 for scanning and processing one or more images using an imaging device.
  • the UI 6200 may be configured to facilitate a user to control and operate an imaging device, which may be part of the architecture in FIG. 38A, 38B or 39.
  • UI 6200 includes a scan control interface, which includes a preview area 6250 of an image being scanned. As rows of pixels are scanned or otherwise generated, the preview area 6250 may show the progress of the scan or generation of the digital image.
  • An operator of the PARS system 3801 , 3901 can be notified when the scan can be performed safely. In some embodiments, the operator of the PARS system 3801 , 3901 can be prevented from performing the scan if any safety condition is not satisfied.
  • icon 6210 may indicate that the tissue sample is not properly positioned or installed for scanning, or a pressure is not applied correctly for scanning.
  • Icon 6220 may indicate that the laser used in scanning is not heated to a sufficient level.
  • Icon 6230 may indicate that the laser used in scanning is overheated.
  • Icon 6240 may indicate that the scan enclosure area is not securely closed.
  • Icon 6260 may indicate that all the safety conditions are satisfied and the scan can proceed.
  • a progress bar within the second area 6280 may indicate a progress of the scan, and a user may start or stop the scan using the GUI elements located within the second area 6280.
  • a collection of image processing tools may be included in the user application to help modify and manipulate visualizations by the user.
  • image processing tools may include but are not limited to: modification of brightness-contrast levels, sharpening filters, blurring filters, hue-saturation adjustments, and so on.
  • one or more processing steps may be configured as one or more preset options, such that a set of processing steps may be selected by the user to be quickly performed on subsequent data acquisitions.
  • one or more GUI elements of a user interface rendered by the user application may display one or more machine learning results (e.g., inferences 2517) to aid users in segmentation, image optimization, labeling, diagnosing, and so on.
  • the user application may include the following example tools for assisting a user with image analysis: a tool which automatically selects tumor margins, a tool which performs an image search in a PARS or H&E database to provide similar examples (e.g., in terms of structure or diagnosis), a tool which provides an automatic diagnosis to act as quality assurance for a pathologist, a tool which automatically identifies tumor type, treatment management, etc., a tool which allows the user to segment salient sub regions of tissue to highlight cell nuclei, fibrous tissue, melanin, fatty tissues, red blood cells, and so on.
  • an image can be annotated by one or more users (e.g., medical professionals) through an Ul rendered by the user application.
  • FIG. 63 shows an example user interface 6300 for displaying an annotated image 6350.
  • User selection(s) can be made via GUI elements in an annotation selection region 6310 to view some or all comments 6320a, 6320b, 6320c, which may be made by different users.
  • a Quick Hide button located at the bottom left corner of the UI 6300 allows the user to hide all comments to view the image without any comments.
  • This annotation UI 6300 can be accessed remotely through an online viewer, such as an online viewer application from a website or a mobile application.
  • FIG. 41 shows an example machine learning architecture 4100 that may be used to train the image generator 3812, 3912, or the image generator within the machine learning model 3802.
  • the image generator 3812, 3912 may be, for example, a colorization machine learning model trained using a generative adversarial network (GAN), which may include, for example, a cycle-consistent generative adversarial network (CycleGAN) model.
  • the image generator 3812, 3912 may be, for example, a colorization machine learning model trained using a conditional generative adversarial network (cGAN), which may include, for example, a pix2pix model.
  • a colorization machine learning model may include a neural network.
  • a neural network 4300 may include an input layer, a plurality of hidden layers, and an output layer.
  • the input layer receives input features.
  • the hidden layers map the input layer to the output layer.
  • the output layer provides the prediction (e.g., inference) of the neural network.
  • Each hidden layer may include a plurality of nodes, which may include weights, biases and input from a preceding layer. A weight is a parameter within a neural network that transforms input data within the network's hidden layers.
  • initial weights for a neural network model 4300 within the colorization machine learning model can be transferred from another neural network model (the “donor model”) trained on a large-scale stained H&E image dataset.
  • each weight in one or more initial layers of the neural network model 4300 may be assigned a value equal to a respective value from corresponding one or more initial layers of the donor model trained on the large-scale dataset, instead of being assigned a random value prior to the training of the neural network model 4300.
  • all layers of the neural network model 4300 are trained and fine-tuned, and weights updated accordingly.
  • the weights of the one or more initial layers are kept constant (i.e., equal to the weights from the one or more initial layers of the donor model), and throughout the training process only weights of the subsequent layers (after the initial layers) of the neural network model 4300 are trained or fine-tuned during training.
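  • For illustration only, the weight transfer and optional freezing of the initial layers described above might be sketched as follows (PyTorch is assumed; the stand-in architecture and the choice of which layers count as "initial" are hypothetical):

```python
# Minimal sketch: initialize a new model's first layers from a donor model,
# then optionally freeze them so only the subsequent layers are fine-tuned.
import torch.nn as nn

def make_net():
    # Stand-in architecture; the real model 4300 and donor model are assumed to
    # share the shapes of their initial layers.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),  # "initial" layers
        nn.Conv2d(16, 3, 3, padding=1),             # "subsequent" layer
    )

donor = make_net()   # assume this was trained on a large stained H&E dataset
model = make_net()   # new colorization model, normally randomly initialized

# Copy only the initial-layer weights from the donor (keys "0.*" in this sketch).
init_state = {k: v for k, v in donor.state_dict().items() if k.startswith("0.")}
model.load_state_dict(init_state, strict=False)

# Variant A: fine-tune all layers. Variant B (below): keep the transferred
# initial layers constant and train only the subsequent layers.
for name, param in model.named_parameters():
    if name.startswith("0."):
        param.requires_grad = False
```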
  • the CycleGAN model includes a first GAN having a first generator model 4103 and a first discriminator model 4107, and a second GAN having a second generator model 4113 and a second discriminator model 4117.
  • a true total absorption (TA) image 4101 may be obtained from an existing PARS image database, and sent to a first generator model 4103.
  • the first generator model 4103 may include a neural network configured to generate a simulated stained image 4105 (“fake” stain) based on the TA image 4101. Then a fake TA image 4111 is generated by a second generator model 4113 based on the simulated stained image 4105.
  • a first loss, the cycle consistency loss 4120, may be computed based on comparing the true TA image 4101 and the fake TA image 4111. This loss 4120 is then used to update weights of the first generator model 4103 and the second generator model 4113.
  • the simulated stained image 4105 may be processed by a first discriminator model 4107 to generate an output, which may be further processed through a classification matrix 4109 to generate a first discriminator output.
  • the discriminator model 4107 is configured to predict how likely the simulated stained image 4105 is to have come from a target image collection (e.g., a collection of real stains 4115).
  • a labelled and stained image 4115 is obtained, for example, from an existing stained image database.
  • the labelled and stained image 4115 may be processed by a second discriminator model 4117 to generate an output, which may be further processed through a second classification matrix 4119 to generate a second discriminator output.
  • the first and second discriminator output may be used to compute a second loss 4125.
  • the processor may update weights of: the first generator model 4103, the second generator model 4113, the first discriminator model 4107 and the second discriminator model 4117.
  • the training may stop once the first or second loss, or both losses, have reached a threshold value, or may stop after a pre-determined number of iterations.
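  • As a non-limiting illustration of the training loop outlined above, one simplified PyTorch-style update for the TA-to-stain direction might look as follows (the second discriminator, the reverse stain-to-TA direction trained symmetrically in a full CycleGAN, and the model architectures are omitted or assumed):

```python
# Simplified sketch of one CycleGAN update (TA -> stain direction only).
# G_stain ~ generator 4103, G_ta ~ generator 4113, D_stain ~ discriminator 4107.
import torch
import torch.nn as nn

adv = nn.MSELoss()  # least-squares adversarial loss (choice assumed)
l1 = nn.L1Loss()

def train_step(ta_image, real_stain, G_stain, G_ta, D_stain, opt_G, opt_D,
               lambda_cyc=10.0):
    # --- generator update ---
    fake_stain = G_stain(ta_image)            # simulated stained image (4105)
    rec_ta = G_ta(fake_stain)                 # reconstructed "fake" TA image (4111)
    cycle_loss = l1(rec_ta, ta_image)         # cycle consistency loss (4120)
    pred_fake = D_stain(fake_stain)
    gan_loss = adv(pred_fake, torch.ones_like(pred_fake))
    g_loss = gan_loss + lambda_cyc * cycle_loss
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

    # --- discriminator update ---
    pred_real = D_stain(real_stain)           # real labelled/stained image (4115)
    pred_fake = D_stain(fake_stain.detach())
    d_loss = 0.5 * (adv(pred_real, torch.ones_like(pred_real)) +
                    adv(pred_fake, torch.zeros_like(pred_fake)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()
    return float(g_loss), float(d_loss)
```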
  • the first generator model 4103, once trained, may be deployed as part of image generator 3812, 3912 or machine learning model 3802, at inference time to generate one or more simulated stained images 3806, 106.
  • a colorization machine learning model may include a one-shot GAN, a type of single-image GAN, to generate images from a training set as small as a single image, which is suitable in applications or settings where the samples are limited, such as in histology.
  • labelled and stained image 4115 may be obtained from traditional chemical staining process, such as spectroscopy-based methods.
  • FIG. 42 shows an example process 4200 for preparing one or more training data 4205, 4207 for training the image generator 3812, 3912, or the image generator within the machine learning model 3802.
  • An unstained tissue section 401 may be processed by a PARS system such as TA-PARS, to generate unlabeled multichannel image 4205, which may be provided to the machine learning architecture 4100 as a true TA image 4101.
  • the unstained tissue section 401 may undergo a traditional chemical staining process, and a stained slide 4203 may be obtained and imaged with a bright-field microscope to generate a labelled and stained image 4207, which may be provided to the machine learning architecture 4100 as a labelled and stained image 4115.
  • FIG. 46 shows two virtually (simulated) stained PARS images, one simulated stained hematoxylin and eosin (H&E) image 4610, and one simulated stained toluidine blue image 462, both of which may be used as the true TA image 4101 during training for different stained image generation processes.
  • FIG. 47A shows an example of an unlabeled PARS virtual H&E image 4700 as generated by a PARS system, which may be used as input to the architecture 4100 in the form of a true TA image 4101.
  • the unlabeled PARS virtual H&E image 4700 is correlated with a historical, labelled stained (H&E) image 4750 in FIG. 47B, which can be provided to the machine learning architecture 4100 as a labelled and stained image 4115.
  • a PARS image of a tissue sample may be generated, and subsequently the tissue sample may be chemically stained with a stain of interest. This generates a one-to-one correspondence dataset for training the colorization machine learning model in architecture 4100.
  • a PARS image of the tissue may be captured before processing the tissue through the traditional histopathological workflow. This will produce a correlated section for training the colorization machine learning model in architecture 4100.
  • multiple pathologists can hand label a dataset of tissue slides to identify location, type, grade, etc. of cancer within each tissue slide. This labeled dataset can then be used to train the machine learning model 3802, 3822, 3922 to make proper inference regarding one or more PARS images from the PARS system.
  • a system may be able to automatically label PARS data in order to train the machine learning model 3802, 3822, 3922 to make one or more inferences including diagnostics on the PARS features.
  • a traditional tissue image or an image generated based on PARS signals may contain different structures therein.
  • the machine learning model 3802, 3822, 3922 may receive the traditional tissue image or the image generated based on PARS signals (“input image”) and process the input image to generate a colorized image 5600 that is simultaneously stained or colored with different stains.
  • the basal layer of the input image may be stained with T-Blue while the inner connective tissue of the input image may be stained with H&E.
  • the machine learning model 3802, 3822, 3922 may be trained based on historical data generated by pathologists or other professionals, where the historical data include different structures of tissue images and corresponding stains for each of the different structures.
  • FIG. 56 shows an example multi-stain image 5600 that may be generated by a machine learning model 3802, 3822, 3922.
  • the different colored regions may be generated, in some embodiments, by color shifting an H&E image.
  • simultaneous or sequential use of histochemical, IHC, and FISH agents is not possible on a single tissue section.
  • the labelling process can introduce irreversible structural and chemical changes which render the specimen unacceptable for subsequent analysis.
  • each section must be independently sectioned, mounted, and stained; a technically challenging, expensive, and time-consuming workflow.
  • a trained histotechnologist may spend several hours to prepare a section for testing, with some labeling protocols requiring overnight incubation and steps spaced out across multiple days.
  • repeating staining or producing additional stains in a stepwise fashion can delay diagnostics and treatment timelines, degrading patient outcomes.
  • performing multiple stained sections can rapidly expend invaluable diagnostic samples, particularly when the diagnostic material is derived from needle core biopsies. This increases the probability of needing the patient to undergo further procedures to collect additional biopsy samples, incurring diagnostic delays, and significant patient stress.
  • various embodiments of the PARS system as described herein are able to recover rich biomolecule specific contrast, such as quantum efficiency ratio, not afforded by other independent modalities.
  • the optical relaxation processes (radiative and non-radiative) are observed following a targeted excitation pulse incident on a sample.
  • the radiative relaxation generates optical emissions from the sample which are then directly measured.
  • the non-radiative relaxation causes localized thermal modulations and, if the excitation event is sufficiently rapid, pressure modulations within the excited region. These transients induce nano-second scale variations in the sample’s local optical properties, which are captured with a co- focused detection laser. Additionally, the co-focused detection is able to measure the local optical scattering prior to excitation.
  • various embodiments of the PARS system as described herein are able to simultaneously capture radiative and non-radiative absorption as well as optical scattering from a single excitation event.
  • the multi-stain image 5600 has five different regions with different tissue structures: 5610, 5620, 5630, 5640, 5650.
  • Region 5610 is stained with a light purple color (a), which may be Masson's trichrome stain that is typically used to differentiate different types of connective tissues.
  • Regions 5620 and 5640 are stained with a blue color (b), which may be PAS stain used to identify regions of fungal infection in the tissues.
  • Regions 5630 and 5650 are stained with a pink-purple color (c), which may be H&E stain used to differentiate the different layers of the epithelium and the structures of subdermal glands.
  • the machine learning model 3802, 3822, 3922 may be able to automatically determine a most appropriate stain or color for a particular region in an image and apply the most appropriate stain or color to the particular region in the image.
  • multi-staining images are generated based on PARS feature vector (e.g., the PARS data vector shown in FIG. 87), and PARS Time Domain clustering methods.
  • Some examples of PARS multi-staining images are presented in FIG. 88, which shows three different virtual stains 8820, 8840, 8860 produced from the same initial PARS dataset 8800.
  • a PARS data vector (similar to the example shown in FIG. 87, containing PARS amplitudes, and time domain features) is passed to a series of GAN networks which were then used to develop a virtual staining result.
  • FIG. 88 shows example PARS virtual multi-staining images based on the same PARS image data 8800.
  • An RGB representation of PARS image data 8800 is shown on the left, while three different virtual stains 8820, 8840, 8860 are shown to the right which were produced from the raw PARS image data 8800.
  • the multi-staining results are produced using PARS feature vector data which contains a number of primary and secondary features including time domain features.
  • an image generator is designed to better leverage the PARS and ground truth image data to produce an accurate stain transform.
  • an example neural network in the image generator may use perceptual based losses such as learned image-patch similarity, or “VGG” networks to optimize perceived similarity between the PARS virtual staining images, and ground truth images.
  • more advanced GAN architectures (e.g., Wasserstein GAN with Gradient Penalties, or Unrolled GANs) may also be used in the image generator.
  • initial weights for a machine learning model 3802, 3822, 3922 can be transferred from another neural network model (the “donor model”) trained on a traditional stained H&E image dataset.
  • each weight in one or more initial layers of the machine learning model 3802, 3822, 3922 may be assigned a value equal to a respective value from corresponding one or more initial layers of the donor model trained on the traditional stained H&E image dataset, instead of being assigned a random value prior to the training of the machine learning model 3802, 3822, 3922.
  • the donor model can be trained on a traditional stained H&E image dataset. These stained H&E images may be obtained from traditional chemical staining process, such as spectroscopy-based methods.
  • the training data for the donor model can include a group of stained H&E images (ground truth data) and a group of corresponding greyscale H&E images converted from the group of stained H&E images.
  • the donor model during training, may receive the group of corresponding greyscale H&E images as input, and output corresponding colorized H&E images that are compared to the ground truth data, where the comparison may cause updating of the weights of the donor model during each training iteration.
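  • For illustration only, preparing such greyscale/colour training pairs from existing stained H&E images could be sketched as follows (Pillow and NumPy are assumed; the file path is hypothetical):

```python
# Minimal sketch: build (greyscale input, colour ground truth) pairs for the
# donor model from an existing stained H&E image file.
import numpy as np
from PIL import Image

def make_pair(path):
    color = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    grey = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    # greyscale image (model input) and colour image (ground truth target)
    return grey[..., None], color
```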
  • the training data for the donor model can include a group of stained H&E images (ground truth data) and a group of corresponding channel data for a first channel (e.g., H channel) and a second channel (e.g., E channel) for each respective stained H&E image in the group.
  • the channel data for a H channel or E channel may include, for example, amplitude and/or intensity values for each respective channel, similar to an RGB channel for a traditional color image.
  • the donor model during training, may receive the group of channel data as input, and output corresponding colorized H&E images that are compared to the ground truth data, where the comparison may cause updating of the weights of the donor model during each training iteration.
  • initial weights for a machine learning model 3802, 3822, 3922 can be taken from the trained donor model.
  • each weight in one or more initial layers of the machine learning model 3802, 3822, 3922 may be assigned a value equal to a respective value from corresponding one or more initial layers of the donor model trained on the traditional stained H&E image dataset.
  • all layers of the machine learning model 3802, 3822, 3922 are trained and finetuned, and weights updated accordingly.
  • the weights of the one or more initial layers are kept constant (i.e., equal to the weights from the one or more initial layers of the donor model), and throughout the training process only weights of the subsequent layers (after the initial layers) of the machine learning model 3802, 3822, 3922 are trained or fine-tuned during training.
  • the machine learning model 3802, 3822, 3922 may, at inference time, receive a PARS image and/or PARS signals, and generate a corresponding PARS virtual H&E image.
  • an image generator 5750, which can be the image generator 3812, 3912, or the image generator within the machine learning model 3802, may include two neural network models 5712, 5722.
  • the first neural network model 5712 can be, as an example, a cycleGAN trained to generate simulated grayscale H&E images 5715 based on TA-PARS images 5720 from a PARS system 5710.
  • the second neural network model 5722 can be, as an example, a conditional GAN (e.g., pix2pix) trained to generate simulated color H&E images 5730 based on simulated grayscale H&E images 5715 from the first neural network model 5712.
  • the first neural network model 5712 can be trained based on historical sets of TA- PARS data and corresponding grayscale H&E images.
  • the second neural network model 5722 can be trained on historical sets of greyscale H&E images and corresponding stained H&E images, which may be obtained from traditional chemical staining process, such as spectroscopy-based methods, or may be obtained from data sets of virtual greyscale and color H&E images.
  • an image generator 5850, which can be the image generator 3812, 3912, or the image generator within the machine learning model 3802, may include two neural network models 5812, 5822.
  • the first neural network model 5812 can be, as an example, a cycleGAN trained to generate separated H channel 5815 and separated E channel 5817 based on TA-PARS images 5820 from a PARS system 5810.
  • the second neural network model 5822 can be, as an example, a conditional GAN (e.g., pix2pix) trained to generate simulated color H&E images 5830 based on separated H channel 5815 and separated E channel 5817 from the first neural network model 5812.
  • the first neural network model 5812 can be trained based on historical sets of TA- PARS data and corresponding separated H channel and E channel data.
  • the second neural network model 5822 can be trained on historical sets of separated H channel and E channel data and corresponding stained H&E images, which may be obtained from traditional chemical staining process, such as spectroscopy-based methods, or may be obtained from data sets of virtual color H&E images.
  • a cycleGAN model may be implemented to be a multi-task cycleGAN model configured to perform a plurality of tasks, including for example: 1) super-resolve a relatively lower resolution image to a higher resolution image (enhancing the resolution of the image); 2) generate H-stained TA-PARS images; 3) generate E-stained TA-PARS images; and 4) generate H&E-stained TA-PARS images.
  • when the cycleGAN model is trained on higher resolution images from an image dataset to transform a greyscale image to a color H&E image, the cycleGAN model learns the transfer from greyscale to color, as well as how to enhance resolution.
  • the architecture 3800, 3850, 3900 may include an image search machine learning model (“image search model”), which may be part of the machine learning model 3802, 3822, 3922 or may be a separate machine learning model, to output one or more labelled images based on a given unlabeled input image.
  • an input image to the image search model may be the unlabeled PARS virtual H&E image 4700, which has no labels for any region of interest within the image 4700.
  • the image search model, once properly trained, may, based on a plurality of existing (e.g., historical) labelled images stored in an imaging database, output at least one labelled image (e.g., such as the labelled stained image 4750 in FIG. 47B) from the existing labelled images, where the output labelled image has the highest correlation score with the input image.
  • the correlation score may be determined, in some embodiments, based on features extracted from the input image (e.g., unlabeled PARS virtual H&E image 4700). For example, a higher correlation score may be assigned to images that exhibit features with greater similarities to the input image.
  • a minimum threshold may be predetermined, such that any existing labelled image(s) from the database with a correlation score above the minimum threshold may be selected as an output by the image search model.
  • the image search model may therefore be configured to retrieve one or more existing labelled images from an existing imaging database, based on an input image that is not yet labelled.
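  • For illustration only, the retrieval behaviour described above might be sketched as follows (cosine similarity is assumed as the correlation score; the feature extractor, database arrays and threshold value are hypothetical):

```python
# Minimal sketch: retrieve labelled images whose features are most similar to
# those of an unlabelled query image.
import numpy as np

def search_similar(query_features, db_features, db_labels, min_score=0.8, top_k=5):
    """query_features: (D,) vector; db_features: (N, D) matrix of features for
    existing labelled images; db_labels: list of N labels or records."""
    q = query_features / np.linalg.norm(query_features)
    db = db_features / np.linalg.norm(db_features, axis=1, keepdims=True)
    scores = db @ q                                   # cosine similarity per image
    order = np.argsort(scores)[::-1][:top_k]          # best matches first
    return [(db_labels[i], float(scores[i])) for i in order if scores[i] >= min_score]
```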
  • the retrieved labelled images which may be stained, can be used as training data to the machine learning architecture 4100 as a labelled and stained image 4115.
  • the image search model may be configured to retrieve one or more existing labelled images from an existing imaging database, based on an input image that is not yet labelled, and transmit the one or more existing labelled images to a user interface 4000 of a user application 3825, 3925 to aid with further medical diagnosis.
  • the input image and output image(s) may or may not be stained.
  • a clinician through user application 3825, 3925 may send a new medical image showing a patient’s lung to the image search model for retrieval of similar medical images that are already labelled (or annotated).
  • the image search model may output one or more output images (e.g., through UI 4000) that can aid the clinician with an understanding of the input medical image showing the patient’s lung. For instance, if the one or more output images generally contain images labelled with pneumonia, it is likely that the patient in the input medical image may have pneumonia as well.
  • a percentage of likelihood of pathologic finding in a current image acquisition may be similar to, and can be determined based on, a previous diagnosis of a similar or the same pathologic finding in one or more previous PARS image acquisitions.
  • the heat map 5500 includes several heat regions 5510, 5520, 5530, 5540. Each respective region 5510, 5520, 5530 or 5540 may represent a corresponding percentage of likelihood of pathology or pathologic finding (e.g., malignancy) superimposed on to an H&E staining image 5550 to aid in diagnosis and intraoperative guidance. This can help medical personnel in reading a current H&E staining image and making relevant conclusions.
  • the heat map 5500 may assist surgeons that may be inexperienced at reading pathologic slides to make decisions on where to resect cancerous tissue intraoperatively.
  • the heat region 5510 appears to be the darkest in color, followed by heat region 5530, then further followed by heat regions 5540 and 5520.
  • This may indicate, for example: the area covered by heat region 5510 has a high likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 80%, or at a range of 80-100%; the area covered by heat region 5530 has a medium-to-high likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 50%, or at a range of 50-79%; the areas covered by heat regions 5540 and 5520 have a low likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 30%, or at a range of 30-49%.
  • the areas not covered by any heat region may be shown in blue.
  • the architecture 3800, 3850, 3900 may generate an inference, based on a sample (e.g., an image), a probability of a disease for at least one region in the sample, the probability of the disease determined based on the plurality of features and complementary data streams received by the machine learning architecture.
  • the inference may include a heat map identifying one or more regions of the sample and a corresponding probability of a disease for each of the one or more regions of the sample.
  • the corresponding probability of a disease for each of the one or more regions of the sample is illustrated by a corresponding intensity of a color shown in the respective region in the heat map.
  • the heat map can guide clinicians in identifying and managing diseases in the patient associated with the sample.
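  • As an illustrative sketch only (matplotlib is assumed; the probability map would come from the machine learning architecture and is treated here as a given array), such a heat map overlay might be rendered as:

```python
# Minimal sketch: overlay a per-pixel probability of pathologic finding on an
# H&E (virtual or chemical) image as a semi-transparent heat map.
import numpy as np
import matplotlib.pyplot as plt

def show_heat_map(hne_rgb: np.ndarray, prob_map: np.ndarray):
    """hne_rgb: H x W x 3 image in [0, 1]; prob_map: H x W probabilities in [0, 1]."""
    fig, ax = plt.subplots()
    ax.imshow(hne_rgb)
    # Higher intensity / warmer colour indicates a region of higher likelihood.
    overlay = ax.imshow(prob_map, cmap="inferno", alpha=0.4, vmin=0.0, vmax=1.0)
    fig.colorbar(overlay, ax=ax, label="Likelihood of pathologic finding")
    ax.set_axis_off()
    plt.show()
```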
  • the image search model may be a standalone system architecture without a PARS system.
  • the image search model may include a Convolutional Neural Network (CNN), and may further include an autoencoder.
  • a PARS image of tissue scans can be obtained from PARS system before staining with molecular stains and developing an image based correlation.
  • a PARS image can be obtained and compared with ground truth spectroscopic methods including mass spectrometry, mass cytometry, fluorescence spectroscopy, transient absorption spectroscopy.
  • the labelled and stained image 4115 is a labelled PARS image from a PARS image database.
  • the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image.
  • automatically labelling the unlabeled PARS image may include labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
  • the existing labelled stained image is obtained from an existing H&E database.
  • an image generator which can be the image generator 3812, 3912, or the image generator within the machine learning model 3802, may include a cycleGAN 5812 trained to generate, based on TA-PARS images 5820 from a PARS system 5810, simulated color H&E images 5830.
  • TA-PARS images 5820, which may include radiative and non-radiative absorption images, are preprocessed via a preprocessing module 5811, and then virtually stained through the cycleGAN 5812 to generate simulated color H&E images 5830.
  • the preprocessing module 5811 may include, for example, a self-supervised Noise2Void denoising convolutional neural network (CNN) 5813 as well as an error-correction submodule 5823 for pixel-level mechanical scanning and error correction.
  • the implementation described herein can significantly enhance the recovery of sub-micron tissue structures, such as nucleoli location and chromatin distribution.
  • the preprocessed PARS image data 5826 are then virtually stained using the cycleGAN 5812 by applying virtual stains to the preprocessed PARS image data 5826, which may include images representing thin unstained sections of malignant human skin and breast tissue samples.
  • FIG. 58 shows an improved virtual staining and image processing architecture 5800 for emulating histology images which are effectively indistinguishable from standard H&E pathology.
  • the presented architecture in FIG. 58 includes an optimized image preprocessing module 5811 and a cycle-consistent generative adversarial network (CycleGAN) 5812 for virtual staining.
  • CycleGAN virtual staining does not require pixel-to-pixel level registration for training data. However, semi-registered data is used here to reduce hallucination artifacts, while improving virtual staining integrity.
  • the image preprocessing module 5811 reduces inter-measurement variability during signal acquisition, through the implementation of pulse energy correction and image denoising using the self-supervised Noise2Void network.
  • An error correction submodule 5823 is implemented for removal of pixel level mechanical scanning position artifacts, which blur subcellular level features. These enhancements afford marked improvements in the clarity of small tissue structures, such as nucleoli and chromatin distribution.
  • the loosely or semi-registered CycleGAN 5812 facilitates precise virtual staining with the highest quality of any PARS virtual staining method explored to date.
  • When applied to images containing entire whole slide sections of resected human tissues, the architecture 5800 provides detailed emulation of subcellular and subnuclear diagnostic features comparable to the gold standard H&E.
  • This architecture 5800 represents a significant step towards the development of a label-free virtual staining microscope.
  • the successful label-free virtual staining opens a pathway to the development of in-vivo virtual histology, which could allow pathologists to immediately access multiple specialized stains from a single slide, enhancing diagnostic confidence, improving timelines and patient outcomes.
  • label-free TA-PARS images 5820 are captured using the PARS system 5810.
  • a 400 ps pulsed, 50 kHz, 266 nm UV laser (Wedge XF 266, RPMC) is used to excite the sample, simultaneously inducing non-radiative and radiative relaxation processes.
  • the non-radiative relaxation processes are sampled as time-resolved photothermal, and photoacoustic signals probed with a continuous wave 405nm detection beam (OBIS-LS405, Coherent). This detection beam is co-aligned and focused onto the sample with the excitation light using a 0.42 numerical aperture (NA) UV objective lens (NPAL-50-UV-YSTF, OptoSigma).
  • the 405nm detection wavelength and the radiative emissions are spectrally separated, and each directed toward an avalanche photodiode (APD130A2, Thorlabs).
  • Pixels are then arranged in a cartesian grid based on the stage position feedback, forming a stack of three co-registered label-free image contrasts: non-radiative, radiative, and scattering. Finally, the excitation pulse energy and detection power, recorded throughout imaging, are used to correct image noise caused by laser power and pulse energy variability.
  • the entire tissue area is divided into subsections (500 × 500 µm), each individually scanned at their optimal focus position. Using their relative stage positions and small amount of overlap (~5%), these sections are stitched and blended into a single whole slide image.
  • the Noise2Void (N2V) denoising convolutional neural network (CNN) 5813 is, in some embodiments, used to further denoise the raw PARS images.
  • the N2V denoising CNN 5813 does not require paired training data with both a noisy and clean image target. It assumes that image noise is pixel-wise independent, while the underlying image signal contains statistical dependencies. As such, it facilitates a simple approach for denoising PARS images, and was used to train a denoising CNN for the radiative and non-radiative contrast channels, separately.
  • Example machine learning models were trained on a body of raw data taken from both human skin and breast whole slide images.
  • a series of 125 PARS tiles was used to generate a model for each of the radiative and non-radiative images.
  • Each model was trained over a series of 300 epochs, with 500 steps per epoch, using 96 pixel neighbourhoods.
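  • Purely for illustration, the blind-spot idea behind Noise2Void can be sketched as a masked training loss (PyTorch is assumed; this is a simplified stand-in for, not a reproduction of, the N2V implementation used here, and the mask count and neighbourhood radius are assumptions):

```python
# Simplified sketch of a Noise2Void-style blind-spot loss: masked pixels are
# replaced by random neighbours, and the network is trained to predict their
# original values from surrounding context only.
import torch
import torch.nn.functional as F

def n2v_loss(net, noisy, n_masked=64, radius=5):
    """noisy: B x 1 x H x W tensor of a single PARS contrast channel."""
    B, _, H, W = noisy.shape
    inp = noisy.clone()
    ys = torch.randint(radius, H - radius, (B, n_masked))
    xs = torch.randint(radius, W - radius, (B, n_masked))
    dy = torch.randint(-radius, radius + 1, (B, n_masked))
    dx = torch.randint(-radius, radius + 1, (B, n_masked))
    for b in range(B):
        # Replace each masked pixel with a random pixel from its neighbourhood.
        inp[b, 0, ys[b], xs[b]] = noisy[b, 0, ys[b] + dy[b], xs[b] + dx[b]]
    pred = net(inp)
    # The loss is evaluated only at the masked (blind-spot) locations.
    loss = noisy.new_zeros(())
    for b in range(B):
        loss = loss + F.mse_loss(pred[b, 0, ys[b], xs[b]], noisy[b, 0, ys[b], xs[b]])
    return loss / B
```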
  • the final processing step before training the virtual staining model is to correct a scanning-related image artifact, which is uncovered after denoising the raw data.
  • These artifacts are line-by-line distortions caused by slight inconsistencies in the mechanical scanning fast axis (x-axis) velocity, which results in uneven spatial sampling.
  • a custom jitter or error correction submodule 5823 is used to fix these distortions.
  • a CycleGAN image translation model 5812 can be used for virtual staining. While CycleGAN 5812 is able to learn an image domain mapping with unpaired data, it can be advantageous to provide the model with semi or loosely registered images, as a form of high-level labeling to better guide the training process and strengthen the model. As one-to-one H&E and PARS whole slide image pairs are obtainable, it seems most appropriate to prepare the dataset accordingly. However, the two datasets are not intrinsically registered, so a simple affine transform is used. Affine transforms allow for shearing and scaling, as well as rotation and translation. In general, it is sufficient for the alterations of tissue layout on the slide which occur during the staining process. The affine transform is determined using the geometric relationship between three registration points. This found relation, or transformation matrix, is then applied to the entire whole slide image for both the non-radiative and radiative channels.
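  • For illustration only, the three-point affine registration described above might be sketched as follows (OpenCV is assumed; the selection of the three corresponding registration points is not shown):

```python
# Minimal sketch: compute an affine transform from three point correspondences
# and apply the same transformation matrix to a whole-slide PARS channel.
import numpy as np
import cv2

def register_to_hne(pars_channel, pts_pars, pts_hne, out_shape):
    """pts_pars, pts_hne: three corresponding (x, y) points in the PARS and H&E
    images; out_shape: (height, width) of the target H&E image."""
    M = cv2.getAffineTransform(np.float32(pts_pars), np.float32(pts_hne))
    return cv2.warpAffine(pars_channel, M, (out_shape[1], out_shape[0]))
```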
  • FIGs. 82A and 82B show example visualization of data preparation process and inversion.
  • the registered total-absorption and H&E images are cut into matching tiles, to generate a loosely registered dataset.
  • the pixel intensities of the total-absorption images are then inverted, to provide a better initialization for training.
  • the datasets are used to train the virtual colorization model (e.g., CycleGAN 5812).
  • In FIG. 82B, as shown in example schematic block diagram 8250, to form virtually stained images, the model is repeatedly applied to overlapping tiles of the total absorption images. The overlapping tiles are subsequently averaged to form the final virtual colorization.
  • the total absorption (TA) image shows the radiative (blue) and non-radiative (red) raw images in a combined single colored image.
  • the network uses inverted TA patches, in which the radiative and non-radiative image pixel intensities are inverted before they are stacked into a colored image. Inverting these channels provides a colored image where the white background in the PARS data maps to the white background in the H&E data.
  • the model can be applied to larger images, such as entire whole slide images, by virtually staining 512x512 tiles in parts. This process is shown in FIG. 82B, where overlap regions are averaged together in the final virtually stained image. Here an overlap of 50% is used.
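  • As an illustrative sketch only (NumPy is assumed; the stain_model callable stands in for the trained generator and is hypothetical here), tiled whole-slide inference with 50% overlap and averaging of the overlap regions can be written as:

```python
# Minimal sketch: virtually stain a large image by applying the model to
# overlapping 512x512 tiles and averaging the overlap regions.
import numpy as np

def stain_whole_slide(image, stain_model, tile=512, overlap=0.5):
    """image: H x W x C total-absorption array; stain_model maps a tile to an
    RGB tile of the same spatial size. Edge margins are handled only roughly."""
    step = max(int(tile * (1 - overlap)), 1)
    H, W = image.shape[:2]
    out = np.zeros((H, W, 3), dtype=np.float64)
    weight = np.zeros((H, W, 1), dtype=np.float64)
    for y in range(0, max(H - tile, 0) + 1, step):
        for x in range(0, max(W - tile, 0) + 1, step):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] += stain_model(patch)
            weight[y:y + tile, x:x + tile] += 1.0
    return out / np.maximum(weight, 1.0)
```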
  • CycleGAN models were trained on loosely paired data using the registration and dataset preparation methods described earlier.
  • One model was trained on human skin tissue and another on human breast tissue.
  • the training sets were composed of 5000 training pairs of size 512x512 px (128 × 128 µm) sourced from standard 40x magnification (250 nm/pixel) whole slide images of each tissue type.
  • the model generators were trained for 500 epochs with an early stopping criterion to terminate training when losses stopped improving.
  • the model was trained with a learning rate of 0.0002, batch size of 1 and an 80/20% split of training and validation pairs.
  • a pix2pix model and standard unpaired CycleGAN model were also trained for each tissue type.
  • the pix2pix models were trained on the same dataset as the paired CycleGAN model, with a more rigorous registration process and the same model parameters.
  • the same number of training pairs were used, however the TA and H&E domains were sourced from different whole slide images of the same tissue type.
  • a current shortcoming of the PARS raw images is the presence of measurement noise. Improvements in PARS image quality were achieved by measuring detection power and excitation pulse energy. Image noise was then corrected based on the laser energy variability. Even with the energy reference correction, measurement noise is still present in the non-radiative signals. This additive noise disproportionately impacts signals which exhibit low non-radiative relaxation since they generate smaller non-radiative perturbations in the detection beam.
  • Paired or unpaired denoising methods can be applied to the raw PARS data in the TA-PARS images 5820 to remove noise prior to colourization using an image generator such as the cycleGAN 5812.
  • Unpaired denoising algorithms do not require matched noisy and clean image targets for training and facilitate a simple self-supervised approach for denoising PARS images.
  • Paired denoising algorithms may also be used on PARS images. For example, clean and noisy image pairs for training could be generated by acquiring two images of the same area, one at high pulse energies and one at low pulse energies. High pulse energies would yield lower noise (i.e., clean) images, whereas low pulse energies would produce noisier image targets.
  • FIG. 80 shows an example of the raw PARS data 8010 in the TA-PARS images 5820 denoised using a Noise2Void (N2V) framework, as seen in A. Krull, T.-O. Buchholz, and F. Jug, “Noise2Void - Learning Denoising from Single Noisy Images,” arXiv, Apr. 05, 2019, doi: 10.48550/arXiv.1811.10980, the entire content of which is herein incorporated by reference.
  • the denoising example in FIG. 80 has been adapted in J. E. D. Tweel, B. R. Ecclestone, M. Boktor, J. A. T. Simmons, P. Fieguth, and P. H.
  • the denoised image 8020 may contain mechanical scanning- related jitter artifacts, as seen in FIG. 80. These artifacts are line-by-line distortions caused by slight inconsistencies in the mechanical scanning fast axis velocity, which results in uneven spatial sampling.
  • a custom jitter or error correction submodule 5823 may be used to fix these distortions in the denoised images 8020 and generate images 8030 with artifact removed.
  • the generated images 8030 may be used as the preprocessed PARS image data 5826 for input into an image generator such as the cycleGAN 5812.
  • FIG. 81 One example implementation 8100 of an error correction submodule 5823 to fix the line-by-line jitter distortion in the denoised PARS images 8020 is illustrated in FIG. 81.
  • the implementation 8100 of the error correction submodule 5823 shown in FIG. 81 determines the optimal pixel shifts for a series of chunks spaced across a given row, with overlap. Chunks are then moved to their appropriate locations and summed together into a corrected row, with areas of overlapping chunks averaged.
  • FIG. 81 illustrates three example chunks and their optimal pixel shifts. These shifts are determined by moving a chunk left and right until a minimal mean square error is reached between the chunk and a reference row. This reference is calculated as the average between the top and bottom rows for the given row being corrected.
  • the error correction submodule 5823 is implemented based on the assumption that the fast axis speed profile differs mostly for velocity sweeps in opposing directions and minimally for velocity sweeps in matching directions. As such, the top and bottom rows were captured in the same direction and averaging them together provides a suitable in-between row to use as reference for correction.
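  • Purely for illustration, the chunk-shift correction described above might be sketched as follows (the chunk size, shift search range, and reference-row construction follow the description, but the exact values and array names are assumptions):

```python
# Minimal sketch: correct jitter in one row by shifting overlapping chunks to
# minimize the mean-squared error against a reference row (the average of the
# rows above and below, which were scanned in the same direction).
import numpy as np

def correct_row(row, row_above, row_below, chunk=64, max_shift=3):
    ref = 0.5 * (row_above + row_below)
    corrected = np.zeros_like(row, dtype=np.float64)
    weight = np.zeros_like(row, dtype=np.float64)
    for start in range(0, len(row) - chunk + 1, chunk // 2):  # overlapping chunks
        seg = row[start:start + chunk]
        best_s, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):            # search left/right shifts
            lo, hi = start + s, start + s + chunk
            if lo < 0 or hi > len(row):
                continue
            err = np.mean((seg - ref[lo:hi]) ** 2)
            if err < best_err:
                best_s, best_err = s, err
        lo, hi = start + best_s, start + best_s + chunk
        corrected[lo:hi] += seg                                # place chunk at best shift
        weight[lo:hi] += 1.0
    return np.where(weight > 0, corrected / np.maximum(weight, 1.0), row)
```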
  • FIG. 83 shows an example of the raw non-radiative and radiative image channels after reconstruction and laser power reference correction. At high magnification, significant noise can be seen in the raw data channels. This motivates denoising as a preprocessing step. However, noiseless PARS image targets were not available for training a traditional denoising CNN. Hence, the N2V denoising CNN 5813 is an ideal method as it allows effective denoising without a clean image target.
  • example denoising results generated after execution of the N2V-based denoising CNN 5813 and the error correction submodule 5823 are shown, based on raw PARS image data 8300 including both raw non-radiative and radiative image channels. Three example regions are shown at higher magnification to see the effect of the denoising and jitter correction algorithms.
  • the structure imaged here shows a hair follicle capture from human skin tissue. After removing noise from the raw data, the jitter artifacts seen in FIG. 80 are uncovered and become the main source of noise in the images. While these sub-resolution shifts and distortions between the rows of the image can be seen embedded within the noise, they are difficult to resolve and correct. Denoising not only helps improve raw data quality but helps make the jitter correction possible. As shown in FIG. 83, most of the artifacts are removed after applying the correction submodule 5823.
  • the whole slide radiative and non-radiative images are registered to the ground truth H&E image.
  • a simple affine transform is used here to account for the tissue layout alterations accrued during the staining process, which may generate upwards of 6000 closely registered 512x512 training pairs for a single 40x, 1 cm², whole slide image.
  • a VVG stain such as Verhoeff-Van Gieson (VVG), which highlights normal or pathologic elastic fibers, would be required to visualize the internal elastic membrane of arteries.
  • VVG stain is sometimes combined with Masson’s trichrome stain, to differentiate collagen and muscle fibers within tissue samples. This is performed to visualize potential increases in collagen associated with diseases like cirrhosis and assess muscle tissue morphology for pathological conditions affecting muscle fibers. In contrast, all these structures are well highlighted in the PARS raw data.
  • the H&E virtual staining model flattens these structures during the image translation process. However, this highlights the potential use of the rich PARS raw data to replicate various clinically relevant contrasts beyond H&E staining.
  • a practical application for PARS virtual staining is to provide several emulated histochemical stains from a single acquisition. Moreover, there is a potential to develop completely new histochemical like contrasts based on the endogenous PARS contrast.
  • the PARS system as described herein may be able to provide contrast to biomolecules which are inaccessible with current chemical staining methods.
  • emulated H&E images are produced from label-free PARS images with quality and contrast that compare favorably to traditional H&E staining.
  • the colorization performance represents the current best PARS virtual staining implementation. Applied to entire sections of unstained human tissues, the presented method enables accurate recovery of subtle structural and subnuclear details. With these improvements, the PARS virtual H&E images, may be effectively indistinguishable from gold standard chemically stained H&E scans.
  • PARS label-free virtual staining has the potential to provide multiple histochemical stains from a single unlabelled sample, enhancing diagnostic confidence and greatly improving patient outcomes.
  • FIG. 53 is a schematic diagram of computing device 5300 which may be used to implement a computing device used to train or execute (at inference time) an image generator or machine learning model 3802, 3812, 3912.
  • computing device 5300 includes at least one processor 5302, memory 5304, at least one I/O interface 5306, and at least one network interface 5308.
  • Each processor 5302 may be, for example, any type of general-purpose microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or any combination thereof.
  • Memory 5304 may include a suitable combination of any type of computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), and electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
  • Each I/O interface 5306 enables computing device 5300 to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, or with one or more output devices such as a display screen and a speaker.
  • Each network interface 5308 enables computing device 5300 to communicate with other components, to exchange data with other components, to access and connect to network resources, to serve applications, and perform other computing applications by connecting to a network (or multiple networks) capable of carrying data including the Internet, Ethernet, plain old telephone service (POTS) line, public switch telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile (e.g., 4G, 5G network), wireless (e.g. Wi-Fi, WiMAX), SS7 signaling network, fixed line, local area network, wide area network, and others, including any combination of these.
  • system 100 may include multiple computing devices 5300.
  • the computing devices 5300 may be the same or different types of devices.
  • the computing devices 5300 may be connected in various ways including directly coupled, indirectly coupled via a network, and distributed over a wide geographic area and connected via a network (which may be referred to as “cloud computing”).
  • a computing device 5300 may be a server, network appliance, set-top box, embedded device, computer expansion module, personal computer, laptop, personal data assistant, cellular telephone, smartphone device, UMPC tablets, video display terminal, gaming console, or any other computing device capable of being configured to carry out the methods described herein.
  • FIG. 54 shows a process performed by a processor of an example embodiment of machine learning system or architecture 3800, 3850, 3900.
  • the processor receives, from a sample, a plurality of signals including radiative and non-radiative signals.
  • the plurality of signals include absorption spectra signals.
  • the plurality of signals include scattering signals.
  • the sample is an in vivo or an in situ sample.
  • the sample is not stained.
  • the processor extracts a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals.
  • the contrast may include one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
  • processing the plurality of signals may include: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
  • said extracting the plurality of features includes processing both radiative signals and non-radiative signals.
  • the plurality of features is supplemented with at least one of features informative of image data obtained from complementary modalities.
  • the complementary modalities comprise at least one of: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI).
  • image data obtained from complementary modalities may include photoactive labels for contrasting or highlighting specific regions in the images.
  • the plurality of features is supplemented with at least one of features informative of patient information.
  • said processing includes converting the at least one of the plurality of signals to at least one image.
  • said converting to said at least one image includes applying a simulated stain.
  • the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ stain (MPAS), PAS and GMS stains, Toluidine Blue, Congo Red, Masson's Trichrome stain, Lillie's Trichrome, Verhoeff stain, immunohistochemistry (IHC), a histochemical stain, and In-Situ Hybridization (ISH).
  • the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue.
  • said converting to said at least one image includes converting to at least two images, and applying a different simulated stain to each of the images.
  • said converting includes applying a colorization machine learning architecture.
  • the colorization machine learning architecture includes a generative adversarial network (GAN).
  • the colorization machine learning architecture includes a cycle-consistent generative adversarial network (CycleGAN).
  • the colorization machine learning architecture includes a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
  • the processor applies the plurality of features to a machine learning architecture to generate an inference 2517 regarding the sample.
  • the inference 2517 comprises at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
  • the processor generates signals for causing a display device to render a user interface (UI) 4000 showing a visualization of the inference 2517.
  • a set of instructions configured to train the GAN may include, in each training iteration, instructions causing a processor to: instantiate a machine learning architecture including a neural network having a plurality of nodes and weights stored on a memory device; obtain a true total absorption (TA) image; generate a simulated stained image based on the true TA image; generate a fake TA image based on the generated stained image; compute a first loss based on the generated fake TA image and the true TA image; obtain a labelled and stained image; compute a second loss based on the generated simulated stained image and the labelled and stained image; and update weights of the neural network based on at least one of the first and second losses.
  • a computer-implemented method for training a machine learning architecture for generating a simulated stained image comprising, in each training iteration: obtaining a true total absorption (TA) image; generating a simulated stained image based on the true TA image; generating a fake TA image based on the generated stained image; computing a first loss based on the generated fake TA image and the true TA image; obtaining a labelled and stained image; computing a second loss based on the generated simulated stained image and the labelled and stained image; and updating weights of the neural network based on at least one of the first and second losses.
  • the simulated stained image is generated by a second neural network comprising a second set of nodes and weights, the second set of weights being updated based on at least one of the first and second losses during each iteration.
  • the fake TA image is generated by a third neural network comprising a third set of nodes and weights, the third set of weights being updated based on at least one of the first and second losses during each iteration.
  • computing the second loss based on the generated simulated stained image and the labelled and stained image may include steps of: processing the generated simulated stained image by a first discriminator network; processing the labelled and stained image by a second discriminator network; and computing the second loss based on a respective output from each of the first and second discriminator networks.
  • the method may further include processing the respective output from each of the first and second discriminator networks through a respective classification matrix prior to computing the second loss.
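  • As an illustration only, the two-loss training iteration described in the preceding bullets could be sketched as follows in PyTorch; the generator, discriminator, and optimizer objects (gen_stain, gen_ta, disc, opt_gen, opt_disc) are hypothetical stand-ins and do not reproduce the actual networks or hyperparameters.

      import torch
      import torch.nn.functional as F

      def training_iteration(gen_stain, gen_ta, disc, opt_gen, opt_disc,
                             true_ta, labelled_stain):
          """One iteration of the two-loss scheme sketched above."""
          # generator side
          fake_stain = gen_stain(true_ta)           # simulated stained image from the true TA image
          fake_ta = gen_ta(fake_stain)              # fake TA image reconstructed from the stain
          first_loss = F.l1_loss(fake_ta, true_ta)  # first loss: fake TA vs. true TA
          pred_fake = disc(fake_stain)
          second_loss = F.binary_cross_entropy_with_logits(
              pred_fake, torch.ones_like(pred_fake))  # generator tries to fool the discriminator
          gen_loss = first_loss + second_loss
          opt_gen.zero_grad()
          gen_loss.backward()
          opt_gen.step()
          # discriminator side: labelled (real) stain vs. detached simulated stain
          pred_real = disc(labelled_stain)
          pred_fake = disc(fake_stain.detach())
          disc_loss = (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
                       + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
          opt_disc.zero_grad()
          disc_loss.backward()
          opt_disc.step()
          return gen_loss.item(), disc_loss.item()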
  • the machine learning architecture comprises a CycleGAN machine learning architecture.
  • the machine learning architecture comprises a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
  • the labelled and stained image is a labelled PARS image.
  • the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image.
  • automatically labelling the unlabeled PARS image comprises labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
  • the database is a H&E database.
  • FIG. 64 is an example machine learning architecture 6400 for processing one or more output 6404 from a PARS system 6402.
  • the PARS system 6402, similar to the PARS system 3801, 3901 previously described in connection with FIGs. 38A, 38B and 39, may include one or more of the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems.
  • the PARS system 6402 may be a PARS system from FIG. 5 described above, for example.
  • the PARS system 6402 may detect generated signals in the detection beam(s) returning from a given sample. These perturbations may include but are not limited to changes in intensity, polarization, frequency, phase, absorption, nonlinear scattering, and nonlinear absorption and could be brought on by a variety of factors such as pressure, thermal effects, etc.
  • the sample, which may be an unstained sample, may be an in vivo or an in situ sample. For example, it may be tissue underneath the skin of a patient. As another example, it may be tissue on a glass slide.
  • a computer-implemented deep learning model 6406 for processing PARS signal and/or image data.
  • the input 6404 to the deep learning model 6406 may include a plurality of PARS signals including radiative and non-radiative signals, and/or a plurality of extracted features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals (e.g., PARS data/PARS Image/PARS features/PARS Image features) and other related data (e.g., genomic data, clinical characteristics).
  • the deep learning model 6406 may be trained and deployed to generate one or more inferences 6408 based on the output 6404 from the PARS system 6402.
  • the generated inference 6408 may then be transmitted to a user application display device 6410 for further interpretation and/or display.
  • the user application display device 6410 may be connected to a user application (e.g., user application 3825), which may be installed at a user device.
  • the generated inference 6408 may include one or more of:
  • the deep learning model 6406 can be based on deep neural network models that use one or more types of learning: supervised learning (e.g., classification models, regression models, segmentation models) such as Convolutional Neural Networks (CNN) or Recurrent Neural Networks (RNN), weakly supervised learning (multiple instance learning models, other weakly supervised models), unsupervised learning, and transfer learning deep neural networks (pre-trained models, domain adaptation models) having one or more of the following architectures (and their modified versions): CNN, RNN, Fully Convolutional Networks (FCN), Auto-decoders (A-D), Generative Adversarial Networks (GAN) or Pre-trained Networks (PRE-T-N).
  • the PARS machine learning models described herein may be used to generate one or more inferences including:
  • unsupervised learning classification such as GAN and A-D may be used for:
  • Supervised learning classification such as CNN or RNN may be used for:
  • detection/segmentation/classification of cells/nuclei
  • Weakly-supervised learning classification (weakly supervised CNN, RNN) may be used for:
  • Transfer learning (CNN, GAN, PRE-T-N) may be used for:
  • a computer-implemented machine learning architecture for automatic nuclei detection, segmentation, and classification of PARS data is disclosed herein.
  • a deep learning model 6406 may receive a plurality of PARS signals and PARS data from a PARS system 6402, the PARS signals may include radiative and non-radiative signals, and the PARS data may include a plurality of extracted features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals.
  • the deep learning model 6406 may include one or more of: a classification deep neural network 6610, a segmentation deep neural network 6620, and a nuclei detection deep neural network 6630.
  • the deep learning model 6406 may include, for instance, Densely Connected Neural Network (DCNN), Densely Connected Recurrent Convolutional Neural Network (DCRN) and Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net.
  • the outputs of the deep learning model 6406 may include nuclei type, segmentation, and detection masks, which may be transmitted to a user application display device 6410 for further processing and display.
  • the deep learning model 6406 may receive a set of multi-structured input data, which may include, for example, PARS images, PARS features and PARS image features. Some or all of the multi-structured input data may include PARS data and/or features 6404 from the PARS system 6402. Due to the nature of deep data represented by the PARS signal from the PARS system 6402, Principal Component Analysis (PCA) may be applied for dimensionality reduction to obtain the most relevant feature representatives.
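  • As a minimal illustration (not the patent's implementation), PCA-based dimensionality reduction of such PARS feature vectors could look as follows, assuming scikit-learn and placeholder array sizes.

      import numpy as np
      from sklearn.decomposition import PCA

      features = np.random.rand(10000, 64)    # placeholder: 10,000 pixels x 64 raw PARS features
      pca = PCA(n_components=8)               # keep the 8 most informative components
      reduced = pca.fit_transform(features)   # shape (10000, 8), fed to the deep learning model
      print(pca.explained_variance_ratio_.sum())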
  • the deep learning model 6406 may be implemented and trained using other loss calculation methods; for example, the deep learning model 6406 may be trained using a modified Structural Similarity Index (SSIM) based on overlapping Gaussian sliding windows over tiled image patches, and an Earth Mover's (EM) loss to account for the structured representations.
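  • One possible reading of a Gaussian-windowed SSIM term is sketched below (illustrative only; the constants and window width are placeholders and the Earth Mover's term is omitted), using overlapping Gaussian windows implemented via Gaussian filtering of local statistics.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def gaussian_ssim(x, y, sigma=1.5, data_range=1.0):
          """Mean SSIM over overlapping Gaussian sliding windows; 1 - value can act as a loss."""
          c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
          mu_x, mu_y = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
          var_x = gaussian_filter(x * x, sigma) - mu_x ** 2
          var_y = gaussian_filter(y * y, sigma) - mu_y ** 2
          cov = gaussian_filter(x * y, sigma) - mu_x * mu_y
          ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
                     ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
          return ssim_map.mean()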
  • the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
  • Historical reference cases (images, diagnoses) that closely match the given case may be provided based on a computer-implemented content-based image retrieval (CBIR) system.
  • the Nuclei Segmentation Region-Based CNN 6710, which may be one example of the deep learning model 6406, can receive a plurality of PARS signals, features, and images from a PARS system 6402 as input.
  • the PARS signals may include radiative and non-radiative signals.
  • the PARS features may include a plurality of extracted features based on processing at least one of the plurality of PARS signals, the features informative of a contrast provided by the at least one of the plurality of signals.
  • the Nuclei Segmentation Region-Based CNN 6710 may include a Backbone Network (e.g., Region Proposal Network (RPN)) 6712, a feature map generator 6715 and a mask module 6717.
  • the Backbone Network 6712 may be implemented to find areas that may contain an object.
  • the Nuclei Segmentation Region-Based CNN 6710 may predict classes of proposed areas and refine a bounding box for the proposed area, and the mask module 6717 may be used to generate masks for an object at the pixel level in the next stage based on the proposed areas.
  • An output of the Nuclei Segmentation Region-Based CNN 6710 may be an image with segmented nuclei, which may be transmitted to a user application display device 6410 for further processing and/or display.
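  • For illustration, a region-based CNN with a backbone/region-proposal stage, per-region classification, and a pixel-level mask head could be stood up with torchvision's Mask R-CNN as shown below; this is a generic stand-in, not the trained network 6710, and the class count and image tensor are placeholders.

      import torch
      from torchvision.models.detection import maskrcnn_resnet50_fpn

      # 4 placeholder nuclei classes (epithelial, fibroblast, inflammatory, miscellaneous) + background
      model = maskrcnn_resnet50_fpn(weights=None, num_classes=5)
      model.eval()

      pars_image = torch.rand(3, 512, 512)     # placeholder PARS-derived image tensor
      with torch.no_grad():
          out = model([pars_image])[0]         # dict with boxes, labels, scores, masks
      segmented_masks = out["masks"]           # pixel-level masks for overlay/display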
  • Principal Component Analysis (PCA) may be used for dimensionality reduction to obtain the most relevant feature representatives.
  • Alternative loss function computation methods, as described above, may also be employed, given the nature of the deep data represented by the PARS signals from the PARS system 6402.
  • the Nuclei Segmentation Region-Based CNN 6710 may be implemented and trained using other loss calculation methods; for example, the Nuclei Segmentation Region-Based CNN 6710 may be trained using a modified Structural Similarity Index (SSIM) based on overlapping Gaussian sliding windows over tiled image patches, and an Earth Mover's (EM) loss to account for the structured representations.
  • the outputs of the Nuclei Segmentation Region-Based CNN 6710 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei.
  • Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
  • Historical reference cases may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78).
  • the output from the system 6406 in FIG. 66 may be combined with the output of the Nuclei Segmentation Region-Based CNN 6710 in FIG. 67 for validation.
  • a computer-implemented machine learning architecture is disclosed herein for identification of malignancy of tissues.
  • the machine learning architecture may include a deep learning model 6406 (e.g., a modified Convolutional Neural Network (CNN) model).
  • a deep learning model 6406 may be configured to receive local PARS image features obtained by an image transform sub-module 6403, using techniques such as, but not limited to: Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information).
  • the deep learning model 6406 may receive PARS signals (radiative, non-radiative, scattering signals) and PARS images from a PARS system 6402 and local PARS image features obtained from the image transform sub-module 6403 as inputs and generate one or more inferences, which may include classifications for malignancy of tissues (e.g., benign, malignant, no pathology).
  • the generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
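  • A hedged sketch of the hand-crafted local image features named above (histogram, DFT, LBP; the contourlet transform is omitted here) follows, assuming NumPy and scikit-image; bin counts and crop sizes are illustrative.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def local_image_features(img):
          img = img.astype(np.float64)
          # pixel strength distribution
          hist, _ = np.histogram(img, bins=32, range=(img.min(), img.max() + 1e-9), density=True)
          # frequency-domain information: a few low-frequency DFT magnitudes around DC
          dft_mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
          cy, cx = np.array(dft_mag.shape) // 2
          dft_feat = dft_mag[cy - 4:cy + 4, cx - 4:cx + 4].ravel()
          # textural information via uniform LBP
          lbp = local_binary_pattern(img, P=8, R=1.0, method="uniform")
          lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
          return np.concatenate([hist, dft_feat, lbp_hist])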
  • a computer-implemented machine learning architecture for identification of the malignancy of tissues.
  • the architecture may include a deep learning model 6406 (e.g., a modified CNN).
  • the PARS signals may include radiative and non-radiative signals.
  • the PARS features may include a plurality of extracted features based on processing at least one of the plurality of PARS signals, the features informative of a contrast provided by the at least one of the plurality of signals.
  • the deep learning model 6406 may be configured to receive local PARS image features obtained by an image transform sub-module 6403, using techniques such as, but not limited to: Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information).
  • the deep learning model 6406 may receive multichannel PARS signals (radiative, non-radiative, scattering signals) and PARS signals and features 7015 from a PARS system 6402 and local PARS image features obtained from the image transform sub-module 6403 as inputs and generate one or more inferences, which may include classifications for malignancy of tissues (e.g., benign, malignant, no pathology).
  • the generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
  • the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei.
  • Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
  • Historical reference cases (images, diagnoses) that closely match the given case may be provided based on a computer-implemented content-based image retrieval (CBIR) system.
  • a deep learning model 6406 may receive simulated stained PARS image(s) from an image generator 7010 (similar to image generator 3812, 3912), which may produce the simulated stained PARS image(s) based on PARS signals and features from the PARS system 6402.
  • the deep learning model 6406 may also be configured to receive local PARS image features obtained by an image transform sub-module 6403, using techniques such as, but not limited to: Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information).
  • the deep learning model 6406 may receive the simulated stained PARS image(s) and local PARS image features obtained from the image transform sub-module 6403 as inputs and generate one or more inferences, which may include classifications for malignancy of tissues (e.g., benign, malignant, no pathology).
  • the generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
  • the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei.
  • Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
  • Historical reference cases (images, diagnoses) that closely match the given case may be provided based on a computer-implemented content-based image retrieval (CBIR) system.
  • a deep learning model 6406 may receive a set of multi-dimensional input data.
  • the set of multi-dimensional input data may be a set of multi-structured input data, which may include, for example, two or more from: PARS images 7020, PARS signals and features 7015, PARS image features 7023, and simulated stained images 7025 generated from selected PARS features 7021.
  • Some or all of the multi-structured input data may include PARS data and/or features from the PARS system 6402. Due to the nature of deep data represented by the PARS signal data from the PARS system 6402, Principal Component Analysis (PCA) may be applied for dimensionality reduction to obtain the most relevant feature representatives.
  • pixel information in PARS images 7020 may be combined with information contained in the PARS signal features 7015 to form a multidimensional (deep data) input to the deep learning model 6406.
  • Examples of PARS signal features 7015 may include data values representative of mechanical properties (e.g., stiffness, speed of sound, pea87istolocity, thermal conductivity) and data values representative of chemical properties (e.g., QER, total absorption, bonding state, viscosity, ion concentration, charge, chemical composition).
  • the input data sent to the deep learning model 6406 can be used to generate outcomes of increased complexity.
  • the output of the deep learning model 6406 may include tissue malignancy class, malignancy grading, cancer prognosis, treatment prognosis.
  • the generated inferences may be transmitted to a user application display 6410.
  • the deep learning model 6406 may be implemented and trained using other loss calculation methods; for example, the deep learning model 6406 may be trained using a modified Structural Similarity Index (SSIM) based on overlapping Gaussian sliding windows over tiled image patches, and an Earth Mover's (EM) loss to account for the structured representations.
  • the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei.
  • Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
  • Historical reference cases (images, diagnoses) that closely match the given case may be provided based on a computer-implemented content-based image retrieval (CBIR) system.
  • Multi-stain graph fusion for multimodal integration in pathology to predict cancer grading
  • a computer-implemented machine learning architecture for performing multi-stain graph fusion for multimodal integration of a simulated stained PARS image and multiple non-registered stained histology images to predict pathologic scores, as shown in FIG. 71.
  • This multimodal deep learning graph fusion process may use information from a simulated stained PARS image and multiple non-registered histopathology images 7110 to predict pathologic scores.
  • the simulated stained PARS image may be obtained from image generator 7010 (similar to image generator 3812, 3912), which may produce the simulated stained PARS image(s) based on PARS signals and features from the PARS system 6402.
  • the deep learning model 6406 may receive a set of multidimensional input data.
  • the set of multi-dimensional input data may be a set of multi-structured input data, which may include, for example, two or more from: PARS images 7020, PARS signals and features 7015, historical unregistered histology images 7110, and simulated stained images 7025 generated from selected PARS features.
  • Some of the multi-structured input data may include PARS data and/or features from the PARS system 6402.
  • the deep learning model 6406 may be implemented to perform pixel-level classification of various stains.
  • Output of the deep learning model 6406 may include heatmaps 7130, which are used to generate graphs by a graph generator 7130.
  • a Graph Neural Network model 7150 is trained on a plurality of input image graphs. The output of the trained Graph Neural Network model 7150 can generate inferences that represent tissue malignancy grading and/or probability scores. The generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
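  • Purely as a sketch (assuming the PyTorch Geometric library; the real graph construction and trained weights are not reproduced), a small graph neural network that maps a tissue graph derived from the stain heatmaps to grading scores could look like this.

      import torch
      from torch_geometric.nn import GCNConv, global_mean_pool

      class GradingGNN(torch.nn.Module):
          def __init__(self, in_dim, hidden=64, n_grades=4):
              super().__init__()
              self.conv1 = GCNConv(in_dim, hidden)
              self.conv2 = GCNConv(hidden, hidden)
              self.head = torch.nn.Linear(hidden, n_grades)  # malignancy grading / probability scores

          def forward(self, x, edge_index, batch):
              h = self.conv1(x, edge_index).relu()
              h = self.conv2(h, edge_index).relu()
              return self.head(global_mean_pool(h, batch))   # one score vector per tissue graph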
  • the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei.
  • Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
  • Historical reference cases (images, diagnoses) that closely match the given case may be provided based on a computer-implemented content-based image retrieval (CBIR) system.
  • genomic data 7250 may include any signals derived from analysis of DNA or RNA, or mRNA derived from any sequencing or other nucleic analysis technique, including epigenetic features. Genomic data 7250 can be derived from germline analysis, bulk tumor analysis, single cell analysis, analysis of malignant cells or subsets thereof, or analysis of benign cells or subsets thereof, including benign stromal elements.
  • the machine learning architecture 7200 has three parts: PARS image cluster processing, genomic data processing (for example, but not limited to, mRNA-seq analysis by WGCNA), and multi-modality survival analysis.
  • PARS images and PARS image features from a PARS system 6402 may be received as input by a patch clustering process 7210, during which patches are clustered into n categories followed by patch augmentation process 7220 (horizontal flip, vertical flip, and rotation).
  • patch clustering process 7210 (e.g., a multi-instance fully convolutional network (MI-FCN) composed of multiple sub-networks with the same structure and shared weight parameters).
  • the output from the deep neural network 7230 may undergo attention aggregation to obtain a deep learning risk score of a given patient.
  • the deep neural network 7230 may be implemented to receive PARS image features obtained using techniques such as, but not limited to, Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information).
  • eigengenes may be obtained by weighted gene co-expression network analysis (WGCNA) 7260. Then, modules may be selected by a least absolute shrinkage and selection operator (LASSO) process 7270 based on eigengenes. The top hub genes of the retained modules may then be extracted as risk factors.
  • the deep learning risk score and hub genes may be integrated using the Cox proportional hazards model 7280.
  • module hub genes from genetic data, and clinical characteristics (e.g., age and sex) 7240, may also be used as inputs to the integrated model.
  • a multi-input integrative prognosis machine learning model based on the Cox proportional hazards model 7280 is implemented and trained.
  • the Cox proportional hazards model 7280 is a regression model to investigate the association between the survival time of patients and one or more predictor variables.
  • the integrated model can estimate survival risk and calculate a comprehensive risk score by which patients can be categorized into low- or high-risk groups in a survival analysis module 7290.
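  • The integration step could be sketched with the lifelines library as below; the column names and values are hypothetical placeholders for the deep learning risk score, a hub gene, and a clinical covariate, not real patient data.

      import pandas as pd
      from lifelines import CoxPHFitter

      df = pd.DataFrame({
          "time": [14.0, 32.0, 7.5, 50.0, 25.0, 18.0],            # survival time (placeholder units)
          "event": [1, 0, 1, 0, 1, 1],                             # 1 = event observed, 0 = censored
          "dl_risk_score": [0.82, 0.31, 0.90, 0.12, 0.45, 0.66],   # deep learning risk score from PARS data
          "hub_gene_1": [2.1, 0.4, 3.3, 0.2, 1.0, 1.8],            # hub-gene expression (WGCNA/LASSO)
          "age": [64, 51, 72, 45, 58, 60],                         # clinical characteristic
      })
      cph = CoxPHFitter(penalizer=0.1)
      cph.fit(df, duration_col="time", event_col="event")
      risk = cph.predict_partial_hazard(df)    # comprehensive risk score per patient
      high_risk = risk > risk.median()         # categorize into low-/high-risk groups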
  • genomic data 7250 may include any signals derived from analysis of DNA or RNA, or mRNA derived from any sequencing or other nucleic analysis technique, including epigenetic features. Genomic data 7250 can be derived from germline analysis, bulk tumor analysis, single cell analysis, analysis of malignant cells or subsets thereof, or analysis of benign cells or subsets thereof, including benign stromal elements.
  • the machine learning architecture 7300 has three parts: PARS data processing, genomic data processing (for example, but not limited to, mRNA-seq analysis by WGCNA), and multi-modality survival analysis.
  • PARS features from a PARS system 6402 may be received as input by a deep neural network 6406, and the output from the deep neural network 6406 may undergo attention aggregation to obtain a deep learning risk score of a given patient.
  • eigengenes may be obtained by weighted gene co-expression network analysis (WGCNA) 7260. Then, modules may be selected by a least absolute shrinkage and selection operator (LASSO) process 7270 based on eigengenes. The top hub genes of the retained modules may then be extracted as risk factors.
  • the deep learning risk score and hub genes may be integrated using the Cox proportional hazards model 7280.
  • module hub genes from genetic data, and clinical characteristics (e.g., age and sex) 7240, may also be used as inputs to the integrated model.
  • a multi-input integrative prognosis machine learning model based on the Cox proportional hazards model 7280 is implemented and trained.
  • the Cox proportional hazards model 7280 may be a regression model to investigate the association between the survival time of patients and one or more predictor variables.
  • the integrated model can estimate survival risk and calculate a comprehensive risk score by which patients can be categorized into low- or high-risk groups in a survival analysis module 7290.
  • survival analysis based on PARS image data, historical unregistered histology images and genomic data
  • historical unregistered histology images 7110 may also be used as an input to the deep neural network 6406.
  • genomic data 7250 may also be used as an input to the deep neural network 6406 to compute the risk score.
  • the risk score and genomic data 7250 can be integrated using the Cox proportional hazards model 7280, which is a regression model to investigate the association between the survival time of patients and one or more predictor variables.
  • the integrated model can estimate survival risk and calculate a comprehensive risk score by which patients can be categorized into low- or high-risk groups in a survival analysis module 7290.
  • a computer-implemented system is implemented to perform image fusion of PARS image data 7410 with images 7405 from other modalities (e.g., Computed Tomography (CT), Magnetic Resonance Imaging (MRI)) based on image feature representations and a similarity measure, as shown in FIG. 74.
  • An embodiment system may perform registration of PARS image data with other imaging modalities such as CT and MRI; dimensional complexity may include image-to-volume (2D to 3D), image-to-image (2D to 2D), and volume-to-volume (3D to 3D).
  • An example process 7400 performed by the embodiment system is illustrated in FIG. 74.
  • the multimodal image fusion can be accomplished through image registration techniques implemented with iterative optimization algorithms. In each iteration, better alignment can be achieved based on a predefined similarity measure 7450 that computes the amount of correspondence between the input images.
  • An optimization algorithm may calculate and update the new transformation/interpolation parameters. The operations continue until the optimal registration is achieved, or some predefined criteria are satisfied.
  • the system’s output can be either the transformation parameters 7480 or the final interpolated fused image 7470.
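  • One possible concrete form of this iterative, similarity-driven registration is sketched below with the SimpleITK library; the file names, 2D rigid transform, metric, and optimizer settings are illustrative assumptions, not the embodiment's actual parameters.

      import SimpleITK as sitk

      fixed = sitk.Cast(sitk.ReadImage("pars_slice.tif"), sitk.sitkFloat32)   # PARS image data (placeholder file)
      moving = sitk.Cast(sitk.ReadImage("ct_slice.tif"), sitk.sitkFloat32)    # other modality, e.g. a CT/MRI slice

      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)        # predefined similarity measure
      reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                   numberOfIterations=200)    # iterative optimization
      reg.SetInitialTransform(sitk.CenteredTransformInitializer(
          fixed, moving, sitk.Euler2DTransform(),
          sitk.CenteredTransformInitializerFilter.GEOMETRY))
      reg.SetInterpolator(sitk.sitkLinear)

      transform = reg.Execute(fixed, moving)                                  # transformation parameters
      fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)   # interpolated fused image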
  • the example process 7400 can enable measurement of 3D features and spatial localization of the findings of histology within the local 3D tissue environment. Combining PARS image data with high-resolution 2D/3D-imaging techniques such as micro-CT and micro-MRI (prior to sectioning) can provide access to morphological characteristics, relate histological findings to the 2D/3D structure of the local tissue environment, and enable guided sectioning of tissue.
  • Multimodal fusion of PARS signal/image (Deep learning)
  • multimodal image fusion may be performed by a computer-implemented system configured to perform an example process 7500 shown in FIG. 75.
  • the system performs registration of PARS image data 7510 with images 7505 from other imaging modalities such as CT and MRI via similarity metrics 7520 and a deep learning registration model 7530.
  • Dimensional complexity may include image-to-volume (2D to 3D), image-to-image (2D to 2D), and volume-to-volume (3D to 3D).
  • the system output may be a final 2D or 3D fused image, which may be transmitted to a user application display device 6410 for display.
  • This fusion technique can enable measurement of 3D features and spatial localization of the findings of histology within the local 2D/3D tissue environment, and can enable guided sectioning of tissue.
  • a customized staining deep learning model 7606 may be implemented as part of a machine learning architecture 7600 to perform customized PARS image staining, which may allow a user to control different aspects, such as the total number and nature of different stained images being displayed at a user application display device 6410. The user may, for example, mix, change and combine stains in real time based on one or more specified criteria through user input 7610 received by the user application display device 6410.
  • the customized staining deep learning model 7606 may receive one or more PARS images and PARS features from a PARS system 6401 and user input 7610 as input.
  • the user input 7610 may include user-defined criteria for generating one or more stained images.
  • the customized staining deep learning model 7606 may generate one or more custom stained images for display at the user application display device 6410 based on the user criteria.
  • Some example user criteria for custom staining that may be included in the user input 7610 may include:
  • the customized staining deep learning model 7606 may include a feature extraction mechanism that may determine size, shape, features (density for example) and structures to customize the colour map.
  • a computer-implemented machine learning architecture 7700 is implemented to perform detection, classification and grading of tissue malignancy (cancer).
  • the machine learning architecture 7700 may include a stain fusion deep learning model 7720 to produce a simulated stained/fused PARS image 7730 relevant to the predicted model outcome for display at the user application display device 6410.
  • the stain fusion deep learning model 7720 may, in order to generate the relevant simulated stained/fused PARS image 7730, receive a plurality of simulated stained images (stain 1, stain 2, ..., stain n) from an image generator 7710, which can generate the plurality of simulated stained images based on PARS features from a PARS system 6402.
  • the machine learning architecture 7700 includes a diagnostic deep learning model 7706, which may include two deep neural network models to perform classification and grading, respectively, based on PARS features and images from the PARS system 6402.
  • the output of the diagnostic deep learning model 7706 and the output of the image generator 7710 may serve as input to the stain fusion deep learning model 7720 to generate the relevant simulated stained PARS image 7730.
  • the output 7750 of the diagnostic deep learning model 7706 for classification, grading and staining of PARS image data may include a predicted class and grading, such as malignancy grading, and the output of the stain fusion deep learning model 7720 is a simulated stained PARS image 7730.
  • the two outputs 7730, 7750 may be transmitted to the user application display device 6410 for further processing (if any) and display.
  • a computer-implemented content-based image retrieval (CBIR) system 7800 is implemented to assist pathologists in diagnosis.
  • the system 7800 may be configured to query one or more images, which may be, for example, a PARS image 7802, a simulated stained PARS image 7805, or a histology image, and retrieve similar images 7850 from an image repository 7810 based on the queried image 7802, 7805.
  • images obtained from the repository 7810 are processed by an image feature extraction module 7820 to obtain semantically meaningful features (feature vector) which are then indexed (represented by index features 7840) based on their pair-wise differences computed with a distance measure 7830.
  • the queried image 7802, 7805 can be processed by the same feature extraction module 7820 to generate a feature vector 7825 of the queried image.
  • the feature vector 7825 of the queried image is then compared, by a distance measure module 7835, to the indexed features 7840 obtained based on the image from the image repository 7810.
  • the final output 7850 is obtained by choosing one or more images from the image repository 7810 that are the closest to the queried image 7802, 7805 based on the computed distances generated by the distance measure module 7835. For example, if a computed distance D between the repository image and the queried image is beneath a certain threshold, the image from the image repository 7810 is considered sufficiently similar to the queried image 7802, 7805 to be included as part of the final output 7850, which may be used for diagnostic reporting.
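  • A minimal sketch of the threshold-based retrieval step, assuming a Euclidean distance measure and placeholder feature vectors, is shown below.

      import numpy as np

      def retrieve(query_vec, index_vecs, index_ids, threshold):
          """Return repository cases whose indexed features lie within `threshold` of the query."""
          d = np.linalg.norm(index_vecs - query_vec, axis=1)   # pair-wise distance measure
          order = np.argsort(d)
          return [(index_ids[i], float(d[i])) for i in order if d[i] < threshold]

      index_vecs = np.random.rand(1000, 128)          # placeholder indexed features
      index_ids = [f"case_{i}" for i in range(1000)]  # placeholder case identifiers
      query_vec = np.random.rand(128)                 # feature vector of the queried image
      similar = retrieve(query_vec, index_vecs, index_ids, threshold=4.0)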
  • a computer-implemented architecture 7900 may be implemented to generate meaningful information for explaining one or more PARS images from a PARS system 6402, as shown in FIG. 79.
  • the architecture 7900 may include an Explainable AI PARS Module 7950, which may include a plurality of different modules.
  • the AI PARS Module 7950 may receive as input one or more PARS images from a PARS system 6402, histology images 7920, and a user query 7930, in order to perform diagnostic analyses to assist a pathologist in diagnosis.
  • the AI PARS Module 7950 may include one or more modules and machine learning models that represent classes of explanation-generation methods, which may include, for example: deep learning diagnostic module, deep learning saliency maps generator, concept attribution generator, prototypes generator, counterfactuals generator, trust scores generator, and a user query interpreter.
  • the deep learning diagnostic module can generate diagnostic predictions based on the PARS images.
  • Global and local saliency maps from the deep learning saliency maps generator can explain model predictions by providing visualisations.
  • the concept attribution generator can provide explanation of model predictions with the use of high-level concepts including synthetically generated visualisations and/or domain-related natural language.
  • the prototypes generator can generate explanations of model inner workings. These explanations are provided through real or synthetically generated examples such as typical instances of a particular category or feature.
  • Counterfactuals generator can generate counterfactuals used to explain a model outcome by presenting outcomes of other possible scenarios that lead to a different outcome. Counterfactual examples are synthetically generated visualisations or real data.
  • the trust scores generator can generate trust scores or measures indicating trustworthiness of the model predictions and outcomes.
  • the user query interpreter analyzes user’s input query (e.g., visualizing specific part of tissue, specific sub-structures, indicators for a specific type of cancer, count of nuclei of specific size, etc. for a given patient).
  • the output of the AI PARS Module 7950 may be a collection of images, a quantitative measure, a presentation of similar cases, and/or a generated report in the form of domain-related natural language, which may be transmitted to the user application display device 6410 for display to a user.
  • one or more simulated stain images from an image generator 7910 may be used as input to the AI PARS Module 7950 for performing diagnostic analyses to assist a pathologist in diagnosis.
  • the one or more simulated stain images may also be transmitted to the user application display device 6410 for display to a user together with the output from the AI PARS Module 7950.
  • aspects disclosed herein may include non-radiative (heat and pressure) and radiative (fluorescence is one of the possible signals) signals in a sample.
  • Aspects disclosed herein may include collecting radiative relaxation and non-radiative relaxation due to optical absorption and also scattering from both excitation and detection.
  • the collected signals and/or raw data may be used to directly form and color an image of a sample, such as an H&E (hematoxylin and eosin) histology image without staining the sample.
  • H&E histology images may be directly formed and colorized by using methods (such as based on a comparison of non-radiative and radiative signals, QER, lifetime or evolution of signals, and/or a clustering algorithm) disclosed herein and using features in raw PARS signals.
  • Aspects disclosed herein may be used to determine or measure, using a photon absorption remote sensing system or PARS, mechanical characteristics such as the speed of sound and/or temperature characteristics of the sample.
  • characteristics may be determined for a tiny or pinpointed area of the sample (e.g., an area on the order of a focused laser beam or beam of light).
  • Aspects disclosed herein may extract more than just an amplitude or scalar amplitude of signals in a sample.
  • two targets may have a same or similar optical absorption but slightly different other characteristics such as a different speed of sound, which may result in a different evolution and/or shape of the signals.
  • Aspects disclosed herein may be used to determine or add novel molecular information to PARS images.
  • the target can be prepared with water or any liquid such as oil before a non-contact imaging session.
  • an intermediate window such as a cover slip or glass window may be placed between the imaging system and the sample.
  • Optical coherence tomography (OCT) may be implemented as time domain OCT (TD-OCT) or frequency domain OCT (FD-OCT).
  • multiple A-scans are typically acquired while the sample beam is scanned laterally across the tissue surface, building up a two-dimensional map of reflectivity versus depth and lateral extent typically called a B-scan.
  • the lateral resolution of the B-scan is approximated by the confocal resolving power of the sample arm optical system, which is usually given by the size of the focused optical spot in the tissue.
  • All optical sources including but not limited to PARS excitations, PARS detections, PARS signal enhancements, and OCT sources may be implemented as continuous beams, modulated continuous beams, or short pulsed lasers in which pulse widths may range from attoseconds to milliseconds. These may be set to any wavelength suitable for taking advantage of optical (or other electromagnetic) properties of the sample, such as scattering and absorption. Wavelengths may also be selected to purposefully enhance or suppress detection or excitation photons from different absorbers. Wavelengths may range from nanometer to micron scales. Continuous-wave beam powers may be set to any suitable power range such as from attowatts to watts.
  • Pulsed sources may use pulse energies appropriate for the specific sample under test such as within the range from attojoules to joules.
  • Various coherence lengths may be implemented to take advantage of interferometric effects. These coherence lengths may range from nanometers to kilometers.
  • pulsed sources may use any repetition rate deemed appropriate for the sample under test such as from continuous- wave to the gigahertz regime.
  • the sources may be tunable, monochromatic or polychromatic.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may include an interferometer, such as a Michelson interferometer, Fizeau interferometer, Ramsey interferometer, Fabry-Perot interferometer, Mach-Zehnder interferometer, or optical-quadrature detection. Interferometers may be free-space or fiber-based or some combination. The basic principle is that phase and amplitude oscillations in the probing receiver beam can be detected using interferometry at AC, RF or ultrasonic frequencies using various detectors.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may use and implement a non-interferometry detection design to detect amplitude modulation within the signal.
  • the non-interferometry detection system may be free-space or fiber-based or some combination thereof.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may use a variety of optical fibers such as photonic crystal fibers, image guide fibers, double-clad fibers etc.
  • the PARS subsystems may be implemented as a conventional photoacoustic remote sensing system, non-interferometric photoacoustic remote sensing (NI-PARS), camera-based photoacoustic remote sensing (C-PARS), coherence-gated photoacoustic remote sensing (CG-PARS), single-source photoacoustic remote sensing (SS-PARS), or extensions thereof.
  • all beams may be combined and scanned.
  • PARS excitations may be sensed in the same area as they are generated and where they are the largest.
  • OCT detection may also be performed in the same location as the PARS to aid in registration.
  • Other arrangements may also be used, including keeping one or more of the beams fixed while scanning the others or vice versa.
  • Optical scanning may be performed by galvanometer mirrors, MEMS mirrors, polygon scanners, stepper/DC motors, etc.
  • Mechanical scanning of the sample may be performed by stepper stages, DC motor stages, linear drive stages, piezo drive stages, piezo stages, etc.
  • Both the optical scanning and mechanical scanning approaches may be leveraged to produce one-dimensional, two-dimensional, or three-dimensional scans about the sample.
  • Adaptive optics such as TAG lenses and deformable mirrors may be used to perform axial scanning within the sample.
  • Both optical scanning and mechanical scanning may be combined to form a hybrid scanner.
  • This hybrid scanner may employ one-axis or two-axis optical scanning to capture large areas or strips in a short amount of time.
  • the mirrors can potentially be controlled using custom control hardware to have customized scan patterns to increase scanning efficiency in terms of speed and quality.
  • one optical axis can be used to scan rapidly while one mechanical axis is simultaneously used to move the sample. This may render a ramp-like scan pattern which can then be interpolated.
  • Another example using custom control hardware would be to step the mechanical stage only when the fast axis has finished moving, yielding a Cartesian-like grid which may not need any interpolation.
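  • For illustration, interpolating a ramp-like hybrid-scan point cloud onto a Cartesian pixel grid could be done as follows (SciPy's griddata; coordinates and signal values are synthetic placeholders).

      import numpy as np
      from scipy.interpolate import griddata

      n = 20000
      x = np.random.rand(n)                    # fast optical-axis positions (ramp-like sampling)
      y = np.random.rand(n)                    # slow mechanical-axis positions
      signal = np.sin(6 * x) * np.cos(4 * y)   # placeholder PARS amplitude at each sample point

      grid_x, grid_y = np.meshgrid(np.linspace(0, 1, 512), np.linspace(0, 1, 512))
      image = griddata((x, y), signal, (grid_x, grid_y), method="linear")  # interpolated en-face image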
  • PARS may provide 3D imaging by optical or mechanical scanning of the beams or mechanical scanning of the samples or the imaging head or the combination of mechanical and optical scanning of the beams, optics, and the samples. This may allow rapid structural and function en-face or 3D imaging.
  • One or multiple pinholes may be employed to reject out of focus light when optically or mechanically scanning the beams or mechanical scanning of the samples or the imaging head or the combination of mechanical and optical scanning of the beams, optics, and samples. They may improve the signal to noise ratio of the resulting images.
  • Beam combiners may be implemented using dichroic mirrors, prisms, beamsplitters, polarizing beamsplitters, WDMs etc.
  • Beam paths may be focused on to the sample using different optical paths.
  • Each of the single or multiple PARS excitation, detection, signal enhancement etc. paths and OCT paths may use an independent focusing element onto the sample, or all share a single (only one or exactly one) path or any combination.
  • Beam paths may return from the sample using unique optical paths which are different from those optical paths used to focus on to the sample. These unique optical paths may interact with the sample at normal incidence, or may interact at some angle where the central beam axis forms an angle with the sample surface ranging from 5 degrees to 90 degrees.
  • the imaging head may not implement any primary focusing element such as an objective lens to tightly focus the light onto the sample.
  • the beams may be collimated, or loosely focused (as to create a spot size much larger than the optical diffraction limit) while being directed at the sample.
  • ophthalmic imaging devices may direct a collimated beam into the eye, allowing the eye's lens to focus the beam onto the retina.
  • the imaging head may focus the beams into the sample at least to a depth of 50 nm.
  • the imaging head may focus the beams into the sample at most to a depth of 10 mm.
  • the added depth over previous PARS arises from the novel use of deeply-penetrating detection wavelengths as described above.
  • Light may be amplified by an optical amplifier prior to interacting with a sample or prior to detection.
  • Light may be collected by photodiodes, avalanche photodiodes, phototubes, photomultipliers, CMOS cameras, CCD cameras (including EM-CCD, intensified-CCDs, back- thinned and cooled CCDs), spectrometers, etc.
  • the detected signals may be amplified by an RF amplifier, lock-in amplifier, trans-impedance amplifier, or other amplifier configuration.
  • Modalities may be used for A-, B- or C- scan images for in vivo, ex vivo or phantom studies.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may take the form of any embodiment common to microscopic and biological imaging techniques. Some of these may include but are not limited to devices implemented as a table-top microscope, inverted microscope, handheld microscope, surgical microscope, endoscope, or ophthalmic device, etc. These may be constructed based on principles known in the art.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may be optimized in order to take advantage of a multi-focus design for improving the depth-of-focus of 2D and 3D imaging.
  • the chromatic aberration in the collimating and objective lens pair may be harnessed to refocus light from a fiber into the object so that each wavelength is focused at a slightly different depth location. These chromatic aberrations may be used to encode depth information into the recovered PARS signals which may be later recovered using wavelength specific analysis approaches. Using these wavelengths simultaneously may also be used to improve the depth of field and signal to noise ratio (SNR) of the PARS images.
  • PARS methods may provide lateral or axial discrimination on the sample by spatially encoding detection regions, such as by using several pinholes, or by the spectral content of a broadband beam.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may be combined with other imaging modalities such as stimulated Raman microscopy, fluorescence microscopy, two-photon and confocal fluorescence microscopy, Coherent-Anti-Raman-Stokes microscopy, Raman microscopy, other photoacoustic and ultrasound systems, etc. This could permit imaging of the microcirculation, blood oxygenation parameter imaging, and imaging of other molecularly-specific targets simultaneously, a potentially important task that is difficult to implement.
  • a multi-wavelength visible laser source may also be implemented to generate photon absorption signals for functional or structural imaging.
  • Polarization analyzers may be used to decompose detected light into respective polarization states. The light detected in each polarization state may provide information about the sample. Phase analyzers may be used to decompose detected light into phase components. This may provide information about the sample.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may detect generated signals in the detection beam(s) returning from the sample. These perturbations may include but are not limited to changes in intensity, polarization, frequency, phase, absorption, nonlinear scattering, and nonlinear absorption and could be brought on by a variety of factors such as pressure, thermal effects, etc.
  • Analog-based signal extraction may be performed along electrical signal pathways.
  • Some examples of such analog devices may include but are not limited to lock-in amplifiers, peak-detections circuits, etc.
  • the PARS subsystem may detect temporal information encoded in the back- reflected detection beam. This information may be used to discriminate chromophores, enhance contrast, improve signal extraction, etc. This temporal information may be extracted using analog and digital processing techniques. These may include but are not limited to the use of lock-in amplifiers, Fourier transforms, wavelet transforms, intelligent algorithm extraction to name a few. In one example, lock in detection may be leveraged to extract PARS signals which are similar to known expected signals for extraction of particular chromophores such as DNA, cytochromes, red blood cells, etc.
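  • A hedged numerical sketch of the temporal extraction idea follows: a synthetic detected trace is compared against a known expected signature (matched-filter style) and demodulated lock-in style at a reference frequency; all waveforms and rates are placeholders.

      import numpy as np

      fs = 500e6                                   # placeholder sampling rate
      t = np.arange(2048) / fs
      trace = (np.exp(-t / 200e-9) * np.sin(2 * np.pi * 10e6 * t)
               + 0.1 * np.random.randn(t.size))    # synthetic detected PARS trace

      # matched-filter-style similarity to a known expected decay signature
      template = np.exp(-t / 200e-9) * np.sin(2 * np.pi * 10e6 * t)
      score = np.dot(trace, template) / np.linalg.norm(template)

      # lock-in-style demodulation at the reference frequency
      ref_i = np.sin(2 * np.pi * 10e6 * t)
      ref_q = np.cos(2 * np.pi * 10e6 * t)
      amplitude = 2 * np.hypot(np.mean(trace * ref_i), np.mean(trace * ref_q))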
  • the imaging head of the system may include close-loop or open-loop adaptive optic components including but not limited to wave-front sensors, deformable mirrors, TAG lenses, etc. for wave-front and aberration correction.
  • Aberrations may include de-focus, astigmatism, coma, distortion, 3rd-order effects, etc.
  • the signal enhancement beam may also be used to suppress signals from undesired chromophores by purposely inducing a saturation effect such as photobleaching.
  • axicons may be used as a primary objective to produce Bessel beams with a larger depth of focus as compared to that available by standard Gaussian beam optics. Such optics may also be used in other locations within beam paths as deemed appropriate. Reflective optics may also take the place of their respective refractive elements, such as the use of a reflective objective lens rather than a standard compound objective lens.
  • Optical pathways may include nonlinear optical elements for various related purposes such as wavelength generation and wavelength shifting. Beam foci may overlap at the sample but may also be laterally and axially offset from each other when appropriate by a small amount.
  • the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may be used as a spectrometer for sample analysis.
  • the system may be used for imaging angiogenesis for different pre-clinical tumor models.
  • the system may be used for unmixing targets (e.g. detect, separate or otherwise discretize constituent species and/or subspecies) based on their absorption, scattering or frequency contents by taking advantage of different wavelengths, different pulse widths, different coherence lengths, repetition rates, exposure time, different evolution or lifetime of signals, quantum efficiency ratio and/or other comparisons of non-radiative and radiative signals, etc.
  • the system may be used to image with resolution up to and exceeding the diffraction limit.
  • the system may be used to image anything that absorbs light, including exogenous and endogenous targets and biomarkers.
  • the system may have some surgical applications, such as functional and structural imaging during brain surgery, use for assessment of internal bleeding and cauterization verification, imaging perfusion sufficiency of organs and organ transplants, imaging angiogenesis around islet transplants, imaging of skin-grafts, imaging of tissue scaffolds and biomaterials to evaluate vascularization and immune rejection, imaging to aid microsurgery, guidance to avoid cutting critical blood vessels and nerves.
  • the system may also have some gastroenterological applications, such as imaging vascular beds and depth of invasion in Barrett’s esophagus and colorectal cancers. Depth of invasion, in at least some embodiments, is key to prognosis and metastatic potential. This may be used for virtual biopsy, Crohn's disease, monitoring of IBS, and inspection of the carotid artery. Gastroenterological applications may be combined or piggy-backed off of a clinical endoscope, and the miniaturized PARS system may be designed either as a standalone endoscope or to fit within the accessory channel of a clinical endoscope.
  • the system may also be used for clinical imaging of micro- and macro-circulation and pigmented cells, which may find use for applications such as in (1) the eye, potentially augmenting or replacing fluorescein angiography; (2) imaging dermatological lesions including melanoma, basal cell carcinoma, hemangioma, psoriasis, eczema, dermatitis, imaging Mohs surgery, imaging to verify tumor margin resections; (3) peripheral vascular disease; (4) diabetic and pressure ulcers; (5) burn imaging; (6) plastic surgery and microsurgery; (7) imaging of circulating tumor cells, especially melanoma cells; (8) imaging lymph node angiogenesis; (9) imaging response to photodynamic therapies including those with vascular ablative mechanisms; (10) imaging response to chemotherapeutics including anti-angiogenic drugs; (11) imaging response to radiotherapy.
  • the system may also be used for some histopathology imaging applications, such as frozen pathology, generating H&E-stain like images from tissue samples, virtual biopsy, etc.
  • the system may be implemented to generate virtual stains and other types of images for various tissue preparations, such as, for example, formalin-fixed paraffin-embedded (FFPE) tissue blocks, slides, and sections, frozen pathology sections, formalin-fixed tissue, freshly resected unprocessed tissue, freshly resected specimens, and so on.
  • the generated stains or images may be used for one or more histopathology imaging applications for different diseases including but not limited to: wound healing, angiogenesis and tissue regeneration, hypersensitivity, infection, inflammation, autoimmunity, scarring and fibrosis.
  • the system may be useful in estimating oxygen saturation using multi-wavelength PARS excitation in applications including: (1) estimating venous oxygen saturation where pulse oximetry cannot be used including estimating cerebrovenous oxygen saturation and central venous oxygen saturation. This could potentially replace catheterization procedures which can be risky, especially in small children and infants.
  • Oxygen flux and oxygen consumption may also be estimated by using PARS imaging to estimate oxygen saturation, and to estimate blood flow in vessels flowing into and out of a region of tissue.
  • the system may be useful in separating salient histological chromophores such as cell nuclei and the surrounding cytoplasm by leveraging their respective absorption spectra.
  • the systems may be used for unmixing targets using their absorption contents, scattering, phase, polarization or frequency contents by taking advantage of different wavelengths, different pulse widths, different coherence lengths, repetition rates, fluence, exposure time, etc.
  • Other examples of applications may include imaging of contrast agents in clinical or pre-clinical applications; identification of sentinel lymph nodes; non- or minimally-invasive identification of tumors in lymph nodes; non-destructive testing of materials; imaging of genetically-encoded reporters such as tyrosinase, chromoproteins, fluorescent proteins for pre-clinical or clinical molecular imaging applications; imaging actively or passively targeted optically absorbing nanoparticles for molecular imaging; and imaging of blood clots and potentially staging the age of the clots.
  • Other examples of applications may include clinical and pre-clinical ophthalmic applications; oxygen saturation measurement and retinal metabolic rate in diseases such as age related macular degeneration, diabetic retinopathy and glaucoma, limbal vasculature and stem cells imaging, corneal nerve and neovascularization imaging, evaluating Schlemm canal changes in glaucoma patients, choroidal neovascularization imaging, anterior and posterior segments blood flow imaging and blood flow state, wound healing, angiogenesis and tissue regeneration, hypersensitivity, infection, inflammation, autoimmunity, and scarring and fibrosis.
  • the system may be used for measurement and estimation of metabolism within a biological sample leveraging the capabilities of both PARS and OCT.
  • the OCT may be used to estimate volumetric blood flow within a region of interest
  • the PARS systems may be used to measure oxygen saturation within blood vessels of interest. The combination of these measurements then may provide estimation of metabolism within the region.
  • the system may be used for head and neck cancer types and skin cancer types, functional brain activities, inspecting stroke patients’ vasculature to help locate clots, monitoring changes in neuronal and brain function/development as a result of changing gut bacteria composition, atherosclerotic plaques, monitoring oxygen sufficiency following flap reconstruction, perfusion sufficiency following plastic or cosmetic surgery, and imaging cosmetic injectables.
  • the system may be used for topology tracking of surface deformations.
  • the OCT may be used to track the location of the sample surface. Then corrections may be applied to a tightly focused PARS device using mechanisms such as adaptive optics to maintain alignment to that surface as scanning proceeds.
  • the system may be implemented in various different form factors appropriate to these applications such as a tabletop microscope, inverted microscope, handheld microscope, surgical microscope, ophthalmic microscope, endoscope, etc.
  • aspects disclosed herein may be used with the following applications: imaging histological samples; imaging cell nuclei; imaging proteins; imaging DNA; imaging RNA; imaging lipids; imaging of blood oxygen saturation; imaging of tumor hypoxia; imaging of wound healing, burn diagnostics, or surgery; imaging of microcirculation; blood oxygenation parameter imaging; estimating blood flow in vessels flowing into and out of a region of tissue; imaging of molecularly-specific targets; imaging angiogenesis for pre-clinical tumor models; clinical imaging of micro- and macro-circulation and pigmented cells; imaging of the eye; augmenting or replacing fluorescein angiography; imaging dermatological lesions; imaging melanoma; imaging basal cell carcinoma; imaging hemangioma; imaging psoriasis; imaging eczema; imaging dermatitis; imaging Mohs surgery; imaging to verify tumor margin resections; imaging peripheral vascular disease; imaging diabetic and/or pressure ulcers; burn imaging; plastic surgery; microsurgery; imaging of circulating tumor cells;
  • Aspects disclosed herein may provide a computer-implemented method of visualizing features in a sample.
  • the method may include receiving one or more photon absorption remote sensing or system (PARS) signals, clustering the received one or more PARS signals using a clustering algorithm to determine features of the sample, and determining an image based on the clustered PARS signals.
  • the method may include determining a ratio of non-radiative signals to radiative signals, determining a value that is a function of non-radiative signals and radiative signals, and/or comparing non-radiative signals, radiative signals, and/or scattering signals, and determining the image, including colors, based on the determined ratio, value, and/or comparison.
  • the PARS signals may be collected by generating signals in the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample, including for example at or below a surface of the sample, interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample, and detecting a portion of the interrogation beam returning from the sample.
  • Generating signals may include generating pressure, temperature, and fluorescence (and/or other radiative and/or non-radiative signals).
  • the returned portion of the interrogation beam may be indicative of the generated pressure and temperature signals.
  • the PARS signals are further collected by detecting fluorescence signals from the excitation location of the sample while detecting the generated pressure and temperature signals.
  • the PARS signals may be further collected by redirecting a portion of the returned interrogation beam and detecting an interaction with the sample.
  • a wavelength of the excitation beam may be configured such that the sample absorbs two or more photons simultaneously, wherein a sum of energy of the two or more photons may be equal to a predetermined energy.
  • the method may include collecting the PARS signals.
  • Clustering the received PARS signals may be based on shape.
  • the method may not include analyzing a reconstructed grayscale image to determine the image.
  • Clustering the received PARS signals may not be based on a scalar amplitude.
  • the method may not include mapping or visualizing a scalar amplitude.
  • the PARS signals may be indicative of temperature characteristics of the sample.
  • the PARS signals may be indicative of a speed of sound in the sample.
  • the PARS signals may be indicative of molecular information.
  • the PARS signals may be indicative of characteristics in the sample in an area having a size defined by a focused beam of light.
  • Receiving the PARS signals may include receiving time domain (TD) signals.
  • the method may include determining cluster centroids based on the clustered PARS signals.
  • the determined cluster centroids may include characteristic time-domain signals.
  • Receiving the PARS signals may include receiving backscattering intensity, radiative signals, and non-radiative relaxation time-domain signals.
  • Receiving the PARS signals may include receiving radiative PARS signals and non-radiative PARS signals.
  • the method may further include determining a ratio of and/or value based on the radiative PARS signals and the non-radiative PARS signals.
  • the ratio and/or value may be plotted against quantum efficiency (QE) values.
  • the method may include determining an image and/or biomolecular information based on the ratio and/or value.
  • the method may include displaying the image on a display.
  • the system may include an excitation light source configured to generate signals in the sample at an excitation location, the excitation light source being focused at or below the sample, including at or below a surface of the sample, an interrogation light source configured to interrogate the sample and directed toward the excitation location of the sample, the interrogation light source being focused at or below the sample, a portion of the at least one interrogation light source returning from the sample that is indicative of the generated signals, and a processor configured to execute a clustering algorithm to cluster the generated signals and determine an image based on the clustered generated signals, the image being indicative of features in the sample.
  • the system may include a display configured to display the determined image. The image may be formed directly from the received signals.
  • the processor may be configured to determine one or more colors based on the clustering.
  • the determined colors may include purple, blue, and pink such that the image is configured to resemble a hematoxylin and eosin (H&E) stained image.
  • Systems and techniques disclosed herein may provide a computer-implemented method of visualizing features in a sample.
  • the method may include receiving one or more signals, clustering the received signals based on shape using a clustering algorithm to determine time-domain features of the sample, and determining an image, including one or more colors used in the image, based on the clustered signals and determined time-domain features.
  • the method may include determining vector angles from the received one or more signals. Clustering the received signals based on shape may include clustering the received signals based on the vector angles.
  • the one or more signals may include at least one of non-radiative signals or radiative signals.
  • the one or more signals may include at least one of non-radiative heat signals or non-radiative pressure signals.
  • the one or more signals may include radiative fluorescence signals.
  • the radiative fluorescence signals may be radiative autofluorescence signals.
  • the non-radiative and radiative signals may include pressure signals, temperature signals, ultrasound signals, autofluorescence signals, nonlinear scattering, and/or nonlinear fluorescence signals.
  • aspects disclosed herein may provide a computer-implemented method of visualizing features in a sample.
  • the method may include receiving signals, the signals including non-radiative and radiative signals from the sample, clustering the received one or more signals using a clustering algorithm to determine features of the sample, and determining an image based on the clustered signals.
  • the non-radiative signals may include heat signals and pressure signals.
  • the radiative signals may include fluorescence signals.
  • the entire set of non-radiative and radiative relaxation signals may be received, such as pressure signals, temperature signals, ultrasound signals, autofluorescence signals, nonlinear scattering, and nonlinear fluorescence.
  • At least some of the signals are collected by generating signals in the sample at an excitation location using an excitation beam, interrogating the sample with an interrogation beam directed toward the excitation location of the sample, and detecting a portion of the interrogation beam returning from the sample. At least some of the signals may be collected by detecting optical absorption and scattering from the sample. The optical absorption and scattering may occur from excitation and detection of the sample.
  • aspects disclosed herein may provide a method of visualizing features in a sample.
  • the method may include receiving one or more signals, clustering the received signals based on shape using a clustering algorithm to determine features of the sample, the shape being based on a vector, and determining an image, including one or more colors used in the image, based on the clustered signals and determined features. A minimal illustrative sketch of such shape-based clustering follows this list.
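The sketch below is a hypothetical illustration of shape-based clustering of time-domain PARS signals, assuming Python with NumPy and scikit-learn; the function names, cluster count, and color palette are illustrative assumptions rather than the disclosed implementation. Each trace is normalized to unit length so that cluster assignment depends on vector angle (signal shape) rather than scalar amplitude, and the cluster centroids act as characteristic time-domain signals.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pars_signals_by_shape(signals, n_clusters=3, seed=0):
    """Cluster time-domain PARS signals by shape (vector angle), not scalar amplitude.

    signals: array of shape (n_pixels, n_samples), one time-domain trace per pixel.
    Returns per-pixel cluster labels and the cluster centroids, which serve as
    characteristic time-domain signals.
    """
    # Normalize each trace to unit L2 norm so only its direction (shape) matters.
    norms = np.linalg.norm(signals, axis=1, keepdims=True)
    unit = signals / np.clip(norms, 1e-12, None)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(unit)
    return km.labels_, km.cluster_centers_

def colorize(labels, height, width):
    """Map cluster labels to an RGB image, e.g. H&E-like purple/pink/blue tones."""
    palette = np.array([[106, 27, 154],    # purple (e.g. a nuclei-like cluster)
                        [233, 30, 99],     # pink   (e.g. a cytoplasm-like cluster)
                        [33, 150, 243]],   # blue
                       dtype=np.uint8)
    return palette[labels % len(palette)].reshape(height, width, 3)
```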

Landscapes

  • Investigating Or Analysing Biological Materials (AREA)

Abstract

Systems and methods are provided for analyzing a sample. The method may include: receiving, from the sample, a plurality of signals including radiative and non-radiative signals; extracting a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and applying the plurality of features to a machine learning architecture to generate an inference regarding the sample. A method of training and a machine learning architecture for generating a stained image are also provided.

Description

MACHINE-LEARNING PROCESSING FOR PHOTON ABSORPTION REMOTE SENSING SIGNALS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of and priority to: United States provisional patent application no. 63/382906 filed November 9, 2022, United States provisional patent application no. 63/424647 filed November 11 , 2022, United States provisional patent application no. 63/443838 filed February 7, 2023, and United States provisional patent application no. 63/453371 filed March 20, 2023, the content of each of which is herein incorporated by reference in its respective entirety.
FIELD
[0002] This relates to the field of optical imaging and, in particular, to machine learning processing for a photon absorption remote sensing (PARS) system for analyzing samples, including biological tissues, in vivo, ex vivo, or in vitro.
SUMMARY
[0003] In accordance with one aspect, there is provided a computer-implemented method for analyzing a sample, the method may include: receiving, from the sample, a plurality of signals including optical absorption radiative and non-radiative relaxation signals; extracting a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and applying the plurality of features to a machine learning architecture to generate an inference regarding the sample.
[0004] In some embodiments, the radiative and non-radiative signals include radiative and non-radiative absorption relaxation signals.
[0005] In some embodiments, the non-radiative signals include at least one of: a photothermal signal and a photoacoustic signal.
[0006] In some embodiments, the radiative signals include one or more autofluorescence signals.
[0007] In some embodiments, the contrast may include one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
[0008] In some embodiments, processing the plurality of signals may include: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
[0009] In some embodiments, said extracting the plurality of features includes processing both radiative signals and non-radiative signals.
[0010] In some embodiments, the plurality of signals include absorption spectra signals.
[0011] In some embodiments, the plurality of signals include scattering signals.
[0012] In some embodiments, the sample is an in vivo or an in situ sample.
[0013] In some embodiments, the sample is not stained.
[0014] In some embodiments, the sample is stained.
[0015] In some embodiments, the plurality of features is supplemented with at least one feature informative of image data obtained from complementary modalities.
[0016] In some embodiments, the complementary modalities comprise at least one of: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI).
[0017] In some embodiments, image data obtained from the complementary modalities may include photoactive labels for contrasting or highlighting specific regions in the images.
[0018] In some embodiments, the plurality of features is supplemented with at least one feature informative of patient information.
[0019] In some embodiments, said processing includes converting the at least one of the plurality of signals to at least one image.
[0020] In some embodiments, said converting to said at least one image includes applying a simulated stain.
[0021] In some embodiments, the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones' Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, Verhoeff Stain, Immunohistochemistry (IHC), histochemical stain, and In-Situ Hybridization (ISH).
[0022] In some embodiments, the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue.
[0023] For example, a preserved tissue sample may include a sample preserved using formalin or fixed using alcohol fixatives.
[0024] In some embodiments, said converting to said at least one image includes converting to at least two images, and applying a different simulated stain to each of the images.
[0025] In some embodiments, said converting includes applying a colorization machine learning architecture.
[0026] In some embodiments, the colorization machine learning architecture includes a Generative Adversarial Network (GAN).
[0027] In some embodiments, the colorization machine learning architecture includes a cycle-consistent generative adversarial network (CycleGAN).
[0028] In some embodiments, the colorization machine learning architecture includes a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
[0029] In some embodiments, the inference comprises at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
[0030] In some embodiments, the method may further include generating signals for causing to render, at a display device, a user interface (UI) showing a visualization of the inference.
[0031] In accordance with another aspect, there is provided a computer system for analyzing a sample, the system comprising: a processor operating in conjunction with computer memory and non-transitory computer-readable storage, the processor configured to: receive, from the sample, a plurality of signals including radiative and non-radiative signals; extract a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and apply the plurality of features to a machine learning architecture to generate an inference regarding the sample.
[0032] In some embodiments, the radiative and non-radiative signals include radiative and non-radiative absorption relaxation signals.
[0033] In some embodiments, the non-radiative signals include at least one of: a photothermal signal and a photoacoustic signal.
[0034] In some embodiments, the radiative signals include one or more autofluorescence signals.
[0035] In some embodiments, the contrast may include one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
[0036] In accordance with yet another aspect, there is provided a computer system for training a machine learning architecture, the system comprising: a processor operating in conjunction with computer memory and non-transitory computer-readable storage, the processor configured to, in each training iteration: instantiate a machine learning architecture including a neural network having a plurality of nodes and weights stored on a memory device; obtain a true total absorption (TA) image; generate a simulated stained image based on the true TA image; generate a fake TA image based on the generated stained image; compute a first loss based on the generated fake TA image and the true TA image; obtain a labelled and stained image; compute a second loss based on the generated simulated stained image and the labelled and stained image; and update weights of the neural network based on at least one of the first and second losses.
[0037] In accordance with still another aspect, there is provided a computer-implemented method for training a machine learning architecture for generating a simulated stained image, the machine learning architecture including a plurality of nodes and weights stored on a memory device, the method comprising, in each training iteration: obtaining a true total absorption (TA) image; generating a simulated stained image based on the true TA image; generating a fake TA image based on the generated stained image; computing a first loss based on the generated fake TA image and the true TA image; obtaining a labelled and stained image; computing a second loss based on the generated simulated stained image and the labelled and stained image; and updating weights of the neural network based on at least one of the first and second losses.
[0038] In some embodiments, the simulated stained image is generated by a second neural network comprising a second set of nodes and weights, the second set of weights being updated based on at least one of the first and second losses during each iteration.
[0039] In some embodiments, the fake TA image is generated by a third neural network comprising a third set of nodes and weights, the third set of weights being updated based on at least one of the first and second losses during each iteration.
[0040] In some embodiments, computing the second loss based on the generated simulated stained image and the labelled and stained image may include steps of: processing the generated simulated stained image by a first discriminator network; processing the labelled and stained image by a second discriminator network; and computing the second loss based on a respective output from each of the first and second discriminator networks.
[0041] In some embodiments, the method may further include processing the respective output from each of the first and second discriminator networks through a respective classification matrix prior to computing the second loss.
[0042] In some embodiments, the machine learning architecture comprises a cycle-consistent generative adversarial network (CycleGAN) machine learning architecture.
[0043] In some embodiments, the machine learning architecture comprises a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
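As one illustration of the training iteration described above, the sketch below assumes a CycleGAN-style setup written in Python with PyTorch; the tiny stand-in networks and all names (gen_stain, gen_ta, disc_stain) are hypothetical placeholders rather than the disclosed architecture. The first loss is a cycle-consistency term between the fake TA image and the true TA image, and the second loss is an adversarial term computed from discriminator outputs on the simulated stained image and on a labelled/stained reference image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(in_ch, out_ch):
    # Stand-in network; a real system might use a U-Net generator and a PatchGAN discriminator.
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

gen_stain  = tiny_cnn(1, 3)                                          # true TA image -> simulated stain
gen_ta     = tiny_cnn(3, 1)                                          # stained image -> fake TA image
disc_stain = nn.Sequential(tiny_cnn(3, 1), nn.AdaptiveAvgPool2d(1))  # real/fake score for stained images

opt_g = torch.optim.Adam(list(gen_stain.parameters()) + list(gen_ta.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc_stain.parameters(), lr=2e-4)

def training_iteration(true_ta, labelled_stain, cycle_weight=10.0):
    """One iteration: cycle loss (fake TA vs. true TA) plus an adversarial loss."""
    # Generator update.
    sim_stain = gen_stain(true_ta)               # simulated stained image from the true TA image
    fake_ta   = gen_ta(sim_stain)                # fake TA image reconstructed from the simulated stain
    loss_cycle = F.l1_loss(fake_ta, true_ta)     # first loss: cycle consistency
    score_sim = disc_stain(sim_stain)
    loss_g_adv = F.mse_loss(score_sim, torch.ones_like(score_sim))   # generator tries to look "real"
    loss_g = loss_g_adv + cycle_weight * loss_cycle
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Discriminator update: second loss from the simulated and labelled/stained images.
    score_sim  = disc_stain(sim_stain.detach())
    score_real = disc_stain(labelled_stain)
    loss_d = 0.5 * (F.mse_loss(score_real, torch.ones_like(score_real)) +
                    F.mse_loss(score_sim, torch.zeros_like(score_sim)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return float(loss_cycle), float(loss_d)

# Example call with random tensors standing in for a TA image and a stained reference:
# training_iteration(torch.rand(1, 1, 64, 64), torch.rand(1, 3, 64, 64))
```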
[0044] In some embodiments, the labelled and stained image is a labelled PARS image.
[0045] In some embodiments, the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image.
[0046] In some embodiments, automatically labelling the unlabeled PARS image comprises labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
[0047] In some embodiments, the database is an H&E database.
[0048] A portion of the interrogation, signal enhancement, excitation, or autofluorescence light from the sample may be collected to form images. These signals may be used to unmix the size, shape, features, dimensions, nature, and composition of the sample. In a given architecture, any portion of the light returning from the sample, such as the detection, excitation, or thermal enhancement beams, may be collected. The returning light may be analyzed based on wavelength, phase, polarization, etc. to capture any absorption-induced signals including pressure, temperature, and optical emissions. In this way, the PARS may simultaneously capture, for example, scattering, autofluorescence, and polarization contrast attributed to each detection, excitation, and thermal enhancement source. Moreover, the PARS laser sources may be specifically chosen to highlight these different contrast mechanisms.
[0049] Other aspects will be apparent from the description and claims below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0050] FIG. 1 shows an overview of a PARS system.
[0051] FIG. 2 shows an overview of a PARS system with PARS excitation and PARS detection.
[0052] FIG. 3 shows an implementation of PARS being combined with other modalities.
[0053] FIG. 4 shows a signal processing pathway of PARS signals.
[0054] FIG. 5 shows exemplary architecture for total absorption (TA) PARS, where an autofluorescence detection system is used as an example.
[0055] FIG. 6 shows a visualization produced by the autofluorescence sensitive total absorption PARS (TA-PARS) architecture.
[0056] FIG. 7 shows an exemplary signal evolution of a TA-PARS signal.
[0057] FIG. 8 shows an example of radiative and non-radiative signals.
[0058] FIG. 9 shows exemplary architecture using two excitation sources, one detection source, and a plurality of photodiodes.
[0059] FIG. 10 shows a comparison of non-radiative absorption (view (a)), radiative absorption (view (b)), and scattering (view (c)) provided by a TA-PARS system.
[0060] FIG. 11 shows examples of TA-PARS imaging.
[0061] FIG. 12 shows exemplary applications of a quantum efficiency ratio (QER).
[0062] FIG. 13 shows examples of TA-PARS imaging using a QER acquisition process.
[0063] FIG. 14 shows comparisons of imaging using a QER acquisition process with traditional stains.
[0064] FIG. 15 shows an exemplary PARS signal evolution.
[0065] FIG. 16 shows an example of a lifetime PARS image in resected rattus brain tissues.
[0066] FIG. 17 shows an exemplary PARS signal evolution in connection with a rapid lifetime extraction technique.
[0067] FIG. 18 shows exemplary architecture for a multi-pass (MP) PARS system.
[0068] FIG. 19 compares Multi-Photon PARS with normal PARS.
[0069] FIGs. 20A and 20B show a reconstructed grayscale PARS image and a corresponding stain.
[0070] FIGs. 21A and 21B show principal components of a time-domain TD-PARS signal and a synthesized stain based on the principal components.
[0071] FIG. 22 shows exemplary architecture to analyze TD-PARS signals.
[0072] FIG. 23 shows a graph of TD-PARS signals and centroids.
[0073] FIG. 24 shows a visualization using a clustering method.
[0074] FIG. 25 shows a visualization of three different regions of brain tissues using the clustering method.
[0075] FIG. 26 shows an exemplary clustering algorithm to analyze the TD-PARS signals and determine an image.
[0076] FIG. 27 shows a method of determining an image using the clustering algorithm.
[0077] FIG. 28 exemplifies non-radiative signal extraction.
[0078] FIG. 29 exemplifies various filtered instances of a PARS signal.
[0079] FIG. 30 exemplifies expected spatial correlation between adjacent points or signals.
[0080] FIG. 31 exemplifies two signals with different lifetimes in connection with functional extraction.
[0081] FIG. 32 shows a comparison of an original image and a denoised image.
[0082] FIG. 33 shows a chirped-pulse signal and acquisition.
[0083] FIG. 34 shows an exemplary TD-PARS acquisition by imposing a delay to reconstruct a signal.
[0084] FIG. 35 shows data compression using digital and/or analog techniques.
[0085] FIG. 36 shows an exemplary fast acquisition approach.
[0086] FIG. 37 shows a direct construction of a colorized image.
[0087] FIGs. 38A and 38B show two example architectures for generating one or more inferences regarding a sample.
[0088] FIG. 39 shows another example architecture for generating one or more inferences regarding a sample.
[0089] FIG. 40 shows an example user interface rendering one or more inferences generated by the architecture in FIG. 38A, 38B or 39.
[0090] FIG. 41 shows an example machine learning architecture that may be used to implement an image generator.
[0091] FIG. 42 shows an example process for preparing one or more training data for training the image generator.
[0092] FIG. 43 shows an example neural network that may be used to implement the image generator.
[0093] FIG. 44 shows examples of contrasts extracted from PARS signals in tissue slides.
[0094] FIG. 45 shows examples of combinations of contrasts from the combination of PARS signals into unique contrasts.
[0095] FIG. 46 shows two virtually (simulated) stained PARS images.
[0096] FIG. 47A shows an example of an unlabeled PARS virtual H&E image.
[0097] FIG. 47B shows a historical labelled H&E image correlated with the image in FIG. 47A.
[0098] FIG. 48 shows examples of different tissue types imaged and identified using the machine learning architectures.
[0099] FIG. 49 shows unique keratin pearl features identified and isolated within an example simulated stained image.
[00100] FIG. 50 shows biomarkers of localized inflammation and malignancy, identified and encircled based on an example simulated stained image.
[00101] FIG. 51 shows different cell types and tissue regions, identified and delineated within an example simulated stained image.
[00102] FIG. 52 shows example of an abnormal tissue region, identified and delineated from an example simulated stained image.
[00103] FIG. 53 is a schematic diagram of a computing device which may be used to train or execute (at inference time) a machine learning model.
[00104] FIG. 54 shows a process performed by a processor of an example embodiment of machine learning system or architecture in FIGs. 38A, 38B or 39.
[00105] FIG. 55 shows an example heat map generated by an example embodiment of machine learning system or architecture in FIGs. 38A, 38B or 39.
[00106] FIG. 56 shows an example multi-stain image generated by an example embodiment of machine learning system or architecture in FIGs. 38A, 38B or 39.
[00107] FIG. 57A shows an example embodiment of image generator connected to a PARS system. The image generator may be part of machine learning system or architecture in FIGs. 38A, 38B or 39.
[00108] FIG. 57B shows another example embodiment of image generator connected to a PARS system.
[00109] FIG. 58 shows yet another example embodiment of image generator connected to a preprocessing module.
[00110] FIG. 59 shows an example user interface for analyzing one or more images generated by the architecture in FIG. 38A, 38B or 39.
[00111] FIG. 60 shows an example user interface for displaying one or more images generated by the architecture in FIG. 38A, 38B or 39.
[00112] FIG. 61 shows another example user interface for displaying one or more images generated by the architecture in FIG. 38A, 38B or 39.
[00113] FIG. 62 shows an example user interface for scanning and processing one or more images.
[00114] FIG. 63 shows an example user interface for displaying an annotated image.
[00115] FIG. 64 to FIG. 79 illustrate various schematic diagrams of example embodiments of machine learning architectures or processes for generating one or more inferences based on output from a PARS system.
[00116] FIG. 80 shows an example of raw PARS data in PARS TA-PARS images denoised using a Noise2Void (N2V) framework.
[00117] FIG. 81 shows an example implementation of an error correction submodule for denoising of PARS images.
[00118] FIGs. 82A and 82B show example visualization of data preparation process and virtual staining process of images.
[00119] FIG. 83 shows example denoising results with both a denoising process and an error-correction process applied to raw PARS image data.
[00120] FIG. 84 shows example PARS non-radiative time domain features extracted from PARS events.
[00121] FIG. 85 shows an example multi-channel virtual staining architecture for signal processing and virtual staining of PARS image data.
[00122] FIG. 86 shows a comparison of virtual staining results using different combinations of PARS feature images as inputs.
[00123] FIG. 87 shows an example PARS data vector or feature vector.
[00124] FIG. 88 shows example PARS virtual multi-staining images based on the same PARS image data.
DETAILED DESCRIPTION
[00125] Reference will now be made in detail to examples of the present disclosure, which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. In the discussion that follows, relative terms such as “about,” “substantially,” “approximately,” etc. are used to indicate a possible variation in a stated numeric value.
[00126] A recently reported photoacoustic technology known as photoacoustic remote sensing (PARS) microscopy (US 2016/0113507, and US 2017/0215738) has solved many of these sensitivity issues through a novel detection mechanism. Rather than detecting acoustic pressures at an outer surface once they have propagated away from their source, PARS enables direct detection of excited photoacoustic regions. This is accomplished by monitoring changes in material optical properties that coincide with the photoacoustic excitation. These changes then encode various salient material properties such as the optical absorption, physical target dimensions, and constituent chromophores to name a few.
[00127] Since PARS devices may utilize only two optical beams which may be in a confocal arrangement, spatial resolution of the imaging technique may be defined as excitation-defined (ED) or interrogation-defined (ID) depending on which of the beams provide a tighter focus at the sample. This aspect also may facilitate imaging deeper targets, beyond the limits of optical resolution devices. This may be accomplished by leveraging a deeply-penetrating (long transport mean-free-path) detection wavelength such as a short-wave infrared (like 1310 nm, 1700 nm or 10um) which may provide spatial resolution to a depth superior to that provided by a given excitation (such as 532 nm or 266 nm) within highly scattering media such as biological tissues. If more than two beams are used such that a system consists of more than two foci at the sample, then obvious extensions of these components would be expected. For example, if an additional beam which amplifies the signal within its focal region is added, it may also contribute towards defining the expected resolution of the system.
[00128] Intensity-modulated PARS signals depend not only on optical absorption and incident excitation fluence, but also on the detection laser wavelength, fluence, and the temperature of the sample. PARS signals may also arise from other effects such as scatterer position modulation and surface oscillations. A similar analog may exist for PARS devices which take advantage of other modulating optical properties such as intensity, polarization, frequency, phase, fluorescence, non-linear scattering, non-linear absorption, etc. As material properties are dependent on ambient temperature, there is a corresponding temperature dependence in the PARS signal. At some intensity levels, additional saturation effects may also be leveraged.
[00129] The above mechanisms point to significant sources of scattering position or scattering cross-section modulation that could be readily measurable when the probe beam is focused to sense the confined excitation volume. However, these large local signals are not the only potential source of PARS signal. Acoustic signals propagating to the surface of the sample could also result in changes in PARS signal. These acoustic signals can generate surface oscillation as well which result in phase modulation of the PARS signals.
[00130] These generated signals may be intentionally controlled or affected by secondary physical effects such as vibration, temperature, stress, surface roughness, and mechanical bending, among others. For example, temperature may be introduced to the sample, which may augment the generated PARS signals as compared to those which would be generated without having introduced this additional temperature. Another example may involve introducing mechanical stress to the sample (such as bending), which may in turn affect the material properties of the sample (e.g., density or local optical properties such as birefringence, refractive index, absorption coefficient, scattering behavior) and thereby perturb the generated PARS signals as compared to those which would have been generated without having introduced this mechanical stress. Additional contrast agents may be added to the sample to boost the generated PARS signals; these include but are not limited to dyes, proteins, specially designed cells, liquids, and optical agents or windows. The target may be altered optically to provide optimized results.
[00131] Some techniques may simply monitor intensity back reflection and may extract the amplitude of these time-domain signals. However, additional information may be extracted from the time-varying aspects of the signals. For example, some of the scattering, polarization, frequency, and phase content within a PARS signal may be attributed to the size, shape, features, and dimensions of the region which generated that signal. This may encode unique/orthogonal additional information with utility towards improving final image fidelity, classifying sample regions, sizing constituent chromophores, and classifying constituent chromophores, to name a few. As such techniques may generate independent datasets for the same interrogated region, they may be combined or compared with each other. For example, frequency information may describe the microscopic structures within the sample; this may be combined with conventional PARS, which uses scattering modulation to highlight regions which are both absorbing and of a specific size.
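As a simple, hypothetical illustration of extracting frequency-content information from the same time-domain record, the sketch below computes the fraction of a record's energy within a frequency band; the band edges and the function name are arbitrary assumptions for illustration, not values from this disclosure.

```python
import numpy as np

def frequency_band_fraction(trace, fs, band=(50e6, 250e6)):
    """Fraction of a time-domain PARS record's energy that falls inside a frequency band.

    trace: 1-D time-domain signal; fs: sampling rate in Hz; band: (low, high) edges in Hz.
    Such a band-limited feature could be combined with the conventional amplitude contrast.
    """
    spectrum = np.abs(np.fft.rfft(trace - np.mean(trace))) ** 2
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0
```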
[00132] Referring to FIG. 1 , Photon Absorption remote sensing (PARS) microscopy is an all-optical non-contact optical absorption microscopy technique. PARS may use a co-focused excitation and detection laser pair to generate and detect optical absorption contrast in a variety of specimens. In PARS, the excitation laser may include a pulsed excitation laser, which may be used to deposit optical energy into a sample. When the light is absorbed by a chromophore, the photon energy is captured by the specimen. The absorbed energy may then be dissipated through either optical radiation (radiative) or non-radiative relaxation. During non-radiative relaxation, absorbed optical energy is converted into heat. In certain cases, the generation of heat may cause thermoelastic expansion resulting in photoacoustic pressures and photothermal signals. During radiative relaxation, absorbed optical energy is released through the emission of photons. Generally, emitted photons exhibit a different energy level compared to the absorbed photons.
[00133] Changes in the local temperature and pressure result in nanosecond-scale perturbations in a sample's optical and material properties. The detection laser, co-focused with the excitation spot, may capture the absorption-induced perturbations in the optical properties as scattering intensity modulations. By measuring the perturbations in the detection laser scattering, PARS can then measure the non-radiative absorption contrast of different biomolecules. Concurrently, by capturing the unperturbed back reflection of the detection and the back-reflected excitation energy, the PARS may capture the optical scattering contrast attributed to the detection and excitation sources, respectively.
[00134] Throughout this disclosure, an excitation pulse generated by a pulsed excitation laser may be described to be at a particular scale. It is to be appreciated that whenever an excitation pulse is said to be generated at nanosecond, it may be similarly generated at microsecond or picosecond scale. For example, a picosecond scale pulsed excitation laser may elicit radiative and non-radiative (thermal and pressure) perturbations in a sample.
[00135] Fig. 1 shows a high-level diagram of a photon absorption remote sensing (PARS) system. This consists of a PARS system (101), an optical combiner (102), and an imaging head (104). The PARS system may further include other systems (e.g., signal enhancement system), and the optical combiner may combine the beams from the PARS system (101) and these other systems.
[00136] Fig. 2 shows a high-level diagram with the PARS Excitation (202), PARS Detection (204) and Optical Combiner (203) delineated. These could be combined with other systems (e.g., signal enhancement system) and Imaging Head (205).
[00137] Fig. 3 shows a high-level embodiment of a PARS system combined with other modalities (305). This consists of a PARS system (301), optical combiner (302), and an imaging head (304). These can be combined with a variety of other modalities (305) such as bright-field microscopy, scanning laser ophthalmoscopy, ultrasound imaging, stimulated Raman microscopy, fluorescence microscopy, two-photon and confocal fluorescence microscopy, coherent anti-Stokes Raman scattering (CARS) microscopy, Raman microscopy, other PARS, photoacoustic and ultrasound systems, among others.
[00138] Fig. 4 shows a signal processing pathway. This consists of an optical detector (401), a signal processing unit (402), a digitizer (403), a digital signal processing unit (404) and a signal extraction unit (405).
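Purely as an illustrative sketch of the digital portion of such a pathway, the snippet below conditions one digitized PARS record and extracts a few scalar features; the filtering choices and extracted features are assumptions for illustration, not the disclosed processing.

```python
import numpy as np

def process_digitized_record(raw_trace, fs):
    """Toy digital-signal-processing and signal-extraction step for one digitized PARS record."""
    trace = raw_trace - np.median(raw_trace)                          # remove the DC offset
    smoothed = np.convolve(trace, np.ones(5) / 5.0, mode="same")      # simple moving-average filter
    peak_index = int(np.argmax(np.abs(smoothed)))
    return {
        "amplitude": float(smoothed.max() - smoothed.min()),
        "peak_time_s": peak_index / fs,
        "energy": float(np.sum(smoothed ** 2) / fs),
    }
```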
TA-PARS
[00139] When a sample absorbs light, there is a limited number of interactions which may happen. The absorbed energy is converted to temperature and pressure, or to light of a different wavelength. While the temperature and pressure signals are captured by a PARS detection beam, the light emissions may be detected by a total absorption (TA) PARS system, which may be sensitive to radiative relaxation. In this way, all or nearly all absorption of light by the tissues (whether in the form of non-radiative signals like generated pressure and temperature, or radiative relaxation such as fluorescence, multiphoton fluorescence, or stimulated Raman scattering), and/or scattering signals such as local scattering signals, may be captured by the PARS.
[00140] Fig. 5 shows exemplary architecture for a radiative relaxation sensitive PARS. As an example, the radiative relaxation may be fluorescent or autofluorescent, but aspects disclosed herein are not limited. For example, the radiative relaxation may include Raman scattering, fluorescence, autofluorescence, multiphoton fluorescence, etc. For convenience of description, an autofluorescence sensitive TA-PARS system will be described as an example with reference to FIG. 5. A multi-wavelength fiber excitation laser (5812) is used to generate PARS signals. An excitation beam (5817) passes through a multi-wavelength unit (5840) and a lens system (5842) to adjust its focus on the sample (5818). The optical subsystem used to adjust the focus may be constructed by components known to those skilled in the art including but not limited to beam expanders, adjustable beam expanders, adjustable collimators, adjustable reflective expanders, telescope systems, etc.
[00141] The signal signatures are interrogated using either a short or long-coherence length probe beam (5816) from a detection laser (5814) that is co-focused and co-aligned with the excitation spots on the sample (5818). The interrogation/probe beam (5816) passes through a lens system (5843), polarizing beam splitter (5844) and quarter wave plate (5856) to guide the reflected light (5820) from the sample (5818) to the photodiode (5846). However, this architecture is not limited to including a polarizing beam splitter (5844) and quarter wave plate (5856). The aforementioned components may be substituted for fiber-based, equivalent components, e.g., a circulator, coupler, Faraday rotator, electro-optic modulator, WDM, and/or double-clad fiber, that are non-reciprocal elements. Such elements may receive light from a first path, but then redirect said light to a second path.
[00142] The interrogation beam (5816) is combined with the excitation beam using a beam combiner (5830). The combined beam (5821) is scanned by a scanning unit (5819). This passes through an objective lens (5855) and is focused onto the sample (5818).
[00143] The reflected beam (5820) returns along the same path. The reflected beam is filtered with a beam combiner/splitter (5831) to separate the detection beam (5816) from any autofluorescence light returned from the sample. The autofluorescence light (5890) passes through a lens system (5845) to adjust its focus onto the autofluorescence sensitive photodetector (5891). The isolated detection beam (5820) is transmitted through the beam splitter (5831) towards the signal collection/analysis pathway. Here the returned detection light is redirected by the polarized beam splitter (5844). The detection pathway consists of a photodiode (5846), amplifier (5858), fast data acquisition card (5850) and computer (5852). The autofluorescence sensitive photodetector may be any such device including a camera, photodiode, photodiode array etc. The autofluorescence detection pathway may include more beam splitters and photodetectors to further isolate and detect specific wavelengths of light.
[00144] Fig. 6 shows exemplary visualizations which may potentially be provided by autofluorescence sensitive TA-PARS. Any portion of the light returning from the sample, excluding the detection beam, may be collected and analyzed based on wavelength. By isolating specific wavelengths of light emissions from the sample, specific molecules of interest can be visualized. For example, the autofluorescence sensitive PARS may be applied to imaging tissues. Here, the PARS excitation is selected to capture the absorption contrast of nuclei. In this case, UV excitation is used to generate pressure and temperature signals attributed to nuclei in tissues. Concurrently, the autofluorescence contrast generated by the PARS excitation is captured. In this case, the non-nuclear regions of the tissues are highly fluorescent. In this way, visualizations of nuclear and non-nuclear structures in tissues are provided simultaneously. Moreover, the resulting visualizations may require only a single (or only one or exactly one) excitation wavelength to capture. As previously described, this method may be used with other radiative relaxation sensitive PARS, and radiative relaxation other than autofluorescence may be generated and captured.
[00145] For example, the PARS radiative signal could be implemented into a PARS absorption spectrometer to accurately measure all absorption of light by a sample. Moreover, the radiative relaxation (e.g., autofluorescence in FIG. 5) sensitive PARS can be used to measure the proportion of absorbed energy which is converted to heat and pressure or light respectively. This may enable sensitive quantum efficiency measurements in a broad range of biological and non-biological samples.
[00146] The TA-PARS signal may also be collected on a single (only one or exactly one) detector as highlighted in FIG. 7. Given that the salient components of the TA-PARS signal may appear distinct from each other, a single detector may appropriately characterize these components. For example, the initial signal level (Scattering) may be indicative of the unperturbed intensity reflectivity of the detection beam from the sample at the interrogation location encoding the scatter intensity. Then, following excitation by the excitation pulse (at 100 ns in FIG. 7), PARS excitation signals related to non-radiative relaxation (e.g., thermal, temperature), and radiative relaxation (e.g., fluorescence or autofluorescence) may be observed as unique overlapping signals (labeled PA and AF in the diagram).
[00147] If these excited signals are measurably unique (e.g., in amplitude or magnitude and/or evolution time) from each other, they may be decomposed from the combined signal to extract these magnitudes along with their characteristic lifetimes. This wealth of information may be useful in improving available contrast, providing additional multiplexing capabilities, and providing characteristic molecular signatures of constituent chromophores. In addition, such an approach may provide pragmatic benefits in that only a single detector and a single (only one or exactly one) detection path may be required, drastically reducing physical hardware complexity and cost. Capturing signals over time is discussed in more detail in the section covering TD-PARS.
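One hypothetical way to perform such a decomposition is to fit each single-detector record to a scattering baseline plus two decaying components with distinct lifetimes (a non-radiative term and a radiative term). The model form, the 100 ns excitation time taken from FIG. 7, and the initial guesses below are illustrative assumptions only, not the disclosed extraction method.

```python
import numpy as np
from scipy.optimize import curve_fit

def ta_pars_model(t, baseline, a_nr, tau_nr, a_r, tau_r, t0=100e-9):
    """Scattering baseline plus two exponentially decaying components launched at t0."""
    after = (t >= t0).astype(float)
    decay_nr = a_nr * np.exp(-np.clip(t - t0, 0, None) / tau_nr) * after  # non-radiative (PA) term
    decay_r  = a_r  * np.exp(-np.clip(t - t0, 0, None) / tau_r)  * after  # radiative (AF) term
    return baseline + decay_nr + decay_r

def decompose_trace(t, trace):
    """Estimate the scattering baseline, component magnitudes, and characteristic lifetimes."""
    t = np.asarray(t, dtype=float)
    trace = np.asarray(trace, dtype=float)
    pre = trace[t < 100e-9]                       # pre-excitation samples approximate the baseline
    baseline_guess = float(np.median(pre)) if pre.size else float(np.median(trace))
    p0 = [baseline_guess,
          0.5, 50e-9,                             # guesses: non-radiative amplitude and lifetime
          0.5, 5e-9]                              # guesses: radiative amplitude and lifetime
    popt, _ = curve_fit(ta_pars_model, t, trace, p0=p0, maxfev=20000)
    baseline, a_nr, tau_nr, a_r, tau_r = popt
    return {"scattering": baseline,
            "non_radiative": {"amplitude": a_nr, "lifetime_s": tau_nr},
            "radiative": {"amplitude": a_r, "lifetime_s": tau_r}}
```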
[00148] Referring to FIG. 8, any given PARS excitation event always generates some fraction of radiative and non-radiative relaxation. TA-PARS facilitates the capture of a chromophore's total-absorption profile. The thermal and pressure perturbations may generate corresponding modulations in the local optical properties. The TA-PARS microscope may capture a chromophore's scattering and total-absorption (radiative and non-radiative relaxation) visualizations in a single (only one or exactly one) excitation event. The non-radiative relaxation leads to heat- and pressure-induced modulations, which in turn cause back-reflected intensity variations in the detection beam. PARS signals are denoted as some change in reflectivity multiplied by the incident detection (ΔRIdet). The radiative absorption pathway captures optical emissions attributed to radiative relaxation such as stimulated Raman scattering, fluorescence, multiphoton fluorescence, etc. Emissions are denoted as some wavelength and energy of optical emission (hνem). The local scattering contrast is captured as the unmodulated backscatter (pre-excitation pulse) of the detection beam. The scattering contrast is denoted as the unperturbed scattering profile multiplied by the incident detection power (σsIdet).
[00149] In TA-PARS, the non-radiative relaxation-induced modulations are detected at the excited location by the probe beam. The PARS may then visualize any photothermal heat or photoacoustic pressures which cause modulation in the local optical properties. Concurrently, the TA-PARS leverages an additional detection pathway to capture non-specific optical emissions from the sample (excluding the excitation and detection), regardless of properties such as wavelength, frequency, or polarization. These emissions may then be attributed to any radiative relaxation effects such as stimulated Raman scattering, fluorescence, and multiphoton fluorescence.
[00150] Using this detection pathway may provide enhanced sensitivity to any range of chromophores. Unlike traditional modalities which independently capture some of the radiative or non-radiative absorption, in TA-PARS, the contrast may not be bound by efficiency factors such as the photothermal conversion efficiency or fluorescence quantum yield. By capturing the non-radiative and radiative absorption contrast in addition to the scattering of the excitation and detection, the TA-PARS may capture all or nearly all the optical properties of a chromophore such as the absorption coefficient, scattering coefficient, quantum efficiency, non-linear interaction coefficients, providing simultaneous sensitivity to most chromophores.
Quantum Efficiency Ratio (QER) and Label-Free H&E Visualizations
[00151] Capturing both radiative and non-radiative absorption fractions may also yield additional information. TA-PARS may yield an absorption metric proposed as the quantum efficiency ratio (QER), which visualizes a biomolecule's proportional radiative and non-radiative absorption response. The TA-PARS may provide label-free visualization of a range of biomolecules, enabling convincing analogues to traditional histochemical staining of tissues, effectively providing label-free Hematoxylin and Eosin (H&E)-like visualizations.

[00152] QER may be defined as a ratio of radiative PARS signals (Pr) to non-radiative PARS signals (Pnr), such as QER = Pr/Pnr, or alternatively QER = (Pr - Pnr)/(Pr + Pnr). This ratio will be specific to a given chromophore. For example, a biomolecule like collagen will exhibit high radiative contrast and low non-radiative contrast, providing a high QER. Conversely, DNA will exhibit low radiative contrast and high non-radiative contrast, providing a low QER. Calculating the QER in addition to the radiative and non-radiative absorption may allow for properties such as the chromophore composition, density, and quantity to be extracted in a single (only one or exactly one) event. This may also allow for single-shot functional imaging.
[00153] For example, a picosecond scale pulsed excitation laser may elicit radiative and nonradiative (thermal and pressure) perturbations in a sample. The thermal and pressure perturbations generate corresponding modulations in the local optical properties. A secondary probe beam co-focused with the excitation may capture the non-radiative absorption induced modulations to the local optical properties as changes in backscattering intensity.
[00154] These backscatter modulations may be directly correlated to the local non-radiative absorption contrast. By the nature of the probe architecture, the unperturbed backscatter (pre-excitation event) also captures the scattering contrast as seen by the probe beam. Unlike traditional photoacoustic methods, rather than relying on the pressure waves to propagate through the sample before detection via acoustic transducer, the TA-PARS probe may instantaneously detect the induced modulations at the excited location. Therefore, TA-PARS offers non-contact operation, facilitating imaging of delicate and sensitive samples which would otherwise be impractical to image with traditional contact-based PAM methods.
[00155] Since TA-PARS may rely only on the generation of heat and subsequently pressure to provide contrast, the absorption mechanism is non-specific and highly sensitive to small changes in relative absorption. This allows any variety of absorption mechanisms, such as vibrational absorption, stimulated Raman absorption, and electronic absorption, to be detected with PARS. Previously, PARS has demonstrated label-free non-radiative absorption contrast of hemoglobin, DNA, RNA, lipids, and cytochromes in specimens such as chicken embryo models, resected tissue specimens, and live murine models. In TA-PARS, a unique secondary detection pathway captures radiative relaxation contrast in addition to the non-radiative absorption. The radiative absorption pathway was designed to broadly collect all optical emissions at any wavelength of light, excluding the excitation and detection. As a result, the radiative detection pathway captures non-specific optical emissions from the sample regardless of properties such as wavelength, frequency, or polarization.
[00156] Referring to FIG. 9, to improve the sensitivity of the TA-PARS and facilitate the detection of radiative absorption contrast, a TA-PARS 900 may include excitation at first and second excitation wavelengths that are different from each other (e.g., 266 nm and 515 nm excitation), providing sensitivity to DNA, heme proteins, NADPH, collagen, elastin, amino acids, and a variety of fluorescent dyes. The TA-PARS may include a specific optical pathway with dichroic filters and an avalanche photodiode to isolate and detect the radiative absorption contrast. As exemplified in FIG. 9, the TA-PARS system may include excitation at the first excitation wavelength (e.g., visible light such as 515 nm visible excitation) from a first excitation source 920 and excitation at the second excitation wavelength (e.g., UV light such as 266 nm UV excitation) from a second excitation source 940. The first excitation source 920 may include a first excitation laser 902, such as a 50 kHz to 2.7 MHz 2 ps pulsed 1030 nm fiber laser (e.g., YLPP-1-150-V-30, IPG Photonics), but aspects disclosed herein are not limited. The second harmonic may be generated with a lithium triborate crystal or LBO 922. The 515 nm second harmonic may be separated via a dichroic mirror 906, then spatially filtered with a pinhole 908 prior to use in the imaging system. The first excitation source 920 may include one or more lenses or plates, such as a half-wave plate or HWP 924 provided between the LBO 922 and the first excitation laser 902, a filtering lens, and/or a lens assembly 928. The pinhole 908 may be provided between, as an example, two lenses or lens assemblies 928.
[00157] The second excitation source 940 may include a second excitation laser 904, such as a 50 kHz 400 ps pulsed diode laser (e.g., Wedge XF 266, RPMC), but aspects disclosed herein are not limited. Output from the second excitation laser 904 may be separated from residual excitation (e.g., 532 nm excitation) using a prism 910, then expanded (e.g., using a variable beam expander or VBE 926) prior to use in the imaging system.
[00158] The TA-PARS system may include a detection system 950 shared between the first and second excitation sources 920 and 940. As exemplified in FIG. 9, the TA-PARS detection system 950 may include a probe beam 912, which may include a 405 nm laser diode such as a 405 nm OBIS-LS laser (OBIS LS 405, Coherent). Here, the detection may be fiber-coupled through a circulator 914 into the system, where it may be combined with the excitations via one or more dichroic mirrors 916 and/or guided via mirrors 934. The combined excitation and detection may be co-focused onto the sample using a lens 918, such as a 0.42 NA UV objective lens. Back-reflected detection from the sample may return to the circulator 914 by the same path as forward propagation. The back-reflected detection contains the PARS non-radiative absorption contrast as nanosecond-scale intensity modulations which may be captured with a photodiode. The detection system 950 may also include a collimator and/or collimating assembly 936 to collimate the detection light.

[00159] This probe wavelength provides improved scattering resolution, which improves the confocal overlap between the PARS excitation and detection spots on the sample. Combined with a circulator-based probe beam pathway and avalanche photodetector, the TA-PARS provides improved sensitivity compared to previous implementations. The visible wavelength probe also provides improved compatibility between the visible and UV excitation wavelengths.
[00160] Radiative relaxation from each of the first and second excitations (266 nm and 515 nm excitation) may be independently captured with different (or first and second) photodiodes 930 and 932. The radiative relaxation induced from the first excitation (515 nm induced radiative relaxation) may be isolated with dichroic mirrors 916, then captured using the first photodiode 930. The radiative relaxation induced from the second excitation (266 nm induced radiative relaxation) may be isolated by redirecting some portion (e.g., 1%-50%) of the total light intensity returned from the sample towards a photodetector and/or second photodiode 932. This light may then be spectrally filtered (e.g., via lens assemblies 936) to remove residual excitation and detection prior to measurement.
[00161] To form an image, mechanical stages may be used to scan a sample over the objective lens. The excitation sources 920 and 940 may be continuously pulsed (e.g., at 50 kHz), while the stage velocity may be regulated to achieve a desired pixel size (spacing between interrogation events). Each time the excitation laser 902 and/or 904 is pulsed, a collection event may be triggered. During a collection event, a few-hundred-nanosecond segment may be collected from 4 input signals using a high-speed digitizer (e.g., RZE-004-200, Gage Applied). These signals may include the laser input reference measurements (excitation and detection), the PARS scattering signal, the PARS non-radiative relaxation signal, the PARS radiative relaxation signal, and a positional signal from the stages. The time-resolved scattering, absorption, and position signals may then be compressed down to single characteristic features. This serves to substantially reduce the volume of data captured during a collection.
[00162] To reconstruct the absorption and scattering images, the raw data may be fitted to a Cartesian grid based on the location signal at each interrogation. Raw images may then be Gaussian filtered and rescaled based on histogram distribution prior to visualization.
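By way of illustration only, the following Python sketch outlines this reconstruction step under the assumption that the raw data arrive as per-event (x, y, amplitude) samples; the function names, grid size, filter width, and percentile limits are hypothetical choices rather than parameters of the disclosed system.

```python
# Minimal reconstruction sketch (illustrative only): grid scattered PARS
# interrogation events onto a Cartesian image, Gaussian filter, and rescale
# the contrast using histogram percentiles.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def reconstruct_image(x, y, amplitude, grid_px=512, sigma=1.0):
    """Fit scattered (x, y, amplitude) samples to a Cartesian grid."""
    xi = np.linspace(x.min(), x.max(), grid_px)
    yi = np.linspace(y.min(), y.max(), grid_px)
    gx, gy = np.meshgrid(xi, yi)

    # Interpolate the irregularly sampled amplitudes onto the regular grid.
    img = griddata((x, y), amplitude, (gx, gy), method="linear", fill_value=0.0)

    # Smooth, then rescale between low/high histogram percentiles.
    img = gaussian_filter(img, sigma=sigma)
    lo, hi = np.percentile(img, [1, 99])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# Example with synthetic positions/amplitudes standing in for stage and PARS data.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 20000), rng.uniform(0, 1, 20000)
amp = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02) + 0.05 * rng.standard_normal(x.size)
image = reconstruct_image(x, y, amp)
```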
[00163] TA-PARS visualization fidelity is assessed through one-to-one comparison against traditional H&E-stained images. The TA-PARS total-absorption and QER contrast mechanisms are also validated in a series of dye and tissue samples. Results show high correlation between radiative relaxation characteristics and TA-PARS-measured QER in a variety of fluorescent dyes and tissues. These QER visualizations are used to extract regions of specific biomolecules such as collagen, elastin, and nuclei in tissue samples. This enables realization of a broadly applicable high-resolution absorption contrast microscope system. The TA-PARS may provide unprecedented label-free contrast in any variety of biological specimens, providing otherwise inaccessible visualizations.
[00164] FIG. 10 shows a comparison of three different contrasts (non-radiative absorption in view (a), radiative absorption in view (b), and scattering in view (c)) provided by a TA-PARS system using 266 nm excitation in thin sections of formalin fixed paraffin embedded (FFPE) human brain tissues. The non-radiative relaxation signals were captured based on nanosecond-scale pressure- and temperature-induced modulations in the collected backscattered 405 nm detection beam from the sample. The radiative absorption contrast was captured as optical emissions from the sample, excluding the excitation and detection wavelengths, which were blocked by optical filters. Concurrently, the unperturbed backscatter of the 405 nm probe captures the local optical scattering from the sample. With this contrast, most of the salient tissue structures were captured. The non-radiative absorption contrast highlights predominantly nuclear structures, while the radiative contrast captures extranuclear features. The optical scattering contrast captures the morphology of the thin tissue section. In resected tissues this scattering contrast becomes less applicable, and hence was not explored in other samples.
[00165] FIG. 11 shows an example of TA-PARS imaging. In view (a), TA-PARS captured the epithelial layer at the margin of resected human skin tissues. The stratum corneum layer was captured in the radiative and non-radiative visualizations concurrently. The radiative visualization provides improved contrast in recovering these tissue layers as compared to the non-radiative image. In another subcutaneous region of the resected human skin tissues in view (b), the TA-PARS captures connective tissues, with sparse nuclei, and elongated fibrin features.
[00166] The disclosed system was also applied to imaging resected unprocessed rattus brain tissues. In view (c), the TA-PARS acquisition highlights the gray matter layer in the brain, revealing dense regions of nuclear structures. The nuclei of the gray matter layer are presented with higher contrast relative to surrounding tissues in the non-radiative image as compared to the radiative representation. Since nuclei do not provide significant radiative contrast, the nuclear structures in the radiative image appear as voids or a lack of signal within the specimen. While some potential nuclei may be observed, they may not be identified with significant confidence, as compared to those in the TA-PARS non-radiative representation. Along the top right of the non-radiative acquisition, structures resembling myelinated neurons can be identified surrounding the more sparsely populated nuclei in that area.
[00167] In view (d), further acquisitions in neighboring regions accentuate the apparent myelinated neuron structures. Dense structures indicative of the web of overlapping and interconnected dendrites and axons are apparent within these regions, where tightly woven neuronal projections are observed arranged around a void in the tissue. Then, zooming out to a larger nearby imaging field, in view (e), sections of distinct tissues were recovered with the non-radiative contrast. The left side of the field contains dense bundles indicating myelin projections into potentially gray matter with larger nuclei, as opposed to the right side, which is potentially white matter containing more myelinated structures with decreased nuclear density.
[00168] Referring to FIG. 12, the QER, or the ratio of the radiative and non-radiative absorption fractions, is expected to contain further biomolecule-specific information. Ideally, the local absorption fractions should correlate directly with radiative relaxation properties. Relative radiative and non-radiative signal intensities may be plotted, and QER may be plotted against reported quantum efficiency (QE) values.
[00169] In one example, the TA-PARS was applied to measure a series of fluorescent dyes with varying quantum efficiencies. The 515 nm excitation was used to generate radiative and non-radiative relaxation signals which were captured simultaneously.
[00170] An example of relative radiative and non-radiative signal intensities is plotted in FIG. 12, view (a). The QER is then plotted against reported QE values for the samples, as shown in view (b). The radiative PARS signals (Pr) are expected to increase linearly with the QE (Pr ∝ QE), while the non-radiative PARS signals (Pnr) are expected to decrease linearly with QE (Pnr ∝ 1 - QE). Therefore, the fractional relationship between the non-radiative and radiative signals is represented by the quotient of the linear functions (QER = Pr/Pnr ∝ QE/(1 - QE)). The empirical results fit well to this expected model (R = 0.988).
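A minimal sketch of fitting this proportionality is shown below, assuming synthetic (QE, QER) pairs; the scale factor and data values are placeholders, not measured results from the disclosed experiments.

```python
# Illustrative least-squares fit of the model QER = k * QE / (1 - QE),
# mirroring the proportionality described above. The dye data here are
# synthetic placeholders, not measured values.
import numpy as np

qe = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.90])   # reported quantum efficiencies
qer_measured = 2.1 * qe / (1 - qe) * (1 + 0.03 * np.random.default_rng(1).standard_normal(qe.size))

x = qe / (1 - qe)                                      # model regressor
k = float(np.dot(x, qer_measured) / np.dot(x, x))      # closed-form least-squares slope

qer_predicted = k * x
r = np.corrcoef(qer_measured, qer_predicted)[0, 1]     # correlation of fit vs. measurement
print(f"fitted scale k = {k:.3f}, R = {r:.3f}")
```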
[00171] FIG. 13 exemplifies images from a QER acquisition process applied to imaging of thin sections of FFPE human tissues. Based on the non-radiative and radiative signals, the QER was calculated for each image pixel, generating a QER image. The result represents a dataset encoding chromophore-specific attributes, in addition to the independent absorption fractions. The QER processing helps to further separate otherwise similar tissue types from solely the radiative or non-radiative acquisitions.
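An illustrative per-pixel computation of such a QER image is sketched below, assuming co-registered radiative and non-radiative amplitude images; the epsilon guard and the choice between the ratio form and the normalized form are implementation details, not requirements of the method.

```python
# Per-pixel QER sketch, assuming co-registered radiative (Pr) and
# non-radiative (Pnr) amplitude images as 2-D arrays.
import numpy as np

def qer_image(pr, pnr, eps=1e-9, normalized=False):
    pr = np.asarray(pr, dtype=float)
    pnr = np.asarray(pnr, dtype=float)
    if normalized:
        # Bounded variant, (Pr - Pnr) / (Pr + Pnr), in [-1, 1].
        return (pr - pnr) / (pr + pnr + eps)
    # Ratio variant, Pr / Pnr; large values indicate radiative-dominant chromophores.
    return pr / (pnr + eps)
```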
[00172] A colorized version of the QER image shown in Fig. 13 highlights various tissue components. The low QER biomolecules (DNA, RNA, etc.) may appear as a first color (e.g., a color having a lower wavelength or a light blue color), while the high QER biomolecules (collagen, elastin, etc.) may appear as a second color and/or a third color different from (e.g., having a higher wavelength than) the first color (e.g., pink and purple). Compared to the H&E visualization captured following the QER imaging session (Fig. 13, view (c-ii)), collagen and elastin (which may appear as a fourth color or dark red) composing the fibrous connective tissues may be easy to identify due to their high QER. Conversely, nuclear structures are appreciable in the first color and/or a fifth color (e.g., blue) due to their low QER. The connective tissues surrounding the carcinoma cells are also differentiated from the fibrous connective tissues in a sixth color (e.g., purple) in the QER visualization as compared to the H&E-stained image. In calculating the QER from the TA-PARS, a complementary imaging contrast is provided, enabling further chromophore specificity than is accessible with radiative or non-radiative modalities independently. Although the terms first color, second color, third color, fourth color, fifth color, and sixth color are used, aspects disclosed herein may not be limited to six predetermined colors. The color appearing in the visualization may have a wavelength proportional to the QER. For example, structures with a higher QER may appear as colors with higher wavelengths (e.g., red) and structures with a lower QER may appear as colors with lower wavelengths (e.g., blue).
[00173] Although the QER method presented here relies on extracted intensity values, similar analogs may be conceived which involve similar ratios of other signal parameters such as lifetime, rise time, signal shape, frequency content, etc.
Label-Free Histological Imaging
[00174] The TA-PARS mechanism may provide an opportunity to accurately emulate traditional histochemical staining contrast, such as H&E staining, and TA-PARS may provide label-free histological imaging. The non-radiative TA-PARS signal contrast may be analogous to that provided by hematoxylin staining, while the radiative TA-PARS signal contrast may be analogous to that provided by eosin staining. The TA-PARS may capture label-free features such as adipocytes, fibrin, connective tissues, neuron structures, and cell nuclei. Visualizations of intranuclear structures may be captured with sufficient clarity and contrast to identify individual atypical nuclei.
[00175] FIG. 14 shows an example of label-free histological imaging applied to FFPE human brain tissue. Referring to FIG. 14, the non-radiative TA-PARS signal contrast is analogous to that provided by the hematoxylin staining of cell nuclei (Fig. 14, view (a)). A section of FFPE human brain tissue was imaged with the non-radiative PARS (Fig. 14, view (a-i)). This non-radiative information was then colored to emulate the contrast of hematoxylin staining (Fig. 14, view (a-ii)). The same tissue section was then stained only with hematoxylin and imaged under a brightfield microscope (Fig. 14, view (a-iii)), providing a direct one-to-one comparison. These visualizations are expected to be highly similar since the primary target of hematoxylin stain and the non-radiative portion of TA-PARS is nuclei, though other chromophores will also contribute to some degree.
[00176] A similar approach was applied to eosin staining in an adjacent section. The adjacent section was imaged with the radiative PARS (Fig. 14, view (b-i)). This radiative information was then colored to emulate the contrast of eosin staining (Fig. 14, view (b-ii)). This section was then stained with eosin (Fig. 14, view (b-iii)), providing a direct one-to-one comparison of the radiative contrast and eosin staining. In each of the TA-PARS and eosin-stained images, analogous microvasculature and red blood cells were resolved throughout the brain tissues. These visualizations are expected since the primary targets of the radiative portion of TA-PARS include hemeproteins, NADPH, flavins, collagen, elastin, and extracellular matrix, closely mirroring the chromophores targeted by eosin staining of extranuclear materials.
[00177] As the different contrast mechanisms of the TA-PARS closely emulate the visualizations of H&E staining, the disclosed system may provide true H&E-like contrast in a single (only one or exactly one) acquisition. The TA-PARS may provide substantially improved visualizations compared to previous PARS emulated H&E systems which relied on scattering microscopy to estimate eosin-like contrast. The scattering microscopy-based methods are unable to provide clear images in complex scattering samples such as bulk resected human tissues. In contrast, the TA-PARS can directly measure the extranuclear chromophores via radiative contrast mechanisms, thus providing analogous contrast to H&E regardless of specimen morphology. Here, the different TA-PARS visualizations were combined using a linear color mixture to generate an effective representation of traditional H&E staining within unstained tissues.
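A hedged sketch of one possible linear color mixture is given below, assuming normalized non-radiative and radiative images; the stain hues are assumed values chosen for illustration and are not the disclosed color model.

```python
# Illustrative linear color mixture: map the non-radiative channel to a
# hematoxylin-like hue and the radiative channel to an eosin-like hue, then
# combine against a white background.
import numpy as np

HEMATOXYLIN_RGB = np.array([0.25, 0.20, 0.55])   # assumed bluish-purple hue
EOSIN_RGB = np.array([0.90, 0.35, 0.55])         # assumed pink hue

def emulate_he(non_radiative, radiative):
    """Both inputs are 2-D arrays normalized to [0, 1]; returns an H x W x 3 RGB image."""
    nr = np.clip(non_radiative, 0, 1)[..., None]
    r = np.clip(radiative, 0, 1)[..., None]
    # Start from a white background and subtract each stain-like contribution.
    rgb = 1.0 - nr * (1.0 - HEMATOXYLIN_RGB) - r * (1.0 - EOSIN_RGB)
    return np.clip(rgb, 0.0, 1.0)
```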
[00178] An example in resected FFPE human brain tissue is shown in Fig. 14, view (c). The wide-field image highlights the boundary of cancerous and healthy brain tissues.
[00179] To qualitatively compare the TA-PARS to traditional H&E images, a series of human breast tissue sections was scanned with the TA-PARS (Fig. 14, view (d-i) and Fig. 14, view (e-i)), then stained with H&E dyes and imaged under a brightfield microscope (Fig. 14, view (d-ii) and Fig. 14, view (e-ii)). The TA-PARS emulated H&E visualizations are effectively identical to the H&E preparations. In both images, clinically relevant features of the metastatic breast lymph node tissues are equally accessible.
Lifetime Imaging
[00180] H&E simulations may be enhanced by extracting time-domain features, which are discussed in more detail in the below section discussing TD-PARS and Feature Extraction Imaging. While the total amplitude of the PARS modulation captures the local absorption of the excitation, the evolution of the pressure and temperature induced modulations will also capture local material properties.
[00181] FIG. 15 exemplifies a PARS signal evolution over time. Each PARS excitation event will capture the scattering of the detection and excitation sources, the radiative emissions, and the PARS non-radiative relaxation time domain signal. Referring to FIG. 15, the PARS decay or evolution time is likely tied to metrics such as the thermal and pressure confinement times which govern traditional photoacoustic imaging. This means that properties such as the thermal diffusivity, conductivity, and speed of sound may dictate the PARS relaxation time. By measuring the decay or evolution time, the PARS may then provide further chromophore specific information on a specimen. This may enable chromophore unmixing (e.g. detect, separate, or otherwise discretize constituent species and/or subspecies) from a single excitation event, or single shot functional imaging.
[00182] An example of a lifetime PARS image in resected rattus brain tissues is shown in Fig. 16. Here the nuclei (which may appear as a first color such as white) are unmixed from the surrounding gray matter (which may appear as a second color such as green) and the interwoven myelinated neuron structures (which may appear as a third color such as orange). This unmixing is performed based on the PARS lifetime signals.
[00183] Referring to FIG. 17, a rapid lifetime extraction technique may be used to greatly improve the PARS collection contrast. PARS amplitude may be calculated as the difference between the average pre- and post-excitation signals. This acquisition is less sensitive to imaging noise compared to alternative extraction techniques. Previously, PARS used a min-max acquired signal approach to extract the PARS-specific signals. By capturing the maximum of the signal minus the minimum, the PARS may highlight the total amplitude of the PARS modulation. However, this approach is highly susceptible to collection and measurement noise in the PARS signals.
[00184] One possible signal extraction method can be performed by determining an average pre-excitation signal. Then the average post-excitation signal is calculated from the initial portion of the lifetime signal. The PARS amplitude is then calculated as the difference between the two average signals. This metric for rapid signal extraction provides substantial improvements in signal-to-noise ratio and sensitivity when collecting PARS signals. Since the technique relies on average signals, the PARS collection is substantially less sensitive to acquisition noise.
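A minimal sketch of this "step" extraction is given below, assuming a digitized time-domain trace and a known excitation sample index; window lengths and offsets are illustrative and depend on the digitizer rate and trigger timing.

```python
# "Step" extraction sketch: PARS amplitude as the difference between the mean
# post-excitation and mean pre-excitation segments of a time-domain trace.
import numpy as np

def step_amplitude(td, excitation_index, pre_len=50, post_len=50, post_offset=2):
    """td: 1-D time-domain signal; excitation_index: sample of the excitation pulse."""
    pre = td[excitation_index - pre_len:excitation_index]
    post = td[excitation_index + post_offset:excitation_index + post_offset + post_len]
    # Averaging both windows suppresses uncorrelated measurement noise
    # compared to a min-max (peak-to-peak) projection.
    return float(np.mean(post) - np.mean(pre))
```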
[00185] Additional time-based imaging methods will be discussed in more detail in the below section on TD-PARS and Feature Extraction Imaging. First, two other PARS architectures will be briefly discussed.
MP-PARS
[00186] Referring to FIG. 18, in multi-pass PARS (MP-PARS), the backscattered detection may be captured and subsequently redirected back to the sample where it interacts with the sample again before it is detected. Each time the detection interacts with the sample, it may pick up further information of the PARS modulation.
[00187] In PARS, the non-radiative absorption induced perturbations in the optical properties are visualized using a secondary co-focused detection laser. The detection laser is co-focused with the excitation spot such that the absorption induced modulations may be captured as changes in the backscatter intensity of the detection laser. For a given detection intensity I_det, before the excitation pulse interacts with the sample the signal can be approximated based on the following relationship: PARS_pre-ext ∝ I_det(R), where R is the unperturbed reflectivity of the sample.

[00188] Once the excitation pulse interacts with the sample, the signal may be approximated as: PARS_post-ext ∝ I_det(R + ΔR), where the pressure- and temperature-induced change in reflectivity is denoted by ΔR. The total PARS absorption contrast is then approximated as: PARS_sig ∝ PARS_post-ext - PARS_pre-ext. Substituting the previous relations for PARS_pre-ext and PARS_post-ext leads to the following: PARS_sig ∝ I_det(R + ΔR) - I_det(R).
[00189] Before the excitation pulse, the backscattering of the MP-PARS is approximated based on the following relationship: MPPARS_pre-ext ∝ I_det(R)^n, where R is the unperturbed reflectivity of the sample, and n is the number of times the detection interacts with the sample. Once the excitation pulse interacts with the sample, the signal may be approximated as: MPPARS_post-ext ∝ I_det(R + ΔR)^n, where the pressure- and temperature-induced change in reflectivity is denoted by ΔR.

[00190] The total MP-PARS absorption contrast is then approximated as: MPPARS_sig ∝ MPPARS_post-ext - MPPARS_pre-ext. Substituting the previous relations for MPPARS_pre-ext and MPPARS_post-ext leads to the following: MPPARS_sig ∝ I_det(R + ΔR)^n - I_det(R)^n, where n is the number of times the detection interacts with the sample. PARS signals may be expanded non-linearly by these repeated interactions of the backscattered detection with the sample. The detection may then be redirected to interact with the sample any number of times, resulting in a corresponding degree of non-linear expansion in the non-radiative absorption contrast.
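A small numeric illustration of this non-linear expansion is given below, using assumed values of I_det, R, and ΔR; the numbers are arbitrary and serve only to show how the recovered signal grows with the number of detection passes n.

```python
# Numeric illustration (assumed values) of the non-linear expansion of the
# MP-PARS signal with the number of detection passes n:
# MPPARS_sig ∝ I_det * ((R + dR)^n - R^n), versus I_det * dR for a single pass.
I_det, R, dR = 1.0, 0.90, 0.01

for n in (1, 2, 4, 8):
    mp_sig = I_det * ((R + dR) ** n - R ** n)
    gain = mp_sig / (I_det * dR)           # enhancement relative to single-pass PARS
    print(f"n = {n}: MP-PARS signal = {mp_sig:.4f}, gain = {gain:.2f}x")
```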
[00191] MP-PARS architectures, such as an architecture 1800 exemplified in FIG. 18, may be oriented such that passes consist of reflection or transmission events, which may occur at normal incidence to the sample or at some relevant transmission or reflection angle. For example, if the target features a particularly strong Mie-scattering angle, it may be advantageous to orient the multiple passes along this direction. Multiple passes may occur along a single (only one or exactly one) path (such as a normal-incidence reflection), or along multiple paths such as a normal-incidence transmission architecture, or even architectures with additional (more than two) pathways to take advantage of additional spatial non-linearities.
[00192] For example, an MP-PARS architecture 1800 may include an excitation source 1802 (e.g., a 266 nm excitation source or laser), one or more detection sources 1804 (e.g., a 405 nm detection source or laser), one or more photodiodes or photodetectors 1806, a circulator 1808, a collimator 1810, one or more mirrors 1810 to guide the excitation and/or detection light, a prism 1816, and a variable beam expander 1818. In addition, the MP-PARS architecture 1800 may include a pair of alignment mirrors 1820 to align the excitation and/or detection light, and one or more scanners or scanning heads 1822, 1824 arranged at different sides of the sample. The one or more scanners may include a first scanner 1822 to transmit excitation and detection light to the sample, and a second scanner 1824, arranged with a mirror 1826, to allow for multiple passes. A computer 1828 may be used to analyze the received signals and/or control the excitation and detection sources 1802 and 1804.
[00193] MP-PARS can act as an optical amplifier for detected PARS signals. It can be employed in the same way that laser cavity systems or photomultiplier tubes are implemented, to further improve the sensitivity of the measured signal. This may result in substantial improvements in PARS imaging fidelity. PARS may be captured with improved sensitivity to any or all of the radiative, non-radiative, or scattering contrast, facilitating acquisitions with lower imaging powers. This may facilitate acquisition of lower concentrations of chromophores or chromophores with lower optical absorption, or may reduce sample exposure. These non-linear effects may be leveraged to improve recovered imaging resolution by taking advantage of non-linear spatial dependencies to provide super-resolution imaging.
Multi-Photon Excitation PARS
[00194] Referring to Fig. 19, multi-photon PARS may provide several benefits over traditional PARS excitation. In multiphoton excitation, a number of photons are absorbed by a target at virtually the same instant and/or in a single (only one or exactly one) event. The energy of these photons is then added together such that the absorbed photons are equivalent to a single (only one or exactly one) higher energy and shorter wavelength photon. Here two photons with half the energy and twice the wavelength of the single photon excitation event are absorbed by a chromophore providing analogous excitation.
[00195] In PARS, as in fluorescence microscopy, non-linear absorption mechanisms may be leveraged. Traditionally, PARS targets single photon absorption effects, for example the 266 nm UV excitation of DNA. However, the PARS may also target multiphoton absorption characteristics such as those used in multiphoton fluorescence microscopy. In multiphoton microscopy, a number of photons are absorbed by a target at virtually the same instant. The energy of these photons is then added together such that the absorbed photons are equivalent to a single higher energy and shorter wavelength photon.
[00196] In the case of two-photon PARS, the excitation wavelength would be selected as double the traditional value. Two photons would then be absorbed simultaneously providing an excitation event equivalent to standard one-photon excitation (Fig. 19). In the example listed above, rather than using 266 nm UV excitation to target DNA, a 532 nm excitation could be used to target the absorption of DNA. The two photon 532 nm absorption is equivalent to a single 266 nm absorption. Aspects disclosed herein are not limited to 532 nm excitation. The wavelength of the excitation may be configured to be double a predetermined excitation wavelength, such as double of a UV wavelength (e.g., double 100-400 nm) or a UVC wavelength (100-280 nm).
[00197] One primary difference between a multi-photon PARS and a conventional single-photon PARS architecture is the requirement for high instantaneous optical energy densities. In order to keep sample exposure at pragmatic levels, this architecture may require the use of very short optical excitation pulses, on the order of a single picosecond or shorter. Such a requirement may be unique to the multi-photon PARS.
[00198] The multi-photon PARS may provide several benefits over traditional PARS excitation. First, multiphoton excitation uses longer-wavelength photons, which are lower energy and penetrate more deeply. Second, moving towards longer wavelengths may provide further biological compatibility, avoiding tissue damage. This is especially relevant in the case of in-situ histology, since the PARS UV excitation may not be compatible with imaging deep into the body. It can also improve the safety of the PARS system for use in in-situ applications.
TD-PARS and Feature Extraction Imaging
[00199] PARS operates by capturing nanosecond-scale optical perturbations generated by photoacoustic pressures or photothermal temperature signals. These time-domain (TD) modulations are usually projected by amplitude to determine absorption magnitude. A single characteristic intensity value may be extracted from each TD signal to visualize the total absorption magnitude at each point. For example, TD amplitude, computed as the difference between the maximum and minimum of the TD signal, is commonly used to represent the absorption magnitude.
[00200] However, significant information on the target’s material properties is contained within the TD signals. Time-evolution of PARS signals may be dictated by material properties such as the density, heat capacity, and acoustic impedance. H&E-like visualizations may be generated directly from PARS time domain data by employing machine learning algorithms which bypass the PARS image reconstruction step. This approach is beneficial compared to direct PARS-to-H&E image-to-image translation as it provides additional valuable information which can help to better discriminate between different tissue types in the image.
[00201] Referring to FIGS. 20A and 20B, H&E-like representations may be made by the application of AI image-to-image translation algorithms based on deep neural network architectures such as generative adversarial networks (GANs), conditional generative adversarial networks (cGANs), or Cycle-Consistent Adversarial Networks (CycleGANs). These methods learn color transfer mappings from paired or unpaired samples of the source and the reference representations. In this way, reconstructed grayscale PARS images (FIG. 20A) can be mapped into color H&E data (FIG. 20B).
[00202] Imaging modalities may scan, pixel-by-pixel, capturing a signal over time at each pixel. While scanning over time may be continuous, realistically, signals are recorded periodically or discretely using an image acquisition system. Characteristic values may be extracted from each signal, accomplished by either using a Hilbert transform to find an envelope of the signal, from which the difference between maximum and minimum values may be computed, or by directly computing the difference between the maximum and minimum of the raw signal itself.
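For illustration, both characteristic-value extractions described above can be sketched as follows, assuming a one-dimensional time-domain trace; the envelope approach uses the analytic-signal (Hilbert transform) implementation available in scipy.

```python
# Sketch of the two characteristic-value extractions described above:
# (1) peak-to-peak of the Hilbert-transform envelope and (2) peak-to-peak of the raw signal.
import numpy as np
from scipy.signal import hilbert

def envelope_amplitude(td):
    envelope = np.abs(hilbert(td))          # analytic-signal envelope
    return float(envelope.max() - envelope.min())

def raw_amplitude(td):
    return float(td.max() - td.min())       # simple max-minus-min projection
```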
[00203] Referring to Figs. 21 A and 21 B, methods and techniques disclosed herein may bypass an image reconstruction stage where images are reconstructed by extracting the amplitude of the captured optical absorption signals or averaging their values over time. Methods and techniques disclosed herein may directly use signal representations as input to the artificial intelligence-based colorization algorithm instead of the pixels of the reconstructed image. In this way, additional valuable information on the underlying tissue can be included to create virtual H&E-like images.
[00204] To make the colorization algorithm more computationally efficient, compressed representations of the time domain signal can be used. These, for example, may include, but are not limited to: principal linear components of the signal, coefficients of other signal decomposition methods, salient signal points, etc. Such techniques reduce the dimensionality of datasets and increase interpretability while at the same time minimizing information loss. An example of creating an H&E-like visualization by applying the Pix2Pix algorithm is shown in FIGS. 21A and 21B. FIG. 21A shows three principal components of the time domain signals, and FIG. 21B shows the corresponding synthesized H&E image. Differences between FIGS. 20A-B and FIGS. 21A-B may not be readily apparent in black and white, and may be better assessed in color form. For example, FIG. 21A may show some coloring, while FIG. 20A may be black and white and/or grayscale. In addition, FIG. 21B may be less granular and/or show more color than FIG. 20B.
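A minimal sketch of such a compression using principal component analysis is given below, assuming the per-pixel time-domain signals are stacked into an (H, W, T) array; the choice of three components is illustrative.

```python
# Compress per-pixel time-domain signals to a few principal components
# before feeding them to a colorization model.
import numpy as np
from sklearn.decomposition import PCA

def compress_td_signals(td_stack, n_components=3):
    """td_stack: array of shape (H, W, T) of time-domain signals per pixel."""
    h, w, t = td_stack.shape
    flat = td_stack.reshape(h * w, t)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(flat)              # (H*W, n_components)
    return scores.reshape(h, w, n_components), pca
```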
Intelligent Clustering Method
[00205] An unsupervised clustering method may be used to form colorized, synthetic H&E images without needing to reconstruct a grayscale image. The clustering method may learn TD features which relate to underlying biomolecule characteristics. This technique identifies features related to constituent biomolecules, enabling single-acquisition virtual tissue labelling. Colorized visualizations of tissue are produced, highlighting specific tissue components. The clustering may be performed on any or all of the PARS radiative, non-radiative, and scattering channels.
[00206] For a given biomolecule with constant material properties, the PARS TD signals may have specific shapes. However, signals from a given target may vary in amplitude (e.g. based on concentration) and may suffer from noise. Clustering signals by shape and learning an associated prototype for each cluster may be used to determine constituent time-domain features that capture the material-specific information of the underlying tissue target, regardless of the noise and amplitude variation present in the TD signals.
[00207] As an example, a modified K-Means clustering method may be used. Measured signals are treated as vectors, where the vector angle is analogous to signal shape. The distance or difference between TD signals is the sine of the subtended angle, such that orthogonal signals have maximal distance and scaled or inverted signals have zero distance. Cluster centroids are then calculated as the first principal component of the union set of each cluster and its negative, causing the learned centroids to be robust to noise. Once the TD features (centroids) are learned, corresponding feature amplitudes are extracted by performing a change-of-basis from the time- to feature-domain.
[00208] Referring to FIG. 22 showing exemplary architecture 2200, a broadly absorbed UV excitation (e.g., 266 nm) may target several biomolecules such as collagen, elastin, myelin, DNA, and RNA with a single (only one or exactly one) excitation. Subsequently, the clustering approach may be used to create enhanced absorption contrast visualizations and to extract biomolecule-specific features from the TD signals. UV excitation may be provided by an excitation light source 2202, such as a 50 kHz 266 nm laser (e.g., WEDGE XF 266, Bright Solutions). Excitation may be spectrally filtered with a prism 2204, then expanded (e.g., with a variable beam expander or VBE 2206) before combination with the detection beam. Excitation light may be guided via one or more mirrors 2208.
[00209] Detection light may be provided by a detection light source 2212, such as a continuous-wave 405 nm OBIS LS laser. The detection may be fiber-coupled through the circulator 2214, collimated (e.g., using collimator 2216), then combined with the excitation beam via a dichroic mirror 2210. Detection light may be guided via one or more mirrors 2218.
[00210] Combined excitation and detection may pass through a pair of alignment mirrors 2200 and be co-focused through a UV-transparent window onto the specimen. Back-reflected light from the sample may return to the collimator 2216 and circulator 2214 by the same path as forward propagation. The circulator 2214 may re-direct backscattered light to a photodiode 2222 capturing the nanosecond-scale intensity modulations. During image acquisition, the stages 2226 may raster scan the specimen over the objective lens, while the excitation pulses continuously. Analog photodiode output may be captured for each excitation event using a high-speed digitizer, forming the PARS TD signals. Using a stage position signal, each PARS TD may be then mapped to a pixel in the final image, which may be output on an electronic display and/or a computer 2228.
[00211] Referring to FIGs. 23-24, instead of defining pixel values by the TD signal amplitude, the K-means method may leverage the TD features depending on a number of extracted clusters. If only a single feature is requested (K = 1), the clustering algorithm yields a feature, containing the TD shape similar to all tissue components. This feature can then be used as the basis for matched filtering, a technique designed to optimally extract the amplitude of known signal shapes with additive noise. This provides a robust noise-resistant method for determining absorption amplitude or pixel “brightness.” Applied in tissues, this extraction provides a very substantial improvement in structural image quality and noise suppression compared to traditional TD amplitude projection, as shown in FIG. 24, view (a).
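A hedged sketch of this matched-filter projection is shown below, assuming a single learned, unit-norm time-domain feature obtained from the clustering step; array shapes and names are illustrative.

```python
# Matched-filtering sketch for K = 1: project each time-domain signal onto the
# learned feature to estimate absorption amplitude ("brightness") per pixel.
import numpy as np

def matched_filter_amplitude(td_signals, feature):
    """td_signals: (N, T) array; feature: (T,) learned TD shape."""
    f = feature / (np.linalg.norm(feature) + 1e-12)
    # The inner product with a known signal shape is the optimal linear estimator
    # of its amplitude under additive white noise.
    return td_signals @ f
```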
[00212] If additional clusters are requested (K > 1), tissue-specific time-domain features are learned. In this case, the feature amplitudes at each pixel are extracted by performing a change-of-basis from the time-domain to the feature-domain. To visually illustrate the efficacy in learning features, time-domain signals were clustered for K = 2 requested features. By projecting the high-dimensional time-domain data onto a two-dimensional plane containing the learned features, it is possible to visualize the TD signals (dots) relative to the identified features (arrows). In the visualization, each point is colored proportionally to the signal content attributed to the constituent features.
[00213] Further visualizations are generated for resected murine brain tissues using three features (K = 3). The extracted feature amplitudes are mapped to the independent red, green and blue (R,G,B) color channels to form a colorized visualization. Hence, the pixel color represents the proportional mixture of each feature’s contribution to the time domain signal, while the intensity represents the total magnitude of absorbed energy. Referring to FIG. 24, view (c), the K = 3 colorization demonstrates the potential of the disclosed technique in recovering biomolecule-specific information. Structures of singular myelinated neurons (white matter) from the brain stem are illustrated in pink, projecting into the brain. Concurrently, unmyelinated neurons (gray matter) appear on the right side of the frame in green. Finally, nuclear structures scattered throughout the brain tissues appear in white.
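A minimal sketch of this colorization step is given below, assuming three non-negative feature-amplitude images; the normalization by a high percentile is an assumed choice rather than part of the disclosed method.

```python
# Map K = 3 feature-amplitude images to RGB: hue encodes the proportional
# feature mixture, intensity encodes total absorbed energy.
import numpy as np

def features_to_rgb(feature_images):
    """feature_images: (H, W, 3) non-negative feature amplitudes."""
    amps = np.clip(feature_images, 0, None)
    total = amps.sum(axis=-1, keepdims=True)          # total signal magnitude
    mixture = amps / (total + 1e-12)                   # proportional contribution per feature
    intensity = total / (np.percentile(total, 99) + 1e-12)
    return np.clip(mixture * np.clip(intensity, 0, 1), 0, 1)
```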
[00214] Referring to FIG. 25, three different regions of the brain tissues were selected based on macroscopic inspection: white matter (view (a)), gray matter (view (c)), and the transition or boundary between white and gray matter (view (b)). Each unique region was imaged with the PARS microscope, before being colorized using the same K = 3 model. In each of the selected regions, the TD colorization highlights identical biomolecule-specific structures as those identified in the initial colorized image (e.g., image (c) in FIG. 24).
[00215] The TD signals may be clustered by shape, but not by amplitude. A given pixel (and its corresponding TD signal) may be expressed in terms of characteristic signal shapes of one or more targets and a residual term. Specifically, for a given signal s, and learned characteristic signal shapes (features) {f_i}, the signal may be represented as s = Σ_i a_i·f_i + r, such that the weights, {a_i}, specify the proportion of each characteristic signal shape, with the residual term, r, included to encapsulate any error as a result of modelling or measurement noise.
[00216] TD signals may be vectors in the space R^n, where the dimension, n, of the space is simply the number of discrete TD samples. Because TD signals are treated as Cartesian vectors, the signal shape is then analogous to the vector angle. A unit-vector pointing in the direction of the non-noise portion of the given cluster may define a centroid. A union set may be constructed of the cluster and its negated points, and the centroid may be found as the direction of greatest variance (the principal component from a sample covariance), allowing higher amplitude signals to have the greatest influence.
[00217] A clustering algorithm is reflected in FIG. 26, and a corresponding method 2700 is reflected in FIG. 27. The calculation of cluster centroids is reflected in line 16, and Singular Value Decomposition (SVD) may be used to extract the first principal component. For inputs, the clustering algorithm takes a set, S = {s_j}, of PARS TD signals and the requested number of clusters (identical to the number of learned features), K. Furthermore, the convergence criteria are specified by a minimum number of moves criterion and a difference in mean residual criterion. These are required to ensure convergence.
[00218] The algorithm may be run several times, and only the most optimal solution (in terms of minimal mean residual) may be returned. The algorithm initializes by randomly selecting K TD signals to act as initial cluster centroids, shown on lines 1-3 and in step 2702. Next is the “Membership Update” step, shown on lines 7-12 and in steps 2704 and 2706, where the cluster membership of all points (PARS TD signals) is updated by evaluating the distance from each point to each centroid in step 2704, and assigning membership to the associated cluster of the least distant centroid in step 2706. The number of points that move (change cluster membership) is recorded (lines 9-11). Next, in step 2708, the mean residual is evaluated (line 13), as well as the change in the mean residual from the previous iteration (line 14), starting from zero in the case of the first iteration. Next is the “Centroid Update” step, shown on lines 16-21 and in step 2710, where centroids are updated, and are calculated as the first principal component of the union set of each cluster and its negative. Practically this is computed via a Singular Value Decomposition (SVD), shown on line 19. In step 2712, centroids are normalized such that they are unit magnitude. Finally, in step 2714, the convergence criteria are checked. If the algorithm has not converged (“No” in FIG. 27), the “Membership Update” step, followed by the “Centroid Update” step are repeated until the convergence criteria are met (“Yes” in FIG. 27). The algorithm returns, in step 2716, as outputs, a set of cluster labels, indicating which cluster each PARS TD signal is associated to, and a set of K cluster centroids, the learned time-domain features.
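A hedged Python sketch of the clustering loop described above is shown below, assuming the time-domain signals are stacked as an (N, T) array; the initialization, thresholds, and residual definition are illustrative, and in practice the routine may be restarted several times with the lowest-residual solution retained, as noted above.

```python
# Modified K-means sketch: shape-based distance (sine of the subtended angle),
# SVD-based centroid update over each cluster united with its negation, and
# convergence on the number of moves and the change in mean residual.
import numpy as np

def cluster_td_signals(S, K, max_iter=100, min_moves=10, min_dresid=1e-4, seed=0):
    """S: (N, T) array of PARS TD signals. Returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    N, _ = S.shape
    idx = rng.choice(N, size=K, replace=False)
    centroids = S[idx] / (np.linalg.norm(S[idx], axis=1, keepdims=True) + 1e-12)

    labels = np.full(N, -1)
    prev_resid = 0.0
    s_norm = np.linalg.norm(S, axis=1) + 1e-12

    for _ in range(max_iter):
        # Membership update: distance is the sine of the angle between signal and
        # centroid, so scaled or inverted copies of a centroid have zero distance.
        cos = np.abs(S @ centroids.T) / s_norm[:, None]       # |cos(angle)|, unit-norm centroids
        dist = np.sqrt(np.clip(1.0 - cos ** 2, 0.0, 1.0))     # sin(angle)
        new_labels = dist.argmin(axis=1)
        moves = int(np.sum(new_labels != labels))
        labels = new_labels

        # Mean residual: portion of each signal not explained by its centroid.
        resid = float(np.mean(dist[np.arange(N), labels] * s_norm))
        dresid = abs(resid - prev_resid)
        prev_resid = resid

        # Centroid update: first principal component of each cluster united with
        # its negation, computed via SVD, then normalized to unit magnitude.
        for k in range(K):
            members = S[labels == k]
            if members.size == 0:
                continue
            union = np.vstack([members, -members])
            _, _, vt = np.linalg.svd(union, full_matrices=False)
            centroids[k] = vt[0] / (np.linalg.norm(vt[0]) + 1e-12)

        if moves < min_moves and dresid < min_dresid:
            break
    return labels, centroids
```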
[00219] PARS TD signals may contain sufficient information to identify biomolecules based on their clustered TD features. Such characteristics may be transferrable across images of different tissue specimens. Feature identification may be performed on an initial specimen, then transferred to others, producing similarly convincing results. Moreover, this technique offers unique advantages as the clustering approach requires no prior information, with the exception of the number of clusters. Training may be performed blindly across the signals captured within the specimen of interest. This is especially beneficial in complex specimens such as the resected brain tissues explored here. The challenge is that blindly clustering for a pre-selected number of features does not guarantee that a singular biomolecule/tissue type will be isolated per feature. Each cluster simply targets a unique characteristic of the PARS TD signals, which may be used to highlight distinct tissue components.
[00220] Biomolecules may be visualized based on their PARS TD characteristics. This method may enable a single (only one or exactly one) broadly absorbed excitation source to provide otherwise inaccessible material specificity, while simultaneously targeting the optical absorption of several biomolecules. This can enhance absorption contrast visualizations, acquired in a fraction of the time compared to analogous multiwavelength approaches. This enables several new avenues for label-free PARS microscopy by adding an additional dimension to the absorption contrast, vastly expanding the potential for biomolecule specificity.
[00221] Referring to FIG. 28, additional methods of extracting signals have also been conceived which aim to provide superior PARS non-radiative signal extraction. As previously described with reference to FIG. 17, the average of a region both directly before and directly after a modulation may be used as a method of noise reduction. However, extensions of this concept may provide improved performance in more challenging scenarios. In particular, when the interrogation point is moving rapidly across the surface of the sample, it may be subject to additional non-PARS-based modulations due to spatial variations about the sample. In these instances, additional steps may be required to estimate the non-modulated scattering. If the method described with respect to FIG. 17 is referred to as "step" processing, an analogous "angled-step" processing may be envisioned. Here, non-modulated scattering may be approximated by using the mean of both the pre- and post-modulated regions, from which the PARS amplitude and time-domain information can be extracted. More refined approaches, such as partial curve fitting of specific pre- and post-modulated regions, can also be envisioned with the same end goal.
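A minimal sketch of one "angled-step" variant is given below, assuming the non-modulated baseline is approximated by a straight-line fit through the pre- and post-modulation windows; the indices and window lengths are hypothetical.

```python
# "Angled-step" sketch: estimate the non-modulated scattering trend with a linear
# fit through the pre- and post-modulation regions, then measure the modulation
# against that baseline.
import numpy as np

def angled_step_amplitude(td, mod_start, mod_end, pre_len=50, post_len=50):
    t = np.arange(td.size)
    base_idx = np.concatenate([np.arange(mod_start - pre_len, mod_start),
                               np.arange(mod_end, mod_end + post_len)])
    slope, intercept = np.polyfit(t[base_idx], td[base_idx], deg=1)   # linear baseline
    baseline = slope * t + intercept
    residual = td - baseline
    # Mean modulation relative to the estimated scattering trend.
    return float(np.mean(residual[mod_start:mod_end]))
```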
[00222] Referring to FIG. 29, additional information may also be provided by recording various analog-filtered instances of a single (only one or exactly one) PARS signal. For example, a relatively unfiltered signal may be acquired alongside a highly band-passed signal by splitting the original analog signal from the photodetector and recording it on two separate channels. From these, intelligent methods such as the aforementioned K-means approach may be utilized independently on the various recorded filtered iterations. As these each represent highly independent signal measurements, additional signal fidelity may be extracted from such processes, allowing for improved sensitivity.
[00223] Referring to FIG. 30, additional information may also be provided by taking advantage of expected spatial correlation between adjacent points. For example, a data volume may be reconstructed with the two traditional lateral image axes, along with a third axis containing each respective time-domain. This may facilitate lateral processing operations prior to time-domain signal extractions. Here, mutually dependent and mutually independent dependencies along the lateral and time axes may be leveraged to approximate a significantly lower-noise central signal. Similar non-intelligent approaches may be performed on any or all of the PARS radiative, non-radiative, and scattering channels.
PARS Time Domain Features
[00224] Using intelligent clustering methods, PARS time-domain (TD) signals can be analyzed when there are multiple absorption events occurring in close proximity or simultaneously in time. This may result in overlapping PARS TD signals. In this case, intelligent clustering approaches can be used to extract and isolate the different absorption events and time-resolved signals from one another, effectively unmixing the different PARS events even though they overlap in time.
[00225] Conversely, intelligent clustering methods can also be used to extract maximally different signal combinations from the combined PARS time domain signals. In one example graph 8400 presented in FIG. 84, two different wavelengths of PARS excitation (e.g., 266 nm and 532 nm) are introduced to the sample in close proximity. Consequently, the PARS non-radiative TD signals are blended. Intelligent clustering methods, in this example K-means, are applied to optimally extract information from the blended signals.

[00226] FIG. 84 shows an example of PARS non-radiative time domain features extracted from overlapping 532 nm and 266 nm excitation events. The three features represent the maximally different absorption combinations, which optimally define the PARS signals based on the difference in absorption contrast at the two wavelengths.
[00227] Rather than trying to isolate one PARS event from the other, an example algorithm is applied to determine maximally different absorption combinations. In the example shown in FIG. 84, this results in three different signal combinations corresponding to: Feature 1 (8410): first absorption event higher in amplitude, second absorption event lower in amplitude; Feature 2 (8420): first and second absorption amplitudes equal; and Feature 3 (8430): first absorption amplitude low, second absorption amplitude high.
[00228] Transforming the PARS signals to view them with respect to these clusters may provide enhanced separation of the underlying biomolecules. This is because the signals are represented based on the difference in magnitude between the two absorption events, rather than their direct absorption magnitude at each excitation event.
[00229] An intelligent clustering method applying K-means clustering with a modified approach to compute cluster centroids is described herein. The intelligent clustering method aims to identify K characteristic shapes in the signals, described as a set of K centroids, F = {f_i(t)}, i = 1,...,K. TD signals are treated as Cartesian vectors in the space R^n, where n corresponds to the number of TD samples; thus the shape of the signal is associated with the angle of the corresponding vector, and the distance between TD signals is quantified by the sine of the angle between them, resulting in a maximum distance for orthogonal signals and zero distance for scaled or inverted signals. Cluster centroids are computed as the principal component of the combined set of each cluster and its negative, ensuring that the learned centroids are resilient to noise. Following the K-means clustering, a set of feature vectors F = {f_i} is obtained, which can represent the signals as a weighted sum. These feature vectors are then arranged in the form of a matrix of features, F = [f_1 | f_2 | ... | f_K]. The amplitudes of the learned TD features (centroids) contained within each time domain signal are extracted by transforming from the time-domain to the feature-domain. This is performed by multiplying each TD signal with the pseudo-inverse of F [2]. The result is an array of K feature images, M_f = [m_f1, m_f2, ..., m_fK].
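A minimal sketch of this change-of-basis is shown below, assuming the learned centroids are stacked as a (K, T) array and the per-pixel signals as an (H, W, T) array; names and shapes are illustrative.

```python
# Change-of-basis from time-domain to feature-domain: stack the learned centroids
# into a feature matrix F and apply its pseudo-inverse to every per-pixel TD signal,
# producing K feature-amplitude images.
import numpy as np

def extract_feature_images(td_stack, centroids):
    """td_stack: (H, W, T) TD signals; centroids: (K, T) learned features."""
    h, w, t = td_stack.shape
    F = centroids.T                                              # (T, K), features as columns
    amplitudes = td_stack.reshape(-1, t) @ np.linalg.pinv(F).T   # least-squares weights per pixel
    return amplitudes.reshape(h, w, centroids.shape[0])          # (H, W, K) feature images
```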
[00230] Extracted features can then be used for colorization, direct visualization, or further pixel level analysis as discussed in the next section on the PARS Data Vectors. In some embodiments, the PARS-TD features can be used to reduce data volumes for colorization. Using only features with the maximal information, the model’s prediction power is improved by eliminating redundant data, increasing contrast between the selected features, and reducing the training volumes and times.
[00231] An example of a multi-channel virtual staining architecture 8500 is shown in FIG. 85 for signal processing and virtual staining of PARS image data. As shown, a feature learning process 8510 of K features takes place using a representative subset of the NR TD signals. A feature extraction process 8520 is then performed on all the TD signals to form K feature images. These K feature images, along with the NR signal amplitudes (normal PARS extraction method) from each excitation wavelength (266 nm and 532 nm) and the R signal amplitudes, are then fed into a feature selection module 8530 to generate input data for a multi-channel (MC)-GAN model 8540.

[00232] First, the feature learning process 8510 of K features takes place using a subset (shown in the red box) of the NR channel TD signals. Second, feature extraction is performed on all the TD signals of the data in hand, forming K feature images. NR images of each excitation wavelength (266 nm and 532 nm in this case) and R images are extracted separately and passed along with the K feature images to the feature selection phase. The selected features are then used as the input data to an example virtual staining machine learning model 8540, which can be an MC-GAN model, and the true H&E image 8550 is used as the model ground truth.
[00233] When used for colorization, using the extracted features may enhance the model’s prediction power by eliminating redundant data and increasing contrast between the selected features. An example of this is shown below in five sets of images 8600 in FIG. 86, which shows that the feature-based colorization implemented using the architecture shown in FIG. 85 outperforms alternative methods.
[00234] FIG. 86 shows a comparison of virtual staining results using different combinations of PARS feature images as inputs: (a) an RGB image of raw PARS data where R: NR532, G: R266, B: NR266 (displayed for visualization), highlighting different parts of a human skin tissue sample; (b)-(d) the worst, moderate, and best results, respectively, using the labeled feature combinations; and (e) the true H&E image of the same field-of-view.
PARS Feature Vectors
[00235] For each PARS event, a PARS feature vector may be formed. An example PARS feature vector is a PARS data vector. The PARS data vector for a given pixel can be thought of as a Euclidean vector in ‘n’ dimensional space, where ‘n’ is the number of PARS features in a given vector. This feature vector or data vector may include primary measurements, e.g., radiative and non-radiative signal amplitudes and energy, radiative and non-radiative signal lifetime or signal features, or may include secondary measurements extracted as different combinations, calculations or ratios from the primary signals.
[00236] An example of a secondary measurement may include the quantum efficiency ratio (QER), or the total absorption (TA), or the ratio of radiative or non-radiative absorptions at different wavelengths. An example of a PARS feature vector 8700 is presented in FIG. 87, which shows an example of a PARS data vector 8700. The data included in this example may include the radiative and non-radiative signal energy absorption (NR + R) at 266 nm and 532 nm, and the quantum efficiency ratio (QER = (NR - R)/TA) at 266 nm and 532 nm. The example PARS feature vector 8700 is not presented as an exhaustive list, only a representation of some of the potential data which is extracted for each image pixel. The PARS feature vector may contain any information which is collected and extracted from each PARS event.
[00237] The PARS feature vectors can then be processed further for pixel level analysis or may be passed directly into a colorization/ visualization algorithm e.g., an image generator model.
[00238] In one example of pixel level analysis, the PARS feature vectors may be directly correlated against ground truth tagging such as histochemical, or immunohistochemical staining. This may provide a one-to-one mapping between PARS data vectors, and different histochemical stains, or their underlying biomolecule targets. This process allows for a PARS “signature/fingerprint” or ground truth PARS data vector to be calculated for a given biomolecule, or mixture of biomolecules. For example, this could be used to develop a fingerprint for cells expressing HER2 protein. This could then be used as a ground truth to test if cells were expressing HER2 protein, or not. The same process of developing a “ground truth” PARS data vector could be performed for any biomolecule, or mixture of biomolecules.
[00239] Alternatively, instead of using a ground truth metric, intelligent blinded methods such as clustering (e.g., k-means, or principal component analysis) may be applied directly to the PARS data vectors to identify unique groups of constituent features. This may provide different representations of the data which better separate underlying biomolecules. These methods may also be used to determine which constituents of the PARS data vector provide optimal identification of specific tissue features of interest. This approach may in turn be used to reduce PARS data volumes, while retaining as much detail of the underlying composition as possible.
[00240] One advantage of the PARS data vector representation is that the signals may be processed in any vector space (e.g., polar, Euclidean, etc.). This can leverage many different vector processing methods. For example, the relative presence of a biomolecule at a given pixel may be calculated by projecting that pixel's PARS vector onto the ground truth vector for the target biomolecule. In Euclidean space, this operation is performed by taking the dot product of the ground truth PARS data vector and the pixel's PARS data vector. This method may be optimal for use in hardware accelerated processing, such as CUDA, or graphics card-based processing.
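A minimal sketch of this projection, assuming the per-pixel PARS data vectors are stored as an (H, W, n) array and the ground-truth fingerprint as an (n,) vector; the names and the normalization step are illustrative.

```python
import numpy as np

def biomolecule_presence(pixel_vectors, fingerprint):
    # pixel_vectors: (H, W, n) array of per-pixel PARS data vectors.
    # fingerprint:   (n,) ground-truth PARS data vector ("signature") for the
    #                target biomolecule.
    unit = fingerprint / (np.linalg.norm(fingerprint) + 1e-12)
    # Scalar projection of each pixel vector onto the unit ground-truth vector.
    return pixel_vectors @ unit  # (H, W) relative-presence map
```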
Functional Extraction (from radiative, non-radiative and scattering)
[00241] As previously explained with respect to QER, properties such as the thermal diffusivity, conductivity, and speed of sound may dictate the PARS relaxation time. Features related to temperature, speed of sound, and molecular information may be extracted from time-domain signals. As an example, two targets may have the same or similar optical absorption but slightly different other characteristics, such as a different speed of sound, which may result in a different decay, evolution, and/or shape of the signals. The decay, evolution, and/or shape of the signals may be used to determine or add novel molecular information to PARS images.
[00242] Various optical and mechanical properties may cause these differences in signal shape. For example, the rate at which the signal returns to the background scattering level may be determined by the local thermal diffusivity. As a result, regions with, for example, higher thermal diffusivity may feature shorter signal lengths as opposed to regions with lower thermal diffusivity. This may be used to differentiate between cell nuclei and surrounding regions with similar optical absorption. Likewise, the signal lifetime may also be affected by the local speed of sound. One example may be for use in differentiating between two different metals. Aluminum and copper will feature different thermal diffusivity and speed of sound facilitating multiplexing by solely measuring signal lifetime. FIG. 31 exemplifies two signals with different lifetimes.
Post-Imaging Correction
[00243] Referring to FIG. 32, by acquiring two (or more) unique absorption-based measurements (radiative & non-radiative), local variations in these acquisitions may be used to compensate for excitation pulse energy variations. For example, two acquisitions may be compared for similar local (pixel level) variations which are near- or sub-resolution in spacing. Rapid local variations are unlikely to be caused by spatial variations in the sample, as the system is not expected to provide that level of spatial discrimination. As such, similar variations may be interpreted as similar reconstruction errors between the two visualizations. This interpretation can then be used to provide post-imaging intensity correction, providing additional qualitative recovery. Although FIG. 32 shows an example of autofluorescence-based compensation, aspects disclosed herein are not limited to autofluorescence and may use other absorption-based measurements.
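One possible way to realize such a correction is sketched below, under the simplifying assumptions that the shared fluctuation is multiplicative, that a local median filter approximates the sample-level baseline, and that the geometric mean of the two channels' residual fluctuations estimates the common (pulse-energy) component; these modelling choices are assumptions, not the correction prescribed by this disclosure.

```python
import numpy as np
from scipy.ndimage import median_filter

def pulse_energy_correction(chan_a, chan_b, size=5, eps=1e-12):
    # Rapid, near-resolution local fluctuations shared by two absorption-based
    # channels are attributed to excitation pulse-energy variation rather than
    # to the sample, and are divided out of both channels.
    fluct_a = chan_a / (median_filter(chan_a, size=size) + eps)
    fluct_b = chan_b / (median_filter(chan_b, size=size) + eps)
    shared = np.sqrt(np.clip(fluct_a * fluct_b, eps, None))  # common component
    return chan_a / shared, chan_b / shared
```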
Chirped-pulse PARS Acquisitions
[00244] Referring to FIG. 33, given that PARS acquisitions are normally performed by using single photodetector elements to capture time-varying sample responses, realistic bandwidth and noise limitations in such devices may provide significant barriers towards high speed. One potential solution to this may be streak detection of PARS signals. Streak detection involves spatially separating various time-components across several detectors, such as those in a line-scan or standard camera, which could be accomplished in several ways.
[00245] For example, a chirped-pulse (a pulse with varying wavelength along the length of the pulse) may be used for detection, and the various wavelength components, which may now encode time information, may be spatially separated using one or more diffractive or dispersive elements such as prisms or gratings. This process may provide significant improvements in time-resolving capabilities while maintaining high signal fidelity by spreading the detection over a substantial number of detectors. Such an architecture would have clear applications, such as combination with a line-scanning architecture where detection is made over a large array such as a camera, where the two spatial coordinates of the camera now encode one spatial dimension and one temporal dimension from the sample. Other methods of streaking the time-axis across a sensor array could also be envisioned, such as the use of a high speed optical scanner.
Time Domain PARS Acquisition from Integrating Photodetector Units
[00246] Many imaging sensors have a minimum integration time, which may be unable to capture the nanosecond-scale modulations in the PARS signal evolution. This can be limiting due to the potentially rich time domain information provided in the PARS signals. A general process leveraging a rolling shutter / trigger sequence / delayed binning which would capture modulations within the integration time of these photosensors is described herein.
[00247] In this PARS acquisition regime, the backscattered detection light carrying the PARS modulation may be distributed across an arrangement of integrating photo-detecting units. At the start of an acquisition, a tunable delay may be introduced between the integration start time of each photo-detecting unit (e.g., by using a rolling shutter, predetermined trigger sequence, delayed binning, and/or capturing differently timed sections of the recovered signals). If the delay time is shorter than the photo-detecting unit integration time, it is then possible to reconstruct a signal with a time resolution defined by the imposed delay. For example, PARS time domain information can be extracted by taking the derivative of these time-spaced integration windows and/or by analyzing their common regions when plotted. A visual depiction of this acquisition method is shown in FIG. 34. For example, instead of a high-sample-rate photodetector, it is possible to resolve a time domain signal leveraging a CCD/CMOS camera sensor. In this case, the rows of the CCD/CMOS camera are the photo-detecting units which capture the signal in a rolling shutter fashion. With the imposed delay between individual photodetector lines, a PARS time domain signal can be constructed with a time resolution finer than that of a single integrating sensor.
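A simplified sketch of this reconstruction is given below, under the assumption that each photo-detecting unit integrates from its delayed start time to the end of the transient, so that differencing consecutive integration values recovers the signal at the resolution of the imposed delay; the synthetic transient and all names are illustrative.

```python
import numpy as np

def simulate_rolling_integrations(signal, dt_samples):
    # Each integrating unit is assumed to start dt_samples later than the
    # previous one and to integrate until the end of the transient.
    starts = np.arange(0, len(signal), dt_samples)
    return np.array([signal[s:].sum() for s in starts])

def reconstruct_td(integrations, dt_samples):
    # Differencing consecutive, time-shifted integration windows recovers the
    # signal with a time resolution set by the imposed delay.
    return -np.diff(integrations) / dt_samples

# Illustrative usage on a synthetic nanosecond-scale relaxation.
t = np.linspace(0, 200e-9, 2000)
td = np.exp(-t / 40e-9) * np.sin(2 * np.pi * 30e6 * t)
coarse = reconstruct_td(simulate_rolling_integrations(td, dt_samples=10), 10)
```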
Data Compression
[00248] Referring to FIG. 35, data may be compressed using digital and/or analog techniques. For example, with the K-means approach, raw time-domain signals may be appropriately represented by their respective K-means weights. If, for example, three such prototypes were in use on a particular dataset, rather than storing full time domains (~200+ samples), the time-axis may be well compressed to simply three values or floats. Such extracted features may similarly be used in lieu of full, non-compressed time domains for the purposes of decreased system RAM usage, reduced data bandwidth requirements, reduced system storage loads, etc.
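A brief sketch of this compression, reusing a matrix F of K learned prototype shapes (e.g., the centroids from the clustering sketch above); the names are illustrative.

```python
import numpy as np

def compress_td(signals, F):
    # signals: (N, n) raw TD traces; F: (n, K) learned prototype shapes.
    # Each ~200-sample trace is replaced by K floats (its prototype weights).
    return signals @ np.linalg.pinv(F).T  # (N, K)

def decompress_td(weights, F):
    # Approximate reconstruction of the traces, if needed downstream.
    return weights @ F.T  # (N, n)
```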
Fast Acquisition
[00249] Acquiring at higher interrogation rates may necessitate more elaborate acquisition processes. A variety of issues may arise while interrogating the sample at higher acquisition rates including logistical movement of the interrogation spot about the sample and higher frequency optical scattering signals. Fast lateral motion of the interrogation spot about the sample may be performed through hybrid scanning approaches combining both fast optical scanning methods such as resonant scanners and polygon scanners alongside bulk scanning approaches such as mechanical scanning stages. Such methods in other optical microscopy approaches have facilitated interrogation rates in the 10s of MHz and may provide similar benefits to PARS modalities.
[00250] However, such fast motion of the interrogation spot about the sample may also induce additional undesired scattering frequency content which may confound time-domain signal processing of the collected PARS signals. As such, as shown in FIG. 36, it may be beneficial to operate the detection focal spot on the sample at a larger size relative to the excitation spot such that the excitation spot may be scanned about a relatively stationary, or slower moving detection spot reducing the effects of rapid optical scanning of the detection.
Data Colorizing
[00251] Referring to FIG. 37, techniques and methods disclosed herein may allow a direct construction of a colorized H&E simulated image, bypassing a grayscale or scalar-amplitude based reconstruction. The colors used may emulate those traditionally used in H&E stains, such as various shades of pink, purple, and/or blue. However, aspects disclosed herein are not limited to pink, purple, and/or blue colors, and systems and processors may be configured to use other colors. For example, red, green, and blue color channels may be used to represent three extracted K-means prototypes.
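A minimal sketch of the last example, mapping three extracted prototype (feature) images to the red, green and blue channels; the percentile scaling is an illustrative normalization choice rather than a prescribed step.

```python
import numpy as np

def prototypes_to_rgb(feature_images):
    # feature_images: (H, W, 3) amplitudes of three extracted prototypes,
    # mapped to the red, green and blue channels after percentile scaling.
    rgb = np.empty_like(feature_images, dtype=float)
    for c in range(3):
        channel = feature_images[..., c]
        lo, hi = np.percentile(channel, (1, 99))
        rgb[..., c] = np.clip((channel - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return rgb
```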
Augmented Reality Interface
[00252] Upon completing processing of data visualizations or images, these visualizations may be displayed in combination with and/or overlaid with other visualizations on a user interface screen. For example, a bright field image of the sample may form the background of the presented PARS visualizations. Such augmentations may be used to help maintain orientation between the presented visualizations and the original sample.
Machine Learning Processing of PARS Signals
[00253] FIGs. 38A and 38B show two example architectures 3800, 3850 for generating one or more inferences regarding a sample. The architectures 3800, 3850 may include a PARS system 3801, which may include one or more of the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems. The PARS system 3801 may be a PARS system from FIG. 5 described above, for example.

[00254] The PARS system 3801 detects generated signals in the detection beam(s) returning from a given sample. These perturbations may include but are not limited to changes in intensity, polarization, frequency, phase, absorption, nonlinear scattering, and nonlinear absorption, and could be brought on by a variety of factors such as pressure, thermal effects, etc.
[00255] The sample, which may be an unstained sample, may be an in vivo or an in situ sample. For example, it may be tissue underneath the skin of a patient. For another example, it may be tissue on a glass slide.
[00256] In some embodiments, the PARS system 3801 , 3901 may operate by capturing nanosecond-scale (or picosecond scale) optical perturbations generated by photoacoustic pressures or photothermal temperature signals. These time-domain (TD) modulations are usually projected by amplitude to determine absorption magnitude. A single characteristic intensity value may be extracted from each TD signal to visualize the total absorption magnitude at each point. For example, TD amplitude, computed as the difference between the maximum and minimum of the TD signal, is commonly used to represent the absorption magnitude.
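A minimal sketch of this amplitude projection, assuming the TD signals for a frame are stored as an (N, n) array; the names are illustrative.

```python
import numpy as np

def td_amplitude_image(td_signals, image_shape):
    # Peak-to-peak amplitude (maximum minus minimum) of each TD trace, used as
    # its absorption magnitude and reshaped into an image.
    amplitude = td_signals.max(axis=1) - td_signals.min(axis=1)
    return amplitude.reshape(image_shape)
```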
[00257] In some embodiments, the PARS system 3801 , 3901 may operate by capturing optical perturbations generated by thermal pressure perturbations, in addition to or as alternative of the optical perturbations generated by photoacoustic pressures and photothermal temperature signals.
[00258] Signals detected by the PARS system 3801 , 3901 may include, for example, absorption spectra signals, radiative signals, non-radiative signals, scattering signals, or a combination of any of the above mentioned signals.
[00259] Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a given sample. The sample absorbs energy, i.e., photons, from the radiating field. The intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. For absorption spectra signals, a number of contrasts can be obtained across a broad spectrum of wavelengths to characterize a biomolecule's response across an excitation range (e.g., 190 nm to 20 µm).
[00260] As a non-exhaustive list of examples, below is a list of various types of signals that may be processed by the architecture 3800, 3850, 3900: a. radiative:
• signal amplitude and energy
• emission spectra
• lifetime/decay rate
■ affected by: conductivity, viscosity, temperature, polarity;
b. non-radiative:
• signal amplitude and energy
• non-radiative signal lifetime (or decay rate)
■ affected by material properties such as: speed of sound, density, compressibility, shear modulus, pressure, stiffness, bulk modulus, viscoelasticity, thermal diffusivity, heat capacity, conductivity, viscosity, absorber size and shape, temperature
• rise time
■ affected by: conductivity, viscosity, temperature, polarity
• phase shift
• polarization shift;
c. scattering:
• amplitude
• polarization; and
d. combinations of any of the above (a brief sketch of two such combinations follows this list), such as, for example:
• total energy absorption = non-radiative + radiative signal
• quantum efficiency ratio = (non-radiative - radiative) / total absorption
• lifetime relaxation ratio
• total relaxation time (radiative and non-radiative)
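A brief sketch of two of the combinations listed above (total energy absorption and quantum efficiency ratio), assuming co-registered non-radiative and radiative amplitude images; the small epsilon guard is an illustrative choice.

```python
import numpy as np

def combined_contrasts(nr, r, eps=1e-12):
    # nr, r: co-registered non-radiative and radiative amplitude images.
    total_absorption = nr + r                  # total energy absorption
    qer = (nr - r) / (total_absorption + eps)  # quantum efficiency ratio
    return total_absorption, qer
```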
[00261] The various signals from the PARS system 3801 , 3901 may be processed to extract one or more PARS features 3804, 3904. For example, one or more PARS features 3804, 3904 may represent one or more contrasts. One or more PARS features 3804, 3904 may include one or more PARS diagnostic vectors.
[00262] In some embodiments, the one or more PARS features 3804, 3904 may represent one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
[00263] In some embodiments, a feature vector for a machine learning architecture (e.g., the architecture 3800, 3850, 3900) for making one or more determinations to help with a diagnosis may be constructed to include one or more of: PARS features 3804, 3904, features extracted from time-domain (TD) modulations such as absorption magnitude and intensity value, TD post-excitation average, radiative channel, scattering channel, H-stain, E-stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain.
[00264] In some embodiments, the extracted PARS features 3804, 3904 may be used to segment nuclei and use them for quantification, which may be required for making a diagnosis. The quantification may be a cancer quantification. The quantification may include, for example, quantification of nucleolus, nuclei, shape, size, and circularity.
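A minimal sketch of such quantification, assuming a binary nuclei segmentation mask is already available (e.g., thresholded from a non-radiative channel) and using scikit-image region properties; the circularity definition 4*pi*area/perimeter^2 and the names are illustrative.

```python
import numpy as np
from skimage import measure

def quantify_nuclei(nuclei_mask):
    # nuclei_mask: binary segmentation of nuclei. Returns per-nucleus size and
    # circularity (1.0 for a perfect circle).
    stats = []
    for region in measure.regionprops(measure.label(nuclei_mask)):
        if region.perimeter == 0:
            continue
        circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
        stats.append({"area": region.area, "circularity": circularity})
    return stats
```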
[00265] FIG. 44 shows examples of contrasts 4400 extracted from PARS signals in tissue slides. The examples include non-radiative, radiative, and scattering contrasts. FIG. 45 shows examples of combined contrasts 4500, formed by combining PARS signals into unique contrasts.
[00266] The processing of said signals may include, by the PARS system 3801, 3901: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
[00267] In some embodiments, the excitation beam being focused at or below the sample may include being at or below a surface of the sample.
[00268] In some embodiments, a system, or multiple systems (e.g., photothermal, autofluorescence, etc.), other than the PARS system 3801 may be used to generate the range of absorption spectra signals, radiative signals, non-radiative signals, attenuation signals, scattering signals, or a combination of any of the above mentioned signals that are used to generate the one or more features 3804. These systems may include conventional imaging systems or imaging modalities.
[00269] In some embodiments, the extracted PARS features 3804, 3904 may include features informative of an attenuation contrast provided by the at least one of the plurality of signals. For instance, attenuation can be the reduction of the intensity of the excitation beam generated by the PARS system 3801, 3901 as it traverses matter (e.g., tissue). For instance, the contrast between tissues can be generated by the difference in beam attenuation, which may be influenced by the density and atomic number of the respective tissues.
[00270] A machine learning model 3802 shown in FIG. 38A may be trained and deployed to generate simulated stained images 3806 such as H&E-like stained images. The machine learning model 3802 may also be trained and deployed to generate one or more inferences 3808 that can be displayed at a user interface 4000 of a user application 3825, which may be installed at a user device. A database 3815 may be used to store the one or more simulated stained images 3806, and to transmit one or more simulated stained images 3806 to the user application 3825 for display or further processing.
[00271] The simulated stained images 3806 may include images stained with, for example, at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain.
[00272] In some embodiments, the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain, Immunohistochemistry (IHC), histochemical stain, and In-Situ Hybridization (ISH).
[00273] In some embodiments, the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue. For example, a preserved tissue sample may include a sample preserved using formalin or fixed using alcohol fixatives.
[00274] Referring now to FIG. 38B, an image generator 3812 may be used to generate simulated stained images 3806 such as H&E-like stained images. A machine learning model 3822 may be trained and deployed to generate one or more inferences 3808 that can be displayed at a user interface 4000 of a user application 3825, which may be installed at a user device. A database 3815 may be used to store the one or more simulated stained images 3806, and to transmit one or more simulated stained images 3806 to the user application 3825 for display or further processing.
[00275] Referring now to FIG. 39, which shows yet another example machine learning architecture 3900 for generating one or more inferences 3908 based on extracted features 3904 from a sample. The extracted features, which may be PARS features 3904, may be generated in a similar manner as the PARS features 3804 from FIGs. 38A and 38B. For example, the features 3904 may be extracted by exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample. The excitation beam and interrogation beam may be generated by a PARS system 3901, which is a similar system to PARS system 3801.
[00276] The extracted features 3904 may be processed by an image generator 3912, which may be similar to the image generator 3812 from FIG. 38B, to generate (or convert the features to) one or more simulated stained images 106 (e.g., H&E-like stained images). A machine learning model or architecture 3922, which may be similar to the machine learning model 3822 from FIG. 38B, may be used to generate one or more inferences 3908 based on the one or more simulated stained images 106. The inferences may be sent to a user application 3925 for display or further processing.
[00277] The machine learning model 3802 and image generator 3812, 3912 are configured to generate one or more simulated stained images 3806, 106 based on the one or more extracted PARS features 3804, 3904, which are extracted based on one or more PARS signals. For example, the one or more PARS signals may include radiative and non-radiative signals. The non-radiative signals may be processed to generate features representative of amplitude or absorption contrast analogous to that provided by hematoxylin staining, while the radiative signals may be processed to generate features representative of amplitude or absorption contrast analogous to that provided by eosin staining. Therefore, the machine learning model 3802 and image generator 3812, 3912 are trained and configured to generate H&E-like images, as one type of simulated stained images 3806, 106 based on the radiative and non-radiative signals.
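As a simple algorithmic point of comparison to the learned colorization described above, the following sketch maps a non-radiative channel to a hematoxylin-like tint and a radiative channel to an eosin-like tint using a Beer-Lambert-style model; the optical-density colour vectors and scaling factors are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

# Illustrative optical-density colour vectors (R, G, B) for hematoxylin-like and
# eosin-like tints; the numeric values are assumptions chosen for this sketch.
H_OD = np.array([0.65, 0.70, 0.29])
E_OD = np.array([0.07, 0.99, 0.11])

def pseudo_he(nr, r, k_h=1.5, k_e=1.5):
    # nr, r: normalized (H, W) non-radiative and radiative amplitude images in
    # [0, 1]; nr drives the hematoxylin-like tint, r the eosin-like tint.
    od = k_h * nr[..., None] * H_OD + k_e * r[..., None] * E_OD
    return np.exp(-od)  # (H, W, 3) image on a white background
```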
[00278] The radiative and non-radiative signals may be obtained from a PARS system 3801, 3901. In some embodiments, the radiative and non-radiative signals may be obtained from a different system or imaging modality. For example, non-radiative signals may be obtained via photothermal microscopy and photoacoustic microscopy, while radiative signals may be obtained via multi- or single-wavelength autofluorescence microscopy, stimulated / spontaneous Raman spectroscopy, or autofluorescence lifetime microscopy.
[00279] In some embodiments, the non-radiative signals include at least one of: a photothermal signal and a photoacoustic signal.

[00280] In some embodiments, the radiative signals include one or more autofluorescence signals.
[00281] The image generator 3812, 3912 may include a stain selector 3914 to select one or more stains applicable to an image (e.g., PARS black and white image) generated based on the PARS features 3904.
[00282] The image generator 3812, 3912 may include a colorization machine learning architecture, such as a generative adversarial network (GAN), which may include, for example, a cycle-consistent generative adversarial network (CycleGAN).
[00283] In some embodiments, the image generator 3812, 3912, or the image generator in the machine learning model 3802, may be implemented using one of: a CycleGAN, a Pix2Pix model (a type of conditional GAN), a Stable Diffusion model, a U-Net model, an encoder-decoder model, a convolutional neural network, a regional convolutional network, or the like.
[00284] Image segmentation is a process to extract a region of interest (ROI) through a semiautomatic or automatic process. It divides an image into areas based on a specified description, such as segmenting body organs/tissues in the medical applications for border detection, tumor detection/segmentation, and mass detection.
[00285] Image registration is a process to align two images from the two domains (TA-PARS and H&E) through a semiautomatic or automatic process. The TA-PARS image may be an input image, and the H&E image may be a reference image. The system, via for example the image generator 3812, 3912, may be configured to select points of interest in the two images (e.g., input image and reference image), associate each point of interest in the input image to its corresponding point in the reference image, and transform at least one of the input image and the reference image so that both images are aligned. In some cases, both images are transformed and aligned.
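A minimal sketch of the point-based alignment step, assuming matched points of interest have already been selected in the input (TA-PARS) and reference (H&E) images and that an affine transform is sufficient; the least-squares formulation and names are illustrative.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    # src_pts, dst_pts: (N, 2) matched points in the input and reference
    # images, N >= 3. Solves A @ M ~= dst in the least-squares sense for a
    # 3x2 affine matrix M.
    A = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```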
[00286] The image generator 3812, 3912 may include one or more machine learning techniques for image segmentation, including for example: (1) traditional methods: threshold segmentation, region growth segmentation; (2) classification and clustering methods: K-nearest neighbors (KNN), kernel principal component analysis (kPCA), fuzzy C-means (FCM), Markov random field model (MRF), dataset-based guided segmentation, expectation maximization (EM), Bayesian methods, support vector machines (SVM), artificial neural networks (ANNs), random forest methods, and convolutional neural networks (CNNs); and (3) deformation model methods: parametric deformation models, geometric deformation models.

[00287] The stains selectable may include, for example, at least one of (or any combination of):
Jones’ Stain (MPAS) - typically for kidney
PAS and GMS - for fungi infections (stains chitin)
Toluidine Blue
Congo Red - identification of amyloid material
Masson's Trichrome Stain
Lillie's Trichrome
Verhoeff Stain - visualize elastic tissue (blood vessels, skin, bladder, etc.).
[00288] With the image generator 3812, 3912, one or more stained images 3806, 106 may be generated from the same sample. Furthermore, additional stains can be generated by generating and combining separate stains. The image generator 3812, 3912 may be configured to virtually generate the constituent stains.
[00289] For example, Masson’s Trichrome Stain is a three-color staining procedure including:
(1) Hematoxylin - for nuclei staining;
(2) Acid dyes (e.g., Biebrich scarlet and acid fuchsin) - for cytoplasm; and
(3) Aniline blue - for collagen.
[00290] In some embodiments, image generator 3812, 3912, or an image generator within the machine learning model 3802 can be configured to apply each constituent stain of any particular stain, by applying the respective stains to an image (e.g., a black and white PARS image) generated based on the PARS features 3804, 3904.
[00291] In some embodiments, the image generator 3812, 3912, or an image generator within the machine learning model 3802, can be trained to generate, at inference time, an image showing a tissue map overlay by processing the PARS features. The overlay can identify at least one salient feature, the at least one salient feature comprising a biomarker, cancer, cancer grade, parasite, toxicity, and/or inflammation. The overlay can suppress non-salient features.

[00292] In addition, a user, through the user application 3825, 3925, can switch between different stained images 3806, 106. This is not always possible and is rarely practical after chemically labeling the sample in the traditional manner.
[00293] The machine learning architectures 3850, 3800, 3900 may provide the same contrast as the constituent stains. The machine learning architectures 3850, 3800, 3900 can mimic these individual stains and mix/match them together digitally to create different combination stains. Because stains are combined digitally (using intrinsic contrast) instead of chemically, new stain combinations may be feasible based on the given sample.
[00294] In some embodiments, the machine learning architectures 3850, 3800, 3900 may generate stains or stain combinations, which may include stains or stain combinations that have not been generated previously, or that cannot be achieved via conventional chemical staining methods. For example, the machine learning architectures 3850, 3800, 3900 may generate molecular stains, which may not be possible with traditional staining methods.
[00295] Inferences 3808, 3908 generated by the architectures 3800, 3850, 3900 may include, without limitation: a prediction of a biomarker; a prediction of one or more of survival time, drug response, patient-level phenotype/molecular characteristics, mutational burden, tumor molecular characteristics, transcriptomic features, protein expression features, and patient clinical outcomes; a resistance index associated with a tumor and surrounding tissue based on one or more PARS signals; a determination of the best tissue sample in a collection of samples for testing; a verification that a chosen tissue sample contains an adequate quantity of tumor tissue for analysis; a determination, among a plurality of PARS signals, of which signals are suspicious or non-suspicious, and generation of a report based on identification of suspicious signals; identification of locations of biomarkers in tumor tissue and the surrounding margin region; prediction of a treatment outcome, a resistance prediction, or a treatment recommendation; and a cancer qualification and a cancer quantification for a specimen.
[00296] Inferences 3808, 3908 generated by the architectures 3800, 3850, 3900 may further include, without limitation, at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
[00297] Additional example embodiments of various machine learning architectures that may be implemented for processing one or more output (e.g., PARS features and signals) from a PARS system are elaborated further below in connection with FIGs. 64 to 79.
[00298] FIG. 48 shows examples of different tissue types imaged and identified using the machine learning architectures 3850, 3800, 3900, including a skin tissue 4800 and breast tissue 4850. The inference 3808, 3908 may include, for example, the determination that the image on the left contains skin tissue, and the image on the right contains breast tissue.
[00299] FIG. 49 shows unique keratin pearl features identified and isolated within an example simulated stained image 4900. The inference 3808, 3908 may include, for example, an identification of areas showing keratin pearls.
[00300] FIG. 50 shows biomarkers of localized inflammation and malignancy, identified and encircled based on an example simulated stained image 5000 including label-free visualizations. The inference 3808, 3908 may include, for example, an identification of an area likely belonging to cancer and an area likely belonging to lymphocytes. The extracted PARS features 3804, 3904 may be used by the system to label unique biomarkers, such as, for example, red blood cells, tissue types, melanin, collagen, different proteins, and so on.
[00301] For example, in some embodiments, the image generator 3812, 3912, or an image generator within the machine learning model 3802 can be trained to generate, at inference time, an image showing tissue map overlay by processing the extracted PARS features 3804, 3904. The overlay can identify at least one salient feature, the at least one salient feature may be a biomarker location and a biomarker value for an identified tissue region on the image.
[00302] FIG. 51 shows different cell types and tissue regions, identified and delineated within an example simulated stained image 5100. The inference 3808, 3908 may include, for example, an identification of an area likely belonging to one of: hair follicle, sebaceous gland, and epidermis layers.
[00303] FIG. 52 shows an example of an abnormal tissue region, identified and delineated from an example simulated stained image 5200. The inference 3808, 3908 may include, for example, an identification of an area likely belonging to abnormal tissue.
[00304] In some embodiments, the user application 3825, 3925 may, at execution time, render a user interface (UI) 4000 as shown in FIG. 40. The UI 4000 may include a first area 2510 showing features 3804, 3904 from the PARS system 3801, 3901, a second area 2512 showing a first simulated stained image, and a third area 2516 showing a second simulated stained image. One or more inferences 3808, 3908 may be displayed within area 2517.

[00305] One or more stain selectors 2520, 2540 may be provided to the user, each with a respective scroll bar 2528, 2538 for zooming in or out of the rendered simulated stained images shown in areas 2515, 2516. For example, moving the scroll button within scroll bar 2528 for the first stain selector 2520 may cause the first stained image in area 2515 to zoom in or out. Similarly, moving the scroll button within scroll bar 2538 for the second stain selector 2540 may cause the second stained image in area 2516 to zoom in or out. Once a user is satisfied with the stained image in area 2515 or 2516, he or she may proceed to finalize the simulated stained images by clicking the submit button. Alternatively, the user may cancel the rendered stains and go back to a previous user interface (not shown) for selecting other applicable stains provided by stain selector 3914 of the image generator 3912.
[00306] In some embodiments, the one or more inferences 3808, 3908 displayed within area 2517 may include clinically significant determinations generated by the machine learning models 3802, 3822, 3922. The UI 4000 can further include visualizations to assist a user (e.g., a clinician), such as a report generated by the machine learning models 3802, 3822, 3922. The visualization or report can be interactive. The visualization or report can include a visual overlay that highlights salient features while suppressing or hiding non-salient features. The visualization may be provided in real-time to assist surgeons, for example, by showing a margin of tumor tissue.
[00307] In some embodiments, in order to generate one or more inferences 3808, 3908, the plurality of features 3804, 3904 may be supplemented with at least one of features informative of image data obtained from complementary modalities including for example, at least one of: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI).
[00308] In some embodiments, when the plurality of features 3804, 3904 are supplemented with at least one of features informative of image data obtained from complementary modalities, the image data may further include photoactive labels for contrasting or highlighting specific regions in the images.
[00310] In some embodiments, the plurality of features 3804, 3904 may be supplemented with at least one of features informative of one or more of the following information:
• Specific Patient Information:
o Age, Sex, Environmental factors (Job, Location)
o Genomic Expression
• Clinical History:
o Risk factors (previous conditions, medical history, family history, etc.)
o Previous diagnostic reports
o Results of ancillary testing (e.g., blood tests, cytological screening)
o Previous PARS images, or H&E images, etc.
o Images from complementary modalities (e.g., PET, CT, MRI), images of patient
• Automatic quality rating of data sources (e.g., H&E staining quality, PARS scan quality)
[00311] In some embodiments, a user application, which may be the user application 3825, 3925 or a separate user application of the architecture in FIG. 38A, 38B or 39, may be configured to render a user interface 5900 shown in FIG. 59 to select and analyze one or more images generated by the architecture in FIG. 38A, 38B or 39. The UI 5900 may include a first area 5920 showing a plurality of procedures 5930 and a second area 5950 showing corresponding procedure information for one of the plurality of procedures 5930. A user input may be received by the user application to select one of the plurality of procedures 5930. Each procedure may be associated with a set of corresponding procedure information and one or more corresponding images.
[00312] The image viewing area in UI 5900 may include several subcomponents or subsections which are used to navigate, visualize, or manipulate collected data. The UI 5900 may group procedures by date or relevancy. For example, UI 5900 may group procedures by date, into one of the listed tabs: “In Progress”, “Recent” and “2 months and older”.
[00313] Each tab in the UI 5900 may be configured to display one or more data sets visualized in a manner deemed applicable and/or appropriate to the user. For example, a user (e.g., a clinician or physician) may choose to visualize collected PARS data in a manner that emulates conventional pathological staining procedures such as Hematoxylin and Eosin or Toluidine Blue to highlight specific structures. Such virtual staining may be presented, combined, and overlapped as it might appear through conventional staining and light microscopy techniques.
[00314] In some embodiments, the user application may be configured to likewise display each stain (e.g., Hematoxylin or Eosin from H&E) separately to further elucidate salient morphology. Similarly, other virtual stains, collected channels, or layers of the datasets may be displayed on their own or in combination based on user preference. Based on user settings or user preferences, a single image layer may occupy the entire viewing area to show image details with clarity while maintaining a wide field of view, providing additional context to the user. Likewise, two or more such images or image layers can be arranged in horizontal and/or vertical splits with their own separate viewing areas. The order and orientation of the images may be set or modified by the user through graphical user interface elements.
[00315] Referring now to FIGs. 60 and 61, which show example user interfaces 6000, 6100 for displaying one or more images generated by the architecture in FIG. 38A, 38B or 39, displayed visualizations or images may be represented as a combination of various data layers. For example, individual stains may be presented in an overlapped fashion, or as isolated individual layers. For instance, FIG. 60 shows a UI 6000 displaying a virtual H&E image 6020 of a tissue, as well as a single non-radiative image 6050 of the same tissue.
[00316] Visibility of, and combinations of, different image layers may be toggled, managed and manipulated via GUI elements located, for example, on the top, left-hand side, or right-hand side of the screen area relative to the presented image frames. For example, graphical user interface elements such as dropdown menus 6010, 6030 can receive user input, and the UI 6000 can display the selected image, layer or stain based on user input received via the dropdown menu 6010, 6030.
[00317] In some embodiments, image manipulation processes such as scanning, moving, zooming in/out, locating, contrast adjustment, color adjustment, opacity, etc., may be performed on the visualizations or images. For multiple adjacent windows showing different images or layers of the same tissue, such as the virtual H&E image 6020 and the non-radiative image 6050, it may be desirable to lock them to the same field of view. A graphical user interface element such as a checkbox for “link image” located at the bottom of the UI 6000 may be clicked by the user to lock the virtual H&E image 6020 and the non-radiative image 6050, such that moving or zooming one of the locked images (e.g., virtual H&E image 6020) will automatically cause the other locked image (e.g., non-radiative image 6050) to have the same field of view and the same display ratio.

[00318] When locked (or linked), a transform to a region of a locked image may cause the user application to highlight the same region across multiple locked images. This may provide a mechanism for rapidly assessing constituent chromophore contributions, helping to highlight regions of interest or to aid with raw data imaging artifacts in pathological analysis. For example, displaying an H&E stain of a tissue region in a first display area and a separate Toluidine Blue stain of the same tissue region in a second display area located next to the first display area can highlight the unique contrast of each stain in the same region to aid diagnosis. Such image data visualization tools may facilitate easy and quick comparison between datasets collected based on a given sample, an adjacent sample, another sample from the same patient taken from a different location and/or a different time, or comparisons with other patients or other imaging sessions.
[00319] Acquired image contrasts can be displayed or processed by the system like image layers. Layers can be individually edited, toggled as visible (or invisible), overlaid (as an overlay), merged, or combined to yield additional contrasts that may provide more interpretability. For example, FIG. 61 shows a user interface 6100 showing a first image layer 6150 (e.g. a virtual H&E image layer), and two additional image layers, “Scattering (405 nm)” and “Radiative (266 nm)”, that can be selected by a user through a GUI (dropdown menu) 6130. Once the user has clicked on the “Overlay with image” option and selected a specific additional image layer through GUI 6130 to overlay the first image layer 6150, the UI 6100 may proceed to show the first image layer 6150 with an overlay of the selected additional image layer, which may be, in this example, “Scattering (405 nm)” or “Radiative (266 nm)”.
[00320] Layer combinations can be grouped and modified as a group. Grayscale layers or group layers can be colorized to match certain individual stains or combination stains. For example, the PARS radiative absorption layer can be modified and colorized to emulate eosin stain. The PARS non-radiative layer can be colorized to emulate hematoxylin stains. These layers can be viewed separately or combined as a group layer where the stains are overlaid to emulate a combined hematoxylin and eosin stain.
[00321] Larger collections of datasets, single patient acquisition sessions, multi-patient acquisition sessions, or other projects which account for one or more datasets intended to be grouped together in a collection may be presented as projects in a project viewing area of a user interface of the user application. Such a project viewing UI may be positioned as a separate sub-region of the primary viewing area or presented on a separate tab or similar separation. A project UI may facilitate the grouping or collection of similar images which may have been collected in a given imaging session or from within a given imaging project. Such imaging projects may be more easily transferable as opposed to large collections of individual datasets. Grouping and presentation of the constituent datasets may be further organized for user convenience by aspects such as location, date of collection, patient ID number, and so on.
[00322] Collected data may be visualized in a variety of salient forms. For example, any of the collected data channels may be visualized by plotting their respective signal values as grayscale intensity values mapped to their respective locations on a two-dimensional image, which may correspond to their respective locations on the sample. Examples of such collected data channels may include the PARS non-radiative absorption contrast, whose signals may be extracted from collected time domains. When imaging biological tissues, such contrast may highlight regions of high DNA densities such as cell nuclei. Another example of such collected data may include the PARS radiative absorption contrast. Some biological samples such as connective tissues (fibrin, collagen) may be well represented by this contrast. Another example may involve the visualization of linear back-scattered light from the sample highlighting structural morphology. Other similar extractions can be processed, visualized and displayed through a user interface, where various aspects of the time domains are extracted to produce visualizations of similar concepts. In addition, various combinations, products, and ratios of these visualizations may be created to elicit further informative contrast.
[00323] For example, the radiative and non-radiative contrasts may be summed to produce a measure of total regional absorption, whereas their ratios provide information related to the absorption quantum efficiency within the probed region. Such combinations may provide a user with unique information sets over single-component visualizations. Moreover, colorization of such combinations may be created either through algorithmic means or through machine learning models 3802, 3822, 3922 to emulate other colorizations known to the respective user’s field. As an example, while imaging tissue for pathological analysis, it may be useful to color data to replicate the look and contrast of existing staining procedures such as Hematoxylin & Eosin, or Toluidine Blue.
[00324] FIG. 62 shows an example user interface 6200 for scanning and processing one or more images using an imaging device. The UI 6200 may be configured to enable a user to control and operate an imaging device, which may be part of the architecture in FIG. 38A, 38B or 39. UI 6200 includes a scan control interface, which includes a preview area 6250 of an image being scanned. As rows of pixels are scanned or otherwise generated, the preview area 6250 may show the progress of the scan or generation of the digital image.
[00325] An operator of the PARS system 3801 , 3901 can be notified when the scan can be performed safely. In some embodiments, the operator of the PARS system 3801 , 3901 can be prevented from performing the scan if any safety condition is not satisfied. Next to the preview area 6250, multiple icons 6210, 6220, 6230, 6240, 6260 are shown in a second area 6280.
[00326] For example, icon 6210 may indicate that the tissue sample is not properly positioned or installed for scanning, or a pressure is not applied correctly for scanning. Icon 6220 may indicate that the laser used in scanning is not heated to a sufficient level. Icon 6230 may indicate that the laser used in scanning is overheated. Icon 6240 may indicate that the scan enclosure area is not securely closed. Icon 6260 may indicate that all the safety conditions are satisfied and the scan can proceed.
[00327] A progress bar within the second area 6280 may indicate a progress of the scan, and a user may start or stop the scan using the GUI elements located within the second area 6280.
[00328] In some embodiments, a collection of image processing tools may be included in the user application to help the user modify and manipulate visualizations. Such image processing tools may include but are not limited to: modification of brightness-contrast levels, sharpening filters, blurring filters, hue-saturation adjustments, and so on. As a user option, one or more processing steps may be configured as one or more preset options, such that a set of processing steps may be selected by the user to be quickly performed on subsequent data acquisitions.
[00329] In some embodiments, one or more GUI elements of a user interface rendered by the user application may display one or more machine learning results (e.g., inferences 2517) to aid users in segmentation, image optimization, labeling, diagnosing, and so on. The user application may include the following example tools for assisting a user with image analysis: a tool which automatically selects tumor margins; a tool which performs an image search in a PARS or H&E database to provide similar examples (e.g., in terms of structure or diagnosis); a tool which provides an automatic diagnosis to act as quality assurance for a pathologist; a tool which automatically identifies tumor type, treatment management, etc.; and a tool which allows the user to segment salient sub-regions of tissue to highlight cell nuclei, fibrous tissue, melanin, fatty tissues, red blood cells, and so on.
[00330] In some embodiments, an image can be annotated by one or more users (e.g., medical professionals) through a UI rendered by the user application. FIG. 63 shows an example user interface 6300 for displaying an annotated image 6350. User selection(s) can be made via GUI elements in an annotation selection region 6310 to view some or all comments 6320a, 6320b, 6320c, which may be made by different users. A Quick Hide button located at the bottom left corner of the UI 6300 allows the user to hide all comments to view the image without any comments. This annotation UI 6300 can be accessed remotely through an online viewer, such as an online viewer application from a website or a mobile application.
[00331] FIG. 41 shows an example machine learning architecture 4100 that may be used to train the image generator 3812, 3912, or the image generator within the machine learning model 3802. The image generator 3812, 3912 may be, for example, a colorization machine learning model trained using a generative adversarial network (GAN), which may include, for example, a cycle-consistent generative adversarial network (CycleGAN) model.
[00332] The image generator 3812, 3912 may be, for example a colorization machine learning model trained using a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
[00333] A colorization machine learning model may include a neural network. Depicted in FIG. 43, a neural network 4300 may include an input layer, a plurality of hidden layers, and an output layer. The input layer receives input features. The hidden layers map the input layer to the output layer. The output layer provides the prediction (e.g., inference) of the neural network. Each hidden layer may include a plurality of nodes, which may include weights, bias and input from a preceding layer. A weight is the parameter within a neural network that transforms input data within the network's hidden layers.
[00334] In some embodiments, initial weights for a neural network model 4300 within the colorization machine learning model can be transferred from another neural network model (the “donor model”) trained on a large-scale stained H&E image dataset. In this configuration, each weight in one or more initial layers of the neural network model 4300 may be assigned a value equal to a respective value from corresponding one or more initial layers of the donor model trained on the large-scale dataset, instead of being assigned a random value prior to the training of the neural network model 4300.
[00335] In some embodiments, during the training of the neural network model 4300, all layers of the neural network model 4300 are trained and fine-tuned, and weights updated accordingly.
[00336] In some embodiments, during the training of the neural network model 4300, the weights of the one or more initial layers are kept constant (i.e., equal to the weights from the one or more initial layers of the donor model), and throughout the training process only the weights of the subsequent layers (after the initial layers) of the neural network model 4300 are trained or fine-tuned.
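A minimal PyTorch-style sketch of this transfer-and-freeze scheme, assuming both models are nn.Sequential stacks with matching initial layers; the function name and layer count are illustrative.

```python
import torch.nn as nn

def transfer_and_freeze(model: nn.Sequential, donor: nn.Sequential, n_initial: int):
    # Copy the weights of the first n_initial layers from the donor model and
    # keep them constant, so that only the subsequent layers are fine-tuned.
    for i in range(n_initial):
        model[i].load_state_dict(donor[i].state_dict())
        for p in model[i].parameters():
            p.requires_grad = False
```

The optimizer would then typically be constructed over only the trainable parameters, e.g. `filter(lambda p: p.requires_grad, model.parameters())`.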
[00337] The CycleGAN model includes a first GAN having a first generator model 4103 and a first discriminator model 4107, and a second GAN having a second generator model 4113 and a second discriminator model 4117.
[00338] During training, in each training iteration, a true total absorption (TA) image 4101 may be obtained from an existing PARS image database, and sent to a first generator model 4103. The first generator model 4103 may include a neural network configured to generate a simulated stained image 4105 (“fake” stain) based on the TA image 4101. Then a fake TA image 4111 is generated by a second generator model 4113 based on the simulated stained image 4105. A first loss, the cycle consistency loss 4120, may be computed based on comparing the true TA image 4101 and the fake TA image 4111. This loss 4120 is then used to update weights of the first generator model 4103 and the second generator model 4113.
[00339] The simulated stained image 4105 may be processed by a first discriminator model 4107 to generate an output, which may be further processed through a classification matrix 4109 to generate a first discriminator output. The discriminator model 4107 is configured to predict how likely the simulated stained image 4105 is to have come from a target image collection (e.g., a collection of real stains 4115).
[00340] During the same training iteration, a labelled and stained image 4115 is obtained, for example, from an existing stained image database. The labelled and stained image 4115 may be processed by a second discriminator model 4117 to generate an output, which may be further processed through a second classification matrix 4119 to generate a second discriminator output.
[00341] The first and second discriminator output may be used to compute a second loss 4125.
[00342] Based on one or both of the first loss 4120 and the second loss 4125, the processor may update weights of: the first generator model 4103, the second generator model 4113, the first discriminator model 4107 and the second discriminator model 4117.
[00343] The training may stop once the first or second loss, or both losses, have reached a threshold value, or may stop after a pre-determined number of iterations.
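A condensed sketch of one direction of such a training iteration is given below, assuming PyTorch networks for the two generators and the stain discriminator are provided by the caller; the least-squares adversarial term and the weighting factor are illustrative choices rather than the exact losses of the disclosed architecture.

```python
import torch
import torch.nn.functional as F

def cyclegan_step_losses(G_ta2he, G_he2ta, D_he, real_ta, real_he, lam=10.0):
    fake_he = G_ta2he(real_ta)                     # simulated ("fake") stain
    rec_ta = G_he2ta(fake_he)                      # reconstructed ("fake") TA image
    cycle_loss = lam * F.l1_loss(rec_ta, real_ta)  # first loss: cycle consistency
    pred_fake = D_he(fake_he)
    gen_adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))  # generator term
    pred_real = D_he(real_he)
    disc_loss = 0.5 * (F.mse_loss(pred_real, torch.ones_like(pred_real)) +
                       F.mse_loss(pred_fake.detach(), torch.zeros_like(pred_fake)))
    return cycle_loss + gen_adv, disc_loss  # second loss uses both discriminator outputs
```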
[00344] The first generator network 4103, once trained, may be deployed as part of image generator 3812, 3912 or machine learning model 3802, at inference time to generate one or more simulated stained images 3806, 106.
[00345] In some embodiments, a colorization machine learning model may include a one-shot GAN, a type of single-image GAN, to generate images from a training set as small as a single image, which is suitable in applications or settings where the samples are limited, such as in histology.
Obtaining Ground Truth Data for Training and Image Searching
[00346] In some cases, a labelled and stained image 4115 (training data for the machine learning architecture 4100) may be obtained from a traditional chemical staining process, such as spectroscopy-based methods.
[00347] FIG. 42 shows an example process 4200 for preparing one or more training data 4205, 4207 for training the image generator 3812, 3912, or the image generator within the machine learning model 3802. An unstained tissue section 401 may be processed by a PARS system, such as TA-PARS, to generate an unlabeled multichannel image 4205, which may be provided to the machine learning architecture 4100 as a true TA image 4101.
[00348] The unstained tissue section 401 may undergo traditional chemical staining process, and a stained slide 4203 may be obtained and imaged with bright-field microscope to generate a labelled and stained image 4207, which may be provided to the machine learning architecture 4100 as a labelled and stained image 4115.
[00349] FIG. 46 shows two virtually (simulated) stained PARS images, one simulated stained hematoxylin and eosin (H&E) image 4610, and one simulated stained toluidine blue image 4620, both of which may be used as the true TA image 4101 during training for different stained image generation processes.
[00350] FIG. 47A shows an example of an unlabeled PARS virtual H&E image 4700 as generated by a PARS system, which may be used as input to the architecture 4100 in the form of a true TA image 4101. The unlabeled PARS virtual H&E image 4700 is correlated with a historical, labelled stained (H&E) image 4750 in FIG. 47B, which can be provided to the machine learning architecture 4100 as a labelled and stained image 4115.
[00351] For example, for colorization of paraffin-embedded slides, a PARS image of a tissue sample may be generated, and the tissue sample may subsequently be chemically stained with a stain of interest. This generates a one-to-one correspondence dataset for training the colorization machine learning model in architecture 4100.
[00352] For colorization of fresh tissue, a PARS image of the tissue may be captured before processing the tissue through the traditional histopathological workflow. This will produce a correlated section for training the colorization machine learning model in architecture 4100.
[00353] For training the machine learning model 3802, 3822, 3922 to make one or more inferences including diagnostics on virtual histological slides, multiple pathologists can hand label a dataset of tissue slides to identify location, type, grade, etc. of cancer within each tissue slide. This labeled dataset can then be used to train the machine learning model 3802, 3822, 3922 to make proper inference regarding one or more PARS images from the PARS system.
[00354] Assuming that historical virtually labelled images are equivalent to traditional images labelled by pathologists, it may be possible to leverage existing labeled databases to provide training data for diagnostic algorithms.
[00355] Through finding similar labeled structures or images between existing H&E databases and PARS data, a system may be able to automatically label PARS data in order to train the machine learning model 3802, 3822, 3922 to make one or more inferences including diagnostics on the PARS features.
[00356] In some embodiments, a traditional tissue image or an image generated based on PARS signals may contain different structures therein. The machine learning model 3802, 3822, 3922 may receive the traditional tissue image or the image generated based on PARS signals (“input image”) and process the input image to generate a colorized image 5600 that is simultaneously stained or colored with different stains.
[00357] For example, the basal layer of the input image may be stained with T-Blue while the inner connective tissue of the input image may be stained with H&E. For example, based on raw data contained within the input image, a certain region of the tissue may be best highlighted in H&E stain whereas another area may be best highlighted in Masson’s Trichrome. The machine learning model 3802, 3822, 3922 may be trained based on historical data generated by pathologists or other professionals, where the historical data include different structures of tissue images and corresponding stains for each of the different structures.
[00358] FIG. 56 shows an example multi-stain image 5600 that may be generated by a machine learning model 3802, 3822, 3922. The different colored regions may be generated, in some embodiments, by color shifting an H&E image.

[00359] In practice, without virtual staining, simultaneous or sequential use of histochemical, IHC, and FISH agents is not possible on a single tissue section. The labelling process can introduce irreversible structural and chemical changes which render the specimen unacceptable for subsequent analysis. As such, each section must be independently sectioned, mounted, and stained; a technically challenging, expensive, and time-consuming workflow. A trained histotechnologist may spend several hours to prepare a section for testing, with some labeling protocols requiring overnight incubation and steps spaced out across multiple days. Hence, repeating staining or producing additional stains in a stepwise fashion can delay diagnostics and treatment timelines, degrading patient outcomes. Moreover, preparing multiple stained sections can rapidly expend invaluable diagnostic samples, particularly when the diagnostic material is derived from needle core biopsies. This increases the probability that the patient will need to undergo further procedures to collect additional biopsy samples, incurring diagnostic delays and significant patient stress.
[00360] By capturing both absorption fractions simultaneously, various embodiments of the PARS system as described herein are able to recover rich biomolecule-specific contrast, such as quantum efficiency ratio, not afforded by other independent modalities. In PARS, the optical relaxation processes (radiative and non-radiative) are observed following a targeted excitation pulse incident on a sample. The radiative relaxation generates optical emissions from the sample which are then directly measured. The non-radiative relaxation causes localized thermal modulations and, if the excitation event is sufficiently rapid, pressure modulations within the excited region. These transients induce nanosecond-scale variations in the sample’s local optical properties, which are captured with a co-focused detection laser. Additionally, the co-focused detection is able to measure the local optical scattering prior to excitation. Overall, various embodiments of the PARS system as described herein are able to simultaneously capture radiative and non-radiative absorption as well as optical scattering from a single excitation event.
[00361] As can be seen in FIG. 56, the multi-stain image 5600 has five different regions with different tissue structures: 5610, 5620, 5630, 5640, 5650. Region 5610 is stained with a light purple color (a), which may be Masson's trichrome stain, typically used to differentiate different types of connective tissues. Regions 5620 and 5640 are stained with a blue color (b), which may be PAS stain used to identify regions of fungal infection in the tissues. Regions 5630 and 5650 are stained with a pink-purple color (c), which may be H&E stain used to differentiate the different layers of the epithelium and the structures of subdermal glands. In some embodiments, the machine learning model 3802, 3822, 3922 may be able to automatically determine the most appropriate stain or color for a particular region in an image and apply that stain or color to the particular region in the image.
PARS multi-staining results using PARS feature vector
[00362] In some embodiments, multi-staining images are generated based on a PARS feature vector (e.g., the PARS data vector shown in FIG. 87) and PARS time-domain clustering methods. Some examples of PARS multi-staining images are presented in FIG. 88, which shows three different virtual stains 8820, 8840, 8860 produced from the same initial PARS dataset 8800. In this example, a PARS data vector (similar to the example shown in FIG. 87, containing PARS amplitudes and time-domain features) is passed to a series of GAN networks used to develop the virtual staining result.
[00363] FIG. 88 shows example PARS virtual multi-staining images based on the same PARS image data 8800. An RGB representation of PARS image data 8800 is shown on the left, while three different virtual stains 8820, 8840, 8860, produced from the raw PARS image data 8800, are shown to the right. The multi-staining results are produced using PARS feature vector data which contains a number of primary and secondary features including time domain features.
[00364] In some embodiments, an image generator is designed to better leverage the PARS and ground truth image data to produce an accurate stain transform. For example, an example neural network in the image generator may use perceptual losses, such as learned image-patch similarity or “VGG” network features, to optimize perceived similarity between the PARS virtual staining images and ground truth images. Additionally, more advanced GAN architectures (e.g., Wasserstein GAN with gradient penalty, or unrolled GANs) can be used to develop more robust transforms.
[00365] In some embodiments, initial weights for a machine learning model 3802, 3822, 3922 can be transferred from another neural network model (the “donor model”) trained on a traditional stained H&E image dataset. In this configuration, each weight in one or more initial layers of the machine learning model 3802, 3822, 3922 may be assigned a value equal to a respective value from corresponding one or more initial layers of the donor model trained on the traditional stained H&E image dataset, instead of being assigned a random value prior to the training of the machine learning model 3802, 3822, 3922.
[00366] The donor model can be trained on a traditional stained H&E image dataset. These stained H&E images may be obtained from a traditional chemical staining process, such as spectroscopy-based methods. For example, the training data for the donor model can include a group of stained H&E images (ground truth data) and a group of corresponding greyscale H&E images converted from the group of stained H&E images. The donor model, during training, may receive the group of corresponding greyscale H&E images as input, and output corresponding colorized H&E images that are compared to the ground truth data, where the comparison may cause updating of the weights of the donor model during each training iteration.
[00367] As another example, the training data for the donor model can include a group of stained H&E images (ground truth data) and a group of corresponding channel data for a first channel (e.g., H channel) and a second channel (e.g., E channel) for each respective stained H&E image in the group. The channel data for an H channel or E channel may include, for example, amplitude and/or intensity values for each respective channel, similar to an RGB channel for a traditional color image.
[00368] The donor model, during training, may receive the group of channel data as input, and output corresponding colorized H&E images that are compared to the ground truth data, where the comparison may cause updating of the weights of the donor model during each training iteration.
[00369] As described above, initial weights for a machine learning model 3802, 3822, 3922 can be taken from the trained donor model. In this configuration, each weight in one or more initial layers of the machine learning model 3802, 3822, 3922 may be assigned a value equal to a respective value from corresponding one or more initial layers of the donor model trained on the traditional stained H&E image dataset.
[00370] In some embodiments, during the training of the machine learning model 3802, 3822, 3922, all layers of the machine learning model 3802, 3822, 3922 are trained and fine-tuned, and weights updated accordingly.
[00371] In some embodiments, during the training of the machine learning model 3802, 3822, 3922, the weights of the one or more initial layers are kept constant (i.e., equal to the weights from the one or more initial layers of the donor model), and throughout the training process only weights of the subsequent layers (after the initial layers) of the machine learning model 3802, 3822, 3922 are trained or fine-tuned during training.
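A hedged sketch of this weight-transfer scheme is shown below. The layer structure, the choice of which layers count as “initial layers,” and the option of freezing them are illustrative assumptions; the donor model here is simply assumed to have been pre-trained on an H&E dataset, and is not the disclosed network.

```python
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # assumed "initial layers"
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )

donor = make_model()     # assumed pre-trained on a traditional stained H&E dataset
student = make_model()   # stands in for machine learning model 3802/3822/3922

# Copy the initial-layer weights from the donor instead of random initialization.
with torch.no_grad():
    student[0].weight.copy_(donor[0].weight)
    student[0].bias.copy_(donor[0].bias)

# Option A: fine-tune everything (leave all parameters trainable).
# Option B: keep the transferred initial layers constant and train only later layers.
for p in student[0].parameters():
    p.requires_grad = False

trainable = [p for p in student.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```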
[00372] After training, the machine learning model 3802, 3822, 3922 may, at inference time, receive a PARS image and/or PARS signals, and generate a corresponding PARS virtual H&E image.

[00373] In some embodiments, as shown in FIG. 57A, an image generator 5750, which can be the image generator 3812, 3912, or the image generator within the machine learning model 3802, may include two neural network models 5712, 5722. The first neural network model 5712 can be, as an example, a cycleGAN trained to generate simulated grayscale H&E images 5715 based on TA-PARS images 5720 from a PARS system 5710, and the second neural network model 5722 can be, as an example, a conditional GAN (e.g., pix2pix) trained to generate simulated color H&E images 5730 based on the simulated grayscale H&E images 5715 from the first neural network model 5712.
[00374] The first neural network model 5712 can be trained based on historical sets of TA-PARS data and corresponding grayscale H&E images. The second neural network model 5722 can be trained on historical sets of greyscale H&E images and corresponding stained H&E images, which may be obtained from a traditional chemical staining process, such as spectroscopy-based methods, or may be obtained from data sets of virtual greyscale and color H&E images.
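The following sketch illustrates, with assumed tiny placeholder networks, how the two trained generators of FIG. 57A might be chained at inference time: the first maps a two-channel TA-PARS input to a grayscale H&E estimate, and the second colorizes it. The channel counts and layer choices are assumptions, not the disclosed cycleGAN/pix2pix models.

```python
import torch
import torch.nn as nn

# Stage 1: TA-PARS (radiative + non-radiative channels) -> simulated grayscale H&E.
stage1 = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 1, 3, padding=1))
# Stage 2: grayscale H&E -> simulated color H&E.
stage2 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 3, 3, padding=1))

ta_pars = torch.rand(1, 2, 256, 256)     # stand-in for a TA-PARS image 5720
with torch.no_grad():
    gray_he = stage1(ta_pars)            # simulated grayscale H&E image 5715
    color_he = stage2(gray_he)           # simulated color H&E image 5730
```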
[00375] In some embodiments, as shown in FIG. 57B, an image generator 5850, which can be the image generator 3812, 3912, or the image generator within the machine learning model 3802, may include two neural network models 5812, 5822. The first neural network model 5812 can be, as an example, a cycleGAN trained to generate a separated H channel 5815 and a separated E channel 5817 based on TA-PARS images 5820 from a PARS system 5810, and the second neural network model 5822 can be, as an example, a conditional GAN (e.g., pix2pix) trained to generate simulated color H&E images 5830 based on the separated H channel 5815 and separated E channel 5817 from the first neural network model 5812.
[00376] The first neural network model 5812 can be trained based on historical sets of TA-PARS data and corresponding separated H channel and E channel data. The second neural network model 5822 can be trained on historical sets of separated H channel and E channel data and corresponding stained H&E images, which may be obtained from a traditional chemical staining process, such as spectroscopy-based methods, or may be obtained from data sets of virtual color H&E images.
[00377] In some embodiments, a cycleGAN model may be implemented as a multi-task cycleGAN model configured to perform a plurality of tasks, including for example: 1) super-resolve a relatively lower resolution image to a higher resolution image (enhancing the resolution of the image); 2) generate H-stained TA-PARS images; 3) generate E-stained TA-PARS images; and 4) generate H&E-stained TA-PARS images. When the cycleGAN model is trained on higher resolution images from an image dataset to transform a greyscale image into a color H&E image, the cycleGAN model learns the transfer from greyscale to color, as well as how to enhance resolution.
[00378] In some embodiments, the architecture 3800, 3850, 3900 may include an image search machine learning model (“image search model”), which may be part of the machine learning model 3802, 3822, 3922 or may be a separate machine learning model, to output one or more labelled images based on a given unlabeled input image. For example, an input image to the image search model may be the unlabeled PARS virtual H&E image 4700, which has no labels for any region of interest within the image 4700. The image search model, once properly trained, may, based on a plurality of existing (e.g., historical) labelled images stored in an imaging database, output at least one labelled image (e.g., such as the labelled stained image 4750 in FIG. 47B) from the existing labelled images, where the output labelled image has the highest correlation score with the input image.
[00379] The correlation score may be determined, in some embodiments, based on features extracted from the input image (e.g., unlabeled PARS virtual H&E image 4700). For example, a higher correlation score may be assigned to images that exhibit features with greater similarities to the input image. A minimum threshold may be predetermined, such that any existing labelled image(s) from the database with a correlation score above the minimum threshold may be selected as an output by the image search model.
[00380] The image search model may therefore be configured to retrieve one or more existing labelled images from an existing imaging database, based on an input image that is not yet labelled. The retrieved labelled images, which may be stained, can be used as training data for the machine learning architecture 4100 as a labelled and stained image 4115.
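One possible realization of such an image search step, sketched below, embeds images with a convolutional feature extractor and retrieves database entries whose similarity to the unlabeled query exceeds a minimum threshold. The ResNet-18 backbone, cosine-similarity score, and threshold value are assumptions for illustration only, not the disclosed image search model.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

backbone = models.resnet18(weights=None)   # assumed CNN feature extractor
backbone.fc = torch.nn.Identity()          # use pooled features as the embedding
backbone.eval()

def embed(img):                            # img: (3, H, W) tensor in [0, 1]
    with torch.no_grad():
        return F.normalize(backbone(img.unsqueeze(0)), dim=1)

query = torch.rand(3, 224, 224)            # stand-in for an unlabeled PARS virtual H&E image
database = {f"labelled_{i}": torch.rand(3, 224, 224) for i in range(5)}  # existing labelled images

q = embed(query)
scores = {name: float(embed(img) @ q.T) for name, img in database.items()}
threshold = 0.5                            # predetermined minimum correlation score (assumed)
matches = sorted((s, n) for n, s in scores.items() if s >= threshold)[::-1]
print(matches)                             # labelled images returned for training or display
```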
[00381] In some embodiments, the image search model may be configured to retrieve one or more existing labelled images from an existing imaging database, based on an input image that is not yet labelled, and transmit the one or more existing labelled images to a user interface 4000 of a user application 3825, 3925 to aid with further medical diagnosis. The input image and output image(s) may or may not be stained.
[00382] For example, a clinician through user application 3825, 3925 may send a new medical image showing a patient’s lung to the image search model for retrieval of similar medical images that are already labelled (or annotated). The image search model may output one or more output images (e.g., through UI 4000) that can aid the clinician in understanding the input medical image showing the patient’s lung. For instance, if the one or more output images generally contain images labelled with pneumonia, it is likely that the patient in the input medical image may have pneumonia as well.
[00383] In some embodiments, a percentage likelihood of a pathologic finding in a current image acquisition may be similar to, and can be determined based on, previous diagnoses of a similar or the same pathologic finding in one or more previous PARS image acquisitions.
[00384] For instance, referring now to FIG. 55, a heat map 5500 is shown. The heat map 5500 includes several heat regions 5510, 5520, 5530, 5540. Each respective region 5510, 5520, 5530 or 5540 may represent a corresponding percentage likelihood of pathology or pathologic finding (e.g., malignancy) superimposed onto an H&E staining image 5550 to aid in diagnosis and intraoperative guidance. This can help medical personnel in reading a current H&E staining image and making relevant conclusions. For example, the heat map 5500 may assist surgeons who may be inexperienced at reading pathologic slides make decisions on where to resect cancerous tissue intraoperatively.
[00385] The heat region 5510 appears to be the darkest in color, followed by heat region 5530, and then by heat regions 5540 and 5520. This may indicate, for example: the area covered by heat region 5510 has a high likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 80%, or at a range of 80-100%; the area covered by heat region 5530 has a medium-to-high likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 50%, or at a range of 50-79%; the areas covered by heat regions 5540 and 5520 have a low likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 30%, or at a range of 30-49%. The areas not covered by any heat region (in blue) may have a very low likelihood or percentage of pathologic finding (e.g., malignancy), for example, at 0% to 29%.
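A simple illustrative sketch of rendering such a likelihood heat map over an H&E image is shown below; the region locations, probability values, and colormap are assumed purely for demonstration and do not correspond to FIG. 55.

```python
import numpy as np
import matplotlib.pyplot as plt

h_and_e = np.ones((256, 256, 3))                 # stand-in for an H&E staining image
prob = np.zeros((256, 256))                      # per-pixel likelihood of pathologic finding
prob[40:90, 40:90] = 0.85                        # assumed high-likelihood region (~80-100%)
prob[150:200, 60:110] = 0.55                     # assumed medium-to-high region (~50-79%)
prob[100:140, 170:220] = 0.35                    # assumed low region (~30-49%)

plt.imshow(h_and_e)
plt.imshow(prob, cmap="jet", alpha=0.4, vmin=0.0, vmax=1.0)  # semi-transparent heat overlay
plt.colorbar(label="likelihood of malignancy")
plt.savefig("heatmap_overlay.png")
```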
[00386] In some embodiments, the architecture 3800, 3850, 3900 may generate, as an inference based on a sample (e.g., an image), a probability of a disease for at least one region in the sample, the probability of the disease being determined based on the plurality of features and complementary data streams received by the machine learning architecture.
[00387] In some embodiments, the inference may include a heat map identifying one or more regions of the sample and a corresponding probability of a disease for each of the one or more regions of the sample.
[00388] In some embodiments, the corresponding probability of a disease for each of the one or more regions of the sample is illustrated by a corresponding intensity of a color shown in the respective region in the heat map. The heat map can guide clinicians in identifying and managing diseases in the patient associated with the sample.

[00389] In some embodiments, the image search model may be a standalone system architecture without a PARS system.
[00390] In some embodiments, the image search model may include a Convolutional Neural Network (CNN), and may further include an autoencoder.
[00391] For molecular identification, a PARS image of tissue scans can be obtained from the PARS system before staining with molecular stains, and an image-based correlation can be developed. A PARS image can be obtained and compared with ground truth spectroscopic methods including mass spectrometry, mass cytometry, fluorescence spectroscopy, and transient absorption spectroscopy.
[00392] In some embodiments, instead of a historical stained image 4207 obtained through the conventional chemical staining process, the labelled and stained image 4115 is a labelled PARS image from a PARS image database.
[00393] In some embodiments, the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image. In some embodiments, automatically labelling the unlabeled PARS image may include labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
[00394] In some embodiments, the existing labelled stained image is obtained from an existing H&E database.
Example colorization process using cycleGAN and denoising in the process
[00395] In some embodiments, as shown in FIG. 58, an image generator, which can be the image generator 3812, 3912, or the image generator within the machine learning model 3802, may include a cycleGAN 5812 trained to generate, based on TA-PARS images 5820 from a PARS system 5810, simulated color H&E images 5830.
[00396] In some embodiments, TA-PARS images 5820, which may include radiative and non-radiative absorption images, are preprocessed via a preprocessing module 5811, and then virtually stained through the cycleGAN 5812 to generate simulated color H&E images 5830. The preprocessing module 5811 may include, for example, a self-supervised Noise2Void denoising convolutional neural network (CNN) 5813 as well as an error-correction submodule 5823 for correction of pixel-level mechanical scanning errors. The implementation described herein can significantly enhance the recovery of sub-micron tissue structures, such as nucleoli location and chromatin distribution. The preprocessed PARS image data 5826 are then virtually stained using the cycleGAN 5812 by applying virtual stains to the preprocessed PARS image data 5826, which may include images representing thin unstained sections of malignant human skin and breast tissue samples.
[00397] FIG. 58 shows an improved virtual staining and image processing architecture 5800 for emulating histology images which are effectively indistinguishable from standard H&E pathology. The presented architecture in FIG. 58 includes an optimized image preprocessing module 5811 and a cycle-consistent generative adversarial network (CycleGAN) 5812 for virtual staining. CycleGAN virtual staining does not require pixel-to-pixel level registration for training data. However, semi-registered data is used here to reduce hallucination artifacts, while improving virtual staining integrity. In addition, the image preprocessing module 5811 reduces inter-measurement variability during signal acquisition, through the implementation of pulse energy correction and image denoising using the self-supervised Noise2Void network. An error correction submodule 5823 is implemented for removal of pixel-level mechanical scanning position artifacts, which blur subcellular-level features. These enhancements afford marked improvements in the clarity of small tissue structures, such as nucleoli and chromatin distribution. The loosely or semi-registered CycleGAN 5812 facilitates precise virtual staining with the highest quality of any PARS virtual staining method explored to date. When applied to images containing entire whole slide sections of resected human tissues, the architecture 5800 provides detailed emulation of subcellular and subnuclear diagnostic features comparable to the gold standard H&E. This architecture 5800 represents a significant step towards the development of a label-free virtual staining microscope. The successful label-free virtual staining opens a pathway to the development of in-vivo virtual histology, which could allow pathologists to immediately access multiple specialized stains from a single slide, enhancing diagnostic confidence and improving timelines and patient outcomes.
[00398] In one example embodiment, label-free TA-PARS images 5820 are captured using the PARS system 5810. In short, a 400ps pulsed 50kHz 266nm UV laser (Wedge XF 266, RPMC) is used to excite the sample, simultaneously inducing non-radiative and radiative relaxation processes. The non-radiative relaxation processes are sampled as time-resolved photothermal and photoacoustic signals probed with a continuous wave 405nm detection beam (OBIS-LS405, Coherent). This detection beam is co-aligned and focused onto the sample with the excitation light using a 0.42 numerical aperture (NA) UV objective lens (NPAL-50-UV-YSTF, OptoSigma). The radiative emissions (>266nm) from the radiative relaxation process, as well as the transmitted detection light, are collected using a 0.7 NA objective lens (278-806-3, Mitutoyo). The 405nm detection wavelength and the radiative emissions are spectrally separated, and each directed toward an avalanche photodiode (APD130A2, Thorlabs).
[00399] To form an image, mechanical stages move the sample in an “s”-like scanning pattern to laterally separate the excitation events on the sample (~250nm/pixel). At each excitation pulse, several hundred nanoseconds of time-resolved signal from each system photodiode is digitized at a 200MHz rate (CSE1442, RZE-004-200, Gage Applied). A portion of the collected signal is pre-excitation and is used to form the scattering image of the sample in its unperturbed state. The non-radiative image pixels are then derived as a percentage modulation in the detection scattering (post-excitation). Next, the radiative image pixels are obtained from the peak emission amplitude recorded after each excitation event. Pixels are then arranged in a Cartesian grid based on the stage position feedback, forming a stack of three co-registered label-free image contrasts: non-radiative, radiative, and scattering. Finally, the excitation pulse energy and detection power, recorded throughout imaging, are used to correct image noise caused by laser power and pulse energy variability.
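The following sketch illustrates, with synthetic traces, how the three contrasts might be derived per excitation event as described above: scattering from the pre-excitation portion of the detection trace, the non-radiative pixel as a percentage modulation of that scattering after excitation, and the radiative pixel as the peak emission amplitude. The trace lengths, window boundaries, and exact modulation formula are assumptions for illustration.

```python
import numpy as np

n_pixels, n_samples = 1024, 512              # one digitized time trace per excitation event
det = np.random.rand(n_pixels, n_samples)    # stand-in detection (scattering) photodiode traces
emi = np.random.rand(n_pixels, n_samples)    # stand-in radiative emission photodiode traces
t0 = 100                                     # assumed sample index of the excitation pulse

# Scattering pixel: mean of the pre-excitation portion of the detection trace.
scattering = det[:, :t0].mean(axis=1)

# Non-radiative pixel: percentage modulation of the detection scattering after excitation.
post = det[:, t0:]
non_radiative = 100.0 * np.abs(post - scattering[:, None]).max(axis=1) / scattering

# Radiative pixel: peak emission amplitude recorded after the excitation event.
radiative = emi[:, t0:].max(axis=1)

# Pixels would then be placed on a Cartesian grid using stage position feedback,
# forming the stack of co-registered contrasts.
image_stack = np.stack([non_radiative, radiative, scattering], axis=0)
```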
[00400] In brief, the entire tissue area is divided into subsections (500x500µm), each individually scanned at their optimal focus position. Using their relative stage positions and small amount of overlap (~5%), these sections are stitched and blended into a single whole slide image.
PARS Data Preprocessing
[00401] In addition to the correction of noise due to laser power and pulse variability, the Noise2Void (N2V) denoising convolutional neural network (CNN) 5813 is, in some embodiments, used to further denoise the raw PARS images. Unlike many other traditional CNN-based denoising methods, the N2V denoising CNN 5813 does not require paired training data with both a noisy and clean image target. It assumes that image noise is pixel-wise independent, while the underlying image signal contains statistical dependencies. As such, it facilitates a simple approach for denoising PARS images, and was used to train separate denoising CNNs for the radiative and non-radiative contrast channels. Example machine learning models were trained on a body of raw data taken from both human skin and breast whole slide images. A series of 125 PARS tiles was used to generate a model for each of the radiative and non-radiative images. Each model was trained over a series of 300 epochs, with 500 steps per epoch, using 96-pixel neighbourhoods. The final processing step before training the virtual staining model is to correct a scanning-related image artifact, which is uncovered after denoising the raw data. These artifacts are line-by-line distortions caused by slight inconsistencies in the mechanical scanning fast axis (x-axis) velocity, which result in uneven spatial sampling. As such, before colourization using CycleGAN 5812, a custom jitter or error correction submodule 5823 is used to fix these distortions.
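A conceptual sketch of the Noise2Void blind-spot training idea referenced above is given below: a small fraction of pixels is masked (replaced by a neighbouring value) and the network is penalized only at those masked positions, so no clean target is needed. This is a simplified stand-in written for illustration only; it is not the published N2V implementation nor the training configuration described above.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))      # tiny placeholder denoising CNN
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

noisy = torch.rand(8, 1, 96, 96)                          # stand-in raw PARS tiles
for step in range(10):                                    # a few illustrative steps
    masked = noisy.clone()
    idx = torch.rand_like(noisy) < 0.01                   # ~1% of pixels become blind spots
    # Hide the blind-spot pixels by replacing them with a shifted neighbour value.
    masked[idx] = torch.roll(noisy, shifts=1, dims=-1)[idx]
    pred = net(masked)
    loss = ((pred[idx] - noisy[idx]) ** 2).mean()         # loss computed only at masked positions
    opt.zero_grad(); loss.backward(); opt.step()
```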
Dataset Preparation for Model Training
[00402] In some embodiments, a CycleGAN image translation model 5812 can be used for virtual staining. While CycleGAN 5812 is able to learn an image domain mapping with unpaired data, it can be advantageous to provide the model with semi- or loosely registered images, as a form of high-level labeling to better guide the training process and strengthen the model. As one-to-one H&E and PARS whole slide image pairs are obtainable, it seems most appropriate to prepare the dataset accordingly. However, the two datasets are not intrinsically registered, so a simple affine transform is used. Affine transforms allow for shearing and scaling, as well as rotation and translation. In general, an affine transform is sufficient to account for the alterations of tissue layout on the slide which occur during the staining process. The affine transform is determined using the geometric relationship between three registration points. This found relation, or transformation matrix, is then applied to the entire whole slide image for both the non-radiative and radiative channels.
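A minimal sketch of estimating and applying such an affine transform from three registration points, assuming the OpenCV library and placeholder point coordinates, is shown below.

```python
import numpy as np
import cv2

# Three matching registration points in each image (placeholder coordinates).
pars_pts = np.float32([[120, 80], [900, 140], [450, 860]])   # points in the PARS whole slide image
he_pts = np.float32([[132, 95], [915, 160], [470, 880]])      # corresponding points in the H&E image

# 2x3 affine matrix (rotation, scale, shear, translation) mapping H&E coordinates to PARS coordinates.
M = cv2.getAffineTransform(he_pts, pars_pts)

he_image = np.zeros((1000, 1000, 3), dtype=np.uint8)          # stand-in for the stained whole slide image
registered_he = cv2.warpAffine(he_image, M, (1000, 1000))     # apply the transform to the entire image
```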
[00403] FIGs. 82A and 82B show example visualization of data preparation process and inversion. In FIG. 82A, as shown in example schematic block diagram 8200, the registered total-absorption and H&E images are cut into matching tiles, to generate a loosely registered dataset. The pixel intensities of the total-absorption images are then inverted, to provide a better initialization for training. Finally, the datasets are used to train the virtual colorization model (e.g., CycleGAN 5812).
[00404] In FIG. 82B, as shown in example schematic block diagram 8250, to form virtually stained images, the model is repeatedly applied to overlapping tiles of the total absorption images. The overlapping tiles are subsequently averaged to form the final virtual colorization.
[00405] After the whole slide PARS Total Absorption (TA) image and H&E image are registered, the entire image is sliced into small tiles (512x512) which are paired together as shown in FIG. 82A. The total absorption (TA) image shows the radiative (blue) and non-radiative (red) raw images in a combined single colored image. However, during training, the network uses inverted TA patches, in which the radiative and non-radiative image pixel intensities are inverted before they are stacked into a colored image. Inverting these channels provides a colored image where the white background in the PARS data maps to the white background in the H&E data. After training is complete, the model can be applied to larger images, such as entire whole slide images, by virtually staining 512x512 tiles in parts. This process is shown in FIG. 82B, where overlap regions are averaged together in the final virtually stained image. Here an overlap of 50% is used.
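The tile-and-average inference described above might be organized as in the following sketch, in which a placeholder model is applied to overlapping tiles of the inverted total absorption image and the overlapping predictions are averaged; the tile size, overlap fraction, and dummy model are assumptions rather than the trained CycleGAN generator.

```python
import numpy as np

def stain_whole_image(ta_image, model, tile=512, overlap=0.5):
    step = int(tile * (1 - overlap))                   # 50% overlap -> half-tile stride
    h, w = ta_image.shape[:2]
    out = np.zeros((h, w, 3), dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    inverted = 1.0 - ta_image                          # invert so background maps to white, as in training
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            patch = inverted[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] += model(patch)
            weight[y:y + tile, x:x + tile] += 1.0
    return out / np.maximum(weight, 1.0)               # average the overlapping predictions

# Example with a dummy "model" that simply replicates one channel of the patch.
dummy_model = lambda p: np.repeat(p[..., :1], 3, axis=-1)
virtual_he = stain_whole_image(np.random.rand(1024, 1024, 3), dummy_model)
```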
[00406] In a study, two CycleGAN models were trained on loosely paired data using the registration and dataset preparation methods described earlier. One model was trained on human skin tissue and another on human breast tissue. For each model, the training sets were composed of 5000 training pairs of size 512x512px (128x128µm) sourced from standard 40x magnification (250nm/pixel) whole slide images of each tissue type. The model generators were trained for 500 epochs with an early stopping criterion to terminate training when losses stopped improving. The model was trained with a learning rate of 0.0002, a batch size of 1, and an 80/20% split of training and validation pairs. For comparison purposes, a pix2pix model and a standard unpaired CycleGAN model were also trained for each tissue type. The pix2pix models were trained on the same dataset as the paired CycleGAN model, with a more rigorous registration process and the same model parameters. For the unpaired training of CycleGAN models, the same number of training pairs was used; however, the TA and H&E domains were sourced from different whole slide images of the same tissue type.
[00407] A current shortcoming of the PARS raw images is the presence of measurement noise. Improvements in PARS image quality were achieved by measuring detection power and excitation pulse energy. Image noise was then corrected based on the laser energy variability. Even with the energy reference correction, measurement noise is still present in the non-radiative signals. This additive noise disproportionately impacts signals which exhibit low non-radiative relaxation since they generate smaller non-radiative perturbations in the detection beam.
[00408] Paired or unpaired denoising methods can be applied to the raw PARS data in the TA-PARS images 5820 to remove noise prior to colourization using an image generator such as the cycleGAN 5812. Unpaired denoising algorithms do not require matched noisy and clean image targets for training and facilitate a simple self-supervised approach for denoising PARS images. Paired denoising algorithms may also be used on PARS images. For example, clean and noisy image pairs for training could be generated by acquiring two images of the same area, one at high pulse energies and one at low pulse energies. High pulse energies would yield lower noise (i.e., clean) images, whereas low pulse energies would produce noisier image targets.

[00409] FIG. 80 shows an example of the raw PARS data 8010 in the TA-PARS images 5820 denoised using a Noise2Void (N2V) framework, as seen in A. Krull, T.-O. Buchholz, and F. Jug, “Noise2Void - Learning Denoising from Single Noisy Images,” arXiv, Apr. 05, 2019. doi: 10.48550/arXiv.1811.10980, the entire content of which is herein incorporated by reference. The denoising example in FIG. 80 has been adapted in J. E. D. Tweel, B. R. Ecclestone, M. Boktor, J. A. T. Simmons, P. Fieguth, and P. H. Reza, “Virtual Histology with Photon Absorption Remote Sensing using a Cycle-Consistent Generative Adversarial Network with Weakly Registered Pairs,” arXiv, Jun. 26, 2023. doi: 10.48550/arXiv.2306.08583, the entire content of which is herein incorporated by reference.
[00410] After denoising the raw PARS data 8010 in the TA-PARS images 5820 via the preprocessing module 5811, the denoised image 8020 may contain mechanical scanning-related jitter artifacts, as seen in FIG. 80. These artifacts are line-by-line distortions caused by slight inconsistencies in the mechanical scanning fast axis velocity, which results in uneven spatial sampling. Before colourization using an image generator, a custom jitter or error correction submodule 5823 may be used to fix these distortions in the denoised images 8020 and generate images 8030 with artifacts removed. The generated images 8030 may be used as the preprocessed PARS image data 5826 for input into an image generator such as the cycleGAN 5812. One example implementation 8100 of an error correction submodule 5823 to fix the line-by-line jitter distortion in the denoised PARS images 8020 is illustrated in FIG. 81.
[00411] The implementation 8100 of the error correction submodule 5823 shown in FIG. 81 determines the optimal pixel shifts for a series of chunks spaced across a given row, with overlap. Chunks are then moved to their appropriate locations and summed together into a corrected row, with areas of overlapping chunks averaged. FIG. 81 illustrates three example chunks and their optimal pixel shifts. These shifts are determined by moving a chunk left and right until a minimal mean square error is reached between the chunk and a reference row. This reference is calculated as the average between the top and bottom rows for the given row being corrected. The error correction submodule 5823 is implemented based on the assumption that the fast axis speed profile differs mostly for velocity sweeps in opposing directions and minimally for velocity sweeps in matching directions. As such, the top and bottom rows were captured in the same direction and averaging them together provides a suitable in-between row to use as reference for correction.
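A simplified sketch of this chunk-wise shift search is given below: for each overlapping chunk of a row, candidate left/right shifts are scored by mean squared error against a reference row formed from the average of the rows above and below, and the best-shifted chunks are summed with overlaps averaged. Chunk size, shift range, and overlap are assumptions for illustration, not the parameters of implementation 8100.

```python
import numpy as np

def correct_row(row, reference, chunk=64, max_shift=3):
    corrected = np.zeros_like(row)
    counts = np.zeros_like(row)
    for start in range(0, len(row) - chunk + 1, chunk // 2):   # overlapping chunks across the row
        seg = row[start:start + chunk]
        best_shift, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):             # test small left/right pixel shifts
            lo, hi = start + s, start + s + chunk
            if lo < 0 or hi > len(row):
                continue
            err = np.mean((seg - reference[lo:hi]) ** 2)       # MSE against the reference row
            if err < best_err:
                best_shift, best_err = s, err
        lo = start + best_shift
        corrected[lo:lo + chunk] += seg                        # move chunk to its corrected location
        counts[lo:lo + chunk] += 1
    return corrected / np.maximum(counts, 1)                   # average where shifted chunks overlap

image = np.random.rand(8, 512)                 # stand-in rows of a denoised PARS image
reference = 0.5 * (image[2] + image[4])        # average of rows captured in the same scan direction
fixed_row = correct_row(image[3], reference)
```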
[00412] FIG. 83 shows an example of the raw non-radiative and radiative image channels after reconstruction and laser power reference correction. At high magnification, significant noise can be seen in the raw data channels. This motivates denoising as a preprocessing step. However, noiseless PARS image targets were not available for training a traditional denoising CNN. Hence, the N2V denoising CNN 5813 is an ideal method as it allows effective denoising without a clean image target.
[00413] In FIG. 83, example denoising results generated after execution of the N2V-based denoising CNN 5813 and the error correction submodule 5823 are shown, based on raw PARS image data 8300 including both raw non-radiative and radiative image channels. Three example regions are shown at higher magnification to see the effect of the denoising and jitter correction algorithms. The structure imaged here shows a hair follicle captured from human skin tissue. After removing noise from the raw data, the jitter artifacts seen in FIG. 80 are uncovered and become the main source of noise in the images. While these sub-resolution shifts and distortions between the rows of the image can be seen embedded within the noise, they are difficult to resolve and correct. Denoising not only helps improve raw data quality but also helps make the jitter correction possible. As shown in FIG. 83, most of the artifacts are removed after applying the correction submodule 5823.
[00414] After denoising and jitter correcting the raw data, the whole slide radiative and non-radiative images are registered to the ground truth H&E image. As mentioned above, a simple affine transform is used here to account for the tissue layout alterations accrued during the staining process, which may generate upwards of 6000 closely registered 512x512 training pairs for a single 40x, 1cm2, whole slide image.
[00415] Traditionally, a stain such as Verhoeff-Van Gieson (VVG), which highlights normal or pathologic elastic fibers, would be required to visualize the internal elastic membrane of arteries. In clinical applications, VVG stain is sometimes combined with Masson’s trichrome stain to differentiate collagen and muscle fibers within tissue samples. This is performed to visualize potential increases in collagen associated with diseases like cirrhosis and to assess muscle tissue morphology for pathological conditions affecting muscle fibers. In contrast, all these structures are well highlighted in the PARS raw data. Currently, the H&E virtual staining model flattens these structures during the image translation process. However, this highlights the potential use of the rich PARS raw data to replicate various clinically relevant contrasts beyond H&E staining. A practical application for PARS virtual staining is to provide several emulated histochemical stains from a single acquisition. Moreover, there is a potential to develop completely new histochemical-like contrasts based on the endogenous PARS contrast. The PARS system as described herein may be able to provide contrast to biomolecules which are inaccessible with current chemical staining methods.
[00416] As described above, measurement reference correction and Noise2Void-based image denoising are successfully applied to improve image quality. An error correction submodule is presented to reduce pixel-level mechanical scanning position artifacts, which blur submicron-scale features. These enhancements afford marked improvements in the clarity of small tissue structures, such as nucleoli and chromatin distribution. In conjunction, a new virtual staining process is presented which uses a semi-registered CycleGAN. While the semi-registered CycleGAN does not require registration like pix2pix, providing the semi-registered data may enhance the colorization quality by reducing the presence of hallucination artifacts. As described herein, emulated H&E images are produced from label-free PARS images with quality and contrast that compare favorably to traditional H&E staining. The colorization performance represents the current best PARS virtual staining implementation. Applied to entire sections of unstained human tissues, the presented method enables accurate recovery of subtle structural and subnuclear details. With these improvements, the PARS virtual H&E images may be effectively indistinguishable from gold standard chemically stained H&E scans. In some embodiments, PARS label-free virtual staining has the potential to provide multiple histochemical stains from a single unlabelled sample, enhancing diagnostic confidence and greatly improving patient outcomes.
[00417] FIG. 53 is a schematic diagram of computing device 5300 which may be used to implement a computing device used to train or execute (at inference time) an image generator or machine learning model 3802, 3812, 3912.
[00418] As depicted, computing device 5300 includes at least one processor 5302, memory 5304, at least one I/O interface 5306, and at least one network interface 5308.
[00419] Each processor 5302 may be, for example, any type of general-purpose microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or any combination thereof.
[00420] Memory 5304 may include a suitable combination of any type of computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), and electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
[00421] Each I/O interface 5306 enables computing device 5300 to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, or with one or more output devices such as a display screen and a speaker.
[00422] Each network interface 5308 enables computing device 5300 to communicate with other components, to exchange data with other components, to access and connect to network resources, to serve applications, and perform other computing applications by connecting to a network (or multiple networks) capable of carrying data including the Internet, Ethernet, plain old telephone service (POTS) line, public switched telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile (e.g., 4G, 5G network), wireless (e.g., Wi-Fi, WiMAX), SS7 signaling network, fixed line, local area network, wide area network, and others, including any combination of these.
[00423] For simplicity only, one computing device 5300 is shown but system 100 may include multiple computing devices 5300. The computing devices 5300 may be the same or different types of devices. The computing devices 5300 may be connected in various ways including directly coupled, indirectly coupled via a network, and distributed over a wide geographic area and connected via a network (which may be referred to as “cloud computing”).
[00424] For example, and without limitation, a computing device 5300 may be a server, network appliance, set-top box, embedded device, computer expansion module, personal computer, laptop, personal digital assistant, cellular telephone, smartphone device, UMPC tablets, video display terminal, gaming console, or any other computing device capable of being configured to carry out the methods described herein.
[00425] FIG. 54 shows a process performed by a processor of an example embodiment of machine learning system or architecture 3800, 3850, 3900.
[00426] At operation 5402, the processor receives, from a sample, a plurality of signals including radiative and non-radiative signals.
[00427] In some embodiments, the plurality of signals include absorption spectra signals.
[00428] In some embodiments, the plurality of signals include scattering signals.
[00429] In some embodiments, the sample is an in vivo or an in situ sample.
[00430] In some embodiments, the sample is not stained.
[00431] At operation 5404, the processor extracts a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals.
[00432] In some embodiments, the contrast may include one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
[00433] In some embodiments, processing the plurality of signals may include: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
[00434] In some embodiments, said extracting the plurality of features includes processing both radiative signals and non-radiative signals.
[00435] In some embodiments, the plurality of features is supplemented with at least one of features informative of image data obtained from complementary modalities.
[00436] In some embodiments, the complementary modalities comprise at least one of: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI).
[00437] In some embodiments, image data obtained from complementary modalities may include photoactive labels for contrasting or highlighting specific regions in the images.
[00438] In some embodiments, the plurality of features is supplemented with at least one of features informative of patient information.
[00439] In some embodiments, said processing includes converting the at least one of the plurality of signals to at least one image.
[00440] In some embodiments, said converting to said at least one image includes applying a simulated stain.
[00441] In some embodiments, the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain, Immunohistochemistry (IHC), histochemical stain, and In-Situ Hybridization (ISH).
[00442] In some embodiments, the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue.
[00443] In some embodiments, said converting to said at least one image includes converting to at least two images, and applying a different simulated stain to each of the images.
[00444] In some embodiments, said converting includes applying a colorization machine learning architecture.
[00445] In some embodiments, the colorization machine learning architecture includes a generative adversarial network (GAN).
[00446] In some embodiments, the colorization machine learning architecture includes a cycle-consistent generative adversarial network (CycleGAN).
[00447] In some embodiments, the colorization machine learning architecture includes a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
[00448] At operation 5406, the processor applies the plurality of features to a machine learning architecture to generate an inference 2517 regarding the sample.
[00449] In some embodiments, the inference 2517 comprises at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
[00450] At operation 5408, which is an optional step, the processor generates signals for causing a display device to render a user interface (UI) 4000 showing a visualization of the inference 2517.
[00451] A set of instructions configured to train the GAN may include, in each training iteration, instructions causing a processor to: instantiate a machine learning architecture including a neural network having a plurality of nodes and weights stored on a memory device; obtain a true total absorption (TA) image; generate a simulated stained image based on the true TA image; generate a fake TA image based on the generated stained image; compute a first loss based on the generated fake TA image and the true TA image; obtain a labelled and stained image; compute a second loss based on the generated simulated stained image and the labelled and stained image; and update weights of the neural network based on at least one of the first and second losses.
[00452] In accordance with still another aspect, there is provided a computer-implemented method for training a machine learning architecture for generating a simulated stained image, the machine learning architecture including a plurality of nodes and weights stored on a memory device, the method comprising, in each training iteration: obtaining a true total absorption (TA) image; generating a simulated stained image based on the true TA image; generating a fake TA image based on the generated stained image; computing a first loss based on the generated fake TA image and the true TA image; obtaining a labelled and stained image; computing a second loss based on the generated simulated stained image and the labelled and stained image; and updating weights of the neural network based on at least one of the first and second losses.
[00453] In some embodiments, the simulated stained image is generated by a second neural network comprising a second set of nodes and weights, the second set of weights being updated based on at least one of the first and second losses during each iteration.
[00454] In some embodiments, the fake TA image is generated by a third neural network comprising a third set of nodes and weights, the third set of weights being updated based on at least one of the first and second losses during each iteration.
[00455] In some embodiments, computing the second loss based on the generated simulated stained image and the labelled and stained image may include steps of: processing the generated simulated stained image by a first discriminator network; processing the labelled and stained image by a second discriminator network; and computing the second loss based on a respective output from each of the first and second discriminator networks.
[00456] In some embodiments, the method may further include processing the respective output from each of the first and second discriminator networks through a respective classification matrix prior to computing the second loss.
[00457] In some embodiments, the machine learning architecture comprises a CycleGAN machine learning architecture.
[00458] In some embodiments, the machine learning architecture comprises a conditional generative adversarial network (cGAN), which may include for example, a pix2pix model.
[00459] In some embodiments, the labelled and stained image is a labelled PARS image.

[00460] In some embodiments, the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image.
[00461] In some embodiments, automatically labelling the unlabeled PARS image comprises labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
[00462] In some embodiments, the database is a H&E database.
Additional Machine Learning Architectures for Processing PARS Images
[00463] Referring now to FIG. 64, which shows an example machine learning architecture 6400 for processing one or more outputs 6404 from a PARS system 6402. The PARS system 6402, similar to the PARS system 3801, 3901 previously described in connection with FIGs. 38A, 38B and 39, may include one or more of the TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems. The PARS system 6402 may be a PARS system from FIG. 5 described above, for example.
[00464] The PARS system 6402 may detect generated signals in the detection beam(s) returning from a given sample. These perturbations may include, but are not limited to, changes in intensity, polarization, frequency, phase, absorption, nonlinear scattering, and nonlinear absorption, and could be brought on by a variety of factors such as pressure, thermal effects, etc. The sample, which may be an unstained sample, may be an in vivo or an in situ sample. For example, it may be tissue underneath the skin of a patient. As another example, it may be a tissue section on a glass slide.
[00465] In some embodiments, there is provided a computer-implemented deep learning model 6406 for processing PARS signal and/or image data. The input 6404 to the deep learning model 6406 may include a plurality of PARS signals including radiative and non-radiative signals, and/or a plurality of extracted features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals (e.g., PARS data/PARS image/PARS features/PARS image features), and other related data (e.g., genomic data, clinical characteristics).
[00466] The deep learning model 6406 may be trained and deployed to generate one or more inferences 6408 based on the output 6404 from the PARS system 6402. The generated inference 6408 may then be transmitted to a user application display device 6410 for further interpretation and/or display. The user application display device 6410 may be connected to a user application (e.g., user application 3825), which may be installed at a user device.
[00467] The generated inference 6408 may include one or more of:
• detection/segmentation/classification of cell or nuclei
• segmentation of gland/tissue/tumor
• detection/classification/grading of cancer
• prediction/prognosis of survival/outcome
• stain normalization/transfer
• genomic/molecular prediction.
[00468] Depending on the application, as shown in FIG. 65, the deep learning model 6406 can be based on deep neural network models that use one or more types of learning: supervised learning (e.g., classification models, regression models, segmentation models) such as Convolutional Neural Networks (CNN) or Recurrent Neural Networks (RNN), weakly supervised learning (multiple instance learning models, other weakly supervised models), unsupervised learning, and transfer learning deep neural networks (pre-trained models, domain adaptation models) having one or more of the following architectures (and their modified versions): CNN, RNN, Fully Convolutional Networks (FCN), Auto-encoders (A-E), Generative Adversarial Networks (GAN) or Pre-trained Networks (PRE-T-N). The PARS machine learning models described herein may be used to generate one or more inferences including:
• detection/segmentation/classification of cell/nuclei
• segmentation of gland/tissue/tumor
• detection/classification/grading of cancer
• prediction/prognosis of survival
• other inferences.
[00469] For instance, unsupervised learning classification such as GAN and A-E may be used for:
• detection/segmentation/classification of cell/nuclei
• gland/tissue/tumor segmentation
• detection/classification/grading of cancer.
[00470] Supervised learning classification such as CNN or RNN may be used for:
• detection/segmentation/classification of cell/nuclei
• detection/classification/grading of cancer.
[00471] Weakly-supervised learning classification (weakly supervised CNN, RNN) may be used for:
• detection/segmentation/classification of cell/nuclei
• gland/tissue/tumor segmentation
• detection/classification/grading of cancer
• prediction/prognosis of survival/outcome.
[00472] Transfer learning (CNN, GAN, PRE-T-N) may be used for:
• detection/segmentation/classification of cell/nuclei
• gland/tissue/tumor segmentation
• detection/classification/grading of cancer
• prediction/prognosis of survival
• stain normalization/transfer-genomic/molecular prediction.
Automatic Nuclei detection, segmentation, and classification of PARS data
[00473] In accordance with one aspect, a computer-implemented machine learning architecture for automatic nuclei detection, segmentation, and classification of PARS data is disclosed herein. As shown in FIG. 66, a deep learning model 6406 may receive a plurality of PARS signals and PARS data from a PARS system 6402, the PARS signals may include radiative and non-radiative signals, and the PARS data may include a plurality of extracted features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals.
[00474] The deep learning model 6406 may include one or more of: a classification deep neural network 6610, a segmentation deep neural network 6620, and a nuclei detection deep neural network 6630. The deep learning model 6406 may include, for instance, a Densely Connected Neural Network (DCNN), a Densely Connected Recurrent Convolutional Neural Network (DCRN), and a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net. The outputs of the deep learning model 6406 may include nuclei type, segmentation, and detection masks, which may be transmitted to a user application display device 6410 for further processing and display.
[00475] In some embodiments, the deep learning model 6406 may receive a set of multi-structured input data, which may include, for example, PARS images, PARS features, and PARS image features. Some or all of the multi-structured input data may include PARS data and/or features 6404 from the PARS system 6402. Due to the nature of the deep data represented by the PARS signal from the PARS system 6402, Principal Component Analysis (PCA) may be applied for dimensionality reduction to obtain the most relevant feature representatives. In addition to cross-entropy and Mean Squared Error (MSE) losses, the deep learning model 6406 may be implemented and trained using other loss calculation methods; for example, the deep learning model 6406 may be trained using a modified Structural Similarity Index (SSIM) based on overlapping Gaussian sliding windows taking the tile image patches, and an Earth Mover's (EM) loss to account for the structured representations. The outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous. Historical reference cases (images, diagnoses) may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78).
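As a non-limiting illustration of the dimensionality-reduction step described above, the sketch below applies PCA to a hypothetical stack of per-pixel PARS feature vectors using scikit-learn; the array sizes and the 95% variance target are assumptions, not prescribed values.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical per-pixel PARS feature vectors: n_pixels x n_raw_features
raw_features = np.random.rand(10_000, 256)

# Retain enough principal components to explain ~95% of the variance, yielding
# a compact set of "most relevant feature representatives" for the model input.
pca = PCA(n_components=0.95, svd_solver="full")
reduced_features = pca.fit_transform(raw_features)

print(reduced_features.shape, pca.explained_variance_ratio_.sum())
```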
Automatic nuclei segmentation of PARS data
[00476] In some embodiments, as shown in FIG. 67, a computer-implemented machine learning architecture is disclosed herein for automatic nuclei segmentation. The Nuclei Segmentation Region-Based CNN 6710, which may be one example of the deep learning model 6406, can receive a plurality of PARS signals, features, and images from a PARS system 6402 as input. The PARS signals may include radiative and non-radiative signals. The PARS features may include a plurality of extracted features based on processing at least one of the plurality of PARS signals, the features informative of a contrast provided by the at least one of the plurality of signals. The Nuclei Segmentation Region-Based CNN 6710 may include a Backbone Network (e.g., Region Proposal Network (RPN)) 6712, a feature map generator 6715, and a mask module 6717. The Backbone Network 6712 may be implemented to find areas that may contain an object. The Nuclei Segmentation Region-Based CNN 6710 may predict classes of proposed areas and refine a bounding box for each proposed area, and the mask module 6717 may be used to generate masks for an object at the pixel level in the next stage based on the proposed areas. An output of the Nuclei Segmentation Region-Based CNN 6710 may be an image with segmented nuclei, which may be transmitted to a user application display device 6410 for further processing and/or display.
[00477] In some embodiments, due to the nature of the deep data represented by the PARS signal from the PARS system 6402, Principal Component Analysis (PCA) may be applied for dimensionality reduction to obtain the most relevant feature representatives. In addition to cross-entropy and Mean Squared Error (MSE) losses, the Nuclei Segmentation Region-Based CNN 6710 may be implemented and trained using other loss calculation methods as described above; for example, the Nuclei Segmentation Region-Based CNN 6710 may be trained using a modified Structural Similarity Index (SSIM) based on overlapping Gaussian sliding windows taking the tile image patches, and an Earth Mover's (EM) loss to account for the structured representations. The outputs of the Nuclei Segmentation Region-Based CNN 6710 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous.
[00478] Historical reference cases (images, diagnoses) may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78). The output from the system 6406 in FIG. 66 may be combined with the output of the Nuclei Segmentation Region-Based CNN 6710 in FIG. 67 for validation.
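For orientation only, the sketch below configures a generic region-based CNN (a torchvision Mask R-CNN with a backbone, region proposal network, and box/mask heads) for a two-class nucleus/background problem; it is not the patented network, and the class count, image size, and use of randomly initialized weights are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_nuclei_maskrcnn(num_classes: int = 2):
    """Generic region-based CNN: backbone + region proposal network + box/mask heads.
    num_classes = background + 'nucleus' (assumed); pretrained weights could be loaded instead."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

model = build_nuclei_maskrcnn()
model.eval()
with torch.no_grad():
    # A single dummy 3-channel image; real inputs would be normalized PARS-derived channels
    prediction = model([torch.rand(3, 512, 512)])[0]
print(prediction["boxes"].shape, prediction["masks"].shape)
```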
Classification of tissue malignancy (PARS Images)
[00479] In some embodiments, a computer-implemented machine learning architecture is disclosed herein for identification of malignancy of tissues. As shown in FIGs. 68 and 69, a deep learning model 6406 (e.g., a modified Convolutional Neural Network (CNN) model) may be configured to receive local PARS image features obtained by an image transform sub-module 6403, using techniques such as, but not limited to: Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information). The deep learning model 6406 may receive PARS signals (radiative, non-radiative, scattering signals) and PARS images from a PARS system 6402 and local PARS image features obtained from the image transform sub-module 6403 as inputs, and generate one or more inferences, which may include classifications for malignancy of tissues (e.g., benign, malignant, no pathology). The generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
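The following is a minimal sketch, under stated assumptions, of how some of the listed local image features (intensity histogram, frequency-domain DFT information, and LBP texture) might be computed for a PARS image patch; the Contourlet Transform is omitted because it is not available in common Python libraries, and the bin counts and LBP parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def local_image_features(img: np.ndarray) -> np.ndarray:
    """Illustrative local features: pixel-strength histogram, DFT magnitude profile, LBP texture."""
    img = img.astype(np.float64)
    # Pixel-strength distribution
    hist, _ = np.histogram(img, bins=32, range=(img.min(), img.max() + 1e-9), density=True)
    # Frequency-domain information from the image (log-magnitude DFT, column-averaged)
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    dft_profile = np.log1p(mag).mean(axis=0)[: mag.shape[1] // 2]
    # Textural information via uniform Local Binary Patterns on an 8-bit version of the patch
    img_u8 = (255 * (img - img.min()) / (np.ptp(img) + 1e-9)).astype(np.uint8)
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hist, dft_profile, lbp_hist])

features = local_image_features(np.random.rand(256, 256))
print(features.shape)
```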
Classification of tissue malignancy (PARS signal features)
[00480] In some embodiments, a computer-implemented machine learning architecture is disclosed herein for identification of the malignancy of tissues. As shown in FIG. 70A, a deep learning model 6406 (e.g., a modified CNN) may receive a plurality of PARS signals and features 7015 from a PARS system 6402 as input. The PARS signals may include radiative and non-radiative signals. The PARS features may include a plurality of extracted features based on processing at least one of the plurality of PARS signals, the features informative of a contrast provided by the at least one of the plurality of signals. The deep learning model 6406 may be configured to receive local PARS image features obtained by an image transform sub-module 6403, using techniques such as, but not limited to: Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information).
[00481] The deep learning model 6406 may receive multichannel PARS signals (radiative, non-radiative, scattering signals) and PARS signals and features 7015 from a PARS system 6402 and local PARS image features obtained from the image transform sub-module 6403 as inputs and generate one or more inferences, which may include classifications for malignancy of tissues (e.g., benign, malignant, no pathology). The generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
[00482] In some embodiments, the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous. Historical reference cases (images, diagnoses) may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78).
Classification of tissue malignancy (simulated stained PARS image)
[00483] In some embodiments, another computer-implemented machine learning architecture is disclosed herein for identification of the malignancy of tissues. As shown in FIG. 70B, a deep learning model 6406 (e.g., a modified CNN) may receive simulated stained PARS image(s) from an image generator 7010 (similar to image generators 3812, 3912), which may produce the simulated stained PARS image(s) based on PARS signals and features from the PARS system 6402. The deep learning model 6406 may also be configured to receive local PARS image features obtained by an image transform sub-module 6403, using techniques such as, but not limited to: Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information).
[00484] The deep learning model 6406 may receive the simulated stained PARS image(s) and local PARS image features obtained from the image transform sub-module 6403 as inputs and generate one or more inferences, which may include classifications for malignancy of tissues (e.g., benign, malignant, no pathology). The generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
[00485] In some embodiments, the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous. Historical reference cases (images, diagnoses) may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78).
Multi-dimensional PARS Input Data for medical application
[00486] As shown in FIG. 70C, in some embodiments, a deep learning model 6406 may receive a set of multi-dimensional input data. The set of multi-dimensional input data may be a set of multi-structured input data, which may include, for example, two or more of: PARS images 7020, PARS signals and features 7015, PARS image features 7023, and simulated stained images 7025 generated from selected PARS features 7021. Some or all of the multi-structured input data may include PARS data and/or features from the PARS system 6402. Due to the nature of the deep data represented by the PARS signal data from the PARS system 6402, Principal Component Analysis (PCA) may be applied for dimensionality reduction to obtain the most relevant feature representatives.
[00487] For example, pixel information in PARS images 7020 (e.g., pixel intensity) may be combined with information contained in the PARS signal features 7015 to form a multidimensional (deep data) input to the deep learning model 6406. Examples of PARS signal features 7015 may include data values representative of mechanical properties (e.g., stiffness, speed of sound, pea87istolocity, thermal conductivity) and data values representative of chemical properties (e.g., QER, total absorption, bonding state, viscosity, ion concentration, charge, chemical composition).
[00488] By combining image pixel information and spatial information obtained from PARS images 7020 (and optionally PARS image features 7023) with mechanical, physical, and chemical feature data from PARS signal features 7015, the input data sent to the deep learning model 6406 can be used to generate outcomes of increased complexity. The output of the deep learning model 6406 may include tissue malignancy class, malignancy grading, cancer prognosis, and treatment prognosis. The generated inferences may be transmitted to a user application display device 6410.
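A minimal sketch of one way to realize such a combined input is shown below: a small two-branch PyTorch network in which a CNN branch encodes PARS image pixels and an MLP branch encodes a per-sample PARS signal-feature vector before a shared classification head. The layer sizes, channel counts, and three-class output are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class PARSFusionNet(nn.Module):
    """Illustrative fusion of PARS image data (CNN branch) with PARS signal features (MLP branch)."""
    def __init__(self, n_signal_features: int = 32, n_classes: int = 3):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.feature_branch = nn.Sequential(
            nn.Linear(n_signal_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, n_classes)  # e.g. malignancy class / grading logits

    def forward(self, image, signal_features):
        fused = torch.cat([self.image_branch(image), self.feature_branch(signal_features)], dim=1)
        return self.head(fused)

net = PARSFusionNet()
logits = net(torch.rand(4, 1, 128, 128), torch.rand(4, 32))
print(logits.shape)  # (4, 3)
```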
[00489] In addition to cross-entropy and Mean Squared Error (MSE) losses, the deep learning model 6406 may be implemented and trained using other loss calculation methods; for example, the deep learning model 6406 may be trained using a modified Structural Similarity Index (SSIM) based on overlapping Gaussian sliding windows taking the tile image patches, and an Earth Mover's (EM) loss to account for the structured representations.
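A compact sketch of such an SSIM-based loss, computed over overlapping Gaussian sliding windows in PyTorch, is given below; the window size, sigma, and stability constants are conventional defaults and are assumptions rather than values taken from this disclosure.

```python
import torch
import torch.nn.functional as F

def gaussian_window(size: int = 11, sigma: float = 1.5) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)
    return (g.t() @ g).unsqueeze(0).unsqueeze(0)  # (1, 1, size, size) convolution kernel

def ssim_loss(x: torch.Tensor, y: torch.Tensor,
              c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """1 - mean SSIM over overlapping Gaussian windows; x and y are (N, 1, H, W) tiles in [0, 1]."""
    w = gaussian_window().to(x.device)
    pad = w.shape[-1] // 2
    mu_x, mu_y = F.conv2d(x, w, padding=pad), F.conv2d(y, w, padding=pad)
    sigma_x = F.conv2d(x * x, w, padding=pad) - mu_x ** 2
    sigma_y = F.conv2d(y * y, w, padding=pad) - mu_y ** 2
    sigma_xy = F.conv2d(x * y, w, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim_map.mean()

loss = ssim_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
print(loss.item())
```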
[00490] The outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous. Historical reference cases (images, diagnoses) may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78).
Multi-stain graph fusion for multimodal integration in pathology to predict cancer grading
[00491] In some embodiments, a computer-implemented machine learning architecture is disclosed herein for performing multi-stain graph fusion for multimodal integration of a simulated stained PARS image and multiple non-registered stained histology images to predict pathologic scores, as shown in FIG. 71. This multimodal deep learning graph fusion process may use information from a simulated stained PARS image and multiple non-registered histopathology images 7110 to predict pathologic scores.
[00492] The simulated stained PARS image may be obtained from image generator 7010 (similar to image generator 3812, 3912), which may produce the simulated stained PARS image(s) based on PARS signals and features from the PARS system 6402.
[00493] In some embodiments, the deep learning model 6406 may receive a set of multidimensional input data. The set of multi-dimensional input data may be a set of multi-structured input data, which may include, for example, two or more of: PARS images 7020, PARS signals and features 7015, historical unregistered histology images 7110, and simulated stained images 7025 generated from selected PARS features. Some of the multi-structured input data may include PARS data and/or features from the PARS system 6402.
[00494] The deep learning model 6406 may be implemented to perform pixel-level classification of various stains. Output of the deep learning model 6406 may include heatmaps 7130, which are used to generate graphs by a graph generator 7130. A Graph Neural Network model 7150 is trained on a plurality of input image graphs. The trained Graph Neural Network model 7150 can generate inferences that represent tissue malignancy grading and/or probability scores. The generated inferences may be transmitted to a user application display device 6410 for further processing and/or display.
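For illustration under stated assumptions, the sketch below converts a stain-probability heatmap into a small patch graph (node features = mean class probabilities per patch, edges = 4-neighbour adjacency) and applies a plain-PyTorch graph-convolution step to produce graph-level grading logits; the grid size, feature dimensions, and grade count are hypothetical, and this is not the patented Graph Neural Network.

```python
import torch
import torch.nn as nn

def heatmap_to_graph(heatmap: torch.Tensor, grid: int = 8):
    """Split an (H, W, C) stain-probability heatmap into grid x grid patches and build a 4-neighbour graph."""
    h, w, c = heatmap.shape
    ph, pw = h // grid, w // grid
    feats, adj = [], torch.zeros(grid * grid, grid * grid)
    for i in range(grid):
        for j in range(grid):
            feats.append(heatmap[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].mean(dim=(0, 1)))
            idx = i * grid + j
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < grid and nj < grid:
                    adj[idx, ni * grid + nj] = adj[ni * grid + nj, idx] = 1.0
    return torch.stack(feats), adj

class TinyGCN(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 16, n_grades: int = 4):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(in_dim, hidden), nn.Linear(hidden, n_grades)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.shape[0])            # add self-loops
        d_inv = torch.diag(a_hat.sum(1).pow(-0.5))
        a_norm = d_inv @ a_hat @ d_inv                   # symmetric normalization
        x = torch.relu(self.lin1(a_norm @ x))
        return self.lin2((a_norm @ x).mean(dim=0))       # graph-level grading logits

node_features, adjacency = heatmap_to_graph(torch.rand(256, 256, 5))
print(TinyGCN(in_dim=5)(node_features, adjacency).shape)  # (4,)
```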
[00495] In some embodiments, the outputs of the deep learning model 6406 may include, for example, nuclei type, segmentation, and detection masks, which may be combined into a multi-layer image display containing segmented and/or detected nuclei including an annotated type of the nuclei. Nuclei types may include, for example: Epithelial, Fibroblast, Inflammatory, and Miscellaneous. Historical reference cases (images, diagnoses) may be provided that closely match the given case based on a computer-implemented content-based image retrieval (CBIR) system, such as a CBIR system 7800 (see e.g., FIG. 78).
Survival Analysis - Integration of PARS image data with genomic data
[00496] In some embodiments, a computer-implemented machine learning architecture 7200 may be configured to combine PARS image data and genomic data to obtain integrated predictions of patient survival. Referring now to FIG. 72, genomic data 7250 may include any signals derived from analysis of DNA or RNA, or mRNA derived from any sequencing or other nucleic acid analysis technique, including epigenetic features. Genomic data 7250 can be derived from germline analysis, bulk tumor analysis, single cell analysis, analysis of malignant cells or subsets thereof, or analysis of benign cells or subsets thereof, including benign stromal elements. The machine learning architecture 7200 has three parts: PARS image cluster processing, genomic data processing (for example, but not limited to, mRNA-seq analysis by WGCNA), and multi-modality survival analysis.
[00497] In terms of PARS image cluster processing, PARS images and PARS image features from a PARS system 6402 may be received as input by a patch clustering process 7210, during which patches are clustered into n categories, followed by a patch augmentation process 7220 (horizontal flip, vertical flip, and rotation). Each category of patches may serve as input into a deep neural network 7230 (e.g., a multi-instance fully convolutional network (MI-FCN) composed of multiple sub-networks with the same structure and shared weight parameters). The output from the deep neural network 7230 may undergo attention aggregation to obtain a deep learning risk score of a given patient.
[00498] The deep neural network 7230 may be implemented to receive PARS image features obtained by using techniques such as, but not limited to, Contourlet Transform (CT) (edge smoothness), Histogram (pixel strength distribution), Discrete Fourier Transform (DFT) (feature selection using frequency-domain information from the image), and Local Binary Pattern (LBP) (textural information). The outputs may undergo attention aggregation to obtain a deep learning risk score of a patient.
[00499] In terms of genomic data processing (in this case, not limited to mRNA-seq processing), eigengenes may be obtained by weighted gene co-expression network analysis (WGCNA) 7260. Then, modules may be selected by a least absolute shrinkage and selection operator (LASSO) process 7270 based on eigengenes. The top hub genes of the retained modules may then be extracted as risk factors.
[00500] Next, the deep learning risk score and hub genes may be integrated using the Cox proportional hazards model 7280. Based on the deep learning risk score from histopathology images, module hub genes from genetic data, and clinical characteristics (e.g., age and sex) 7240, a multi-input integrative prognosis machine learning model based on the Cox proportional hazards model 7280 is implemented and trained. The Cox proportional hazards model 7280 is a regression model to investigate the association between the survival time of patients and one or more predictor variables. The integrated model can estimate survival risk and calculate a comprehensive risk score by which patients can be categorized into low- or high-risk groups in a survival analysis module 7290.
Survival Analysis - Integration of PARS signal (extracted features) with genomic data
[00501] In some embodiments, a computer-implemented machine learning architecture 7300 may be configured to combine PARS image data and genomic data to obtain integrated predictions of patient survival. Referring now to FIG. 73, genomic data 7250 may include any signals derived from analysis of DNA or RNA, or mRNA derived from any sequencing or other nucleic acid analysis technique, including epigenetic features. Genomic data 7250 can be derived from germline analysis, bulk tumor analysis, single cell analysis, analysis of malignant cells or subsets thereof, or analysis of benign cells or subsets thereof, including benign stromal elements. The machine learning architecture 7300 has three parts: PARS data processing, genomic data processing (for example, but not limited to, mRNA-seq analysis by WGCNA), and multi-modality survival analysis.
[00502] In terms of PARS data processing, PARS features from a PARS system 6402 may be received as input by a deep neural network 6406, and the output from the deep neural network 6406 may undergo attention aggregation to obtain a deep learning risk score of a given patient.
[00503] In terms of genomic data processing (in this case, not limited to mRNA-seq processing), eigengenes may be obtained by weighted gene co-expression network analysis (WGCNA) 7260. Then, modules may be selected by a least absolute shrinkage and selection operator (LASSO) process 7270 based on eigengenes. The top hub genes of the retained modules may then be extracted as risk factors.
[00504] Next, the deep learning risk score and hub genes may be integrated using the Cox proportional hazards model 7280. Based on the deep learning risk score from the deep neural network 6406, module hub genes from genetic data, and clinical characteristics (e.g., age and sex) 7240, a multi-input integrative prognosis machine learning model based on the Cox proportional hazards model 7280 is implemented and trained. The Cox proportional hazards model 7280 may be a regression model to investigate the association between the survival time of patients and one or more predictor variables. The integrated model can estimate survival risk and calculate a comprehensive risk score by which patients can be categorized into low- or high-risk groups in a survival analysis module 7290.
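A minimal sketch of this integration step, using the lifelines implementation of the Cox proportional hazards model on synthetic placeholder data, is shown below; the column names, cohort size, and median-split threshold for the low-/high-risk grouping are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200  # hypothetical patient cohort

df = pd.DataFrame({
    "dl_risk_score": rng.normal(size=n),            # deep learning risk score (PARS branch)
    "hub_gene_1": rng.normal(size=n),               # module hub-gene expression (e.g. via WGCNA/LASSO)
    "hub_gene_2": rng.normal(size=n),
    "age": rng.integers(40, 85, size=n),            # clinical characteristics
    "sex": rng.integers(0, 2, size=n),
    "survival_months": rng.exponential(36, size=n),
    "event_observed": rng.integers(0, 2, size=n),   # 1 = event (e.g. death/progression) observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="event_observed")

# Comprehensive per-patient risk score; split into low-/high-risk groups at the median
risk = cph.predict_partial_hazard(df)
risk_group = np.where(risk > risk.median(), "high-risk", "low-risk")
print(cph.summary[["coef", "p"]])
print(risk_group[:10])
```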
Survival analysis based on PARS image data, historical unregistered histology images and genomic data
[00505] In some embodiments, as shown in FIG. 73, in addition to PARS signals and features from a PARS system 6402, historical unregistered histology images 7110 may also be used as an input to the deep neural network 6406. In some embodiments, genomic data 7250 may also be used as an input to the deep neural network 6406 to compute the risk score. The risk score and genomic data 7250 can be integrated using the Cox proportional hazards model 7280, which is a regression model to investigate the association between the survival time of patients and one or more predictor variables. The integrated model can estimate survival risk and calculate a comprehensive risk score by which patients can be categorized into low- or high-risk groups in a survival analysis module 7290.
Multimodal fusion with PARS images
[00506] In some embodiments, depending on the application, a computer-implemented system is implemented to perform image fusion of PARS image data 7410 with images 7405 from other modalities (e.g., Computed Tomography (CT), Magnetic Resonance Imaging (MRI)) based on image feature representations and a similarity measure, as shown in FIG. 74.
[00507] An embodiment system may perform registration of PARS image data with other imaging modalities such as CT and MRI; dimensional complexity may include image-to-volume (2D to 3D), image-to-image (2D to 2D), and volume-to-volume (3D to 3D).
[00508] An example process 7400 performed by the embodiment system is illustrated in FIG. 74. The multimodal image fusion can be accomplished through image registration techniques implemented with iterative optimization algorithms. In each iteration, better alignment can be achieved based on a predefined similarity measure 7450 that computes the amount of correspondence between the input images.
[00509] An optimization algorithm may calculate and update the new transformation/interpolation parameters. The operations continue until the optimal registration is achieved, or some predefined criteria are satisfied. The system's output can be either the transformation parameters 7480 or the final interpolated fused image 7470. The example process 7400 can enable measurement of 3D features and spatial localization of the findings of histology within the local 3D tissue environment. Combining PARS image data with high-resolution 2D/3D-imaging techniques such as micro-CT and micro-MRI (prior to sectioning) can provide access to morphological characteristics, relate histological findings to the 2D/3D structure of the local tissue environment, and enable guided sectioning of tissue.
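As a non-limiting sketch of such iterative, similarity-driven registration, the example below uses SimpleITK with a mutual-information similarity measure and a gradient-descent optimizer on synthetic placeholder images; the optimizer settings, transform type, and image contents are assumptions rather than parameters taken from this disclosure.

```python
import numpy as np
import SimpleITK as sitk

# Placeholder 2D images standing in for a PARS image (fixed) and another modality (moving)
fixed = sitk.GetImageFromArray(np.random.rand(128, 128).astype(np.float32))
moving = sitk.GetImageFromArray(np.random.rand(128, 128).astype(np.float32))

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)        # similarity measure
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)    # iterative optimization
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler2DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))

# Transform parameters are updated each iteration until the stopping criteria are met
transform = reg.Execute(fixed, moving)

# The output can be the transformation parameters or the final interpolated fused image
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
print(transform.GetParameters())
```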
Multimodal fusion of PARS signal/image (Deep learning)
[00510] In some embodiments, multimodal image fusion may be performed by a computer-implemented system configured to perform an example process 7500 shown in FIG. 75. The system performs registration of PARS image data 7510 with images 7505 from other imaging modalities such as CT and MRI via similarity metrics 7520 and a deep learning registration model 7530. Dimensional complexity may include image-to-volume (2D to 3D), image-to-image (2D to 2D), and volume-to-volume (3D to 3D). The system output may be a final 2D or 3D fused image, which may be transmitted to a user application display device 6410 for display. This fusion technique can enable measurement of 3D features and spatial localization of the findings of histology within the local 2D/3D tissue environment, and enable guided sectioning of tissue.
Customized PARS image staining based on user input
[00511] In accordance with yet another aspect, a customized staining deep learning model 7606 may be implemented as part of a machine learning architecture 7600 to perform customized PARS image staining, which may allow a user to control different aspects, such as the total number and nature of different stained images being displayed at a user application display device 6410. The user may, for example, mix, change, and combine stains in real time based on one or more specified criteria through user input 7610 received by the user application display device 6410.
[00512] The customized staining deep learning model 7606 may receive one or more PARS images and PARS features from a PARS system 6401 and user input 7610 as input. The user input 7610 may include user-defined criteria for generating one or more stained images. The customized staining deep learning model 7606 may generate one or more custom stained images for display at the user application display device 6410 based on the user criteria. Some example user criteria for custom staining that may be included in the user input 7610 may include:
• any nuclei cells smaller/larger than some specified value,
• any nuclei with atypical features,
• any nuclei with prominent nucleoli,
• any nuclei showing mitotic figures,
• any cell membrane with specific features,
• any cell with features of lymphocytes,
• any cell undergoing apoptosis,
• any cell undergoing cell division,
• any cell adjacent to lymphocytes,
• any smooth muscle cell,
• any cell adjacent to basement membrane,
• any connective tissue,
• any non-cellular material,
• any connective tissue with specific features,
• any adipocyte.
[00513] The customized staining deep learning model 7606 may include a feature extraction mechanism that may determine size, shape, features (for example, density), and structures to customize the colour map.
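As one illustrative sketch of applying a single user criterion from the list above (e.g., nuclei larger than a specified value), the code below recolours only the sufficiently large nuclei in a labelled mask; the area threshold, colour, and function names are hypothetical.

```python
import numpy as np
from skimage.measure import label, regionprops

def stain_large_nuclei(gray_image: np.ndarray, nuclei_mask: np.ndarray,
                       min_area_px: int = 150, color=(0.6, 0.1, 0.6)) -> np.ndarray:
    """Return an RGB overlay in which only nuclei above a user-specified area receive a custom stain colour."""
    rgb = np.stack([gray_image] * 3, axis=-1).astype(np.float64)
    rgb /= max(rgb.max(), 1e-9)
    for region in regionprops(label(nuclei_mask)):
        if region.area >= min_area_px:                 # user criterion: nuclei larger than a given size
            rr, cc = region.coords[:, 0], region.coords[:, 1]
            rgb[rr, cc] = 0.5 * rgb[rr, cc] + 0.5 * np.array(color)
    return rgb

overlay = stain_large_nuclei(np.random.rand(256, 256), np.random.rand(256, 256) > 0.99)
print(overlay.shape)
```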
General PARS AI model for classification, grading and fusion staining of PARS image data
[00514] In some embodiments, a computer-implemented machine learning architecture 7700 is implemented to perform detection, classification, and grading of tissue malignancy (cancer). The machine learning architecture 7700 may include a stain fusion deep learning model 7720 to produce a simulated stained/fused PARS image 7730 relevant to the predicted model outcome for display at the user application display device 6410. The stain fusion deep learning model 7720 may, in order to generate the relevant simulated stained/fused PARS image 7730, receive a plurality of simulated stained images (stain 1, stain 2, ..., stain n) from an image generator 7710, which can generate the plurality of simulated stained images based on PARS features from a PARS system 6402.
[00515] The machine learning architecture 7700 includes a diagnostic deep learning model 7706, which may include two deep neural network models to perform classification and grading, respectively, based on PARS features and images from the PARS system 6402. The output of the diagnostic deep learning model 7706 and the output of the image generator 7710 (n simulated stained images) may serve as input to the stain fusion deep learning model 7720 to generate the relevant simulated stained PARS image 7730.
[00516] The output 7750 of the diagnostic deep learning model 7706 for classification, grading, and staining of PARS image data may include a predicted class and grading, such as malignancy grading, and the output of the stain fusion deep learning model 7720 is a simulated stained PARS image 7730. The two outputs 7730, 7750 may be transmitted to the user application display device 6410 for further processing (if any) and display.
Content-based Image Retrieval (CBIR) System to assist pathologists in diagnosis
[00517] In some embodiments, a computer-implemented content-based image retrieval (CBIR) system 7800 is implemented to assist pathologists in diagnosis. The system 7800 may be configured to query one or more images, which may be, for example, a PARS image 7802, a simulated stained PARS image 7805, or a histology image, and retrieve similar images 7850 from an image repository 7810 based on the queried image 7802, 7805.
[00518] In some embodiments, images obtained from the repository 7810 are processed by an image feature extraction module 7820 to obtain semantically meaningful features (feature vector) which are then indexed (represented by index features 7840) based on their pair-wise differences computed with a distance measure 7830. The queried image 7802, 7805 can be processed by the same feature extraction module 7820 to generate a feature vector 7825 of the queried image. The feature vector 7825 of the queried image is then compared, by a distance measure module 7835, to the indexed features 7840 obtained based on the image from the image repository 7810. The final output 7850 is obtained by choosing one or more images from the image repository 7810 that are the closest to the queried image 7802, 7805 based on the computed distances generated by the distance measure module 7835. For example, if a computed distance D between the repository image and the queried image is beneath a certain threshold, the image from the image repository 7810 is considered sufficiently similar to the queried image 7802, 7805 to be included as part of the final output 7850, which may be used for diagnostic reporting.
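A minimal sketch of the retrieval step, under stated assumptions, is shown below: repository feature vectors are indexed with a nearest-neighbour structure, the queried image's feature vector is compared against them, and only matches beneath a distance threshold are returned. The feature dimensionality, number of neighbours, and threshold are placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical pre-computed feature vectors for the image repository (n_images x n_features)
repository_features = np.random.rand(5000, 128)
index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(repository_features)

def retrieve_similar(query_features: np.ndarray, distance_threshold: float = 2.0):
    """Return (repository index, distance) pairs whose distance to the query is beneath the threshold."""
    distances, indices = index.kneighbors(query_features.reshape(1, -1))
    keep = distances[0] < distance_threshold
    return list(zip(indices[0][keep], distances[0][keep]))

# Query with the feature vector of a PARS image or simulated stained PARS image
matches = retrieve_similar(np.random.rand(128))
print(matches)
```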
Explainable AI PARS Architecture
[00519] In some embodiments, a computer-implemented architecture 7900 may be implemented to generate meaningful information for explaining one or more PARS images from a PARS system 6402, as shown in FIG. 79. The architecture 7900 may include an Explainable AI PARS Module 7950, which may include a plurality of different modules. The AI PARS Module 7950 may receive as input one or more PARS images from a PARS system 6402, histology images 7920, and a user query 7930, in order to perform diagnostic analyses to assist a pathologist in diagnosis. The AI PARS Module 7950 may include one or more modules and machine learning models that represent classes of explanation-generation methods, which may include, for example: a deep learning diagnostic module, a deep learning saliency maps generator, a concept attribution generator, a prototypes generator, a counterfactuals generator, a trust scores generator, and a user query interpreter.
[00520] Within the AI PARS Module 7950, the deep learning diagnostic module can generate diagnostic predictions based on the PARS images. Global and local saliency maps from the deep learning saliency maps generator can explain model predictions by providing visualisations. The concept attribution generator can provide explanation of model predictions with the use of high-level concepts including synthetically generated visualisations and/or domain-related natural language. The prototypes generator can generate explanations of model inner workings. These explanations are provided through real or synthetically generated examples, such as typical instances of a particular category or feature. The counterfactuals generator can generate counterfactuals used to explain a model outcome by presenting other possible scenarios that lead to a different outcome. Counterfactual examples are synthetically generated visualisations or real data. The trust scores generator can generate trust scores or measures indicating trustworthiness of the model predictions and outcomes. The user query interpreter analyzes a user's input query (e.g., visualizing a specific part of tissue, specific sub-structures, indicators for a specific type of cancer, a count of nuclei of a specific size, etc. for a given patient).
[00521] The output of the AI PARS Module 7950 may include a collection of images, a quantitative measure, a presentation of similar cases, and/or a generated report in the form of domain-related natural language, which may be transmitted to the user application display device 6410 for display to a user.
[00522] In some embodiments, one or more simulated stain images from an image generator 7910 may be used as input to the AI PARS Module 7950 for performing diagnostic analyses to assist a pathologist in diagnosis. The one or more simulated stain images may also be transmitted to the user application display device 6410 for display to a user together with the output from the AI PARS Module 7950.
EXAMPLE APPLICATIONS
[00523] Aspects disclosed herein may include non-radiative (heat and pressure) and radiative (fluorescence is one of the possible signals) signals in a sample. Aspects disclosed herein may include collecting radiative relaxation and non-radiative relaxation due to optical absorption, and also scattering from both excitation and detection. The collected signals and/or raw data may be used to directly form and color an image of a sample, such as an H&E (hematoxylin and eosin) histology image, without staining the sample. H&E histology images may be directly formed and colorized by using methods (such as based on a comparison of non-radiative and radiative signals, QER, lifetime or evolution of signals, and/or a clustering algorithm) disclosed herein and using features in raw PARS signals. Aspects disclosed herein may be used to determine or measure, using a photon absorption remote sensing system or PARS, mechanical characteristics such as the speed of sound and/or temperature characteristics of the sample. A tiny or pinpointed area of the sample (e.g., a size of a focused laser beam or beam of light) may be used to measure these features or characteristics. Aspects disclosed herein may extract more than just an amplitude or scalar amplitude of signals in a sample. For example, two targets may have a same or similar optical absorption but slightly different other characteristics such as a different speed of sound, which may result in a different evolution and/or shape of the signals. Aspects disclosed herein may be used to determine or add novel molecular information to PARS images.
[00524] It will be apparent that other examples may be designed with different fiber-based or free-space components to achieve similar results. Other alternatives may include various coherence length sources, use of balanced photodetectors, interrogation-beam modulation, incorporation of optical amplifiers in the return signal path, etc.
[00525] During in vivo imaging experiments, no agent or ultrasound coupling medium is required. However, the target can be prepared with water or any liquid such as oil before a non-contact imaging session. As well, in some instances an intermediate window such as a cover slip or glass window may be placed between the imaging system and the sample.
[00526] Aspects disclosed herein may use a combination of a PARS device alongside an optical coherence tomography (OCT). OCT is a complementary imaging modality to PARS devices. OCT measurements can be performed using various approaches, either in the time domain optical coherence tomography (TD-OCT) or in frequency domain optical coherence tomography (FD-OCT) as described in US 2010/0265511 and US2014/0125952. In OCT systems, multiple A-scans are typically acquired while the sample beam is scanned laterally across the tissue surface, building up a two-dimensional map of reflectivity versus depth and lateral extent typically called a B-scan. The lateral resolution of the B-scan is approximated by the confocal resolving power of the sample arm optical system, which is usually given by the size of the focused optical spot in the tissue.
[00527] All optical sources including but not limited to PARS excitations, PARS detections, PARS signal enhancements, and OCT sources may be implemented as continuous beams, modulated continuous beams, or short pulsed lasers in which pulse widths may range from attoseconds to milliseconds. These may be set to any wavelength suitable for taking advantage of optical (or other electromagnetic) properties of the sample, such as scattering and absorption. Wavelengths may also be selected to purposefully enhance or suppress detection or excitation photons from different absorbers. Wavelengths may range from nanometer to micron scales. Continuous-wave beam powers may be set to any suitable power range such as from attowatts to watts. Pulsed sources may use pulse energies appropriate for the specific sample under test such as within the range from attojoules to joules. Various coherence lengths may be implemented to take advantage of interferometric effects. These coherence lengths may range from nanometers to kilometers. As well, pulsed sources may use any repetition rate deemed appropriate for the sample under test such as from continuous-wave to the gigahertz regime. The sources may be tunable, monochromatic or polychromatic.
[00528] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may include an interferometer, such as a Michelson interferometer, Fizeau interferometer, Ramsey interferometer, Fabry-Perot interferometer, Mach-Zehnder interferometer, or optical-quadrature detection. Interferometers may be free- space or fiber-based or some combination. The basic principle is that phase and amplitude oscillations in the probing receiver beam can be detected using interferometry and detected at AC, RF or ultrasonic frequencies using various detectors.
[00529] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may use and implement a non-interferometry detection design to detect amplitude modulation within the signal. The non-interferometry detection system may be free-space or fiber-based or some combination thereof.
[00530] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may use a variety of optical fibers such as photonic crystal fibers, image guide fibers, double-clad fibers etc.
[00531] The PARS subsystems may be implemented as a conventional photoacoustic remote sensing system, non-interferometric photoacoustic remote sensing (NI-PARS), camera-based photoacoustic remote sensing (C-PARS), coherence-gated photoacoustic remote sensing (CG-PARS), single-source photoacoustic remote sensing (SS-PARS), or extensions thereof.
[00532] In one example, all beams may be combined and scanned. In this way, PARS excitations may be sensed in the same area as they are generated and where they are the largest. OCT detection may also be performed in the same location as the PARS to aid in registration. Other arrangements may also be used, including keeping one or more of the beams fixed while scanning the others or vice versa. Optical scanning may be performed by galvanometer mirrors, MEMS mirrors, polygon scanners, stepper/DC motors, etc. Mechanical scanning of the sample may be performed by stepper stages, DC motor stages, linear drive stages, piezo drive stages, piezo stages, etc.
[00533] Both the optical scanning and mechanical scanning approaches may be leveraged to produce one-dimensional, two-dimensional, or three-dimensional scans about the sample. Adaptive optics such as TAG lenses and deformable mirrors may be used to perform axial scanning within the sample. Both optical scanning and mechanical scanning may be combined to form a hybrid scanner. This hybrid scanner may employ one-axis or two-axis optical scanning to capture large areas or strips in a short amount of time. The mirrors can potentially be controlled using custom control hardware to have customized scan patterns to increase scanning efficiency in terms of speed and quality. For example, one optical axis can be used to scan rapidly while, simultaneously, one mechanical axis can be used to move the sample. This may render a ramp-like scan pattern which can then be interpolated. Another example, using custom control hardware, would be to step the mechanical stage only when the fast axis has finished moving, yielding a Cartesian-like grid which may not need any interpolation.
[00534] PARS may provide 3D imaging by optical or mechanical scanning of the beams or mechanical scanning of the samples or the imaging head or the combination of mechanical and optical scanning of the beams, optics, and the samples. This may allow rapid structural and function en-face or 3D imaging.
[00535] One or multiple pinholes may be employed to reject out-of-focus light when optically or mechanically scanning the beams, mechanically scanning the samples or the imaging head, or using a combination of mechanical and optical scanning of the beams, optics, and samples. The pinholes may improve the signal-to-noise ratio of the resulting images.
[00536] Beam combiners may be implemented using dichroic mirrors, prisms, beamsplitters, polarizing beamsplitters, WDMs etc.
[00537] Beam paths may be focused on to the sample using different optical paths. Each of the single or multiple PARS excitation, detection, signal enhancement etc. paths and OCT paths may use an independent focusing element onto the sample, or all share a single (only one or exactly one) path or any combination. Beam paths may return from the sample using unique optical paths which are different from those optical paths used to focus on to the sample. These unique optical paths may interact with the sample at normal incidence, or may interact at some angle where the central beam axis forms an angle with the sample surface ranging from 5 degrees to 90 degrees.
[00538] For some applications such as in ophthalmic imaging, the imaging head may not implement any primary focusing element such as an objective lens to tightly focus the light onto the sample. Instead, the beams may be collimated, or loosely focused (so as to create a spot size much larger than the optical diffraction limit) while being directed at the sample. For example, ophthalmic imaging devices may direct a collimated beam into the eye, allowing the eye's lens to focus the beam onto the retina.
[00539] The imaging head may focus the beams into the sample at least to a depth of 50 nm. The imaging head may focus the beams into the sample at most to a depth of 10 mm. The added depth over previous PARS arises from the novel use of deeply-penetrating detection wavelengths as described above.
[00540] Light may be amplified by an optical amplifier prior to interacting with a sample or prior to detection. Light may be collected by photodiodes, avalanche photodiodes, phototubes, photomultipliers, CMOS cameras, CCD cameras (including EM-CCD, intensified-CCDs, back- thinned and cooled CCDs), spectrometers, etc. The detected signals may be amplified by an RF amplifier, lock-in amplifier, trans-impedance amplifier, or other amplifier configuration.
[00541] Modalities may be used for A-, B- or C- scan images for in vivo, ex vivo or phantom studies. The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may take the form of any embodiment common to microscopic and biological imaging techniques. Some of these may include but are not limited to devices implemented as a table-top microscope, inverted microscope, handheld microscope, surgical microscope, endoscope, or ophthalmic device, etc. These may be constructed based on principles known in the art.
[00542] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may be optimized in order to take advantage of a multi-focus design for improving the depth-of-focus of 2D and 3D imaging. The chromatic aberration in the collimating and objective lens pair may be harnessed to refocus light from a fiber into the object so that each wavelength is focused at a slightly different depth location. These chromatic aberrations may be used to encode depth information into the recovered PARS signals which may be later recovered using wavelength specific analysis approaches. Using these wavelengths simultaneously may also be used to improve the depth of field and signal to noise ratio (SNR) of the PARS images. During imaging, depth scanning by wavelength tuning may be performed.
[00543] PARS methods may provide lateral or axial discrimination on the sample by spatially encoding detection regions, such as by using several pinholes, or by the spectral content of a broadband beam.
[00544] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may be combined with other imaging modalities such as stimulated Raman microscopy, fluorescence microscopy, two-photon and confocal fluorescence microscopy, coherent anti-Stokes Raman scattering microscopy, Raman microscopy, other photoacoustic and ultrasound systems, etc. This could permit imaging of the microcirculation, blood oxygenation parameter imaging, and imaging of other molecularly-specific targets simultaneously, a potentially important task that is difficult to implement. A multi-wavelength visible laser source may also be implemented to generate photon absorption signals for functional or structural imaging.
[00545] Polarization analyzers may be used to decompose detected light into respective polarization states. The light detected in each polarization state may provide information about the sample. Phase analyzers may be used to decompose detected light into phase components. This may provide information about the sample.
[00546] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may detect generated signals in the detection beam(s) returning from the sample. These perturbations may include but are not limited to changes in intensity, polarization, frequency, phase, absorption, nonlinear scattering, and nonlinear absorption and could be brought on by a variety of factors such as pressure, thermal effects, etc.
[00547] Analog-based signal extraction may be performed along electrical signal pathways. Some examples of such analog devices may include but are not limited to lock-in amplifiers, peak-detections circuits, etc.
[00548] The PARS subsystem may detect temporal information encoded in the back- reflected detection beam. This information may be used to discriminate chromophores, enhance contrast, improve signal extraction, etc. This temporal information may be extracted using analog and digital processing techniques. These may include but are not limited to the use of lock-in amplifiers, Fourier transforms, wavelet transforms, intelligent algorithm extraction to name a few. In one example, lock in detection may be leveraged to extract PARS signals which are similar to known expected signals for extraction of particular chromophores such as DNA, cytochromes, red blood cells, etc.
[00549] The imaging head of the system may include closed-loop or open-loop adaptive optic components including but not limited to wave-front sensors, deformable mirrors, TAG lenses, etc. for wave-front and aberration correction. Aberrations may include de-focus, astigmatism, coma, distortion, 3rd-order effects, etc. The signal enhancement beam may also be used to suppress signals from undesired chromophores by purposely inducing a saturation effect such as photobleaching.
[00550] Various types of optics may be utilized to leverage their respective advantages. For example, axicons may be used as a primary objective to produce Bessel beams with a larger depth of focus as compared to that available by standard Gaussian beam optics. Such optics may also be used in other locations within beam paths as deemed appropriate. Reflective optics may also take the place of their respective refractive elements, such as the use of a reflective objective lens rather than a standard compound objective lens.
[00551] Optical pathways may include nonlinear optical elements for various related purposes such as wavelength generation and wavelength shifting. Beam foci may overlap at the sample but may also be laterally and axially offset from each other when appropriate by a small amount.
[00552] The TA-PARS, MP-PARS, Multi-Photon Excitation PARS, QER, lifetime PARS, and TD-PARS subsystems may be used as a spectrometer for sample analysis.
[00553] Other advantages that are inherent to the structure will be apparent to those skilled in the art. The embodiments described herein are illustrative and not intended to limit the scope of the claims, which are to be interpreted in light of the specification as a whole.
[00554] It will be understood that the system described herein may be used in various ways, such as those purposes described in the prior art, and also may be used in other ways to take advantage of the aspects described above. A non-exhaustive list of applications is discussed below.
[00555] The system may be used for imaging angiogenesis for different pre-clinical tumor models.
[00556] The system may be used for unmixing targets (e.g. detect, separate or otherwise discretize constituent species and/or subspecies) based on their absorption, scattering or frequency contents by taking advantage of different wavelengths, different pulse widths, different coherence lengths, repetition rates, exposure time, different evolution or lifetime of signals, quantum efficiency ratio and/or other comparisons of non-radiative and radiative signals, etc.
[00557] The system may be used to image with resolution up to and exceeding the diffraction limit.
[00558] The system may be used to image anything that absorbs light, including exogenous and endogenous targets and biomarkers.
[00559] The system may have some surgical applications, such as functional and structural imaging during brain surgery, use for assessment of internal bleeding and cauterization verification, imaging perfusion sufficiency of organs and organ transplants, imaging angiogenesis around islet transplants, imaging of skin-grafts, imaging of tissue scaffolds and biomaterials to evaluate vascularization and immune rejection, imaging to aid microsurgery, guidance to avoid cutting critical blood vessels and nerves.
[00560] The system may also have some gastroenterological applications, such as imaging vascular beds and depth of invasion in Barrett's esophagus and colorectal cancers. Depth of invasion, in at least some embodiments, is key to prognosis and metastatic potential. This may be used for virtual biopsy, Crohn's disease, monitoring of IBS, and inspection of the carotid artery. Gastroenterological applications may be combined or piggy-backed off of a clinical endoscope, and the miniaturized PARS system may be designed either as a standalone endoscope or to fit within the accessory channel of a clinical endoscope.
[00561] The system may also be used for clinical imaging of micro- and macro-circulation and pigmented cells, which may find use for applications such as in (1) the eye, potentially augmenting or replacing fluorescein angiography; (2) imaging dermatological lesions including melanoma, basal cell carcinoma, hemangioma, psoriasis, eczema, dermatitis, imaging Mohs surgery, imaging to verify tumor margin resections; (3) peripheral vascular disease; (4) diabetic and pressure ulcers; (5) burn imaging; (6) plastic surgery and microsurgery; (7) imaging of circulating tumor cells, especially melanoma cells; (8) imaging lymph node angiogenesis; (9) imaging response to photodynamic therapies including those with vascular ablative mechanisms; (10) imaging response to chemotherapeutics including anti-angiogenic drugs; (11) imaging response to radiotherapy.
[00562] The system may also be used for some histopathology imaging applications, such as frozen pathology, generating H&E-stain like images from tissue samples, virtual biopsy, etc. The system may be implemented to generate virtual stains and other types of images for various tissue preparations, such as, for example, formalin-fixed paraffin-embedded (FFPE) tissue blocks, formalin-fixed paraffin-embedded (FFPE) tissue slides, FFPE tissue sections, frozen pathology sections, formalin fixed tissue, freshly resected unprocessed tissue, freshly resected specimen, and so on. Within these tissue samples, visualization of macromolecules such as DNA, RNA, cytochromes, lipids, proteins, and so on may be performed.
[00563] The generated stains or images may be used for one or more histopathology imaging applications for different diseases including but not limited to: wound healing, angiogenesis and tissue regeneration, hypersensitivity, infection, inflammation, autoimmunity, scarring and fibrosis.
[00564] The system may be useful in estimating oxygen saturation using multi-wavelength PARS excitation in applications including: (1) estimating venous oxygen saturation where pulse oximetry cannot be used including estimating cerebrovenous oxygen saturation and central venous oxygen saturation. This could potentially replace catheterization procedures which can be risky, especially in small children and infants.
[00565] Oxygen flux and oxygen consumption may also be estimated by using PARS imaging to estimate oxygen saturation, and to estimate blood flow in vessels flowing into and out of a region of tissue.
[00566] The system may be useful in separating salient histological chromophores such as cell nuclei and the surrounding cytoplasm by leveraging their respective absorption spectra.
[00567] The systems may be used for unmixing targets using their absorption contents, scattering, phase, polarization or frequency contents by taking advantage of different wavelengths, different pulse widths, different coherence lengths, repetition rates, fluence, exposure time, etc.
[00568] Other examples of applications may include imaging of contrast agents in clinical or pre-clinical applications; identification of sentinel lymph nodes; non- or minimally-invasive identification of tumors in lymph nodes; non-destructive testing of materials; imaging of genetically-encoded reporters such as tyrosinase, chromoproteins, fluorescent proteins for pre-clinical or clinical molecular imaging applications; imaging actively or passively targeted optically absorbing nanoparticles for molecular imaging; and imaging of blood clots and potentially staging the age of the clots.
[00569] Other examples of applications may include clinical and pre-clinical ophthalmic applications; oxygen saturation measurement and retinal metabolic rate in diseases such as age-related macular degeneration, diabetic retinopathy and glaucoma, limbal vasculature and stem cells imaging, corneal nerve and neovascularization imaging, evaluating Schlemm canal changes in glaucoma patients, choroidal neovascularization imaging, anterior and posterior segments blood flow imaging and blood flow state, wound healing, angiogenesis and tissue regeneration, hypersensitivity, infection, inflammation, autoimmunity, and scarring and fibrosis.
[00570] The system may be used for measurement and estimation of metabolism within a biological sample leveraging the capabilities of both PARS and OCT. In this example the OCT may be used to estimate volumetric blood flow within a region of interest, and the PARS systems may be used to measure oxygen saturation within blood vessels of interest. The combination of these measurements then may provide estimation of metabolism within the region.
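A minimal sketch of how such a metabolism estimate could be assembled is shown below, assuming a simple Fick-principle calculation that combines an OCT-derived volumetric flow with PARS-derived saturations of feeding and draining vessels; the function name, units, and nominal hemoglobin parameters are illustrative assumptions, not values prescribed by this disclosure.

```python
def metabolic_rate_of_oxygen(flow_ul_per_min, so2_in, so2_out,
                             hemoglobin_g_per_dl=15.0, hufner_ml_o2_per_g=1.34):
    """Rough regional oxygen-consumption estimate (Fick principle).

    flow_ul_per_min: volumetric blood flow into the region (e.g., from OCT).
    so2_in / so2_out: oxygen saturation of feeding / draining vessels (0..1, e.g., from PARS).
    Hemoglobin concentration and the Huefner constant are nominal textbook values.
    Returns an oxygen consumption estimate in ml O2 per minute.
    """
    # Oxygen-carrying capacity of blood, converted from ml O2 per dL to ml O2 per uL.
    o2_capacity_ml_per_ul = hemoglobin_g_per_dl * hufner_ml_o2_per_g / 1e5
    return flow_ul_per_min * o2_capacity_ml_per_ul * (so2_in - so2_out)
```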
[00571] The system may be used for imaging head and neck cancer types and skin cancer types, functional brain activities, inspecting a stroke patient's vasculature to help locate clots, monitoring changes in neuronal and brain function/development as a result of changing gut bacteria composition, imaging atherosclerotic plaques, monitoring oxygen sufficiency following flap reconstruction, assessing perfusion sufficiency following plastic or cosmetic surgery, and imaging cosmetic injectables.
[00572] The system may be used for topology tracking of surface deformations. For example, the OCT may be used to track the location of the sample surface. Then corrections may be applied to a tightly focused PARS device using mechanisms such as adaptive optics to maintain alignment to that surface as scanning proceeds.
[00573] The system may be implemented in various different form factors appropriate to these applications such as a tabletop microscope, inverted microscope, handheld microscope, surgical microscope, ophthalmic microscope, endoscope, etc.
[00574] Aspects disclosed herein may be used with the following applications: imaging histological samples; imaging cell nuclei; imaging proteins; imaging DNA; imaging RNA; imaging lipids; imaging of blood oxygen saturation; imaging of tumor hypoxia; imaging of wound healing, burn diagnostics, or surgery; imaging of microcirculation; blood oxygenation parameter imaging; estimating blood flow in vessels flowing into and out of a region of tissue; imaging of molecularly-specific targets; imaging angiogenesis for pre-clinical tumor models; clinical imaging of micro- and macro-circulation and pigmented cells; imaging of the eye; augmenting or replacing fluorescein angiography; imaging dermatological lesions; imaging melanoma; imaging basal cell carcinoma; imaging hemangioma; imaging psoriasis; imaging eczema; imaging dermatitis; imaging Mohs surgery; imaging to verify tumor margin resections; imaging peripheral vascular disease; imaging diabetic and/or pressure ulcers; burn imaging; plastic surgery; microsurgery; imaging of circulating tumor cells; imaging melanoma cells; imaging lymph node angiogenesis; imaging response to photodynamic therapies; imaging response to photodynamic therapies having vascular ablative mechanisms; imaging response to chemotherapeutics; imaging frozen pathology samples; imaging paraffin-embedded tissues; generating H&E-like images; imaging oxygen metabolic changes; imaging response to anti-angiogenic drugs; imaging response to radiotherapy; estimating oxygen saturation using multi-wavelength PARS excitation; estimating venous oxygen saturation where pulse oximetry cannot be used; estimating cerebrovenous oxygen saturation and/or central venous oxygen saturation; estimating oxygen flux and/or oxygen consumption; imaging vascular beds and depth of invasion in Barrett's esophagus and/or colorectal cancers; functional and structural imaging during brain surgery; assessment of internal bleeding and/or cauterization verification; imaging perfusion sufficiency of organs and/or organ transplants; imaging angiogenesis around islet transplants; imaging of skin-grafts; imaging of tissue scaffolds and/or biomaterials to evaluate vascularization and/or immune rejection; imaging to aid microsurgery; guidance to avoid cutting blood vessels and/or nerves; imaging of contrast agents in clinical or pre-clinical applications; identification of sentinel lymph nodes; non- or minimally-invasive identification of tumors in lymph nodes; non-destructive testing of materials; imaging of genetically-encoded reporters, wherein the genetically-encoded reporters include tyrosinase, chromoproteins, and/or fluorescent proteins for pre-clinical or clinical molecular imaging applications; imaging actively or passively targeted optically absorbing nanoparticles for molecular imaging; imaging of blood clots; staging an age of blood clots; remote or non-invasive intratumoural assessment of glucose concentration by detection of endogenous glucose absorption peaks; assessment of organoid growth; monitoring of developing embryos; assessment of biofilm composition; assessment of tooth decay; assessment of non-living structures; evaluating the composition of paintings for non-invasive confirmation of authenticity; evaluation of archeological artifacts; manufacturing quality control; manufacturing quality assurance; replacing a catheterization procedure; gastroenterological applications; single-excitation pulse imaging over an entire field of view; imaging of tissue; imaging of cells; imaging of scattered light from object surfaces; imaging of absorption-induced changes of scattered light; or non-contact imaging of optical absorption.
[00575] Aspects disclosed herein may provide a computer-implemented method of visualizing features in a sample. The method may include receiving one or more photon absorption remote sensing (PARS) signals, clustering the received one or more PARS signals using a clustering algorithm to determine features of the sample, and determining an image based on the clustered PARS signals. Alternatively or in addition thereto, the method may include determining a ratio of non-radiative signals to radiative signals, determining a value that is a function of non-radiative signals and radiative signals, and/or comparing non-radiative signals, radiative signals, and/or scattering signals, and determining the image, including colors, based on the determined ratio, value, and/or comparison.
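One plausible reading of the ratio-based colorization is sketched below, assuming each pixel carries a single non-radiative and a single radiative amplitude; the endpoint colors and percentile normalization are arbitrary illustration details rather than parameters defined by this disclosure.

```python
import numpy as np

def ratio_to_rgb(nonradiative, radiative, eps=1e-6):
    """Colorize the per-pixel ratio of non-radiative to radiative PARS amplitude.

    nonradiative, radiative: 2-D arrays of equal shape (peak amplitude per pixel).
    Pixels dominated by non-radiative relaxation blend toward one color and pixels
    dominated by radiative relaxation toward another.
    """
    ratio = nonradiative / (radiative + eps)
    lo, hi = np.percentile(ratio, [1, 99])              # robust normalization against outliers
    t = np.clip((ratio - lo) / (hi - lo + eps), 0.0, 1.0)[..., None]
    color_radiative = np.array([0.2, 0.4, 0.9])         # radiative-dominated pixels (illustrative)
    color_nonradiative = np.array([0.9, 0.3, 0.5])      # non-radiative-dominated pixels (illustrative)
    return (1.0 - t) * color_radiative + t * color_nonradiative
```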
[00576] The PARS signals may be collected by generating signals in the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample, including for example at or below a surface of the sample, interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample, and detecting a portion of the interrogation beam returning from the sample. Generating signals may include generating pressure, temperature, and fluorescence (and/or other radiative and/or non-radiative signals). The returned portion of the interrogation beam may be indicative of the generated pressure and temperature signals. The PARS signals may be further collected by detecting fluorescence signals from the excitation location of the sample while detecting the generated pressure and temperature signals. The PARS signals may be further collected by redirecting a portion of the returned interrogation beam and detecting an interaction with the sample.
[00577] A wavelength of the excitation beam may be configured such that the sample absorbs two or more photons simultaneously, wherein a sum of energy of the two or more photons may be equal to a predetermined energy. The method may include collecting the PARS signals.
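For context, the energy-sum condition can be checked numerically as below; the 532 nm / 266 nm pairing is only an example of two identical photons whose energies sum to a target single-photon energy, not a wavelength prescribed by this disclosure.

```python
PLANCK_H = 6.62607015e-34  # J*s
C_LIGHT = 2.99792458e8     # m/s

def photon_energy_joules(wavelength_nm):
    """Single-photon energy E = h*c/lambda."""
    return PLANCK_H * C_LIGHT / (wavelength_nm * 1e-9)

# Two photons at 532 nm deposit the same total energy as one photon at 266 nm,
# so a 532 nm excitation beam can, in principle, address a 266 nm absorption band
# through simultaneous two-photon absorption.
assert abs(2 * photon_energy_joules(532) - photon_energy_joules(266)) < 1e-21
```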
[00578] Clustering the received PARS signals may be based on shape. The method may not include analyzing a reconstructed grayscale image to determine the image. Clustering the received PARS signals may not be based on a scalar amplitude. The method may not include mapping or visualizing a scalar amplitude. The PARS signals may be indicative of temperature characteristics of the sample. The PARS signals may be indicative of a speed of sound in the sample. The PARS signals may be indicative of molecular information. The PARS signals may be indicative of characteristics in the sample in an area having a size defined by a focused beam of light. Receiving the PARS signals may include receiving time domain (TD) signals.
[00579] The method may include determining cluster centroids based on the clustered PARS signals. The determined cluster centroids may include characteristic time-domain signals. Receiving the PARS signals may include receiving backscattering intensity, radiative signals, and non-radiative relaxation time-domain signals.
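A minimal sketch of such shape-based clustering is given below, assuming k-means as the clustering algorithm (the disclosure does not mandate a particular algorithm). Normalizing each trace to unit norm keeps the clustering sensitive to temporal shape rather than scalar amplitude, and the returned centroids play the role of characteristic time-domain signals.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_td_signals(td_signals, n_clusters=4, random_state=0):
    """Cluster PARS time-domain signals by shape rather than amplitude.

    td_signals: array of shape (n_pixels, n_samples), one time-domain trace per pixel.
    Returns per-pixel labels and the cluster centroids (characteristic traces).
    """
    # Normalize each trace to unit norm so clustering reflects shape, not amplitude.
    norms = np.linalg.norm(td_signals, axis=1, keepdims=True)
    shapes = td_signals / np.where(norms > 0, norms, 1.0)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(shapes)
    return km.labels_, km.cluster_centers_
```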
[00580] Receiving the PARS signals may include receiving radiative PARS signals and non-radiative PARS signals. The method may further include determining a ratio of and/or value based on the radiative PARS signals and the non-radiative PARS signals. The ratio and/or value may be plotted against quantum efficiency (QE) values. The method may include determining an image and/or biomolecular information based on the ratio and/or value.
[00581] The method may include determining a decay or evolution time based on the received PARS signals. Determining the image may include determining one or more colors based on the clustering. The method may include displaying the image on a display.
[00582] Systems and techniques disclosed herein may provide a photon absorption remote sensing (PARS) system for imaging features in a sample. The system may include an excitation light source configured to generate signals in the sample at an excitation location, the excitation light source being focused at or below the sample, including at or below a surface of the sample, an interrogation light source configured to interrogate the sample and directed toward the excitation location of the sample, the interrogation light source being focused at or below the sample, a portion of the at least one interrogation light source returning from the sample that is indicative of the generated signals, and a processor configured to execute a clustering algorithm to cluster the generated signals and determine an image based on the clustered generated signals, the image being indicative of features in the sample. The system may include a display configured to display the determined image. The image may be formed directly from the received signals.
[00583] The processor may be configured to determine one or more colors based on the clustering. The determined colors may include purple, blue, and pink such that the image is configured to resemble a hematoxylin and eosin (H&E) stained image.
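A hypothetical label-to-color mapping of that kind is sketched below; the specific RGB values and the assignment of clusters to nuclei, cytoplasm, or background are illustrative assumptions only.

```python
import numpy as np

# Hypothetical mapping from cluster labels to H&E-like colors (RGB in [0, 1]).
HE_PALETTE = np.array([
    [0.45, 0.25, 0.60],   # purple  (e.g., nuclei)
    [0.35, 0.35, 0.75],   # blue    (e.g., dense chromatin)
    [0.90, 0.60, 0.70],   # pink    (e.g., cytoplasm / stroma)
    [0.98, 0.96, 0.97],   # near-white background
])

def labels_to_he_image(labels):
    """Turn a 2-D array of integer cluster labels into an H&E-resembling RGB image."""
    return HE_PALETTE[labels % len(HE_PALETTE)]
```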
[00584] Systems and techniques disclosed herein may provide a computer-implemented method of visualizing features in a sample. The method may include receiving one or more signals, clustering the received signals based on shape using a clustering algorithm to determine time-domain features of the sample, and determining an image, including one or more colors used in the image, based on the clustered signals and determined time-domain features.
[00585] The method may include determining vector angles from the received one or more signals. Clustering the received signals based on shape may include clustering the received signals based on the vector angles. The one or more signals may include at least one of non-radiative signals or radiative signals. The one or more signals may include at least one of non-radiative heat signals or non-radiative pressure signals. The one or more signals may include radiative fluorescence signals. The radiative fluorescence signals may be radiative autofluorescence signals. The non-radiative and radiative signals may include pressure signals, temperature signals, ultrasound signals, autofluorescence signals, nonlinear scattering, and/or nonlinear fluorescence signals.
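For example, vector angles against a reference trace could be computed as below; the use of a single reference trace (rather than, say, cluster centroids) is an assumption made for illustration.

```python
import numpy as np

def vector_angles(td_signals, reference):
    """Angle (radians) between each time-domain trace and a reference trace.

    Treating each trace as a vector, the angle ignores overall amplitude, so it is a
    natural shape feature for angle-based clustering.
    """
    sig_norm = np.linalg.norm(td_signals, axis=1)
    ref_norm = np.linalg.norm(reference)
    cos = td_signals @ reference / (sig_norm * ref_norm + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```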
[00586] Aspects disclosed herein may provide a computer-implemented method of visualizing features in a sample. The method may include receiving signals, the signals including non-radiative and radiative signals from the sample, clustering the received signals using a clustering algorithm to determine features of the sample, and determining an image based on the clustered signals. The non-radiative signals may include heat signals and pressure signals, and the radiative signals may include fluorescence signals. The entire set of non-radiative and radiative relaxation signals may be received, such as pressure signals, temperature signals, ultrasound signals, autofluorescence signals, nonlinear scattering, and nonlinear fluorescence.
[00587] At least some of the signals may be collected by generating signals in the sample at an excitation location using an excitation beam, interrogating the sample with an interrogation beam directed toward the excitation location of the sample, and detecting a portion of the interrogation beam returning from the sample. At least some of the signals may be collected by detecting optical absorption and scattering from the sample. The optical absorption and scattering may occur from excitation and detection of the sample.
[00588] Aspects disclosed herein may provide a method of visualizing features in a sample. The method may include receiving one or more signals, clustering the received signals based on shape using a clustering algorithm to determine features of the sample, the shape being based on a vector, and determining an image, including one or more colors used in the image, based on the clustered signals and determined features.
[00589] In this disclosure, the word "comprising" is used in its non-limiting sense to mean that items following the word are included, but items not specifically mentioned are not excluded. A reference to an element by the indefinite article "a" does not require that there be one and only one of the elements.
[00590] The scope of the following claims should not be limited by the preferred embodiments set forth in the examples and in the drawings but should be given the broadest interpretation consistent with the description as a whole.

Claims

A computer-implemented method for analyzing a sample, the method comprising: receiving, from the sample, a plurality of signals including radiative and non-radiative signals; extracting a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and applying the plurality of features to a machine learning architecture to generate an inference regarding the sample.
The computer-implemented method of claim 1, wherein processing the plurality of signals comprises: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
The computer-implemented method of claim 1, wherein said extracting the plurality of features includes processing both radiative signals and non-radiative signals.
The method of claim 1, wherein the plurality of signals include absorption spectra signals.
The computer-implemented method of claim 1, wherein the plurality of signals include scattering signals.
The computer-implemented method of claim 1, wherein the sample is an in vivo or an in situ sample.
The computer-implemented method of claim 1, wherein the sample is not stained.
The computer-implemented method of claim 1, wherein the sample is stained.
The computer-implemented method of claim 1, wherein the plurality of features is supplemented with at least one of features informative of image data obtained from complementary modalities.
The computer-implemented method of claim 9, wherein the complementary modalities comprises: at least one image from: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI); and one or more photoactive labels for contrasting or highlighting specific regions in the at least one image.
The computer-implemented method of claim 1, wherein the plurality of features is supplemented with at least one of features informative of patient information.
The computer-implemented method of claim 1, wherein said processing includes converting the at least one of the plurality of signals to at least one image.
The computer-implemented method of claim 12, wherein said converting to said at least one image includes applying a simulated stain.
The computer-implemented method of claim 13, wherein the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain, Immunohistochemistry (IHC), histochemical stain, and In-Situ Hybridization (ISH).
The computer-implemented method of claim 14, wherein the simulated stain is applicable to a frozen tissue section, a preserved tissue sample, or a fresh unprocessed tissue.
The computer-implemented method of claim 12, wherein said converting to said at least one image includes converting to at least two images, and applying a different simulated stain to each of the images.
The computer-implemented method of claim 12, wherein said converting includes applying a colorization machine learning architecture.
The computer-implemented method of claim 17, wherein the colorization machine learning architecture includes at least one Generative Adversarial Network (GAN).
The computer-implemented method of claim 18, wherein the GAN comprises one of: a cycle-consistent generative adversarial network (CycleGAN) and a conditional generative adversarial network (cGAN).
The computer-implemented method of claim 1, wherein the inference comprises at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
The computer-implemented method of claim 1, further comprising generating signals for causing to render, at a display device, a user interface (UI) showing a visualization of the inference.
The computer-implemented method of claim 1, wherein the contrast comprises one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
The computer-implemented method of claim 1, wherein the inference comprises a probability of a disease for at least one region in the sample, the probability of the disease determined based on the plurality of features and complementary data streams received by the machine learning architecture.
The computer-implemented method of claim 23, wherein the inference further comprises a heat map identifying one or more regions of the sample and a corresponding probability of a disease for each of the one or more regions of the sample.
The computer-implemented method of claim 24, wherein the corresponding probability of a disease for each of the one or more regions of the sample is illustrated by a corresponding intensity of a color shown in the respective region in the heat map.
The computer-implemented method of claim 1, wherein the non-radiative signals comprise at least one of: a photothermal signal and a photoacoustic signal.
The computer-implemented method of claim 1, wherein the radiative signals comprise one or more autofluorescence signals.
The computer-implemented method of claim 1, wherein the plurality of signals comprise radiative and non-radiative absorption relaxation signals.
A computer-implemented method for training a machine learning architecture for generating a simulated stained image, the machine learning architecture including a neural network having a plurality of nodes and weights stored on a memory device, the method comprising, in each training iteration: obtaining a true total absorption (TA) image; generating a simulated stained image based on the true TA image; generating a fake TA image based on the generated stained image; computing a first loss based on the generated fake TA image and the true TA image; obtaining a labelled and stained image; computing a second loss based on the generated simulated stained image and the labelled and stained image; and updating weights of the neural network based on at least one of the first and second losses.
The method of claim 29, wherein the simulated stained image is generated by a second neural network comprising a second set of nodes and weights, the second set of weights being updated based on at least one of the first and second losses during each iteration.
The method of claim 29, wherein the fake TA image is generated by a third neural network comprising a third set of nodes and weights, the third set of weights being updated based on at least one of the first and second losses during each iteration.
The method of claim 29, wherein computing the second loss based on the generated simulated stained image and the labelled and stained image comprises: processing the generated simulated stained image by a first discriminator network; processing the labelled and stained image by a second discriminator network; and computing the second loss based on a respective output from each of the first and second discriminator networks.
The method of claim 32, further comprising processing the respective output from each of the first and second discriminator networks through a respective classification matrix prior to computing the second loss.
The method of claim 29, wherein the machine learning architecture comprises at least one Generative Adversarial Network (GAN).
The method of claim 34, wherein the GAN comprises one of: a cycle-consistent generative adversarial network (CycleGAN) and a conditional generative adversarial network (cGAN).
The method of claim 29, wherein the labelled and stained image is a labelled PARS image.
The method of claim 36, wherein the labeled PARS image is automatically labelled, prior to training of the neural network, based on an unlabeled PARS image.
The method of claim 37, wherein automatically labelling the unlabeled PARS image comprises labelling the unlabeled PARS image based on an existing labelled stained image from a database, wherein the existing labelled stained image and the unlabeled PARS image share structural similarities.
A computer system for analyzing a sample, the system comprising: a processor operating in conjunction with computer memory and non-transitory computer-readable storage, the processor configured to: receive, from the sample, a plurality of signals including radiative and non-radiative signals; extract a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and apply the plurality of features to a machine learning architecture to generate an inference regarding the sample.
The system of claim 39, wherein processing the plurality of signals comprises: exciting the sample at an excitation location using an excitation beam, the excitation beam being focused at or below the sample; and interrogating the sample with an interrogation beam directed toward the excitation location of the sample, the interrogation beam being focused at or below the sample.
The system of claim 39, wherein said extracting the plurality of features includes processing both radiative signals and non-radiative signals.
The system of claim 39, wherein the plurality of signals include absorption spectra signals.
The system of claim 39, wherein the plurality of signals include scattering signals.
The system of claim 39, wherein the sample is an in vivo or an in situ sample.
The system of claim 39, wherein the sample is not stained.
The system of claim 39, wherein the sample is stained.
The system of claim 39, wherein the plurality of features is supplemented with at least one of features informative of image data obtained from complementary modalities.
The system of claim 39, wherein the plurality of signals comprise radiative and non-radiative absorption relaxation signals.
The system of claim 48, wherein the complementary modalities comprises: at least one image from: ultrasound imaging, a positron emission tomography (PET) scan, a computerized tomography (CT) scan, and magnetic resonance imaging (MRI); and one or more photoactive labels for contrasting or highlighting specific regions in the at least one image.
The system of claim 39, wherein the plurality of features is supplemented with at least one of features informative of patient information.
The system of claim 39, wherein said processing includes converting the at least one of the plurality of signals to at least one image.
The system of claim 39, wherein said converting to said at least one image includes applying a simulated stain.
The system of claim 52, wherein the simulated stain includes at least one of: Hematoxylin and Eosin (H&E) stain, Jones’ Stain (MPAS), PAS and GMS stain, Toluidine Blue, Congo Red, Masson's Trichrome Stain, Lillie's Trichrome, and Verhoeff Stain, Immunohistochemistry (IHC), histochemical stain, and In-Situ Hybridization (ISH).
The system of claim 53, wherein the simulated stain is applicable to a preserved tissue sample, a frozen tissue section, or a fresh unprocessed tissue.
The system of claim 51, wherein said converting to said at least one image includes converting to at least two images, and applying a different simulated stain to each of the images.
The system of claim 51, wherein said converting includes applying a colorization machine learning architecture.
The system of claim 55, wherein the colorization machine learning architecture includes at least one Generative Adversarial Network (GAN).
The system of claim 57, wherein the GAN comprises one of: a cycle-consistent generative adversarial network (CycleGAN) and a conditional generative adversarial network (cGAN).
The system of claim 39, wherein the inference comprises at least one of: survival time; drug response; drug resistance; phenotype characteristics; molecular characteristics; mutational burden; tumor molecular characteristics; parasite; toxicity; inflammation; transcriptomic features; protein expression features; patient clinical outcomes; a suspicious signal; a biomarker location or value; cancer grade; cancer subtype; a tumor margin region; and groupings of cancerous cells based on cell size and shape.
The system of claim 39, wherein the processor is further configured to generate signals for causing to render, at a display device, a user interface (UI) showing a visualization of the inference.
The system of claim 39, wherein the contrast comprises one of: an absorption contrast, a scattering contrast, an attenuation contrast, an amplitude contrast, a phase contrast, a decay rate contrast, and a lifetime decay rate contrast.
The system of claim 39, wherein the inference comprises a probability of a disease for at least one region in the sample, the probability of the disease determined based on the plurality of features and complementary data streams received by the machine learning architecture.
The system of claim 62, wherein the inference further comprises a heat map identifying one or more regions of the sample and a corresponding probability of a disease for each of the one or more regions of the sample.
The system of claim 63, wherein the corresponding probability of a disease for each of the one or more regions of the sample is illustrated by a corresponding intensity of a color shown in the respective region in the heat map.
The system of claim 39, wherein the non-radiative signals comprise at least one of: a photothermal signal and a photoacoustic signal.
The system of claim 39, wherein the radiative signals comprise one or more autofluorescence signals.
A non-transitory computer readable medium, storing machine-interpretable instruction sets which when executed by a processor, cause the processor to: receive, from the sample, a plurality of signals including radiative and non-radiative signals; extract a plurality of features based on processing at least one of the plurality of signals, the features informative of a contrast provided by the at least one of the plurality of signals; and apply the plurality of features to a machine learning architecture to generate an inference regarding the sample.
The non-transitory computer readable medium of claim 67, wherein the inference comprises a probability of a disease for at least one region in the sample, the probability of the disease determined based on the plurality of features and complementary data streams received by the machine learning architecture.
The non-transitory computer readable medium of claim 68, wherein the inference further comprises a heat map identifying one or more regions of the sample and a corresponding probability of a disease for each of the one or more regions of the sample.
The non-transitory computer readable medium of claim 69, wherein the corresponding probability of a disease for each of the one or more regions of the sample is illustrated by a corresponding intensity of a color shown in the respective region in the heat map.
The non-transitory computer readable medium of claim 67, wherein the non-radiative signals comprise at least one of: a photothermal signal and a photoacoustic signal.
The non-transitory computer readable medium of claim 67, wherein the radiative signals comprise one or more autofluorescence signals.
A computer system for training a machine learning architecture, the system comprising: a processor operating in conjunction with computer memory and non-transitory computer-readable storage, the processor configured to, in each training iteration: instantiate a machine learning architecture including a neural network having a plurality of nodes and weights stored on a memory device; obtain a true total absorption (TA) image; generate a simulated stained image based on the true TA image; generate a fake TA image based on the generated stained image; compute a first loss based on the generated fake TA image and the true TA image; obtain a labelled and stained image; compute a second loss based on the generated simulated stained image and the labelled and stained image; and update weights of the neural network based on at least one of the first and second losses.
PCT/CA2023/051497 2022-11-09 2023-11-09 Machine-learning processing for photon absorption remote sensing signals WO2024098153A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US202263382906P 2022-11-09 2022-11-09
US63/382,906 2022-11-09
US202263424647P 2022-11-11 2022-11-11
US63/424,647 2022-11-11
US202363443838P 2023-02-07 2023-02-07
US63/443,838 2023-02-07
US202363453371P 2023-03-20 2023-03-20
US63/453,371 2023-03-20

Publications (1)

Publication Number Publication Date
WO2024098153A1 true WO2024098153A1 (en) 2024-05-16

Family

ID=91031579

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2023/051497 WO2024098153A1 (en) 2022-11-09 2023-11-09 Machine-learning processing for photon absorption remote sensing signals

Country Status (1)

Country Link
WO (1) WO2024098153A1 (en)
