WO2021067507A1 - Construction of digital transfer functions on 3D optical microscopy images using deep learning - Google Patents
Construction of digital transfer functions on 3D optical microscopy images using deep learning
- Publication number
- WO2021067507A1 (PCT/US2020/053644)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image type
- imaging domain
- images
- dimensional
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/0004—Microscopes specially adapted for specific applications
- G02B21/002—Scanning microscopes
- G02B21/0024—Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
- G02B21/0052—Optical details of the image generation
- G02B21/0076—Optical details of the image generation arrangements using fluorescence or luminescence
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/0004—Microscopes specially adapted for specific applications
- G02B21/002—Scanning microscopes
- G02B21/0024—Confocal scanning microscopes (CSOMs) or confocal "macroscopes"; Accessories which are not restricted to use with CSOMs, e.g. sample holders
- G02B21/008—Details of detection or image processing, including general computer control
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/36—Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
- G02B21/365—Control or image processing arrangements for digital or video microscopes
- G02B21/367—Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/50—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
- G02B30/52—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/06—Means for illuminating specimens
- G02B21/08—Condensers
- G02B21/12—Condensers affording bright-field illumination
Definitions
- Modern microscopy has enabled biologists to gain new insights into many different aspects of cell and developmental biology. To do so, biologists choose the appropriate microscopy settings for their specific research purposes, including different microscope modalities, magnifications, resolution settings, laser powers, etc.
- Figure 6 is an image diagram showing results of the inventors' four different experiments.
- Figure 8 is a chart diagram showing the results of application-specific validation for Lamin B1 magnification transfer in the four experiments.
- Figure 13 is a chart diagram showing the comparison of predicted images to real images for the additional intracellular structures.
- Figure 14 is an image diagram showing the facility’s coarse segmentation of mTagRFP-T-nucleophosmin.
- Figure 15 is a chart diagram showing a comparison of the volume of nucleophosmin segmented by the facility based upon actual target images versus predicted images.
- Figure 21 is a chart diagram showing texture contrast correlation between predicted and target images for SMC1A.
- Figure 22 is a chart diagram showing image texture measures for SMC1A and H2B.
- Figure 23 is an image diagram showing sample results of a cross modality transfer function on Lamin B1 images.
- Figure 24 shows sample results of a cross-modality transfer function on H2B images.
- Figure 25 is an image diagram showing sample results of an SNR transfer function.
- Figure 26 is an image diagram showing sample results of a binary-to-realistic microscopy image transfer function.
- Figure 27 is an image diagram showing an example of transferring microscopy images into binary masks on the same 3D mitochondrial images as in Application 5, but with the source type and target type swapped.
- Figure 28 is an image diagram showing the result of a composite transfer function.
- Figure 29 is a data flow diagram showing multiple approaches available to predict high-magnification fluorescent images from lower-magnification brightfield images.
- Figure 30 is an image diagram showing actual images captured of a sample.
- enhanced-resolution microscopy may provide images of increased resolution, but generally does so at the cost of either speed or large fields of view (FOVs).
- a lower-magnification air objective may permit acquisition of long-duration timelapse imaging with decreased photodamage and with a larger FOV to image — for example, entire colonies of cells (instead of a handful of cells) in the image — but does so with reduced resolution and magnification.
- transfer functions each transform a “source type image” into a “target type image,” and include, but are not limited to, those that transfer images between different magnifications, different microscope objectives, different resolutions, different laser powers, different light microscope modalities, different signal to noise ratios (SNRs) and even between binary simulated images and realistic microscopy images and between microscopy images and their binary masks (sometimes referred to as segmentation).
- the facility trains and applies a separate instance of its models for each of one or more imaging domains.
- these imaging domains are defined at varying levels of specificity, such as genetic material; cells; animal cells and plant cells; etc.
- Embodiments of the facility use two main workflows to create this collection of deep learning-based 3D light microscopy transfer functions: a paired workflow based on a Conditional Generative Adversarial Network (cGAN); and an unpaired workflow based on a Generative Adversarial Network with Cycle Consistency (CycleGAN).
- the paired workflow uses specialized biological experiments to acquire aligned matched training images of the two types to be transferred between, which are either inherently aligned or aligned computationally by the facility.
- while the unpaired workflow does not rely on such alignment, it is instead limited by a lack of direct physical correspondence between samples.
- the predicted nuclear texture using the paired workflow would have both realistic and “spatially correct” nuclear texture, e.g., a high correspondence between images in a voxel-by-voxel manner when compared to the target enhanced-resolution image.
- the inventors have explored different methodologies for biology- driven validation of the prediction results.
- the prediction from the source type image will rarely be identical to the real target type image.
- the real source type images and the real target type images may not capture exactly the same actual physical locations, especially along z, due to different imaging modalities or z-steps. Therefore, the actual z positions in the biological sample represented in the prediction and in the target type image might not be identical.
- the target type images can only be used as a reference. In other words, there is often no absolute ground truth. (The term “ground truth” is used herein to refer to the target type images used as reference, even though they are not the truth in the absolute sense.) In addition, no machine learning model is perfect; each will typically incur some discrepancy between prediction and ground truth.
- the facility makes available three-dimensional microscopy results that are improved in a variety of ways.
- Figure 1 is a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the facility operates.
- these computer systems and other devices 100 can include server computer systems, cloud computing platforms or virtual machines in other configurations, desktop computer systems, laptop computer systems, netbooks, mobile phones, personal digital assistants, televisions, cameras, automobile computers, electronic media players, hand-held devices, etc.
- the computer systems and devices include zero or more of each of the following: a processor 101 for executing computer programs and/or training or applying machine learning models, such as a CPU, GPU, TPU, NNP, FPGA, or ASIC; a computer memory 102 for storing programs and data while they are being used, including the facility and associated data, an operating system including a kernel, and device drivers; a persistent storage device 103, such as a hard drive or flash drive for persistently storing programs and data; a computer-readable media drive 104, such as a floppy, CD-ROM, or DVD drive, for reading programs and data stored on a computer-readable medium; and a network connection 105 for connecting the computer system to other computer systems to send and/or receive data, such as via the Internet or another network and its networking hardware, such as switches, routers, repeaters, electrical cables and optical fibers, light emitters and receivers, radio transmitters and receivers, and the like. While computer systems configured as described above are typically used to support the operation of the facility, those skilled in the art will appreciate that the facility may be implemented using devices of various types and configurations, and having various components.
- Figure 2 is a data flow diagram illustrating the paired workflow used by the facility in some embodiments.
- the facility collects images of two different types (“source type” and “target type”), but of the identical cells 201 in identical samples (e.g., fixed samples).
- the images are inherently fully aligned, while in other applications one or more computational steps are performed to create aligned images.
- images 213 of the source type are collected by a computer or other interface device 212 from a first microscope or other imaging device 211 having source characteristics
- images 223 of the target type are collected by a computer or other interface device 222 from a second microscope or other imaging device 221 having target characteristics.
- the facility then feeds the image pairs into the deep learning module 230 to train the transfer function.
- the facility applies the trained model 240 (the “transfer function”) to transfer (or "transform”) images of the source type to predicted images comparable to the target type.
- the deep learning module of the paired workflow has two parts: a conditional GAN (cGAN) and an Auto-Align module.
- the cGAN generates target type images based on the features of the input source type images.
- the cGAN uses two common network backbones: U-Net and EDSR.
- the facility uses U-Nets as described in Ronneberger O., Fischer P., Brox T., U-Net: Convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015 Oct 5 (pp. 234-241), Springer, Cham., which is hereby incorporated by reference in its entirety.
- the facility uses EDSRs as described in Lim B., Son S., Kim H., Nah S., Mu Lee K., Enhanced deep residual networks for single image super-resolution, Proceedings of the IEEE conference on computer vision and pattern recognition workshops 2017 (pp. 136-144), which is hereby incorporated by reference in its entirety.
- the Auto-Align module estimates the misalignment at the sub-pixel level to further align the training pairs. This allows the pixel-wise loss normally used in cGAN training to be calculated between more highly aligned training pairs. The pixel-wise correspondence between the source and target types is thus improved, which encourages the cGAN to generate more biologically valid images (e.g., for quantitative analysis).
- the cGAN network used by the facility in the paired workflow is adapted from processing 2D images to processing 3D images.
- the cGAN’s 2D convolutional kernel is modified to process 3D images. Because 3D microscope images are typically not isotropic, that is, the Z-resolution is lower than the resolution in the X and Y dimensions, the facility employs anisotropic operations in the neural network. For convolutional layers, kernel size 3 pixels with the stride value of 1 are used in the Z dimension, and kernel size 4 pixels with the stride value of 2 are used in the X and Y dimensions. In some embodiments, max-pooling is performed only in the X and Y dimensions. Using a cGAN network organized in this way provides comparable theoretical receptive fields in all dimensions.
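The anisotropic design described above translates directly into layer hyperparameters. Below is a minimal sketch in PyTorch, assuming the kernel and stride values exactly as stated; the channel counts and activation are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class AnisotropicDownBlock(nn.Module):
    """Downsampling block that halves X and Y but preserves Z resolution.

    Conv3d kernels are ordered (depth=Z, height=Y, width=X): kernel 3 with
    stride 1 along Z, kernel 4 with stride 2 along X and Y, as in the text.
    """
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv3d(
            in_ch, out_ch,
            kernel_size=(3, 4, 4),   # Z, Y, X
            stride=(1, 2, 2),        # downsample laterally only
            padding=(1, 1, 1),
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)  # activation is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x))

# Max-pooling restricted to X and Y, leaving Z untouched:
lateral_pool = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))

# Example: one single-channel 16 x 128 x 128 training patch.
x = torch.randn(1, 1, 16, 128, 128)
y = AnisotropicDownBlock(1, 32)(x)   # -> shape (1, 32, 16, 64, 64)
```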
- the Auto-Align module has two parts: a misalignment estimation part and an alignment part.
- the misalignment estimation part takes predicted target type images from cGAN and ground truth target type images as input, which it concatenates and passes into a 3-layer convolution network that is followed by a fully connected layer.
- the outputs are three scalar values: misalignment offsets along the z-axis (axial direction), y-axis and x-axis (lateral direction).
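A minimal sketch of such a misalignment estimator follows, assuming PyTorch. Only the overall structure (concatenation, three convolution layers, a fully connected layer, three scalar outputs) comes from the description above; the channel counts, strides, and pooling step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MisalignmentEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Predicted and ground-truth target images, concatenated along
            # the channel axis -> 2 input channels.
            nn.Conv3d(2, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # collapse spatial dims (assumption)
        )
        self.fc = nn.Linear(64, 3)     # scalar offsets along z, y, x

    def forward(self, predicted, ground_truth):
        x = torch.cat([predicted, ground_truth], dim=1)
        return self.fc(self.features(x).flatten(1))

offsets = MisalignmentEstimator()(torch.randn(1, 1, 16, 128, 128),
                                  torch.randn(1, 1, 16, 128, 128))
print(offsets.shape)  # torch.Size([1, 3])
```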
- the alignment part shifts the target type image predicted by the cGAN by the offsets, using the differentiable image sampling introduced by the Spatial Transformer Network.
- the (x, y, z) coordinates of the shifted version V of the cGAN-predicted target type image U can be calculated by the transformation T:

  $$V_{x,y,z} = \sum_{n=1}^{N} \sum_{m=1}^{M} \sum_{l=1}^{L} U_{n,m,l}\, k(x + o_x - n)\, k(y + o_y - m)\, k(z + o_z - l)$$

  where (n, m, l) is a coordinate in the cGAN-predicted target type image U; (o_x, o_y, o_z) are the estimated misalignment offsets along the x, y, and z directions; k is the bilinear interpolation kernel, k(d) = max(0, 1 − |d|); and N, M, L are the dimensions of the cGAN-predicted target type image along x, y, and z.
- the shift is implemented by bilinear resampling.
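One way to realize this differentiable shift is PyTorch's affine_grid/grid_sample pair, which implements Spatial-Transformer-style bilinear (trilinear, in 3D) resampling. The sketch below is an illustration under stated assumptions: offsets are given in voxels and converted to grid_sample's normalized coordinates, and the sign convention follows grid_sample's sampling semantics.

```python
import torch
import torch.nn.functional as F

def shift_volume(u: torch.Tensor, ox: float, oy: float, oz: float) -> torch.Tensor:
    """Shift a (N, C, Z, Y, X) volume by (ox, oy, oz) voxels via trilinear sampling."""
    n, _, zdim, ydim, xdim = u.shape
    # Identity affine with a translation; affine_grid's last-dim order is (x, y, z).
    theta = torch.tensor([[[1.0, 0.0, 0.0, 2.0 * ox / xdim],
                           [0.0, 1.0, 0.0, 2.0 * oy / ydim],
                           [0.0, 0.0, 1.0, 2.0 * oz / zdim]]],
                         dtype=u.dtype).expand(n, -1, -1)
    grid = F.affine_grid(theta, u.shape, align_corners=False)
    # 'bilinear' mode on a 5D input performs the trilinear interpolation
    # written out in the equation above.
    return F.grid_sample(u, grid, mode='bilinear', align_corners=False)

v = shift_volume(torch.randn(1, 1, 16, 128, 128), ox=1.5, oy=-0.7, oz=0.3)
```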
- the Auto-Align module optimizes the pixel-wise loss between the shifted version of the cGAN predicted target type image and ground truth target type image.
- the facility fixes the GAN parameters.
- the pixel-wise loss tends to attain its minimum only when the shifted version of the cGAN predicted target type image and the ground truth target type image are optimally aligned.
- the facility estimates the misalignment offset between the ground truth target type image and the input source type image from the shift estimated by the Auto-Align module.
- the facility trains the cGAN and Auto-Align networks in three main training stages.
- the cGAN is pre-trained on training pairs (which may not be fully aligned).
- the second stage finds the misalignment; the third stage uses the better-aligned image pairs to train the model.
- the cGAN and Auto-Align module are trained individually in alternation, each for one epoch at a time.
- the facility obtains the misalignment values for each image. The mean misalignment across all the images is subtracted from the misalignment of each image, in order to offset the universal misalignment introduced by the cGAN.
- the misalignment value at different training epochs will have a similar mean value.
- the facility uses the misalignment value averaged across epochs to align each image pair. Then the facility trains the cGAN with the updated image pairs, which are now much better aligned, leading to an improved prediction result.
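The offset bookkeeping just described can be illustrated in a few lines of numpy; the array names and shapes are hypothetical.

```python
import numpy as np

# offsets[e, i] = (oz, oy, ox) estimated for image pair i at training epoch e
offsets = np.random.randn(5, 100, 3)              # e.g. 5 epochs, 100 image pairs

# Subtract the per-epoch mean over all images (the "universal" misalignment
# introduced by the cGAN), then average each image's offset across epochs.
per_epoch_centered = offsets - offsets.mean(axis=1, keepdims=True)
final_offsets = per_epoch_centered.mean(axis=0)   # (100, 3), used to re-align pairs
```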
- the facility uses the Adam optimizer with a learning rate of 0.0002 when using U-Net as the backbone, or a learning rate of 0.00002 when using EDSR as the backbone.
- the facility sets the batch size as 1 for training.
- the facility randomly crops 100 patches from each training pair, making the training input patch size 16 × 128 × 128 voxels, while the training target patch size can be calculated accordingly based on the backbone network. The facility can often obtain reasonable results after 20 training epochs.
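A minimal sketch of this training set-up follows; `backbone` is a placeholder, the optimizer line is commented out because the generator itself is not shown, and for simplicity the target patch is cropped at the same size as the source (with a scaling backbone such as EDSR the target crop would be scaled accordingly).

```python
import torch

backbone = "unet"                      # or "edsr"
lr = 2e-4 if backbone == "unet" else 2e-5
# optimizer = torch.optim.Adam(generator.parameters(), lr=lr)  # batch size 1

def random_patch(source, target, size=(16, 128, 128)):
    """Crop a random (Z, Y, X) patch from an aligned training pair."""
    z = torch.randint(0, source.shape[0] - size[0] + 1, (1,)).item()
    y = torch.randint(0, source.shape[1] - size[1] + 1, (1,)).item()
    x = torch.randint(0, source.shape[2] - size[2] + 1, (1,)).item()
    sl = (slice(z, z + size[0]), slice(y, y + size[1]), slice(x, x + size[2]))
    return source[sl], target[sl]

# 100 random patches per training pair, as described above:
# patches = [random_patch(src, tgt) for _ in range(100)]
```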
- the inventors performed an initial evaluation of the effectiveness of the Auto-Align module using a semi-simulated dataset for magnification transfer.
- the computational alignment step cannot theoretically generate fully aligned images.
- because the two types of images have different axial (z) dimension sizes, the two images may not be captured at identical spatial locations.
- because optical microscopy images generally have a lower axial (z) resolution than lateral (x, y) resolution, the computational alignment along the z direction could suffer more inaccuracy than along x and y. For this reason, the inventors focused on simulating the z-misalignment issue in this evaluation.
- to simulate the z-misalignment, the inventors injected a shift of a few pixels (a random number between −4 and +4) along the z axis into the down-sampled 20x images. To evaluate the results, the inventors held five images out from training. The inventors manually selected 10 patches of about 20x20 pixels within ten different nuclei in each of the five hold-out images. The inventors then estimated the nuclear heights based on the intensities within these 50 patches along z. The differences in the estimated nuclear heights between ground truth and predicted images (also comparing two different backbone networks in cGAN) are reported together with the standard image quality metrics, peak signal to noise ratio (PSNR) and SSIM, as shown in Table 1 below. The Auto-Align module consistently yielded a considerable improvement, regardless of the backbone used in cGAN.
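The patent does not specify the exact height estimator, so the following sketch shows one plausible reading: measuring the half-maximum extent of the mean intensity profile along z within a patch. The threshold choice is an assumption.

```python
import numpy as np

def nuclear_height(patch_zyx: np.ndarray, z_step_um: float) -> float:
    """Estimate nuclear height (in microns) from a (Z, ~20, ~20) intensity patch."""
    profile = patch_zyx.mean(axis=(1, 2))       # mean intensity per z-plane
    profile = profile - profile.min()
    above = profile >= 0.5 * profile.max()      # half-maximum extent (assumption)
    return above.sum() * z_step_um
```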
- Table 1: Quantitative evaluation of the Auto-Align module with two different backbones in cGAN on data with and without z-misalignment. Correct Prediction is the number of nuclei for which the nuclear height estimated from the predicted image is within (−0.5, 0.5) pixels of the nuclear height estimated from the ground truth.
- the inventors also performed further evaluation of the effectiveness of the Auto-Align module, described below in the Application 1 section.
- Figure 3 is a data flow diagram illustrating the unpaired workflow used by the facility in some embodiments.
- the facility collects images of two different types to transfer between, but not necessarily of the identical cells or samples.
- the facility feeds the two sets of images into the deep learning module to learn the transfer function from the source type to the target type.
- Images of cells 310 are collected from a microscope or other imaging device 311 having source characteristics via a computer or other interface device 312 as source type images 313, and images of cells 320 are collected by a microscope or other imaging device 321 having target characteristics and received through a computer or other interface device 322 as target type images 323.
- as the deep learning module 330, the facility uses a version of CycleGAN extended to 3D to tackle the unpaired microscopy image transfer problem.
- given a set of source type images and a set of target type images, the facility can produce a transfer function capable of transferring images drawn from the source type into new images that fit the “distribution” of the target type.
- the “distribution” can be interpreted as the set of underlying characteristics of the target type images in general, not specific to any particular image.
- the generated images will have a similar general appearance to the target type, while still maintaining the biological attributes of the input image.
- CycleGAN is commonly used as a bidirectional transfer.
- the facility imposes directionality into the unpaired workflow (i.e., source type and target type).
- the inventors found that the bi-directional transfer did not show similar accuracy between the two transfer directions.
- the model was able to transfer the target type back to the source type, but did so with greatly decreased performance compared to the transfer from source type to target type.
- the capability of transferring from target type images back to source type images as part of the cycle consistency is utilized as an effective means to build a computational transfer function from source to target without the need for paired trained images.
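The cycle-consistency objective at the heart of this workflow can be sketched as follows, assuming PyTorch; G maps source to target, F maps target to source, and the adversarial discriminator terms are omitted for brevity. The loss weight is an illustrative assumption.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_loss(G: nn.Module, F: nn.Module,
               source: torch.Tensor, target: torch.Tensor,
               lam: float = 10.0) -> torch.Tensor:
    """L1 cycle consistency: source -> target -> source, and target -> source -> target."""
    fwd = l1(F(G(source)), source)   # reconstruct the source image
    bwd = l1(G(F(target)), target)   # reconstruct the target image
    return lam * (fwd + bwd)
```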
- Deep Transfer Function Suite is the first deep learning toolbox with a wide spectrum of applicability to different microscopy image transfer problems, including creating composites of multiple transfer functions to achieve much higher transfer power.
- Figures 4-10 are image diagrams showing results of applying the facility to various microscopy samples.
- Lamin B1 decorates the nuclear lamina located just inside the nuclear envelope, which appears as a shell with almost no texture (during interphase).
- Lamin B1, as the nuclear envelope, has a very simple topological structure and can be used as a good approximation of nuclear shape; it is therefore relatively easy to conduct biology-driven validation via nuclear shape analysis.
- to collect the data, the inventors first fixed cells and imaged the identical cells twice with two different objectives on a ZEISS spinning-disk confocal microscope.
- the 20x images (source type) have a much larger FOV, but lower resolution, compared to the 100x images (target type).
- the two sets of original images were then computationally aligned.
- the inventors conducted four different experiments: v1, training with roughly aligned pairs and with the basic cGAN; v2, training with roughly aligned pairs and with the basic cGAN plus the Auto-Align module; v3, training with accurately aligned pairs and with the basic cGAN; and v4, training with accurately aligned pairs and with the basic cGAN plus the Auto-Align module.
- the roughly aligned pairs used in experiments v1 and v2 were obtained by segmenting both the 20x and 100x images and finding the maximum overlapping position.
- the more accurately aligned pairs used in experiments v3 and v4 were obtained by the 3D registration workflow. Because the 20x images are of much lower resolution than the 100x images, the segmented nuclear shapes from 20x images are much less accurate. As a result, overlaying the two segmentations creates a rough alignment.
- the goal of including a roughly aligned version is two-fold: to demonstrate the importance of accurately aligned training pairs (by comparing experiments v1 and v3); and to show how much the Auto-Align module improves prediction accuracy when accurately aligned pairs are not available (by comparing experiments v1 and v2).
- Figure 4 is an image diagram showing, in an x, y view, Lamin B1 transferred to a higher magnification by the facility using the paired workflow.
- Image 410 shows the subject Lamin B1 nuclei imaged in accordance with a source type, in a 20x magnification image captured using a first objective on a ZEISS spinning-disc confocal microscope.
- the plane shown is the plane at which the xy-cross-sectional area of the nuclei in the target 100x image is maximal.
- the real 20x image is upsampled via bilinear interpolation to match the resolution of the target and prediction images.
- the scale bars are 10 µm.
- Image 420 of the target type is captured of the same nuclei, at a 100x magnification using a second objective on a ZEISS spinning-disc confocal microscope.
- Image 430 shows the facility’s transformation of the source image 410 into the target type, i.e., magnification 100x, from the source type, magnification 20x.
- a portion 411 of image 410 isolates a particular nucleus in the source image; corresponding portions 421 and 431 of images 420 and 430, respectively, isolate the same nucleus in those images.
- Figure 5 is an image diagram showing the isolated nuclei from the images in Figure 4 at higher magnification.
- image 511 shows the isolated nucleus in the source image in x,y view
- image 512 shows the isolated nucleus in the source image in side view
- image 521 shows the isolated nucleus in the target image in x,y view
- image 522 shows the isolated nucleus in the target image in side view
- image 531 shows the isolated nucleus in the prediction image in x,y view
- image 532 shows the isolated nucleus in the prediction image in side view.
- images 511, 521, and 531 are xy-cross-sections from the z-plane at which the xy-cross-sectional area of the nuclei is maximal; images 512, 522, and 532 are xz-cross-sections from the y-plane at which the xz-cross-sectional area of the nucleus in the target 100x image is maximal.
- the real 20x image is upsampled via bilinear interpolation to match the resolution of the target and prediction images. Scale bar is 5 µm.
- Figure 6 is an image diagram showing results of the inventors' four different experiments.
- Image 601 is the x,y view of the isolated nucleus in the actual target image, while image 602 is a side view of the isolated nucleus in the actual target image.
- images 601 and 602 are the same as images 521 and 522, respectively.
- Image 611 is the x,y view and 612 is the side view of the same nucleus as predicted by the facility in experiment v1.
- Image 621 is the x,y view and 622 is the side view of the same nucleus as predicted by the facility in experiment v2.
- Image 631 is the x,y view and 632 is the side view of the same nucleus as predicted by the facility in experiment v3.
- Image 641 is the x,y view and 642 is the side view of the same nucleus as predicted by the facility in experiment v4.
- Figure 7 is an image diagram showing further results of the four experiments.
- image 711 is the x,y view and 712 is the side view of a comparison of the green actual target image with the magenta predicted image in experiment v1.
- Image 721 is the x,y view and 722 is the side view of a comparison of the green actual target image with the magenta predicted image in experiment v2.
- Image 731 is the x,y view and 732 is the side view of a comparison of the green actual target image with the magenta predicted image in experiment v3.
- Image 741 is the x,y view and 742 is the side view of a comparison of the green actual target image with the magenta predicted image in experiment v4.
- the inventors used a semi-automatic seeded watershed algorithm to segment Lamin B1 from both predicted images and real images.
- the inventors extracted the nuclear outline from these segmentations and used spherical harmonics parametrization to quantify the nuclear shape in both predicted images and real images.
- the inventors calculated the spherical harmonic shape features of the 182 nuclei from 15 fields of view, from both real images and four different versions of predictions.
- Figure 8 is a chart diagram showing the results of application-specific validation for Lamin B1 magnification transfer in the four experiments.
- Chart 810 is a scatter plot of the quantifications of the spherical harmonic shape features of the 182 nuclei from 15 fields of view for experiment v1.
- the feature quantified is the L0M0 coefficient of the predicted nuclei (y-axis) and the real, target nuclei (x-axis). The L0M0 coefficient is compared to nuclear volume.
- Charts 820, 830, and 840 show the same information for experiments v2, v3, and v4, respectively.
- Figure 9 is a chart diagram showing additional results of application-specific validation for Lamin B1 magnification transfer in the four experiments. These charts compare the coefficient of determination and percentage bias of the first five spherical harmonic coefficients when comparing the four predictions against real images.
- charts 911, 921, 931, 941, and 951 are bar graphs that show the accuracy with which the different experimental transfer functions predict the quantified spherical harmonic shape features for the first five spherical harmonic coefficients, as measured by coefficient of determination.
- Charts 912, 922, 932, 942, and 952 show percent bias for the four experiments for the same five spherical harmonic coefficients.
- the next step is to evaluate the performance of the facility's microscopy objective transfer functions on intracellular structures with higher “complexity” in shape.
- the inventors extended their experiments to four additional cell lines: mEGFP-tagged fibrillarin, mEGFP-tagged nucleophosmin, mEGFP-tagged histone H2B type 1-J, and mEGFP-tagged SMC protein 1A, which represent two different types of “complexity”.
- Fibrillarin and nucleophosmin mark the dense fibrillar component and granular component of the nucleolus, respectively. Morphologically, they present slightly more complexity than the simple shell morphology of Lamin B1.
- histone H2B type 1-J and SMC protein 1A are two nuclear structures, which mark histones and chromatin, respectively. Visually, they are within the nucleus and display very different textures, different from both the Lamin B1 shells and the nucleolar structure morphologies. Histone H2B type 1-J and SMC protein 1A also provide another means for approximating the nuclear shape. Texture-wise, histone H2B type 1-J features a more complex and uneven texture throughout the nucleus, while SMC protein 1A exhibits a smoother texture with puncta. As in the four-part experiment discussed above in connection with Figures 5-9, the inventors performed an experiment with these four additional intracellular structures in which their images were transformed from 20x magnification to 100x magnification using the paired workflow.
- Figure 10 is an image diagram showing predictions made for the additional intracellular structures by the facility.
- image 1011 shows a source image at 20x magnification
- image 1021 shows an actual target image at 100x magnification
- image 1031 shows the facility’s prediction at 100x magnification based upon the source image.
- Images 1012, 1022, and 1032 show the same contents for mTagRFP-T-nucleophosmin.
- Images 1013, 1023, and 1033 show the same contents for mEGFP-H2B.
- Images 1014, 1024, and 1034 show the same contents for mEGFP-SMC1A.
- Figure 11 is an image diagram showing additional results of the facility’s predictions with respect to the additional intracellular structures.
- image 1111 shows a single z-plane image (above) and a single y-plane image (below) of the structures in an individual nucleus that are boxed in image 1011, the source image for this structure.
- the top image is a single z-plane (at max xy-cross-sectional area of structure segmentation) and the bottom image is the y-plane (at max xz-cross-sectional area of the structure).
- the yellow line in the top image represents the plane of the bottom image and vice versa.
- the scale bar is 5 µm.
- Image 1121 has the same contents for this structure’s target image, and 1131 for its prediction image.
- Images 1112, 1122, and 1132 show the same contents for the mTagRFP-T- nucleophosmin structure.
- Images 1113, 1123, and 1133 show the same contents for the mEGFP-H2B structure.
- Images 1114, 1124, and 1134 show the same contents for the mEGFP- SMC1A structure.
- Figure 12 is an image diagram comparing prediction results for the additional intracellular structures to the actual target images, beginning with mEGFP-fibrillarin.
- Image 1210 shows the predicted image for the isolated nucleus in magenta, compared to the actual target image in green — in other words, a comparison of image 1131 to image 1121.
- Images 1220, 1230, and 1240 have the same contents for mTagRFP-T-nucleophosmin, mEGFP-H2B, and mEGFP-SMC1A, respectively. In the results, one can observe that the 100x images predicted from 20x images show significant visual similarity to real 100x images.
- Figure 13 is a chart diagram showing the comparison of predicted images to real images for the additional intracellular structures.
- Chart 1310 shows the Pearson correlation metric for the additional intracellular structures.
- Chart 1320 similarly shows the peak signal to noise ratio (PSNR) metric, and chart 1330 the structure similarity index measure (SSIM).
- the legend 1340 maps colors used in these charts to the different structure types. These metrics show strong correlation and high similarity between predicted and actual target images for these structures.
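These three metrics can be computed with standard scientific Python libraries, as in the sketch below; the data_range handling is an assumption, and structural_similarity's default window requires each volume dimension to be at least 7 voxels.

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(pred: np.ndarray, target: np.ndarray) -> dict:
    """Pearson correlation, PSNR, and SSIM between a predicted and target 3D image."""
    rng = target.max() - target.min()
    return {
        "pearson": pearsonr(pred.ravel(), target.ravel())[0],
        "psnr": peak_signal_noise_ratio(target, pred, data_range=rng),
        "ssim": structural_similarity(target, pred, data_range=rng),
    }
```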
- the inventors performed quantitative application-specific validation for each of these structures.
- the evaluation of the fibrillarin and nucleophosmin predicted from 20x to 100x uses segmentation-based quantification, such as total volume, number of pieces, and mean volume per piece.
- Both fibrillarin and nucleophosmin are key structures in the nucleolus, which is the nuclear subcompartment where ribosome biogenesis occurs.
- the inventors segment fibrillarin and nucleophosmin from both the real and predicted 100x images using the classic segmentation workflow in the Allen Cell Structure Segmenter. Both fibrillarin and nucleophosmin are segmented at two different granularities. The coarse segmentation captures the overall shape of the nucleolus, while the fine segmentation discussed below delineates finer details about the structure visible in the image.
- Figure 14 is an image diagram showing the facility’s coarse segmentation of mTagRFP-T-nucleophosmin. Images 1411 and 1412 show the facility’s 3D segmentation of this structure based upon the actual target image in yellow, superimposed over the actual target image. Images 1421 and 1422 similarly show the facility’s 3D segmentation based upon the predicted image in yellow, superimposed over that predicted image.
- Figure 15 is a chart diagram showing a comparison of the volume of nucleophosmin segmented by the facility based upon actual target images versus predicted images.
- chart 1500 is a scatter plot that compares the total volume of nucleophosmin segmented based upon the predicted image to the total volume of nucleophosmin segmented based upon the actual target image.
- the chart also shows the coefficient of determination (R²) and percent bias. This information confirms that the segmented nucleolus sizes from nucleophosmin in real 100x images and predicted 100x images are very consistent.
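The two statistics reported throughout this section can be sketched as follows; the percent-bias formula shown (signed total error relative to the reference total) is an assumption about the exact definition used.

```python
import numpy as np

def r_squared(pred: np.ndarray, real: np.ndarray) -> float:
    """Coefficient of determination between predicted and reference values."""
    ss_res = np.sum((real - pred) ** 2)
    ss_tot = np.sum((real - real.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def percent_bias(pred: np.ndarray, real: np.ndarray) -> float:
    """Signed total error of the predictions, relative to the reference total."""
    return 100.0 * (pred - real).sum() / real.sum()
```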
- Figure 16 is a chart diagram showing further analysis of coarse segmentation of the additional intracellular structures by the facility.
- the chart 1600 shows coefficient of determination for coarse segmentation measures.
- the statistic computed for total volume is shown for all structures, and for measured counts and volume per piece it is shown only for those structures where more than one piece is segmented per nucleus (fibrillarin and nucleophosmin). Error bars are the 90% confidence interval determined by 200 iterations of bootstrap resampling of the statistic. This information shows a non-negligible discrepancy in the number of pieces in the segmentation and in the volume per piece in the coarse segmentation when comparing results between real and predicted images. This is reasonable in the sense that a single-pixel difference in segmentation may alter the connectivity of the structure and thus yield a different number of pieces. As a result, quantitative analyses based on predicted images need to be carefully designed, as certain measurements are less accurate than others.
- Figure 17 is an image diagram showing the facility’s fine segmentation of fibrillarin.
- images 1711 and 1712 show the fine segmentation of fibrillarin based upon the actual target image of fibrillarin in yellow, superimposed over the actual target image.
- Images 1721 and 1722 show in yellow the segmentation of fibrillarin based upon the predicted image, overlaid over the predicted image.
- the green arrows in these images indicate an area where the target image segments one object, while the predicted image segments two objects.
- Figure 18 is a chart diagram showing the facility’s segmentation of fibrillarin based upon predicted images versus actual target images.
- Chart 1800 is a scatter plot comparing, for each of a number of fibrillarin images, the number of pieces of fibrillarin segmented in the predicted image to the number of pieces segmented in the actual target image.
- the chart also shows coefficient of determination and percent bias.
- Figure 19 is a chart diagram showing further analysis of the facility’s fine segmentation.
- Chart 1900 shows the coefficient of determination for fine segmentation measures for fibrillarin and nucleophosmin. Error bars are the 90% confidence interval of the metric.
- the inventors evaluated two aspects of the accuracy of transfer function prediction: nuclear shape and texture.
- the segmentations were obtained by using a deep learning model trained with the iterative deep learning workflow in the inventors’ Segmenter on a separate set of data.
- the accuracy of the total measured nuclear volume is very high.
- texture is an important indicator of localization patterns of proteins, which tie to their biological function. Therefore, a goal of transfer functions is to maintain texture features with high fidelity.
- the inventors selected and calculated Haralick’s texture features, including contrast, entropy, and variation.
- the inventors first looked at correlations between textures computed for different gray-levels and pixel windows to determine the best workflow using only real images. The inventors then applied the workflow and compared texture features between the real and the predicted images.
- Figure 20 is an image diagram showing the derivation of information from SMC1A images for assessing texture metrics.
- Images 2011, 2012, and 2013 are source, target, and predicted images for SMC1A. Shown is a single z-plane at which the xy-cross-sectional area is maximal.
- gray-level co-occurrence matrices (GLCMs) were computed with a symmetric pixel offset of 1 in all directions, using 4-bit (16 gray level) normalized images.
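A sketch of this texture computation using scikit-image follows. graycomatrix is 2D-only, so the volume is processed plane by plane and averaged here, a simplification relative to the fully symmetric 3D offsets described above; the quantization to 16 gray levels follows the text.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_contrast(volume: np.ndarray, levels: int = 16) -> float:
    """Mean Haralick contrast over z-planes of a normalized (Z, Y, X) volume."""
    v = volume.astype(float)
    # Normalize and quantize to 4-bit (16 gray level) images.
    q = np.clip((v - v.min()) / (np.ptp(v) + 1e-12) * levels, 0, levels - 1)
    q = q.astype(np.uint8)
    contrasts = []
    for plane in q:
        glcm = graycomatrix(plane, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=levels, symmetric=True, normed=True)
        contrasts.append(graycoprops(glcm, "contrast").mean())
    return float(np.mean(contrasts))
```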
- Figure 21 is a chart diagram showing texture contrast correlation between predicted and target images for SMC1A.
- Chart 2100 is a scatter plot of the computed Haralick contrast of SMC1A fluorescence from individual nuclei. The coefficient of determination (R²) and percent bias are labeled in the figure.
- Figure 22 is a chart diagram showing image texture measures for SMC1A and H2B.
- Chart 2200 shows the coefficient of determination for image texture measures for SMC1A and H2B. Error bars are the 90% confidence interval of the metric.
- Different microscopy modalities may have certain advantages and disadvantages, and may each require different instances of a particular segmentation or quantification method, creating technical challenges in comparing image data taken on different microscope modalities.
- the inventors present an application of the paired workflow to build a transfer function from images acquired on a spinning-disk microscope to images with ~1.7-fold enhanced resolution acquired on a super-resolution microscope. To collect the training data, the inventors fixed the cells and then collected 3D images of the identical cells using two different imaging modalities.
- This second modality achieves sub-diffraction-limit resolution that is 1.7x (both laterally and axially) better than regular fluorescence microscopy on an identical system by utilizing a built-in post-processing algorithm.
- the inventors present results of these transfer function experiments on hiPS cells expressing mEGFP-Lamin B1 and hiPS cells expressing mEGFP-H2B, confirming the feasibility of this approach and overall validity of the prediction results.
- Figure 23 is an image diagram showing sample results of a cross modality transfer function on Lamin B1 images.
- Image 2311 is a source image from 100x spinning disc microscopy.
- Image 2312 is a target image from enhanced-resolution AiryScan FAST microscopy.
- Image 2313 is a prediction by the facility using a U-Net backbone, and image 2314 is a predicted image produced by the facility using an EDSR backbone.
- Charts 2321-2324 show an intensity profile along the yellow lines marked in the images above. The predicted images have much less blur compared to the input image.
- Signal-to-noise ratios (SNRs) with the ground truth super-resolution image as the reference are also shown.
- Image 2311 is upsampled to a scale comparable to the super-resolution image for visualization purposes. Images are all presented at a comparable intensity scale.
- Figure 24 shows sample results of a cross-modality transfer function on H2B images.
- Image 2410 is a source image of H2B from 100x spinning-disc microscopy. Portion 2411 of image 2410 is shown at greater magnification as image 2411'.
- Image 2420 is a target image from ZEISS LSM 880 AiryScan super-resolution microscopy. Portion 2421 of image 2420, corresponding to portion 2411, is shown at greater magnification as image 2412'.
- Image 2430 is predicted by the facility using a U-Net backbone. Portion 2431 of image 2430, corresponding to portions 2411 and 2421, is shown at greater magnification as image 2413'.
- Image 2410 is upsampled to a scale comparable to the super-resolution image for visualization purposes. Images are all presented at a comparable intensity scale.
- Application 4: SNR Transfer
- For fluorescent microscopy images, using increased laser power can generate higher signal to noise ratios (SNRs), but at the cost of higher phototoxicity and photobleaching. Thus, for certain experiments (e.g., long time-lapse), the inventors prefer to set the laser power as low as possible, which inevitably produces low SNR images. In these scenarios, the capability of transferring images from low SNR to high SNR would be beneficial for many applications, such as obtaining more accurate quantitative measurements of intracellular structures.
- the inventors demonstrate applying the paired workflow towards transferring low SNR images to higher SNR images.
- Figure 25 is an image diagram showing sample results of applying an SNR transfer function.
- Image 2510 is an image of Lamin B1 acquired in a single acquisition using a 20x spinning disc. Portion 2511 of image 2510 is shown at greater magnification as image 2511'.
- Image 2520 is the result of repeating the acquisition that produced image 2510 60 times, and averaging these 60 images for use as high SNR ground truth. Portion 2521 of image 2520, corresponding to portion 2511, is shown at greater magnification as image 2512'.
- Image 2530 is predicted from source image 2510 using a transfer function trained with image 2520, in which SNR is greatly boosted relative to the input image. Portion 2531 of image 2530, corresponding to portions 2511 and 2521, is shown at greater magnification as image 2513'. These images are all presented at comparable intensity scales.
- the binary mask may have been produced by segmenting the microscopy image 2620.
- Portion 2621 of image 2620, corresponding to portion 2611, is shown at greater magnification as image 2621'.
- the facility uses image 2610 as the source image, and image 2620 as the target image.
- a transfer function trained on these images can be applied to transform a binary segmentation mask into a predicted microscopy image, such as image 2630.
- Portion 2631 in image 2630, corresponding to portions 2611 and 2621, is shown at greater magnification as image 2631'.
- with the source type and target type swapped, the model can transfer the original microscopy images into their binary masks with higher accuracy, demonstrating operation of a sample microscopy-image-to-binary-mask transfer function, that is, a segmentation transfer function.
- the transfer function model is able to retain the overall original accuracy of the segmentation and further reduce these false negative errors that arose due to uneven illumination.
- the predicted binary mask 2730 achieves an overall accuracy comparable to the existing segmentation 2720, and additionally reduces the false negative errors in the existing segmentation that were due to uneven illumination, thus improving the overall quality of the segmentation mask.
- the regions 2712, 2722, and 2732 marked by the yellow box highlight an area where the already-good results of the existing segmentation are retained in the prediction.
- the regions 2711, 2721, and 2731 marked by the green box highlight an area where errors in the existing segmentation, indicated by the orange arrows in image 2721, are reduced in the CycleGAN-based prediction.
- Figure 28 is an image diagram showing the result of a composite transfer function.
- Image 2810 is a source image of H2B obtained by a 20x spinning disc.
- Image 2820 shows an intermediate prediction produced by applying transfer function F1 to the source image, increasing its magnification to 100x.
- Image 2830 is produced by applying transfer function F2 to intermediate prediction image 2820, increasing its resolution to SuperRes.
- the combination of label-free techniques permits localizing different intracellular structures directly, without any dyes or labels, even in images of low magnification and resolution, and has the potential to change imaging assays traditionally used in cell biology.
- the label-free method can be viewed as a special type of “transfer function”, which is from brightfield to fluorescent images under the same magnification and resolution.
- the facility daisy-chains a transfer function model to a label-free model to achieve a wider range of transfer.
- results on Lamin B1 cells demonstrate the potential of integrating transfer function models and label-free models to achieve transfer from 20x brightfield images to 100x fluorescent images.
- Figure 29 is a data flow diagram showing multiple approaches available to predict high-magnification fluorescent images from lower-magnification brightfield images.
- Figure 29 shows three possible paths from a 20x brightfield image 2901 to a 100x fluorescent image 2940.
- in Path 1, the facility applies a 20x/100x microscopy objective transfer model 2910 to 20x brightfield images to predict 100x brightfield images 2920, then applies a 100x label-free model 2930 to predict 100x fluorescent images.
- Path 2 first applies a label-free model 2950 to predict 20x fluorescent images from 20x brightfield images, and then applies a 20x/100x microscopy objective transfer function 2970 on fluorescent images to predict final 100x fluorescent images.
- in Path 3, the facility can also use a direct transfer function 2980 from 20x brightfield images to 100x fluorescent images. The results of all three paths demonstrate potential for combining label-free models and transfer functions, especially with some further model optimization.
- Figure 30 is an image diagram showing actual images captured of a sample.
- Image 3010 is a brightfield image at 20x captured from the sample.
- Image 3020 is an x,y view of a fluorescent image captured at 20x, while image 3021 is a side view of this fluorescent image.
- Figure 31 is an image diagram showing higher-magnification images of the same sample.
- Image 3110 is a brightfield image captured at 100x.
- Image 3120 is an x,y view of a fluorescent image captured at 100x.
- Image 3121 is a side view of the 100x fluorescent image.
- Image 3130 is an x,y view of a 100x fluorescent image predicted by the label-free process from the captured 100x brightfield image 3110.
- Image 3131 is a side view of this predicted 100x fluorescent image.
- Figure 33 is an image diagram showing images predicted by the facility in path 2 shown in Figure 29.
- Images 3310 and 3311 show a 20x fluorescent image predicted by the label-free process from the captured 20x brightfield image 3010.
- Images 3320 and 3321 show a 100x fluorescent image predicted by transfer function 3370 from the predicted 20x fluorescent images 3310 and 3311.
- Figure 34 is an image diagram showing an image predicted by the facility in path 3 shown in Figure 29. Images 3410 and 3411 show a 100x fluorescent image predicted by transfer function 2980 from the captured 20x brightfield image 3010.
- The goal of the registration workflow is to crop either the source or the target type image, or both, so that the two images align with one another in 3D when interpolated to the same voxel dimensions.
- The inventors’ method assumes that either the source or target type image has a wider FOV and contains the entire other image within this FOV. If needed, one of the images can be pre-cropped to ensure it is fully contained in the other.
- The image with the smaller FOV serves as the “moving image” during registration, and the image with the larger FOV serves as the “fixed image,” irrespective of which is the source or target.
- Several intermediate interpolation results are generated for accurate registration, but the actual cropping is only applied to the output image files to avoid information loss.
- If the moving image (I_mov) was imaged with a voxel size of 0.108 µm x 0.108 µm x 0.290 µm and is rotated 90 degrees clockwise relative to a fixed image (I_fix) with a voxel size of 0.049 µm x 0.049 µm x 0.220 µm, then it will be upsampled in x, y, and z by factors of 2.204, 2.204, and 1.306, respectively, using linear interpolation, and then rotated 90 degrees counterclockwise (yielding I_mov_p).
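As a sketch of this resampling step, assuming numpy and scipy (the library choice and helper name are not from the source), the per-axis scale factors are simply the ratios of the moving image's voxel size to the fixed image's:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_moving(i_mov,
                      mov_voxel=(0.290, 0.108, 0.108),   # (z, y, x) in micrometers
                      fix_voxel=(0.220, 0.049, 0.049)):
    """Resample the moving image onto the fixed image's voxel grid, then undo
    its 90-degree clockwise rotation with one counterclockwise rotation."""
    factors = tuple(m / f for m, f in zip(mov_voxel, fix_voxel))
    upsampled = zoom(i_mov, factors, order=1)     # order=1 -> linear interpolation
    return np.rot90(upsampled, k=1, axes=(1, 2))  # rotate in the y,x plane
```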
- The two images (I_mov_p and I_fix) are registered in the x and y dimensions. This is done by matching ORB features in the maximum intensity projections of the images and estimating a Euclidean transformation matrix that aligns the matched features. The elements of the transformation matrix corresponding to translation are used to calculate the offset between the two images in x and y. This offset is then applied to the full 3D moving image (I_mov), with zero-padding used to match the fixed image’s dimensions.
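A minimal sketch of this x,y registration step follows, assuming scikit-image's ORB detector, descriptor matching, and RANSAC-based Euclidean estimation (the library, parameter values, and helper name are assumptions, not from the source):

```python
import numpy as np
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import EuclideanTransform

def register_xy(i_mov_p, i_fix):
    """Align two 3D volumes in x and y via ORB features on their maximum
    intensity projections, then zero-pad the moving image to the fixed shape."""
    feats = []
    for vol in (i_mov_p, i_fix):
        orb = ORB(n_keypoints=500)
        orb.detect_and_extract(vol.max(axis=0).astype(np.float64))  # 2D MIP
        feats.append((orb.keypoints, orb.descriptors))
    (kp_mov, d_mov), (kp_fix, d_fix) = feats
    matches = match_descriptors(d_mov, d_fix, cross_check=True)
    # Keypoints are (row, col); flip to (x, y) for the transform estimate.
    model, _ = ransac(
        (kp_mov[matches[:, 0]][:, ::-1], kp_fix[matches[:, 1]][:, ::-1]),
        EuclideanTransform, min_samples=3, residual_threshold=2,
    )
    dx, dy = model.translation
    # Place the moving volume at the estimated offset, zero-padding to the
    # fixed image's dimensions (the workflow guarantees the moving FOV lies
    # inside the fixed FOV, so the offsets are assumed non-negative here).
    ox, oy = int(round(dx)), int(round(dy))
    out = np.zeros_like(i_fix)
    z, y, x = i_mov_p.shape
    out[:z, oy:oy + y, ox:ox + x] = i_mov_p
    return out, (oy, ox)
```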
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Optics & Photonics (AREA)
- Analytical Chemistry (AREA)
- Artificial Intelligence (AREA)
- Chemical & Material Sciences (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to an image transformation means. The means accesses a machine learning model trained to transform three-dimensional microscopy images in an imaging domain of a source image type into three-dimensional images in an imaging domain of a target image type. The means also accesses a subject microscopy image in the imaging domain of the source image type. The means applies the trained machine learning model to the subject microscopy image to obtain a transfer result image in the imaging domain of the target image type.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962908316P | 2019-09-30 | 2019-09-30 | |
US62/908,316 | 2019-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021067507A1 true WO2021067507A1 (fr) | 2021-04-08 |
Family
ID=75337519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2020/053644 WO2021067507A1 (fr) | 2019-09-30 | 2020-09-30 | Construction de fonctions de transfert numériques sur des images de microscopie optique 3d à l'aide d'un apprentissage profond |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021067507A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160185A (zh) * | 2021-04-27 | 2021-07-23 | 哈尔滨理工大学 | 一种利用生成边界位置指导宫颈细胞分割的方法 |
CN113570627A (zh) * | 2021-07-02 | 2021-10-29 | 上海健康医学院 | 深度学习分割网络的训练方法及医学图像分割方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190087939A1 (en) * | 2017-09-15 | 2019-03-21 | Saudi Arabian Oil Company | Inferring petrophysical properties of hydrocarbon reservoirs using a neural network |
US20190221313A1 (en) * | 2017-08-25 | 2019-07-18 | Medi Whale Inc. | Diagnosis assistance system and control method thereof |
US20190244347A1 (en) * | 2015-08-14 | 2019-08-08 | Elucid Bioimaging Inc. | Methods and systems for utilizing quantitative imaging |
US20190251330A1 (en) * | 2016-06-13 | 2019-08-15 | Nanolive Sa | Method of characterizing and imaging microscopic objects |
- 2020-09-30 WO PCT/US2020/053644 patent/WO2021067507A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190244347A1 (en) * | 2015-08-14 | 2019-08-08 | Elucid Bioimaging Inc. | Methods and systems for utilizing quantitative imaging |
US20190251330A1 (en) * | 2016-06-13 | 2019-08-15 | Nanolive Sa | Method of characterizing and imaging microscopic objects |
US20190221313A1 (en) * | 2017-08-25 | 2019-07-18 | Medi Whale Inc. | Diagnosis assistance system and control method thereof |
US20190087939A1 (en) * | 2017-09-15 | 2019-03-21 | Saudi Arabian Oil Company | Inferring petrophysical properties of hydrocarbon reservoirs using a neural network |
Non-Patent Citations (1)
Title |
---|
YICHEN WU, LUO YILIN, CHAUDHARI GUNVANT, RIVENSON YAIR, CALIS AYFER, DE HAAN KEVIN, OZCAN AYDOGAN: "Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram", LIGHT: SCIENCE & APPLICATIONS, vol. 8, no. 1, XP055751396, DOI: 10.1038/s41377-019-0139-9 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160185A (zh) * | 2021-04-27 | 2021-07-23 | 哈尔滨理工大学 | 一种利用生成边界位置指导宫颈细胞分割的方法 |
CN113570627A (zh) * | 2021-07-02 | 2021-10-29 | 上海健康医学院 | 深度学习分割网络的训练方法及医学图像分割方法 |
CN113570627B (zh) * | 2021-07-02 | 2024-04-16 | 上海健康医学院 | 深度学习分割网络的训练方法及医学图像分割方法 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12008764B2 (en) | Systems, devices, and methods for image processing to generate an image having predictive tagging | |
Kang et al. | Stainnet: a fast and robust stain normalization network | |
Hand et al. | Automated tracking of migrating cells in phase‐contrast video microscopy sequences using image registration | |
JP2020529083A (ja) | 単一分子局在化顕微鏡法によって取得された回折限界画像からの高密度超解像度画像の再構築を改善する方法、装置、及びコンピュータプログラム | |
Ma et al. | PathSRGAN: multi-supervised super-resolution for cytopathological images using generative adversarial network | |
Ulman et al. | Virtual cell imaging: A review on simulation methods employed in image cytometry | |
CN112419295B (zh) | 医学图像处理方法、装置、计算机设备和存储介质 | |
Feuerstein et al. | Reconstruction of 3-D histology images by simultaneous deformable registration | |
Dey et al. | Group equivariant generative adversarial networks | |
WO2021067507A1 (fr) | Construction de fonctions de transfert numériques sur des images de microscopie optique 3d à l'aide d'un apprentissage profond | |
CN116664892A (zh) | 基于交叉注意与可形变卷积的多时相遥感图像配准方法 | |
Yao et al. | Scaffold-A549: A benchmark 3D fluorescence image dataset for unsupervised nuclei segmentation | |
Geng et al. | Cervical cytopathology image refocusing via multi-scale attention features and domain normalization | |
Matula et al. | Fast point-based 3-D alignment of live cells | |
CN118015190A (zh) | 一种数字孪生模型的自主构建方法及装置 | |
Dubey et al. | Structural cycle gan for virtual immunohistochemistry staining of gland markers in the colon | |
Dai et al. | Exceeding the limit for microscopic image translation with a deep learning-based unified framework | |
CN109872353B (zh) | 基于改进迭代最近点算法的白光数据与ct数据配准方法 | |
Preibisch et al. | Bead-based mosaicing of single plane illumination microscopy images using geometric local descriptor matching | |
Lan et al. | Unpaired stain style transfer using invertible neural networks based on channel attention and long-range residual | |
Palaniappan et al. | Non-rigid motion estimation using the robust tensor method | |
Zhang et al. | Point-based registration for multi-stained histology images | |
Cai et al. | An Improved Convolutional Neural Network for 3D Unsupervised Medical Image Registration | |
Hua et al. | Leukocyte super-resolution via geometry prior and structural consistency | |
Salvi et al. | Computational Synthesis of Histological Stains: A Step Toward Virtual Enhanced Digital Pathology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20870839 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20870839 Country of ref document: EP Kind code of ref document: A1 |