CN117953158A - Image processing method and system - Google Patents

Image processing method and system

Info

Publication number
CN117953158A
Authority
CN
China
Prior art keywords: image, network, regularization, fluorescence microscope, term
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410138010.5A
Other languages
Chinese (zh)
Inventor
陈良怡
黄小帅
范骏超
王建勇
周博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202410138010.5A
Publication of CN117953158A
Legal status: Pending


Landscapes

  • Microscopes, Condenser (AREA)

Abstract

The embodiments of this specification provide an image processing method and system. The method includes: acquiring multi-dimensional image data generated by a fluorescence microscope; generating an initial multi-dimensional image based on the multi-dimensional image data; and constructing an objective function based on the image acquisition process and performing one or more rounds of iteration on the initial multi-dimensional image to generate a target image. The objective function includes a fidelity term associated with the imaging model of the fluorescence microscope and a regularization term calculated by a regularization network.

Description

Image processing method and system
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and system.
Background
Fluorescence microscopy (FM) is critical to elucidating the dynamics and function of various biological processes in living cells. However, every FM involves drawbacks and trade-offs, because each must divide a limited spatio-temporal signal budget. These limitations appear when comparing different types of microscopes (e.g., 3D-SIM provides better spatial resolution than high-numerical-aperture FLFM, but at a slower speed), and different implementations of the same microscope type (e.g., 3D-SIM provides better spatial resolution than 3D-CSDM, but with more photobleaching); likewise, within the same microscope, shorter exposure times and smaller pixels can increase speed and resolution at the cost of signal-to-noise ratio (SNR). Performance trade-offs are particularly important for live-cell super-resolution (SR) microscopy applications, where the required spatio-temporal resolution must be balanced against SNR.
In addition to advances in microscope hardware, computational methods are becoming increasingly important for FM image restoration. From a statistical point of view, FM image restoration is in essence the recovery of the observed sample from camera measurements that suffer information loss (convolution with the point spread function) and measurement uncertainty (noise). The restoration process can therefore be cast as a linear inverse problem consisting of a fidelity term corresponding to the camera observation process and a regularization term corresponding to the prior distribution of the sample. Designing a suitable regularization term that expresses the complex statistical prior distribution of the sample is crucial. Previous work devised various mathematically tractable regularization terms, but the complex prior distribution of the sample is only partially reflected in their formulas. Conventional analytical models are thus limited by the accuracy of their assumptions, which may lead to quality loss in image restoration. Deep learning (DL) has strong expressive power and can in theory approximate complex sample distributions with arbitrarily small error.
Some embodiments of the present specification provide a restoration method applicable to 3D-SIM, CSDM, WFM, FLFM, and the like. The restoration method combines a fidelity term based on the camera imaging noise model with a DL-based regularization term, and provides noise-free, artifact-free, high-fidelity, high-contrast FM images as a general method across multiple fluorescence imaging modalities.
Disclosure of Invention
One of the embodiments of the present specification provides an image processing method including: acquiring multi-dimensional image data generated by a fluorescence microscope; generating an initial multi-dimensional image based on the multi-dimensional image data; and constructing an objective function based on an image acquisition process and performing one or more rounds of iteration on the initial multi-dimensional image to generate a target image; wherein the objective function comprises a fidelity term and a regularization term, the fidelity term is related to an imaging model of the fluorescence microscope, and the regularization term is calculated by a regularization network.
One of the embodiments of the present specification provides an image processing system including: an image data acquisition module for acquiring multi-dimensional image data, the image data being generated by a fluorescence microscope; an initial image generation module for generating an initial multi-dimensional image based on the multi-dimensional image data; the target image generation module is used for constructing a target function based on an image acquisition process, and carrying out one or more rounds of iteration on the initial multidimensional image to generate a target image; wherein: the objective function comprises a fidelity term and a regularization term, wherein the fidelity term is related to an imaging model of the fluorescence microscope, and the regularization term is obtained through calculation of the regularization network.
One of the embodiments of the present specification provides an image processing apparatus including a processor for executing the above-described image processing method.
One of the embodiments of the present specification provides a computer-readable storage medium storing computer instructions that, when read by a computer, perform the above-described image processing method.
Drawings
The present specification will be further elucidated by way of example embodiments, which will be described in detail by means of the accompanying drawings. The embodiments are not limiting, in which like numerals represent like structures, wherein:
FIG. 1 is a schematic diagram of an exemplary image restoration method shown in accordance with some embodiments of the present description;
FIG. 2 is a schematic diagram of a comparison of hybrid 3D-SIM with other 3D-SIM approaches according to some embodiments of the present disclosure;
FIG. 3 is a schematic illustration of a restoration method shown in some embodiments of the present description to improve CSDM imaging;
FIG. 4 is a schematic illustration of live cell bicolor delayed WFM and FLFM imaging with improved mitochondrial and peroxisome performance according to the recovery method shown in some embodiments of the present disclosure;
FIG. 5 is a schematic diagram of a Fourier light field microscope according to some embodiments of the disclosure;
FIG. 6 is a schematic illustration of an application scenario of an image processing system according to some embodiments of the present description;
FIG. 7 is a schematic diagram of exemplary hardware and/or software components of a computing device shown in accordance with some embodiments of the present description;
FIG. 8 is an exemplary block diagram of an image processing system according to some embodiments of the present description;
FIG. 9 is an exemplary flow chart of an image processing method shown in accordance with some embodiments of the present description;
FIG. 10 is a schematic diagram of a training process for a regularized network as shown in some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It will be appreciated that "system," "apparatus," "unit" and/or "module" as used herein is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, as other steps or elements may be included in a method or apparatus.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
It should be noted that when describing performing operations on an image, the term "image" as used herein may refer to a data set (e.g., a matrix) containing pixel values in the image. As used herein, for brevity, an object (e.g., a person, an organ, a cell, or a portion thereof) in an image may be referred to as an object. Further, the manipulation of the representation of the object in the image may be referred to as manipulation of the object, and the segmentation of a portion of the image that includes the representation of the cell or organelle from the image may be referred to as segmentation of the cell or organelle.
Images with relatively high resolution (e.g., super-resolution microscopy images) generated by conventional image reconstruction methods may typically be subject to artifacts.
In view of this, some embodiments of the present disclosure provide an image processing method that optimizes an initial multi-dimensional image to generate a target image by constructing an objective function and utilizing a regularization network, wherein the objective function includes a fidelity term associated with the imaging model of the fluorescence microscope and a regularization term calculated by a deep-learning regularization network, such that artifacts and noise in the generated target image are effectively reduced.
It should be noted that, for convenience of description, the present specification describes a process of generating an image by using a fluorescence microscope, but it should be understood that the method provided in the present specification may be applied to any other type of system, such as a microscope, a telescope, a camera (e.g., a monitoring camera, a mobile phone including a photographing function, a webcam, etc.), a medical imaging apparatus, etc., or a combination of the foregoing apparatuses.
The present embodiments develop an image processing method (also referred to as a restoration method) for fluorescence microscopy (FM) by combining a camera noise model with deep learning (DL) in an interpretable manner. The method can be applied to three-dimensional structured illumination microscopy (3D-SIM), confocal spinning-disk microscopy (CSDM), wide-field microscopy (WFM), Fourier light-field microscopy (FLFM), and the like. In some embodiments, the restoration method may be applied to process 3D-SIM images of F-actin filaments, verifying that the method achieves restoration comparable to the three currently best methods. The restoration method can be employed to enhance degraded multi-dimensional 3D-SIM data, thereby achieving artifact-free and high-fidelity super-resolution (SR) recovery. Applied to CSDM, the restoration method realizes 2D/3D time-lapse imaging with high SNR and high contrast, without signal temporal leakage or loss. As a general restoration method, it can also eliminate noise and artifacts in long-term WFM planar and FLFM volumetric imaging.
The restoration method may be integrated into 3D-SIM and systematically evaluated against other state-of-the-art 3D-SIM methods, including SIMnoise, Open3D-SIM and Hessian3D-SIM, on F-actin filaments (see FIG. 2). For low SNR, although the background regions of SIMnoise, Open3D-SIM and Hessian3D-SIM are improved to varying degrees over the conventional 3D-SIM, Open3D-SIM and Hessian3D-SIM may still produce artifacts due to noise amplification (see FIG. 2b). By comparison, the hybrid 3D-SIM yields more continuous (e.g., FIGS. 2c, 2i, 2m) and complete structures (e.g., FIGS. 2c, 2d, 2j, 2k), with fewer artifacts (e.g., FIGS. 2b, 2d, 2h, 2l), better resolution maintenance (e.g., FIGS. 2c, 2d), and higher 3D PSNR (e.g., FIG. 2n) and 3D SSIM (e.g., FIG. 2o) than the other approaches. The volumetric SR imaging capability of the hybrid 3D-SIM and the conventional 3D-SIM on live COS-7 cells expressing Lifeact-EGFP is shown in FIG. 2e. The hybrid 3D-SIM can display the complex 3D structure of actin filaments with super-resolution, whereas artifacts in the conventional 3D-SIM reconstruction severely distort the complex 3D structure of actin filaments (FIGS. 2f, 2g).
The hybrid 3D-SIM can be applied to live COS-7 cells co-expressing MitoTracker and SiR-Tubulin to visualize the structural dynamics and interactions of microtubules and mitochondria (see FIGS. 2p-2s). Although the wide-field epi-illumination configuration used in 3D-SIM may cause rapid photobleaching, the hybrid 3D-SIM can continue to generate high-quality, time-lapse volumetric SR images of microtubules. In contrast, the conventional 3D-SIM generates SR images that are difficult to interpret under the same conditions. For the mitochondrial channel, which has a relatively high signal-to-noise ratio, the conventional 3D-SIM can produce visually acceptable SR images, but they are still degraded by reconstruction artifacts in the background region. The hybrid 3D-SIM can eliminate these artifacts. These results demonstrate the excellent volumetric SR imaging capability of the hybrid 3D-SIM, facilitating volumetric SR imaging of photosensitive biological processes with minimal invasiveness.
To verify the versatility of the fluorescence microscope restoration method, the restoration strategy may be integrated into CSDM. Zinc ion secretion events were modeled using a specific fluorescence imaging model, and after training on synthetic, highly realistic imaging data, the hybrid CSDM demonstrated its restoration performance on time-lapse images of islet zinc ion secretion (see FIGS. 3a-3c). For raw images severely contaminated by noise in the conventional CSDM, DeepCAD-CSDM can generate well-denoised images, but with signal temporal leakage (FIG. 3b, arrows) and loss (FIG. 3c, arrows and boxes), both of which are avoided by the hybrid CSDM.
The hybrid CSDM may be applied to live COS-7 cell samples co-expressing EGFP-KDEL and PEX2-GFP to capture successive time-lapse images of the endoplasmic reticulum and peroxisomes (FIGS. 3d, 3e). Because of the low signal-to-noise ratio, conventional CSDM images convey little useful information even after deconvolution. In comparison, the hybrid CSDM provides higher-SNR and higher-contrast images after additional deconvolution. The high-fidelity performance advantage of the restoration method is maintained in 3D-CSDM volumetric time-lapse imaging of microtubules (FIGS. 3f, 3g).
The versatility of the restoration method can be further demonstrated on FLFM. Conventional FLFM can achieve fast, volumetric and multi-color live-cell imaging, but the imaging duration is limited by photobleaching caused by the wide-field epi-illumination configuration. To address the limitations of epi-illumination while preserving volumetric imaging capability, the restoration method may be implemented on the FLFM platform to develop a hybrid FLFM system (see FIG. 4).
The raw FLFM image is acquired with a wide-field epi-illumination configuration and can be regarded as a wide-field microscope (WFM) image, from which the sample volume distribution can be reconstructed by 3D deconvolution (see FIG. 5). For 3D reconstruction of low-SNR FLFM raw images, noise is amplified by deconvolution and converted into artifacts. Noise and artifacts in conventional time-lapse WFM planar images and FLFM volumetric reconstructions can be eliminated by the restoration method (FIGS. 4a-4c).
The hybrid FLFM can be applied to live COS-7 cell samples co-expressing MitoTracker and PEX2-mCherry for long-term visualization of the volumetric structure and dynamics of mitochondria and peroxisomes (see FIGS. 4d-4f). During the 30-minute recording, the conventional FLFM 3D reconstructed volume is continuously degraded by severe artifacts, and the structural dynamics are almost completely invisible, especially for the peroxisome channel, because the sample structure and the artifact distribution are highly similar. The hybrid FLFM provides a balanced volumetric reconstruction with low artifacts while preserving fine subcellular structures. In short, the restoration method, as a general method, extends the imaging duration of WFM planar and FLFM volumetric imaging.
The hybrid FM framework improves live-cell five-dimensional (5D) (x-y-z, time, color) fluorescence imaging by interpretably combining the advantages of the optical front end (the camera imaging noise model) and the algorithmic back end (DL). The restoration method may have the following advantages: (1) it is derived from a statistical analysis of the fluorescence microscope image formation model, and the corresponding iterative restoration process is fully statistically interpretable; (2) compared with traditional algorithms that iteratively restore the image with a hand-crafted analytical model based on certain assumptions, the restoration method replaces the hand-crafted model with a more suitable deep-learning-based model, which greatly improves the final restoration quality; (3) the restoration method proves to be a widely compatible and versatile method for image restoration across multiple fluorescence imaging modalities.
The restoration method provides a general and interpretable solution that achieves noise-free, artifact-free, high-fidelity, high-contrast and long-term (about 3.5 hours) restoration from low-SNR images, with resolution down to 45 nm. The restoration method reduces photon dose and the associated phototoxicity, increases imaging speed, extends imaging duration, and expands the number of color channels, which is very important for revealing various biological phenomena.
The restoration method in embodiments of the present specification may involve data set acquisition, preprocessing, training of the deep-learning regularization term, and the like. For 3D-SIM imaging, the same settings as in HiS-SIM are used. For CSDM imaging, the CSDM is a commercial system based on an inverted fluorescence microscope (IX81, Olympus) equipped with a wide-field objective (×100/1.3 oil, Olympus) and a spinning-disk confocal scanning unit (CSU-X1, Yokogawa). Four laser lines at 405 nm, 488 nm, 561 nm and 647 nm may be used with the CSDM. Images were captured by either an sCMOS camera (C14440-20UP, Hamamatsu) or an EMCCD camera (iXon3 897, Andor). For FLFM imaging, the same settings as in HR-FLFM may be used. To obtain low-SNR raw images and the corresponding GT images of TIRF-SIM, 3D-SIM, CSDM and FLFM for training the DL regularization term of the restoration method, samples can be imaged with high laser intensity and long exposure time to serve as GT raw images, and noise following the fluorescence microscope camera imaging noise model is added to the GT raw images to serve as the low-SNR raw images.
For each imaging modality and sample, about 30 cells may be imaged and the images preprocessed to obtain pairs of raw data and GT images. These image pairs may be divided into training, validation and test sets. In some embodiments, random cropping, 90° rotation, and horizontal/vertical flipping may be applied to further enrich the training dataset. The restoration method may be trained using the Adam optimizer with the learning rate set to 1×10⁻⁴. A combination of mean squared error (MSE) loss and structural similarity index measure (SSIM) loss may be employed as the loss function:
L(x,y)=MSE(x,y)+k[1-SSIM(x,y)] (1)
Where x and y in equation (1) represent the predicted image and the GT image, respectively, k is the weight that balances the contributions of SSIM and MSE losses, and is empirically set to 0.2 in our experiments.
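For illustration, a minimal NumPy sketch of the combined loss of equation (1) is given below. The single-window (global) SSIM form, the function names, and the constants c1/c2 are simplifying assumptions for this sketch, not the exact training implementation described above.

```python
import numpy as np

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window (global) SSIM over volumes scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def restoration_loss(pred, gt, k=0.2):
    """L(x, y) = MSE(x, y) + k * [1 - SSIM(x, y)], as in equation (1)."""
    mse = np.mean((pred - gt) ** 2)
    return mse + k * (1.0 - global_ssim(pred, gt))

# usage: pred and gt are predicted and GT volumes scaled to [0, 1]
pred = np.random.rand(16, 64, 64)
gt = np.random.rand(16, 64, 64)
print(restoration_loss(pred, gt))
```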
To quantitatively evaluate the performance of the restoration method and other computational 3D-SIM methods, the 3D PSNR and 3D SSIM between the GT volume data and the algorithmically restored volume data may be calculated. The gray values of the GT volume y can be normalized to the [0, 1] range, a linear transformation is applied to the restored volume x to match its dynamic range to y, and the 3D PSNR and 3D SSIM between the transformed volume and y are then computed. Since there is no GT for the real experimental data, the variance of fluorescence intensity in background regions (e.g., the mesh regions within the actin filament network) is used to evaluate the magnitude of the artifacts, and the variance of fluorescence intensity along actin filaments is used to evaluate the continuity of the signal. Furthermore, the length and density of actin filaments may be calculated according to the method in BFSIM. The FWHM (full width at half maximum) values of actin filaments and the minimum FRC (Fourier ring correlation) values in the rFRC (rolling FRC) map calculated by the PANELJ software can be measured using the method in Hessian-SIM to evaluate the resolution of the different restored volume data.
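As a hedged illustration of this evaluation step, the sketch below normalizes the GT volume to [0, 1], fits the linear dynamic-range matching as a least-squares transformation a·x + b (an assumed form), and computes the 3D PSNR with an assumed peak value of 1; it is not the exact scoring code used in the experiments.

```python
import numpy as np

def match_and_psnr(restored, gt):
    """Normalize GT to [0, 1], linearly map the restored volume to the GT
    dynamic range (least-squares fit of a*x + b), then compute 3D PSNR."""
    y = (gt - gt.min()) / (gt.max() - gt.min())          # GT normalized to [0, 1]
    x = restored.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    a, b = np.linalg.lstsq(A, y.ravel(), rcond=None)[0]  # fit a*x + b ~ y
    x_hat = (a * restored + b).reshape(gt.shape)
    mse = np.mean((x_hat - y) ** 2)
    return 10 * np.log10(1.0 / mse)                      # peak value is 1 after normalization

# usage
gt = np.random.rand(8, 32, 32)
rec = 0.5 * gt + 0.1 + 0.01 * np.random.randn(*gt.shape)
print(match_and_psnr(rec, gt))
```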
FIG. 1 shows an exemplary restoration method of the present specification. In FIG. 1: panel a shows a schematic diagram of the iterative optimization restoration process and the training phase of the restoration method. Panel b shows the architecture of the deep-learning-based regularization term and its gradient. Panel c shows the architecture of the Block and T-Block modules of the regularization term and its gradient in panel b, where Conv denotes a convolutional layer; Pote denotes a potential function; Acti denotes an activation layer; T-Block denotes a transposed block; T-Conv denotes a transposed convolutional layer; G-Pote denotes the gradient of the potential function; and G-Acti denotes a gradient activation layer. Panel d shows a schematic diagram of the inference phase of the restoration method.
FIG. 2 shows a comparison of the hybrid 3D-SIM with other 3D-SIM methods. In FIG. 2: panel a shows representative 3D-SIM images of F-actin reconstructed by the conventional 3D-SIM, Open3D-SIM and the hybrid 3D-SIM, color-coded according to distance from the substrate. Panels b-d show magnified xoy views of the boxed regions in (b) and (c) reconstructed by the conventional 3D-SIM, SIMnoise, Open3D-SIM, Hessian3D-SIM and the hybrid 3D-SIM, and xoz views along the lines in (a); GT 3D-SIM images acquired at high SNR are shown for reference. Panel e shows representative 3D-SIM images of F-actin in live COS-7 cells expressing Lifeact-EGFP, reconstructed by the conventional 3D-SIM and the hybrid 3D-SIM, color-coded to indicate distance from the substrate. Panels f-g show time-lapse magnified xoy views of the boxed region in (e), reconstructed by the conventional 3D-SIM (f) and the hybrid 3D-SIM (g); xoz views along the lines are at the bottom. Panels h-i show the variance of the background (h) and of the signal (i) in the xoy plane for the different reconstructions (n=20). Panels j-k show the variance of the background (j) and of the signal (k) in the xoz plane for the different reconstructions (n=20). Panels l-m show the length (l) of actin filaments after segmentation and skeletonization (n=16) and the density (m) of actin filaments after segmentation (n=16). Panels n-o show the 3D PSNR (n) and 3D SSIM (o) for the different reconstructions (n=20). Panel p shows representative 3D-SIM images of microtubules and mitochondria reconstructed by the conventional 3D-SIM and the hybrid 3D-SIM. Panel q shows time-lapse xoy views at different imaging depths within the boxed region in (p), reconstructed by the conventional 3D-SIM (left column) and the hybrid 3D-SIM (right column). Panel r shows time-lapse magnified xoz views along the line in (p), reconstructed by the conventional 3D-SIM (left column) and the hybrid 3D-SIM (right column). Panel s shows time-lapse magnified views of microtubules (MT) and mitochondria (Mito) in the rectangular boxes in (p), reconstructed by the conventional 3D-SIM (left column) and the hybrid 3D-SIM (right column), color-coded by distance from the substrate. Scale bars: 5 μm (panels a, e, p, s) and 1 μm (panels b, c, d, f, g, q, r).
FIG. 3 shows that the restoration method improves CSDM imaging without signal temporal leakage or loss. In FIG. 3: panel a shows representative CSDM images of islet zinc ion secretion from the conventional CSDM, DeepCAD-CSDM and the hybrid CSDM, color-coded by time from the start of imaging. Panels b-c show time-lapse magnified views of the boxed regions in (a) imaged by the conventional CSDM, DeepCAD-CSDM and the hybrid CSDM; the profiles along the lines in (c) are at the bottom. Panel d shows representative CSDM images of the endoplasmic reticulum and peroxisomes from the conventional CSDM, the deconvolved conventional CSDM, and the deconvolved hybrid CSDM. Panel e shows time-lapse magnified views of the conventional CSDM, the deconvolved conventional CSDM, and the deconvolved hybrid CSDM within the square region in (d). Panel f shows representative 3D-CSDM images of microtubules from the conventional 3D-CSDM, the deconvolved conventional 3D-CSDM, and the deconvolved hybrid 3D-CSDM, color-coded by distance from the substrate. Panel g shows time-lapse magnified views of the conventional 3D-CSDM, the deconvolved conventional 3D-CSDM, and the deconvolved hybrid 3D-CSDM within the square region in (f). Scale bars: 5 μm (panel a), 2 μm (panels d, f), 1 μm (panels b, c, e, g), 0.5 μm (panel c, profiles, horizontal axis), and 0.2 a.u. (panel c, profiles, vertical axis).
FIG. 4 shows that the restoration method improves live-cell two-color time-lapse WFM and FLFM imaging of mitochondria and peroxisomes. In FIG. 4: panels a-b show representative WFM (a) and FLFM (b) mitochondrial images from conventional WFM, conventional FLFM and hybrid FLFM imaging, with the distance from the focal plane color-coded in (b). Panel c shows time-lapse magnified views of conventional WFM, conventional FLFM, and hybrid FLFM imaging within the boxed regions of (a) and (b). Panels d-f show long-term two-color FLFM imaging of mitochondria and peroxisomes in live cells by conventional FLFM (top row) and hybrid FLFM (bottom row); the xoz view along the line in (d) is at the bottom, and the distances from the focal plane are color-coded in (e) and (f). Scale bars: 5 μm (panels a, b), 2 μm (panel c) and 10 μm (panels d, e, f).
In some embodiments, the numerical point spread function (PSF) of FLFM is generated as follows. Referring to FIG. 5, which shows an exemplary FLFM system, vectorial Debye theory is used to derive the wave function at the native image plane (NIP) as equation (3), whose arguments are the NIP coordinate and the sample volume coordinate. In equation (3), M, NA and f_obj are the objective magnification, numerical aperture and focal length, respectively; λ_em is the emission wavelength; α is the critical angle for total internal reflection, defined in equation (4); θ and the corresponding sample-side angle are the refraction angle (objective side) and incidence angle (sample side) at the interface between the immersion medium (refractive index n_1) and the sample solution (refractive index n_2); Φ(·) is the aberration function, defined in equation (5); l is the normal focus position; τ_s and τ_p are the Fresnel transmission coefficients, defined in equations (6) and (7); u and v are the normalized radial and axial coordinates, defined in equations (8) and (9); and J_0 and J_2 are Bessel functions of the first kind of orders 0 and 2, respectively.
α=min[sin-1(NA/n1),sin-1(n2/n1)] (4)
Because of the polarization isotropy of the fluorescence emitter, p_x = p_y = 0 can be set, so that the light field points only along the p_z direction, which simplifies the computation. The NIP wave function is then optically Fourier transformed by the Fourier lens and modulated by the MLA according to the MLA transfer function φ(·) defined in equation (10), evaluated at the MLA coordinates, where amp(·) is the MLA amplitude mask function, comb(·) is the MLA comb function, d and f_mla are the diameter and focal length of a single MLA microlens, and the remaining operator in equation (10) denotes convolution.
The light field propagation from the MLA to the camera can then be modeled using Fresnel propagation, as expressed in equation (11), in which the camera-plane spatial and frequency coordinates appear and IFT is the inverse Fourier transform operator. The PSF function H(·) of FLFM then follows as equation (12), where δ(·) is the Dirac delta function.
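Equation (11) corresponds to a standard Fresnel propagation step. The sketch below shows one common transfer-function implementation of such a propagator; the paraxial transfer-function form, the grid size, and the sampling values are illustrative assumptions, not the patent's exact numerical PSF code.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate a complex field u0 over distance z using the Fresnel transfer
    function H(fx, fy) = exp(i*k*z) * exp(-i*pi*lambda*z*(fx^2 + fy^2))."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi / wavelength * z) * \
        np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# usage: propagate a small Gaussian field over an assumed 3 mm distance
n, dx = 256, 1e-6                         # 1 um sampling (illustrative)
xx = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(xx, xx)
u0 = np.exp(-(X ** 2 + Y ** 2) / (20e-6) ** 2).astype(complex)
u1 = fresnel_propagate(u0, wavelength=520e-9, z=3e-3, dx=dx)
print(np.abs(u1).max())
```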
In some embodiments, an exemplary FLFM three-dimensional (3D) reconstruction process is as follows.
For a sample with spatial fluorescence intensity distribution S, the corresponding raw observation image E acquired by FLFM can be calculated as equation (13), whose matrix form can be expressed as:
E=HS (14)
where E, H and S are determined by the observed image, the PSF of FLFM (see FIG. 5b) and the sample volume. The 3D reconstruction of FLFM can then be converted into the optimization problem of equation (15), which can be solved iteratively by Richardson-Lucy (RL) deconvolution as shown in equation (16):

S* = argmin_S KL(E‖HS) (15)

S_{k+1} = S_k ⊙ H^T(E ⊘ (HS_k)) (16)

where KL(·) is the Kullback-Leibler divergence, (·)^T is the matrix transpose operator, ⊙ and ⊘ denote element-wise multiplication and division, k is the iteration number, and S_k is the k-th reconstruction. In some embodiments, FLFM 3D reconstruction (e.g., FIG. 5c) may be performed using a PSF that combines the experimental PSF (providing the intensity distribution) and the numerical PSF (providing the spatial position).
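The sketch below illustrates the RL iteration of equation (16), with an FFT-based circular convolution and a toy Gaussian kernel standing in for the FLFM projection matrix H; these stand-ins, the initialization, and the iteration count are assumptions for illustration only.

```python
import numpy as np

def richardson_lucy(observed, psf, iters=30, eps=1e-12):
    """RL deconvolution: S_{k+1} = S_k * H^T(E / (H S_k)), with H and H^T
    realized as circular convolution / correlation via the FFT (PSF sums to 1)."""
    otf = np.fft.fftn(np.fft.ifftshift(psf))
    H = lambda s: np.real(np.fft.ifftn(np.fft.fftn(s) * otf))
    HT = lambda s: np.real(np.fft.ifftn(np.fft.fftn(s) * np.conj(otf)))
    s = np.full_like(observed, observed.mean())     # flat non-negative initialization
    for _ in range(iters):
        ratio = observed / (H(s) + eps)
        s = s * HT(ratio)
    return s

# usage with a toy 2D Gaussian PSF and two point sources
x = np.linspace(-1, 1, 65)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X ** 2 + Y ** 2) / 0.01)
psf /= psf.sum()
truth = np.zeros((65, 65)); truth[20, 30] = 1.0; truth[40, 45] = 0.5
blurred = np.real(np.fft.ifftn(np.fft.fftn(truth) * np.fft.fftn(np.fft.ifftshift(psf))))
observed = np.random.poisson(200 * np.clip(blurred, 0, None)).astype(float)
print(richardson_lucy(observed, psf).max())
```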
In some embodiments, an exemplary 3D-SIM imaging physical model is shown below.
Compared with a conventional 3D wide-field fluorescence microscope, 3D-SIM excites the volumetric sample with patterned illumination generated by three-beam interference, which doubles both the lateral and axial resolution. Ignoring imaging noise, the fluorescence emission distribution detected by the camera can be expressed as equation (17), where r and n are the illumination pattern direction and phase indices; the specific calculation can be described with reference to equation (3). The patterned illumination can be calculated as equation (18),
in which N is a normalization factor defined in equation (19), I_o and I_n are the intensities of the oblique and normally incident plane waves, λ_ex is the excitation wavelength, θ is the incident polar angle, ψ_r is the azimuth angle of the illumination pattern for the r-th direction, and ψ_rn is the phase of the illumination pattern for the r-th direction and n-th phase.
In equation (19), M_r and M_n are the numbers of illumination pattern directions and phases. The matrix form of equation (17) can be expressed as:
Ern=IrnS (20)
where E_rn and S are determined by the observed image and the sample volume, and I_rn is determined by the structured illumination pattern of the 3D-SIM and the PSF.
In some embodiments, exemplary 3D-SIM illumination pattern parameter estimation and Super Resolution (SR) reconstruction are shown below.
Auto- and cross-correlation algorithms are used to estimate the parameters of the 3D-SIM illumination pattern. The SR reconstruction of the 3D-SIM can then be converted into a regularized least-squares (LS) minimization problem, defined as:

S* = argmin_S Σ_{r,n} ‖I_rn S − E_rn‖² + λR(S) (21)

where R(S) is an additional regularization term based on prior knowledge of the sample and λ is the regularization weight. For SR reconstruction with a conventional 3D-SIM, a Wiener regularization term is widely adopted, and the corresponding minimization problem can be solved by Wiener deconvolution.
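As a point of comparison for the Wiener-regularized reconstruction mentioned above, a minimal frequency-domain Wiener (Tikhonov-type) deconvolution sketch is shown below; the scalar regularization weight, the single-OTF forward model, and the toy data are illustrative assumptions and do not reproduce the full structured-illumination reconstruction.

```python
import numpy as np

def wiener_deconvolve(observed, psf, w=1e-2):
    """Solve argmin_S ||H S - E||^2 + w ||S||^2 in the Fourier domain:
    S_hat = conj(OTF) * E_hat / (|OTF|^2 + w)."""
    otf = np.fft.fftn(np.fft.ifftshift(psf))
    e_hat = np.fft.fftn(observed)
    s_hat = np.conj(otf) * e_hat / (np.abs(otf) ** 2 + w)
    return np.real(np.fft.ifftn(s_hat))

# usage with a toy Gaussian PSF and a small square object
x = np.linspace(-1, 1, 65)
X, Y = np.meshgrid(x, x)
psf = np.exp(-(X ** 2 + Y ** 2) / 0.01)
psf /= psf.sum()
img = np.zeros((65, 65)); img[30:35, 30:35] = 1.0
blurred = np.real(np.fft.ifftn(np.fft.fftn(img) * np.fft.fftn(np.fft.ifftshift(psf))))
print(wiener_deconvolve(blurred + 0.01 * np.random.randn(65, 65), psf).max())
```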
In some embodiments, an exemplary CSDM imaging physical model is shown below.
In CSDM, multiple focused laser beams scan the sample S, and the fluorescence excited by each beam passes through a corresponding confocal circular pinhole before being focused onto the camera; the fluorescence intensity detected for each beam can be calculated as equation (22). The circular pinhole function describes the effect of the confocal aperture (equation (23)): it equals 1 within the aperture radius a, back-projected into NIP space, and 0 outside.
If the Stokes shift between the excitation and emission wavelengths is ignored, and assuming that the confocal aperture of the microscope does not significantly limit lateral light detection (i.e., the excitation intensity distribution has already dropped to negligible values before the edge of the confocal aperture), the matrix form of equation (22) can be approximated as:
E=HS (24)
where E and S are determined by the observed image and sample and H is determined by the multi-focused laser beam illumination pattern of the CSDM and the PSF.
In some embodiments, an exemplary CSDM deconvolution is as follows. As a low-photon imaging technique, the fluorescence emission of CSDM closely approximates a Poisson process. Equation (24) may thus be transformed into:
E~Poisson(HS) (25)
where Poisson(HS) is a Poisson distribution with parameter (mean and variance) equal to HS. Maximizing the likelihood P(E|HS) with a multiplicative gradient-based algorithm yields the RL deconvolution algorithm expressed in equation (16).
In some embodiments, an exemplary fluorescence microscope camera imaging noise model is shown below.
In EMCCD and sCMOS cameras, imaging noise mainly consists of shot noise, thermal noise, and readout noise. Shot noise originates from the photon detection process, whereas thermal noise and readout noise originate from the electronics surrounding the detector chip. When photons are detected on the camera sensor chip, the analog-to-digital unit (ADU) count output of the camera follows a probability distribution that can be described by the convolution of a Poisson distribution and a Gaussian distribution, where the Poisson distribution represents the shot noise of photon detection and the Gaussian distribution results from the readout noise. The conditional probability density function (CPDF) for a single pixel i of the camera can be described by equation (26):
where C_i represents the count (in ADU) obtained by the camera at pixel i, A is a normalization constant, E_i is the expected number of photoelectrons (e−), k_i is the amplification gain (ADU/e−) of pixel i, and o_i and var_i represent the mean (offset) and variance, respectively, of the readout noise of pixel i.
For convenience of the following derivation, the distribution of the random variable C i can be equivalently expressed as:
Ci=Pi+Gi+oi (27)
where P_i obeys a Poisson distribution whose mean and variance are both equal to k_i E_i:
Pi~P(kiEi) (28)
and G_i obeys a Gaussian distribution with zero mean and variance var_i:
Gi~N(0,vari) (29)
Then, a new random variable Z_i can be defined and expressed as:

Zi=Ci-oi=Pi+Gi (30)
When the mean (or variance) of the Poisson distribution is large, it can be approximated by a Gaussian distribution. The distribution of Z_i then approximately follows a Gaussian distribution and can be expressed as:

Zi~N(kiEi,kiEi+vari) (31)
For simplicity, ignoring the E_i term in the variance of Z_i, the approximate CPDF of Z_i can be expressed as a Gaussian with mean k_iE_i and variance var_i (equation (32)).
The approximate CPDF of C_i can then be obtained by substituting Z_i from equation (30) into equation (32), yielding a Gaussian in C_i with mean k_iE_i + o_i and variance var_i (equation (33)).
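The following sketch simulates the camera observation of equations (27)-(29) — Poisson shot noise plus Gaussian readout noise and a fixed offset — which is also how low-SNR training inputs can be synthesized from clean GT images; the gain, offset, and readout-variance values are illustrative assumptions.

```python
import numpy as np

def simulate_camera(expected_photoelectrons, gain=2.0, offset=100.0, read_var=4.0, rng=None):
    """C_i = Poisson(k_i * E_i) + N(0, var_i) + o_i (equations (27)-(29)):
    amplified shot noise plus readout noise plus a fixed offset, in ADU."""
    rng = np.random.default_rng() if rng is None else rng
    shot = rng.poisson(gain * expected_photoelectrons).astype(float)
    readout = rng.normal(0.0, np.sqrt(read_var), size=expected_photoelectrons.shape)
    return shot + readout + offset

# usage: degrade a clean GT image into a low-SNR raw image
gt = 50.0 * np.random.rand(128, 128)        # expected photoelectrons per pixel
raw = simulate_camera(gt)
print(raw.mean(), raw.std())
```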
Based on the above derivation, the image formation of Fluorescence Microscopy (FM), including 3D-SIM, CSDM and FLFM, can be regarded as a linear model defined by equations (20), (24) and (14), which can be unified into the following form:
E=HFMS (34)
where H_FM is the FM linear physical model, S is the sample fluorophore spatial distribution to be recovered, and E is the camera observation (or the expected photoelectron count).
Further, the sample fluorophore spatial distribution S can be recovered by maximizing the posterior probability, defined as:

S* = argmax_S P(S|C) (35)
where P(S|C_i) can be calculated using Bayes' rule and expressed as:

P(S|Ci)=P(Ci|S)P(S)/P(Ci) (36)
Then, the recovery optimization function J(S) is defined as the negative logarithm of P(S|C) and calculated by combining equations (33) to (36). The simplified form of J(S) can be expressed as a linear inverse problem (equation (37)).
From a statistical perspective, the first part of J(S) represents the fidelity term, and the second part of J(S) (i.e., R(S)) represents the regularization term, defined as the negative logarithm of P(S) and corresponding to prior knowledge of the sample distribution.
In some embodiments, an exemplary deep learning based regularization term is shown below.
It is important to design a suitable regularization term to capture the complex statistics of the sample. Deep learning (DL) has strong expressive power and can in theory approximate a complex sample distribution with arbitrarily small error. Therefore, a deep-learning-based regularization term R(S, θ) can be designed to replace the traditional hand-crafted regularization term R(S) as a more suitable statistical model of the sample distribution, where θ denotes the learned network weights.
The pseudo-inverse of the FM physical model is applied within the fidelity term to facilitate the solution of the subsequent optimization problem. Thus, equation (37) can be rewritten as a hybrid optimization problem (equation (38)) combining a camera-noise-model-based fidelity term with the deep-learning-based regularization term, where f_Hybrid is the recovered sample fluorophore spatial distribution (i.e., S), and Ŝ, obtained by applying the pseudo-inverse of the FM physical model to the observation, is the conventionally restored image.
The inverse problem of equation (38) can be solved iteratively by a gradient descent algorithm (equation (39)), where t is the iteration index, f_t is the recovery result of the t-th iteration, and ∇ is the gradient operator. The partial derivative of the fidelity term can be calculated as in equation (40).
The partial derivative of the regularization term can be calculated according to the flow in FIGS. 1b and 1c.
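To make the iteration of equations (38)-(40) concrete, a hedged NumPy sketch of the gradient-descent recovery loop is given below. The quadratic fidelity gradient (f − Ŝ), the discrete-Laplacian smoothness regularizer standing in for the learned regularization network of FIG. 1, the step size, and the iteration count are all assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def reg_gradient(f):
    """Placeholder for the learned regularization-term gradient of FIGS. 1b/1c;
    here the gradient of a simple smoothness prior (negative Laplacian) is used."""
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return -lap

def hybrid_recover(s_hat, lam=0.05, step=0.2, iters=50):
    """Gradient descent on J(f) = 0.5*||f - s_hat||^2 + lam*R(f):
    f_{t+1} = f_t - step * [(f_t - s_hat) + lam * dR/df]."""
    f = s_hat.copy()
    for _ in range(iters):
        grad = (f - s_hat) + lam * reg_gradient(f)
        f = f - step * grad
    return f

# usage: s_hat plays the role of the conventionally restored (pseudo-inverse) image
s_hat = np.random.rand(64, 64)
print(hybrid_recover(s_hat).shape)
```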
In some embodiments, an exemplary CSDM composite data generation process is as follows.
A CSDM image (e.g., an islet zinc ion secretion CSDM image) may be modeled as a superposition of multiple PSFs of the imaging system. The equivalent PSF of the CSDM may be generated using equation (22) with the sample volume coordinate set to the origin. The apodization filter defined by equation (41) can then be applied to the PSF to avoid oscillations that are detrimental to the convergence of the subsequent deep-learning regularization training.
In equation (41), the argument is the NIP coordinate and R is the full width at half maximum (FWHM) of the PSF. The location map and intensity map of zinc ion secretion may be randomly generated.
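A minimal sketch of this synthetic data generation is shown below, assuming random point-like secretion sites convolved with a Gaussian stand-in for the CSDM equivalent PSF and an assumed Gaussian apodization window; the patent's exact apodization filter of equation (41) is not reproduced here.

```python
import numpy as np

def synth_csdm_frame(shape=(128, 128), n_events=20, psf_sigma=2.0, apod_fwhm=6.0, rng=None):
    """Generate a synthetic secretion image: a random location/intensity map
    convolved (via FFT) with a Gaussian stand-in PSF multiplied by a
    Gaussian apodization window to suppress ringing."""
    rng = np.random.default_rng() if rng is None else rng
    # random location and intensity map of secretion events
    sample = np.zeros(shape)
    ys = rng.integers(0, shape[0], n_events)
    xs = rng.integers(0, shape[1], n_events)
    sample[ys, xs] = rng.uniform(0.5, 1.0, n_events)
    # Gaussian PSF with a FWHM-parameterized Gaussian apodization window
    yy, xx = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    psf = np.exp(-r2 / (2 * psf_sigma ** 2))
    apod = np.exp(-4 * np.log(2) * r2 / apod_fwhm ** 2)
    psf = psf * apod
    psf /= psf.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(sample) * np.fft.fft2(np.fft.ifftshift(psf))))

print(synth_csdm_frame().max())
```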
FIG. 6 is a schematic view of an application scenario of an image processing system according to some embodiments of the present specification. Referring to FIG. 6, an image processing system 600 may include a fluorescence microscope 610, a processing device 620, a storage device 630, one or more terminals 640, and a network 650.
The components in image processing system 600 may be connected in one or more of a variety of ways. For example only, the fluorescence microscope 610 may be connected to the processing device 620 through a network 650. As another example, fluorescence microscope 610 may be directly connected to processing device 620. As yet another example, the storage device 630 may be connected to the processing device 620 directly or through a network 650. As yet another example, the terminal 640 may be connected to the processing device 620 through a network 650 or directly.
The image processing system 600 may be configured to generate a target image using a regularization network (e.g., by performing process 900 of FIG. 9). The target image has a higher spatial resolution relative to the initial multi-dimensional image; for example, the spatial resolution may exceed the upper limit of the spatial resolution at which the fluorescence microscope 610 can acquire a three-dimensional image.
In some embodiments, the fluorescence microscope may include any of the following: a three-dimensional structured illumination microscope (3D-SIM), a confocal spinning-disk microscope (CSDM), a wide-field microscope (WFM), a Fourier light-field microscope (FLFM), and the like.
Fluorescence microscope 610 may be configured to acquire multi-dimensional image data of objects within its observation area. The observed objects may include one or more biological or non-biological objects. Illustratively, the observed objects may include, for example, INS-1 cells, COS-7 cells, HeLa cells, liver sinusoidal endothelial cells (LSEC), human umbilical vein endothelial cells (HUVEC), HEK293 cells, and the like, or any combination thereof. In some embodiments, one or more objects may be fluorescently treated or fluorescently labeled, and the treated or labeled objects may be excited to emit fluorescence for imaging.
The processing device 620 may process data and/or information acquired from the fluorescence microscope 610, the storage device 630, and/or the terminal 640. For example, the processing device 620 may process image data (e.g., an initial multi-dimensional image) related to the object collected by the fluorescence microscope 610. As another example, the processing device 620 may at least partially execute a regularization network to process the initial multi-dimensional image. In some embodiments, the processing device 620 may be a single server or a group of servers. The server group may be centralized or distributed. In some embodiments, processing device 620 may be local or remote. For example, the processing device 620 may access information and/or data from the fluorescence microscope 610, the storage device 630, and/or the terminal 640 via the network 650. As another example, the processing device 620 may be directly connected to the fluorescence microscope 610, the terminal 640, and/or the storage device 630 to access information and/or data. In some embodiments, processing device 620 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof. In some embodiments, processing device 620 may be implemented by computing device 700 having one or more components as described in FIG. 7.
Storage device 630 may store data, instructions, and/or any other information. In some embodiments, the storage device 630 may store data acquired from the fluorescence microscope 610, the processing device 620, and/or the terminal 640. In some embodiments, the storage device 630 may store data and/or instructions that may be executed by the processing device 620, or that may be used by the processing device 620 to perform the exemplary methods described in this specification. In some embodiments, the storage device 630 may include mass storage devices, removable storage devices, volatile read-write memory, read-only memory (ROM), and the like or combinations thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like. Exemplary removable storage devices may include flash drives, floppy disks, optical disks, memory cards, compact disks, tape, and the like. Exemplary volatile read-write memory can include Random Access Memory (RAM). Exemplary RAM may include Dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR-SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero capacitance RAM (Z-RAM), etc. Exemplary ROMs may include Mask ROM (MROM), programmable ROM (PROM), erasable Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, storage device 630 may be implemented on a cloud platform as described elsewhere in this specification.
In some embodiments, the storage device 630 may be connected to the network 650 to communicate with one or more other components in the image processing system 600 (e.g., the fluorescence microscope 610, the processing device 620, and/or the terminal 640). One or more components of image processing system 600 may access data or instructions stored in storage device 630 via network 650. In some embodiments, the storage device 630 may be part of the processing device 620 or the terminal 640.
Terminal 640 may be configured to enable user interaction with image processing system 600. For example, the terminal 640 may receive instructions from a user to cause the fluorescence microscope 610 to scan an object. For another example, the terminal 640 may receive a processing result (e.g., a parameter related to an object) from the processing device 620 and display the processing result to the user. In some embodiments, terminal 640 may be connected to and/or in communication with fluorescence microscope 610, processing device 620, and/or storage device 630. In some embodiments, the terminal 640 may include a mobile device 640-1, a tablet computer 640-2, a laptop computer 640-3, or the like, or a combination thereof. For example, the mobile device 640-1 may include a mobile phone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, a point-of-sale (POS) device, a laptop computer, a tablet computer, a desktop computer, or the like, or a combination thereof. In some embodiments, terminal 640 can include input devices, output devices, and the like. The input device may include alphanumeric and other keys that may be entered via a keyboard, a touch screen (e.g., with haptic or tactile feedback), voice input, eye-tracking input, brain monitoring system input, or any other similar input mechanism. Input information received through the input device may be sent to the processing device 620 for further processing via, for example, a bus. Other types of input devices may include cursor control devices, such as a mouse, a trackball, or cursor direction keys. The output device may include a display, speakers, printer, etc., or a combination thereof. In some embodiments, the terminal 640 may be part of the processing device 620 or the fluorescence microscope 610.
Network 650 may include any suitable network that may facilitate the exchange of information and/or data with image processing system 600. In some embodiments, one or more components of image processing system 600 (e.g., fluorescence microscope 610, processing device 620, storage device 630, terminal 640, etc.) may communicate information and/or data with one or more other components of image processing system 600 via network 650. For example, the processing device 620 may obtain image data (e.g., echo signals) from the fluorescence microscope 610 via the network 650. For another example, processing device 620 may obtain user instructions from terminal 640 via network 650. The network 650 may include public networks (e.g., the internet), private networks (e.g., local Area Network (LAN), wide Area Network (WAN)), etc.), wired networks (e.g., ethernet), wireless networks (e.g., 802.11 networks, wi-Fi networks, etc.), cellular networks (e.g., long Term Evolution (LTE) networks), frame relay networks, virtual private networks ("VPN"), satellite networks, telephone networks, routers, hubs, switches, server computers, etc., or a combination thereof. For example, the network 650 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth TM network, a zigbee TM network, a Near Field Communication (NFC) network, and the like, or a combination thereof. In some embodiments, network 650 may include one or more network access points. For example, network 650 may include wired and/or wireless network access points, such as base stations and/or internet switching points, through which one or more components of image processing system 600 may connect to network 650 to exchange data and/or information.
The description is intended to be illustrative, and not limiting, of the scope of the present description. Many alternatives, modifications, and variations will be apparent to those skilled in the art. The features, structures, methods, and characteristics of the exemplary embodiments described herein may be combined in various ways to obtain additional and/or alternative exemplary embodiments. In some embodiments, image processing system 600 may include one or more additional components and/or one or more of the components described above may be omitted. Additionally or alternatively, two or more components of image processing system 600 may be integrated into a single component. For example, the processing device 620 may be integrated into the fluorescence microscope 610. For another example, a component of image processing system 600 may be replaced with another component capable of performing the function of the component. In some embodiments, the storage device 630 may be a data store that includes a cloud computing platform, such as a public cloud, private cloud, community, hybrid cloud, and the like. However, such changes and modifications do not depart from the scope of the present specification.
FIG. 7 is a schematic diagram of exemplary hardware and/or software components of a computing device shown in accordance with some embodiments of the present description. In some embodiments, one or more components of image processing system 600 may be implemented on one or more components of computing device 700. By way of example only, the processing device 620 and/or the terminal 640 may each be implemented on one or more components of the computing device 700. In some embodiments, the computing device may be a medical image processing apparatus.
As shown in fig. 7, computing device 700 may include a processor 710, memory 720, input/output (I/O) 730, and communication ports 740. Processor 710 may execute computer instructions (e.g., program code) and perform the functions of processing device 620 in accordance with the techniques described herein. Computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions that perform particular functions described herein. For example, processor 710 may process image data of an object acquired from fluorescence microscope 610, storage device 630, terminal 640, and/or any other component of image processing system 600.
In some embodiments, processor 710 may include one or more hardware processors, such as microcontrollers, microprocessors, reduced Instruction Set Computers (RISC), application Specific Integrated Circuits (ASICs), application specific instruction set processors (ASIP), central Processing Units (CPU), graphics Processing Units (GPU), physical Processing Units (PPU), microcontroller units, digital Signal Processors (DSP), field Programmable Gate Arrays (FPGA), advanced RISC Machines (ARM), programmable Logic Devices (PLD), any circuit or processor capable of executing one or more functions, etc., or a combination thereof.
For illustration only, only one processor 710 is depicted in computing device 700. It should be noted, however, that computing device 700 in this specification may also include multiple processors. Thus, operations and/or method steps performed by one processor as described herein may also be performed by multiple processors in combination or separately. For example, if in this specification the processor of computing device 700 performs steps a and B simultaneously, it should be understood that steps a and B may also be performed by two or more different processors in computing device 700 together or separately (e.g., a first processor performing step a, a second processor performing step B, or a combination of the first and second processors performing steps a and B).
Memory 720 may store data/information acquired from fluorescence microscope 610, storage device 630, terminal 640, and/or any other components of image processing system 600. In some embodiments, memory 720 may include a mass storage device, a removable storage device, a volatile read-write memory, a read-only memory (ROM), or the like, or a combination thereof. For example, mass storage devices may include magnetic disks, optical disks, solid state drives, and the like. Removable storage devices may include flash drives, floppy disks, optical disks, memory cards, compact disks, tape, and the like. Volatile read-write memory can include Random Access Memory (RAM). The RAM may include Dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR-SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero capacitance RAM (Z-RAM), and the like. The ROM may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, memory 720 may store one or more programs and/or instructions to perform the exemplary methods described in this specification.
Input/output 730 may input and/or output signals, data, information, etc. In some embodiments, input/output 730 may allow for user interaction with computing device 700 (e.g., processing device 620). In some embodiments, input/output 730 may include input devices and output devices. Examples of input devices may include a keyboard, mouse, touch screen, microphone, and the like, or a combination thereof. Examples of output devices may include a display device, speakers, a printer, a projector, etc., or a combination thereof. Examples of display devices may include Liquid Crystal Displays (LCDs), Light Emitting Diode (LED) based displays, flat panel displays, curved screens, television devices, Cathode Ray Tubes (CRTs), touch screens, and the like, or combinations thereof.
The communication ports 740 may be connected to a network (e.g., network 650) to facilitate data communication. Communication port 740 may establish a connection between computing device 700 (e.g., processing device 620) and one or more components of image processing system 600 (e.g., fluorescence microscope 610, storage device 630, and/or terminal 640). The connection may be a wired connection, a wireless connection, any other communication connection capable of data transmission and/or reception, and/or a combination of these connections. The wired connection may include, for example, electrical cable, optical cable, telephone line, etc., or a combination thereof. The wireless connection may include, for example, a Bluetooth™ connection, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee™ link, a mobile network link (e.g., 3G, 4G, 5G, etc.), or the like, or a combination thereof. In some embodiments, communication port 740 may be and/or include standardized communication ports, such as RS232, RS485, and the like. In some embodiments, communication port 740 may be a specially designed communication port.
Fig. 8 is an exemplary block diagram of an image processing system according to some embodiments of the present description.
As shown, the image processing system 800 may include an image data acquisition module 810, an initial image generation module 820, and a target image generation module 830.
The image data acquisition module 810 may acquire multi-dimensional image data generated by a fluorescence microscope. In some embodiments, the image data acquisition module 810 may acquire multi-dimensional image data from the storage device 630 or the fluorescence microscope 610. As used herein, multi-dimensional image data may refer to raw data (e.g., one or more raw three-dimensional images) collected by fluorescence microscope 610. For more description of acquiring multi-dimensional image data, see later in relation to step 910.
The initial image generation module 820 may generate an initial multi-dimensional image based on the multi-dimensional image data. In some embodiments, the initial image generation module 820 may generate the initial multi-dimensional image by performing, for example, a filtering operation on the multi-dimensional image data. For example only, the initial image generation module 820 may generate the initial multi-dimensional image by performing wiener filtering on one or more original three-dimensional images. For more description of generating the initial multi-dimensional image see later on as related to step 920.
The target image generating module 830 is configured to construct an objective function based on an image acquisition process and perform one or more rounds of iteration on the initial multi-dimensional image to generate a target image. The objective function includes a fidelity term associated with an imaging model of the fluorescence microscope and a regularization term calculated through the regularization network. For more description of generating the target image, see the description related to step 930 below.
It should be noted that the above descriptions of the modules of the image processing system 800 are for illustration purposes only and are not intended to limit the present invention. It will be apparent to those skilled in the art that these modules may be combined in various ways or connected with other modules as subsystems without departing from the principles of the invention. In some embodiments, one or more modules may be added or omitted from the image processing system 800. In some embodiments, one or more modules in image processing system 800 may be integrated into a single module to perform the functions of the one or more modules.
Fig. 9 is an exemplary flowchart of an image processing method according to some embodiments of the present description. In some embodiments, one or more steps in flow 900 may be performed by a processing device (e.g., processing device 920 or processor 710). For example, flow 900 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., storage device 930). In some embodiments, the processing device may execute the set of instructions to implement one or more steps in flow 900.
Step 910, multi-dimensional image data is acquired.
In some embodiments, the multi-dimensional image data may refer to raw data generated by a fluorescence microscope. For example, the multi-dimensional image data may comprise one or more raw three-dimensional images acquired by a 3D-SIM or image data comprising a spatial dimension, a temporal dimension, and a sample dimension. For more description of fluorescence microscopy see above (e.g., fig. 1 and 6).
Illustratively, taking 3D-SIM as an example, the multi-dimensional image data may include multiple sets of original three-dimensional images. Each acquired set may comprise a plurality of raw images (e.g., 9 or 15 raw images) corresponding to different phases and/or directions of a sinusoidal illumination pattern applied to an object (e.g., a cell sample); that is, the three-dimensional structured illumination microscope collects a plurality of raw three-dimensional images at different phases and/or directions, thereby producing the multi-dimensional image data. It should be noted that in some other embodiments, the multi-dimensional image data may also be generated by CSDM or FLFM.
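For concreteness, a minimal sketch of one possible array layout for such 3D-SIM raw data is given below; the axis ordering, image sizes, and counts are illustrative assumptions and are not prescribed by this specification.

```python
# Hypothetical layout of 3D-SIM multi-dimensional image data:
# 3 illumination directions x 5 phases = 15 raw volumes per time point.
import numpy as np

n_directions, n_phases = 3, 5        # sinusoidal illumination directions and phases
nz, ny, nx = 13, 512, 512            # axial slices and lateral pixels (example values)
n_timepoints = 10                    # temporal dimension

# time x direction x phase x z x y x x
raw_data = np.zeros((n_timepoints, n_directions, n_phases, nz, ny, nx), dtype=np.float32)
print(raw_data.shape)                # (10, 3, 5, 13, 512, 512)
```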
Step 920 generates an initial multi-dimensional image based on the multi-dimensional image data.
The initial multi-dimensional image is a three-dimensional image obtained by processing the multi-dimensional image data at least once (e.g., by reconstruction or filtering).
In some embodiments, the processing device may determine the initial multi-dimensional image by filtering the multi-dimensional image data. In some embodiments, the filtering may include wiener filtering, inverse filtering, least squares filtering, and the like, or any combination thereof.
Continuing with the foregoing 3D-SIM example, for each set of original three-dimensional images in the multi-dimensional image data, the processing device may perform Wiener filtering on each original three-dimensional image in the set separately and stack the results, thereby obtaining an initial multi-dimensional image. In particular, if each set includes 9 original three-dimensional images, the processing device may combine the 9 filtered three-dimensional images of the set into one initial multi-dimensional image. The initial multi-dimensional image may have a higher spatial resolution than the original three-dimensional images.
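As a rough illustration, the following sketch applies a frequency-domain Wiener-type filter to each raw volume of one set and combines the results. The otf argument, the constant regularization weight w, and the simple summation are assumptions made for brevity; they stand in for the full SIM reconstruction, which involves additional steps (e.g., band separation and frequency shifting).

```python
# A simplified, hypothetical sketch of forming an initial multi-dimensional image
# by Wiener-type filtering of raw 3D-SIM volumes; not the exact reconstruction
# described in this specification.
import numpy as np

def wiener_filter(raw: np.ndarray, otf: np.ndarray, w: float = 1e-2) -> np.ndarray:
    """Frequency-domain Wiener filter; raw and otf are assumed to have the same shape."""
    spectrum = np.fft.fftn(raw)
    filt = np.conj(otf) / (np.abs(otf) ** 2 + w)   # Wiener filter with constant weight w
    return np.real(np.fft.ifftn(spectrum * filt))

def initial_image(raw_set: np.ndarray, otf: np.ndarray) -> np.ndarray:
    """Combine the 9 (or 15) filtered raw volumes of one set; raw_set: (n_raw, z, y, x)."""
    filtered = [wiener_filter(volume, otf) for volume in raw_set]
    return np.sum(filtered, axis=0)                # crude combination into one volume
```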
Step 930, constructing an objective function based on the image acquisition process, and performing one or more rounds of iteration on the initial multidimensional image to generate a target image.
The image acquisition process reflects the imaging principle of the fluorescence microscope, which includes the imaging physical model of the fluorescence microscope and the imaging noise, as described in detail in the foregoing.
The regularization network may be a deep learning (DL) model trained on training samples. The training samples may include images with high signal-to-noise ratios. For the manner in which the training samples are obtained and the training of the regularization network, see the description related to fig. 10 below.
In some embodiments, the initial multi-dimensional image serves as the input of the iterative procedure: the fidelity term (or the partial derivative of the fidelity term) may be calculated from it based on the imaging model of the fluorescence microscope, and the regularization term may be calculated by inputting it into the deep-learning-based regularization network.
By using a deep-learning-based regularization network, the model can continuously learn the prior distribution of the high signal-to-noise-ratio images in the training samples and the image characteristics of the corresponding objects (such as spatio-temporal continuity and sparsity), thereby improving the quality of the image processing results.
The objective function is the mathematical expression of the goal to be reached by the optimization (i.e., the image processing) carried out with the regularization network.
In some embodiments, the objective function includes a fidelity term and a regularization term. The fidelity term is associated with the imaging model of the fluorescence microscope and represents the degree of consistency between the three-dimensional image and the imaging model inherent to the fluorescence microscope. As described above, the fidelity term may be determined based on the imaging principle of fluorescence microscope 610 in fig. 6; since the fluorescence microscope may be of various types, the inherent imaging principles and imaging models of different fluorescence microscopes may differ. For example, the imaging principle of fluorescence microscope 610 may correspond to any of a variety of fluorescence microscopes, including 3D-SIM, CSDM, or FLFM.
Regularization terms in the objective function are computed over a deep learning based regularization network. For more description of regularization terms and deep learning based regularization networks, see later description related to FIG. 10.
In some embodiments, in the process of generating the target image, one or more rounds of iteration may be performed to minimize the objective function; the iteration stops when the number of iterations reaches a preset number (e.g., 500 or 1000) or when the objective function obtained in the current iteration satisfies a preset condition (e.g., the function converges or the function value is smaller than a preset threshold).
In some embodiments, the objective function may be similar to equation (38), i.e., it may be expressed as:

$$f_{\mathrm{Hybrid}} = \arg\min_{f}\left[\mathcal{D}\big(f,\hat{f}\big) + \mathcal{R}(f,\theta)\right]$$

where $f_{\mathrm{Hybrid}}$ denotes the iteratively generated target image, $f$ denotes the three-dimensional image being optimized (initialized with the initial multi-dimensional image), $\mathcal{D}\big(f,\hat{f}\big)$ denotes the fidelity term, $\hat{f}$ denotes the conventional restored image, $\mathcal{R}(f,\theta)$ denotes the regularization term, and $\theta$ denotes the weight parameters of the deep-learning-based regularization network. The conventional restored image is an image restored according to the imaging principle and inherent physical model of the fluorescence microscope; it reflects what a restoration based only on the microscope's physical model would yield.
In some embodiments, weighting coefficients may be added to the fidelity term and/or regularization term to reflect the emphasis on the fidelity term and/or regularization term in the result. The present specification does not set any limitation on the manner of weighting.
In some embodiments, one or more iterations of the optimization algorithm may be employed to minimize the objective function and thereby adjust the three-dimensional image. By way of example, the optimization algorithm may include a Direct Fourier Transform (DFT) algorithm, a Filtered Back Projection (FBP) algorithm, an Algebraic Reconstruction Technique (ART), a Synchronous Iterative Reconstruction Technique (SIRT), a Maximum Entropy (ME) method, and the like. In some embodiments, at least one of the fidelity term and the regularization term may be determined based on an optimization algorithm.
It should be noted that the above description of the process 900 is for illustration and description only, and is not intended to limit the scope of the application of the present disclosure. Various modifications and changes to flow 900 will be apparent to those skilled in the art in light of the present description. However, such modifications and variations are still within the scope of the present description.
Referring to fig. 1b, the regularization network may comprise a multi-scale convolutional neural network. In some embodiments, the regularization network may include a first convolution layer, a multi-scale convolutional neural network, a second convolution layer, a potential function, and an all-ones convolution layer (a convolution layer whose kernel elements are all 1) connected in sequence. In some embodiments, the regularization network may further comprise an activation function (activation layer). By providing a multi-scale convolutional neural network, the spatial resolution of the three-dimensional image input into the regularization network (e.g., the initial multi-dimensional image or the output image of the previous iteration) may vary, without an additional normalization operation to unify the spatial resolution of the inputs.
Continuing with the 3D-SIM example above: compared with a two-dimensional image, which contains only in-plane information along the x-axis and y-axis directions, a three-dimensional image contains additional information along the z-axis direction. The multi-scale convolutional neural network therefore makes the recovered three-dimensional image more axially continuous (i.e., more features are shared between adjacent two-dimensional layers), and the fidelity of the recovery of each layer is also higher. In some embodiments, the multi-scale convolutional neural network of the regularization network may include convolution kernels with one or more different receptive fields, which are used to extract features across the layers of the initial multi-dimensional image and obtain a feature map whose spatial resolution is lower than that of the input multi-dimensional image.
In some embodiments, one or more convolutions may be applied to the input multi-dimensional image or feature map using convolution kernels with different receptive fields. By way of example, assuming that the input to the multi-scale convolutional neural network is a three-dimensional image, the sizes and/or numbers of the convolution kernels with different receptive fields may differ (e.g., 3 × 3, 5 × 5, etc.), depending on the spatial resolution of the three-dimensional image and the structure of the multi-scale convolutional neural network.
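A minimal PyTorch-style sketch of one way such a regularization network could be arranged is shown below. The channel counts, kernel sizes, ReLU activations, the log(1 + x²) potential function, and the 1 × 1 × 1 all-ones convolution are all illustrative assumptions; this specification does not fix these details.

```python
# A minimal PyTorch sketch of a regularization network with the structure described
# above: first conv -> multi-scale CNN -> second conv -> potential function -> all-ones conv.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 3-D convolutions with different receptive fields, fused by a 1x1x1 conv."""
    def __init__(self, ch: int):
        super().__init__()
        self.branch3 = nn.Conv3d(ch, ch, kernel_size=3, padding=1)   # small receptive field
        self.branch5 = nn.Conv3d(ch, ch, kernel_size=5, padding=2)   # larger receptive field
        self.act = nn.ReLU(inplace=True)
        self.fuse = nn.Conv3d(2 * ch, ch, kernel_size=1)

    def forward(self, x):
        y = torch.cat([self.act(self.branch3(x)), self.act(self.branch5(x))], dim=1)
        return self.fuse(y)

class RegularizationNetwork(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.first_conv = nn.Conv3d(1, ch, kernel_size=3, padding=1)
        self.multi_scale = MultiScaleBlock(ch)
        self.second_conv = nn.Conv3d(ch, ch, kernel_size=3, padding=1)
        self.ones_conv = nn.Conv3d(ch, 1, kernel_size=1, bias=False)
        nn.init.constant_(self.ones_conv.weight, 1.0)                # "all-ones" convolution

    def potential(self, x):
        return torch.log(1.0 + x ** 2)                               # assumed smooth potential

    def forward(self, f):
        z = self.second_conv(self.multi_scale(self.first_conv(f)))
        return self.ones_conv(self.potential(z))                     # regularization map
```

In this sketch the network maps a single-channel volume to a single-channel regularization map; in use it is embedded in the iterative restoration rather than applied as a stand-alone denoiser.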
In some embodiments, each round of iteration may include: calculating the partial derivative of the fidelity term based on the output image of the previous iteration and the conventional restored image; calculating the partial derivative of the regularization term based on the output image of the previous iteration; and determining the output image of the current round based on the partial derivative of the fidelity term, the partial derivative of the regularization term, and the output image of the previous round of iteration.
Continuing with the example of the objective function above, in some embodiments, assuming the current iteration is the (T+1)-th round, each of the multiple iterations on the initial multi-dimensional image may be represented as:

$$f^{T+1} = f^{T} - \left( \frac{\partial \mathcal{D}\big(f,\hat{f}\big)}{\partial f}\bigg|_{f=f^{T}} + \frac{\partial \mathcal{R}(f,\theta)}{\partial f}\bigg|_{f=f^{T}} \right)$$

where $f^{T}$ is the three-dimensional image output by the T-th (i.e., previous) round of iteration, $f^{T+1}$ is the three-dimensional image obtained in the current round, $\frac{\partial \mathcal{D}}{\partial f}$ denotes the partial derivative of the fidelity term, and $\frac{\partial \mathcal{R}}{\partial f}$ denotes the partial derivative of the regularization term.
Since the output image $f^{T}$ of the previous iteration and the conventional restored image $\hat{f}$ are known, the calculation of the partial derivative of the fidelity term may include: calculating the conventional restored image based on the imaging model of the fluorescence microscope; and calculating the partial derivative of the fidelity term based on the conventional restored image, the output image of the previous iteration, and a preset relationship.
The preset relationship is related to the noise of the imaging physical model of the fluorescence microscope (e.g., of 3D-SIM, CSDM, WFM, or FLFM) and describes the relationship between the conventional restored image and the output image of the previous iteration. For further description of calculating the partial derivative of the fidelity term based on the conventional restored image, the output image of the previous iteration, and the preset relationship, see the related descriptions of formulas (37) and (40) above, which are not repeated here.
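The sketch below shows one round of this update under simple assumptions: a quadratic fidelity term whose partial derivative is the difference between the previous output and the conventional restored image, a fixed step size, and a caller-supplied function reg_grad_fn for the partial derivative of the regularization term (one way to obtain it is sketched after the network-inversion discussion below). The actual preset relationship of formulas (37) and (40) depends on the noise model and is not reproduced here.

```python
# A hedged sketch of one round of the update f_{T+1} = f_T - step * (dD/df + dR/df),
# plus a simple outer loop with the stopping conditions described above.
import torch

def iterate_once(f_prev: torch.Tensor,
                 f_conventional: torch.Tensor,
                 reg_grad_fn,
                 step: float = 0.1) -> torch.Tensor:
    fidelity_grad = f_prev - f_conventional      # assumed quadratic fidelity term
    reg_grad = reg_grad_fn(f_prev)               # partial derivative of the regularization term
    return f_prev - step * (fidelity_grad + reg_grad)

def restore(f_init, f_conventional, reg_grad_fn, max_rounds: int = 500, tol: float = 1e-6):
    f = f_init
    for _ in range(max_rounds):                  # stop at a preset number of rounds...
        f_next = iterate_once(f, f_conventional, reg_grad_fn)
        if torch.norm(f_next - f) < tol:         # ...or when the iteration has converged
            return f_next
        f = f_next
    return f
```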
In some embodiments, the method of computing the partial derivative of the regularization term based on the output image of the previous iteration may further comprise: inverting the deep-learning-based regularization network, and calculating the partial derivative of the regularization term based on the inverted regularization network.
Specifically, in some embodiments, an all-ones matrix (a matrix in which the values of all elements are 1) may be input to the inverted regularization network; the output of the inverted regularization network is then the partial derivative of the regularization term.
In some embodiments, inverting the regularized network comprises: converting the convolution layers in the regularized network into transposed convolution layers having the same convolution kernel, converting the activation function in the regularized network into a gradient of the activation function, converting the potential function in the regularized network into a gradient of the potential function, and converting the blocks in the regularized network into transposed blocks.
As before, the regularization network comprises a first convolution layer, a multi-scale convolutional neural network, a second convolution layer, a potential function, and an all-ones convolution layer connected in sequence; the inverted regularization network comprises the gradient of the potential function, the transposed second convolution layer, a multi-scale transposed convolutional neural network, and the transposed first convolution layer connected in sequence. The multi-scale transposed convolutional neural network is obtained by converting all blocks in the multi-scale convolutional neural network into transposed blocks, and its propagation direction is opposite to that of the multi-scale convolutional neural network.
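As a sketch of this step, the vector–Jacobian product computed by automatic differentiation with an all-ones tensor plays the same role as feeding an all-ones matrix into the hand-built inverted network described above; the snippet below uses PyTorch autograd for brevity rather than explicitly constructing the transposed layers.

```python
# Partial derivative of the regularization term via an all-ones backward pass.
import torch

def regularization_gradient(reg_net: torch.nn.Module, f_prev: torch.Tensor) -> torch.Tensor:
    f = f_prev.detach().requires_grad_(True)
    r = reg_net(f)                                # forward pass through the regularization network
    ones = torch.ones_like(r)                     # the "all-ones matrix"
    (grad,) = torch.autograd.grad(r, f, grad_outputs=ones)
    return grad
```

This function can serve as the reg_grad_fn assumed in the iteration sketch above.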
After the partial derivative of the regularization term is calculated through the inverted regularization network, the output image of the current round of iteration is determined based on formula (43). If the current iteration is the last iteration, or the output image of the current iteration meets the preset condition, the output image of the current iteration may be used as the target image.
In some embodiments, an initial regularization network may be trained with training samples to obtain the trained regularization network. Referring also to fig. 10, a training process 1000 of the regularization network may include: acquiring a reference image 1010 collected by a fluorescence microscope under conditions of high illumination laser intensity and long exposure time; obtaining a plurality of sample three-dimensional images 1020 by superimposing noise onto the reference image 1010; and training the initial regularization network with the plurality of sample three-dimensional images 1020 as training samples and the corresponding reference images 1010 as labels, adjusting the network model parameters to obtain the trained regularization network. In some embodiments, the noise may be related to the imaging model of the fluorescence microscope. In some embodiments, the noise may be Gaussian noise. Adding noise to the reference image 1010 lowers the image signal-to-noise ratio and simulates images obtained under poor imaging conditions, so that the trained regularization network acquires the ability to identify and remove noise from images while preserving their important features and details.
In some embodiments, the reference image 1010 may be a gold-standard image of the observed object acquired by the fluorescence microscope. The gold-standard image accurately reflects the characteristics of the observed object, but it is acquired under high illumination laser intensity and long exposure time, which may cause morphological changes or apoptosis of the observed object; gold-standard images therefore cannot be used in ordinary cell observation tasks. In ordinary cell observation tasks, images acquired at lower illumination laser intensity and shorter exposure time are used to preserve cell viability, but because of the lower laser intensity and exposure time, the signal-to-noise ratio of these images is also significantly lower than that of the gold-standard images. In some embodiments of the present disclosure, the viability of a small portion of cells is sacrificed to obtain high signal-to-noise-ratio reference images 1010 as training data for the regularization network, so that the trained model can be used to obtain three-dimensional images with higher signal-to-noise ratios in cell observation tasks of the same type as the sacrificed cells.
When the same fluorescence microscope is used, the reference image 1010 only needs to be acquired once for a given type of cell observation task, and the model trained once. When other types of cells need to be observed, the training process 1000 described above may be re-performed with reference images 1010 corresponding to those cell types to obtain regularization networks corresponding to them.
In some embodiments, for a given reference image 1010, multiple different noises (e.g., with different noise distributions and/or different noise intensities) may be superimposed on the reference image 1010, resulting in multiple sample three-dimensional images 1020. By superimposing noise on the acquired reference images 1010, many different sample three-dimensional images 1020 can be obtained quickly.
In some embodiments, for each of the plurality of sample three-dimensional images 1020, the corresponding reference image 1010 may be used as its label, i.e., the reference image 1010 before noise was added to form the sample three-dimensional image 1020 serves as the label of that sample, so that the higher-SNR reference image 1010 and the lower-SNR sample three-dimensional image 1020 form a high-to-low signal-to-noise-ratio sample pair. In some embodiments, random cropping, rotation, and/or flipping may also be applied to the labeled training samples to further expand the training set.
In some embodiments, training the initial regularization network and adjusting the network model parameters to obtain the trained regularization network may include: inputting a plurality of labeled training samples into the initial regularization network; constructing a loss function from the labels and the result image 1030 output by the initial regularization network after multiple iterations; iteratively updating the parameters of the initial regularization network based on the loss function by gradient descent or other methods (e.g., an optimizer); and completing model training when a preset condition is met, thereby obtaining the trained regularization network. The preset condition may be that the loss function converges, that the number of iterations reaches a threshold, etc.
In some embodiments, the updated parameters of the initial regularization network include at least the weight parameters of the multi-scale convolutional neural network. In some embodiments, the weight parameters of the multi-scale convolutional neural network may be updated by an error back-propagation algorithm. This specification does not set any limit on the model training method.
In some embodiments, the loss function may include a mean square error (MSE) term and a structural similarity (SSIM) term; for more details on the loss function, see the description of equation (1) above. It should be noted that in some other embodiments, other loss functions may be used according to actual needs, and this specification is not limited in this respect.
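A hedged sketch of such a training loop is given below. The Gaussian noise model, the Adam optimizer, the learning rate, and the ssim() helper passed in by the caller are assumptions for illustration; in this specification the result image 1030 is produced after multiple iterations of the restoration procedure with the regularization network embedded in it, which the stand-in model below abbreviates.

```python
# Hypothetical training sketch: synthesize noisy samples from high-SNR reference
# images and train with an MSE + SSIM-style loss.
import torch
import torch.nn.functional as F

def make_training_pair(reference: torch.Tensor, sigma: float = 0.05):
    noisy = reference + sigma * torch.randn_like(reference)   # superimpose Gaussian noise
    return noisy, reference                                    # (sample 1020, label 1010)

def train(model: torch.nn.Module, references, ssim, epochs: int = 100, lr: float = 1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for ref in references:                                 # each high-SNR reference image
            sample, label = make_training_pair(ref)
            result = model(sample)                             # stand-in for result image 1030
            loss = F.mse_loss(result, label) + (1.0 - ssim(result, label))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```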
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements, and adaptations of the present disclosure may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and are therefore intended to fall within the spirit and scope of its exemplary embodiments.
Meanwhile, the specification uses specific words to describe the embodiments of the specification. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the present description. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present description may be combined as suitable.
Furthermore, the order in which the elements and sequences are processed, and the use of numbers, letters, or other designations in this specification, are not intended to limit the order of the processes and methods of this specification unless explicitly recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure by way of various examples, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments; on the contrary, they are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of this disclosure. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Likewise, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid the understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure does not imply that the subject matter of this specification requires more features than are recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are, in some examples, modified by the qualifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for a variation of 20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving method. Although in some embodiments the numerical ranges and parameters defining the breadth of the ranges are approximations, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, documents, and the like, cited in this specification is hereby incorporated by reference in its entirety, except for any application history documents that are inconsistent with or conflict with the content of this specification, and except for any documents (currently or later attached to this specification) that limit the broadest scope of the claims of this specification. It is noted that if the description, definition, and/or use of a term in material attached to this specification is inconsistent with or conflicts with what is described in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.

Claims (12)

1. An image processing method, comprising:
Acquiring multi-dimensional image data, the image data being generated by a fluorescence microscope;
Generating an initial multi-dimensional image based on the multi-dimensional image data;
constructing an objective function based on an image acquisition process, and carrying out one or more rounds of iteration on the initial multidimensional image to generate an objective image; wherein:
the objective function includes a fidelity term and a regularization term,
The fidelity term is related to an imaging model of the fluorescence microscope,
The regularization term is calculated through a regularization network.
2. The method of claim 1, wherein:
The regularized network comprises a trainable multi-scale convolutional neural network;
The multi-scale convolutional neural network includes convolution kernels of different receptive fields that, when trained, are used to extract features between the multi-layer images in the initial multi-dimensional image.
3. The method of claim 1, wherein the fidelity term relates to a traditional restored image calculated based on an imaging physical model of the fluorescence microscope.
4. A method as claimed in claim 3, each of the one or more iterations comprising:
Calculating the partial derivative of the fidelity term based on the traditional restored image and the output image of the previous iteration;
Calculating the partial derivative of the regularization term based on the output image of the previous iteration;
And determining an output image of the current round based on the partial derivative of the fidelity term, the partial derivative of the regularization term, and the output image of the previous round of iteration.
5. The method of claim 4, wherein the calculating the partial derivative of the fidelity term comprises:
Calculating the traditional restored image based on an imaging model of the fluorescence microscope;
Calculating the partial derivative of the fidelity term based on the traditional restored image, the output image of the previous iteration, and a preset relationship;
wherein the preset relationship is related to noise of an imaging physical model of the fluorescence microscope.
6. The method of claim 4, the computing the partial derivatives of the regularization term based on the output image of the previous iteration, comprising:
Inverting the regularization network, and calculating the partial derivative of the regularization term based on the inverted regularization network.
7. The method of claim 6, wherein the reversing comprises:
Converting the convolution layers in the regularization network into transposed convolution layers having the same convolution kernel, converting an activation function in the regularization network into a gradient of an activation function, converting a potential function in the regularization network into a gradient of a potential function, and converting a block in the regularization network into a transposed block.
8. The method of claim 1, the training process of the regularized network comprising:
Acquiring a reference image collected by the fluorescence microscope under conditions of high illumination laser intensity and long exposure time;
obtaining a plurality of sample multi-dimensional images by superimposing noise onto the reference image;
And training an initial regularization network by taking the plurality of sample multi-dimensional images as training samples and the corresponding reference images as labels, and adjusting network model parameters to obtain the trained regularization network.
9. The method of claim 1, wherein the fluorescence microscope comprises any one of: three-dimensional structure illumination microscope, turret confocal microscope, wide-field microscope, and fourier light field microscope.
10. An image processing system, comprising:
an image data acquisition module for acquiring multi-dimensional image data, the image data being generated by a fluorescence microscope;
an initial image generation module for generating an initial multi-dimensional image based on the multi-dimensional image data;
The target image generation module is used for constructing a target function based on an image acquisition process, and carrying out one or more rounds of iteration on the initial multidimensional image to generate a target image; wherein: the objective function comprises a fidelity term and a regularization term, wherein the fidelity term is related to an imaging model of the fluorescence microscope, and the regularization term is obtained through calculation through a regularization network.
11. An image processing apparatus comprising a processor for performing the image processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium storing computer instructions, wherein when a computer reads the computer instructions in the storage medium, the computer performs the image processing method according to any one of claims 1 to 9.