CN114529476A - Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network

Info

Publication number: CN114529476A
Application number: CN202210177683.2A
Authority: CN (China)
Prior art keywords: phase recovery, decoupling network, fusion network
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114529476B (granted publication)
Inventors: 牛毅, 杨书印, 马明明, 李甫, 石光明
Original and current assignee: Xidian University
Application filed by Xidian University; priority to CN202210177683.2A; publication of CN114529476A; application granted and published as CN114529476B.

Classifications

    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T2207/10056 Microscopic image
    • G06T2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30028 Colon; Small intestine

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the fields of computer vision and microscopic imaging, and provides a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network. The method comprises the following steps: S1, measuring the parameters of the lensless holographic microscopic imaging system to be recovered; S2, acquiring samples and constructing a training sample set and a test sample set; S3, constructing a phase recovery network; S4, training the constructed phase recovery network; and S5, performing phase recovery to solve for the complex amplitude. The method uses a decoupling network to decouple two-channel complex-matrix information from each single-channel hologram intensity image, uses a fusion network to fuse the information collected over multiple frames, and combines both with a Fresnel diffraction physical model, so the learning target is clear and the network is highly interpretable. The reconstruction accuracy is high, and the reconstructed phase-contrast and amplitude images have good visual quality. The phase recovery network also requires fewer gradient-descent rounds, giving a higher phase recovery speed.

Description

Phase recovery method for lensless holographic microscopic imaging based on a decoupling-fusion network

Technical Field

The present application relates to the fields of computer vision and microscopic imaging, and in particular to a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, which can be used for amplitude and phase-contrast microscopic imaging in in-line (coaxial) holographic lensless microscopic imaging systems.

Background

In medical and biological research, the microscope is a common tool that plays an extremely important role in observing pathological sections, microbial structures, and the like. However, because of their structural characteristics and imaging principles, traditional microscopes suffer from problems such as the incompatibility between field of view and magnification and the aberration and chromatic distortion of lenses; they cannot record sample depth information, and they image translucent phase objects poorly. Lensless holographic microscopic imaging is a technique based on the principle of optical holography: a photoelectric sensor such as a CMOS or CCD is placed close to the sample plane, intensity images of the sample are collected in a coherent or partially coherent light field, and an algorithm recovers the complete wavefront information at the sample plane.

In a lensless holographic microscopic imaging system, the sample plane is placed close to the CMOS or CCD sensor plane; the axial distance between the two planes is called the defocus distance and is usually between a few hundred micrometers and a few millimeters. A fully or partially coherent light source illuminates the sample plane so that it is projected onto the sensor plane. In the near-field region this imaging process is approximated by the Fresnel diffraction model, and the intensity image recorded by the sensor is the interference hologram of the sample in the light field. From one or more collected holograms, a phase recovery algorithm can solve for the complex wavefront amplitude at the sample plane, which contains both amplitude and phase, thereby realizing amplitude or phase-contrast imaging of the sample plane. Compared with a traditional optical microscope, a lensless microscopic imaging system has a large field of view, a simple structure, and no lens distortion or aberration; furthermore, depth information can be computed from the recovered phase to enable three-dimensional imaging.

In a lensless holographic microscopic imaging system, the common methods for solving the phase from the collected hologram intensity images fall into two categories: traditional iterative phase recovery methods and neural-network-based phase recovery methods. Among the traditional iterative methods is the G-S phase iteration algorithm proposed by Gerchberg and Saxton in their 1972 article "A practical algorithm for the determination of the phase from image and diffraction plane pictures". This method randomly generates an initial phase, combines it with the intensity image collected at the sample plane into a complex amplitude, iteratively projects the complex amplitude between the sample plane and the imaging plane, replaces the amplitude with the known intensity images on the two planes, and corrects the phase information during the iterations until a termination condition is met. With this method, phase information meeting a given error requirement can be obtained. Most later iterative phase recovery algorithms are improvements on the G-S algorithm. Such iterative algorithms fall into local optima after several rounds of iteration, so the error decreases slowly, and they inevitably require a large amount of computation at a high time cost.
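For concreteness, the projection loop described above can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code; the angular-spectrum propagator, the unit-amplitude sample-plane constraint, and all names are assumptions:

```python
import numpy as np

def propagate(u, wavelength, dz, pixel):
    """Angular-spectrum propagation of a complex field u over distance dz
    (dz < 0 propagates back towards the sample plane)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)          # frequency-domain coordinates
    fy = fx[:, None]
    k = 2 * np.pi / wavelength
    arg = 1 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    h = np.exp(1j * k * dz * np.sqrt(np.maximum(arg, 0.0)))  # transfer function
    return np.fft.ifft2(np.fft.fft2(u) * h)

def gs_phase_retrieval(hologram, wavelength, dz, pixel, n_iter=300):
    """Iteratively project between the sensor and sample planes, replacing the
    amplitude with the measured one at the sensor on every round."""
    amp = np.sqrt(hologram)                  # measured sensor-plane amplitude
    u = amp.astype(complex)                  # zero initial phase (could be random)
    u0 = u
    for _ in range(n_iter):
        u0 = propagate(u, wavelength, -dz, pixel)   # back to the sample plane
        u0 = np.exp(1j * np.angle(u0))              # assumed pure-phase constraint
        u = propagate(u0, wavelength, dz, pixel)    # forward to the sensor plane
        u = amp * np.exp(1j * np.angle(u))          # amplitude replacement
    return u0                                # recovered sample-plane field
```

The amplitude-replacement step is what makes the iteration converge slowly once it nears a local optimum, which is the weakness the paragraph above describes.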

As deep learning has made remarkable progress in image restoration, image reconstruction, and other image problems, many researchers have begun to explore applying neural networks to the phase recovery problem in order to overcome the difficulties faced by traditional methods. At present, typical deep-learning-based phase recovery methods include the following. Yichen Wu et al. published an article entitled "Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery" in the journal Optica in 2018, disclosing a deep-learning-based phase recovery method that uses a U-shaped network structure to achieve end-to-end phase recovery. The method is trained on a large data set with ground-truth labels and learns the mapping from the intensity image of a single-frame hologram collected at a fixed defocus distance to the complex amplitude at the sample plane. Its advantage is that, in application, the defocus distance from the sample plane to the imaging plane does not need to be obtained with high precision, because the method is robust within a certain range of defocus distances. However, this method requires a large amount of labeled data for training, so the effort of building the data set is enormous, and the interpretability of the network is poor; moreover, its reconstruction performance on experimental data whose style departs from the data set is unsatisfactory, so its generalization is limited.

Fei Wang et al. published an article entitled "Phase imaging with an untrained neural network" in the journal Light: Science & Applications in 2020, disclosing a phase recovery method that combines a physical model with a deep neural network. The method uses an encoder-decoder structure: the encoder is a U-shaped network that generates a prediction of the sought complex amplitude, and the decoder uses the known Fresnel diffraction physical model to propagate the predicted complex amplitude forward to the imaging plane and compute its intensity. The pixel-value error against the intensity image actually collected on the imaging plane is then computed, and gradient descent is used to optimize the network parameters, realizing self-supervised learning without ground-truth labels. In addition, this deep learning method requires no training set: the handcrafted prior comes from the network structure design rather than from a learned prior. At run time, a single-frame hologram is input and the model parameters are trained in a self-supervised process to reduce the loss function; the output of the encoder network is finally taken as the prediction of the complex amplitude. This non-end-to-end structure likewise involves a large number of iterations when solving for the phase, so the time cost is high. Moreover, both this method and that of Yichen Wu et al. use only one defocused hologram as a constraint, so the reconstruction error is high and the visual quality is poor.

In summary, existing phase recovery methods suffer from poor interpretability, high time cost, and low image reconstruction accuracy.

Summary of the Invention

The purpose of the present invention is, in view of the deficiencies of the prior art described above, to provide a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, so as to solve the problems of poor interpretability, high time cost, and low image reconstruction accuracy in existing phase recovery methods.

The core technical idea of the phase recovery method of the present invention is as follows: multi-frame holograms collected at different defocus distances are used as the input of the phase recovery network; the output of the decoupling-fusion network is taken as the predicted complex amplitude, which is propagated forward to each defocus plane with the known Fresnel diffraction model; the mean absolute error against the intensity image actually collected at each defocus plane is computed; the mean absolute errors over all defocus planes are summed as the total loss function; and gradient descent is used to update the phase recovery network parameters. Training of the phase recovery network is self-supervised, so no ground-truth complex-amplitude labels are needed. The present invention pre-trains on a small collected or simulated data set and saves the parameters as good initial values, so solving an actual phase recovery problem does not require a long gradient-descent process starting from random parameters; convergence is faster than for an untrained generative network, and the method needs less time at a lower cost. Because the convolutional neural networks in the proposed decoupling-fusion network have clearly defined tasks, they learn simple nonlinear mappings rather than the complex inverse image-reconstruction process, which improves the generalization and interpretability of the phase recovery network. In addition, the method replaces the single-frame constraint with multi-frame constraints, which lowers the reconstruction error and improves the visual quality of amplitude and phase-contrast imaging; the reconstruction accuracy of the method is therefore higher, and the reconstructed images look better.

Specifically, the technical scheme adopted by the present invention is as follows:

The present application provides a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, comprising the following steps: S1, measuring the parameters of the lensless holographic microscopic imaging system to be recovered; S2, acquiring samples and constructing a training sample set and a test sample set; S3, constructing a phase recovery network; S4, training the constructed phase recovery network; S5, performing phase recovery to solve for the complex amplitude.

Further, the parameters in step S1 include the center wavelength of the coherent light source, the pixel size of the photoelectric sensor, and the defocus distances from the defocus planes to the sample plane.

Further, the samples in step S2 are intensity images of resolution N×N of the M samples collected at s defocus planes in the lensless holographic microscopic imaging system to be recovered, giving M groups of intensity-image data, M×s intensity images in total, where M ≥ 300, s ≥ 2, and N = 768.

Further, the training sample set and the test sample set in step S2 are obtained by dividing the M groups of intensity-image data at a ratio of 9:1.

Further, the phase recovery network in step S3 includes a decoupling network, a back-propagation Fresnel diffraction layer, a fusion network, a forward-propagation Fresnel diffraction layer, and an intensity extraction layer.

Further, the decoupling network includes four down-sampling modules, four up-sampling modules, and two ordinary convolution modules.

Further, each down-sampling module includes three convolutional layers and one max-pooling layer used for down-sampling, with ReLU activation, and each up-sampling module includes two convolutional layers and one deconvolution layer used for up-sampling, with ReLU activation.

Further, the fusion network includes four down-sampling modules, four up-sampling modules, and two ordinary convolution modules.

Further, the back-propagation Fresnel diffraction layer includes a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator.

Further, the forward-propagation Fresnel diffraction layer includes a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator.

Compared with the prior art, the present invention has the following beneficial effects:

(1) The present invention uses a decoupling network to decouple two-channel complex-matrix information from each single-channel hologram intensity image, uses a fusion network to fuse the information collected over multiple frames, and combines them with the Fresnel diffraction physical model; the learning target of the network is clear and its interpretability is strong.

(2) The present invention extracts information from multi-frame intensity images collected at different defocus distances and uses them as loss-function constraints; the recovered phase values are closer to the true values than those of single-frame recovery methods, the reconstruction accuracy is higher, and the reconstructed phase-contrast and amplitude images have good visual quality.

(3) The phase recovery network used in the present invention is trained on a small data set, and its parameters are saved as the initial values for gradient descent; compared with traditional untrained deep learning methods, fewer gradient-descent rounds are needed, and the learned prior knowledge increases the phase recovery speed.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network provided by the present invention;

Fig. 2 is a schematic diagram of the phase recovery network constructed in step S3 of the method provided by the present invention;

Fig. 3 shows intensity images of a USAF1951 resolution target collected at different defocus distances, where Figs. 3(a), 3(b), 3(c), and 3(d) correspond to defocus distances of 0.710 mm, 1.185 mm, 1.685 mm, and 2.178 mm, respectively;

Fig. 4 shows intensity images of a small-intestine epithelial cell section collected at different defocus distances, where Figs. 4(a), 4(b), 4(c), and 4(d) correspond to defocus distances of 0.865 mm, 1.305 mm, 1.804 mm, and 2.304 mm, respectively;

Fig. 5 shows the phase recovery results of the G-S iterative algorithm, where Figs. 5(a), 5(b), 5(c), and 5(d) are, after 300 iterations, the amplitude imaging result for the USAF1951 resolution target, the phase-contrast imaging result for the USAF1951 resolution target, the amplitude imaging result for the small-intestine epithelial cell section, and the phase-contrast imaging result for the small-intestine epithelial cell section, respectively;

Fig. 6 shows the phase recovery results of the deep learning algorithm proposed by Fei Wang et al., where Figs. 6(a), 6(b), 6(c), and 6(d) are, after 300 rounds of gradient descent, the amplitude imaging result for the USAF1951 resolution target, the phase-contrast imaging result for the USAF1951 resolution target, the amplitude imaging result for the small-intestine epithelial cell section, and the phase-contrast imaging result for the small-intestine epithelial cell section, respectively;

Fig. 7 shows the phase recovery results of the method of the present invention, where Figs. 7(a), 7(b), 7(c), and 7(d) are, after 100 rounds of gradient descent, the amplitude imaging result for the USAF1951 resolution target, the phase-contrast imaging result for the USAF1951 resolution target, the amplitude imaging result for the small-intestine epithelial cell section, and the phase-contrast imaging result for the small-intestine epithelial cell section, respectively.

Detailed Description

To make the implementation of the present invention clearer, a detailed description is given below with reference to the accompanying drawings.

The present invention provides a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, as shown in Fig. 1; the specific steps are as follows:

S1, measure the parameters of the lensless holographic microscopic imaging system to be recovered.

Lensless holographic microscopic imaging is based on the principle of in-line (coaxial) holography: a photoelectric sensor such as a CMOS or CCD is placed close to the sample plane, and intensity images of the sample, which are the holograms, are collected in a coherent or partially coherent light field; an algorithm then recovers the complete wavefront information at the sample plane. Specifically, the center wavelength of the coherent light source, the pixel size of the photoelectric sensor, and the defocus distances from the defocus planes to the sample plane are all parameters of the transfer function. To obtain these parameters, measure the center wavelength λ of the coherent light source, obtain the pixel size p of the photoelectric sensor (a CMOS sensor in this embodiment), and measure the s defocus distances {L_r | r ∈ 1,2,…,s} from any s defocus planes to the sample plane. Since Fresnel diffraction occurs in the near-field region, the defocus distance is 0.1 mm to 3 mm, so that the Fresnel diffraction formula can approximate the optical propagation process. One can fix the sample (i.e., fix the sample plane) and move the photoelectric sensor (i.e., move the defocus plane), or fix the sensor and move the sample; the present invention is described with a system that fixes the sample and moves the sensor, which helps keep the sample stable.

S2, acquire samples and construct the training sample set and the test sample set.

The samples of this embodiment are intensity images of resolution N×N of M samples collected at s defocus planes in the lensless holographic microscopic imaging system, giving M groups of intensity-image data, M×s intensity images in total. M ≥ 300 ensures that the training data set is sufficient; s ≥ 2, i.e., at least two frames, ensures that multi-frame reconstruction is possible; N = 768 fits the video memory of this embodiment. This embodiment uses images in bmp format, and the network structure is fairly large: for a graphics card with 12 GB of video memory, images of 768×768 resolution are suitable, while for cards with less or more memory the value of N needs to be adjusted. In addition, parameters trained on the 768×768 training sample set can also recover inputs of other resolutions; in practice, images of 512×512 or 1024×1024 resolution can be fed into the trained phase recovery network, because the convolution operation of the neural network is a sliding 3×3 kernel, so no particular input resolution is enforced. The M groups of intensity-image data are randomly divided into a training sample set and a test sample set at a ratio of 9:1, i.e., 9M/10 of the groups form the training sample set and the remaining M/10 groups form the test sample set.
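A minimal sketch of the 9:1 random split (the function and variable names are illustrative assumptions, not from the patent):

```python
import random

def split_groups(groups, train_ratio=0.9, seed=0):
    """Randomly split the M groups (each a set of s co-registered holograms)
    into a training set and a test set at a 9:1 ratio."""
    groups = list(groups)
    random.Random(seed).shuffle(groups)
    n_train = int(len(groups) * train_ratio)
    return groups[:n_train], groups[n_train:]
```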

S3, construct the phase recovery network.

As shown in Fig. 2, the present invention replaces the nonlinear-mapping part of the image-reconstruction inverse problem with a phase recovery network. The phase recovery network takes multi-frame intensity images collected at different defocus distances as input. The decoupling network maps each collected single-channel intensity image to a two-channel complex matrix, which is back-propagated to the sample plane through the Fresnel diffraction model. The fusion network fuses the information of the several complex matrices and outputs the predicted complex amplitude. At the loss-function end, the predicted complex amplitude is propagated forward through the known Fresnel diffraction model onto the acquisition plane at each defocus distance, the mean absolute error against the intensity image actually collected on the corresponding plane is computed, and the parameters of the phase recovery network are optimized by gradient descent through back-propagation of the gradients. In Fig. 2 there are s copies each of the intensity extraction layer, the reconstruction loss, the decoupling network, the back-propagation Fresnel diffraction layer, and the forward-propagation Fresnel diffraction layer, corresponding to the samples collected at the s defocus distances; the s reconstruction losses are combined into one total loss function, and as training proceeds the parameters of the s decoupling networks differ from one another because their mapping relations differ.
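To make the data flow concrete, the following PyTorch sketch wires the pieces together as the paragraph describes. The class name, the tensor layout, and the assumption that each decoupling network is a one-channel-in, two-channel-out U-shaped network are illustrative, not the patent's implementation:

```python
import torch
import torch.nn as nn

class PhaseRecoveryNet(nn.Module):
    """Decoupling-fusion pipeline sketch: s per-frame decoupling networks map each
    single-channel hologram to a 2-channel (real/imag) tensor; each result is
    back-propagated to the sample plane with a fixed Fresnel kernel; the s fields
    are summed and the fusion network outputs the predicted complex amplitude."""
    def __init__(self, decouplers, fusion, back_kernels):
        super().__init__()
        self.decouplers = nn.ModuleList(decouplers)  # s U-shaped nets, 1-in / 2-out
        self.fusion = fusion                         # U-shaped net, 2-in / 2-out
        self.back_kernels = back_kernels             # s complex (N, N) tensors

    def forward(self, holograms):                    # list of s (B, 1, N, N) images
        fields = []
        for net, i_r, h_r in zip(self.decouplers, holograms, self.back_kernels):
            two_ch = net(i_r)                                   # (B, 2, N, N)
            u_r = torch.complex(two_ch[:, 0], two_ch[:, 1])     # complex field
            u0_r = torch.fft.ifft2(torch.fft.fft2(u_r) * h_r)   # to sample plane
            fields.append(torch.stack([u0_r.real, u0_r.imag], 1))
        fused_in = torch.stack(fields).sum(0)        # sum the s back-propagated fields
        return self.fusion(fused_in)                 # predicted complex amplitude
```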

S31, construct the phase recovery network.

As shown in Fig. 2, the phase recovery network includes a decoupling network, back-propagation Fresnel diffraction layers, a fusion network, forward-propagation Fresnel diffraction layers, and intensity extraction layers. The decoupling network and the fusion network both adopt a common U-shaped network structure, but the number of channels in each convolutional layer differs.

The decoupling network includes four down-sampling modules, four up-sampling modules, and two ordinary convolution modules. It decouples each defocus-plane intensity matrix {I_r | r ∈ 1,2,…,s}, a single-channel tensor, into a complex matrix represented as a two-channel tensor, denoted {U_r | r ∈ 1,2,…,s}, in which the first channel is the real-part matrix and the second channel is the imaginary-part matrix; since a light wave is itself a complex-valued matrix, this representation better reflects the actual imaging process. Specifically, each down-sampling module includes three convolutional layers and one max-pooling layer used for down-sampling, with the ReLU function as the activation of the first two convolutional layers; the convolutional layers extract features so that the number of channels gradually increases, while the pooling operation reduces the resolution. Each up-sampling module includes one deconvolution (transposed-convolution) layer used for up-sampling and two convolutional layers, with ReLU activation; the deconvolution layer increases the resolution. A residual connection adds the output tensor of the second convolutional layer of a down-sampling module to the equal-sized tensor output by the third convolutional layer; a residual-structure network is easy to optimize, alleviates the vanishing-gradient problem brought by increasing the depth of a deep neural network, and also speeds up the training of the decoupling network and lowers the time cost. A skip (layer-jump) connection concatenates, along the channel dimension, the tensor output by the max-pooling layer of a down-sampling module with the equal-sized tensor output by the deconvolution layer of the corresponding up-sampling module, as the input of the first convolutional layer of that up-sampling module; this brings out features of the shallower convolutional layers and retains rich low-level information, compensating for the low-level image information lost and the resolution reduced by the pooling operation, thereby improving the accuracy of image reconstruction while also reducing vanishing gradients and network degradation. The specific parameter settings of the down-sampling modules, up-sampling modules, and ordinary convolution modules are given in Table 1. More specifically, the decoupling network is arranged in the order: first down-sampling module → second down-sampling module → third down-sampling module → fourth down-sampling module → first ordinary convolution module → first up-sampling module → second up-sampling module → third up-sampling module → fourth up-sampling module → second ordinary convolution module, where the first ordinary convolution module connects the down-sampling and up-sampling paths of the decoupling network, and the second ordinary convolution module makes the decoupling network output a tensor of the required size.
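A hedged PyTorch sketch of one down-sampling module and one up-sampling module as described above. The channel counts, the use of the pooled tensor as the skip tensor, and the assumption that the skip tensor has out_ch channels are illustrative; the patent's exact settings are given in Table 1:

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Down-sampling module: three 3x3 convolutions (ReLU on the first two), a
    residual add between the outputs of the 2nd and 3rd convolutions, then 2x2
    max-pooling. Channel counts are placeholders for the values in Table 1."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.conv3 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        x = self.relu(self.conv1(x))
        y = self.relu(self.conv2(x))
        y = self.conv3(y) + y       # residual: 2nd conv output + 3rd conv output
        return self.pool(y)         # halved resolution; also used as the skip tensor

class UpBlock(nn.Module):
    """Up-sampling module: a transposed convolution doubles the resolution, the
    skip tensor is concatenated along the channel axis, then two 3x3 convolutions
    with ReLU. Assumes the skip tensor has out_ch channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.conv1 = nn.Conv2d(out_ch * 2, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)   # skip (layer-jump) connection
        x = self.relu(self.conv1(x))
        return self.relu(self.conv2(x))
```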

The back-propagation Fresnel diffraction layer includes a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator. It back-propagates the complex matrices {U_r | r ∈ 1,2,…,s} output by the s decoupling networks from the defocus planes to the sample plane; the linear back-propagation Fresnel diffraction operation is denoted {F_r^{-1} | r ∈ 1,2,…,s}:

U_0^(r) = F_r^{-1}(U_r) = F^{-1}{ F(U_r) · H_r^{-1}(f_x, f_y) },  r ∈ 1,2,…,s

where F denotes the discrete Fourier transform operator, F^{-1} the inverse discrete Fourier transform operator, and f_x, f_y the frequency-domain coordinates; multiplying by H_r^{-1} is equivalent to applying the transfer function for propagation over the distance -L_r. H_r(f_x, f_y) denotes the transfer-function matrix, whose expression is:

H_r(f_x, f_y) = exp( j k L_r sqrt( 1 - (λ f_x)^2 - (λ f_y)^2 ) )

where j is the imaginary unit, L_r is the defocus distance from the r-th defocus plane to the sample plane, λ is the center wavelength of the light source, and the wavenumber k = 2π/λ; different defocus distances produce different transfer functions.
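As an illustration, the DFT, transfer-function-product, inverse-DFT pipeline of the two Fresnel diffraction layers might be sketched as below. The square-root (angular-spectrum) kernel is an assumption consistent with the parameters named in the text, since the patent's own formula is reproduced only as an image:

```python
import math
import torch

def transfer_function(n, pixel, wavelength, dz):
    """H(f_x, f_y) for propagation over distance dz: dz = L_r gives the forward
    layer, dz = -L_r the back-propagation layer (kernel form is an assumption)."""
    fx = torch.fft.fftfreq(n, d=pixel)
    fy = fx.view(-1, 1)
    k = 2 * math.pi / wavelength
    arg = 1 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    return torch.exp(1j * k * dz * torch.sqrt(arg.clamp(min=0.0)))

def fresnel_layer(u, h):
    """DFT -> transfer-function product -> inverse DFT, on a complex field u."""
    return torch.fft.ifft2(torch.fft.fft2(u) * h)
```

Because the kernel is precomputed per defocus distance, both diffraction layers are fixed, parameter-free operations; only the decoupling and fusion networks carry learnable weights.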

The fusion network includes four down-sampling modules, four up-sampling modules, and two ordinary convolution modules. It fuses the information of the hologram intensity images collected over multiple frames and generates the predicted complex amplitude at the sample plane, denoted Û_0. Specifically, the input layer sums the s complex matrices output by the back-propagation Fresnel diffraction layers. Each down-sampling module includes three convolutional layers and one max-pooling layer used for down-sampling, with ReLU activation; the convolutional layers extract features so that the number of channels gradually increases, while pooling reduces the resolution. Each up-sampling module includes two convolutional layers and one deconvolution layer used for up-sampling, with ReLU activation; the deconvolution layer increases the resolution. The specific parameter settings are shown in Table 1. As in the decoupling network, a residual connection adds the output tensor of the second convolutional layer of a down-sampling module to the equal-sized tensor output by the third convolutional layer, which makes the network easy to optimize, alleviates the vanishing gradients brought by depth, speeds up training, and lowers the time cost; and a skip connection concatenates, along the channel dimension, the max-pooling output of each down-sampling module with the equal-sized deconvolution output of the corresponding up-sampling module as the input of that module's first convolutional layer, which brings out shallow features, retains rich low-level information, compensates for the information and resolution lost to pooling, improves reconstruction accuracy, and reduces vanishing gradients and network degradation. More specifically, the fusion network is arranged in the order: fifth down-sampling module → sixth down-sampling module → seventh down-sampling module → eighth down-sampling module → third ordinary convolution module → fifth up-sampling module → sixth up-sampling module → seventh up-sampling module → eighth up-sampling module → fourth ordinary convolution module, where the third ordinary convolution module connects the down-sampling and up-sampling paths of the fusion network, and the fourth ordinary convolution module makes the fusion network output a tensor of the required size.

The forward-propagation Fresnel diffraction layer includes a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator. It propagates the predicted complex amplitude Û_0 output by the fusion network forward from the sample plane to the s defocus planes, outputting the complex-amplitude prediction matrices {Û_r | r ∈ 1,2,…,s} of the s defocus planes:

Û_r = F^{-1}{ F(Û_0) · H_r(f_x, f_y) },  r ∈ 1,2,…,s

where F denotes the discrete Fourier transform operator, F^{-1} the inverse discrete Fourier transform operator, H_r(f_x, f_y) the transfer-function matrix, f_x and f_y the frequency-domain coordinates, and Û_0 the complex amplitude predicted by the fusion network at the sample plane.

The intensity extraction layer includes a computation module that extracts the intensity image matrices {Î_r | r ∈ 1,2,…,s} from the s defocus-plane complex-amplitude prediction matrices {Û_r | r ∈ 1,2,…,s} output by the forward-propagation Fresnel diffraction layer, so that they can be compared with the actual intensity images detected by the photoelectric sensor to compute the loss. The intensity information is extracted as:

Î_r = ( Û_r^(1) )^2 + ( Û_r^(2) )^2

where Û_r^(1) denotes the first channel of the tensor Û_r, representing the real part of the complex amplitude, and Û_r^(2) denotes the second channel, representing the imaginary part of the complex amplitude.

The present invention uses the decoupling network to decouple two-channel complex-matrix information from each single-channel hologram intensity image, uses the fusion network to fuse the information collected over multiple frames, and combines them with the Fresnel diffraction physical model; the learning target of the network is clear and its interpretability is strong.

Table 1: Parameter settings of the decoupling network and the fusion network in the phase recovery network. (The table body is provided only as images in the original publication.)


S32, define the total loss function L of the phase recovery network.

The total loss function L of the phase recovery network is expressed as follows:

L = Σ_{r=1}^{s} mean( | Î_r - I_r | )

where mean(*) is the pixel-wise averaging operator, Î_r is the hologram intensity matrix predicted for the r-th defocus plane, output by the intensity extraction layer, and I_r is the hologram intensity matrix actually collected at the r-th defocus plane. The 1-norm is adopted because it is friendly to detail information and can effectively improve resolution. The expression of the total loss function L gives the difference between the collected intensity and the intensity extracted from the predicted complex amplitude: the smaller the difference, the more accurate the predicted complex amplitude; the larger the difference, the less accurate. It therefore reflects the accuracy of the phase recovery of the method of the present invention.
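A sketch of this total loss in PyTorch, folding in the forward Fresnel step and the intensity extraction described earlier; tensor layouts and names are assumptions:

```python
import torch

def total_loss(pred, holograms, transfer_fns):
    """Sum of per-plane mean absolute errors between predicted and measured
    hologram intensities.
    pred:         (B, 2, N, N) real/imag channels predicted at the sample plane.
    holograms:    list of s measured intensity images I_r, each (B, 1, N, N).
    transfer_fns: list of s forward transfer functions H_r, each (N, N) complex."""
    u0 = torch.complex(pred[:, 0], pred[:, 1])
    loss = 0.0
    for i_r, h_r in zip(holograms, transfer_fns):
        u_r = torch.fft.ifft2(torch.fft.fft2(u0) * h_r)    # forward Fresnel step
        i_pred = u_r.real ** 2 + u_r.imag ** 2             # intensity extraction
        loss = loss + torch.mean(torch.abs(i_pred - i_r.squeeze(1)))  # 1-norm
    return loss
```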

The present invention extracts information from the multi-frame holograms collected at different defocus distances and uses them as loss-function constraints; the recovered phase values are closer to the true values than those of single-frame recovery methods, the reconstruction accuracy is higher, and the reconstructed phase-contrast and amplitude images have good visual quality.

S4, train the constructed phase recovery network.

The intensity images {I_r | r ∈ 1,2,…,s} of the 9M/10 training groups collected at the s defocus planes are used as the input of the phase recovery network of the present invention, and the network is trained for J iterations, with J greater than 100 to ensure that the network is sufficiently trained; specifically, J is 300 in this embodiment. Each round traverses the whole training sample set and test sample set: all samples in the training set are first fed into the phase recovery network in turn to train it and optimize its parameters, and then all samples in the test set are fed in turn to test whether the trained network needs further training. Once the test set meets a given loss-function requirement, the parameters of the phase recovery network are saved. In this embodiment the Adam optimizer provided by the PyTorch framework is used to optimize the parameters of the phase recovery network, the learning rate is set to 0.0001, and the average loss on the test sample set is required to be at most 3, which ensures the accuracy of the prediction; the average loss is the total loss L averaged over all test samples.
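The training procedure can be summarized in a short PyTorch sketch; the loader format, the details of the stopping logic, and the checkpoint filename are assumptions:

```python
import torch

def pretrain(net, train_loader, test_loader, transfer_fns, epochs=300):
    """Self-supervised pre-training: only the s measured holograms per sample are
    needed, no ground-truth complex amplitude. Adam with lr = 0.0001 as in the text."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for _ in range(epochs):
        net.train()
        for holograms in train_loader:            # a list of s (B, 1, N, N) tensors
            loss = total_loss(net(holograms), holograms, transfer_fns)
            opt.zero_grad()
            loss.backward()
            opt.step()
        net.eval()
        with torch.no_grad():
            avg = sum(total_loss(net(h), h, transfer_fns).item()
                      for h in test_loader) / len(test_loader)
        if avg <= 3.0:                            # average-loss criterion from the text
            break
    torch.save(net.state_dict(), "phase_net.pt")  # assumed checkpoint name
```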

The phase recovery network used in the present invention only needs to be trained on a small data set and does not require a large amount of data, which effectively reduces the training time and cost; its parameters are saved as the initial values for gradient descent, so compared with traditional untrained deep learning methods, fewer gradient-descent rounds are needed, and the learned prior knowledge increases the phase recovery speed.

S5, perform phase recovery to solve for the complex amplitude.

The parameters saved in step S4 are loaded into the phase recovery network. Under the same system parameters as used for training, intensity images of the sample under test are collected at the s defocus planes and fed into the phase recovery network, and the total loss function is computed; the Adam optimizer again performs K rounds of gradient descent on the parameters until the total loss on the input data satisfies L ≤ α, where α is a manually chosen loss-function threshold. Specifically, α equals 0.5, to ensure the phase recovery accuracy of the network. The prediction Û_0 of the sample-plane complex amplitude output by the fusion network is then the sought complex amplitude, and obtaining it completes the phase recovery.
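A sketch of this recovery step, reusing total_loss from the training sketch above; the names, the checkpoint filename, and the fine-tuning loop structure are assumptions:

```python
import torch

def recover(net, holograms, transfer_fns, alpha=0.5, max_rounds=100):
    """Load the pre-trained parameters, fine-tune on the measured holograms until
    the total loss satisfies L <= alpha, and return the predicted complex
    amplitude at the sample plane."""
    net.load_state_dict(torch.load("phase_net.pt"))
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    for _ in range(max_rounds):
        loss = total_loss(net(holograms), holograms, transfer_fns)
        if loss.item() <= alpha:
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = net(holograms)                 # (B, 2, N, N) real/imag channels
    return torch.complex(pred[:, 0], pred[:, 1])
```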

Implementation conditions and analysis of the results of the embodiment of the present invention:

1. Implementation conditions.

The method of the present invention is applicable to any mutually compatible hardware and software platform. The hardware test platform of this embodiment is an Intel Core i7 CPU at 3.60 GHz with 16 GB of memory and an NVIDIA TITAN XP GPU with 12 GB of video memory; the software platform is the 64-bit Windows 10 operating system; the language is Python; the deep learning framework is PyTorch.

2. Analysis of results.

The present invention, the G-S iterative algorithm, and the method of Fei Wang et al. were used to perform phase recovery on experimental data collected with the same system. The collected images are shown in Figs. 3 and 4, and the resulting phase-contrast and amplitude images are shown in Figs. 5 to 7.

Fig. 3 shows the intensity images of the USAF1951 resolution target collected at different defocus distances; Figs. 3(a), 3(b), 3(c), and 3(d) correspond to defocus distances of 0.710 mm, 1.185 mm, 1.685 mm, and 2.178 mm, respectively. The present invention extracts information from multiple frames collected at different defocus distances and uses them as loss-function constraints; the recovered phase values are closer to the true values than those of single-frame methods, the reconstruction accuracy is higher, and the reconstructed phase-contrast and amplitude images have good visual quality.

Fig. 4 shows the intensity images of the small-intestine epithelial cell section collected at different defocus distances; Figs. 4(a), 4(b), 4(c), and 4(d) correspond to defocus distances of 0.865 mm, 1.305 mm, 1.804 mm, and 2.304 mm, respectively. As above, the multi-frame acquisition both feeds the information extraction and constrains the loss function, so the recovered phase is closer to the true value than with single-frame methods.

Fig. 5 shows the phase recovery results of the G-S iterative algorithm, where Figs. 5(a), 5(b), 5(c), and 5(d) are, after 300 iterations, the amplitude imaging result for the USAF1951 resolution target, the phase-contrast imaging result for the USAF1951 resolution target, the amplitude imaging result for the small-intestine epithelial cell section, and the phase-contrast imaging result for the small-intestine epithelial cell section, respectively. Specifically, the results in Fig. 5 were obtained with the method disclosed by Gerchberg and Saxton in the 1972 article "A practical algorithm for the determination of the phase from image and diffraction plane pictures".

Fig. 6 shows the phase recovery results of the deep learning algorithm proposed by Fei Wang et al., where Figs. 6(a), 6(b), 6(c), and 6(d) are, after 300 rounds of gradient descent, the amplitude imaging result for the USAF1951 resolution target, the phase-contrast imaging result for the USAF1951 resolution target, the amplitude imaging result for the small-intestine epithelial cell section, and the phase-contrast imaging result for the small-intestine epithelial cell section, respectively. Specifically, the results in Fig. 6 were obtained with the method disclosed in the article "Phase imaging with an untrained neural network" published by Fei Wang et al. in the journal Light: Science & Applications in 2020.

Fig. 7 shows the phase recovery results of the method of the present invention, where Figs. 7(a), 7(b), 7(c), and 7(d) are, after 100 rounds of gradient descent, the amplitude imaging result for the USAF1951 resolution target, the phase-contrast imaging result for the USAF1951 resolution target, the amplitude imaging result for the small-intestine epithelial cell section, and the phase-contrast imaging result for the small-intestine epithelial cell section, respectively. On the one hand, the method of the present invention is trained on a small data set, and the saved parameters serve as the initial values for gradient descent; compared with traditional untrained deep learning methods, fewer gradient-descent rounds are needed: the present method uses 100 iteration rounds versus 300 for the two comparison methods, so its phase recovery is faster. On the other hand, comparing Figs. 5, 6, and 7 shows that the phase recovery results of the G-S iterative algorithm and the existing deep learning method contain obvious noise and shadows in the amplitude images. This indicates that the G-S iterative algorithm quickly falls into a local optimum, so the recovered phase values are not smooth enough, whereas the method of the present invention has learned prior knowledge from a large amount of data, so its images are smoother, with less noise and fewer shadows. The existing deep learning method uses only a single image, so the constraints are insufficient and the recovered phase values are not accurate enough; the method of the present invention uses multiple frames and more constraints, so the recovery is better and the values more accurate. The amplitude results of the present method are almost free of noise and shadows; and whereas the comparison methods show varying degrees of overexposure in phase-contrast imaging, the phase recovery results of the present method show no obvious overexposure and have a better visual appearance. Because the present invention extracts information from multi-frame intensity images collected at different defocus distances and uses them as loss-function constraints, the recovered phase values are closer to the true values than those of single-frame recovery methods, the reconstruction accuracy is higher, and the reconstructed phase-contrast and amplitude images have good visual quality.

The above are only preferred embodiments of the present invention and are not intended to limit it; various modifications and variations of the present invention will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, characterized in that the method comprises the following steps: S1, measuring the parameters of the lensless holographic microscopic imaging system to be recovered; S2, acquiring samples and constructing a training sample set and a test sample set; S3, constructing a phase recovery network; S4, training the constructed phase recovery network; S5, performing phase recovery to solve for the complex amplitude.

2. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 1, characterized in that the parameters in step S1 comprise the center wavelength of the coherent light source, the pixel size of the photoelectric sensor, and the defocus distance from each defocus plane to the sample plane.

3. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 2, characterized in that the samples in step S2 are brightness images of resolution N×N collected by the lensless holographic microscopic imaging system to be recovered for M samples at s defocus planes, yielding M groups of brightness image data and M×s brightness images in total, where M≥300, s≥2, and N=768.

4. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 3, characterized in that the training sample set and the test sample set in step S2 are obtained by dividing the M groups of brightness image data at a ratio of 9:1.

5. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 4, characterized in that the phase recovery network in step S3 comprises a decoupling network, a back-propagation Fresnel diffraction layer, a fusion network, a forward-propagation Fresnel diffraction layer, and a brightness extraction layer.

6. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 5, characterized in that the decoupling network comprises four downsampling modules, four upsampling modules, and two plain convolution modules.

7. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 6, characterized in that each downsampling module comprises three convolutional layers and one max-pooling layer used for downsampling and uses the ReLU activation function, and each upsampling module comprises two convolutional layers and one deconvolution layer used for upsampling and uses the ReLU activation function.

8. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 7, characterized in that the fusion network comprises four downsampling modules, four upsampling modules, and two plain convolution modules.

9. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 8, characterized in that the back-propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator.

10. The lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to claim 9, characterized in that the forward-propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator.
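Claims 9 and 10 assign the same internal structure to both diffraction layers: a discrete Fourier transform operator, a transfer-function product module, and an inverse discrete Fourier transform operator. As an illustration only, a minimal sketch of such a transfer-function (angular spectrum) form of Fresnel propagation follows, under the assumption of a square N×N field with uniform pixel pitch; the names "fresnel_propagate", "pixel_size", and "z" are hypothetical and this is not the patented implementation. A positive distance z corresponds to the forward-propagation layer of claim 10, a negative z to the back-propagation layer of claim 9.

    import numpy as np

    def fresnel_propagate(field, wavelength, pixel_size, z):
        # Transfer-function (angular spectrum) form of Fresnel diffraction:
        # DFT -> pointwise product with the transfer function -> inverse DFT.
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=pixel_size)   # spatial frequencies (cycles/m)
        FX, FY = np.meshgrid(fx, fx)
        k = 2.0 * np.pi / wavelength
        # Fresnel transfer function H = exp(jkz) * exp(-j*pi*lambda*z*(fx^2 + fy^2))
        H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

The system parameters measured in step S1 (center wavelength, pixel size, and defocus distances) are exactly the quantities such a layer consumes, which is why claim 2 lists them as prerequisites.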
CN202210177683.2A 2022-02-25 2022-02-25 Phase retrieval method for lensless holographic microscopy based on decoupling-fusion network Active CN114529476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210177683.2A CN114529476B (en) 2022-02-25 2022-02-25 Phase retrieval method for lensless holographic microscopy based on decoupling-fusion network


Publications (2)

Publication Number Publication Date
CN114529476A (en) 2022-05-24
CN114529476B (en) 2024-11-22

Family

ID=81624827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210177683.2A Active CN114529476B (en) 2022-02-25 2022-02-25 Phase retrieval method for lensless holographic microscopy based on decoupling-fusion network

Country Status (1)

Country Link
CN (1) CN114529476B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108983579A (en) * 2018-09-05 2018-12-11 南京大学 Method and device for phase recovery and reconstruction in lensless digital holographic microscopic imaging
WO2021073335A1 (en) * 2019-10-18 2021-04-22 南京大学 Convolutional neural network-based lens-free holographic microscopic particle characterization method
CN113281979A (en) * 2021-05-20 2021-08-20 清华大学深圳国际研究生院 Lensless ptychographic diffraction image reconstruction method, system, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PETER KOCSIS et al.: "Single-shot pixel super-resolution phase imaging by wavefront separation approach", Optics Express, 30 December 2021 (2021-12-30) *
YANG SHUYIN (杨书印): "Research on lensless holographic microscopic imaging based on deep learning" (基于深度学习的无透镜全息显微成像研究), CNKI, 1 May 2023 (2023-05-01) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115097709A (en) * 2022-07-05 2022-09-23 东南大学 A holographic coding method based on complex number optimizer or complex number solver
CN115097709B (en) * 2022-07-05 2023-11-17 东南大学 Holographic coding method based on complex optimizer or complex solver

Also Published As

Publication number Publication date
CN114529476B (en) 2024-11-22

Similar Documents

Publication Publication Date Title
Wang et al. On the use of deep learning for phase recovery
CN111366557B (en) A Phase Imaging Method Based on Thin Scattering Medium
US9025881B2 (en) Methods and apparatus for recovering phase and amplitude from intensity images
Bai et al. Dual-wavelength in-line digital holography with untrained deep neural networks
Li et al. Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network
CN111650738A A method and device for Fourier ptychographic microscopic image reconstruction based on deep learning
Kocsis et al. Single exposure lensless subpixel phase imaging: optical system design, modelling, and experimental study
Ke et al. Depth resolution enhancement in optical scanning holography with a dual-wavelength laser source
CN113762460A (en) Image Migration and Reconstruction Algorithm for Multimode Fiber Transmission Based on Numerical Speckle
WO2020081125A1 (en) Analyzing complex single molecule emission patterns with deep learning
Galande et al. Quantitative phase imaging of biological cells using lensless inline holographic microscopy through sparsity-assisted iterative phase retrieval algorithm
Chang et al. Complex-domain-enhancing neural network for large-scale coherent imaging
CN116503258A (en) Super-resolution computational imaging method, device, electronic equipment and storage medium
Madsen et al. On-axis digital holographic microscopy: Current trends and algorithms
Wei et al. Computational imaging-based single-lens imaging systems and performance evaluation
CN114529476A (en) Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network
CN114241072B Ptychographic imaging reconstruction method and system
Liu et al. Fast digital refocusing Fourier ptychographic microscopy method based on convolutional neural network
Huang et al. Wrapped phase aberration compensation using deep learning in digital holographic microscopy
Malik et al. A practical criterion for focusing of unstained cell samples using a digital holographic microscope
CN112070675B (en) A graph-based normalized light-field super-resolution method and light-field microscopy device
CN118625622A (en) A computational holographic reconstruction method based on deep learning
Jiang et al. Optimization of single-beam multiple-intensity reconstruction technique: Select an appropriate diffraction distance
CN115063377A (en) Intelligent interpolation method and system for three-dimensional microscopic image of fibrous structure
CN106502074A (en) A kind of auto focusing method for image planes digital holographic micro-measuring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant