CN114529476A - Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network - Google Patents
Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network
- Publication number
- CN114529476A (application CN202210177683.2A)
- Authority
- CN
- China
- Prior art keywords
- network
- phase recovery
- decoupling
- microscopic imaging
- lens
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/00—Image enhancement or restoration
- G06N3/045—Combinations of networks
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T2207/10056—Microscopic image
- G06T2207/20056—Discrete and fast Fourier transform, [DFT, FFT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30028—Colon; Small intestine
Abstract
The application relates to the fields of computer vision and microscopic imaging, and in particular provides a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network. The method comprises the following steps: S1, measuring the parameters of the lensless holographic microscopic imaging system to be recovered; S2, acquiring samples and constructing a training sample set and a test sample set; S3, constructing a phase recovery network; S4, training the constructed phase recovery network; and S5, performing phase recovery to solve the complex amplitude. The method uses a decoupling network to decouple two-channel complex-matrix information from each single-channel hologram luminance image, uses a fusion network to fuse the information collected over multiple frames, and combines both with a Fresnel diffraction physical model, so the learning target is clear and the interpretability is strong. The reconstruction accuracy is high, and the visual effect of the reconstructed phase-difference and amplitude images is good. The phase recovery network used by the invention requires fewer gradient-descent iterations and therefore recovers the phase faster.
Description
Technical Field
The application relates to the fields of computer vision and microscopic imaging, and in particular to a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, which can be used for amplitude and phase-difference microscopic imaging in an in-line (coaxial) holographic lensless microscopic imaging system.
Background
In medical and biological research, the microscope is a common tool and plays an extremely important role in observing pathological sections, microbial structures and the like. However, owing to the structural characteristics and imaging principle of the conventional microscope, it suffers from problems such as the trade-off between field of view and magnification and the aberration and chromatic distortion of lenses; it cannot record sample depth information, and it images semitransparent phase objects poorly. Lensless holographic microscopic imaging is based on the principle of optical holography: a photoelectric sensor such as a CMOS or CCD is placed close to the sample plane, a luminance image of the sample is collected in a coherent or partially coherent light field, and an algorithm recovers the complete wavefront information of the sample plane.
In a lensless holographic microscopic imaging system, the sample plane is close to the plane of the CMOS or CCD photoelectric sensor; the axial distance between the two planes is called the defocus distance and is usually between several hundred micrometers and several millimeters. A light source emitting fully or partially coherent light illuminates the sample plane so that it is projected and imaged onto the plane of the photoelectric sensor; in the near-field region, this imaging process approximates the Fresnel diffraction model. The luminance image recorded by the photoelectric sensor is an interference hologram of the sample in the light field, and a phase recovery algorithm can solve, from a single acquired hologram frame or several frames, the wavefront complex-amplitude information (amplitude and phase) of the sample plane, thereby realizing amplitude or phase-difference imaging of the sample plane. Compared with a traditional optical microscope, the lensless microscopic imaging system has a large field of view, a simple structure, and neither lens distortion nor aberration, and depth information can be computed from the recovered phase information to realize three-dimensional imaging.
In lensless holographic microscopic imaging systems, the common methods for solving the phase from collected hologram luminance images fall into two classes: phase recovery methods based on traditional iteration and phase recovery methods based on neural networks. Among the former, Gerchberg and Saxton proposed in 1972, in "A practical algorithm for the determination of phase from image and diffraction plane pictures", to randomly generate an initial phase, synthesize a complex amplitude with the luminance image acquired at the sample plane, iteratively project the complex amplitude between the sample plane and the imaging plane, replace the amplitude at both planes with the known luminance images, and modify the phase information in the iterative process, stopping when a termination condition is met. This method can obtain phase information that meets a given error requirement. Subsequent studies proposed iterative phase recovery algorithms that improve on the G-S algorithm. Such iterative algorithms fall into locally optimal solutions after several iterations, which makes the error decrease slowly, and they inevitably require a large amount of computation, resulting in high time cost.
With the remarkable progress of deep learning in image-problem fields such as image restoration and image reconstruction, many scholars have begun to apply neural networks to the phase recovery problem to overcome the shortcomings of the traditional methods. Typical deep-learning-based phase recovery methods include the following. Yichen Wu et al. published an article entitled "Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery" in the journal Optica in 2018, disclosing a deep-learning-based phase recovery method that realizes end-to-end phase recovery with a U-shaped network structure. The method trains on a large data set with real labels and learns the mapping from the luminance image of a single hologram frame, collected at a fixed defocus distance, to the sample-plane complex amplitude. Its advantage is that, in application, the defocus distance from the sample plane to the imaging plane need not be acquired precisely, because the method is robust within a certain defocus-distance range. However, the method requires a large amount of data with real labels for training, so the workload of obtaining the data sets is huge and the interpretability of the network is poor; in addition, its reconstruction performance on experimental data whose style departs from the training data is poor, so its generalization is limited.
Fei Wang et al. published a paper entitled "Phase imaging with an untrained neural network" in the journal Light: Science & Applications in 2020, disclosing a phase recovery method that combines a physical model with a deep neural network: the encoding end uses a U-shaped network to generate a prediction of the complex amplitude, and the decoding end uses the known Fresnel diffraction physical model to forward-propagate the complex amplitude predicted by the encoding end to the imaging plane and compute the luminance information; the pixel-value error with respect to the luminance image actually acquired at the imaging plane is calculated, and the network parameters are optimized by gradient descent to realize self-supervised learning, so no real labels of the data need to be acquired. Moreover, this deep learning method needs no training set: the network structure supplies hand-designed prior knowledge rather than prior knowledge learned from a training set. In use, a single acquired hologram frame is input, the model parameters are optimized in a self-supervised process that reduces the loss function, and the final output of the encoding-end network is taken as the prediction of the complex amplitude.
In summary, existing phase recovery methods suffer from poor interpretability, high time cost and low image reconstruction accuracy.
Disclosure of Invention
The invention aims to provide, in view of the shortcomings of the prior art, a lensless holographic microscopic imaging phase recovery method based on a decoupling-fusion network, so as to solve the problems of poor interpretability, high time cost and low image reconstruction accuracy in existing phase recovery methods.
The core technical idea of the phase recovery method is as follows: multi-frame holograms collected at different defocus distances are used as the input of the phase recovery network, and the output of the decoupling-fusion network is taken as the predicted complex amplitude. Combined with the known Fresnel diffraction model, the predicted complex amplitude is forward-propagated to each defocus plane, the mean absolute error with respect to the luminance image actually collected at each defocus plane is computed, the mean absolute errors of the defocus planes are summed to form the total loss function, and the parameters of the phase recovery network are updated by gradient descent. The training of the phase recovery network is self-supervised, so no real complex-amplitude labels of the data need to be acquired. After pre-training on a small collected or simulated data set, the parameters are saved as good initial values; when solving an actual phase recovery problem, a long gradient-descent process from random parameters is unnecessary, and convergence is faster than for an untrained generative network, so the method needs less time and its time cost is low. Because each convolutional neural network in the proposed decoupling-fusion network has a definite task, it learns a simple nonlinear mapping rather than the complex inverse process of image reconstruction, which improves the generalization and interpretability of the phase recovery network. In addition, the method replaces the single-frame constraint with a multi-frame constraint, reducing the reconstruction error and improving the visual effect of amplitude or phase-difference imaging, so the reconstruction accuracy of the method is high and the visual effect of the reconstructed images is good.
Specifically, the technical scheme adopted by the invention is as follows:
the application provides a lens-free holographic microscopic imaging phase recovery method based on a decoupling-fusion network, which comprises the following steps: s1, measuring parameters of the lens-free holographic microscopic imaging system to be recovered; s2, obtaining a sample and constructing a training sample set and a testing sample set; s3, constructing a phase recovery network; s4, training the constructed phase recovery network; and S5, performing phase recovery to solve complex amplitude.
Further, the parameters in step S1 comprise the center wavelength of the coherent light source, the pixel size of the photoelectric sensor, and the defocus distance of each defocus plane from the sample plane.
Furthermore, the samples in step S2 are luminance images with resolution N×N collected at s defocus planes for each of M samples in the lensless holographic microscopic imaging system to be recovered, giving M groups of luminance image data and M×s luminance images in total, where M ≥ 300, s ≥ 2, and N = 768.
Further, the training sample set and the test sample set in step S2 are obtained by dividing the M groups of luminance image data in a 9:1 ratio.
Further, the phase recovery network in step S3 comprises a decoupling network, a back-propagation Fresnel diffraction layer, a fusion network, a forward-propagation Fresnel diffraction layer and a luminance extraction layer.
Further, the decoupling network comprises four down-sampling modules, four up-sampling modules and two ordinary convolution modules.
Further, the down-sampling module comprises three convolutional layers and one max-pooling layer for down-sampling and uses the ReLU activation function, and the up-sampling module comprises two convolutional layers and one deconvolution layer for up-sampling and uses the ReLU activation function.
Furthermore, the fusion network comprises four down-sampling modules, four up-sampling modules and two ordinary convolution modules.
Further, the back-propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer-function product module and an inverse discrete Fourier transform operator.
Still further, the forward-propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer-function product module and an inverse discrete Fourier transform operator.
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention uses a decoupling network to decouple two-channel complex-matrix information from each single-channel hologram luminance image, uses a fusion network to fuse multi-frame acquisition information, and combines both with a Fresnel diffraction physical model; the network learning target is clear and the interpretability is strong;
(2) the method extracts information from multi-frame luminance images collected at different defocus distances and uses it as loss-function constraints; the recovered phase is closer to the true value than with single-frame recovery methods, the reconstruction accuracy is higher, and the visual effect of the reconstructed phase-difference and amplitude images is good;
(3) the phase recovery network used by the method is pre-trained on a small data set, and the parameters are saved as the initial values for gradient descent, so fewer gradient-descent rounds are needed at recovery time.
Drawings
FIG. 1 is a schematic diagram of a lens-free holographic microscopic imaging phase recovery method based on a decoupling-fusion network according to the present invention;
fig. 2 is a schematic diagram of the phase recovery network constructed in step S3 in the lens-free holographic microscopic imaging phase recovery method based on the decoupling-fusion network according to the present invention;
FIG. 3 is a diagram of luminance images of a USAF1951 resolution plate at different defocus distances, wherein the defocus distances of FIG. 3(a), FIG. 3(b), FIG. 3(c) and FIG. 3(d) are 0.710mm, 1.185mm, 1.685mm and 2.178mm respectively;
FIG. 4 is a luminance image of a sample slice of collected small intestine epithelial cells at different defocus distances, wherein the defocus distances of FIG. 4(a), FIG. 4(b), FIG. 4(c) and FIG. 4(d) are 0.865mm, 1.305mm, 1.804mm and 2.304mm, respectively;
FIG. 5 shows the results of phase recovery using the G-S iterative algorithm, wherein FIG. 5(a), FIG. 5(b), FIG. 5(c) and FIG. 5(d) show the results of amplitude imaging of the USAF1951 resolution plate, phase-difference imaging of the USAF1951 resolution plate, amplitude imaging of the small intestine epithelial cell sample slice, and phase-difference imaging of the small intestine epithelial cell sample slice, respectively, after 300 iterations of the G-S iterative algorithm;
FIG. 6 shows the results of phase recovery using the deep learning algorithm proposed by Fei Wang et al., wherein FIG. 6(a), FIG. 6(b), FIG. 6(c) and FIG. 6(d) show the results of amplitude imaging of the USAF1951 resolution plate, phase-difference imaging of the USAF1951 resolution plate, amplitude imaging of the small intestine epithelial cell sample slice, and phase-difference imaging of the small intestine epithelial cell sample slice, respectively, after 300 rounds of gradient descent using the deep learning algorithm proposed by Fei Wang et al.;
FIG. 7 shows the results of phase recovery using the method of the invention, wherein FIG. 7(a), FIG. 7(b), FIG. 7(c) and FIG. 7(d) show the results of amplitude imaging of the USAF1951 resolution plate, phase-difference imaging of the USAF1951 resolution plate, amplitude imaging of the small intestine epithelial cell sample slice, and phase-difference imaging of the small intestine epithelial cell sample slice, respectively, after 100 rounds of gradient descent using the method of the invention.
Detailed Description
In order to make the implementation of the present invention clearer, the following detailed description is made with reference to the accompanying drawings.
The invention provides a lens-free holographic microscopic imaging phase recovery method based on a decoupling-fusion network, which comprises the following specific steps as shown in figure 1:
s1, measuring parameters of the lens-free holographic microscopic imaging system to be recovered;
the lens-free holographic microscopic imaging technology is based on the coaxial holographic principle, and is a technology which utilizes a photoelectric sensor such as a CMOS/CCD and the like to be close to a sample plane, collects a brightness image of a sample in a coherent light field or a partially coherent light field, the brightness image is an obtained holographic image, and recovers complete wavefront information of the sample plane by utilizing an algorithm. Specifically, the center wavelength of the coherent light source, the pixel size of the photoelectric sensor, and the defocus distance of the defocus plane from the sample plane are parameters in the transfer function. In order to obtain parameters in the transfer function, the central wavelength λ of the coherent light source is measured, and the pixel size p of the photoelectric sensor is obtainedrI r ∈ 1,2, …, s }, specifically, since fresnel diffraction occurs in the near field region, the defocus distance is 0.1mm-3mm, so that the optical propagation process can be approximated using the fresnel diffraction formula. The sample can be fixed, namely the sample plane is fixed, the photoelectric sensor is moved, namely the defocusing plane is moved, and the photoelectric sensor can also be fixed to move the sample; the inventionThe system of fixing the sample and moving the photoelectric sensor is taken as an example for explanation, which is beneficial to maintaining the stability of the sample.
S2, obtaining a sample and constructing a training sample set and a testing sample set;
the sample of the embodiment is a brightness image with resolution of NxN at s defocused planes of M samples collected in a lens-free holographic microscopic imaging system, M groups of brightness image data are obtained, and M x s brightness images are obtained, wherein M is more than or equal to 300, so that the sufficiency of a training data set can be ensured; s is more than or equal to 2 and is at least two frames so as to ensure that multi-frame reconstruction can be carried out; n is 768, which can be adapted to the video memory size of the present embodiment. In the embodiment, a bmp format picture is used, the network structure is large, for a display card with a 12GB display memory, an image with a resolution of 768 × 768 is suitable, and for other display card devices with a smaller or larger display memory, the value of N needs to be adjusted; in addition, parameters trained on a training sample set with 768 × 768 resolution can be restored even for inputs with other resolutions, and in the implementation, the restoration can be performed by using a phase restoration network trained by inputting images with 512 × 512 or 1024 × 1024 resolution, and the convolution operation of the neural network is a convolution kernel sliding operation of 3 × 3, so that the input resolution is not mandatory. Randomly, dividing M groups of brightness image data into a training sample set and a test sample set according to a ratio of 9:1, namely, 9M/10 groups of brightness image data in the M groups of brightness image data form the training sample set, and the rest 1M/10 groups of brightness image data form the test sample set.
S3, constructing a phase recovery network;
As shown in fig. 2, the invention replaces the nonlinear-mapping part of the image reconstruction inverse problem with a phase recovery network. The phase recovery network takes the multi-frame luminance images collected at different defocus distances as input; the decoupling networks map the single-channel luminance image of each frame into a two-channel complex matrix, which is back-propagated to the sample plane through the Fresnel diffraction model. The several complex matrices are fused by the fusion network, which outputs the predicted complex amplitude. At the loss-function end, the predicted complex amplitude is forward-propagated through the known Fresnel diffraction model to the acquisition plane at each defocus distance, the mean absolute error with respect to the luminance image actually collected at the corresponding plane is calculated, and the parameters of the phase recovery network are optimized by gradient descent through back-propagation of the gradient. In fig. 2 there are s luminance extraction layers, reconstruction losses, decoupling networks, back-propagation Fresnel diffraction layers and forward-propagation Fresnel diffraction layers, corresponding to the samples collected at the s defocus distances; the s reconstruction losses are combined into the total loss function, and because the mapping relations differ, the parameters of the s decoupling networks diverge from one another during training.
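The following is a high-level sketch, in PyTorch, of one forward pass through this pipeline; `decouple_nets`, `fuse_net`, `transfer_fns` and the `propagate` helper (given after the Fresnel diffraction layers below) are illustrative names assumed for this sketch, not taken from the patent.

```python
import torch

def phase_recovery_forward(holograms, decouple_nets, fuse_net, transfer_fns):
    """holograms: s single-channel luminance images, each a (B, 1, N, N) tensor."""
    back_propagated = []
    for i_r, net_r, h_r in zip(holograms, decouple_nets, transfer_fns):
        two_ch = net_r(i_r)                              # (B, 2, N, N): real / imaginary channels
        u_r = torch.complex(two_ch[:, 0], two_ch[:, 1])  # decoupled complex matrix
        back_propagated.append(propagate(u_r, h_r.conj()))  # Fresnel back-propagation
    u_sum = sum(back_propagated)                         # fusion-network input layer sums the s matrices
    fused = fuse_net(torch.stack((u_sum.real, u_sum.imag), dim=1))
    return torch.complex(fused[:, 0], fused[:, 1])       # predicted sample-plane complex amplitude
```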
S31, constructing a phase recovery network;
As shown in fig. 2, the phase recovery network comprises the decoupling networks, back-propagation Fresnel diffraction layers, the fusion network, forward-propagation Fresnel diffraction layers and luminance extraction layers. The decoupling networks and the fusion network both adopt a common U-shaped network structure, but the number of channels of each convolutional layer differs.
Each decoupling network comprises four down-sampling modules, four up-sampling modules and two ordinary convolution modules, and decouples the single-channel luminance matrix of a collected defocus-plane luminance image, {I_r | r ∈ 1,2,…,s}, into a two-channel complex matrix {U_r | r ∈ 1,2,…,s}, whose first channel is the real-part matrix and whose second channel is the imaginary-part matrix; since a light wave is complex-valued, this better reflects the actual imaging process. Specifically, each down-sampling module includes three convolutional layers for extracting features, so that the number of channels gradually increases, and one max-pooling layer for down-sampling, which reduces the resolution; the ReLU function is used as the activation function for the first two convolutional layers. Each up-sampling module includes one deconvolution layer for up-sampling, which increases the resolution, and two convolutional layers, and uses the ReLU activation function. The output tensor of the second convolutional layer in a down-sampling module and the same-sized tensor output by the third convolutional layer are added through a residual connection; a residual-structured network is easy to optimize, alleviates the vanishing-gradient problem caused by increasing depth in deep neural networks, speeds up the training of the decoupling network, and reduces the time cost. The tensor output by the max-pooling layer of a down-sampling module and the tensor output by the deconvolution layer of the corresponding up-sampling module are concatenated along the channel direction through a skip connection and used as the input of the first convolutional layer of that up-sampling module; this passes on the features of the shallower convolutional layers, preserves rich low-level information, compensates for the loss of low-level image information and resolution caused by the pooling operations, improves the accuracy of image reconstruction, and mitigates the problems of vanishing gradients and network degradation. The specific parameter settings of the down-sampling, up-sampling and ordinary convolution modules are detailed in Table 1. More specifically, the layout of the decoupling network is, in order: first down-sampling module → second down-sampling module → third down-sampling module → fourth down-sampling module → first ordinary convolution module → first up-sampling module → second up-sampling module → third up-sampling module → fourth up-sampling module → second ordinary convolution module, where the first ordinary convolution module connects the down-sampling and up-sampling paths of the decoupling network, and the second ordinary convolution module makes the decoupling network output tensor information of the required size.
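To make the module structure concrete, the following is a minimal PyTorch sketch of the down-sampling module described above (three convolutional layers with ReLU on the first two, a residual connection adding the outputs of the second and third convolutional layers, and a max-pooling layer); the channel counts are illustrative assumptions, since Table 1 with the exact parameter settings is not reproduced here.

```python
import torch.nn as nn

class DownsamplingModule(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())
        self.conv3 = nn.Conv2d(out_ch, out_ch, 3, padding=1)  # no ReLU before the residual add
        self.pool = nn.MaxPool2d(2)                           # down-sampling halves the resolution

    def forward(self, x):
        f2 = self.conv2(self.conv1(x))
        f3 = self.conv3(f2) + f2      # residual connection: conv2 output added to conv3 output
        return self.pool(f3)          # pooled output also feeds the skip connection
```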
The back-propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer-function product module and an inverse discrete Fourier transform operator. It back-propagates the complex matrices {U_r | r ∈ 1,2,…,s} output by the s decoupling networks from the defocus planes to the sample plane by Fresnel diffraction; denoting the linear back-propagation operators {F_r^{-1} | r ∈ 1,2,…,s}:

F_r^{-1}(U_r) = F^{-1}{ F{U_r} · H_r*(f_x, f_y) }

where F denotes the discrete Fourier transform operator; F^{-1} denotes the inverse discrete Fourier transform operator; f_x and f_y are the frequency-domain coordinates; H_r(f_x, f_y) denotes the transfer-function matrix, and the superscript * denotes the complex conjugate (equivalent to propagation over the distance −L_r). Its expression is:
H_r(f_x, f_y) = exp(jkL_r) · exp(−jπλL_r(f_x² + f_y²))

where j is the imaginary unit; L_r is the defocus distance from the r-th defocus plane to the sample plane; λ is the center wavelength of the light source; and the wave number k = 2π/λ. The transfer functions generated at different defocus distances differ.
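As an illustration, the transfer function can be built as follows; this sketch reconstructs H_r from the symbols defined above, and the exact discretization of the frequency-domain coordinates is an assumption.

```python
import cmath
import math
import torch

def fresnel_transfer_function(n, pixel_size, wavelength, distance):
    """n x n Fresnel transfer function H_r for propagation over `distance` (L_r)."""
    f = torch.fft.fftfreq(n, d=pixel_size)            # frequency-domain coordinates f_x, f_y
    fy, fx = torch.meshgrid(f, f, indexing="ij")
    k = 2 * math.pi / wavelength                      # wave number k = 2*pi/lambda
    quadratic = torch.exp(-1j * math.pi * wavelength * distance * (fx**2 + fy**2))
    return cmath.exp(1j * k * distance) * quadratic   # constant phase factor exp(jkL_r)
```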
The fusion network comprises four down-sampling modules, four up-sampling modules and two ordinary convolution modules, and fuses the information of the hologram luminance images collected over multiple frames to generate the predicted complex amplitude of the sample plane. Specifically, the input layer sums the s complex matrices output by the back-propagation Fresnel diffraction layers. Each down-sampling module includes three convolutional layers for extracting features, so that the number of channels gradually increases, and one max-pooling layer for down-sampling, which reduces the resolution, and uses the ReLU activation function. Each up-sampling module includes two convolutional layers and one deconvolution layer for up-sampling, which increases the resolution, and uses the ReLU activation function; the specific parameter settings are shown in Table 1. The output tensor of the second convolutional layer in a down-sampling module and the same-sized tensor output by the third convolutional layer are added through a residual connection; a residual-structured network is easy to optimize, alleviates the vanishing-gradient problem caused by increasing depth in deep neural networks, speeds up training, and reduces the time cost. The tensor output by the max-pooling layer of a down-sampling module and the tensor output by the deconvolution layer of the corresponding up-sampling module are concatenated along the channel direction through a skip connection and used as the input of the first convolutional layer of that up-sampling module; this passes on the features of the shallower convolutional layers, preserves rich low-level information, compensates for the loss of low-level image information and resolution caused by the pooling operations, improves the accuracy of image reconstruction, and mitigates the problems of vanishing gradients and network degradation. More specifically, the layout of the fusion network is, in order: fifth down-sampling module → sixth down-sampling module → seventh down-sampling module → eighth down-sampling module → third ordinary convolution module → fifth up-sampling module → sixth up-sampling module → seventh up-sampling module → eighth up-sampling module → fourth ordinary convolution module, where the third ordinary convolution module connects the down-sampling and up-sampling paths of the fusion network, and the fourth ordinary convolution module makes the fusion network output tensor information of the required size.
The forward-propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer-function product module and an inverse discrete Fourier transform operator. It forward-propagates the predicted complex amplitude Û_0 output by the fusion network from the sample plane to the s defocus planes and outputs the complex-amplitude prediction matrices {Û_r | r ∈ 1,2,…,s} of the s defocus planes:

Û_r = F^{-1}{ F{Û_0} · H_r(f_x, f_y) }

where F denotes the discrete Fourier transform operator; F^{-1} denotes the inverse discrete Fourier transform operator; H_r(f_x, f_y) denotes the transfer-function matrix; f_x and f_y are the frequency-domain coordinates; and Û_0 is the complex amplitude predicted at the sample plane by the fusion network.
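Both Fresnel diffraction layers then reduce to one product in the spectral domain. A minimal sketch, reusing `fresnel_transfer_function` from the previous sketch:

```python
import torch

def propagate(u, transfer_function):
    """Fresnel propagation of a complex field u: multiply its spectrum by H_r."""
    return torch.fft.ifft2(torch.fft.fft2(u) * transfer_function)

# Forward to the r-th defocus plane: propagate(u_0, h_r);
# back to the sample plane:          propagate(u_r, h_r.conj()).
```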
The luminance extraction layer comprises a calculation module that extracts the luminance image matrices {Î_r | r ∈ 1,2,…,s} from the s defocus-plane complex-amplitude prediction matrices {Û_r | r ∈ 1,2,…,s} output by the forward-propagation Fresnel diffraction layers, so that they can be compared with the luminance images actually detected by the photoelectric sensor to compute the loss. The expression for extracting the luminance information is:

Î_r = (Û_r^(1))² + (Û_r^(2))²

where Û_r^(1) denotes the first channel of the tensor Û_r, representing the real part of the complex amplitude, and Û_r^(2) denotes the second channel of the tensor Û_r, representing the imaginary part of the complex amplitude.
According to the invention, the decoupling networks decouple the two-channel complex-matrix information from each single-channel hologram luminance image, the fusion network fuses the multi-frame acquisition information, and both are combined with the Fresnel diffraction physical model, so the network learning target is clear and the interpretability is strong.
Table 1: parameter settings of the decoupling network and the fusion network in the phase recovery network.
S32, defining a total loss function L of the phase recovery network.
The expression of the total loss function L of the phase recovery network is:

L = Σ_{r=1}^{s} mean( |Î_r − I_r| )

where mean is the pixel-wise averaging operator; Î_r is the hologram luminance matrix predicted for the r-th defocus plane and output by the luminance extraction layer; and I_r is the hologram luminance matrix actually collected at the r-th defocus plane. The 1-norm is adopted because it preserves detail information well and effectively improves the resolution. The total loss function L measures the difference between the collected luminance and the luminance extracted from the predicted complex amplitude: the smaller the difference, the more accurate the predicted complex amplitude, so L reflects the accuracy of the phase recovery.
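A minimal PyTorch sketch of this total loss, assuming the luminance extraction of the previous subsection (squared modulus of the predicted complex amplitude) and recorded images shaped (B, 1, N, N):

```python
import torch

def total_loss(predicted_fields, recorded_luminances):
    """Sum over the s defocus planes of the mean absolute luminance error."""
    loss = 0.0
    for u_r, i_r in zip(predicted_fields, recorded_luminances):
        luminance = (u_r.real**2 + u_r.imag**2).unsqueeze(1)  # luminance extraction, (B, 1, N, N)
        loss = loss + torch.mean(torch.abs(luminance - i_r))  # 1-norm reconstruction loss
    return loss
```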
The method extracts information from the multi-frame holograms collected at different defocus distances and uses it as loss-function constraints; the recovered phase is closer to the true value than with single-frame recovery methods, the reconstruction accuracy is higher, and the visual effect of the reconstructed phase-difference and amplitude images is good.
S4, training the constructed phase recovery network;
luminance images { I) acquired by 9M/10 groups in training sample set at s defocused planesrI r ∈ 1,2, …, s } is used as an input of the phase recovery network, and the phase recovery network is iteratively trained J times, where J is greater than 100, so that the network is sufficiently trained, specifically, 300 times in this embodiment. In each round, all training sample sets and test sample sets need to be traversed, and training is firstly carried outAll samples in the sample set are sequentially input into a phase recovery network and used for training the network and optimizing network parameters; and then all samples in the test sample set are sequentially input into the phase recovery network for testing whether the trained network needs to be trained continuously. After the test sample set meets a certain loss function requirement, saving parameters of the phase recovery network; in the embodiment, an Adam optimizer provided by a pytorr framework is selected to optimize parameters in a phase recovery network, the learning rate is set to 0.0001, and the average loss on a test sample set is required to be less than or equal to 3, so that the prediction accuracy can be ensured, wherein the average loss refers to the average of the total loss L of all test samples.
The phase recovery network used by the invention only needs to be trained on a small data set; it does not need a large amount of data, which effectively reduces the training time and time cost. The parameters are saved as the initial values for gradient descent, so fewer gradient-descent rounds are needed than with untrained deep learning methods, and the learned prior knowledge speeds up the phase recovery.
S5, carrying out phase recovery to solve complex amplitude;
loading the parameters stored in the step S4 into a phase recovery network, acquiring brightness image data of the measured sample in S defocusing planes under the same system parameters as those of the phase recovery network during training, inputting the brightness image data into the phase recovery network, calculating a total loss function, and performing gradient descent optimization parameter K round again by using an Adam optimizer until the total loss function on the input data meets L not more than alpha, wherein alpha is a manually selected loss function threshold, specifically, the loss function threshold alpha is equal to 0.5, so as to ensure the phase recovery accuracy of the phase recovery network and fuse the prediction result of the complex amplitude of the sample plane output by the networkI.e. the complex amplitude is found, i.e. the phase recovery is completed.
The implementation conditions and the result analysis of the embodiment of the invention are as follows:
1. Implementation conditions.
the method of the invention is suitable for any mutually compatible hardware and software platform, and the hardware testing platform adopted by the embodiment is as follows: intel Core i7 CPU with 3.60GHz dominant frequency and 16GB internal memory; the GPU is as follows: NVIDIA TITAN XP, 12GB video memory; the software simulation platform comprises: windows 1064-bit operating system; software simulation language: python; using a deep learning framework: PyTorch.
2. Result analysis.
Phase recovery is performed on experimental data collected by the same system using the method of the invention, the G-S iterative algorithm, and the method of Fei Wang et al. The collected images are shown in fig. 3 and fig. 4, and the resulting phase-difference and amplitude imaging results are shown in figs. 5 to 7.
Fig. 3 shows the collected luminance images of the USAF1951 resolution plate at different defocus distances; the defocus distances in fig. 3(a), 3(b), 3(c) and 3(d) are 0.710 mm, 1.185 mm, 1.685 mm and 2.178 mm, respectively. The method extracts information from luminance images collected over multiple frames at different defocus distances and uses it as loss-function constraints; the recovered phase is closer to the true value than with single-frame recovery methods, the reconstruction accuracy is higher, and the visual effect of the reconstructed phase-difference and amplitude images is good.
Fig. 4 shows the collected luminance images of the small intestine epithelial cell sample slice at different defocus distances; the defocus distances in fig. 4(a), 4(b), 4(c) and 4(d) are 0.865 mm, 1.305 mm, 1.804 mm and 2.304 mm, respectively.
Fig. 5 shows the results of phase recovery using the G-S iterative algorithm, where fig. 5(a), 5(b), 5(c) and 5(d) show the results of amplitude imaging of the USAF1951 resolution plate, phase-difference imaging of the USAF1951 resolution plate, amplitude imaging of the small intestine epithelial cell sample slice, and phase-difference imaging of the small intestine epithelial cell sample slice, respectively, after 300 iterations of the G-S iterative algorithm. Specifically, the results in fig. 5 were obtained with the method disclosed by Gerchberg and Saxton in the 1972 article entitled "A practical algorithm for the determination of phase from image and diffraction plane pictures".
Fig. 6 shows the results of phase recovery using the deep learning algorithm proposed by Fei Wang et al., where fig. 6(a), 6(b), 6(c) and 6(d) show the results of amplitude imaging of the USAF1951 resolution plate, phase-difference imaging of the USAF1951 resolution plate, amplitude imaging of the small intestine epithelial cell sample slice, and phase-difference imaging of the small intestine epithelial cell sample slice, respectively, after 300 rounds of gradient descent using the deep learning algorithm proposed by Fei Wang et al. Specifically, the results in fig. 6 were obtained with the method entitled "Phase imaging with an untrained neural network" published by Fei Wang et al. in the journal Light: Science & Applications in 2020.
Fig. 7 shows the results of phase recovery with the method of the invention, where fig. 7(a), 7(b), 7(c) and 7(d) show the results of amplitude imaging of the USAF1951 resolution plate, phase-difference imaging of the USAF1951 resolution plate, amplitude imaging of the small intestine epithelial cell sample slice, and phase-difference imaging of the small intestine epithelial cell sample slice, respectively, after 100 rounds of gradient descent with the method of the invention. On the one hand, the method is pre-trained on a small data set and the parameters are saved as the initial values for gradient descent, so fewer gradient-descent rounds are needed than with untrained deep learning methods; the method uses 100 iteration rounds rather than the 300 of the two comparison methods, i.e., its phase recovery is faster. On the other hand, comparing fig. 5, fig. 6 and fig. 7 shows that, in the phase recovery results, the G-S iterative algorithm and the existing deep learning method produce obvious noise and shadows in the sample amplitude images: the G-S iterative algorithm quickly falls into a locally optimal solution, so the recovered phase values are not smooth, whereas the method of the invention has learned prior knowledge from a large amount of data, so its images are smoother, with less noise and fewer shadows; the existing deep learning method uses only a single image, so the constraint is insufficient and the recovered phase values are not accurate enough. The amplitude recovery result obtained with the method of the invention has almost no noise or shadow; and whereas the comparison methods' sample phase-difference images show over-exposure of varying degree, the phase recovery result of the method of the invention shows no obvious over-exposure and has a better visual effect. Because the invention extracts information from multi-frame luminance images collected at different defocus distances and uses it as loss-function constraints, the recovered phase is closer to the true value than with single-frame recovery methods, the reconstruction accuracy is higher, and the visual effect of the reconstructed phase-difference and amplitude images is good.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A lens-free holographic microscopic imaging phase recovery method based on a decoupling-fusion network is characterized by comprising the following steps:
s1, measuring parameters of the lens-free holographic microscopic imaging system to be recovered;
s2, obtaining a sample and constructing a training sample set and a testing sample set;
s3, constructing a phase recovery network;
s4, training the constructed phase recovery network;
and S5, performing phase recovery to solve complex amplitude.
2. The decoupling-fusion network-based lensless holographic microscopic imaging phase recovery method of claim 1, wherein the parameters in step S1 comprise the center wavelength of the coherent light source, the pixel size of the photoelectric sensor, and the defocus distance of each defocus plane from the sample plane.
3. The decoupling-fusion network-based lensless holographic microscopic imaging phase recovery method of claim 2, wherein the samples in step S2 are luminance images with resolution N×N collected at s defocus planes for each of M samples in the lensless holographic microscopic imaging system to be recovered, giving M groups of luminance image data and M×s luminance images, where M ≥ 300, s ≥ 2, and N = 768.
4. The decoupling-fusion network-based lensless holographic microscopic imaging phase recovery method of claim 3, wherein the training sample set and the test sample set in step S2 are obtained by dividing the M groups of luminance image data in a 9:1 ratio.
5. The decoupling-fusion network-based lensless holographic microscopic imaging phase recovery method of claim 4, wherein the phase recovery network in step S3 comprises a decoupling network, a back-propagation Fresnel diffraction layer, a fusion network, a forward-propagation Fresnel diffraction layer and a luminance extraction layer.
6. The lens-free holographic microscopic imaging phase recovery method based on the decoupling-fusion network of claim 5, wherein the decoupling network comprises four down-sampling modules, four up-sampling modules, and two general convolution modules.
7. The decoupling-fusion network-based lensless holographic microscopic imaging phase recovery method of claim 6, wherein the down-sampling module comprises three convolutional layers and one max-pooling layer for down-sampling and uses the ReLU activation function, and the up-sampling module comprises two convolutional layers and one deconvolution layer for up-sampling and uses the ReLU activation function.
8. The lens-free holographic microscopic imaging phase recovery method based on the decoupling-fusion network of claim 7, wherein the fusion network comprises four down-sampling modules, four up-sampling modules, and two general convolution modules.
9. The lens-free holographic microscopic imaging phase recovery method based on the decoupling-fusion network of claim 8, wherein the back propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer function product module, and an inverse discrete Fourier transform operator.
10. The lens-free holographic microscopic imaging phase recovery method based on the decoupling-fusion network of claim 9, wherein the forward propagation Fresnel diffraction layer comprises a discrete Fourier transform operator, a transfer function product module, and an inverse discrete Fourier transform operator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210177683.2A CN114529476A (en) | 2022-02-25 | 2022-02-25 | Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210177683.2A CN114529476A (en) | 2022-02-25 | 2022-02-25 | Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114529476A true CN114529476A (en) | 2022-05-24 |
Family
ID=81624827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210177683.2A Pending CN114529476A (en) | 2022-02-25 | 2022-02-25 | Lensless holographic microscopic imaging phase recovery method based on decoupling-fusion network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114529476A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115097709A (en) * | 2022-07-05 | 2022-09-23 | 东南大学 | Holographic encoding method based on complex optimizer or complex solver |
CN115097709B (en) * | 2022-07-05 | 2023-11-17 | 东南大学 | Holographic coding method based on complex optimizer or complex solver |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination