CN110989154A - Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof - Google Patents

Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof

Info

Publication number
CN110989154A
CN110989154A
Authority
CN
China
Prior art keywords
point spread
image
spread function
light field
pixel
Prior art date
Legal status
Granted
Application number
CN201911227287.0A
Other languages
Chinese (zh)
Other versions
CN110989154B (en)
Inventor
毛珩 (Heng Mao)
张光义 (Guangyi Zhang)
陈良怡 (Liangyi Chen)
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Publication of CN110989154A
Priority to PCT/CN2020/105190 (published as WO2021098276A1)
Application granted
Publication of CN110989154B
Priority to PCT/CN2020/129083 (published as WO2021098645A1)
Legal status: Active

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condenser (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a reconstruction method for microscopic light field volume imaging together with methods for obtaining its forward process and back projection, wherein the method comprises the following steps: step 1, decomposing a 3D volume image into layer images, wherein each layer image consists of a plurality of 2D sub-images whose pixel values are zero everywhere except at coordinates (i, j); step 2, extracting the pixel at coordinates (i, j) from each layer image, rearranging these pixels into single-channel images, and stacking all single-channel images into a multi-channel image; step 3, rearranging the 5D point spread function of the light field microscope into a 4D convolution kernel; step 4, taking the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as its weights, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image; and step 5, rearranging the multi-channel image output in step 4 into a light field image. The method greatly increases the speed of the convolution computation and hence the reconstruction speed of the image.

Description

Reconstruction method for microscopic light field volume imaging and forward process and back projection acquisition method thereof
Technical Field
The invention relates to the technical field of light field microscope image reconstruction, in particular to a reconstruction method of microscopic light field volume imaging and a forward process and a back projection acquisition method thereof.
Background
The light field microscope is a fast imaging modality: a conventional fluorescence microscope can be converted into a light field microscope (as shown in fig. 1) by adding a microlens array at the image plane. A conventional microscope can only perform 2D imaging, whereas a light field microscope performs 3D imaging: it captures a light field image, from which the 3D distribution of the object can be reconstructed.
Fig. 1 shows the imaging principles of a typical microscope and of a light field microscope, together with light field images of zebrafish taken by a light field microscope. Note that the image taken by the light field microscope has a characteristic circular pattern, which is the effect of the microlenses.
In general, the forward process of light field imaging can be expressed as:
f = Hg
where f is the captured light field image, g is the 3D volume image to be reconstructed, and both are flattened into one-dimensional vectors for this description. H is the system matrix that relates the 3D volume image g to the light field image f from which g is reconstructed. Different reconstruction algorithms differ in their details, but all of them need to compute Hg and H^T f — the forward process (forward) and the back projection (back) of computational imaging — and the two computations are similar.
Computing the forward process Hg mainly consists of convolving the volume image g with the corresponding point spread function (PSF). The point spread function of a light field camera depends on spatial position: only points in the same layer and at the same position relative to their corresponding microlens share the same point spread function. As shown in fig. 2, if there are N × N pixels behind each microlens and the 3D volume to be reconstructed has S layers (2a in fig. 2), each layer must be decomposed into N × N sub-layers, which are then convolved with the corresponding point spread functions; one forward computation of Hg therefore requires N × N × S 2D convolutions, after which the convolved images are accumulated to obtain the light field image. This approach is very computationally intensive. The existing method computes each convolution with the FFT (fast Fourier transform), but because of the large number of convolutions the computation still takes a long time.
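The prior-art computation just described can be sketched as follows (an illustrative sketch with made-up sizes; `forward_naive`, its argument layout, and the nested PSF structure are this example's assumptions, not the patent's code):

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_naive(volume, psf):
    """Forward projection f = Hg as an explicit sum of FFT-based 2D
    convolutions.  volume: (M2, M2, S) array; psf[i][j][s]: 2D PSF for
    pixel offset (i, j) behind a microlens and depth layer s."""
    M2 = volume.shape[0]
    S = volume.shape[2]
    N = len(psf)                     # pixels per microlens, one direction
    lf = np.zeros((M2, M2))
    for s in range(S):
        for i in range(N):
            for j in range(N):
                # sub-layer: keep only voxels at offset (i, j) behind
                # each microlens; every other pixel stays zero
                sub = np.zeros((M2, M2))
                sub[i::N, j::N] = volume[i::N, j::N, s]
                lf += fftconvolve(sub, psf[i][j][s], mode="same")
    return lf
```

Even in a toy case with N = 2 and S = 1 the loop runs N × N × S = 4 full-size FFT convolutions; this per-sub-layer cost is what the invention's rearrangement removes.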
Disclosure of Invention
It is an object of the present invention to provide a reconstruction method for microscopic light field volume imaging and a forward process and a back projection acquisition method thereof that overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
In order to achieve the above object, the present invention provides a forward process acquisition method for use in a reconstruction method of microscopic light field volume imaging, the method comprising:
step 1, decomposing a 3D volume image (M2, M2, S), of size M2 × M2 and with S layers, into N × N × S layer images of size M2 × M2, wherein each layer image consists of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinates (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N;
step 2, extracting the pixel at coordinates (i, j) from each layer image and rearranging these pixels into a single-channel image, wherein each single-channel image has size (M2/N, M2/N); there are N × N × S single-channel images in total, and all of them are stacked into a multi-channel image of size (M2/N, M2/N, N × N × S);
step 3, rearranging the 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S);
step 4, taking the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as its weights, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image of size (M2/N, M2/N, N × N);
and step 5, rearranging the multi-channel image output in step 4 into a light field image.
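For sizes divisible by N, steps 1 and 2 above reduce to a single reshape-and-transpose; a minimal numpy sketch (the function name and the channel ordering are this example's assumptions):

```python
import numpy as np

def volume_to_multichannel(volume, N):
    """Steps 1-2: a (M2, M2, S) volume becomes a (M2/N, M2/N, N*N*S)
    multi-channel image whose channel (i, j, s) holds the pixels at
    offset (i, j) behind every microlens in layer s (0-based here)."""
    M2, _, S = volume.shape
    assert M2 % N == 0, "M2 must be divisible by N"
    m = M2 // N
    # (m, N, m, N, S) -> (m, m, N, N, S) -> (m, m, N*N*S)
    v = volume.reshape(m, N, m, N, S).transpose(0, 2, 1, 3, 4)
    return v.reshape(m, m, N * N * S)
```

Only the non-zero pixels survive the rearrangement, which is why each channel is a factor of N smaller than the layer image in each direction.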
Further, step 3 specifically includes:
step 31, judging whether M1 in the 5D point spread function (M1, M1, N, N, S) is evenly divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S);
and step 33, symmetrically padding zeros around the 2D point spread functions so that M1 becomes divisible by N, and returning to step 31.
Further, selecting a 2D point spread function contained in the 5D point spread function in step 32 includes:
step 321, shifting the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function;
step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, the extracted pixels forming one small convolution kernel; the preset pixel used for the first extraction has coordinates (1, 1) on the aligned 2D point spread function, and all pixels of the aligned 2D point spread function whose abscissa and ordinate are both at most N are traversed in turn as starting points, yielding N × N small convolution kernels;
and step 323, stacking the N × N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N × N).
Further, in step 321, the first preset value is set to i - (N+1)/2 and the second preset value to j - (N+1)/2.
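Steps 31 to 33 and 322 to 323 for one aligned 2D point spread function can be sketched as below (0-based indexing; the function name and the symmetric-padding split are this example's assumptions):

```python
import numpy as np

def split_psf(psf2d, N):
    """Decompose an aligned M1 x M1 2D PSF into its N*N interleaved small
    kernels, stacked into a (M1/N, M1/N, N*N) array.  If N does not divide
    M1, zeros are first padded symmetrically around the PSF (step 33)."""
    M1 = psf2d.shape[0]
    if M1 % N:                                  # step 33: pad to a multiple of N
        pad = N - M1 % N
        lo, hi = pad // 2, pad - pad // 2
        psf2d = np.pad(psf2d, ((lo, hi), (lo, hi)))
        M1 += pad
    small = np.empty((M1 // N, M1 // N, N * N))
    for di in range(N):                         # step 322: one pixel every N
        for dj in range(N):                     # pixels, for all N*N offsets
            small[:, :, di * N + dj] = psf2d[di::N, dj::N]
    return small                                # step 323: stacked 3D vector
```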
The invention also provides a back projection acquisition method for use in the reconstruction method of microscopic light field volume imaging, the method comprising:
step 1, decomposing a light field image (M2, M2, 1), of size M2 × M2 and with a single layer, into N × N layer images of size M2 × M2, wherein each layer image consists of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinates (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N;
step 2, extracting the pixel at coordinates (i, j) from each layer image and rearranging these pixels into a single-channel image, wherein each single-channel image has size (M2/N, M2/N); there are N × N single-channel images in total, and all of them are stacked into a multi-channel image of size (M2/N, M2/N, N × N);
step 3, rearranging the 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S);
step 4, rotating the 4D convolution kernel of step 3 and swapping its dimensions, yielding a kernel of size (M1/N, M1/N, N × N × S, N × N);
step 5, taking the multi-channel image obtained in step 2 as the input of a convolutional neural network and the rotated and swapped 4D convolution kernel obtained in step 4 as its weights, convolving the multi-channel image with that kernel, and outputting a multi-channel image of size (M2/N, M2/N, N × N × S);
and step 6, rearranging the multi-channel image output in step 5 to obtain a 3D volume image.
Further, step 3 specifically includes:
step 31, judging whether M1 in the 5D point spread function (M1, M1, N, N, S) is evenly divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S);
and step 33, symmetrically padding zeros around the 2D point spread functions so that M1 becomes divisible by N, and returning to step 31.
Further, selecting a 2D point spread function contained in the 5D point spread function in step 32 includes:
step 321, shifting the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function;
step 322, starting from a preset pixel on the aligned 2D point spread function obtained in step 321, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, the extracted pixels forming one small convolution kernel; the preset pixel used for the first extraction has coordinates (1, 1) on the aligned 2D point spread function, and all pixels of the aligned 2D point spread function whose abscissa and ordinate are both at most N are traversed in turn as starting points, yielding N × N small convolution kernels;
and step 323, stacking the N × N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N × N).
Further, in step 321, the first preset value is set to i - (N+1)/2 and the second preset value to j - (N+1)/2.
The invention also provides a reconstruction method for microscopic light field volume imaging, which comprises the above methods.
Since the invention first decomposes the 3D volume image (M2, M2, S) into N × N × S layer images, or the light field image (M2, M2, 1) into N × N layer images, and extracts only the pixels at the non-zero positions without computing the pixels at the zero positions, each single-channel image has size (M2/N, M2/N). Furthermore, the 2D point spread functions are rearranged into N × N small convolution kernels, the convolutions are computed separately, and the results are rearranged back, instead of convolving directly with the 2D point spread functions as in the prior art. This greatly increases the speed of the convolution computation and hence the reconstruction speed of the image.
Drawings
Fig. 1 shows schematic diagrams of the imaging principle of a typical prior-art microscope and of a light field microscope, together with a captured light field picture;
FIG. 2 is a schematic diagram illustrating the computation of the forward imaging process of a light field camera in the prior art;
FIG. 3 is a schematic diagram illustrating a reconstruction method for microscopic light field imaging according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an arrangement of 5D point spread functions into 4D convolution kernels according to an embodiment of the present invention;
fig. 5 is a flowchart of the forward process acquisition method in the reconstruction method of microscopic light field volume imaging according to an embodiment of the present invention;
fig. 6 is a flowchart of a back projection acquisition method in the reconstruction method of the microscopic light field volume imaging according to the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
As shown in fig. 2, fig. 3 and fig. 5, the forward process acquisition method in the reconstruction method of microscopic light field volume imaging provided in this embodiment obtains the light field image f from a known 3D volume image g; the method comprises:
Step 1: the 3D volume image (M2, M2, S), of size M2 × M2 with S layers, is decomposed into N × N × S layer images, each of size M2 × M2. Each layer image consists of a plurality of 2D sub-images c of size N × N, and the pixel value of each 2D sub-image c is 0 everywhere except at the pixel d at coordinates (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N.
Step 2: the pixel d at coordinates (i, j) is extracted from each layer image and rearranged into a single-channel image h of size (M2/N, M2/N). There are N × N × S such channels, and all single-channel images are stacked into a multi-channel image of size (M2/N, M2/N, N × N × S).
With reference to fig. 3, the upper left of fig. 3 shows a layer image obtained in step 1, of size M2 × M2, and the lower left shows a single-channel image h. This embodiment extracts only the pixels at the non-zero coordinates (i, j) and does not compute the pixels at the zero positions, so each single-channel image has size (M2/N, M2/N). This provides a convenient basis for fast reconstruction.
Step 3: the 5D point spread function of the light field microscope is rearranged into a 4D convolution kernel. Specifically, as shown in fig. 2, the 5D point spread function of a light field microscope is a 5D vector of dimension (M1, M1, N, N, S), where N is the number of discrete pixels in the row or column direction behind each microlens, S is the number of layers of the 3D volume image, and M1 is the maximum number of pixels over which the point spread function can spread in the row or column direction. The resulting 4D convolution kernel has size (M1/N, M1/N, N × N, N × N × S).
Step 4: the multi-channel image obtained in step 2 is taken as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as its weights; the multi-channel image is convolved with the 4D convolution kernel, and a multi-channel image of size (M2/N, M2/N, N × N) is output. When using the convolutional neural network, the edge padding is set to (M1/N - 1)/2 and the convolution stride to 1; the convolutional neural network can be implemented with existing software such as Matlab, TensorFlow, or PyTorch.
Step 5: the multi-channel image output in step 4 is rearranged into a light field image; the rearrangement can be carried out with existing methods.
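Step 4 maps directly onto a stock 2D convolution layer. A sketch using PyTorch (one of the packages named above) with channels-first tensors; the tensor layout and function name are this example's assumptions, and since `conv2d` computes cross-correlation, the kernels are flipped here to obtain a true convolution:

```python
import torch
import torch.nn.functional as F

def lf_forward(mc_image, kernels):
    """Step 4: convolve the multi-channel image with the 4D kernel.
    mc_image: (N*N*S, m, m) tensor from step 2, with m = M2/N;
    kernels:  (N*N, N*N*S, k, k) tensor from step 3, with k = M1/N.
    Returns the (N*N, m, m) multi-channel output of step 4."""
    w = torch.flip(kernels, dims=(-2, -1))    # cross-correlation -> convolution
    pad = (kernels.shape[-1] - 1) // 2        # edge padding (M1/N - 1)/2
    out = F.conv2d(mc_image.unsqueeze(0), w, padding=pad, stride=1)
    return out.squeeze(0)                     # step 5 then rearranges this
```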
In this embodiment, the 3D volume image is first decomposed into N × N × S layer images and only the pixels at the non-zero positions are extracted, without computing the pixels at the zero positions, so each single-channel image has size (M2/N, M2/N). The 2D point spread functions are rearranged into N × N small convolution kernels, the convolutions are computed separately, and the results are rearranged back, instead of convolving directly with the 2D point spread functions as in the prior art; this greatly increases the speed of the convolution computation and hence the reconstruction speed.
In one embodiment, as shown in fig. 3, step 3 is implemented by the following steps 31 to 33:
Step 31: judge whether M1 in the 5D point spread function (M1, M1, N, N, S) is evenly divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32: first, a 2D point spread function e contained in the 5D point spread function is selected and converted into a 3D vector; then the next 2D point spread function e is selected, until all 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, all the converted 3D vectors are stacked into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S).
Step 33: zeros are symmetrically padded around each 2D point spread function e so that M1 becomes divisible by N, and the method returns to step 31.
Of course, step 3 can also be implemented by methods provided in the prior art, which are not listed here.
In one embodiment, as shown in fig. 3 and fig. 4, selecting the 2D point spread function e contained in the 5D point spread function in step 32 includes:
in step 321, the first preset value is moved along the row direction of the 2D point spread function e, and the second preset value is moved along the column direction, so as to obtain the aligned 2D point spread function. The first preset value is set to be i- (N +1)/2, and the second preset value is set to be j- (N + 1)/2.
Step 322: starting from a preset pixel on the aligned 2D point spread function obtained in step 321, one pixel is extracted every N pixels along the row direction and the column direction of the aligned 2D point spread function, and the extracted pixels form one small convolution kernel f. The preset pixel used for the first extraction has coordinates (1, 1) on the aligned 2D point spread function, and all pixels whose abscissa and ordinate are both at most N are traversed in turn as starting points, yielding N × N small convolution kernels f.
With reference to fig. 3, the upper right of fig. 3 shows the aligned 2D point spread function obtained in step 321, of size M1 × M1; the lower right shows the N × N small convolution kernels f, each of size (M1/N, M1/N).
Specifically, for the first extraction, the pixel at coordinates (1, 1) on the aligned 2D point spread function is taken as the starting point, and one pixel is extracted every N pixels along the row and column directions; for the second extraction, the pixel at coordinates (1, 2) is the starting point; for the third, the pixel at coordinates (1, 3); and so on, until the pixel at coordinates (N, N) is used as the starting point. In each pass the extracted pixels are combined into one small convolution kernel f, finally yielding N × N small convolution kernels f.
Step 323: the N × N small convolution kernels f obtained in step 322 are stacked into a 3D vector of dimension (M1/N, M1/N, N × N).
As shown in fig. 6, the present invention also provides a back projection acquisition method in the reconstruction method of microscopic light field volume imaging, which obtains the 3D volume image g from a known light field image f; the method comprises:
Step 1: the light field image (M2, M2, 1), of size M2 × M2 with a single layer, is decomposed into N × N layer images of size M2 × M2. Each layer image consists of a plurality of 2D sub-images of size N × N whose pixel values are zero everywhere except at coordinates (i, j), with 1 ≤ i ≤ N and 1 ≤ j ≤ N.
Step 2: the pixel at coordinates (i, j) is extracted from each layer image and rearranged into a single-channel image of size (M2/N, M2/N); there are N × N single-channel images in total, and all of them are stacked into a multi-channel image of size (M2/N, M2/N, N × N).
Step 3: the 5D point spread function (M1, M1, N, N, S) of the light field microscope is rearranged into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S). The 5D point spread function (M1, M1, N, N, S) has been introduced above and is not described again here.
Step 4: the 4D convolution kernel of step 3 is rotated and its dimensions are swapped, where "rotate and swap dimensions" means rotating the 2D image formed by the first two dimensions of the 4D convolution kernel by 180 degrees and then swapping the third and fourth dimensions, yielding a kernel of size (M1/N, M1/N, N × N × S, N × N).
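In the (out_channels, in_channels, k, k) weight layout used by packages such as PyTorch, the rotation and swap of step 4 might be sketched as follows (the layout is this example's assumption; the patent states the swap for its own dimension ordering):

```python
import torch

def to_backprojection_weights(kernels):
    """Rotate each small 2D kernel 180 degrees (flip the last two
    dimensions) and swap the channel dimensions, so forward weights of
    shape (N*N, N*N*S, k, k) become back-projection weights of shape
    (N*N*S, N*N, k, k) -- the transposed operator H^T."""
    return torch.flip(kernels, dims=(-2, -1)).transpose(0, 1)
```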
Step 5: the multi-channel image obtained in step 2 is taken as the input of a convolutional neural network and the rotated and swapped 4D convolution kernel obtained in step 4 as its weights; the multi-channel image is convolved with that kernel, and a multi-channel image of size (M2/N, M2/N, N × N × S) is output.
When using the convolutional neural network, the edge padding is set to (M1/N - 1)/2 and the convolution stride to 1. The convolutional neural network can be implemented with existing software such as Matlab, TensorFlow, or PyTorch.
In this step, the single-channel images obtained in step 2 are converted into the input of a convolutional neural network (CNN) and the 4D convolution kernel obtained in steps 3 and 4 is used as the weights of the network, so an existing neural network toolkit can be used for the computation; the output of the network is then mapped back to the original image space by the inverse rearrangement to obtain the final output.
Step 6: the multi-channel image output in step 5 is rearranged to obtain the 3D volume image; the rearrangement can be carried out with existing methods.
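The rearrangement of step 6 is the inverse of the pixel extraction of step 2; a numpy sketch (function name and channel ordering are this example's assumptions):

```python
import numpy as np

def multichannel_to_volume(mc, N, S):
    """Step 6: a (M2/N, M2/N, N*N*S) multi-channel image is rearranged
    back into the (M2, M2, S) volume; channel (i, j, s) is written to the
    pixels at offset (i, j) behind every microlens in layer s (0-based)."""
    m = mc.shape[0]
    # (m, m, N, N, S) -> (m, N, m, N, S) -> (m*N, m*N, S)
    v = mc.reshape(m, m, N, N, S).transpose(0, 2, 1, 3, 4)
    return v.reshape(m * N, m * N, S)
```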
This embodiment first decomposes the light field image into N × N layer images and extracts only the pixels at the non-zero positions, without computing the pixels at the zero positions, so each single-channel image has size (M2/N, M2/N). The 2D point spread functions are rearranged into N × N small convolution kernels, the convolutions are computed separately, and the results are rearranged back, instead of convolving directly with the 2D point spread functions as in the prior art; this greatly increases the speed of the convolution computation and hence the reconstruction speed.
Note that, similarly to the above embodiment, the layer images of step 1 (each of size M2 × M2, N × N × S in number) can instead each be shifted by -(i-1) along the row direction and -(j-1) along the column direction to align the layer images. The procedure of step 2 is then similar to the previous example: a plurality of single-channel images of size (M2, M2) are obtained and stacked into a multi-channel image of size (M2, M2, N × N × S). In step 3, each 2D point spread function contained in the 5D point spread function is shifted by i-1 along the row direction and by j-1 along the column direction, yielding a 4D convolution kernel of size (M1, M1, N × N × S, 1). The operation of step 4 is the same as in the above embodiment, except that the edge padding of the neural network is set to (M1 - 1)/2 and the convolution stride to N. Step 5 then outputs the light field image of shape (M2, M2, 1).
In one embodiment, as shown in fig. 3, step 3 is implemented by the following steps 31 to 33:
Step 31: judge whether M1 in the 5D point spread function (M1, M1, N, N, S) is evenly divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32: first, a 2D point spread function e contained in the 5D point spread function is selected and converted into a 3D vector; then the next 2D point spread function e is selected, until all 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, all the converted 3D vectors are stacked into a 4D convolution kernel of size (M1/N, M1/N, N × N, N × N × S).
Step 33: zeros are symmetrically padded around each 2D point spread function e so that M1 becomes divisible by N, and the method returns to step 31.
Of course, step 3 can also be implemented by methods provided in the prior art, which are not listed here.
In one embodiment, as shown in fig. 3 and fig. 4, selecting the 2D point spread function e contained in the 5D point spread function in step 32 includes:
in step 321, the first preset value is moved along the row direction of the 2D point spread function e, and the second preset value is moved along the column direction, so as to obtain the aligned 2D point spread function. The first preset value is set to be i- (N +1)/2, i is larger than or equal to 1 and is smaller than or equal to N, the second preset value is set to be j- (N +1)/2, and j is larger than or equal to 1 and is smaller than or equal to N.
Step 322: starting from a preset pixel on the aligned 2D point spread function obtained in step 321, one pixel is extracted every N pixels along the row direction and the column direction of the aligned 2D point spread function, and the extracted pixels form one small convolution kernel f. The preset pixel used for the first extraction has coordinates (1, 1) on the aligned 2D point spread function, and all pixels whose abscissa and ordinate are both at most N are traversed in turn as starting points, yielding N × N small convolution kernels f.
With reference to fig. 3, the upper right of fig. 3 shows the aligned 2D point spread function obtained in step 321, of size M1 × M1; the lower right shows the N × N small convolution kernels f, each of size (M1/N, M1/N).
Specifically, for the first extraction, the pixel at coordinates (1, 1) on the aligned 2D point spread function is taken as the starting point, and one pixel is extracted every N pixels along the row and column directions; for the second extraction, the pixel at coordinates (1, 2) is the starting point; for the third, the pixel at coordinates (1, 3); and so on, until the pixel at coordinates (N, N) is used as the starting point. In each pass the extracted pixels are combined into one small convolution kernel f, finally yielding N × N small convolution kernels f.
Step 323, stacking the N×N small convolution kernels f obtained in step 322 into the 3D vector, wherein the dimension of the 3D vector is (M1/N, M1/N, N×N).
The size of each layer image obtained by decomposing the 3D volume image is M2×M2, and the size of each 2D point spread function contained in the 5D point spread function of the light field microscope corresponding to the layer image is M1×M1. Calculating the convolution directly with the existing method has complexity O(M1²M2²), and with the existing FFT the complexity can be reduced to O((M1+M2)²log(M1+M2)).
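For reference, the FFT baseline mentioned above is available off the shelf, e.g. in scipy; a small sketch with illustrative sizes (not the patent's own implementation, which used Matlab):

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
img = rng.random((64, 64))      # a layer image (M2 x M2)
psf = rng.random((15, 15))      # a 2D point spread function (M1 x M1)

y_direct = convolve2d(img, psf, mode='full')   # O(M1^2 M2^2)
y_fft = fftconvolve(img, psf, mode='full')     # O((M1+M2)^2 log(M1+M2))
fft_err = np.abs(y_direct - y_fft).max()       # both compute the same result
```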
Observing the characteristics of this convolution, one finds that the layer image being convolved is 0 except at specific lattice positions, so the invention extracts those pixels from the image and rearranges the point spread function accordingly: the point spread function of size M1×M1 is rearranged into N×N small convolution kernels, each of size (M1/N)×(M1/N).
Thus the complexity of directly calculating the convolution is reduced to O(M1²M2²/N²).
Meanwhile, with the development of deep learning, many software packages that compute convolutions efficiently have become available; combined with such packages, the convolution can be calculated efficiently. Experiments show that the method greatly improves the calculation speed compared with the original FFT-based algorithm.
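The phase-splitting argument above can be checked numerically: for a layer image that is zero except at positions congruent to (i, j) mod N, every output phase of the full convolution is reproduced by one small convolution with the matching (M1/N)×(M1/N) kernel. A sketch with illustrative sizes, using mode='full' to sidestep the centering that the alignment of step 321 handles:

```python
import numpy as np
from scipy.signal import convolve2d

N, M1, M2 = 3, 9, 12
i, j = 1, 2                          # lattice phase of the layer image
rng = np.random.default_rng(0)
h = rng.random((M1, M1))             # full 2D point spread function
xs = rng.random((M2 // N, M2 // N))  # extracted single-channel image

x = np.zeros((M2, M2))               # layer image, sparse on the (i, j) lattice
x[i::N, j::N] = xs

y_full = convolve2d(x, h, mode='full')   # direct O(M1^2 M2^2) convolution

max_err = 0.0
for u in range(N):
    for v in range(N):
        # small convolution for output phase (u, v); summed over all
        # N*N phases the total cost is O(M1^2 M2^2 / N^2)
        y_small = convolve2d(xs, h[u::N, v::N], mode='full')
        ref = y_full[u + i::N, v + j::N][:y_small.shape[0], :y_small.shape[1]]
        max_err = max(max_err, np.abs(y_small - ref).max())
```

The agreement of `y_small` with the sampled full result is what lets the rearranged kernels be fed to any off-the-shelf convolution package.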
Table 1 below compares the reconstruction method provided by the present invention with the existing method in terms of time consumption and acceleration factor when reconstructing images:
TABLE 1
Image size (pixels)       600×600   900×900   1200×1200   1500×1500   1800×1800
FFT time (seconds)        2135.6    2149.0    2185.5      2228.6      2825.9
Present method (seconds)  35.3      90.6      122.0       163.0       227.2
Acceleration factor       60.0×     23.7×     17.9×       13.7×       12.4×
As shown in table 1 above, comparing the time consumption of the reconstruction method provided by the present invention with that of the existing method, both based on the algorithm of Prevedel, R., et al., Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy, Nature Methods, 2014, 11(7): p. 727-730, the reconstruction method provided by the present invention achieves a 12- to 60-fold speed-up. For example, when the size of the region of interest is 900×900, the reconstruction time for 1000 images can be reduced from 3 weeks to 1 day, greatly reducing the time cost. In the experiments, N = 15, S = 101, and 16 iterations were performed. Both the conventional algorithm and the improved algorithm were implemented in Matlab 2019a with GPU acceleration, and the experiments were carried out on a workstation equipped with a GeForce GTX 1080Ti GPU, dual CPUs (Intel Xeon 2.10 GHz) and 64 GB of memory.
The invention also provides a reconstruction method for microscopic light field volume imaging, which comprises the methods in the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Those of ordinary skill in the art will understand that modifications can be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A positive process acquisition method in a reconstruction method of microscopic light field volume imaging is characterized by comprising the following steps:
step 1, decomposing a 3D volume image (M2, M2, S) with size M2×M2 and number of layers S into N×N×S layer images of size M2×M2, wherein each layer image is composed of a plurality of 2D sub-images of size N×N, the pixel value of each 2D sub-image is zero except at the coordinates (i, j), 1 ≤ i ≤ N, and 1 ≤ j ≤ N;
step 2, extracting a pixel at the coordinate (i, j) on each layer image to rearrange the pixels into a single-channel image; wherein the single channel image size is
Figure FDA0002302588570000011
A total of NxNxS single-channel images, all of which are stacked into a multi-channel image
step 3, rearranging the 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel of dimension (M1/N, M1/N, N×N, N×N×S);
step 4, taking the multi-channel image obtained in step 2 as the network input of a convolutional neural network, taking the 4D convolution kernel obtained in step 3 as the weight of the convolutional neural network, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image;
and step 5, rearranging the multi-channel image output in step 4 into a light field image.
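As a concrete illustration of steps 1-2 of claim 1, the decomposition plus pixel extraction amounts to splitting each layer into its N×N lattice phases; a minimal numpy sketch (the channel ordering (s, i, j) and all names are illustrative assumptions, not fixed by the claim):

```python
import numpy as np

N, S, M2 = 3, 2, 12
rng = np.random.default_rng(0)
vol = rng.random((M2, M2, S))        # 3D volume image (M2, M2, S)

# steps 1-2: for each layer s and lattice phase (i, j), the pixels at
# positions congruent to (i, j) mod N form one single-channel image of
# size (M2/N) x (M2/N); stack all N*N*S of them into a multi-channel image
channels = [vol[i::N, j::N, s]
            for s in range(S) for i in range(N) for j in range(N)]
multi = np.stack(channels, axis=-1)

# the rearrangement is lossless: scattering the channels back restores vol
recon = np.zeros_like(vol)
c = 0
for s in range(S):
    for i in range(N):
        for j in range(N):
            recon[i::N, j::N, s] = multi[..., c]
            c += 1
```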
2. The positive process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 1, wherein the step 3 specifically comprises:
step 31, judging whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel;
and step 33, symmetrically padding zeros around the 2D point spread functions, and returning to step 31.
3. The method for acquiring positive process in the reconstruction method of microscopic light field volume imaging according to claim 2, wherein the step 32 of selecting a 2D point spread function included in the 5D point spread function comprises:
step 321, moving the first preset value along the row direction and the second preset value along the column direction of the 2D point spread function to obtain an aligned 2D point spread function;
step 322, taking a preset pixel point on the aligned 2D point spread function obtained in step 321 as a starting point, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, and forming a small convolution kernel from the extracted pixels, wherein the coordinates of the preset pixel point used for the first extraction on the aligned 2D point spread function are (1, 1), and all pixel points on the aligned 2D point spread function whose abscissa is not greater than N and whose ordinate is not greater than N are traversed in turn, so that N×N small convolution kernels are obtained;
step 323, stacking the N×N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N×N).
4. The method for acquiring the positive process in the reconstruction method for microscopic light field volume imaging according to claim 3, wherein in step 321, the first preset value is set to i - (N + 1)/2, and the second preset value is set to j - (N + 1)/2.
5. A back projection acquisition method in a reconstruction method of microscopic light field volume imaging is characterized by comprising the following steps:
step 1, decomposing a light field image (M2, M2, 1) with size M2×M2 and number of layers 1 into N×N layer images of size M2×M2, wherein each layer image is composed of a plurality of 2D sub-images of size N×N, the pixel value of each 2D sub-image is zero except at the coordinates (i, j), 1 ≤ i ≤ N, and 1 ≤ j ≤ N;
step 2, extracting the pixel at the coordinates (i, j) of each layer image and rearranging the extracted pixels into a single-channel image, wherein the size of the single-channel image is (M2/N)×(M2/N); there are N×N single-channel images in total, and all the single-channel images are stacked into a multi-channel image;
step 3, rearranging the 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel;
step 4, rotating the 4D convolution kernel obtained in step 3 and exchanging its dimensions;
step 5, the multichannel image obtained in the step 2 is processed
Figure FDA0002302588570000036
As the network input of the convolutional neural network, the 4D convolution kernel obtained in the step 4 is used
Figure FDA0002302588570000037
As weights for the convolutional neural network, the 4D convolutional kernel is employed
Figure FDA0002302588570000038
For the multi-channel image
Figure FDA0002302588570000039
Performing convolution and outputting multi-channel image
Figure FDA00023025885700000310
and step 6, rearranging the multi-channel image output in step 5 to obtain a 3D volume image.
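The kernel rotation in step 4 reflects a standard identity: back projection is the adjoint (transpose) of the forward convolution, and correlating with a kernel equals convolving with the kernel rotated by 180 degrees. A small numerical check with illustrative sizes (not the patent's own code):

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
h = rng.random((5, 5))     # a small convolution kernel
x = rng.random((8, 8))     # forward-process input
y = rng.random((12, 12))   # back-projection input (8 + 5 - 1 = 12)

# correlation == convolution with the 180-degree-rotated kernel
rot_err = np.abs(correlate2d(y, h, mode='valid')
                 - convolve2d(y, h[::-1, ::-1], mode='valid')).max()

# adjoint identity: <conv(x, h), y> == <x, corr(y, h)>
lhs = (convolve2d(x, h, mode='full') * y).sum()
rhs = (x * correlate2d(y, h, mode='valid')).sum()
adj_err = abs(lhs - rhs)
```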
6. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 5, wherein the step 3 specifically comprises:
step 31, judging whether M1 in the 5D point spread function (M1, M1, N, N, S) is divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel;
and step 33, symmetrically padding zeros around the 2D point spread functions, and returning to step 31.
7. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 6, wherein the step 32 of selecting one of the 2D point spread functions included in the 5D point spread function comprises:
step 321, moving the first preset value along the row direction and the second preset value along the column direction of the 2D point spread function to obtain an aligned 2D point spread function;
step 322, taking a preset pixel point on the aligned 2D point spread function obtained in step 321 as a starting point, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, and forming a small convolution kernel from the extracted pixels, wherein the coordinates of the preset pixel point used for the first extraction on the aligned 2D point spread function are (1, 1), and all pixel points on the aligned 2D point spread function whose abscissa is not greater than N and whose ordinate is not greater than N are traversed in turn, so that N×N small convolution kernels are obtained;
step 323, stacking the N×N small convolution kernels obtained in step 322 into a 3D vector of dimension (M1/N, M1/N, N×N).
8. The back projection acquisition method in the reconstruction method for microscopic light field volume imaging according to claim 7, wherein in step 321, the first preset value is set to i - (N + 1)/2, and the second preset value is set to j - (N + 1)/2.
9. A reconstruction method for microscopic light field volume imaging comprising the method of any one of claims 1 to 8.
CN201911227287.0A 2019-11-22 2019-12-04 Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof Active CN110989154B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/105190 WO2021098276A1 (en) 2019-11-22 2020-07-28 Micro-light field volume imaging reconstruction method, positive process obtaining method therein and back projection obtaining method therein
PCT/CN2020/129083 WO2021098645A1 (en) 2019-11-22 2020-11-16 Reconstruction method for microscopic light field stereoscopic imaging, and forward and backward acquisition method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019111566792 2019-11-22
CN201911156679 2019-11-22

Publications (2)

Publication Number Publication Date
CN110989154A true CN110989154A (en) 2020-04-10
CN110989154B CN110989154B (en) 2020-07-31

Family

ID=70090028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911227287.0A Active CN110989154B (en) 2019-11-22 2019-12-04 Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof

Country Status (2)

Country Link
CN (1) CN110989154B (en)
WO (2) WO2021098276A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090276188A1 (en) * 2008-05-05 2009-11-05 California Institute Of Technology Quantitative differential interference contrast (dic) microscopy and photography based on wavefront sensors
CN106199941A (en) * 2016-08-30 2016-12-07 浙江大学 A kind of shift frequency light field microscope and three-dimensional super-resolution microcosmic display packing
CN106767534A (en) * 2016-12-30 2017-05-31 北京理工大学 Stereomicroscopy system and supporting 3 d shape high score reconstructing method based on FPM
CN110047430A (en) * 2019-04-26 2019-07-23 京东方科技集团股份有限公司 Light field data reconstructing method, light field data restructing device and light field display device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015157769A1 (en) * 2014-04-11 2015-10-15 The Regents Of The University Of Colorado, A Body Corporate Scanning imaging for encoded psf identification and light field imaging
CN108364342B (en) * 2017-01-26 2021-06-18 中国科学院脑科学与智能技术卓越创新中心 Light field microscopic system and three-dimensional information reconstruction method and device thereof
CN109615651B (en) * 2019-01-29 2022-05-20 清华大学 Three-dimensional microscopic imaging method and system based on light field microscopic system
CN110441271B (en) * 2019-07-15 2020-08-28 清华大学 Light field high-resolution deconvolution method and system based on convolutional neural network
CN110989154B (en) * 2019-11-22 2020-07-31 北京大学 Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAOSHUAI HUANG ET AL.: "Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy", Nature Biotechnology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021098645A1 (en) * 2019-11-22 2021-05-27 北京大学 Reconstruction method for microscopic light field stereoscopic imaging, and forward and backward acquisition method therefor
WO2021098276A1 (en) * 2019-11-22 2021-05-27 北京大学 Micro-light field volume imaging reconstruction method, positive process obtaining method therein and back projection obtaining method therein
CN113971722A (en) * 2021-12-23 2022-01-25 清华大学 Fourier domain optical field deconvolution method and device
CN113971722B (en) * 2021-12-23 2022-05-17 清华大学 Fourier domain optical field deconvolution method and device

Also Published As

Publication number Publication date
CN110989154B (en) 2020-07-31
WO2021098645A1 (en) 2021-05-27
WO2021098276A1 (en) 2021-05-27

Similar Documents

Publication Publication Date Title
Zhang et al. Residual non-local attention networks for image restoration
Luo et al. Deep constrained least squares for blind image super-resolution
Tsai et al. Banet: a blur-aware attention network for dynamic scene deblurring
Farrugia et al. Super resolution of light field images using linear subspace projection of patch-volumes
Belekos et al. Maximum a posteriori video super-resolution using a new multichannel image prior
Hardie A fast image super-resolution algorithm using an adaptive Wiener filter
Ignatov et al. Ntire 2019 challenge on image enhancement: Methods and results
CN110989154B (en) Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof
CN112070670B (en) Face super-resolution method and system of global-local separation attention mechanism
CN112750082A (en) Face super-resolution method and system based on fusion attention mechanism
CN109389667B (en) High-efficiency global illumination drawing method based on deep learning
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN113538243B (en) Super-resolution image reconstruction method based on multi-parallax attention module combination
Gerken et al. Equivariance versus augmentation for spherical images
Xie et al. Non-local nested residual attention network for stereo image super-resolution
Shi et al. (SARN) spatial-wise attention residual network for image super-resolution
CN115439702A (en) Weak noise image classification method based on frequency domain processing
CN108229194A (en) Method for Digital Image Scrambling based on multithreading model and multistage scramble
CN109416743B (en) Three-dimensional convolution device for identifying human actions
Wang et al. Osffnet: Omni-stage feature fusion network for lightweight image super-resolution
CN111524078A (en) Dense network-based microscopic image deblurring method
CN116109768A (en) Super-resolution imaging method and system for Fourier light field microscope
CN111951159B (en) Processing method for super-resolution of light field EPI image under strong noise condition
CN116128722A (en) Image super-resolution reconstruction method and system based on frequency domain-texture feature fusion
Zafeirouli et al. Efficient, lightweight, coordinate-based network for image super resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant