WO2021098645A1 - Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods - Google Patents

Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods

Info

Publication number
WO2021098645A1
Authority
WO
WIPO (PCT)
Prior art keywords
point spread function
image
light field
Prior art date
Application number
PCT/CN2020/129083
Other languages
English (en)
French (fr)
Inventor
毛珩
张光义
陈良怡
Original Assignee
北京大学 (Peking University)
Priority date
Filing date
Publication date
Application filed by 北京大学 (Peking University)
Publication of WO2021098645A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • G02B21/367Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison

Definitions

  • The invention relates to the technical field of light-field microscope image reconstruction, and in particular to a reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods.
  • The light-field microscope is a fast volume-imaging modality: by adding a microlens array at the image plane, a conventional fluorescence microscope can be turned into a light-field microscope (as shown in Figure 1). Conventional microscopes can only perform 2D imaging, whereas light-field microscopes can perform 3D imaging. The specific procedure is to capture a light-field image and then use it to reconstruct the 3D distribution of the object.
  • In general, the forward process of light-field imaging can be expressed as f = Hg, where f is the captured light-field image and g is the 3D volume image to be reconstructed; for brevity of description, both are represented as one-dimensional vectors. H is the system matrix, which relates the 3D volume image g to the light-field image f; the 3D volume image g is reconstructed from the light-field image f.
  • Different reconstruction algorithms differ in their details, but all of them need to compute Hg and Hᵀf, i.e., the forward and backward (back-projection) processes of imaging, and the two computations are similar.
  • Computing the forward process Hg mainly consists of convolving the volume image g with the corresponding point spread function (PSF). The point spread function of a light-field camera depends on spatial position: only points in the same layer and at the same position behind their corresponding microlens share the same point spread function.
  • As shown in Figure 2, if there are N×N pixels behind each microlens and the 3D volume to be reconstructed has S layers (2a in Figure 2), each layer must first be decomposed into N×N sub-layer images, and each sub-layer image is then convolved with its corresponding point spread function.
  • Computing the forward process Hg once therefore requires N×N×S 2D convolutions, and the light-field (LFM) image is obtained by accumulating the convolved images.
  • The amount of computation is very large; the existing method uses the FFT (Fast Fourier Transform) to compute these convolutions, but because there are so many of them the computation takes a long time.
  • The purpose of the present invention is to provide a reconstruction method for microscopic light-field volume imaging, together with its forward-process and back-projection acquisition methods, so as to overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
  • To this end, the present invention provides a forward-process acquisition method in a reconstruction method of microscopic light-field volume imaging, the method including:
  • Step 1: Decompose a 3D volume image (M₂, M₂, S), of lateral size M₂ and S layers, into N×N×S layer images of size M₂×M₂; each layer image is composed of multiple 2D sub-images of size N×N, and each 2D sub-image has pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
  • Step 2: Extract the pixel at coordinate (i, j) from each layer image and rearrange the extracted pixels into single-channel images; each single-channel image has size (M₂/N)×(M₂/N), there are N×N×S single-channel images in total, and all of them are stacked into a multi-channel image (M₂/N, M₂/N, N×N×S).
  • Step 3: Rearrange the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
  • Step 4: Take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network; convolve the multi-channel image with the 4D convolution kernel and output a multi-channel image (M₂/N, M₂/N, N×N).
  • Step 5: Rearrange the multi-channel image output by step 4 into a light-field image.
  • Step 3 specifically includes:
  • Step 31: Determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
  • Step 32: Traverse all the 2D point spread functions contained in the 5D point spread function, convert each of them into a 3D vector, and stack the 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
  • Step 33: Pad zeros symmetrically around each 2D point spread function and return to step 31.
  • Converting one 2D point spread function contained in the 5D point spread function in step 32 includes:
  • Step 321: Shift the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function.
  • Step 322: On the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions; the extracted pixels form a small convolution kernel. The preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels.
  • Step 323: Stack the N×N small convolution kernels obtained in step 322 into the 3D vector, of dimensions (M₁/N, M₁/N, N×N).
  • In step 321, the first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
  • The present invention also provides a back-projection acquisition method in the reconstruction method of microscopic light-field volume imaging, the method including:
  • Step 1: Decompose a light-field image of size (M₂, M₂, 1), with a single layer, into N×N layer images of size M₂×M₂; each layer image is composed of multiple 2D sub-images of size N×N, and each 2D sub-image has pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
  • Step 2: Extract the pixel at coordinate (i, j) from each layer image and rearrange the extracted pixels into single-channel images; each single-channel image has size (M₂/N)×(M₂/N), there are N×N single-channel images in total, and all of them are stacked into a multi-channel image (M₂/N, M₂/N, N×N).
  • Step 3: Rearrange the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
  • Step 4: Rotate and exchange the dimensions of the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N) of step 3.
  • Step 5: Take the multi-channel image (M₂/N, M₂/N, N×N) obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) obtained in step 4 as the weights of the convolutional neural network; convolve the multi-channel image (M₂/N, M₂/N, N×N) with the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) and output a multi-channel image (M₂/N, M₂/N, N×N×S).
  • Step 6: Rearrange the multi-channel image (M₂/N, M₂/N, N×N×S) output by step 5 to obtain a 3D volume image.
  • Step 3 specifically includes:
  • Step 31: Determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
  • Step 32: Traverse all the 2D point spread functions contained in the 5D point spread function, convert each of them into a 3D vector, and stack the 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
  • Step 33: Pad zeros symmetrically around each 2D point spread function and return to step 31.
  • Converting one 2D point spread function contained in the 5D point spread function in step 32 includes:
  • Step 321: Shift the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function.
  • Step 322: On the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions; the extracted pixels form a small convolution kernel. The preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels.
  • Step 323: Stack the N×N small convolution kernels obtained in step 322 into the 3D vector, of dimensions (M₁/N, M₁/N, N×N).
  • In step 321, the first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
  • The present invention also provides a reconstruction method for microscopic light-field volume imaging, which includes the methods described above.
  • Because the present invention first decomposes the 3D volume image (M₂, M₂, S) into N×N×S layer images, or decomposes the light-field image (M₂, M₂, 1) into N×N layer images, and extracts only the pixels at the non-zero positions without computing the pixels at the zero positions, each single-channel image has size (M₂/N)×(M₂/N); and because it rearranges each 2D point spread function into N×N small convolution kernels, convolves with them separately, and then applies the inverse rearrangement, instead of convolving directly with the 2D point spread function as in the prior art, the speed of the convolution computation is greatly increased, and with it the speed of image reconstruction.
  • FIG. 1 is a schematic diagram of the imaging principle of a typical prior-art microscope, a schematic diagram of the imaging principle of a light-field microscope, and a captured light-field image;
  • FIG. 2 is a schematic diagram of the computation of the forward imaging process of a light-field camera in the prior art;
  • FIG. 3 is a schematic diagram of the principle of the reconstruction method for microscopic light-field imaging provided by an embodiment of the present invention;
  • FIG. 4 is a flowchart of rearranging the 5D point spread function into a 4D convolution kernel according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of the forward-process acquisition method in the reconstruction method for microscopic light-field volume imaging provided by an embodiment of the present invention;
  • FIG. 6 is a flowchart of the back-projection acquisition method in the reconstruction method for microscopic light-field volume imaging provided by an embodiment of the present invention.
  • The forward-process acquisition method in the reconstruction method of microscopic light-field volume imaging obtains the light-field image f from a known 3D volume image g. The method includes:
  • Step 1: Given a 3D volume image (M₂, M₂, S) of lateral size M₂ and S layers, decompose it into N×N×S layer images, each of size M₂×M₂. Each layer image is composed of multiple 2D sub-images c of size N×N, and each 2D sub-image c has pixel value 0 everywhere except at the pixel d at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
  • Step 2: Extract the pixel d at coordinate (i, j) from each layer image and rearrange the extracted pixels into single-channel images h. Each single-channel image h has size (M₂/N)×(M₂/N); there are N×N×S channels, and all the single-channel images are stacked into a multi-channel image (M₂/N, M₂/N, N×N×S).
  • Referring to Figure 3: the upper left shows one layer image obtained in step 1, of size M₂×M₂, and the lower left shows a single-channel image h. This embodiment extracts only the pixels at the non-zero coordinates (i, j) and does not compute the pixels at the zero positions, so each single-channel image has size (M₂/N)×(M₂/N); this paves the way for fast reconstruction.
  • Step 3: Rearrange the 5D point spread function of the light-field microscope into a 4D convolution kernel. Specifically, as shown in Figure 2, the 5D point spread function of the light-field microscope is a 5D array of dimensions (M₁, M₁, N, N, S), where N is the number of discrete pixels behind each microlens in the row or column direction, S is the number of layers of the 3D volume image, and M₁ is the maximum number of pixels over which the point spread function can spread in the row or column direction. The resulting 4D convolution kernel is denoted (M₁/N, M₁/N, N×N×S, N×N).
  • Step 4: Take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network; convolve the multi-channel image with the 4D convolution kernel and output a multi-channel image, denoted (M₂/N, M₂/N, N×N). When using the convolutional neural network, the edge padding is set to (M₁/N − 1)/2 and the convolution stride is set to 1; the convolutional neural network can be implemented with existing software such as matlab, tensorflow or pytorch.
  • Step 5: Rearrange the multi-channel image output by step 4 into a light-field image; the "rearrangement" can be realized with existing methods.
  • This embodiment first decomposes the 3D volume image into N×N×S layer images and extracts only the pixels at the non-zero positions, without computing the pixels at the zero positions, so each single-channel image has size (M₂/N)×(M₂/N); it rearranges each 2D point spread function into N×N small convolution kernels, convolves with them separately, and then applies the inverse rearrangement, instead of convolving directly with the 2D point spread function as in the prior art. This greatly increases the speed of the convolution computation and thus the reconstruction speed.
  • In one embodiment, step 3 is implemented using the following steps 31 to 33:
  • Step 31: Determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
  • Step 32: First, select one 2D point spread function e contained in the 5D point spread function and convert it into a 3D vector; then select the next 2D point spread function e, until all the 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, stack all the resulting 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
  • Step 33: Pad zeros symmetrically around each 2D point spread function e and return to step 31.
  • Step 321: Shift the 2D point spread function e by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function. The first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
  • Step 322: On the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function; the extracted pixels form a small convolution kernel f. The preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels f.
  • Referring to Figure 3: the upper right shows one aligned 2D point spread function obtained in step 321, of size M₁×M₁; the lower right shows the N×N small convolution kernels f, each of size (M₁/N)×(M₁/N). Specifically, the first extraction starts from the pixel at coordinate (1, 1) on the aligned 2D point spread function and takes one pixel every N pixels along its row and column directions; the second extraction starts from the pixel at coordinate (1, 2); the third from the pixel at coordinate (1, 3); and so on, up to the extraction starting from the pixel at coordinate (N, N). Each extraction forms one small convolution kernel f, finally yielding N×N small convolution kernels f.
  • Step 323: Stack the N×N small convolution kernels f obtained in step 322 into the 3D vector, whose dimensions are (M₁/N, M₁/N, N×N).
  • The present invention also provides a back-projection acquisition method in the reconstruction method of microscopic light-field volume imaging. The method obtains the 3D volume image g from a known light-field image f, and includes:
  • Step 1: Decompose a light-field image (M₂, M₂, 1), of size M₂ and a single layer, into N×N layer images of size M₂×M₂; each layer image is composed of multiple 2D sub-images of size N×N, and each 2D sub-image has pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
  • Step 2: Extract the pixel at coordinate (i, j) from each layer image and rearrange the extracted pixels into single-channel images; each single-channel image has size (M₂/N)×(M₂/N), there are N×N single-channel images in total, and all of them are stacked into a multi-channel image (M₂/N, M₂/N, N×N).
  • Step 3: Rearrange the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N); the 5D point spread function (M₁, M₁, N, N, S) has been introduced above and is not repeated here.
  • Step 4: Rotate and exchange the dimensions of the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N) of step 3. "Rotating and exchanging dimensions" means rotating the 2D images formed by the first two dimensions of the 4D convolution kernel by 180 degrees and then exchanging the third and fourth dimensions, which yields (M₁/N, M₁/N, N×N, N×N×S).
  • Step 5: Take the multi-channel image (M₂/N, M₂/N, N×N) obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) obtained in step 4 as the weights of the convolutional neural network; convolve the multi-channel image (M₂/N, M₂/N, N×N) with the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) and output a multi-channel image (M₂/N, M₂/N, N×N×S). When using the convolutional neural network, the edge padding is set to (M₁/N − 1)/2 and the convolution stride is set to 1; the convolutional neural network can be implemented with existing software such as matlab, tensorflow or pytorch.
  • This step converts the single-channel images obtained in step 2 into the input of a convolutional neural network (CNN) and uses the 4D convolution kernel obtained in step 4 as the weights of the network, so that existing neural-network toolkits can be used to perform the computation; after the network output is obtained, the inverse operation maps it back to the original image space to give the final output.
  • Step 6: Rearrange the multi-channel image (M₂/N, M₂/N, N×N×S) output by step 5 to obtain a 3D volume image; the "rearrangement" can be realized with existing methods.
  • This embodiment first decomposes the light-field image into N×N layer images and extracts only the pixels at the non-zero positions, without computing the pixels at the zero positions, so each single-channel image has size (M₂/N)×(M₂/N); it rearranges each 2D point spread function into N×N small convolution kernels, convolves with them separately, and then applies the inverse rearrangement, instead of convolving directly with the 2D point spread function as in the prior art. This greatly increases the speed of the convolution computation and thus the reconstruction speed.
  • Alternatively, each layer image of step 1 (each of size M₂×M₂, N×N×S images in total) can be shifted by −(i−1) along the row direction and by −(j−1) along the column direction to align the layer images. Step 2 then proceeds as in the above embodiment, producing multiple single-channel images of size (M₂, M₂), which are stacked into a multi-channel image (M₂, M₂, N×N×S).
  • In step 3, each 2D point spread function contained in the 5D point spread function is shifted by i−1 along the row direction and by j−1 along the column direction, yielding a 4D convolution kernel (M₁, M₁, N×N×S, 1).
  • The operation of step 4 is the same as in the above embodiment, with the edge padding of the neural network set to (M₁−1)/2 and the convolution stride set to N.
  • Step 5 outputs a light-field image of shape (M₂, M₂, 1).
  • In one embodiment, step 3 is implemented using the following steps 31 to 33:
  • Step 31: Determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
  • Step 32: First, select one 2D point spread function e contained in the 5D point spread function and convert it into a 3D vector; then select the next 2D point spread function e, until all the 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, stack all the resulting 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
  • Step 33: Pad zeros symmetrically around each 2D point spread function e and return to step 31.
  • Converting one 2D point spread function e contained in the 5D point spread function in step 32 includes:
  • Step 321: Shift the 2D point spread function e by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function. The first preset value is set to i−(N+1)/2, 1 ≤ i ≤ N, and the second preset value is set to j−(N+1)/2, 1 ≤ j ≤ N.
  • Step 322: On the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function; the extracted pixels form a small convolution kernel f. The preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels f.
  • Referring to Figure 3: the upper right shows one aligned 2D point spread function obtained in step 321, of size M₁×M₁; the lower right shows the N×N small convolution kernels f, each of size (M₁/N)×(M₁/N). The first extraction starts from the pixel at coordinate (1, 1) and takes one pixel every N pixels along the row and column directions; the second starts from (1, 2); the third from (1, 3); and so on, up to the extraction starting from (N, N). Each extraction forms one small convolution kernel f, finally yielding N×N small convolution kernels f.
  • Step 323: Stack the N×N small convolution kernels f obtained in step 322 into the 3D vector, whose dimensions are (M₁/N, M₁/N, N×N).
  • The layer images obtained by decomposing the 3D volume image have size M₂×M₂, and the 2D point spread functions contained in the corresponding 5D point spread function of the light-field microscope have size M₁×M₁. Computing such a convolution directly with existing methods has complexity O(M₁²M₂²); using the existing FFT reduces this to O(M₁²·log(M₁²)).
  • Observing the structure of this convolution, the image being convolved is zero everywhere except at specific positions; the present invention therefore extracts those positions and rearranges the point spread function of size M₁×M₁ into N×N small convolution kernels, each of size (M₁/N)×(M₁/N). In this way the complexity of computing the convolution directly drops to O(M₁²M₂²/N²), and the convolution can be computed efficiently. Experiments show that, compared with the original FFT-based algorithm, the present invention greatly increases the computation speed.
  • Table 1 below compares the reconstruction method provided by the present invention with the existing FFT-based method:
  • Image size (pixels): 600×600, 900×900, 1200×1200, 1500×1500, 1800×1800
  • FFT time (seconds): 2135.6, 2149.0, 2185.5, 2228.6, 2825.9
  • Present invention time (seconds): 35.3, 90.6, 122.0, 163.0, 227.2
  • Speed-up factor: 60.0×, 23.7×, 17.9×, 13.7×, 12.4×
  • The present invention also provides a reconstruction method for microscopic light-field volume imaging, which includes the methods described in the foregoing embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condenser (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods. The method includes: step 1, decomposing a 3D volume image into layer images, each layer image being composed of multiple 2D sub-images, each 2D sub-image having pixel value zero everywhere except at coordinate (i, j); step 2, extracting the pixel at coordinate (i, j) from each layer image and rearranging the extracted pixels into single-channel images, then stacking all the single-channel images into a multi-channel image; step 3, rearranging the 5D point spread function of the light-field microscope into a 4D convolution kernel; step 4, taking the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image; step 5, rearranging the multi-channel image output by step 4 into a light-field image. The method greatly increases the speed of the convolution computation and thus the speed of image reconstruction.

Description

Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods
Technical field
The present invention relates to the technical field of light-field microscope image reconstruction, and in particular to a reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods.
Background
A light-field microscope is a fast volume-imaging modality: by adding a microlens array at the image plane, a conventional fluorescence microscope can be turned into a light-field microscope (as shown in Figure 1). A conventional microscope can only perform 2D imaging, whereas a light-field microscope can perform 3D imaging: a light-field image is captured, and the 3D distribution of the object is then reconstructed from it.
Figure 1 shows, from left to right, a typical microscope, a light-field microscope, and a light-field image of a zebrafish captured with a light-field microscope. Note the characteristic circular pattern in the light-field image, which is the effect of the microlenses.
In general, the forward process of light-field imaging can be expressed as
f = Hg
where f is the captured light-field image and g is the 3D volume image to be reconstructed; for brevity of description, both are represented as one-dimensional vectors. H is the system matrix, which relates the 3D volume image g to the light-field image f; the 3D volume image g is reconstructed from the light-field image f. Different reconstruction algorithms differ in their details, but all of them need to compute Hg and Hᵀf, i.e., the forward and backward (back-projection) processes of imaging, and the two computations are similar.
Computing the forward process Hg mainly consists of convolving the volume image g with the corresponding point spread functions (PSFs). The point spread function of a light-field camera depends on spatial position: only points in the same layer and at the same position behind their corresponding microlens share the same point spread function. As shown in Figure 2, if there are N×N pixels behind each microlens and the 3D volume to be reconstructed has S layers (2a in Figure 2), each layer must first be decomposed into N×N sub-layer images, which are then convolved with the corresponding point spread functions; computing the forward process Hg once requires N×N×S 2D convolutions, and the light-field (LFM) image is obtained by accumulating the convolved images. The amount of computation is very large; the existing method uses the FFT (Fast Fourier Transform) to compute these convolutions, but because there are so many of them the process is time-consuming.
Summary of the invention
The purpose of the present invention is to provide a reconstruction method for microscopic light-field volume imaging, together with its forward-process and back-projection acquisition methods, so as to overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
To achieve the above object, the present invention provides a forward-process acquisition method in a reconstruction method of microscopic light-field volume imaging, the method comprising:
Step 1: decompose a 3D volume image (M₂, M₂, S), of lateral size M₂ and S layers, into N×N×S layer images of size M₂×M₂; each layer image is composed of multiple 2D sub-images of size N×N, and each 2D sub-image has pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
Step 2: extract the pixel at coordinate (i, j) from each layer image and rearrange the extracted pixels into a single-channel image; each single-channel image has size (M₂/N)×(M₂/N), and there are N×N×S single-channel images in total; stack all the single-channel images into a multi-channel image (M₂/N, M₂/N, N×N×S).
Step 3: rearrange the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
Step 4: take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network; convolve the multi-channel image with the 4D convolution kernel and output a multi-channel image (M₂/N, M₂/N, N×N).
Step 5: rearrange the multi-channel image output by step 4 into a light-field image.
Further, step 3 specifically comprises:
Step 31: determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32: traverse all the 2D point spread functions contained in the 5D point spread function, convert each 2D point spread function into a 3D vector, and stack the 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
Step 33: pad zeros symmetrically around each 2D point spread function and return to step 31.
Further, converting one 2D point spread function contained in the 5D point spread function in step 32 comprises:
Step 321: shift the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function.
Step 322: on the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, and form the extracted pixels into a small convolution kernel; the preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels on the aligned 2D point spread function whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels.
Step 323: stack the N×N small convolution kernels obtained in step 322 into the 3D vector, of dimensions (M₁/N, M₁/N, N×N).
Further, in step 321, the first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
The present invention further provides a back-projection acquisition method in a reconstruction method of microscopic light-field volume imaging, the method comprising:
Step 1: decompose a light-field image (M₂, M₂, 1), of size M₂ and a single layer, into N×N layer images of size M₂×M₂; each layer image is composed of multiple 2D sub-images of size N×N, and each 2D sub-image has pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
Step 2: extract the pixel at coordinate (i, j) from each layer image and rearrange the extracted pixels into a single-channel image; each single-channel image has size (M₂/N)×(M₂/N), and there are N×N single-channel images in total; stack all the single-channel images into a multi-channel image (M₂/N, M₂/N, N×N).
Step 3: rearrange the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
Step 4: rotate and exchange the dimensions of the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N) of step 3.
Step 5: take the multi-channel image (M₂/N, M₂/N, N×N) obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) obtained in step 4 as the weights of the convolutional neural network; convolve the multi-channel image (M₂/N, M₂/N, N×N) with the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) and output a multi-channel image (M₂/N, M₂/N, N×N×S).
Step 6: rearrange the multi-channel image (M₂/N, M₂/N, N×N×S) output by step 5 to obtain a 3D volume image.
Further, step 3 specifically comprises:
Step 31: determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32: traverse all the 2D point spread functions contained in the 5D point spread function, convert each 2D point spread function into a 3D vector, and stack the 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
Step 33: pad zeros symmetrically around each 2D point spread function and return to step 31.
Further, converting one 2D point spread function contained in the 5D point spread function in step 32 comprises:
Step 321: shift the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function.
Step 322: on the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, and form the extracted pixels into a small convolution kernel; the preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels on the aligned 2D point spread function whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels.
Step 323: stack the N×N small convolution kernels obtained in step 322 into the 3D vector, of dimensions (M₁/N, M₁/N, N×N).
Further, in step 321, the first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
The present invention further provides a reconstruction method for microscopic light-field volume imaging, which comprises the methods described above.
Because the present invention first decomposes the 3D volume image (M₂, M₂, S) into N×N×S layer images, or the light-field image (M₂, M₂, 1) into N×N layer images, and extracts only the pixels at the non-zero positions without computing the pixels at the zero positions, each single-channel image has size (M₂/N)×(M₂/N); and because it rearranges each 2D point spread function into N×N small convolution kernels, convolves with them separately, and then applies the inverse rearrangement, instead of convolving directly with the 2D point spread function as in the prior art, the speed of the convolution computation is greatly increased, and with it the speed of image reconstruction.
Brief description of the drawings
FIG. 1 is a schematic diagram of the imaging principle of a typical prior-art microscope, a schematic diagram of the imaging principle of a light-field microscope, and a captured light-field image;
FIG. 2 is a schematic diagram of the computation of the forward imaging process of a light-field camera in the prior art;
FIG. 3 is a schematic diagram of the principle of the reconstruction method for microscopic light-field imaging provided by an embodiment of the present invention;
FIG. 4 is a flowchart of rearranging the 5D point spread function into a 4D convolution kernel according to an embodiment of the present invention;
FIG. 5 is a flowchart of the forward-process acquisition method in the reconstruction method for microscopic light-field volume imaging provided by an embodiment of the present invention;
FIG. 6 is a flowchart of the back-projection acquisition method in the reconstruction method for microscopic light-field volume imaging provided by an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and embodiments.
As shown in Figures 2, 3 and 5, the forward-process acquisition method in the reconstruction method of microscopic light-field volume imaging provided by this embodiment obtains the light-field image f from a known 3D volume image g. The method includes:
Step 1: given a 3D volume image (M₂, M₂, S) of lateral size M₂ and S layers, decompose it into N×N×S layer images, each of size M₂×M₂. Each layer image is composed of multiple 2D sub-images c of size N×N, and each 2D sub-image c has pixel value 0 everywhere except at the pixel d at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
Step 2: extract the pixel d at coordinate (i, j) from each layer image and rearrange the extracted pixels into single-channel images h. Each single-channel image h has size (M₂/N)×(M₂/N); there are N×N×S channels, and all the single-channel images are stacked into a multi-channel image (M₂/N, M₂/N, N×N×S). Referring to Figure 3: the upper left shows one layer image obtained in step 1, of size M₂×M₂, and the lower left shows a single-channel image h. This embodiment extracts only the pixels at the non-zero coordinates (i, j) and does not compute the pixels at the zero positions, so each single-channel image has size (M₂/N)×(M₂/N); this paves the way for fast reconstruction.
Step 3: rearrange the 5D point spread function of the light-field microscope into a 4D convolution kernel. Specifically, as shown in Figure 2, the 5D point spread function of the light-field microscope is a 5D array of dimensions (M₁, M₁, N, N, S), where N is the number of discrete pixels behind each microlens in the row or column direction, S is the number of layers of the 3D volume image, and M₁ is the maximum number of pixels over which the point spread function can spread in the row or column direction. The resulting 4D convolution kernel is denoted (M₁/N, M₁/N, N×N×S, N×N).
Step 4: take the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network; convolve the multi-channel image with the 4D convolution kernel and output a multi-channel image, denoted (M₂/N, M₂/N, N×N). When using the convolutional neural network, the edge padding is set to (M₁/N − 1)/2 and the convolution stride is set to 1; the convolutional neural network can be implemented with existing software such as matlab, tensorflow or pytorch.
Step 5: rearrange the multi-channel image output by step 4 into a light-field image; this "rearrangement" can be realized with existing methods.
This embodiment first decomposes the 3D volume image into N×N×S layer images and extracts only the pixels at the non-zero positions, without computing the pixels at the zero positions, so each single-channel image has size (M₂/N)×(M₂/N); it rearranges each 2D point spread function into N×N small convolution kernels, convolves with them separately, and then applies the inverse rearrangement, instead of convolving directly with the 2D point spread function as in the prior art. This greatly increases the speed of the convolution computation and thus the reconstruction speed.
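To make steps 1 to 5 concrete, here is a minimal sketch of the forward process using pytorch, one of the toolkits named above. The channel ordering, PyTorch's (out_channels, in_channels, kh, kw) weight layout, an odd small-kernel size M₁/N, and the helper name forward_conv are assumptions of this sketch rather than part of the patent; a real implementation must also account for PyTorch's convolution actually being cross-correlation.

```python
import torch
import torch.nn.functional as F

def forward_conv(g: torch.Tensor, weight: torch.Tensor, N: int) -> torch.Tensor:
    """Forward process Hg as a single CNN convolution (shapes assumed).

    g      : (M2, M2, S) volume, with M2 divisible by N.
    weight : (N*N, N*N*S, M1/N, M1/N) 4D kernel from the PSF rearrangement,
             with in-channels ordered like the stacking loop below.
    """
    M2, _, S = g.shape
    # Steps 1-2: extract the (i, j) sub-pixel lattice of every layer and
    # stack into a (1, N*N*S, M2/N, M2/N) multi-channel image.
    x = torch.stack([g[i::N, j::N, s]
                     for i in range(N) for j in range(N) for s in range(S)])
    x = x.unsqueeze(0)
    k = weight.shape[-1]                       # small-kernel size M1/N (odd)
    # Step 4: one dense convolution replaces N*N*S full-size convolutions;
    # padding (M1/N - 1)/2 and stride 1, as in the text.
    y = F.conv2d(x, weight, padding=(k - 1) // 2)  # (1, N*N, M2/N, M2/N)
    # Step 5: inverse rearrangement back into the (M2, M2) light-field image.
    f = torch.zeros(M2, M2, dtype=g.dtype)
    c = 0
    for i in range(N):
        for j in range(N):
            f[i::N, j::N] = y[0, c]
            c += 1
    return f
```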
In one embodiment, referring to Figure 3, step 3 is implemented by the following steps 31 to 33:
Step 31: determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32: first, select one 2D point spread function e contained in the 5D point spread function and convert it into a 3D vector; then select the next 2D point spread function e, until all the 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, stack all the resulting 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
Step 33: pad zeros symmetrically around each 2D point spread function e and return to step 31.
Of course, step 3 can also be implemented with methods available in the prior art, which are not enumerated here.
In one embodiment, as shown in Figures 3 and 4, converting one 2D point spread function e contained in the 5D point spread function in step 32 includes:
Step 321: shift the 2D point spread function e by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function. The first preset value is set to i−(N+1)/2 and the second preset value to j−(N+1)/2.
Step 322: on the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, and form the extracted pixels into a small convolution kernel f. The preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels f.
Referring to Figure 3: the upper right shows one aligned 2D point spread function obtained in step 321, of size M₁×M₁; the lower right shows the N×N small convolution kernels f, each of size (M₁/N)×(M₁/N). Specifically, the first extraction starts from the pixel at coordinate (1, 1) on the aligned 2D point spread function and takes one pixel every N pixels along its row and column directions; the second extraction starts from the pixel at coordinate (1, 2); the third from the pixel at coordinate (1, 3); and so on, up to the extraction starting from the pixel at coordinate (N, N). Each extraction forms one small convolution kernel f, finally yielding N×N small convolution kernels f.
Step 323: stack the N×N small convolution kernels f obtained in step 322 into the 3D vector, whose dimensions are (M₁/N, M₁/N, N×N).
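As a sketch of steps 321 to 323 for a single 2D point spread function: 0-based indexing, np.roll standing in for the patent's shift, and the sign conventions are assumptions here, as is the function name rearrange_psf_2d.

```python
import numpy as np

def rearrange_psf_2d(psf2d: np.ndarray, i: int, j: int, N: int) -> np.ndarray:
    """Rearrange one (M1, M1) PSF slice into N*N small kernels (shapes assumed).

    psf2d : (M1, M1), with M1 divisible by N (pad with zeros first, per
            step 33, if it is not).
    i, j  : 1-based sub-pixel position of this slice, 1 <= i, j <= N.
    """
    # Step 321: align by shifting i-(N+1)/2 along rows and j-(N+1)/2 along
    # columns (exact for odd N, e.g. the patent's N = 15).
    aligned = np.roll(psf2d,
                      shift=(i - (N + 1) // 2, j - (N + 1) // 2),
                      axis=(0, 1))
    # Step 322: from every start (a, b) with a, b < N, take one pixel every
    # N pixels along rows and columns -> N*N small kernels of size M1/N.
    kernels = [aligned[a::N, b::N] for a in range(N) for b in range(N)]
    # Step 323: stack into a 3D array of dimensions (M1/N, M1/N, N*N).
    return np.stack(kernels, axis=-1)
```

Stacking this result over all (i, j, s) slices then gives the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).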
As shown in Figure 6, the present invention also provides a back-projection acquisition method in the reconstruction method of microscopic light-field volume imaging; the method obtains the 3D volume image g from a known light-field image f and includes:
Step 1: decompose a light-field image (M₂, M₂, 1), of size M₂ and a single layer, into N×N layer images of size M₂×M₂; each layer image is composed of multiple 2D sub-images of size N×N, and each 2D sub-image has pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N.
Step 2: extract the pixel at coordinate (i, j) from each layer image and rearrange the extracted pixels into single-channel images; each single-channel image has size (M₂/N)×(M₂/N), there are N×N single-channel images in total, and all the single-channel images are stacked into a multi-channel image (M₂/N, M₂/N, N×N).
Step 3: rearrange the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N); the 5D point spread function (M₁, M₁, N, N, S) has been introduced above and is not repeated here.
Step 4: rotate and exchange the dimensions of the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N) of step 3. Here "rotating and exchanging dimensions" means rotating the 2D images formed by the first two dimensions of the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N) by 180 degrees and then exchanging the third and fourth dimensions, which yields (M₁/N, M₁/N, N×N, N×N×S).
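In a PyTorch-style weight layout this rotate-and-exchange step is essentially one line; the (out_channels, in_channels, kh, kw) layout and the helper name are, again, assumptions of this sketch.

```python
import torch

def backprojection_weight(weight_fwd: torch.Tensor) -> torch.Tensor:
    """Step 4 (sketch): rotate the 2D kernels by 180 degrees and swap the
    channel axes, turning the forward kernel (N*N, N*N*S, M1/N, M1/N) into
    the back-projection kernel (N*N*S, N*N, M1/N, M1/N) used to compute H^T f.
    """
    return torch.flip(weight_fwd, dims=(-2, -1)).transpose(0, 1).contiguous()
```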
Step 5: take the multi-channel image (M₂/N, M₂/N, N×N) obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) obtained in step 4 as the weights of the convolutional neural network; convolve the multi-channel image (M₂/N, M₂/N, N×N) with the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) and output a multi-channel image (M₂/N, M₂/N, N×N×S). When using the convolutional neural network, the edge padding is set to (M₁/N − 1)/2 and the convolution stride is set to 1; the convolutional neural network can be implemented with existing software such as matlab, tensorflow or pytorch.
This step converts the single-channel images obtained in step 2 into the input of a convolutional neural network (CNN) and uses the 4D convolution kernel obtained in step 4 as the weights of the network, so that existing neural-network toolkits can be used to perform the computation; after the network output is obtained, the inverse operation maps it back to the original image space to give the final output.
Step 6: rearrange the multi-channel image (M₂/N, M₂/N, N×N×S) output by step 5 to obtain the 3D volume image; this "rearrangement" can be realized with existing methods.
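A minimal sketch of steps 5 and 6, under the same assumed shapes and channel ordering as the forward sketch earlier:

```python
import torch
import torch.nn.functional as F

def backproject(x_lf: torch.Tensor, weight_bp: torch.Tensor,
                N: int, S: int, M2: int) -> torch.Tensor:
    """x_lf      : (1, N*N, M2/N, M2/N) rearranged light-field image (steps 1-2).
    weight_bp : (N*N*S, N*N, M1/N, M1/N) rotated/swapped kernel from step 4.
    Returns the (M2, M2, S) back-projected volume H^T f.
    """
    k = weight_bp.shape[-1]
    # Step 5: padding (M1/N - 1)/2 and stride 1, as in the text.
    y = F.conv2d(x_lf, weight_bp, padding=(k - 1) // 2)  # (1, N*N*S, M2/N, M2/N)
    # Step 6: scatter the output channels back onto the sub-pixel lattices.
    g = torch.zeros(M2, M2, S, dtype=x_lf.dtype)
    c = 0
    for i in range(N):
        for j in range(N):
            for s in range(S):
                g[i::N, j::N, s] = y[0, c]
                c += 1
    return g
```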
This embodiment first decomposes the light-field image into N×N layer images and extracts only the pixels at the non-zero positions, without computing the pixels at the zero positions, so each single-channel image has size (M₂/N)×(M₂/N); it rearranges each 2D point spread function into N×N small convolution kernels, convolves with them separately, and then applies the inverse rearrangement, instead of convolving directly with the 2D point spread function as in the prior art. This greatly increases the speed of the convolution computation and thus the reconstruction speed.
It should be noted that, similarly to the above embodiments, each layer image of step 1 (each of size M₂×M₂, N×N×S images in total) can instead be shifted by −(i−1) along the row direction and by −(j−1) along the column direction to align the layer images. Step 2 then proceeds as in the above embodiments, producing multiple single-channel images of size (M₂, M₂), which are stacked into a multi-channel image (M₂, M₂, N×N×S). In step 3, each 2D point spread function contained in the 5D point spread function is shifted by i−1 along the row direction and by j−1 along the column direction, yielding a 4D convolution kernel (M₁, M₁, N×N×S, 1). The operation of step 4 is the same as in the above embodiments, with the edge padding of the neural network set to (M₁−1)/2 and the convolution stride set to N. Step 5 outputs a light-field image of shape (M₂, M₂, 1).
In one embodiment, referring to Figure 3, step 3 is implemented by the following steps 31 to 33:
Step 31: determine whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceed to step 32; otherwise, proceed to step 33.
Step 32: first, select one 2D point spread function e contained in the 5D point spread function and convert it into a 3D vector; then select the next 2D point spread function e, until all the 2D point spread functions e contained in the 5D point spread function have been traversed and converted into 3D vectors; finally, stack all the resulting 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N).
Step 33: pad zeros symmetrically around each 2D point spread function e and return to step 31.
Of course, step 3 can also be implemented with methods available in the prior art, which are not enumerated here.
In one embodiment, as shown in Figures 3 and 4, converting one 2D point spread function e contained in the 5D point spread function in step 32 includes:
Step 321: shift the 2D point spread function e by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function, where the first preset value is set to i−(N+1)/2, 1 ≤ i ≤ N, and the second preset value to j−(N+1)/2, 1 ≤ j ≤ N.
Step 322: on the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extract one pixel every N pixels along the row and column directions of the aligned 2D point spread function, and form the extracted pixels into a small convolution kernel f. The preset pixel of the first extraction has coordinate (1, 1) on the aligned 2D point spread function; traverse in order all pixels whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels f.
Referring to Figure 3: the upper right shows one aligned 2D point spread function obtained in step 321, of size M₁×M₁; the lower right shows the N×N small convolution kernels f, each of size (M₁/N)×(M₁/N). The first extraction starts from the pixel at coordinate (1, 1) and takes one pixel every N pixels along the row and column directions; the second starts from (1, 2); the third from (1, 3); and so on, up to the extraction starting from (N, N). Each extraction forms one small convolution kernel f, finally yielding N×N small convolution kernels f.
Step 323: stack the N×N small convolution kernels f obtained in step 322 into the 3D vector, whose dimensions are (M₁/N, M₁/N, N×N).
The layer images obtained by decomposing the 3D volume image have size M₂×M₂, and the 2D point spread functions contained in the corresponding 5D point spread function of the light-field microscope have size M₁×M₁. Computing such a convolution directly with existing methods has complexity O(M₁²M₂²); using the existing FFT reduces the complexity to O(M₁²·log(M₁²)). Observing the structure of this convolution, the image being convolved is zero everywhere except at specific positions; the present invention therefore extracts those positions and, by rearranging the point spread function, turns the point spread function of size M₁×M₁ into N×N small convolution kernels, each of size (M₁/N)×(M₁/N). In this way the complexity of computing the convolution directly drops to O(M₁²M₂²/N²). At the same time, thanks to the development of deep learning, many software packages compute convolutions highly efficiently, and combined with these packages the convolution can be computed efficiently. Experiments show that, compared with the original FFT-based algorithm, the present invention greatly increases the computation speed.
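The reduced-complexity expression above replaces an equation that survives only as an image placeholder in the source; the arithmetic below, reconstructed from the construction just described, shows where the O(M₁²M₂²/N²) figure comes from, using the patent's N = 15 as a worked example.

```latex
% One direct M_2 x M_2 (*) M_1 x M_1 convolution costs O(M_1^2 M_2^2) multiplies.
% After rearrangement it becomes N^2 small convolutions on decimated images:
N^{2} \cdot \left(\frac{M_1}{N}\right)^{2} \left(\frac{M_2}{N}\right)^{2}
  \;=\; \frac{M_1^{2} M_2^{2}}{N^{2}},
\qquad \text{so for } N = 15:\quad
\frac{M_1^{2} M_2^{2}}{M_1^{2} M_2^{2} / N^{2}} = N^{2} = 225.
```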
Table 1 below compares the reconstruction method provided by the present invention with the existing method in terms of time consumption and speed-up for image reconstruction:
Table 1
Image size (pixels): 600×600, 900×900, 1200×1200, 1500×1500, 1800×1800
FFT time (seconds): 2135.6, 2149.0, 2185.5, 2228.6, 2825.9
Present invention time (seconds): 35.3, 90.6, 122.0, 163.0, 227.2
Speed-up factor: 60.0×, 23.7×, 17.9×, 13.7×, 12.4×
According to Table 1, comparing the time consumption of the reconstruction method provided by the present invention with that of the existing method, both based on the algorithm of Prevedel, R., et al., Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy, Nat Methods, 2014. 11(7): p. 727-730, the reconstruction method provided by the present invention achieves a 12- to 60-fold speed-up. For example, when the region of interest is 900×900, the time to reconstruct 1000 images drops from 3 weeks to 1 day, greatly reducing the time cost. In the experiments, N = 15, S = 101, with 16 iterations. Both the conventional and the improved algorithms were implemented in Matlab2019a and accelerated on a GPU; the experiments were run on an ASUS workstation with a GeForce GTX 1080 Ti GPU, dual CPUs (Intel Xeon 2.10 GHz), and 64 GB of memory.
The present invention also provides a reconstruction method for microscopic light-field volume imaging, which includes the methods described in the above embodiments.
Finally, it should be pointed out that the above embodiments are merely intended to illustrate the technical solutions of the present invention and not to limit them. Those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

  1. A forward-process acquisition method in a reconstruction method of microscopic light-field volume imaging, characterized by comprising:
    step 1: decomposing a 3D volume image (M₂, M₂, S), of lateral size M₂ and S layers, into N×N×S layer images of size M₂×M₂, each layer image being composed of multiple 2D sub-images of size N×N, each 2D sub-image having pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N;
    step 2: extracting the pixel at coordinate (i, j) from each layer image and rearranging the extracted pixels into single-channel images, each single-channel image having size (M₂/N)×(M₂/N), there being N×N×S single-channel images in total, and stacking all the single-channel images into a multi-channel image (M₂/N, M₂/N, N×N×S);
    step 3: rearranging the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N), where M₁ is the maximum number of pixels over which the point spread function can spread in the row or column direction;
    step 4: taking the multi-channel image obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel obtained in step 3 as the weights of the convolutional neural network, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image (M₂/N, M₂/N, N×N);
    step 5: rearranging the multi-channel image output by step 4 into a light-field image.
  2. The forward-process acquisition method in a reconstruction method of microscopic light-field volume imaging according to claim 1, characterized in that step 3 specifically comprises:
    step 31: determining whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
    step 32: traversing all the 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N);
    step 33: padding zeros symmetrically around each 2D point spread function and returning to step 31.
  3. The forward-process acquisition method in a reconstruction method of microscopic light-field volume imaging according to claim 2, characterized in that converting one 2D point spread function contained in the 5D point spread function in step 32 comprises:
    step 321: shifting the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function;
    step 322: on the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extracting one pixel every N pixels along the row and column directions of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel, the preset pixel of the first extraction having coordinate (1, 1) on the aligned 2D point spread function, and traversing in order all pixels on the aligned 2D point spread function whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels;
    step 323: stacking the N×N small convolution kernels obtained in step 322 into the 3D vector, of dimensions (M₁/N, M₁/N, N×N).
  4. The forward-process acquisition method in a reconstruction method of microscopic light-field volume imaging according to claim 3, characterized in that in step 321 the first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
  5. A back-projection acquisition method in a reconstruction method of microscopic light-field volume imaging, characterized by comprising:
    step 1: decomposing a light-field image (M₂, M₂, 1), of size M₂ and a single layer, into N×N layer images of size M₂×M₂, each layer image being composed of multiple 2D sub-images of size N×N, each 2D sub-image having pixel value zero everywhere except at coordinate (i, j), 1 ≤ i ≤ N, 1 ≤ j ≤ N;
    step 2: extracting the pixel at coordinate (i, j) from each layer image and rearranging the extracted pixels into single-channel images, each single-channel image having size (M₂/N)×(M₂/N), there being N×N single-channel images in total, and stacking all the single-channel images into a multi-channel image (M₂/N, M₂/N, N×N);
    step 3: rearranging the 5D point spread function (M₁, M₁, N, N, S) of the light-field microscope into a 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N), where M₁ is the maximum number of pixels over which the point spread function can spread in the row or column direction;
    step 4: rotating and exchanging the dimensions of the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N) of step 3;
    step 5: taking the multi-channel image (M₂/N, M₂/N, N×N) obtained in step 2 as the input of a convolutional neural network and the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S) obtained in step 4 as the weights of the convolutional neural network, convolving the multi-channel image (M₂/N, M₂/N, N×N) with the 4D convolution kernel (M₁/N, M₁/N, N×N, N×N×S), and outputting a multi-channel image (M₂/N, M₂/N, N×N×S);
    step 6: rearranging the multi-channel image (M₂/N, M₂/N, N×N×S) output by step 5 to obtain a 3D volume image.
  6. The back-projection acquisition method in a reconstruction method of microscopic light-field volume imaging according to claim 5, characterized in that step 3 specifically comprises:
    step 31: determining whether M₁ in the 5D point spread function (M₁, M₁, N, N, S) is divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
    step 32: traversing all the 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into the 4D convolution kernel (M₁/N, M₁/N, N×N×S, N×N);
    step 33: padding zeros symmetrically around each 2D point spread function and returning to step 31.
  7. The back-projection acquisition method in a reconstruction method of microscopic light-field volume imaging according to claim 6, characterized in that converting one 2D point spread function contained in the 5D point spread function in step 32 comprises:
    step 321: shifting the 2D point spread function by a first preset value along the row direction and by a second preset value along the column direction to obtain an aligned 2D point spread function;
    step 322: on the aligned 2D point spread function obtained in step 321, starting from a preset pixel, extracting one pixel every N pixels along the row and column directions of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel, the preset pixel of the first extraction having coordinate (1, 1) on the aligned 2D point spread function, and traversing in order all pixels on the aligned 2D point spread function whose row coordinate is not greater than N and whose column coordinate is not greater than N, thereby obtaining N×N small convolution kernels;
    step 323: stacking the N×N small convolution kernels obtained in step 322 into the 3D vector, of dimensions (M₁/N, M₁/N, N×N).
  8. The back-projection acquisition method in a reconstruction method of microscopic light-field volume imaging according to claim 7, characterized in that in step 321 the first preset value is set to i−(N+1)/2 and the second preset value is set to j−(N+1)/2.
  9. A reconstruction method for microscopic light-field volume imaging, characterized by comprising the method according to any one of claims 1 to 8.
PCT/CN2020/129083 2019-11-22 2020-11-16 Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods WO2021098645A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201911156679 2019-11-22
CN201911156679.2 2019-11-22
CN201911227287.0A CN110989154B (zh) 2019-11-22 2019-12-04 显微光场体成像的重建方法及其正过程和反投影获取方法
CN201911227287.0 2019-12-04

Publications (1)

Publication Number Publication Date
WO2021098645A1 (zh)

Family

ID=70090028

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/105190 WO2021098276A1 (zh) Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods
PCT/CN2020/129083 WO2021098645A1 (zh) Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/105190 WO2021098276A1 (zh) Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods

Country Status (2)

Country Link
CN (1) CN110989154B (zh)
WO (2) WO2021098276A1 (zh)

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN110989154B (zh) Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods
CN113971722B (zh) Fourier-domain light-field deconvolution method and apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
US20090276188A1 (en) * 2008-05-05 2009-11-05 California Institute Of Technology Quantitative differential interference contrast (dic) microscopy and photography based on wavefront sensors
CN106199941A (zh) Frequency-shifted light-field microscope and three-dimensional super-resolution microscopic display method
CN106767534A (zh) FPM-based stereo microscopy system and accompanying high-resolution three-dimensional surface reconstruction method
CN110989154A (zh) Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
WO2015157769A1 (en) * 2014-04-11 2015-10-15 The Regents Of The University Of Colorado, A Body Corporate Scanning imaging for encoded psf identification and light field imaging
CN108364342B (zh) Light-field microscopy system and three-dimensional information reconstruction method and apparatus therefor
CN109615651B (zh) Three-dimensional microscopic imaging method and system based on a light-field microscopy system
CN110047430B (zh) Light-field data reconstruction method, light-field data reconstruction device, and light-field display apparatus
CN110441271B (zh) Light-field high-resolution deconvolution method and system based on a convolutional neural network


Also Published As

Publication number Publication date
CN110989154B (zh) 2020-07-31
CN110989154A (zh) 2020-04-10
WO2021098276A1 (zh) 2021-05-27

Similar Documents

Publication Publication Date Title
WO2021098645A1 (zh) Reconstruction method for microscopic light-field volume imaging and its forward-process and back-projection acquisition methods
JP2019537157A5 (zh)
Winkels et al. 3D G-CNNs for pulmonary nodule detection
KR102275499B1 (ko) Method and apparatus for performing operations of convolution layers in a convolutional neural network
WO2020186942A1 (zh) Object detection method, system, apparatus, storage medium, and computer device
Liu et al. A spectral grouping and attention-driven residual dense network for hyperspectral image super-resolution
Bui et al. 3D densely convolutional networks for volumetric segmentation
US10657425B2 (en) Deep learning architectures for the classification of objects captured with a light-field camera
CN112750082B (zh) Face super-resolution method and system based on a fused attention mechanism
CN106934766A (zh) Infrared image super-resolution reconstruction method based on sparse representation
Ziabari et al. 2.5D deep learning for CT image reconstruction using a multi-GPU implementation
DE112011103452T5 (de) Method for matching pixels of a range representation
JP2023505899A (ja) Image data detection method and apparatus, computer device, and program
Qiu et al. Multi-window back-projection residual networks for reconstructing COVID-19 CT super-resolution images
Cheng et al. Volume segmentation using convolutional neural networks with limited training data
Feng et al. LKASR: Large kernel attention for lightweight image super-resolution
Liu et al. Deep adaptive inference networks for single image super-resolution
Li et al. Computed tomography image enhancement using 3D convolutional neural network
Etmann et al. iUNets: learnable invertible up-and downsampling for large-scale inverse problems
Yunpeng et al. Sharing residual units through collective tensor factorization in deep neural networks
Usman Ghani et al. Integrating data and image domain deep learning for limited angle tomography using consensus equilibrium
JP2023541350A (ja) 表畳み込みおよびアクセラレーション
Wang et al. BAM: A balanced attention mechanism for single image super resolution
CN117575915A (zh) Image super-resolution reconstruction method, terminal device, and storage medium
Huang et al. Anatomical‐functional image fusion based on deep convolution neural networks in local Laplacian pyramid domain

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20889591

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20889591

Country of ref document: EP

Kind code of ref document: A1