CN110989154A - Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof - Google Patents
Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof Download PDFInfo
- Publication number
- CN110989154A CN110989154A CN201911227287.0A CN201911227287A CN110989154A CN 110989154 A CN110989154 A CN 110989154A CN 201911227287 A CN201911227287 A CN 201911227287A CN 110989154 A CN110989154 A CN 110989154A
- Authority
- CN
- China
- Prior art keywords
- point spread
- image
- spread function
- light field
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/36—Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
- G02B21/365—Control or image processing arrangements for digital or video microscopes
- G02B21/367—Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Microscoopes, Condenser (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a reconstruction method of microscopic light field volume imaging and a forward process and a back projection acquisition method thereof, wherein the method comprises the following steps: step 1, decomposing a 3D body image into layer images, wherein each layer image consists of a plurality of 2D sub-images, and the pixel value of each 2D sub-image except coordinates (i, j) is zero; step 2, extracting pixels at coordinates (i, j) on each layer image to rearrange the pixels into single-channel images, and stacking all the single-channel images into multi-channel images; step 3, rearranging the 5D point spread function of the light field microscope into a 4D convolution kernel; step 4, taking the multi-channel image obtained in the step 2 as the network input of the convolutional neural network, taking the 4D convolutional kernel obtained in the step 3 as the weight of the convolutional neural network, performing convolution on the multi-channel image by adopting a 4D convolutional core, and outputting the multi-channel image; and 5, rearranging the multi-channel image output in the step 4 into a light field image. The method can greatly improve the convolution calculation speed, thereby improving the reconstruction speed of the image.
Description
Technical Field
The invention relates to the technical field of light field microscope image reconstruction, in particular to a reconstruction method of microscopic light field volume imaging and a forward process and a back projection acquisition method thereof.
Background
The light field microscope is a fast imaging mode, and a traditional fluorescence microscope can be changed into the light field microscope (as shown in fig. 1) by adding a micro lens array on a phase surface. Conventional microscopes can only perform 2D imaging, while light field microscopes can perform 3D imaging, by taking a light field image, and then using the light field image, the 3D distribution of an object can be reconstructed.
As shown in fig. 1, the light field images of a typical microscope, a light field microscope, and zebrafish taken by a light field microscope are shown. Note that the image taken by the light field microscope has a typical circular shape, which is the effect of the microlens.
In general, the positive process of light field imaging can be expressed as:
f=Hg
where f is the captured light field image, g is the 3D volume image to be reconstructed, and they are all represented by one-dimensional vectors for description introduction. H is a system matrix that relates the 3D volume image g to the light field image f from which the 3D volume image g is reconstructed. Different reconstruction algorithms differ in detail, but both Hg and H need to be calculatedTf, forward process (forward) and back projection (back) of computed imaging, and the computation process is similar.
The process of calculating the positive process Hg is mainly to convolve the volume image g with a corresponding Point Spread Function (PSF), the point spread function of the light field camera is related to the spatial position, and the point spread functions are the same only for points on the same layer and at the same position after corresponding microlenses. As shown in fig. 2, if there are N × N pixels behind each microlens, the 3D volume to be reconstructed has S layers shown as 2a in fig. 2, each layer needs to be decomposed into N × N layers, then the layers are convolved with the corresponding point spread functions, N × S2D convolutions need to be calculated when Hg is calculated in a primary positive process, and then the convolved images are accumulated to obtain the light field image lfmimeage. This approach is very computationally intensive, and the existing method uses FFT (chinese all known as "fast fourier transform") to compute the convolution, but because of the large number of convolutions, the computation process takes a long time.
Disclosure of Invention
It is an object of the present invention to provide a reconstruction method for microscopic light field volume imaging and a forward process and a back projection acquisition method thereof that overcome or at least alleviate at least one of the above-mentioned drawbacks of the prior art.
In order to achieve the above object, the present invention provides a positive process acquisition method in a reconstruction method of microscopic light field volume imaging, the method comprising:
Step 3, 5D point spread function (M) of the light field microscope1,M1N, S) is rearranged as a 4D convolution kernel
Step 4, taking the multichannel image obtained in the step 2 as the network input of the convolutional neural network, taking the 4D convolutional kernel obtained in the step 3 as the weight of the convolutional neural network, adopting the 4D convolutional kernel to carry out convolution on the multichannel image, and outputting a multichannel image
And 5, rearranging the multichannel image output by the step 4 into a light field image.
Further, step 3 specifically includes:
step 31, judge the 5D point spread function (M)1,M1M in N, N, S)1Whether the data can be evenly divided by N, if so, entering a step 33; otherwise, go to step 32;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel
And step 33, symmetrically filling 0 around the 2D point spread function selected in the step 32, and returning to the step 31.
Further, when the step 32 selects the 2D point spread function included in the 5D point spread function, it includes:
step 321, moving the first preset value along the row direction and the second preset value along the column direction of the 2D point spread function to obtain an aligned 2D point spread function;
step 322, taking a preset pixel point as a starting point on the aligned 2D point spread function obtained in step 321, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, and forming a small convolution kernel by the extracted pixels, where the coordinate of the preset pixel point extracted for the first time on the aligned 2D point spread function is (1, 1), and traversing all pixel points on the aligned 2D point spread function whose abscissa is not greater than N and whose ordinate is not greater than N in sequence, so as to obtain nxn small convolution kernels;
step 323, stacking the NxN small convolution kernels obtained in the step 322 into dimensions ofThe 3D vector of (a).
Further, in the step 321, the first preset value is set to i- (N +1)/2, and the second preset value is set to j- (N + 1)/2.
The invention also provides a back projection acquisition method in the reconstruction method of the microscopic light field volume imaging, which comprises the following steps:
Step 3, 5D point spread function (M) of the light field microscope1,M1N, S) is rearranged as a 4D convolution kernel
Step 4, performing 4D convolution kernel of the step 3Performing rotation and exchanging dimensions;
step 5, the multichannel image obtained in the step 2 is processedAs the network input of the convolutional neural network, the 4D convolution kernel obtained in the step 4 is usedAs weights for the convolutional neural network, the 4D convolutional kernel is employedFor the multi-channel imagePerforming convolution and outputting multi-channel image
Step 6, the multi-channel image output in the step 5 is processedAnd rearranging to obtain a 3D volume image.
Further, step 3 specifically includes:
step 31, judge the 5D point spread function (M)1,M1M in N, N, S)1Whether the data can be evenly divided by N, if so, entering a step 33; otherwise, go to step 32;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel
And step 33, symmetrically filling 0 around the 2D point spread function selected in the step 32, and returning to the step 31.
Further, when the step 32 selects the 2D point spread function included in the 5D point spread function, it includes:
step 321, moving the first preset value along the row direction and the second preset value along the column direction of the 2D point spread function to obtain an aligned 2D point spread function;
step 322, taking a preset pixel point as a starting point on the aligned 2D point spread function obtained in step 321, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, and forming a small convolution kernel by the extracted pixels, where the coordinate of the preset pixel point extracted for the first time on the aligned 2D point spread function is (1, 1), and traversing all pixel points on the aligned 2D point spread function whose abscissa is not greater than N and whose ordinate is not greater than N in sequence, so as to obtain nxn small convolution kernels;
step 323, stacking the NxN small convolution kernels obtained in the step 322 into dimensions ofThe 3D vector of (a).
Further, in the step 321, the first preset value is set to i- (N +1)/2, and the second preset value is set to j- (N + 1)/2.
The invention also provides a reconstruction method for microscopic light field volume imaging, which comprises the method.
Since the invention first images the 3D volume (M)2,M2S) into NxNxS layer images or light field images (M)2,M21), decomposing into N × N layer images, and extracting only the pixels at non-zero positions without calculating the pixels at zero positions, so that each single-channel image has a size ofAnd will beThe 2D point spread functions are rearranged into small N multiplied by N convolution kernels, then are respectively convolved and then are inversely rearranged, and the convolution method replaces the prior art that the 2D point spread functions are directly adopted for convolution, so that the convolution calculation speed can be greatly improved, and the reconstruction speed of the image is improved.
Drawings
Fig. 1 is a schematic diagram of an imaging principle of a microscope, a schematic diagram of an imaging principle of a light field microscope, and a captured light field picture, which are typical in the prior art;
FIG. 2 is a schematic diagram illustrating the calculation principle of the positive imaging process of a light field camera in the prior art;
FIG. 3 is a schematic diagram illustrating a reconstruction method for microscopic light field imaging according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an arrangement of 5D point spread functions into 4D convolution kernels according to an embodiment of the present invention;
fig. 5 is a flowchart of a positive process acquisition method in a reconstruction method of microscopic light field volume imaging according to an embodiment of the present invention;
fig. 6 is a flowchart of a back projection acquisition method in the reconstruction method of the microscopic light field volume imaging according to the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
As shown in fig. 2, fig. 3 and fig. 5, the positive process obtaining method in the reconstruction method of microscopic light field volume imaging provided in this embodiment is to obtain a light field image f according to a known 3D volume image g, and the method includes:
And 3, rearranging the 5D point spread function of the light field microscope into a 4D convolution kernel. Specifically, as shown in FIG. 2, the 5D point spread function of a light field microscope is a 5D vector with a dimension of (M)1,M1N, N, S), namely (M)1,M1N, S), where N is the number of discrete pixels in the row or column direction behind each microlens, S is the number of layers of the 3D volume image, M1The maximum number of pixels to which the point spread function can spread in the row direction or the column direction. The resulting 4D convolution kernel is represented as
Step 4, taking the multichannel image obtained in the step 2 as the network input of the convolutional neural network, taking the 4D convolutional kernel obtained in the step 3 as the weight of the convolutional neural network, and adopting the 4D convolutional kernel to carry out the operation on the multichannel imageConvolving, outputting a multi-channel image, the output multi-channel image being represented asWherein, when using a convolutional neural network, the edge padding is set toThe convolution step length stride is set to 1, and the convolutional neural network can be implemented by using existing software, such as matlab, tensorflow, pytorch, and the like.
And 5, rearranging the multichannel image output by the step 4 into a light field image, wherein the rearrangement can be realized by adopting the existing method.
In the embodiment, the 3D volume image is firstly decomposed into N × S layer images, only the pixels at the non-zero positions are extracted, and the pixels at the zero positions do not need to be calculated, so that the size of each single-channel image is equal toAnd rearranging the 2D point spread functions into small N multiplied by N convolution kernels, respectively convolving the kernels, and then inversely rearranging the kernels instead of directly adopting the 2D point spread functions to carry out convolution in the prior art, so that the convolution calculation speed can be greatly improved, and the reconstruction speed is improved.
In one embodiment, as shown in fig. 3, step 3 is implemented by the following steps 31 to 33:
step 31, judge the 5D point spread function (M)1,M1M in N, N, S)1Whether the data can be evenly divided by N, if so, entering a step 33; otherwise, step 32 is entered.
Step 32, firstly, selecting a 2D point diffusion function e included in the 5D point diffusion function, and converting the 2D point diffusion function e into a 3D vector; then, selecting the next 2D point diffusion function e until all 2D point diffusion functions e contained in the 5D point diffusion function are traversed, and converting all 2D point diffusion functions e contained in the 5D point diffusion function into 3D vectors; finally, all the transformed 3D vectors are stacked into a 4D convolution kernel
And step 33, symmetrically filling 0 around the 2D point spread function e selected in the step 32, and returning to the step 31.
Of course, step 3 can also be implemented by methods provided in the prior art, which are not listed here.
In one embodiment, as shown in fig. 3 and 4, when the 2D point spread function e included in the 5D point spread function is selected in the step 32, it includes:
in step 321, the first preset value is moved along the row direction of the 2D point spread function e, and the second preset value is moved along the column direction, so as to obtain the aligned 2D point spread function. The first preset value is set to be i- (N +1)/2, and the second preset value is set to be j- (N + 1)/2.
Step 322, taking a preset pixel point as a starting point on the aligned 2D point spread function obtained in step 321, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, and forming a small convolution kernel f by the extracted pixels. And (3) the coordinates of the preset pixel points extracted for the first time on the aligned 2D point spread function are (1, 1), and all pixel points of which the abscissa is not more than N and the ordinate is not more than N on the aligned 2D point spread function are traversed sequentially, so that an NxN small convolution kernel f is obtained.
Referring to fig. 3, in fig. 3, the upper right side is a 2D point spread function obtained in step 321 after alignment, and the size of the D point spread function is M1×M1. The lower right side shows a number of N small convolution kernels f, each of which has a size ofSpecifically, when a pixel is extracted for the first time, taking a pixel point with a coordinate (1, 1) on the aligned 2D point spread function as a starting point, and extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function; when the pixel is taken for the second time,taking a pixel point with coordinates (1, 2) on the aligned 2D point diffusion function as a starting point, and respectively extracting a pixel every N pixels along the row direction and the column direction of the aligned 2D point diffusion function; when a pixel is taken for the third time, taking a pixel point with coordinates (1, 3) on the aligned 2D point diffusion function as a starting point, and respectively extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point diffusion function; by analogy, sequentially traversing to pixel points with coordinates (N, N) on the aligned 2D point spread function as starting points, and respectively extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function; in the above process, the extracted pixels are combined into a small convolution kernel f, and finally, the small convolution kernel f of N × N is obtained.
Step 323, stacking the nxn small convolution kernels f obtained in the step 322 into the 3D vector, wherein the dimension of the 3D vector is
As shown in fig. 6, the present invention also provides a back projection acquisition method in a reconstruction method of an apparent micro-light field volume imaging, which is to obtain a 3D volume image g from a known light field image f, the method comprising:
Step 3, 5D point spread function (M) of the light field microscope1,M1N, S) is rearranged as a 4D convolution kernelWherein, 5D point spread function (M)1,M1N, S) have been introduced above and will not be described herein.
Step 4, performing 4D convolution kernel of the step 3Rotation and exchange dimensions are performed. Where "rotate and swap dimensions" refers to convolving the 4D kernelsThe first two-dimensional 2D image is rotated 180 degrees and then the third and fourth dimensions are swapped.
Step 5, the multichannel image obtained in the step 2 is processedAs the network input of the convolutional neural network, the 4D convolution kernel obtained in the step 4 is usedAs weights for the convolutional neural network, the 4D convolutional kernel is employedFor the multi-channel imagePerforming convolution and outputting multi-channel imageWherein, makeWith convolutional neural networks, edge padding is set toThe convolution step stride is set to 1. The convolutional neural network can be implemented by using existing software, such as matlab, tenserflow, pytorch, etc.
In the step, the single-channel image obtained in the step 2 is converted into the input of a Convolutional Neural Network (CNN), and the 4D convolutional kernel obtained in the step 3 is used as the weight of the convolutional neural network, so that the existing neural network toolkit can be used for calculation, the output of the network is obtained, and then the inverse operation is carried out on the original image space, and the final output is obtained.
Step 6, the multi-channel image output in the step 5 is processedThe rearrangement is performed to obtain a 3D volume image, wherein the rearrangement can be performed by using an existing method.
The embodiment firstly provides that the light field image is decomposed into N multiplied by N image layer images, only the pixels at the non-zero positions are extracted, and the pixels at the zero positions do not need to be calculated, so that the size of each single-channel image is equal toAnd rearranging the 2D point spread functions into small N multiplied by N convolution kernels, respectively convolving the kernels, and then inversely rearranging the kernels instead of directly adopting the 2D point spread functions to carry out convolution in the prior art, so that the convolution calculation speed can be greatly improved, and the reconstruction speed is improved.
Note that, similarly to the above-described embodiment, each layer image in step 1 (the size M of each layer image) may also be set2×M2The number is N × S) are respectively shifted in the row direction by- (i-1) and in the column direction by- (j-1) to align the image layers. The procedure of step 2 was similar to the previous example, resulting in a plurality of sizes (M)2,M2) The plurality of single-channel images of (c) is further divided into a plurality of sizes of (M)2,M2) Is stacked into a multi-channel image (M)2,M2N × S). In step 3, each 2D point spread function contained in the 5D point spread function moves i-1 along the row direction and moves j-1 along the column direction respectively to obtain a 4D convolution kernel (M)1,M1N × S, 1). The operation of step 4 is the same as the above embodiment, wherein the edge padding of the neural network is set to (M)1-1)/2, convolution step stride set to N. Step 5 output shape is (M)2,M2And 1) light field image of the light field.
In one embodiment, as shown in fig. 3, step 3 is implemented by the following steps 31 to 33:
step 31, judge the 5D point spread function (M)1,M1M in N, N, S)1Whether the data can be evenly divided by N, if so, entering a step 33; otherwise, step 32 is entered.
Step 32, firstly, selecting a 2D point spread function e contained in the 5D point spread function, and converting the 2D point spread functions into 3D vectors; then, selecting the next 2D point diffusion function e until all 2D point diffusion functions e contained in the 5D point diffusion function are traversed, and converting all 2D point diffusion functions e contained in the 5D point diffusion function into 3D vectors; finally, all the transformed 3D vectors are stacked into a 4D convolution kernel
And step 33, symmetrically filling 0 around the 2D point spread function e selected in the step 32, and returning to the step 31.
Of course, the method provided in the prior art can also be used to implement step 2, which is not listed here.
In one embodiment, as shown in fig. 3 and 4, when the 2D point spread function e included in the 5D point spread function is selected in the step 32, it includes:
in step 321, the first preset value is moved along the row direction of the 2D point spread function e, and the second preset value is moved along the column direction, so as to obtain the aligned 2D point spread function. The first preset value is set to be i- (N +1)/2, i is larger than or equal to 1 and is smaller than or equal to N, the second preset value is set to be j- (N +1)/2, and j is larger than or equal to 1 and is smaller than or equal to N.
Step 322, taking a preset pixel point as a starting point on the aligned 2D point spread function obtained in step 321, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, and forming a small convolution kernel f by the extracted pixels. And (3) the coordinates of the preset pixel points extracted for the first time on the aligned 2D point spread function are (1, 1), and all pixel points of which the abscissa is not more than N and the ordinate is not more than N on the aligned 2D point spread function are traversed sequentially, so that an NxN small convolution kernel f is obtained.
Referring to fig. 3, in fig. 3, the upper right side is a 2D point spread function obtained in step 321 after alignment, and the size of the D point spread function is M1×M1. The lower right side shows a number of N small convolution kernels f, each of which has a size ofSpecifically, when a pixel is extracted for the first time, taking a pixel point with a coordinate (1, 1) on the aligned 2D point spread function as a starting point, and extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function; when pixels are taken for the second time, taking pixel points with coordinates (1, 2) on the aligned 2D point spread function as starting points, and respectively extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function; when a pixel is taken for the third time, taking a pixel point with coordinates (1, 3) on the aligned 2D point diffusion function as a starting point, and respectively extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point diffusion function; by analogy, sequentially traversing to pixel points with coordinates (N, N) on the aligned 2D point spread function as starting points, and respectively extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function; in the above process, the extracted pixels are combined into a small convolution kernel f, and finally, the small convolution kernel f of N × N is obtained.
Step 323, stacking the nxn small convolution kernels f obtained in the step 322 into the 3D vector, wherein the dimension of the 3D vector is
The size of the image layer image obtained by decomposing the 3D body image is M2×M2The size of the 2D point spread function contained in the 5D point spread function of the light field microscope corresponding to the image layer image is M1×M1The complexity of the convolution is directly calculated to be O (M) by adopting the existing method1 2M2 2) The complexity can be reduced to using the existing FFTObserving the characteristics of the convolution, the convolved image can be found to be 0 except for the specific position, so the invention extracts the image and rearranges the point spread function to obtain the size M1×M1Is rearranged into N × N small convolution kernels, each having a size ofThus the complexity of directly calculating the convolution is reduced toMeanwhile, due to the development of deep learning, a plurality of software packages for efficiently calculating the convolution are provided, the convolution can be efficiently calculated by combining the software packages, and experiments show that the method can greatly improve the calculation speed compared with the original algorithm based on FFT.
The following table 1 is a comparison of the reconstruction method provided by the present invention with the existing methods in terms of time consumption and acceleration factor in reconstructing the image:
TABLE 1
Image size (pixel) | 600×600 | 900×900 | 1200×1200 | 1500×1500 | 1800×1800 |
FFT time (seconds) | 2135.6 | 2149.0 | 2185.5 | 2228.6 | 2825.9 |
The invention consumes time (second) | 35.3 | 90.6 | 122.0 | 163.0 | 227.2 |
Multiple of acceleration | 60.0× | 23.7× | 17.9× | 13.7× | 12.4× |
According to the above table 1, comparing the time consumption of the reconstruction method provided by the present invention with the time consumption of the existing method, all based on the algorithm provided in the documents Prevedel, r, et al, Simultaneous white-animal 3D imaging of neural effective light-field microscopy, Methods 2014.11(7): p.727-730, the reconstruction method provided by the present invention can achieve a 12 to 60-fold increase, for example, when the size of the region of interest is 900 × 900, the reconstruction time of 1000 images can be reduced from 3 weeks to 1 day, and the time cost can be greatly reduced. In the experiment, N is 15, S is 101, and the iteration is performed 16 times. The conventional algorithm and the improved algorithm are realized by Matlab2019a, a GPU is used for acceleration, and experiments are carried out on a large-area workstation which is provided with a GeForce GTX 1080Ti GPU, a double CPU (Intel Xeon 2.10GHz) and a 64G memory.
The invention also provides a reconstruction method for microscopic light field volume imaging, which comprises the methods in the embodiments.
Finally, it should be pointed out that: the above examples are only for illustrating the technical solutions of the present invention, and are not limited thereto. Those of ordinary skill in the art will understand that: modifications can be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (9)
1. A positive process acquisition method in a reconstruction method of microscopic light field volume imaging is characterized by comprising the following steps:
step 1, the size is M2And 3D volume image (M) with number of layers S2,M2S) into NxNxS pieces of size M2×M2Each layer image is composed of a plurality of 2D sub-images with the size of NxN, the pixel value of each 2D sub-image except the coordinates (i, j) is zero, i is more than or equal to 1 and less than or equal to N, and j is more than or equal to 1 and less than or equal to N;
step 2, extracting a pixel at the coordinate (i, j) on each layer image to rearrange the pixels into a single-channel image; wherein the single channel image size isA total of NxNxS single-channel images, all of which are stacked into a multi-channel image
Step 3, 5D point spread function (M) of the light field microscope1,M1N, S) is rearranged as a 4D convolution kernel
Step 4, taking the multichannel image obtained in the step 2 as the network input of the convolutional neural network, taking the 4D convolutional kernel obtained in the step 3 as the weight of the convolutional neural network, adopting the 4D convolutional kernel to carry out convolution on the multichannel image, and outputting a multichannel image
And 5, rearranging the multichannel image output by the step 4 into a light field image.
2. The positive process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 1, wherein the step 3 specifically comprises:
step 31, judging whether M1 in the 5D point spread function (M1, M1, N, N, S) is evenly divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel;
and step 33, symmetrically padding zeros around the 2D point spread function selected in step 32, and returning to step 31.
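The loop of steps 31 and 33 amounts to zero-padding each 2D point spread function slice until its side length is a multiple of N. A minimal sketch, assuming square slices and symmetric zero-padding (the helper name `pad_to_multiple` is hypothetical):

```python
import numpy as np

def pad_to_multiple(psf2d, N):
    """Symmetrically zero-pad a square 2D PSF slice until its side
    length M1 is evenly divisible by N (the step 31/33 loop)."""
    M1 = psf2d.shape[0]
    pad = (-M1) % N                      # extra pixels needed
    before, after = pad // 2, pad - pad // 2
    return np.pad(psf2d, ((before, after), (before, after)))
```

When M1 is already divisible by N, `pad` is zero and the slice is returned unchanged, so the function can be applied unconditionally instead of looping.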
3. The positive process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 2, wherein selecting a 2D point spread function contained in the 5D point spread function in step 32 comprises:
step 321, shifting the 2D point spread function by a first preset value along its row direction and by a second preset value along its column direction to obtain an aligned 2D point spread function;
step 322, taking a preset pixel point on the aligned 2D point spread function obtained in step 321 as a starting point, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel, wherein the preset pixel point extracted first has coordinates (1, 1) on the aligned 2D point spread function, and all pixel points on the aligned 2D point spread function whose abscissa is not greater than N and whose ordinate is not greater than N are traversed in sequence, thereby obtaining N × N small convolution kernels.
4. The positive process acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 3, wherein in step 321, the first preset value is set to i - (N+1)/2, and the second preset value is set to j - (N+1)/2.
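Steps 321 and 322, with the preset values of claim 4, shift each 2D slice and then subsample one pixel every N pixels from each of the N × N start offsets. A sketch under the assumptions that M1 is already divisible by N and that the alignment shift is circular (the claim text does not specify the boundary handling); the 1-based preset values are mapped to integer offsets:

```python
import numpy as np

def psf_to_small_kernels(psf2d, i, j, N):
    """Split one 2D PSF slice into N*N small convolution kernels.

    The slice is first shifted by (i - (N+1)//2) rows and
    (j - (N+1)//2) columns (claim 4's preset values), then, for every
    start offset (a, b) with 0 <= a, b < N, one pixel is taken every
    N pixels to form a small kernel of size (M1/N, M1/N).
    Assumes M1 % N == 0; zero-pad the slice first if it is not.
    """
    M1 = psf2d.shape[0]
    assert M1 % N == 0, "pad the PSF symmetrically with zeros first"
    shifted = np.roll(psf2d,
                      shift=(i - (N + 1) // 2, j - (N + 1) // 2),
                      axis=(0, 1))
    kernels = np.empty((N, N, M1 // N, M1 // N), dtype=psf2d.dtype)
    for a in range(N):
        for b in range(N):
            kernels[a, b] = shifted[a::N, b::N]
    return kernels
```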
5. A back projection acquisition method in a reconstruction method of microscopic light field volume imaging is characterized by comprising the following steps:
step 1, decomposing a light field image (M2, M2, 1), of size M2 × M2 and with a single layer, into N × N layer images of size M2 × M2, wherein each layer image is composed of a plurality of 2D sub-images of size N × N, the pixel value of each 2D sub-image is zero everywhere except at coordinates (i, j), 1 ≤ i ≤ N, and 1 ≤ j ≤ N;
step 2, extracting the pixel at coordinates (i, j) on each layer image and rearranging the extracted pixels into a single-channel image, wherein each single-channel image is of size M2/N × M2/N, there being N × N single-channel images in total, and stacking all the single-channel images into a multi-channel image;
step 3, rearranging a 5D point spread function (M1, M1, N, N, S) of the light field microscope into a 4D convolution kernel;
step 4, rotating the 4D convolution kernel obtained in step 3 and exchanging its dimensions;
and step 5, taking the multi-channel image obtained in step 2 as the network input of a convolutional neural network, taking the 4D convolution kernel obtained in step 4 as the weights of the convolutional neural network, convolving the multi-channel image with the 4D convolution kernel, and outputting a multi-channel image.
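Step 4 of claim 5 (rotating the kernel and exchanging dimensions) matches the standard construction of a transposed-convolution kernel, which turns the forward projection into back projection. A sketch assuming a (kh, kw, c_in, c_out) kernel layout, which the claim does not fix:

```python
import numpy as np

def transpose_kernel(kernel):
    """Build the back-projection kernel from the forward 4D kernel.

    Each spatial slice is rotated by 180 degrees and the input-channel
    and output-channel dimensions are swapped, so that convolving with
    the result applies the transpose of the forward convolution.
    Assumed layout: (kh, kw, c_in, c_out).
    """
    rotated = kernel[::-1, ::-1, :, :]    # 180-degree spatial rotation
    return np.swapaxes(rotated, 2, 3)     # exchange channel dimensions
```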
6. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 5, wherein the step 3 specifically comprises:
step 31, judging whether M1 in the 5D point spread function (M1, M1, N, N, S) is evenly divisible by N; if so, proceeding to step 32; otherwise, proceeding to step 33;
step 32, traversing all 2D point spread functions contained in the 5D point spread function, converting each 2D point spread function into a 3D vector, and stacking the 3D vectors into a 4D convolution kernel;
and step 33, symmetrically padding zeros around the 2D point spread function selected in step 32, and returning to step 31.
7. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 6, wherein selecting a 2D point spread function contained in the 5D point spread function in step 32 comprises:
step 321, shifting the 2D point spread function by a first preset value along its row direction and by a second preset value along its column direction to obtain an aligned 2D point spread function;
step 322, taking a preset pixel point on the aligned 2D point spread function obtained in step 321 as a starting point, extracting one pixel every N pixels along the row direction and the column direction of the aligned 2D point spread function, the extracted pixels forming a small convolution kernel, wherein the preset pixel point extracted first has coordinates (1, 1) on the aligned 2D point spread function, and all pixel points on the aligned 2D point spread function whose abscissa is not greater than N and whose ordinate is not greater than N are traversed in sequence, thereby obtaining N × N small convolution kernels.
8. The back projection acquisition method in the reconstruction method of microscopic light field volume imaging according to claim 7, wherein in step 321, the first preset value is set to i - (N+1)/2, and the second preset value is set to j - (N+1)/2.
9. A reconstruction method for microscopic light field volume imaging comprising the method of any one of claims 1 to 8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/105190 WO2021098276A1 (en) | 2019-11-22 | 2020-07-28 | Micro-light field volume imaging reconstruction method, positive process obtaining method therein and back projection obtaining method therein |
PCT/CN2020/129083 WO2021098645A1 (en) | 2019-11-22 | 2020-11-16 | Reconstruction method for microscopic light field stereoscopic imaging, and forward and backward acquisition method therefor |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911156679 | 2019-11-22 | ||
CN2019111566792 | 2019-11-22 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110989154A true CN110989154A (en) | 2020-04-10 |
CN110989154B CN110989154B (en) | 2020-07-31 |
Family
ID=70090028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911227287.0A Active CN110989154B (en) | 2019-11-22 | 2019-12-04 | Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110989154B (en) |
WO (2) | WO2021098276A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090276188A1 (en) * | 2008-05-05 | 2009-11-05 | California Institute Of Technology | Quantitative differential interference contrast (dic) microscopy and photography based on wavefront sensors |
CN106199941A (en) * | 2016-08-30 | 2016-12-07 | 浙江大学 | A kind of shift frequency light field microscope and three-dimensional super-resolution microcosmic display packing |
CN106767534A (en) * | 2016-12-30 | 2017-05-31 | 北京理工大学 | Stereomicroscopy system and supporting 3 d shape high score reconstructing method based on FPM |
CN110047430A (en) * | 2019-04-26 | 2019-07-23 | 京东方科技集团股份有限公司 | Light field data reconstructing method, light field data restructing device and light field display device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015157769A1 (en) * | 2014-04-11 | 2015-10-15 | The Regents Of The University Of Colorado, A Body Corporate | Scanning imaging for encoded psf identification and light field imaging |
CN108364342B (en) * | 2017-01-26 | 2021-06-18 | 中国科学院脑科学与智能技术卓越创新中心 | Light field microscopic system and three-dimensional information reconstruction method and device thereof |
CN109615651B (en) * | 2019-01-29 | 2022-05-20 | 清华大学 | Three-dimensional microscopic imaging method and system based on light field microscopic system |
CN110441271B (en) * | 2019-07-15 | 2020-08-28 | 清华大学 | Light field high-resolution deconvolution method and system based on convolutional neural network |
CN110989154B (en) * | 2019-11-22 | 2020-07-31 | 北京大学 | Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof |
- 2019-12-04 CN CN201911227287.0A patent/CN110989154B/en active Active
- 2020-07-28 WO PCT/CN2020/105190 patent/WO2021098276A1/en active Application Filing
- 2020-11-16 WO PCT/CN2020/129083 patent/WO2021098645A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
XIAOSHUAI HUANG ET AL: "Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy", Nature Biotechnology |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021098276A1 (en) * | 2019-11-22 | 2021-05-27 | 北京大学 | Micro-light field volume imaging reconstruction method, positive process obtaining method therein and back projection obtaining method therein |
WO2021098645A1 (en) * | 2019-11-22 | 2021-05-27 | 北京大学 | Reconstruction method for microscopic light field stereoscopic imaging, and forward and backward acquistion method therefor |
CN113971722A (en) * | 2021-12-23 | 2022-01-25 | 清华大学 | Fourier domain optical field deconvolution method and device |
CN113971722B (en) * | 2021-12-23 | 2022-05-17 | 清华大学 | Fourier domain optical field deconvolution method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2021098276A1 (en) | 2021-05-27 |
CN110989154B (en) | 2020-07-31 |
WO2021098645A1 (en) | 2021-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | Residual non-local attention networks for image restoration | |
Luo et al. | Deep constrained least squares for blind image super-resolution | |
Farrugia et al. | Super resolution of light field images using linear subspace projection of patch-volumes | |
Belekos et al. | Maximum a posteriori video super-resolution using a new multichannel image prior | |
Ignatov et al. | Ntire 2019 challenge on image enhancement: Methods and results | |
JP2019537157A5 (en) | ||
CN112750082A (en) | Face super-resolution method and system based on fusion attention mechanism | |
CN110989154B (en) | Reconstruction method for microscopic optical field volume imaging and positive process and back projection acquisition method thereof | |
CN112070670B (en) | Face super-resolution method and system of global-local separation attention mechanism | |
CN109389667B (en) | High-efficiency global illumination drawing method based on deep learning | |
CN113538243B (en) | Super-resolution image reconstruction method based on multi-parallax attention module combination | |
CN115439702B (en) | Weak noise image classification method based on frequency domain processing | |
Xie et al. | Non-local nested residual attention network for stereo image super-resolution | |
CN117575915B (en) | Image super-resolution reconstruction method, terminal equipment and storage medium | |
CN113658044A (en) | Method, system, device and storage medium for improving image resolution | |
CN108229194A (en) | Method for Digital Image Scrambling based on multithreading model and multistage scramble | |
Wang et al. | Osffnet: Omni-stage feature fusion network for lightweight image super-resolution | |
Conde et al. | Real-time 4k super-resolution of compressed AVIF images. AIS 2024 challenge survey | |
CN111524078A (en) | Dense network-based microscopic image deblurring method | |
CN116109768A (en) | Super-resolution imaging method and system for Fourier light field microscope | |
CN117710189A (en) | Image processing method, device, computer equipment and storage medium | |
CN111951159B (en) | Processing method for super-resolution of light field EPI image under strong noise condition | |
CN116128722A (en) | Image super-resolution reconstruction method and system based on frequency domain-texture feature fusion | |
CN111861881B (en) | Image super-resolution reconstruction method based on CNN interpolation | |
CN111915492B (en) | Multi-branch video super-resolution method and system based on dynamic reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||