CN113296259A - Super-resolution imaging method and device based on aperture modulation subsystem and deep learning - Google Patents

Super-resolution imaging method and device based on aperture modulation subsystem and deep learning

Info

Publication number
CN113296259A
Authority
CN
China
Prior art keywords
deep learning
imaging
image
super
learning network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110572117.7A
Other languages
Chinese (zh)
Other versions
CN113296259B (en)
Inventor
王志强
何晋平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Astronomical Optics and Technology NIAOT of CAS
Original Assignee
Nanjing Institute of Astronomical Optics and Technology NIAOT of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Astronomical Optics and Technology NIAOT of CAS filed Critical Nanjing Institute of Astronomical Optics and Technology NIAOT of CAS
Priority to CN202110572117.7A priority Critical patent/CN113296259B/en
Publication of CN113296259A publication Critical patent/CN113296259A/en
Application granted granted Critical
Publication of CN113296259B publication Critical patent/CN113296259B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012Optical design, e.g. procedures, algorithms, optimisation routines

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a super-resolution imaging method and device based on an aperture modulation subsystem and deep learning, comprising the following steps: building an aperture modulation subsystem on the aperture stop plane of the original optical imaging system or in an externally connected configuration; constructing a deep learning network model according to a preset training strategy; acquiring and standardizing target image data; training, optimizing, and characterizing the performance of the deep learning network; and integrating the trained deep learning network and a data preprocessing module into an application program interface that the image processing module of the imaging detector calls to perform quasi-real-time super-resolution imaging and display. The invention provides a more universal approach to deep-learning-based imaging beyond the diffraction limit, enables a fast training-data acquisition process, and is expected to be applied to training-data acquisition and SR imaging of moving targets. The method suits both simple point-source targets and complex extended targets and has strong resolution enhancement capability.

Description

Super-resolution imaging method and device based on aperture modulation subsystem and deep learning
Technical Field
The invention relates to the field of high-resolution imaging, in particular to a method and device for imaging beyond the optical diffraction limit based on an aperture modulation subsystem and deep learning, and more particularly to an imaging method that enables rapid image acquisition and offers strong super-resolution performance.
Background
The resolution of conventional optical imaging systems is inherently limited by the Abbe-Rayleigh diffraction limit (DL). In recent decades there has been growing interest in developing super-resolution (SR) imaging methods that aim to break through the DL, and powerful techniques such as scanning near-field optical microscopy (SNOM), photo-activated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion microscopy (STED), and structured illumination microscopy (SIM) have been invented and are now widely applied in SR microscopy.
In the past few years, data-driven methods using deep learning networks (DLN) have become a potential solution to the "inverse problem" associated with optical SR imaging. Unlike iterative optimization algorithms, DLN-based SR methods require neither estimation of the point spread function nor numerical modeling of the imaging process; instead they obtain an optimized, non-iterative reconstruction tool through training, thereby enabling fast resolution enhancement. In 2017, Rivenson et al. [Rivenson Y, Gorocs Z, Gunaydin H, et al. Deep learning microscopy] acquired low-resolution (LR) and high-resolution (HR) image pairs by switching between two microscope objectives with different numerical apertures (NA) and different fields of view (FOV), and then trained a network in an "end-to-end" manner. The resulting deep learning network can improve the resolution, field of view, and depth of field of a microscopic imaging system, with a resolution improvement of about 1.286 times. Since then, DLNs have proven useful for improving the resolution of fluorescence microscopy, coherent imaging, STORM imaging, and more. However, data acquisition by switching objectives is time-consuming, which limits the application of that method to training-data acquisition and SR imaging of moving targets (such as living cells). At the same time, because the magnifications of the LR and HR images differ, an additional image registration operation is required to match the FOV of each image pair. The matching of objective NA and FOV must therefore be considered both in the experiment and in DLN construction, which severely limits the diversity of data acquisition and DLN construction modes; the universality of the method is thus insufficient.
Disclosure of Invention
To solve the problems in the background art, a super-resolution imaging method and device based on an aperture modulation subsystem and deep learning are provided, which effectively improve the acquisition speed of training data and the universality of the DLN construction mode.
In a first aspect, a super-resolution imaging method based on an aperture modulation subsystem and deep learning is provided, which includes the following steps:
s1: an aperture modulation subsystem is built on the aperture diaphragm plane of the original optical imaging system or in an external connection mode;
s2: constructing a deep learning network model according to a preset training strategy;
s3: acquiring and standardizing data of a target image;
s4: deep learning network training, optimization and performance characterization;
s5: and integrating the trained deep learning network and a data preprocessing module into an application program interface for calling an image processing module of an imaging detector to perform quasi-real-time super-resolution imaging and display.
In one possible design, the aperture modulation subsystem adopts an external connection mode; the external connection mode takes the real image plane of the original optical imaging system as input, and the iris diaphragm is located on an aperture stop plane between the collimating lens and the imaging lens. In this design, the original optical system is left unchanged, and the subsystem can be integrated into an existing optical system as an add-on module, realizing compact, low-cost SR imaging and promising a further performance boost for existing optical imaging equipment.
In one possible design, the DLN model training strategy may use a single LR image as the network input, with the two network outputs/labels being a medium-resolution (MR) image and an HR image, respectively; in this design, introducing the MR image ensures image fidelity while improving resolution-enhancement performance.
In one possible design, a corresponding DLN is constructed according to the DLN model training strategy, and only convolutional layers are used during DLN construction to guarantee the scalability of the network. The loss function of the constructed DLN adopts at least a data fidelity term and/or regularization terms for specific purposes, whose coefficients can be set empirically according to the imaging target. The constructed DLN includes feature-channel weighting and skip-connection operations, i.e., residual information in different frequency bands is automatically given different weights so that high-frequency components are propagated backward with adaptively larger weights.
In one possible design, target image data with different resolutions are acquired by changing the aperture of an iris diaphragm and are standardized to form the training, validation, and test data sets of the deep learning network model, where the standardization includes at least one of image segmentation, block normalization, and image noise reduction. The deep learning network is then trained, optimized, and characterized to improve its super-resolution extrapolation performance, where the training and optimization process includes at least one of data enhancement and hyper-parameter adjustment.
In one possible design, the trained deep learning network is used for super-resolution extrapolation imaging as follows: first perform an m-fold bicubic-interpolation image magnification to match the camera sampling rate, then input the magnified image into the trained deep learning network for super-resolution extrapolation enhancement.
It should be understood that the above implementation of the aperture modulation subsystem, the DLN construction method, and the image enlargement operation are only examples and are not limited, and other implementations are possible.
In a second aspect, a super-resolution imaging device based on an aperture modulation subsystem and deep learning is provided. The imaging device is realized in an external connection mode and comprises an original optical imaging system, a collimating lens, an iris diaphragm, an imaging lens, and an imaging detector, together with means/units for performing the method steps described in the first aspect or any possible design of the first aspect.
in one possible design, the collimating lens takes the real image plane of the original optical imaging system as input; then the iris diaphragm is positioned on an aperture diaphragm plane between the collimating lens and the imaging lens to realize rapid and variable aperture modulation; finally, the imaging lens re-images the object on the imaging detector.
In one possible design, the imaging detector uses the detector of the original optical imaging system, and the trained deep learning network and the data preprocessing module can be directly called by the imaging detector to perform quasi-real-time super-resolution imaging and display.
Compared with the prior art, the invention has the beneficial effects that:
In addition to the advantages of the first and second aspects above: (1) the aperture modulation subsystem changes only the numerical aperture of the optical imaging system, not its FOV or magnification, so images of different resolutions occupy the same number of pixels and no complicated image registration is required during training; meanwhile, the subsystem enables rapid acquisition of image pairs of different resolutions, with the acquisition speed set by the aperture modulation speed, and is expected to be applied to training-data acquisition and SR imaging of moving targets (such as living cells); (2) unlike the previous one-to-one image acquisition and network construction modes, the method can conveniently realize diverse image acquisition and DLN construction modes, providing a more universal way to explore beyond-diffraction-limit imaging based on DLN methods; for example, the single-input dual-output 3-aperture modulation strategy of the embodiment yields a greater resolution improvement, so the obtained SR image has higher quality and smaller distortion.
Drawings
FIG. 1 is a schematic flow diagram of an SR imaging method based on an aperture modulation subsystem and deep learning;
FIG. 2 is a schematic diagram of an embodiment of an SR imaging apparatus based on an aperture modulation subsystem and deep learning;
FIG. 3 is a schematic diagram of a dpcCARTs-Net training framework of a single-input dual-output 3-aperture modulation strategy in an embodiment of the present application;
FIG. 4 is a schematic diagram of an SR extrapolation process in an embodiment of the present application;
FIG. 5 is a schematic diagram of the sources of dpcCARTs-Net training data in an embodiment of the present application; numerical simulation: (a) 0.1% sparsity, D_LR = 2.5 mm; (b) 0.1% sparsity, D_HR = 7.5 mm; (c) 10% sparsity, D_LR = 2.5 mm; (d) 10% sparsity, D_HR = 7.5 mm; experiment: (e) longitudinal slice of corn seed;
FIG. 6 is a comparison of SR extrapolation imaging results of dpcCARTs-Net on random point sources of different densities in an embodiment of the present application; (a)-(d) are images of point-source pairs at different separations, corresponding, from left to right and top to bottom, to 1 to 2 times and 2.1 to 3 times the diffraction limit of the optical imaging system, in steps of 0.1; (a) is a 3-fold interpolated magnification of the HR image at the maximum aperture D_max = 7.5 mm; (b) is an interpolated magnification of the ideal 3-fold super-resolution image; (c) and (d) are SR extrapolations of dpcCARTs-Net trained with the 2-aperture and 3-aperture modulation strategies, respectively; (e) and (f) are cross-sections for 2.3-fold and 2.8-fold resolution enhancement; the frames of (e) and (f) correspond to the marked boxes in (c) and (d), showing that the resolution enhancement of the 2-aperture- and 3-aperture-trained dpcCARTs-Net is 2.3-fold and 2.8-fold, respectively;
FIG. 7 is a comparison of SR extrapolation imaging results of dpcCARTs-Net on a microscopic resolution plate in an embodiment of the present application; (a) the HR input to the network; (b) and (c) SR extrapolations of dpcCARTs-Net trained with the 2-aperture and 3-aperture modulation strategies, respectively; (d)-(h) cross-sections of line pairs of different resolutions: (d) 400 lp/mm; (e) 450 lp/mm; (f) 500 lp/mm; (g) 550 lp/mm; (h) 600 lp/mm; the line styles in (d)-(h) correspond one-to-one to the frames of (a)-(c);
FIG. 8 is a comparison of SR extrapolation imaging results of dpcCARTs-Net on corn seed longitudinal sections in an embodiment of the present application; (a) an example SR extrapolation result trained with the 3-aperture modulation strategy; (b)-(j) enlarged views of the regions of interest (ROI) marked by boxes of different line styles in (a), where (b, e, h) correspond to the input HR images and (c, f, i) and (d, g, j) correspond to the SR extrapolation results of dpcCARTs-Net trained with the 2-aperture and 3-aperture modulation strategies, respectively; (k) and (l) are cross-section comparisons of the white and black feature points in the middle boxed region of (c), respectively; the arrows in (g, j) point to originally blurred lines and shapes whose resolution and contrast are improved by the trained dpcCARTs-Net.
Detailed Description
The present embodiment will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, this example provides a super-resolution imaging method based on an aperture modulation subsystem and deep learning, comprising:
step S1: and the real image surface of the optical imaging system is used as an object plane of the aperture modulation subsystem to build the aperture modulation subsystem.
In this embodiment, the aperture modulation subsystem is implemented in an external connection manner. Fig. 2 is a schematic diagram of the apparatus of this embodiment. The aperture modulation subsystem is composed of a reflector 2, a collimating lens 3, a motorized iris diaphragm 4, an imaging lens 5, and an imaging detector 6, and can be integrated on the second breadboard 1. The real image plane of the original imaging optical system (the microscope 7 in the figure) is also the front focal plane of the collimating lens 3. Between the collimating lens 3 and the imaging lens 5 lies an aperture stop plane, in which the motorized iris 4 is mounted for fast, variable aperture modulation. The imaging lens 5 re-images the object onto the imaging detector 6. The integrated aperture modulation subsystem is easy to align and easy to remove, and can be installed in an existing optical imaging system with only simple modification; once the data acquisition required for training is finished, it can be removed and the trained DLN can continue to serve the original optical imaging system. Since the objective lens 10 need not be switched, ideally all the adjustable devices in the apparatus of this embodiment, such as the motorized iris 4, the imaging detector 6, the illumination power, and the motorized stage 11, can be controlled by the data acquisition program on the notebook computer 9, realizing rapid, automatic image acquisition and storage, which is a great advantage for moving-target imaging.
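As a rough numerical illustration of why modulating only the iris changes resolution but not field of view or magnification, the sketch below computes the detector-plane Rayleigh limit for the three iris diameters used later in this embodiment. The focal lengths f_col and f_img are assumed example values, not taken from the patent.

    # Illustration only: the iris diameter D sets the resolution, while the
    # magnification is fixed by the lens pair. Focal lengths below are
    # assumed example values, not specified in this patent.
    wavelength = 0.53e-6           # m, narrow-band illumination (530 nm filter)
    f_col, f_img = 0.15, 0.30      # m, assumed collimator / imaging lens focal lengths

    magnification = f_img / f_col  # set by the lens pair, independent of the iris

    for D_mm in (2.5, 5.0, 7.5):   # iris diameters used in this embodiment
        D = D_mm * 1e-3
        delta = 1.22 * wavelength * f_img / D   # Rayleigh limit at the detector
        print(f"D = {D_mm} mm: magnification = {magnification:.2f}, "
              f"Rayleigh limit = {delta * 1e6:.1f} um")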
Optionally, the aperture modulation subsystem may adopt an internal mode, realized by installing an iris diaphragm on the aperture stop plane of the original optical imaging system; for a specific implementation see the patent [publication number: CN108398805A].
Further optionally, the aperture modulation subsystem may adopt a beam-splitting mode, in which at least one beam splitter divides the imaging light path into at least two beams and at least two variable diaphragms are used to acquire target image data of different resolutions. This design relaxes the requirement on iris modulation speed and further improves real-time image acquisition and SR imaging of moving targets.
Step S2: and constructing a deep learning network model according to a preset training strategy.
The predetermined training strategy of this embodiment is a three-aperture modulation strategy: a DLN structure with a single input and dual outputs/labels is constructed using an LR image as input data and an MR image and an HR image as label data, where the LR, MR, and HR images are acquired at iris apertures D_LR, D_MR = N_1 × D_LR, and D_HR = N_2 × D_LR (N_1 < N_2), respectively, and D_HR corresponds to the maximum numerical aperture of the original optical system, i.e., the diffraction-limited case.
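A minimal sketch of the resulting acquisition loop is given below, using the aperture values of this embodiment (D_LR = 2.5 mm, N_1 = 2, N_2 = 3). The `iris` and `camera` objects and their methods `set_diameter_mm` and `grab_frame` are hypothetical placeholders for vendor-specific device drivers.

    # Sketch of one acquisition cycle of the three-aperture strategy.
    D_LR = 2.5                 # mm, smallest iris aperture
    N1, N2 = 2, 3              # aperture ratios, N1 < N2
    APERTURES = {"LR": D_LR, "MR": N1 * D_LR, "HR": N2 * D_LR}

    def acquire_triplet(iris, camera):
        """Capture one pixel-registered LR/MR/HR triplet of the same scene.

        Only the iris changes between frames, so FOV and magnification stay
        fixed and no image registration is needed. `iris.set_diameter_mm`
        and `camera.grab_frame` are hypothetical driver calls.
        """
        triplet = {}
        for name, diameter in APERTURES.items():
            iris.set_diameter_mm(diameter)
            triplet[name] = camera.grab_frame()
        return triplet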
Optionally, based on this example, more diversified image acquisition and DLN construction modes can be obtained by simple extension and different network architectures; for example, the input and output can adopt one-to-many, many-to-one, or many-to-many construction modes, or the model can be built from the image sequence corresponding to a continuous change of the iris aperture. Unlike the one-to-one image acquisition and DLN construction mode proposed by Rivenson et al., this gives the method stronger universality.
The DLN proposed in this example is named dpcCARTs-Net, and its structure is shown in FIG. 3. The upper half of FIG. 3 is the overall architecture of the network, composed of three parts: original feature extraction, deep pyramidal cascaded channel-weighted residual transmission blocks, and residual reconstruction. The lower half of FIG. 3 is the backbone of the network, the channel-weighted residual transmission (CART) module. The element-wise addition inside the CART module is a short skip connection, and the element-wise addition from input to output is a long skip connection.
Assume that the LR input, the HR label, and the corresponding network output are denoted by x, y_1, and ŷ_1, respectively, and that the MR label y_2 and its corresponding network output ŷ_2 are optional (drawn with dashed connecting lines in FIG. 3). The "one-to-one" training scenario proposed by Rivenson et al. can be realized simply by breaking the dashed connections in FIG. 3 when the MR label data are not used. For fast convergence of the DLN, x, y_1, and y_2 are the preprocessed image data acquired at D_LR, D_HR, and D_MR, respectively. This embodiment uses only one convolutional layer C to extract the original features f_0 of the input, i.e.,
f_0 = C(x) (1)
Then f_0 is input into the deep pyramidal cascaded channel-weighted residual transmission module group, which consists of N CART modules whose numbers of feature channels increase step by step. Denoting the operation of the k-th CART module by H_cart^k, the deep feature f_deep extracted by the module group can be expressed as
f_deep = H_cart^N(···(H_cart^k(···(H_cart^1(f_0))))) (2)
f_deep is then input into a residual reconstruction layer, also consisting of one convolutional layer C, which predicts the residual between input and output; the output of dpcCARTs-Net can therefore be expressed as
ŷ_1 = C(f_deep) + x (3)
The CART module: for a DLN aimed at SR imaging applications, the more fully the network can learn high-frequency features, the better the SR performance. Unlike the method of Rivenson et al., which weights all feature channels equally, the invention automatically gives different weights to residual information in different frequency bands so that the network focuses on more high-frequency information features. The CART module used in this embodiment is similar to that of Zhang et al.; the main change is that, because the feature channel count c grows step by step across the CART blocks, the channel count of the intermediate low-dimensional features is obtained from Λ(c/r), where r is a constant scaling ratio and Λ(·) is the ceiling operation. This operation helps the DLN adaptively learn high-frequency features in several different frequency bands and can improve the resolution and the fidelity of the image simultaneously. The feature-channel growth formula and the short skip connections of the convolutional layers in the CART module are the same as those of Rivenson et al. By contrast, for the same total number N of residual transmission modules, the depth of dpcCARTs-Net does not increase significantly, yet it provides a wider receptive field and better learning results. The added long skip connection lets low-frequency information reach the output quickly, so the convolutional network concentrates on transmitting high-frequency information without distorting the low-frequency components, while also stabilizing network training.
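The following is a minimal TensorFlow/Keras sketch of this architecture, written only with convolutional layers so the network stays fully scalable. It is an illustrative reading of the description above, not the patented implementation: the initial channel count c0, the linear channel-growth step, and the squeeze-and-gate form of the channel weighting are all assumptions.

    import math
    import tensorflow as tf
    from tensorflow.keras import layers

    def cart_block(x, c, r=8):
        """One CART block: residual conv features gated by channel weights."""
        x = layers.Conv2D(c, 3, padding="same", activation="relu")(x)  # grow to c channels
        f = layers.Conv2D(c, 3, padding="same", activation="relu")(x)
        f = layers.Conv2D(c, 3, padding="same")(f)
        # channel weighting: squeeze to ceil(c/r) channels, expand, gate by sigmoid
        w = layers.Lambda(lambda t: tf.reduce_mean(t, axis=[1, 2], keepdims=True))(f)
        w = layers.Conv2D(math.ceil(c / r), 1, activation="relu")(w)
        w = layers.Conv2D(c, 1, activation="sigmoid")(w)
        f = layers.Multiply()([f, w])       # adaptively re-weighted residual
        return layers.Add()([x, f])         # short skip connection

    def build_dpccarts_net(c0=32, growth=8, n_blocks=8, mr_tap=5, r=8):
        inp = layers.Input(shape=(None, None, 1))       # any-size grayscale input
        f = layers.Conv2D(c0, 3, padding="same")(inp)   # f_0 = C(x), Eq. (1)
        outputs = []
        for k in range(1, n_blocks + 1):                # cascaded CARTs, Eq. (2)
            f = cart_block(f, c0 + growth * (k - 1), r)
            if k == mr_tap:                             # optional MR head, 5th block
                mr_res = layers.Conv2D(1, 3, padding="same")(f)
                outputs.append(layers.Add(name="mr")([inp, mr_res]))
        hr_res = layers.Conv2D(1, 3, padding="same")(f)  # residual reconstruction
        outputs.append(layers.Add(name="hr")([inp, hr_res]))  # long skip, Eq. (3)
        return tf.keras.Model(inp, outputs)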
Loss (cost) function: this embodiment uses the mean squared error (the L2 norm) as the data fidelity term and the mean absolute error (the L1 norm) as one regularizer, used to improve the weight sparsity and the spatial sparsity of the imaging target simultaneously. Since the L2 and L1 norms are both computed pixel by pixel, this embodiment uses the structural similarity index (SSIM) as another regularizer to account for the local correlation of the image and obtain more realistic SR results. Since the goal of training is to minimize the loss function, the SSIM between ŷ and y is converted to (1 - L_SSIM). The mixed loss function can then be expressed as
L = L_MSE + λ_1·L_MAE + λ_2·(1 - L_SSIM) (4)
where λ_1 and λ_2 are the corresponding regularization coefficients, usually set empirically according to the characteristics of the imaging target. In this embodiment, the Adam (adaptive moment estimation) optimization strategy is used to learn the network parameters.
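A sketch of Eq. (4) as a Keras-compatible loss is shown below; the default λ values follow the empirical ranges quoted later for the point-source simulations and are otherwise assumptions.

    import tensorflow as tf

    def make_mixed_loss(lam1=0.001, lam2=0.0001, max_val=1.0):
        """Eq. (4): L = L_MSE + lam1 * L_MAE + lam2 * (1 - SSIM)."""
        def loss(y_true, y_pred):
            mse = tf.reduce_mean(tf.square(y_true - y_pred))   # data fidelity
            mae = tf.reduce_mean(tf.abs(y_true - y_pred))      # sparsity regularizer
            ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=max_val))
            return mse + lam1 * mae + lam2 * (1.0 - ssim)
        return loss

    # e.g. model.compile(optimizer=tf.keras.optimizers.Adam(),
    #                    loss={"mr": make_mixed_loss(), "hr": make_mixed_loss()})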
It should be noted that the DLN training model provided by the present application may also adopt structures such as VDSR, U-Net, and their variants, or other advanced feature-channel weighting operations; the loss function may use other data fidelity terms and/or regularization terms for specific purposes, and the network parameters may also be learned with optimization strategies such as SGD or RMSProp, none of which is limited herein.
The super-resolution extrapolation imaging process: the main purpose of the invention is to exceed the diffraction limit of the original optical imaging system, so during SR extrapolation, as shown in FIG. 4, the HR image acquired at D_HR, i.e., under the maximum numerical aperture of the original optical system, is taken as input. As described above, the LR image (the training input) and the HR image (the extrapolation input) occupy the same number of pixels, which means the imaging-detector sampling rates they require differ by a factor of m. To solve this problem, the HR image must first undergo an image magnification operation. In this embodiment, a bicubic interpolation method embedded with a ReLU operation is used to numerically magnify the HR image, ensuring that all pixel values of the image remain non-negative. The whole SR extrapolation process is as follows: the HR image of h × w pixels is first magnified m times (m = N_1 or N_2) to match the detector sampling rate, and the magnified (h × m) × (w × m)-pixel image is then input into the trained dpcCARTs-Net, yielding an image that exceeds the diffraction limit of the imaging system with rapid resolution enhancement. The key to this operation is the scalability of dpcCARTs-Net: only convolutional layers are used in network construction, so dpcCARTs-Net can perform SR extrapolation on images of different sizes.
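The extrapolation step can be sketched as follows, assuming the trained dual-output model from the earlier sketch; tf.nn.relu plays the role of the embedded ReLU that clamps the negative overshoot of bicubic interpolation.

    import tensorflow as tf

    def sr_extrapolate(model, hr_image, m=3):
        """Magnify a diffraction-limited HR image m-fold (bicubic + ReLU),
        then run the trained, fully convolutional dual-output network."""
        x = tf.convert_to_tensor(hr_image, tf.float32)[tf.newaxis, ..., tf.newaxis]
        h, w = x.shape[1], x.shape[2]
        x = tf.image.resize(x, (h * m, w * m), method="bicubic")  # match sampling rate
        x = tf.nn.relu(x)             # bicubic kernels can overshoot below zero
        return tf.squeeze(model(x)[-1]).numpy()   # HR head of the dual-output net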
Optionally, the numerical magnification may be implemented with a more advanced interpolation method. Further optionally, with the numerical aperture kept unchanged, the same m-fold magnification can be realized physically in the experiment with a lens combination or a microscope objective, in which case the preprocessed HR image data can be input directly into the trained dpcCARTs-Net for SR extrapolation.
Step S3: data acquisition and normalization of target images.
To verify the super-resolution performance of dpcCARTs-Net, both numerical simulation and experimental studies were carried out in this example. Target image data of different resolutions were acquired according to the training strategy of step S2 and preprocessed to construct the training, validation, and test sets of the DLN. With the maximum aperture set to D_max = 7.5 mm, the aperture modulation process yields image data at D_LR = 2.5 mm, D_MR = 5 mm, and D_HR = 7.5 mm, where the images taken at D_LR = 2.5 mm serve as input data for dpcCARTs-Net and the images collected at D_MR = 5 mm and D_HR = 7.5 mm serve as its label data; the corresponding theoretical resolution enhancement factors are 2 and 3, respectively.
In the numerical simulation, the focal length of the optical imaging system is assumed to be 30 cm; at the maximum aperture D_max = 7.5 mm the Airy disk of the system occupies 12 pixels, and the wavelength is λ = 0.53 μm. As shown in FIG. 5, the invention uses randomly generated point sources of different sparsity as the targets to be imaged, with the sparsity evenly distributed between 0.1% and 10%; FIG. 5(a-d) gives random examples of the sparsest and densest cases. The influence of noise is not considered here. The initial images are cropped into non-overlapping sub-image blocks of P × P (e.g., P = 64) pixels, each sub-image block is normalized separately, and the corresponding training, validation, and test data are generated.
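A sketch of this blockwise preparation is given below; the zero-mean, unit-variance form of the per-block normalization is an assumption, as the embodiment only states that each block is normalized separately.

    import numpy as np

    def make_patches(img, p=64):
        """Cut an image into non-overlapping p x p blocks, normalizing each
        block independently (zero mean, unit variance assumed here)."""
        h, w = img.shape
        patches = []
        for i in range(0, h - p + 1, p):
            for j in range(0, w - p + 1, p):
                blk = img[i:i + p, j:j + p].astype(np.float32)
                blk = (blk - blk.mean()) / (blk.std() + 1e-8)
                patches.append(blk)
        return np.stack(patches)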
In the actual experiment, in order to obtain enough experimental data for training dpcCARTs-Net, this embodiment uses a 20X/0.4NA microscope objective for data acquisition. A commercial wide-field microscope was placed on the first breadboard 8 (a stable optical platform) in FIG. 2 to reduce errors caused by platform vibration. A filter around 530 nm was used in the experiment to achieve a narrow wavelength band. The imaging sample was a longitudinal section of corn seed (slice thickness: 5 μm; hematoxylin staining), as shown in FIG. 5(e). All images in this experiment were fully sampled by the imaging detector. Optionally, data enhancement is performed by image rotation and image cropping, with the random overlap rate during cropping chosen between 35% and 45%.
Compared with numerical simulation, various error factors such as noise and aberration inevitably exist in the experiment, so before the data enhancement, sub-image-block normalization, and data preprocessing required by the deep learning platform, a noise-reduction filtering operation must be applied to the experimental data. In this embodiment it is computed as
x_f = |F^-1(F(x) × P_cutoff)| (5)
where F and F^-1 denote the Fourier transform and its inverse, x denotes the acquired raw image data, x_f is the filtered image data, and P_cutoff is a circular low-pass filter whose radius equals the theoretical cut-off frequency of the imaging-system configuration at which image x was acquired.
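Eq. (5) can be sketched in NumPy as follows; expressing the cut-off radius as a fraction of the Nyquist frequency is an implementation assumption.

    import numpy as np

    def lowpass_denoise(img, cutoff_frac):
        """Eq. (5): x_f = |F^-1(F(x) * P_cutoff)|, with P_cutoff a circular
        pass-band; cutoff_frac is the system cut-off as a fraction of Nyquist."""
        h, w = img.shape
        X = np.fft.fftshift(np.fft.fft2(img))        # F(x), zero frequency centered
        fy = np.fft.fftshift(np.fft.fftfreq(h))      # cycles per pixel
        fx = np.fft.fftshift(np.fft.fftfreq(w))
        FY, FX = np.meshgrid(fy, fx, indexing="ij")
        P = np.hypot(FX, FY) <= cutoff_frac * 0.5    # circular low-pass filter
        return np.abs(np.fft.ifft2(np.fft.ifftshift(X * P)))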
Alternatively, the image denoising operation may be implemented using a more advanced denoising method.
step S4: deep learning network training, optimization and performance characterization.
In this embodiment, the number of feature channels of the original feature extraction convolutional layer can be set to c_0; the number of feature channels of the residual reconstruction convolutional layer is determined by the image being processed, 1 for grayscale images and 3 for RGB color images. The total number of CART modules can be N = 8, with the MR output ŷ_2 taken from the 5th CART module and the HR output ŷ_1 from the 8th CART module. Except in the channel weighting unit and the enhancement convolutional layer, where the kernel size is set to 1 × 1, the convolution kernel size can be set to 3 × 3 in all other convolutional layers, and all convolutional layers keep the feature-map size unchanged by zero padding; the scaling ratio r in the channel weighting unit can be set to 8. dpcCARTs-Net is built on TensorFlow (v2.1.1) and Keras (v2.2.4-tf) and trained on a standard workstation.
For the numerical simulation data, considering the sparse characteristics of point-source targets, λ_1 can be taken empirically as 0.001 to 0.005 and λ_2 as 0.00001 to 0.0001 during training; the other training parameters use the TensorFlow platform defaults. It should be noted that simply breaking the dashed connections in FIG. 3 enables "one-to-one" training from LR images to HR images. Therefore, in the numerical simulation verification, this embodiment trains the single-input dual-output 3-aperture modulation and the single-input single-output 2-aperture modulation separately for comparison; the trained networks are denoted SR-3AP and SR-2AP, respectively. To characterize the resolution improvement of the trained dpcCARTs-Net, the invention uses two point sources of equal intensity that are just resolvable under the Rayleigh criterion to test the training effect. As seen from FIG. 6(a, b), bicubic 3-fold interpolation merely raises the sampling rate of the image and has no effect on its resolution. Compared with FIG. 6(a), apart from slight distortion, both SR-2AP in FIG. 6(c) and SR-3AP in FIG. 6(d) can exceed the diffraction limit of the optical imaging system and achieve the SR imaging effect. As shown by the labeled areas of FIG. 6(c) and FIG. 6(d), the resolution enhancement capabilities of SR-2AP and SR-3AP are 2.3-fold and 2.8-fold, respectively; the latter is significantly better, and its SR extrapolated image is more similar to the ideal 3-fold SR imaging result of FIG. 6(b). Cross-sections for the two resolution cases are shown in FIG. 6(e) and FIG. 6(f); for comparison, each cross-section line is intensity-normalized separately, 0.735 being the normalized intensity at the saddle point under the Rayleigh criterion. The results show that adding one more aperture of label data improves both the SR performance and the image fidelity of dpcCARTs-Net to some extent.
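The 0.735 saddle value can be checked numerically: the short script below sums two equal Airy patterns separated by the Rayleigh distance (the first zero of J1) and reports the saddle-to-peak intensity ratio.

    import numpy as np
    from scipy.special import j1

    def airy(v):
        """Normalized Airy intensity [2*J1(v)/v]^2 with airy(0) = 1."""
        v = np.asarray(v, dtype=float)
        out = np.ones_like(v)
        nz = v != 0
        out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
        return out

    v0 = 3.8317                        # first zero of J1 = Rayleigh separation
    v = np.linspace(-v0, 2 * v0, 2001)
    profile = airy(v) + airy(v - v0)   # two equal, incoherent point sources
    saddle = profile[np.argmin(np.abs(v - v0 / 2))]
    peak = profile[np.argmin(np.abs(v))]
    print(saddle / peak)               # ~0.735, the saddle level in FIG. 6(e, f)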
Optionally, the SR performance of dpcCARTs-Net is expected to improve further with subsequent parameter optimization, for example by assigning different weights to the losses of the different labels in the 3-aperture modulation case.
For the corn seed longitudinal-section images collected in the experiment, considering the complexity of biological samples, λ_1 is taken empirically as 0.00001 to 0.0001 and λ_2 as 0.0001 to 0.001 during training. For the trained dpcCARTs-Net, this example first quantitatively characterizes the SR performance of the network using a microscopic high-resolution target plate. In FIG. 7(a) the resolution of the network's HR input is about 400 lp/mm; the resolution of SR-2AP in FIG. 7(b) is about 500 lp/mm; and the resolution of SR-3AP in FIG. 7(c) is close to 600 lp/mm. As is clear from FIG. 7, although the images show slight shifts, a well-trained dpcCARTs network can exceed the diffraction limit, and the resolution enhancement capability of SR-3AP is superior to that of SR-2AP.
To test the SR performance of dpcCARTs-Net on biological samples, this example used a previously unseen corn seed longitudinal section for SR extrapolation. FIG. 8(a) shows an example of SR-3AP extrapolation, and FIG. 8(b-j) shows enlarged ROI views. The results show that, compared with the HR images input to the network, the dpcCARTs-Net output images are significantly improved in both resolution and contrast, and SR-3AP has superior SR capability to SR-2AP. FIG. 8(k) and FIG. 8(l) compare cross-section lines through the white and black feature points of FIG. 8(a-c), indicating that SR-3AP extrapolation more clearly distinguishes the two blurred white/black feature points of FIG. 8(a); the arrows in FIG. 8(f, i) indicate that SR-3AP can provide clearer biological structural features. These results help further study of the biological information of the specimen.
It should be noted that the opening speed of the motorized variable diaphragm used in the experiments of the invention is about 6 mm/s, which may be unsuitable for imaging fast-moving objects. Optionally, micro-electrofluidic iris technology, which offers fast response, a circular aperture, and miniaturization, can be an alternative solution to this problem.
Step S5: the trained deep learning network and the data preprocessing module are integrated into an application program interface for the image processing module of the imaging detector to call, realizing "quasi-real-time" SR imaging and display.
As described in step S1, the imaging detector of the aperture modulation subsystem is the imaging detector of the original optical imaging system, so the trained DLN can continue to be applied to the original optical imaging system. In this embodiment, the trained dpcCARTs-Net and the data preprocessing module are integrated into the API, so that in subsequent experiments the imaging detector can call the API directly to realize "quasi-real-time" SR imaging and display. Optionally, combined with a suitable image segmentation algorithm, the SR imaging and display speed for a region of interest can be further improved.
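One way to package this is sketched below: a small wrapper class that loads the trained model and processes detector frames one by one. The model path, the normalization step, and the magnification parameter m are assumptions for illustration.

    import numpy as np
    import tensorflow as tf

    class SRImagingAPI:
        """Wrapper callable by the detector's image processing module."""

        def __init__(self, model_path="dpccarts_net.h5", m=3):
            self.model = tf.keras.models.load_model(model_path, compile=False)
            self.m = m

        def process_frame(self, frame: np.ndarray) -> np.ndarray:
            x = frame.astype(np.float32)
            x = (x - x.mean()) / (x.std() + 1e-8)   # preprocessing, as in training
            x = tf.convert_to_tensor(x)[tf.newaxis, ..., tf.newaxis]
            h, w = x.shape[1], x.shape[2]
            x = tf.image.resize(x, (h * self.m, w * self.m), method="bicubic")
            x = tf.nn.relu(x)                       # keep pixel values non-negative
            return tf.squeeze(self.model(x)[-1]).numpy()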
In summary, the invention realizes SR imaging beyond the diffraction limit of an optical system by combining an aperture modulation subsystem with deep learning. Compared with the method of Rivenson et al., the proposed method offers more flexibility in image acquisition and DLN architecture and provides a more universal approach to DLN-based SR imaging research; the proposed method and device enable a fast training-data acquisition process and are expected to be applied to training-data acquisition and SR imaging of moving targets. The disclosed experimental system is a preferred embodiment of the external aperture modulation subsystem, and the single-input dual-output data acquisition strategy and corresponding DLN architecture are likewise preferred embodiments. The results show that this embodiment applies to both simple targets (such as point sources) and complex extended targets, with strong resolution enhancement capability.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A super-resolution imaging method based on an aperture modulation subsystem and deep learning, characterized by comprising the following steps:
s1: an aperture modulation subsystem is built on the aperture diaphragm plane of the original optical imaging system or in an external connection mode;
s2: constructing a deep learning network model according to a preset training strategy;
s3: acquiring and standardizing data of a target image;
s4: deep learning network training, optimization and performance characterization;
s5: and integrating the trained deep learning network and a data preprocessing module into an application program interface for calling an image processing module of an imaging detector to perform quasi-real-time super-resolution imaging and display.
2. The method of claim 1, wherein the aperture modulation subsystem changes only the numerical aperture of the optical imaging system without changing its field of view and magnification.
3. The method of claim 1, wherein a deep learning network is constructed by a predetermined training strategy; the input and output of the deep learning network adopt a one-to-one or one-to-many or many-to-one or many-to-many construction mode; the build mode uses convolutional layers to ensure the scalable nature of the network.
4. The method of any one of claims 1-3, wherein the aperture modulation subsystem is configured to rapidly acquire image data of different resolutions required by the deep learning network; the image data are subjected to a standardization process comprising at least one of image segmentation, block normalization, and image noise reduction; the standardized image data form the training, validation, and test data sets of the deep learning network model; and the loss function of the deep learning network training adopts at least one of a data fidelity term and/or a regularization term used for a specific purpose.
5. The method according to any one of claims 1-4, wherein the deep learning network is trained, optimized, and performance-characterized, the training and optimization being used to improve the super-resolution extrapolation performance of the deep learning network and including at least one of data enhancement and hyper-parameter tuning.
6. The method according to any one of claims 1-5, wherein the trained deep learning network is used for super-resolution extrapolation imaging, the super-resolution extrapolation imaging comprising: firstly, performing m times of image amplification operation to match the sampling rate of a camera, and then inputting the amplified image into a trained deep learning network to perform super-resolution extrapolation enhancement.
7. A super-resolution imaging device based on an aperture modulation subsystem and deep learning is characterized in that the imaging device is realized in an external connection mode and at least comprises an original optical imaging system, a collimating lens, an iris diaphragm, an imaging lens and an imaging detector.
8. The apparatus of claim 7, wherein the collimating lens takes as input the real image plane of the primary optical imaging system, the iris is located at an aperture stop plane between the collimating lens and the imaging lens, the imaging lens re-images the object on the imaging detector, the imaging detector is a detector of the primary optical imaging system, and the trained deep learning network and data preprocessing module can be directly invoked by the imaging detector.
CN202110572117.7A 2021-05-25 2021-05-25 Super-resolution imaging method and device based on aperture modulation subsystem and deep learning Active CN113296259B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110572117.7A CN113296259B (en) 2021-05-25 2021-05-25 Super-resolution imaging method and device based on aperture modulation subsystem and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110572117.7A CN113296259B (en) 2021-05-25 2021-05-25 Super-resolution imaging method and device based on aperture modulation subsystem and deep learning

Publications (2)

Publication Number Publication Date
CN113296259A true CN113296259A (en) 2021-08-24
CN113296259B CN113296259B (en) 2022-11-08

Family

ID=77324875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110572117.7A Active CN113296259B (en) 2021-05-25 2021-05-25 Super-resolution imaging method and device based on aperture modulation subsystem and deep learning

Country Status (1)

Country Link
CN (1) CN113296259B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092329A (en) * 2021-11-19 2022-02-25 复旦大学 Super-resolution fluorescence microscopic imaging method based on sub-pixel neural network
CN114967121A (en) * 2022-05-13 2022-08-30 哈尔滨工业大学 End-to-end single lens imaging system design method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010015026A (en) * 2008-07-04 2010-01-21 Olympus Corp Super-resolution microscope and spatial modulation optical element used therein
CN108398805A (en) * 2018-02-14 2018-08-14 中国科学院国家天文台南京天文光学技术研究所 Super-resolution telescope imaging method and its system
US20190333199A1 (en) * 2018-04-26 2019-10-31 The Regents Of The University Of California Systems and methods for deep learning microscopy
CN111415303A (en) * 2020-02-14 2020-07-14 清华大学 Zone plate coding aperture imaging method and device based on deep learning
CN112037136A (en) * 2020-09-18 2020-12-04 中国科学院国家天文台南京天文光学技术研究所 Super-resolution imaging method based on aperture modulation
CN112435305A (en) * 2020-07-09 2021-03-02 上海大学 Ultra-high resolution ultrasonic imaging method based on deep learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010015026A (en) * 2008-07-04 2010-01-21 Olympus Corp Super-resolution microscope and spatial modulation optical element used therein
CN108398805A (en) * 2018-02-14 2018-08-14 中国科学院国家天文台南京天文光学技术研究所 Super-resolution telescope imaging method and its system
US20190333199A1 (en) * 2018-04-26 2019-10-31 The Regents Of The University Of California Systems and methods for deep learning microscopy
CN111415303A (en) * 2020-02-14 2020-07-14 清华大学 Zone plate coding aperture imaging method and device based on deep learning
CN112435305A (en) * 2020-07-09 2021-03-02 上海大学 Ultra-high resolution ultrasonic imaging method based on deep learning
CN112037136A (en) * 2020-09-18 2020-12-04 中国科学院国家天文台南京天文光学技术研究所 Super-resolution imaging method based on aperture modulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN Yao: "A review of the development of differential phase contrast microscopy imaging technology", Infrared and Laser Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092329A (en) * 2021-11-19 2022-02-25 复旦大学 Super-resolution fluorescence microscopic imaging method based on sub-pixel neural network
CN114967121A (en) * 2022-05-13 2022-08-30 哈尔滨工业大学 End-to-end single lens imaging system design method

Also Published As

Publication number Publication date
CN113296259B (en) 2022-11-08

Similar Documents

Publication Publication Date Title
WO2019103909A1 (en) Portable microscopy device with enhanced image performance using deep learning and methods of using the same
US9332190B2 (en) Image processing apparatus and image processing method
CN113296259B (en) Super-resolution imaging method and device based on aperture modulation subsystem and deep learning
JP6576921B2 (en) Autofocus method and system for multispectral imaging
CN109255758B (en) Image enhancement method based on all 1 x 1 convolution neural network
JP5372068B2 (en) Imaging system, image processing apparatus
US11776094B1 (en) Artificial intelligence based image quality assessment system
CN110070517B (en) Blurred image synthesis method based on degradation imaging mechanism and generation countermeasure mechanism
US20070047804A1 (en) Image processing apparatus which processes an image obtained by capturing a colored light-transmissive sample
CN110246083B (en) Fluorescence microscopic image super-resolution imaging method
WO2018227465A1 (en) Sparse positive source separation model-based image deblurring algorithm
CN111429433A (en) Multi-exposure image fusion method based on attention generation countermeasure network
CN110097106A (en) The low-light-level imaging algorithm and device of U-net network based on deep learning
JPH10509817A (en) Signal restoration method and apparatus
CN115032196B (en) Full-scribing high-flux color pathological imaging analysis instrument and method
CN113568156A (en) Spectral microscopic imaging device and implementation method
CN116721017A (en) Self-supervision microscopic image super-resolution processing method and system
CN113917677B (en) Three-dimensional super-resolution light sheet microscopic imaging method and microscope
Fazel et al. Analysis of super-resolution single molecule localization microscopy data: A tutorial
US20030071909A1 (en) Generating images of objects at different focal lengths
CN111524078A (en) Dense network-based microscopic image deblurring method
CN115586164A (en) Light sheet microscopic imaging system and method based on snapshot time compression
CN112819742B (en) Event field synthetic aperture imaging method based on convolutional neural network
CN110443755B (en) Image super-resolution method based on high-low frequency signal quantity
CN115428037A (en) Method and system for collecting living cell biological sample fluorescence image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant