CN115689880A - Fluorescence microscopic image super-resolution method based on FSRCNN

Info

Publication number
CN115689880A
CN115689880A
Authority
CN
China
Prior art keywords
resolution
image
super
fluorescence microscopic
images
Prior art date
Legal status
Pending
Application number
CN202110859098.6A
Other languages
Chinese (zh)
Inventor
迟崇巍 (Chi Chongwei)
何坤山 (He Kunshan)
田捷 (Tian Jie)
Current Assignee
Zhuhai Dipu Medical Technology Co ltd
Original Assignee
Zhuhai Dipu Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Dipu Medical Technology Co ltd
Priority to CN202110859098.6A
Publication of CN115689880A
Legal status: Pending

Landscapes

  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Microscopes, Condenser (AREA)

Abstract

The invention relates to a fluorescence microscopic image super-resolution method based on FSRCNN, which comprises the following steps: 1: acquiring common fluorescence microscopic images under good imaging conditions, and obtaining the corresponding super-resolution images with the super-resolution radial fluctuation (SRRF) algorithm; 2: augmenting the common fluorescence microscopic images and the SRRF super-resolution images, and then dividing them proportionally into a training set, a test set and a validation set; 3: training on the prepared data set with a fast super-resolution convolutional neural network (FSRCNN) and establishing a learning model; 4: verifying the performance of the learning model with the validation set; 5: quantitatively evaluating the performance of the model with the test set; 6: collecting low-resolution fluorescence microscopic images under weak imaging conditions; 7: performing super-resolution processing on the low-resolution fluorescence microscopic images with the trained FSRCNN to obtain the corresponding high-resolution images. The invention can not only rapidly generate fluorescence microscopic images with less noise and clear details, but also reduce the photobleaching of fluorescent samples caused by repeated or excessive exposure.

Description

Fluorescence microscopic image super-resolution method based on FSRCNN
Technical Field
The invention belongs to the field of super-resolution fluorescence microscopic imaging, and particularly relates to a fluorescence microscopic image super-resolution method based on FSRCNN, which mainly aims to improve the resolution of a fluorescence microscopic image.
Background
The optical microscope is an important tool for researchers exploring the microscopic world. However, the images observed with a conventional fluorescence microscope cannot meet researchers' requirements. Super-resolution fluorescence microscopy breaks through the optical diffraction limit and provides powerful technical support for scholars in biology, medicine and related fields, allowing the rich and complex dynamics of subcellular structures within cells to be observed at finer spatio-temporal resolution.
One way to improve super-resolution fluorescence microscopic imaging is to raise the resolution of conventional fluorescence microscopic images through image reconstruction algorithms. Super-resolution radial fluctuations (SRRF) and super-resolution optical fluctuation imaging (SOFI) are two typical representatives. The SOFI algorithm records time-series images and statistically analyzes fluorescence fluctuations; samples are usually labeled with quantum dots, the independent random blinking of the dyes is accumulated, and molecular centroids are localized to reconstruct a super-resolution image, so its demand for raw data is very large. The SRRF method reconstructs a super-resolution image from the random fluctuation characteristics of fluorescent molecules in the time-series images obtained in wide-field imaging, and achieves good super-resolution reconstruction on images acquired with a fluorescence microscope. Judging from developments in recent years, the SRRF reconstruction algorithm has the better application prospect, but the conventional SRRF algorithm must select a certain number of wide-field fluorescence frames as input according to the image quality (generally, the weaker the fluorescence signal, the more wide-field frames are needed). The method therefore has limitations for live-cell observation; for example, it is unsuited to observing continuous dynamic processes in living cells.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a fluorescence microscopic image super-resolution method based on FSRCNN, which can obtain, from a low-resolution fluorescence microscopic image, an image with the same resolution as an SRRF super-resolution reconstruction. At the same time, it can alleviate the fluorescence photobleaching caused by the repeated or excessive exposure required by the SRRF algorithm.
The method comprises the following steps:
step 1: aiming at various cell samples, acquiring common fluorescence microscopic images under good imaging conditions, and obtaining corresponding super-resolution images by utilizing a super-resolution radial fluctuation algorithm (SRRF);
step 2: augmenting the common fluorescence microscopic images and the SRRF super-resolution images from step 1, and then dividing the images proportionally into a training set, a test set and a validation set;
step 3: constructing the algorithm environment of a fast super-resolution convolutional neural network (FSRCNN), training on the training set prepared in step 2 with the FSRCNN, and establishing a learning model;
step 4: verifying the obtained learning model with the validation set: all images in the validation set are input into the trained model to obtain the predicted outputs, and the mean square error (the loss function) between the predicted outputs and the ground-truth outputs is computed. train_loss is the loss on the training data and val_loss is the loss on the validation set, and the performance of the FSRCNN network is finally verified from the trends of train_loss and val_loss;
step 5: the test set is data not used during the training phase, used to test the performance of the model. After the final trained model is determined, all images in the test set are input into the trained model to obtain the predicted outputs, and the model performance is quantitatively evaluated by computing the structural similarity (SSIM) and the peak signal-to-noise ratio (PSNR) between the predicted outputs and the ground-truth outputs;
step 6: collecting a low-resolution fluorescence microscopic image under a weak imaging condition;
step 7: reading the parameters of the learning model with the fast super-resolution convolutional neural network (FSRCNN), performing super-resolution processing on the low-resolution fluorescence microscopic image obtained in step 6, and outputting the corresponding high-resolution fluorescence microscopic image with the FSRCNN.
Preferably, the good imaging conditions in step 1 are imaging conditions under which images of good quality can be acquired, including parameter settings such as the laser intensity and the exposure time and sensitivity of the CCD.
Preferably, the image augmentation in step 2 refers to augmenting the data set by random cropping, flipping or scaling; after augmentation, the data set is randomly split into a training set, a validation set and a test set.
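For illustration, a minimal sketch of such an augmentation routine is given below; it is an assumption for clarity rather than code from the patent. The image is treated as a 2-D NumPy array, the crop size and scale factors are placeholders, and the same crop/rotation/scale would be applied to the paired SRRF image (with coordinates scaled by its magnification).

```python
import random
import numpy as np
import cv2

def augment(img, crop=128, scales=(0.9, 0.8, 0.7, 0.6)):
    """Randomly crop, rotate, and scale one fluorescence image."""
    h, w = img.shape[:2]
    # random crop of crop x crop pixels
    top = random.randint(0, h - crop)
    left = random.randint(0, w - crop)
    out = img[top:top + crop, left:left + crop]
    # random flip / rotation by a multiple of 90 degrees
    out = np.ascontiguousarray(np.rot90(out, random.choice([0, 1, 2, 3])))
    # random down-scaling by one of the listed factors
    s = random.choice(scales)
    out = cv2.resize(out, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
    return out
```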
Preferably, the weak imaging conditions in step 6 are: starting from imaging conditions that give good image quality, the laser intensity and the exposure time and sensitivity of the CCD are each reduced by 1/5 and used as the weak imaging conditions.
Preferably, the mean square error (MSE) in step 4 is:

$$\mathrm{MSE}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\left\|F\!\left(X_{i};\theta\right)-Y_{i}\right\|^{2}$$

where $X_{i}$ is a low-resolution image (LR), $Y_{i}$ is the corresponding high-resolution image (HR), $\theta$ denotes the neural network parameters, $n$ is the batch size, and $F\!\left(X_{i};\theta\right)$ is the network output for $X_{i}$ under the parameters $\theta$.
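As a concrete illustration, this loss can be expressed in a few lines of PyTorch; the sketch below assumes `model` is the FSRCNN network and the tensors are batched (batch, 1, H, W) images, with all names being placeholders.

```python
import torch.nn.functional as F

def mse_loss(model, x_lr, y_hr):
    pred = model(x_lr)        # F(X; theta)
    # mean over the batch (and pixels) of ||F(X_i; theta) - Y_i||^2,
    # proportional to the formula above
    return F.mse_loss(pred, y_hr)
```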
Preferably, the structural similarity (SSIM) formula in step 5 is:

$$\mathrm{SSIM}(X,Y)=\frac{\left(2\mu_{X}\mu_{Y}+C_{1}\right)\left(2\sigma_{XY}+C_{2}\right)}{\left(\mu_{X}^{2}+\mu_{Y}^{2}+C_{1}\right)\left(\sigma_{X}^{2}+\sigma_{Y}^{2}+C_{2}\right)}$$

where $\mu_{X}$ and $\mu_{Y}$ are the mean values of images X and Y, used as the luminance estimate; $\sigma_{X}$ and $\sigma_{Y}$ are the standard deviations of images X and Y, used as the contrast estimate; $\sigma_{XY}$ is the covariance of images X and Y, used as the measure of structural similarity; and $C_{1}$, $C_{2}$ are small constants that stabilize the division.
Preferably, the peak signal-to-noise ratio (PSNR) in step 5 is given by:

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{L^{2}}{\mathrm{MSE}}\right)$$

where $L$ is the gray-scale level of the image and MSE is the mean square error defined above.
Preferably, as shown in fig. 2, the FSRCNN network structure is divided into 5 parts, namely feature extraction, contraction, mapping, expansion and deconvolution, implemented as follows:
step 1: the feature extraction layer uses d convolution kernels of size 5 × 5 to extract features from the input low-resolution fluorescence microscopic image and obtain low-resolution feature maps;
step 2: in feature mapping, the low-resolution features must be mapped to high-resolution features, but the dimension of the low-resolution feature maps is generally high, which greatly increases the computational cost, so a contraction layer is used to reduce the dimension: s convolution kernels of size 1 × 1 reduce the number of parameters, shrinking the d-dimensional features to s dimensions and thereby reducing the amount of computation;
step 3: the mapping layer applies a nonlinear mapping to the low-resolution features; it is divided into m small mapping layers (the number of mapping layers determines the depth and complexity of the SR network), each with s convolution kernels of size 3 × 3;
step 4: directly using the low-dimensional high-resolution features for image restoration leads to poor reconstruction quality, so the expansion layer uses d convolution kernels of size 1 × 1 to raise the dimension again;
step 5: with the stride set to the scaling factor (n = k), a deconvolution operation, which can be regarded as the inverse of convolution, is applied to the high-resolution features with 1 convolution kernel of size 9 × 9; this up-sampling operation receives the high-resolution features and finally outputs the corresponding high-resolution image.
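A minimal PyTorch sketch of this five-part structure is given below for illustration. The defaults d = 56, s = 12, m = 4 are taken from the original FSRCNN paper rather than from the patent, and the padding/output_padding values are an implementation assumption chosen so that the output is exactly scale times the input size.

```python
import torch.nn as nn

class FSRCNN(nn.Module):
    """Feature extraction -> contraction -> mapping -> expansion -> deconvolution."""
    def __init__(self, scale=3, d=56, s=12, m=4, channels=1):
        super().__init__()
        # feature extraction: d kernels of size 5x5
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(channels, d, kernel_size=5, padding=2), nn.PReLU(d))
        # contraction: s kernels of size 1x1 shrink d dimensions to s
        self.contraction = nn.Sequential(
            nn.Conv2d(d, s, kernel_size=1), nn.PReLU(s))
        # mapping: m layers, each with s kernels of size 3x3
        layers = []
        for _ in range(m):
            layers += [nn.Conv2d(s, s, kernel_size=3, padding=1), nn.PReLU(s)]
        self.mapping = nn.Sequential(*layers)
        # expansion: d kernels of size 1x1 raise the dimension back to d
        self.expansion = nn.Sequential(
            nn.Conv2d(s, d, kernel_size=1), nn.PReLU(d))
        # deconvolution: one 9x9 kernel with stride equal to the scaling factor
        self.deconvolution = nn.ConvTranspose2d(
            d, channels, kernel_size=9, stride=scale,
            padding=4, output_padding=scale - 1)

    def forward(self, x):
        x = self.feature_extraction(x)
        x = self.contraction(x)
        x = self.mapping(x)
        x = self.expansion(x)
        return self.deconvolution(x)
```

With scale = 3 the deconvolution stride equals the scaling factor, so, for example, an 11 × 11 low-resolution sub-image is mapped to a 33 × 33 output.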
The invention has the following beneficial effects:
the invention has the innovativeness that the image super-resolution algorithm based on the FSRCNN is applied to the field of fluorescence microscopic imaging, so that the image with the same resolution as the SRRF super-resolution reconstruction image can be rapidly generated, meanwhile, the reconstruction effect and the execution efficiency are improved on the premise of not sacrificing the resolution, and the photobleaching problem of a fluorescence sample is relieved. And on the premise of not changing hardware conditions, a high-resolution fluorescence microscopic image with less noise and clear details is obtained at a lower cost by an image processing method.
Drawings
FIG. 1 is a schematic diagram of a super-resolution radial fluctuation algorithm (SRRF).
Fig. 2 is a schematic structural diagram of a fast super-resolution convolutional neural network (FSRCNN).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the invention provides a fluorescence microscopic image super-resolution method based on FSRCNN, which comprises the following steps:
step 1: a plurality of cell samples, including a sample of 100 subcellular structures, is obtained. For the same cell sample, collecting 250 common fluorescence sequence images under the condition of good imaging conditions, and then inputting the images into a super-resolution radial fluctuation algorithm (SRRF) to obtain 1 corresponding super-resolution image;
step 2: the image data set is organized and the training data prepared: the super-resolution images obtained in step 1 are augmented by random cropping, flipping (rotation) or scaling. Scaling means that each image is scaled to 0.9, 0.8, 0.7 and 0.6 times its original size, and rotation means that each image is rotated by 90, 180 and 270 degrees. After augmentation, the data set is randomly split into a training set, a validation set and a test set in a 7:1:2 ratio;
To prepare the training data, the training-set images obtained above are down-sampled to low-resolution images according to the scaling factor n = 3 and then cropped into a set of low-resolution sub-images of size f_sub × f_sub, where f_sub = 11 and the cropping step k is 15; finally, the corresponding high-resolution images are cropped into sub-images of the matching size, and the resulting LR/HR sub-image pairs constitute the primary training data;
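A minimal sketch of this patch-extraction step is given below for illustration; the SRRF image is taken as the HR array, the LR counterpart is produced by down-sampling with the scaling factor n = 3, and f_sub = 11 and stride k = 15 follow the values above. The choice of bicubic interpolation for the down-sampling is an assumption.

```python
import cv2

def make_lr_hr_pairs(hr_img, n=3, f_sub=11, stride=15):
    """Cut one HR training image into LR/HR sub-image pairs."""
    # down-sample the HR image by the scaling factor n to get the LR image
    lr_img = cv2.resize(hr_img, (hr_img.shape[1] // n, hr_img.shape[0] // n),
                        interpolation=cv2.INTER_CUBIC)
    pairs = []
    for top in range(0, lr_img.shape[0] - f_sub + 1, stride):
        for left in range(0, lr_img.shape[1] - f_sub + 1, stride):
            lr_patch = lr_img[top:top + f_sub, left:left + f_sub]
            hr_patch = hr_img[top * n:(top + f_sub) * n,     # matching
                              left * n:(left + f_sub) * n]   # (n*f_sub)^2 HR patch
            pairs.append((lr_patch, hr_patch))
    return pairs
```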
step 3: the FSRCNN algorithm environment is built, the training set prepared in step 2 is used to train the FSRCNN, and the required learning model is established;
step 4: the validation set is used during model training to check that the network does not over-fit. Specifically, the validation set is the small portion of the augmented image set from step 2 that does not take part in the learning process of the network. Each time a model is obtained, all common fluorescence images in the validation set are input into the model to obtain the predicted super-resolution images, and the mean square error (the loss function) is then computed on the training set and on the validation set respectively,
where the MSE formula is the one given above:

$$\mathrm{MSE}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\left\|F\!\left(X_{i};\theta\right)-Y_{i}\right\|^{2}$$
train_loss is the loss on the training data and val_loss is the loss on the validation set. If train_loss and val_loss both decrease with consistent trends, the model is still learning and is reliable. If train_loss keeps decreasing while val_loss begins to rise, the model is overfitting and training may need to be terminated. If train_loss and val_loss both keep rising, the network structure or the hyper-parameter settings of the training model are inappropriate and training should be stopped;
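For illustration, a minimal training/validation loop that tracks these two losses is sketched below; the optimizer, learning rate and epoch count are placeholders, and `train_loader`/`val_loader` are assumed to be PyTorch DataLoaders yielding LR/HR pairs.

```python
import torch
import torch.nn.functional as F

def train(model, train_loader, val_loader, epochs=100, lr=1e-3, device="cuda"):
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        # training pass: accumulate train_loss
        model.train()
        train_loss = 0.0
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            loss = F.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            train_loss += loss.item() * x.size(0)
        train_loss /= len(train_loader.dataset)

        # validation pass: accumulate val_loss without updating parameters
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for x, y in val_loader:
                x, y = x.to(device), y.to(device)
                val_loss += F.mse_loss(model(x), y).item() * x.size(0)
        val_loss /= len(val_loader.dataset)

        # both falling -> keep training; train_loss falling while val_loss
        # rising -> overfitting, consider stopping
        print(f"epoch {epoch}: train_loss={train_loss:.6f}  val_loss={val_loss:.6f}")
```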
step 5: the model obtained from steps 2-4 is tested and quantitatively analysed with image evaluation metrics: the images in the test set are input into the model, and the structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) between the prediction output by the model and the corresponding SRRF super-resolution image are computed;
where the SSIM formula is the one given above:

$$\mathrm{SSIM}(X,Y)=\frac{\left(2\mu_{X}\mu_{Y}+C_{1}\right)\left(2\sigma_{XY}+C_{2}\right)}{\left(\mu_{X}^{2}+\mu_{Y}^{2}+C_{1}\right)\left(\sigma_{X}^{2}+\sigma_{Y}^{2}+C_{2}\right)}$$
and the PSNR formula is:

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{L^{2}}{\mathrm{MSE}}\right)$$
step 6: starting from the good imaging conditions, the laser intensity and the exposure time and sensitivity of the CCD are each reduced by 1/5 to give the weak imaging conditions, and 5 low-resolution fluorescence microscopic images are then collected under these weak imaging conditions;
step 7: the FSRCNN reads the parameters of the trained model, performs super-resolution processing on the 5 low-resolution fluorescence microscopic images acquired in step 6, and outputs high-resolution fluorescence microscopic images with low noise and clear details.
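For illustration, a minimal inference sketch is given below, reusing the FSRCNN class from the architecture sketch above; the checkpoint and image file names are hypothetical placeholders, not from the patent.

```python
import cv2
import numpy as np
import torch

# load the trained weights (a state_dict saved during the training sketch above)
model = FSRCNN(scale=3)
model.load_state_dict(torch.load("fsrcnn_fluorescence.pth", map_location="cpu"))
model.eval()

for path in ["weak_001.tif", "weak_002.tif", "weak_003.tif",
             "weak_004.tif", "weak_005.tif"]:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    x = torch.from_numpy(img)[None, None]            # shape (1, 1, H, W)
    with torch.no_grad():
        sr = model(x).clamp(0, 1).squeeze().numpy()  # super-resolved image
    cv2.imwrite(path.replace("weak", "sr"),
                (sr * 255).round().astype(np.uint8))
```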
The FSRCNN network implementation steps are as follows:
1. the feature extraction layer uses d convolution kernels of size 5 × 5 to extract features from the input low-resolution fluorescence microscopic image and obtain low-resolution feature maps;
2. in feature mapping, the low-resolution features must be mapped to high-resolution features, but the dimension of the low-resolution feature maps is generally very high, which greatly increases the computational cost, so a contraction layer is used to reduce the dimension: s convolution kernels of size 1 × 1 reduce the number of parameters, shrinking the d-dimensional features to s dimensions and thereby reducing the amount of computation;
3. the mapping layer applies a nonlinear mapping to the low-resolution features; it is divided into m small mapping layers (the number of mapping layers determines the depth and complexity of the SR network), each with s convolution kernels of size 3 × 3;
4. directly using the low-dimensional high-resolution features for image restoration leads to poor reconstruction quality, so the expansion layer uses d convolution kernels of size 1 × 1 to raise the dimension again;
5. with the stride set to the scaling factor (n = k), a deconvolution operation, which can be regarded as the inverse of convolution, is applied to the high-resolution features with 1 convolution kernel of size 9 × 9; this up-sampling operation receives the high-resolution features and finally outputs the corresponding high-resolution image.
While the preferred embodiment and basic principles of the present invention have been described in detail, it will be understood by those skilled in the art that the invention is not limited to the embodiments described above, but is capable of various modifications and substitutions without departing from the spirit of the invention.

Claims (8)

1. A fluorescence microscopic image super-resolution method based on a fast super-resolution convolutional neural network (FSRCNN) is characterized by comprising the following steps:
step 1: aiming at various cell samples, acquiring common fluorescence microscopic images under good imaging conditions, and obtaining corresponding super-resolution images by utilizing a super-resolution radial fluctuation algorithm (SRRF);
step 2: augmenting the common fluorescence microscopic images and the SRRF super-resolution images from step 1, and then dividing the images proportionally into a training set, a test set and a validation set;
step 3: constructing the algorithm environment of a fast super-resolution convolutional neural network (FSRCNN), training on the training set prepared in step 2 with the FSRCNN, and establishing a learning model;
step 4: verifying the obtained learning model with the validation set: inputting all images in the validation set into the trained model to obtain predicted outputs, and then computing the mean square error (the loss function) between the predicted outputs and the ground-truth outputs; train_loss is the loss on the training data and val_loss is the loss on the validation set, and the performance of the FSRCNN network is finally verified from the trends of train_loss and val_loss;
step 5: the test set is data not used during the training phase, used to test the performance of the model; after the final trained model is determined, inputting all images in the test set into the trained model to obtain predicted outputs, and quantitatively evaluating the model performance by computing the structural similarity (SSIM) and the peak signal-to-noise ratio (PSNR) between the predicted outputs and the ground-truth outputs;
step 6: collecting low-resolution fluorescence microscopic images under weak imaging conditions;
step 7: reading the parameters of the learning model with the fast super-resolution convolutional neural network (FSRCNN), performing super-resolution processing on the low-resolution fluorescence microscopic image obtained in step 6, and outputting the corresponding high-resolution fluorescence microscopic image with the FSRCNN.
2. The method according to claim 1, wherein the good imaging conditions are imaging conditions under which images of good quality can be acquired, including parameter settings such as the laser intensity and the exposure time and sensitivity of the CCD.
3. The method according to claim 1, wherein the image augmentation refers to augmenting the data set by random cropping, flipping or scaling, and the augmented data set is then randomly grouped into a training set, a validation set and a test set.
4. The method according to claim 1, wherein the weak imaging conditions are obtained by reducing the laser intensity and the exposure time and sensitivity of the CCD each by 1/5 relative to imaging conditions that give good image quality.
5. The method according to claim 1, wherein the mean square error (MSE) formula is:

$$\mathrm{MSE}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\left\|F\!\left(X_{i};\theta\right)-Y_{i}\right\|^{2}$$

where $X_{i}$ is a low-resolution image (LR), $Y_{i}$ is the corresponding high-resolution image (HR), $\theta$ denotes the neural network parameters, $n$ is the batch size, and $F\!\left(X_{i};\theta\right)$ is the network output for $X_{i}$ under the parameters $\theta$.
6. The method according to claim 1, wherein the structural similarity (SSIM) formula is:

$$\mathrm{SSIM}(X,Y)=\frac{\left(2\mu_{X}\mu_{Y}+C_{1}\right)\left(2\sigma_{XY}+C_{2}\right)}{\left(\mu_{X}^{2}+\mu_{Y}^{2}+C_{1}\right)\left(\sigma_{X}^{2}+\sigma_{Y}^{2}+C_{2}\right)}$$

where $\mu_{X}$ and $\mu_{Y}$ are the mean values of images X and Y, used as the luminance estimate; $\sigma_{X}$ and $\sigma_{Y}$ are the standard deviations of images X and Y, used as the contrast estimate; $\sigma_{XY}$ is the covariance of images X and Y, used as the measure of structural similarity; and $C_{1}$, $C_{2}$ are small constants that stabilize the division.
7. The method according to claim 1, wherein the peak signal-to-noise ratio (PSNR) formula is:

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{L^{2}}{\mathrm{MSE}}\right)$$

where $L$ is the gray-scale level of the image and MSE is the mean square error.
8. The method according to claim 1, wherein the fast super-resolution convolutional neural network (FSRCNN) is divided into 5 parts, namely feature extraction, contraction, mapping, expansion and deconvolution, implemented as follows:
step 1: the feature extraction layer uses d convolution kernels of size 5 × 5 to extract features from the input low-resolution fluorescence microscopic image and obtain low-resolution feature maps;
step 2: in feature mapping, the low-resolution features must be mapped to high-resolution features, but the dimension of the low-resolution feature maps is generally very high, which greatly increases the computational cost, so a contraction layer is used to reduce the dimension: s convolution kernels of size 1 × 1 reduce the number of parameters, shrinking the d-dimensional features to s dimensions and thereby reducing the amount of computation;
step 3: the mapping layer applies a nonlinear mapping to the low-resolution features; it is divided into m small mapping layers (the number of mapping layers determines the depth and complexity of the SR network), each with s convolution kernels of size 3 × 3;
step 4: directly using the low-dimensional high-resolution features for image restoration leads to poor reconstruction quality, so the expansion layer uses d convolution kernels of size 1 × 1 to raise the dimension again;
step 5: with the stride set to the scaling factor (n = k), a deconvolution operation, which can be regarded as the inverse of convolution, is applied to the high-resolution features with 1 convolution kernel of size 9 × 9; this up-sampling operation receives the high-resolution features and finally outputs the corresponding high-resolution image.
CN202110859098.6A 2021-07-28 2021-07-28 Fluorescence microscopic image super-resolution method based on FSRCNN Pending CN115689880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110859098.6A CN115689880A (en) 2021-07-28 2021-07-28 Fluorescence microscopic image super-resolution method based on FSRCNN


Publications (1)

Publication Number Publication Date
CN115689880A (en) 2023-02-03

Family

ID=85059297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110859098.6A Pending CN115689880A (en) 2021-07-28 2021-07-28 Fluorescence microscopic image super-resolution method based on FSRCNN

Country Status (1)

Country Link
CN (1) CN115689880A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503258A (en) * 2023-06-20 2023-07-28 中国科学院生物物理研究所 Super-resolution computing imaging method, device, electronic equipment and storage medium
CN116503258B (en) * 2023-06-20 2023-11-03 中国科学院生物物理研究所 Super-resolution computing imaging method, device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication