CN110675333A - Microscopic imaging processing method based on neural network super-resolution technology - Google Patents

Microscopic imaging processing method based on neural network super-resolution technology Download PDF

Info

Publication number
CN110675333A
CN110675333A (application CN201910790869.3A)
Authority
CN
China
Prior art keywords
neural network
matrix
convolution
pictures
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910790869.3A
Other languages
Chinese (zh)
Other versions
CN110675333B (en)
Inventor
李歧强 (Li Qiqiang)
张中豪 (Zhang Zhonghao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201910790869.3A priority Critical patent/CN110675333B/en
Publication of CN110675333A publication Critical patent/CN110675333A/en
Application granted granted Critical
Publication of CN110675333B publication Critical patent/CN110675333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Optimization (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a microscopic imaging processing method based on a neural network super-resolution technology, which comprises the following steps: training a full convolution neural network, arranging the trained full convolution neural network M on the computer that controls the microscope, controlling the microscope to shoot pictures, and compensating the shot pictures in real time to obtain clear pictures. The disclosed processing method greatly improves the shooting speed, improves the picture quality and suppresses defocus blur, especially when many pictures of a sample are shot; it can even replace autofocus, eliminating the motor that moves the lens up and down and simplifying the optical detection system.

Description

Microscopic imaging processing method based on neural network super-resolution technology
Technical Field
The invention relates to an image processing method, in particular to a microscopic imaging processing method based on a neural network super-resolution technology.
Background
Microscopic imaging techniques are an effective means of observing cells with high spatial and temporal resolution. Before observing a sample, researchers focus the microscope once. However, when the sample area is too large and must be observed continuously, this single focusing step becomes problematic: if the sample's height varies by more than the depth of field of the optical detection system used, some regions of the image will be sharp while others will be blurred. This seriously affects subsequent analysis and judgment.
Super-resolution technology refers to the process of restoring a high-resolution image from a given low-resolution image by means of a specific algorithm and processing flow, drawing on knowledge from digital image processing, computer vision and related fields. It aims to overcome or compensate for problems such as image blur, low quality and an inconspicuous region of interest caused by limitations of the image acquisition system or acquisition environment. Super-resolution is usually aimed at the resolution degradation caused by bicubic downsampling, but in principle it can also be applied to overcome defocus blur.
The solution adopted in the prior art is to move the lens up and down while the imaging system scans, so that three consecutive images share a small overlapping area, and the position of the focal plane is determined by calculating the degree of blur in the overlapping area. This is essentially equivalent to sampling at multiple depths, and is slow and time-consuming.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a microscopic imaging processing method based on a neural network super-resolution technology, so as to greatly improve the shooting speed, improve the picture quality and suppress defocus blur.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a microscopic imaging processing method based on a neural network super-resolution technology comprises the following steps:
(1) training of the full convolution neural network:
shooting a group of clear pictures Y with a microscope, then applying Gaussian filtering to Y to obtain the corresponding defocus-blurred pictures X, converting the image data X and Y into numpy arrays, normalizing the arrays, and recording the normalized arrays as X_norm and Y_norm respectively, where X_norm and Y_norm are two matrices of the same shape, uniformly recorded as L*W*H*c; L is the number of clear pictures taken, W is the number of rows of the matrix, H is the number of columns of the matrix, c = 1 if grey pictures are taken and c = 3 if color pictures are taken;
with X_norm as the input of the network and Y_norm as the output of the network, the learning rate is set to 3E-4 and an Adam optimizer is adopted during training; training the network yields the full convolution neural network M;
(2) microscopic imaging treatment:
arranging the trained full convolution neural network M on the computer that controls the microscope, controlling the microscope to shoot pictures, and simultaneously compensating the shot pictures in real time with the trained full convolution neural network M: each shot picture is first normalized to obtain an array of shape 1*W*H*c, the array is fed into the network as input to obtain an output of shape 1*W*H*c, and the output is mapped back to pixel values in 0-255 (inverse normalization) to obtain a clear picture.
In the above scheme, the full convolution neural network convolves the input of the network. Given a two-dimensional matrix A and a matrix B, the convolution result matrix C of A and B is calculated as:
C(j,k) = Σ_p Σ_q A(p,q)·B(j-p+1, k-q+1)
wherein p and q are respectively the abscissa and ordinate of matrix A, j and k are respectively the abscissa and ordinate of matrix C, and values whose indices exceed the matrix boundary are replaced by 0.
In a further technical scheme, when the full convolution neural network performs convolution, the input is a matrix of shape W_input*H_input*c, which is convolved with n convolution kernels of shape W_filter*H_filter*c to obtain an output of shape W_output*H_output*n, where W is the number of rows of a matrix, H is the number of columns, the subscript indicates which matrix the dimension belongs to, and c is the number of feature channels of the matrix;
the output is regarded as the features extracted by the convolution layer; once a loss function is specified and the true value (ground truth) is given, the parameters in the convolution kernels are updated along the direction of steepest gradient descent according to the value of the loss function, wherein
W_output = (W_input - W_filter + 2P)/S + 1
H_output = (H_input - H_filter + 2P)/S + 1
P is the padding size and S is the step size.
In a further technical solution, the loss function is defined as follows:

loss(y_hat, y) = ||y_hat - y||_1

wherein loss(y_hat, y) is the loss function of the network, y_hat is the output of the network, y is the true value (ground truth), and ||·||_1 is the L1 norm.
In a further technical scheme, when the full convolution neural network performs convolution, the number of convolution kernels in each up-sampling and down-sampling layer is 32; the down-sampling comprises two convolution layers and a maximum pooling layer, with a convolution kernel size of 3 x 3 and a maximum pooling step size of 2 x 2; a convolution layer with a 3 x 3 kernel is added after the tensor obtained by the jump connection to fuse the features of different layers, and up-sampling is then performed; the up-sampling uses deconvolution, with a convolution kernel size of 2 x 2 and a step size of 2 x 2.
Through the above technical scheme, the microscopic imaging processing method based on the neural network super-resolution technology compensates image blur with super-resolution technology. This avoids the need to focus at every shooting position and therefore greatly improves the shooting speed, especially when many pictures of a sample are shot; it can even replace autofocus, eliminating the motor that moves the lens up and down and simplifying the optical detection system. During shooting, the method processes each shot picture with the full convolution neural network and compensates it in real time, thereby obtaining a clear picture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic diagram of a full convolution neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a convolution process according to an embodiment of the present invention;
FIG. 3 is a pre-processed image as disclosed in an embodiment of the present invention;
fig. 4 is a processed image as disclosed in an embodiment of the invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The invention provides a microscopic imaging processing method based on a neural network super-resolution technology, which can improve the picture quality and inhibit defocusing blur.
A microscopic imaging processing method based on a neural network super-resolution technology comprises the following steps:
(1) training of the full convolution neural network:
shooting a group of clear pictures Y with a microscope, then applying Gaussian filtering to Y to obtain the corresponding defocus-blurred pictures X, converting the image data X and Y into numpy arrays, normalizing the arrays, and recording the normalized arrays as X_norm and Y_norm respectively, where X_norm and Y_norm are two matrices of the same shape, uniformly recorded as L*W*H*c; L is the number of clear pictures taken, W is the number of rows of the matrix, H is the number of columns of the matrix, c = 1 if grey pictures are taken and c = 3 if color pictures are taken;
with X_norm as the input of the network and Y_norm as the output of the network, the learning rate is set to 3E-4 and an Adam optimizer is adopted during training; training the network yields the full convolution neural network M.
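For illustration only, this data-preparation and training step can be sketched as follows. The patent does not name a deep-learning framework, so TensorFlow/Keras, scipy's Gaussian filter, the blur sigma, the batch size and the epoch count below are assumptions; only the Adam optimizer and the 3E-4 learning rate come from the description above, and the L1 loss is taken from the loss definition given further below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
import tensorflow as tf

def build_training_arrays(sharp_images):
    """sharp_images: list of L clear pictures, each an array of shape (W, H, c), values 0-255."""
    Y = np.stack(sharp_images).astype(np.float32)               # shape L*W*H*c
    # Simulate defocus blur by Gaussian filtering each clear picture (sigma is an assumption).
    X = np.stack([gaussian_filter(img, sigma=(2, 2, 0)) for img in Y])
    # Normalize both arrays to [0, 1].
    return X / 255.0, Y / 255.0

def train_network(model, X_norm, Y_norm):
    """Train the full convolution neural network M: Adam optimizer, learning rate 3E-4, L1 loss."""
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
                  loss=tf.keras.losses.MeanAbsoluteError())     # mean absolute error = averaged L1 norm
    model.fit(X_norm, Y_norm, batch_size=4, epochs=100)         # batch size and epochs are assumptions
    return model
```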
The full convolution neural network M is shown in FIG. 1. The input X0,0 is successively down-sampled to obtain X1,0, X2,0, X3,0 and X4,0. Down-sampling increases robustness to small disturbances of the input image, such as translation and rotation, reduces the risk of over-fitting, reduces the amount of computation, and enlarges the receptive field. X1,0, X2,0, X3,0 and X4,0 are then up-sampled; up-sampling re-decodes the abstract features back to the size of the original image. Specifically, X1,0 is up-sampled to obtain X0,1; X2,0 is up-sampled to obtain, in turn, X1,1 and X0,2; X3,0 is up-sampled to obtain, in turn, X2,1, X1,2 and X0,3; and X4,0 is up-sampled to obtain, in turn, X3,1, X2,2, X1,3 and X0,4. In addition, in order to integrate features of different layers, a large number of jump connections are added to the network; for example, there are jump connections from X0,0 to X0,1, X0,2, X0,3 and X0,4. Finally, in order to make the network converge better, a deep supervision strategy is added: X0,1, X0,2, X0,3 and X0,4 are each compared with the ground truth and participate in the calculation of the loss function.
The invention adopts a convolution-convolution structure, i.e., two consecutive convolution layers per stage. To reduce memory occupation and speed up computation, the number of convolution kernels in each down-sampling layer is fixed at 32; each down-sampling stage comprises two convolution layers and one maximum pooling layer, with a convolution kernel size of 3 x 3 and a maximum pooling step size of 2 x 2. A convolution layer with a 3 x 3 kernel is added after the tensor obtained by each jump connection to fuse the features of different layers, and up-sampling is then performed. The up-sampling uses deconvolution, with a convolution kernel size of 2 x 2 and a step size of 2 x 2, and the number of convolution kernels in each up-sampling layer is also fixed at 32. To reduce memory occupation, the jump connections use the add operation.
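The building blocks just described can be sketched as follows. This is a simplified illustration, not the full nested network of FIG. 1: Keras is assumed, the ReLU activations and the final 1 x 1 output convolution are assumptions, and only one down-sampling/up-sampling level with a single jump connection is shown, without deep supervision.

```python
from tensorflow.keras import layers, Model, Input

def down_block(x):
    """Two 3 x 3 convolution layers (32 kernels each) followed by 2 x 2 max pooling."""
    f = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    f = layers.Conv2D(32, 3, padding="same", activation="relu")(f)
    return f, layers.MaxPooling2D(pool_size=(2, 2))(f)

def up_block(x):
    """Deconvolution (transposed convolution) with a 2 x 2 kernel and 2 x 2 stride, 32 kernels."""
    return layers.Conv2DTranspose(32, 2, strides=2, padding="same")(x)

def fuse(skip, upsampled):
    """Jump connection via element-wise add, then one 3 x 3 convolution to fuse the features."""
    merged = layers.Add()([skip, upsampled])
    return layers.Conv2D(32, 3, padding="same", activation="relu")(merged)

inp = Input(shape=(None, None, 1))                       # c = 1 for grey pictures
x00, pooled = down_block(inp)                            # X0,0
x10, _ = down_block(pooled)                              # X1,0
x01 = fuse(x00, up_block(x10))                           # X0,1 (jump connection from X0,0)
out = layers.Conv2D(1, 1, activation="sigmoid")(x01)     # map back to one output channel in [0, 1]
model = Model(inp, out)
```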
Finally, the loss function loss(y_hat, y) is defined as an L1 norm, specifically:

loss(y_hat, y) = ||y_hat - y||_1

wherein y_hat is the output of the network, y is the true value (ground truth), and ||·||_1 is the L1 norm.
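A minimal sketch of this loss, assuming TensorFlow:

```python
import tensorflow as tf

def l1_loss(y_true, y_pred):
    """loss(y_hat, y) = ||y_hat - y||_1, averaged over the batch."""
    return tf.reduce_mean(tf.abs(y_pred - y_true))
```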
The full convolution neural network convolves the input of the network. Given a two-dimensional matrix A and a matrix B, the convolution result matrix C of A and B is calculated as:
C(j,k) = Σ_p Σ_q A(p,q)·B(j-p+1, k-q+1)
wherein p and q are respectively the abscissa and ordinate of matrix A, j and k are respectively the abscissa and ordinate of matrix C, and values whose indices exceed the matrix boundary are replaced by 0.
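The formula can be transcribed directly into numpy as a sketch; 0-based indices are used, so the +1 offsets of the 1-based formula drop out, and values of B outside the boundary are treated as 0 as stated above. The result matches scipy.signal.convolve2d(A, B, mode='full').

```python
import numpy as np

def conv2d_full(A, B):
    """C(j,k) = sum over p, q of A(p,q) * B(j-p, k-q), with out-of-range values of B taken as 0."""
    Ja, Ka = A.shape
    Jb, Kb = B.shape
    C = np.zeros((Ja + Jb - 1, Ka + Kb - 1))
    for j in range(C.shape[0]):
        for k in range(C.shape[1]):
            s = 0.0
            for p in range(Ja):
                for q in range(Ka):
                    jb, kb = j - p, k - q
                    if 0 <= jb < Jb and 0 <= kb < Kb:    # values beyond the boundary are replaced by 0
                        s += A[p, q] * B[jb, kb]
            C[j, k] = s
    return C
```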
When the full convolution neural network performs convolution, as shown in FIG. 2, the input is a matrix of shape W_input*H_input*c, which is convolved with n convolution kernels of shape W_filter*H_filter*c to obtain an output of shape W_output*H_output*n, where W is the number of rows of a matrix, H is the number of columns, the subscript indicates which matrix the dimension belongs to, and c is the number of feature channels of the matrix;
the output is regarded as the features extracted by the convolution layer; once a loss function is specified and the true value (ground truth) is given, the parameters in the convolution kernels are updated along the direction of steepest gradient descent according to the value of the loss function, wherein
W_output = (W_input - W_filter + 2P)/S + 1
H_output = (H_input - H_filter + 2P)/S + 1
P is the padding size and S is the step size.
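As a small worked example of these relations:

```python
def conv_output_size(w_in, h_in, w_f, h_f, padding, stride):
    """W_output = (W_input - W_filter + 2P)/S + 1, and likewise for H."""
    return ((w_in - w_f + 2 * padding) // stride + 1,
            (h_in - h_f + 2 * padding) // stride + 1)

# A 256 x 256 input convolved with a 3 x 3 kernel, padding 1 and stride 1 keeps its size;
# a 2 x 2 window with stride 2 (as in the max pooling above) halves it.
assert conv_output_size(256, 256, 3, 3, padding=1, stride=1) == (256, 256)
assert conv_output_size(256, 256, 2, 2, padding=0, stride=2) == (128, 128)
```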
(2) Microscopic imaging treatment:
the trained full convolution neural network M is arranged on a computer for controlling a microscope, the microscope is controlled to shoot pictures, the obtained pictures are shown in figure 3, meanwhile, the shot pictures are compensated in real time by the trained full convolution neural network M, namely, the shot pictures are firstly normalized to obtain an array of 1W H c, the array is used as the input of the network to obtain the output with the shape of 1W H c, the output is subjected to normalization, and the pixel value is mapped to 0-255 to obtain a clear picture, which is shown in figure 4.
To further save processing time, two threads or two processes can be launched on the computer's CPU: one is responsible for controlling the microscope to take pictures, and the other is responsible for compensating the shot pictures in real time and improving the picture quality, so that the compensation does not add to the shooting time.
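One possible arrangement is sketched below with Python threads and a queue; capture_frame() and save_image() are hypothetical stand-ins for the microscope-control and storage calls, which the patent does not specify, and compensate() and model refer to the compensation sketch above.

```python
import queue
import threading

frames = queue.Queue(maxsize=16)

def capture_loop(n_frames):
    """Thread 1: control the microscope and push each shot picture onto the queue."""
    for _ in range(n_frames):
        frames.put(capture_frame())        # hypothetical microscope capture call
    frames.put(None)                       # sentinel: capturing is finished

def compensate_loop(model):
    """Thread 2: compensate each shot picture in real time, without slowing down capture."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        save_image(compensate(model, frame))   # hypothetical storage call

t1 = threading.Thread(target=capture_loop, args=(100,))
t2 = threading.Thread(target=compensate_loop, args=(model,))
t1.start(); t2.start()
t1.join(); t2.join()
```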
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A microscopic imaging processing method based on a neural network super-resolution technology is characterized by comprising the following steps:
(1) training of the full convolution neural network:
shooting a group of clear pictures Y with a microscope, then applying Gaussian filtering to Y to obtain the corresponding defocus-blurred pictures X, converting the image data X and Y into numpy arrays, normalizing the arrays, and recording the normalized arrays as X_norm and Y_norm respectively, where X_norm and Y_norm are two matrices of the same shape, uniformly recorded as L*W*H*c; L is the number of clear pictures taken, W is the number of rows of the matrix, H is the number of columns of the matrix, c = 1 if grey pictures are taken and c = 3 if color pictures are taken;
with X_norm as the input of the network and Y_norm as the output of the network, the learning rate is set to 3E-4 and an Adam optimizer is adopted during training; training the network yields the full convolution neural network M;
(2) microscopic imaging treatment:
arranging the trained full convolution neural network M on the computer that controls the microscope, controlling the microscope to shoot pictures, and simultaneously compensating the shot pictures in real time with the trained full convolution neural network M: each shot picture is first normalized to obtain an array of shape 1*W*H*c, the array is fed into the network as input to obtain an output of shape 1*W*H*c, and the output is mapped back to pixel values in 0-255 (inverse normalization) to obtain a clear picture.
2. A microscopic imaging processing method based on neural network super-resolution technology according to claim 1, characterized in that the full convolution neural network convolves the input of the network; for a two-dimensional matrix A and a matrix B, the convolution result matrix C is calculated as:
C(j,k) = Σ_p Σ_q A(p,q)·B(j-p+1, k-q+1)
wherein p and q are respectively the abscissa and ordinate of matrix A, j and k are respectively the abscissa and ordinate of matrix C, and values whose indices exceed the matrix boundary are replaced by 0.
3. The microscopic imaging processing method based on the neural network super-resolution technology as claimed in claim 2, characterized in that when the full convolution neural network performs convolution, the input is a matrix of shape W_input*H_input*c, which is convolved with n convolution kernels of shape W_filter*H_filter*c to obtain an output of shape W_output*H_output*n, where W is the number of rows of a matrix, H is the number of columns, the subscript indicates which matrix the dimension belongs to, and c is the number of feature channels of the matrix;
the output is regarded as the features extracted by the convolution layer; once a loss function is specified and the true value (ground truth) is given, the parameters in the convolution kernels are updated along the direction of steepest gradient descent according to the value of the loss function, wherein
W_output = (W_input - W_filter + 2P)/S + 1
H_output = (H_input - H_filter + 2P)/S + 1
P is the padding size and S is the step size.
4. The microscopic imaging processing method based on the neural network super-resolution technology as claimed in claim 3, wherein the loss function is defined as follows:

loss(y_hat, y) = ||y_hat - y||_1

wherein loss(y_hat, y) is the loss function of the network, y_hat is the output of the network, y is the true value (ground truth), and ||·||_1 is the L1 norm.
5. The microscopic imaging processing method based on the neural network super-resolution technology as claimed in claim 3, wherein when the full convolution neural network performs convolution, the number of convolution kernels in each up-sampling and down-sampling layer is 32; the down-sampling comprises two convolution layers and a maximum pooling layer, with a convolution kernel size of 3 x 3 and a maximum pooling step size of 2 x 2; a convolution layer with a 3 x 3 kernel is added after the tensor obtained by the jump connection to fuse the features of different layers, and up-sampling is then performed; the up-sampling uses deconvolution, with a convolution kernel size of 2 x 2 and a step size of 2 x 2.
CN201910790869.3A 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology Active CN110675333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910790869.3A CN110675333B (en) 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910790869.3A CN110675333B (en) 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology

Publications (2)

Publication Number Publication Date
CN110675333A (en) 2020-01-10
CN110675333B CN110675333B (en) 2023-04-07

Family

ID=69075569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910790869.3A Active CN110675333B (en) 2019-08-26 2019-08-26 Microscopic imaging processing method based on neural network super-resolution technology

Country Status (1)

Country Link
CN (1) CN110675333B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846463A (en) * 2017-01-13 2017-06-13 清华大学 Micro-image three-dimensional rebuilding method and system based on deep learning neutral net
CN108549892A (en) * 2018-06-12 2018-09-18 东南大学 A kind of license plate image clarification method based on convolutional neural networks
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method
CN109035146A (en) * 2018-08-09 2018-12-18 复旦大学 A kind of low-quality image oversubscription method based on deep learning
CN109087247A (en) * 2018-08-17 2018-12-25 复旦大学 The method that a kind of pair of stereo-picture carries out oversubscription
CN109636733A (en) * 2018-10-26 2019-04-16 华中科技大学 Fluorescent image deconvolution method and system based on deep neural network
CN109801215A (en) * 2018-12-12 2019-05-24 天津津航技术物理研究所 The infrared super-resolution imaging method of network is generated based on confrontation
CN110163800A (en) * 2019-05-13 2019-08-23 南京大学 A kind of micro- phase recovery method and apparatus of chip based on multiple image super-resolution

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DANIELE RAVÌ et al.: "Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction", International Journal of Computer Assisted Radiology and Surgery *
OLAF RONNEBERGER et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv:1505.04597v1 *
ZONGWEI ZHOU et al.: "UNet++: A Nested U-Net Architecture for Medical Image Segmentation", arXiv:1807.10165v1 *
LI ZHE (李喆): "Research on super-resolution reconstruction methods and applications of lens-free digital holographic microscopy images", China Master's Theses Full-text Database, Information Science and Technology *
CHEN HUA (陈华) et al.: "Research on restoration of three-dimensional wide-field microscopic images based on neural networks", Acta Photonica Sinica *
WEI YUJING (韦玉婧) et al.: "Single-image super-resolution reconstruction based on deep convolutional networks", Journal of Minjiang University *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311522A (en) * 2020-03-26 2020-06-19 重庆大学 Two-photon fluorescence microscopic image restoration method based on neural network and storage medium
CN111311522B (en) * 2020-03-26 2023-08-08 重庆大学 Neural network-based two-photon fluorescence microscopic image restoration method and storage medium

Also Published As

Publication number Publication date
CN110675333B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Engin et al. Cycle-dehaze: Enhanced cyclegan for single image dehazing
CN108549892B (en) License plate image sharpening method based on convolutional neural network
CN109410127B (en) Image denoising method based on deep learning and multi-scale image enhancement
CN112288658A (en) Underwater image enhancement method based on multi-residual joint learning
CN109801215B (en) Infrared super-resolution imaging method based on countermeasure generation network
CN109636733B (en) Fluorescence image deconvolution method and system based on deep neural network
EP3924933A1 (en) Image processor
CN111161178A (en) Single low-light image enhancement method based on generation type countermeasure network
CN114998141A (en) Space environment high dynamic range imaging method based on multi-branch network
CN116847209B (en) Log-Gabor and wavelet-based light field full-focusing image generation method and system
CN114463196B (en) Image correction method based on deep learning
Zhang et al. Deep motion blur removal using noisy/blurry image pairs
CN114596233A (en) Attention-guiding and multi-scale feature fusion-based low-illumination image enhancement method
Zhou et al. High dynamic range imaging with context-aware transformer
CN110675333B (en) Microscopic imaging processing method based on neural network super-resolution technology
CN110852947B (en) Infrared image super-resolution method based on edge sharpening
CN111932452A (en) Infrared image convolution neural network super-resolution method based on visible image enhancement
CN110675320A (en) Method for sharpening target image under spatial parameter change and complex scene
CN116596799A (en) Low-illumination image enhancement method based on channel space compound attention
Oh et al. Residual dilated u-net with spatially adaptive normalization for the restoration of under display camera images
EP3992902A1 (en) Method and image processing device for improving signal-to-noise of image frame sequences
CN113674149A (en) Novel super-resolution reconstruction method based on convolutional neural network
CN114862685A (en) Image noise reduction method and image noise reduction module
CN108665412B (en) Method for performing multi-frame image super-resolution reconstruction by using natural image priori knowledge
Zhang et al. An effective image restorer: Denoising and luminance adjustment for low-photon-count imaging

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant