CN116309073B - Low-contrast stripe SIM reconstruction method and system based on deep learning - Google Patents


Info

Publication number
CN116309073B
CN116309073B (application CN202310320880.XA)
Authority
CN
China
Prior art keywords
contrast
sim
low
image
stripe
Prior art date
Legal status
Active
Application number
CN202310320880.XA
Other languages
Chinese (zh)
Other versions
CN116309073A (en)
Inventor
刘文杰
陈云博
叶子桐
叶涵楚
陈友华
李海峰
匡翠方
Current Assignee
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU, Zhejiang Lab
Priority to CN202310320880.XA
Publication of CN116309073A
Application granted
Publication of CN116309073B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks

Abstract

A low-contrast stripe SIM reconstruction method based on deep learning comprises the following steps: first, a low-contrast stripe SIM image training data set is produced; a low-contrast SIM super-resolution neural network is then constructed and trained; finally, super-resolution reconstruction of low-contrast stripe SIM experimental data is carried out. The invention further comprises a low-contrast stripe SIM reconstruction system based on deep learning. The invention achieves high-quality, high-resolution SIM image reconstruction under low-contrast illumination stripes, overcoming the dependence of conventional SIM on illumination-stripe contrast and greatly expanding the range of application. The required low-contrast stripe SIM training set can be obtained entirely through simulation, without experimental acquisition, which greatly reduces the difficulty of building the training set. The invention adds no system complexity, can be implemented on any existing SIM system, and has a wide range of application.

Description

Low-contrast stripe SIM reconstruction method and system based on deep learning
Technical Field
The invention relates to the technical field of optical super-resolution microscopic imaging, in particular to a low-contrast stripe SIM reconstruction method and system based on deep learning.
Background
Structured illumination microscopy (SIM) is an optical super-resolution imaging technique. It excites a fluorescent sample with structured illumination, which modulates the high-frequency signal corresponding to fine sample structure down to a low-frequency signal. Demodulating this low-frequency signal recovers the high-frequency (detail) information of the sample, realizing super-resolution optical microscopy beyond the diffraction limit.
SIM acquires multiple images containing illumination stripes at different phases and orientations, and then obtains a super-resolution image with an image-reconstruction algorithm. Structured illumination is therefore at the core of the technique: it shifts high-frequency information that would otherwise be lost into the low-frequency region that the imaging system can collect, thereby improving resolution. In this process, the contrast of the illumination fringes directly affects the resolution and signal-to-noise ratio of the super-resolution imaging result.
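The role of fringe contrast in this frequency-shifting process can be made explicit. Substituting the sinusoidal illumination I(r) = I0[1 + m·cos(k0·r + φ)] into the standard SIM imaging model gives the detected spectrum below (a standard SIM relation supplied here for context, not a formula stated verbatim in the patent):

```latex
\tilde{D}(\mathbf{k}) = \tilde{H}(\mathbf{k})\, I_0 \left[ \tilde{F}(\mathbf{k})
  + \frac{m}{2}\, e^{+i\varphi}\, \tilde{F}(\mathbf{k}-\mathbf{k}_0)
  + \frac{m}{2}\, e^{-i\varphi}\, \tilde{F}(\mathbf{k}+\mathbf{k}_0) \right]
```

The shifted copies F̃(k ∓ k0), which carry the super-resolution information, enter with weight m/2, so a low modulation contrast m directly weakens the recoverable high-frequency content relative to noise.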
In practice, however, low-contrast fringes are common: poor or inconsistent linear polarization of the illumination beam, thick samples, and strong imaging background and noise all reduce the contrast of the illumination fringes. Furthermore, when observing the movement of living cells over long periods, researchers typically reduce the laser power or the camera exposure time to obtain higher imaging speed and lower photobleaching and photodamage, which also inevitably reduces fringe contrast. To obtain high-contrast illumination fringes and hence better reconstructions, interferometric SIM systems are mostly used at present, but they are structurally complex and difficult to align. Projection-type DMD-SIM has lower hardware cost than interferometric SIM, but due to its inherent limitations its illumination-stripe contrast is relatively low.
Therefore, improving the quality of image reconstruction under low-contrast fringe illumination is of great importance for SIM applications. Since some low-contrast scenarios are unavoidable, new reconstruction algorithms are needed to improve the general applicability of super-resolution SIM imaging systems.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior SIM technology in image reconstruction in a low-contrast stripe illumination scene, the invention provides a low-contrast stripe SIM reconstruction method and system based on deep learning.
The invention discloses a low-contrast stripe SIM reconstruction method based on deep learning, which comprises the following steps:
s1: a low contrast striped SIM image training dataset is made. Firstly, obtaining low-contrast stripe patterns of different grades through simulation; then multiplying the truth image with the stripe pattern and then convolving the truth image with a point spread function to obtain low-contrast SIM images with different grades; finally, adding noise in the image;
s2: a low contrast SIM super-resolution neural network is constructed and trained. Firstly, an encoder constructing a network extracts image features layer by layer; then, a decoder constructing a network restores the image information layer by layer; finally, training the super-resolution neural network. Continuously adjusting parameters of the neural network in the network training process, reducing the gap between the output and the true value through iterative optimization, reducing the value of the loss function to network convergence, and storing weight parameters;
s3: super-resolution reconstruction of low contrast striped SIM experimental data. Firstly, inputting nine pieces of SIM experimental data obtained under the imaging condition of low-contrast illumination stripes; then, loading the optimal weight saved in the network training; and finally, outputting a super-resolution reconstruction result of the low-contrast stripe SIM experimental data. Further, the step S1 includes the following substeps:
s1.1: simulating an illumination stripe pattern; simulating three directions, and three phases of illumination fringe patterns for each direction, according to the SIM principle;
s1.2: adjusting the contrast of the stripes; the method is characterized in that: adjusting the contrast ratio by controlling a parameter m in the following formula to obtain stripe patterns with different level contrast ratios;
I(r) = I0·[1 + m·cos(k0·r + φ)], wherein I(r) represents the illumination fringe pattern, I0 the average intensity of the illumination stripes, m the fringe modulation contrast, k0 the spatial frequency vector of the fringes, and φ the initial phase of the fringes;
s1.3: multiplying the truth image with the illumination stripes and then convolving the truth image with the point spread function to obtain a simulated SIM image; further, noise is added to the image to simulate a more realistic SIM imaging result.
Further, the step S2 includes the following sub-steps:
s2.1: constructing a low-contrast SIM super-resolution neural network; constructing an encoder of a network, and extracting image features layer by layer; then constructing a decoder of the network, and recovering the image information layer by layer;
s2.2: training a low-contrast SIM super-resolution neural network; inputting a low-contrast SIM image, and outputting a predicted image of the network through a low-contrast SIM super-resolution neural network; in order to minimize the error between the predicted image and the true SIM image, calculating an image gap through a loss function, and then continuously updating parameters of the neural network in the network training process so as to minimize the loss function; and (3) reducing the gap between the output and the true value through iterative optimization, reducing the value of the loss function to network convergence, and storing the weight parameter.
Further, the step S3 includes the following substeps:
s3.1: inputting low-contrast stripe SIM experimental data into a neural network;
s3.2: loading the optimal weight saved in the network training;
s3.3: outputting a super-resolution reconstruction result image.
The invention also discloses a system for implementing the deep-learning-based low-contrast stripe SIM reconstruction method, comprising:
the low-contrast stripe SIM image training data set production module, used for producing the low-contrast stripe SIM image training data set: first, low-contrast stripe patterns of different levels are obtained through simulation; then the ground-truth image is multiplied by the stripe pattern and the product is convolved with a point spread function to obtain low-contrast SIM images of different levels; finally, noise is added to the images;
the low-contrast SIM super-resolution neural network construction and training module: first, the network encoder is constructed to extract image features layer by layer; then the network decoder is constructed to restore image information layer by layer; finally, the super-resolution neural network is trained;
the low-contrast stripe SIM experimental data super-resolution reconstruction module: first, the nine frames of SIM experimental data acquired under low-contrast illumination-stripe imaging conditions are input; then the optimal weights saved during network training are loaded; finally, the super-resolution reconstruction result of the low-contrast stripe SIM experimental data is output.
The invention also relates to an apparatus for the deep-learning-based low-contrast stripe SIM reconstruction method, comprising a memory and one or more processors, wherein executable code is stored in the memory, and the one or more processors implement the method when executing the executable code.
The invention also relates to a computer-readable storage medium on which a program is stored which, when executed by a processor, implements the deep-learning-based low-contrast stripe SIM reconstruction method according to the invention.
The invention also relates to a computing device comprising a memory and a processor, wherein executable code is stored in the memory, and the processor implements the deep-learning-based low-contrast stripe SIM reconstruction method when executing the executable code.
Compared with the prior art, the invention has the following beneficial technical effects:
(1) The method achieves high-quality, high-resolution SIM image reconstruction under low-contrast illumination stripes, overcoming the dependence of conventional SIM on illumination-stripe contrast and greatly expanding the range of application;
(2) The low-contrast stripe SIM image training set required by the method can be obtained entirely through simulation, without experimental acquisition, greatly reducing the difficulty of building the training set;
(3) The method adds no system complexity and can be implemented on any existing SIM system.
Drawings
The present invention will be described in further detail with reference to the drawings and embodiments.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of the fabrication of a low contrast striped SIM image training dataset of the present invention;
FIG. 3 is a schematic diagram of a stripe pattern simulating different contrast levels in accordance with the present invention;
FIG. 4 is a schematic diagram of a low contrast SIM super-resolution network architecture of the present invention;
FIG. 5 is a schematic diagram of the network training process of the present invention;
FIGS. 6a1-6a3 are graphs comparing the original image, the conventional algorithm and the reconstruction result of the present invention, respectively, in the case of low-contrast illumination stripes, and FIGS. 6b1-6b3 are the corresponding comparison in the case of high-contrast illumination stripes;
FIGS. 7a-7c are graphs comparing the wide-field image, the conventional algorithm and the reconstruction of the present invention, respectively, in a projected DMD-SIM low-contrast fringe scene;
FIG. 8 is a system configuration diagram of the present invention.
Detailed Description
The following describes embodiments of the present invention clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the present invention.
Example 1
Taking low-contrast stripe SIM super-resolution imaging as an example implementation scenario, fig. 1 shows a flowchart of the steps of the present invention, which are as follows:
(1) Manufacturing a low-contrast striped SIM image training data set:
in order to make a low contrast striped SIM image training dataset, it may be acquired by means of experimental acquisition. However, the contrast level cannot be controlled in the mode, the structure of the SIM system is complex, and the adjustment difficulty is high. Thus, we obtained a stripe pattern of different level contrast through simulation, superimposed on the true SIM image, thus completing the fabrication of a different level contrast stripe SIM image training dataset comprising low contrast stripes. The specific manufacturing process is shown in fig. 2.
First, the illumination stripe pattern is simulated. The mathematical expression of the illumination stripe is formula (1): I(r) = I0·[1 + m·cos(k0·r + φ)] (1), where I0 is the average intensity of the illumination stripes, k0 the spatial frequency vector of the fringes, φ the initial phase of the fringes, and m the fringe modulation contrast.
According to equation (1), three directions and three phases of illumination fringe patterns for each direction are simulated.
Then, the fringe contrast is adjusted. From formula (1), the contrast of the illumination stripes can be adjusted by controlling the magnitude of the parameter m; we therefore vary m to obtain fringe patterns of different contrast. In the present invention we set ten contrast levels from low to high. For 2D-SIM it is usually necessary to acquire nine frames, three orientations with three phases each, so there are nine illumination fringe patterns at each contrast level. In total, ten contrast levels of nine stripe patterns each, ninety patterns, are obtained, as shown in fig. 3.
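As an illustration, the nine-pattern, ten-level fringe bank described above can be generated with a short simulation. This is a sketch under assumptions: the grid size, fringe period, the contrast range 0.1 to 1.0 and the average intensity I0 = 0.5 are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def illumination_fringes(size=256, periods=16, n_angles=3, n_phases=3, m=1.0):
    """Simulate SIM fringes I(r) = I0*[1 + m*cos(k0.r + phi)] for three
    orientations and three phases per orientation (nine patterns)."""
    yy, xx = np.mgrid[0:size, 0:size].astype(np.float64)
    k = 2 * np.pi * periods / size           # fringe spatial frequency |k0|
    I0 = 0.5                                 # average intensity (keeps I in [0, 1] for m <= 1)
    patterns = []
    for a in range(n_angles):
        theta = np.pi * a / n_angles         # orientations 0, 60, 120 degrees
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        for p in range(n_phases):
            phi = 2 * np.pi * p / n_phases   # phases 0, 2*pi/3, 4*pi/3
            patterns.append(I0 * (1 + m * np.cos(kx * xx + ky * yy + phi)))
    return np.stack(patterns)                # shape (9, size, size)

# Ten contrast levels from low to high, nine patterns each, ninety in total.
levels = np.linspace(0.1, 1.0, 10)
bank = np.stack([illumination_fringes(m=m) for m in levels])  # (10, 9, H, W)
```

Varying only m leaves the fringe frequency and phases untouched, so every contrast level pairs frame-by-frame with the same ground-truth modulation geometry.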
Finally, the ground-truth image is multiplied by the illumination stripes and the product is convolved with the point spread function to obtain the simulated SIM image; noise is then added to better approximate a real SIM imaging result. This is expressed by formula (2): D(r) = [F(r)·I(r)] ⊗ H(r) + N(r) (2), where D(r) denotes the simulated SIM image, F(r) the sample information distribution, I(r) the illumination fringe pattern, H(r) the point spread function of the microscopic imaging system, ⊗ convolution, and N(r) the noise added to the image.
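The imaging model of formula (2) can likewise be sketched. The Gaussian PSF, the photon count and the read-noise level below are illustrative assumptions; the patent specifies only multiplying by the fringe, convolving with the PSF, and adding noise.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Gaussian approximation of the microscope point spread function H(r)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def simulate_sim_frame(truth, fringe, sigma=2.0, photons=200.0,
                       read_sigma=2.0, rng=None):
    """D(r) = [F(r)*I(r)] (x) H(r) + N(r): modulate the ground truth by the
    illumination fringe, blur with the PSF (circular FFT convolution), then
    add Poisson shot noise and Gaussian read noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    psf = gaussian_psf(truth.shape[0], sigma)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(truth * fringe) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    noisy = rng.poisson(np.clip(blurred, 0, None) * photons) / photons
    return noisy + rng.normal(0.0, read_sigma / photons, truth.shape)
```

Calling `simulate_sim_frame` once per fringe pattern in the bank yields the nine raw frames for one training sample at a chosen contrast level.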
(2) Constructing and training a low-contrast SIM super-resolution neural network:
and constructing a low-contrast SIM super-resolution neural network. The network structure is shown in fig. 4. The network is mainly composed of two parts: an encoder and a decoder. The encoder consists of four downsampling modules, responsible for extracting image features from the input. The decoder consists of four up-sampling modules, and is responsible for further feature optimization of the features obtained by the input processed by the encoder and generating a final target image. Each downsampling module contains two 3 x 3 convolutions and one 2 x 2 max pooling layer. Each up-sampling module consists of two 3 x 3 convolutions. Each 3 x 3 convolution back in the network connects the modified linear unit ReLU activation function to boost the expressive power of the model. The network also uses jump connection, the output characteristic diagram of each downsampling stage in the encoder is transmitted to the decoder through the jump connection, and the output characteristic diagram of the upsampling layer in the corresponding stage is spliced with the output characteristic diagram of the upsampling layer in the channel, so that the fusion of shallow layer information and deep layer information is realized, and more semantic information is provided for the decoding process.
After the low-contrast SIM super-resolution neural network is constructed, it is trained; the training flow is shown in fig. 5. A low-contrast SIM image is input, and the network outputs a predicted image. To minimize the error between the predicted image and the ground-truth SIM image, the discrepancy is measured by a loss function, and the network parameters are updated continuously during training so as to minimize it. Iterative optimization reduces the gap between output and ground truth until the loss converges, and the weight parameters are saved.
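The training procedure can be sketched as follows. The Adam optimizer, the MSE loss, the learning rate and the checkpoint path are assumptions (the patent names neither a loss function nor an optimizer); the loop only illustrates the described iterative minimization and saving of the best weights.

```python
import torch
import torch.nn as nn

def train(net, loader, epochs=100, lr=1e-4, device="cpu", path="best_weights.pt"):
    """Iteratively update the network parameters to minimize the loss between
    predictions and ground truth, keeping the best checkpoint on disk."""
    net = net.to(device)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()              # assumption: loss is not named in the patent
    best = float("inf")
    for _ in range(epochs):
        total = 0.0
        for low_contrast, truth in loader:   # nine raw frames -> ground-truth target
            pred = net(low_contrast.to(device))
            loss = loss_fn(pred, truth.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if total < best:                     # save the best weights for inference
            best = total
            torch.save(net.state_dict(), path)
    return best
```

The saved checkpoint is the "optimal weight" that step S3 later loads for reconstruction.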
(3) Super-resolution reconstruction of low-contrast striped SIM experimental data
To reconstruct low-contrast stripe SIM experimental data, the data are first input into the neural network; the optimal weights saved during network training are then loaded; finally, the super-resolution reconstruction result is output.
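The inference step can be sketched as follows (the function name and tensor shapes are illustrative assumptions): load the saved optimal weights, pass the nine low-contrast raw frames through the network, and return the reconstruction.

```python
import torch

def reconstruct(net, frames, weights_path, device="cpu"):
    """Load the best saved weights, feed the nine low-contrast raw frames
    (shape (9, H, W)) through the network, and return the result."""
    net.load_state_dict(torch.load(weights_path, map_location=device))
    net.eval()                                  # disable training-time behavior
    with torch.no_grad():                       # inference only, no gradients
        return net(frames.unsqueeze(0).to(device)).squeeze(0).cpu()
```

The same function serves simulated validation data and real experimental stacks, provided the frames are ordered as in training (three orientations, three phases each).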
Fig. 6(a) shows, for low-contrast illumination stripes, the original SIM image of a microtubule sample, the result reconstructed by the conventional SIM algorithm, and the result reconstructed by the present invention. The original SIM image of the microtubule sample has low resolution. The conventional SIM reconstruction improves resolution but produces obvious stripe-like artifacts. The reconstruction of the present invention greatly improves resolution without stripe artifacts. Fig. 6(b) compares the reconstructions for high-contrast illumination stripes: both the conventional SIM algorithm and the present invention reconstruct good super-resolution images, but in fine detail the resolution of the image reconstructed by the present invention is higher. Comparing (a) with (b), the present invention reconstructs well under both high- and low-contrast illumination stripes, demonstrating its adaptability to different fringe contrasts and, in particular, its robustness to low-contrast fringes.
Fig. 7 shows results on real projection DMD-SIM images. Comparing the wide-field image, the conventional SIM reconstruction and the reconstruction of the present invention, the present invention clearly delivers a higher-resolution, higher-quality SIM reconstruction. This shows that the invention also greatly improves SIM reconstruction quality in real low-contrast fringe scenarios and has practical application value.
Example 2
Referring to fig. 8, the present embodiment relates to a system for implementing a low-contrast stripe SIM reconstruction method based on deep learning of embodiment 1, including:
the low-contrast stripe SIM image training data set production module, used for producing the low-contrast stripe SIM image training data set: first, low-contrast stripe patterns of different levels are obtained through simulation; then the ground-truth image is multiplied by the stripe pattern and the product is convolved with a point spread function to obtain low-contrast SIM images of different levels; finally, noise is added to the images;
the low-contrast SIM super-resolution neural network construction and training module: first, the network encoder is constructed to extract image features layer by layer; then the network decoder is constructed to restore image information layer by layer; finally, the super-resolution neural network is trained;
the low-contrast stripe SIM experimental data super-resolution reconstruction module: first, the nine frames of SIM experimental data acquired under low-contrast illumination-stripe imaging conditions are input; then the optimal weights saved during network training are loaded; finally, the super-resolution reconstruction result of the low-contrast stripe SIM experimental data is output.
Example 3
This embodiment relates to an apparatus for the deep-learning-based low-contrast stripe SIM reconstruction method, comprising a memory and one or more processors, wherein executable code is stored in the memory; when executing the executable code, the one or more processors implement the deep-learning-based low-contrast stripe SIM reconstruction method of embodiment 1.
Example 4
The present invention also relates to a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the deep-learning-based low-contrast stripe SIM reconstruction method described in embodiment 1.
Example 5
The invention also relates to a computing device comprising a memory and a processor, wherein executable code is stored in the memory; the processor implements the deep-learning-based low-contrast stripe SIM reconstruction method of embodiment 1 when executing the executable code.

Claims (6)

1. A low-contrast stripe SIM reconstruction method based on deep learning, characterized by comprising the following steps:
s1: manufacturing a low-contrast stripe SIM image training data set; firstly, obtaining low-contrast stripe patterns of different grades through simulation; then multiplying the truth image with the stripe pattern and then convolving the truth image with a point spread function to obtain low-contrast SIM images with different grades; finally, adding noise in the image;
s2: constructing and training a low-contrast SIM super-resolution neural network; firstly, an encoder constructing a network extracts image features layer by layer; then, a decoder constructing a network restores the image information layer by layer; finally, training a super-resolution neural network;
s3: super-resolution reconstruction of low-contrast stripe SIM experimental data; firstly, inputting nine pieces of SIM experimental data obtained under the imaging condition of low-contrast illumination stripes; then, loading the optimal weight saved in the network training; finally, outputting super-resolution reconstruction results of the low-contrast stripe SIM experimental data;
said step S1 comprises the sub-steps of:
s1.1: simulating an illumination stripe pattern; simulating three directions, and three phases of illumination fringe patterns for each direction, according to the SIM principle;
s1.2: adjusting the contrast of the stripes; the method is characterized in that: adjusting the contrast ratio by controlling a parameter m in the following formula to obtain stripe patterns with different level contrast ratios;
I(r) = I0·[1 + m·cos(k0·r + φ)], wherein I(r) represents the illumination fringe pattern, I0 the average intensity of the illumination stripes, m the fringe modulation contrast, k0 the spatial frequency vector of the fringes, and φ the initial phase of the fringes; the contrast of the illumination stripes is adjusted by controlling the magnitude of the parameter m, with ten contrast levels set from low to high; there are nine illumination stripe patterns at each contrast level, so ten groups of nine stripe patterns, ninety in total, are obtained;
s1.3: multiplying the truth image with the illumination stripes and then convolving the truth image with the point spread function to obtain a simulated SIM image; further, noise is added to the image to simulate a more realistic SIM imaging result, where D (r) represents a simulated SIM image, F (r) represents a sample information distribution, I (r) represents an illumination fringe pattern, H (r) represents a point spread function of the microscopic imaging system, and N (r) represents noise added to the image:
said step S2 comprises the sub-steps of:
s2.1: constructing a low-contrast SIM super-resolution neural network; constructing an encoder of a network, and extracting image features layer by layer; then constructing a decoder of the network, and recovering the image information layer by layer;
the low contrast SIM super-resolution neural network comprises an encoder and a decoder; the encoder consists of four downsampling modules and is responsible for extracting image features from input; the decoder consists of four up-sampling modules and is responsible for carrying out further feature optimization on the features obtained by the input processed by the encoder and generating a final target image; each downsampling module comprises two 3×3 convolution products and one 2×2 max pooling layer; each up-sampling module consists of two 3 x 3 convolutions; each 3×3 convolution in the network is followed by a modified linear unit ReLU activation function to promote the expressive power of the model; the network also uses jump connection, the output characteristic diagram of each downsampling stage in the encoder is transmitted to the decoder through the jump connection, and the output characteristic diagram of the upsampling layer in the corresponding stage is spliced with the channel, so that the fusion of shallow layer information and deep layer information is realized, and more semantic information is provided for the decoding process;
s2.2: training a low-contrast SIM super-resolution neural network; inputting a low-contrast SIM image, and outputting a predicted image of the network through a low-contrast SIM super-resolution neural network; in order to minimize the error between the predicted image and the true SIM image, calculating an image gap through a loss function, and then continuously updating parameters of the neural network in the network training process so as to minimize the loss function; and (3) reducing the gap between the output and the true value through iterative optimization, reducing the value of the loss function to network convergence, and storing the weight parameter.
2. The low-contrast stripe SIM reconstruction method based on deep learning according to claim 1, wherein: said step S3 comprises the sub-steps of:
s3.1: inputting low-contrast stripe SIM experimental data into a neural network;
s3.2: loading the optimal weight saved in the network training;
s3.3: outputting a super-resolution reconstruction result image.
3. A system for implementing the deep-learning-based low-contrast stripe SIM reconstruction method as defined in claim 1, characterized by comprising:
the low-contrast stripe SIM image training data set production module, used for producing the low-contrast stripe SIM image training data set: first, low-contrast stripe patterns of different levels are obtained through simulation; then the ground-truth image is multiplied by the stripe pattern and the product is convolved with a point spread function to obtain low-contrast SIM images of different levels; finally, noise is added to the images;
the low-contrast SIM super-resolution neural network construction and training module: first, the network encoder is constructed to extract image features layer by layer; then the network decoder is constructed to restore image information layer by layer; finally, the super-resolution neural network is trained;
the low-contrast stripe SIM experimental data super-resolution reconstruction module: first, the nine frames of SIM experimental data acquired under low-contrast illumination-stripe imaging conditions are input; then the optimal weights saved during network training are loaded; finally, the super-resolution reconstruction result of the low-contrast stripe SIM experimental data is output.
4. An apparatus for the deep-learning-based low-contrast stripe SIM reconstruction method, comprising a memory and one or more processors, wherein executable code is stored in the memory, and the one or more processors, when executing the executable code, implement the method of any one of claims 1-2.
5. A computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the deep-learning-based low-contrast stripe SIM reconstruction method of any one of claims 1-2.
6. A computing device comprising a memory and a processor, wherein the memory has executable code stored therein, which when executed by the processor, implements the method of any of claims 1-2.
CN202310320880.XA 2023-03-24 2023-03-24 Low-contrast stripe SIM reconstruction method and system based on deep learning Active CN116309073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310320880.XA CN116309073B (en) 2023-03-24 2023-03-24 Low-contrast stripe SIM reconstruction method and system based on deep learning


Publications (2)

Publication Number Publication Date
CN116309073A CN116309073A (en) 2023-06-23
CN116309073B true CN116309073B (en) 2023-12-29

Family

ID=86795843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310320880.XA Active CN116309073B (en) 2023-03-24 2023-03-24 Low-contrast stripe SIM reconstruction method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN116309073B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018212811A1 (en) * 2017-05-19 2018-11-22 Google Llc Hiding information and images via deep learning
WO2022111368A1 (en) * 2020-11-26 2022-06-02 上海健康医学院 Deep-learning-based super-resolution reconstruction method for microscopic image, and medium and electronic device
CN114894099A (en) * 2022-05-05 2022-08-12 中国科学院国家天文台南京天文光学技术研究所 Large-range high-precision echelle grating mechanical splicing displacement detection system and method
WO2022223383A1 (en) * 2021-04-21 2022-10-27 Bayer Aktiengesellschaft Implicit registration for improving synthesized full-contrast image prediction tool
CN115272065A (en) * 2022-06-16 2022-11-01 南京理工大学 Dynamic fringe projection three-dimensional measurement method based on fringe image super-resolution reconstruction
CN115619646A (en) * 2022-12-09 2023-01-17 浙江大学 Deep learning optical illumination super-resolution imaging method for sub-fifty nano-structure


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Comparative study of single-image super-resolution algorithms based on deep learning; Wang Zixin; Mou Ye; Wang Derui; Electronic Technology & Software Engineering (No. 07); 104-106 *
Super-resolution reconstruction of remote-sensing images with an improved residual convolutional neural network; Bai Yuyang; Zhu Fuzhen; Journal of Natural Science of Heilongjiang University (No. 03); 124-130 *

Also Published As

Publication number Publication date
CN116309073A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
US11055828B2 (en) Video inpainting with deep internal learning
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN110612549A (en) Machine learning based techniques for fast image enhancement
Tang et al. RestoreNet: a deep learning framework for image restoration in optical synthetic aperture imaging system
CN114998141B (en) Space environment high dynamic range imaging method based on multi-branch network
CN112529794A (en) High dynamic range structured light three-dimensional measurement method, system and medium
US9823623B2 (en) Conversion of complex holograms to phase holograms
CN111369466A (en) Image distortion correction enhancement method of convolutional neural network based on deformable convolution
CN114387395A (en) Phase-double resolution ratio network-based quick hologram generation method
Lee et al. Learning to generate multi-exposure stacks with cycle consistency for high dynamic range imaging
Cao et al. Dynamic structured illumination microscopy with a neural space-time model
Shen et al. Deeper super-resolution generative adversarial network with gradient penalty for sonar image enhancement
CN116309073B (en) Low-contrast stripe SIM reconstruction method and system based on deep learning
CN117058302A (en) NeRF-based generalizable scene rendering method
CN116957931A (en) Method for improving image quality of camera image based on nerve radiation field
CN115586164A (en) Light sheet microscopic imaging system and method based on snapshot time compression
CN115578497A (en) Image scene relighting network structure and method based on GAN network
CN111260558B (en) Image super-resolution network model with variable magnification
JP4339639B2 (en) Computer hologram creation method
CN114693828B (en) Fourier laminated imaging reconstruction method based on alternating direction multiplier method
CN117197627B (en) Multi-mode image fusion method based on high-order degradation model
JP2023035928A (en) Neural network training based on consistency loss
Stojnev Ilić Preprocessing Image Data for Deep Learning
Zhou et al. A Low-Light Image Enhancement Algorithm Incorporating Cross-Mixed Attention and Receptive Field Expansion Mechanism
Satapathy et al. OpenCLTM Implementation of Rapid Image Restoration Kernels Based on Blind/Non-blind Deconvolution Techniques for Heterogeneous Parallel Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant