CN114387196A - Method and device for generating undersampled image of super-resolution microscope - Google Patents


Info

Publication number
CN114387196A
CN114387196A (application CN202111600840.8A)
Authority
CN
China
Prior art keywords: image, resolution, super, microscope, training
Prior art date
Legal status
Granted
Application number
CN202111600840.8A
Other languages
Chinese (zh)
Other versions
CN114387196B (en)
Inventor
姜伟
徐蕾
阚世超
余茜颖
蒋兴然
申艾欣
王旭升
Current Assignee
Hamde Ningbo Intelligent Medical Technology Co ltd
Original Assignee
Hamde Ningbo Intelligent Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hamde Ningbo Intelligent Medical Technology Co ltd
Priority to CN202111600840.8A
Publication of CN114387196A
Application granted
Publication of CN114387196B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural network learning methods
    • G06T 3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 2207/10056 — Microscopic image (image acquisition modality)
    • G06T 2207/20004 — Adaptive image processing
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]


Abstract

The invention provides a method and device for generating an undersampled image of a super-resolution microscope, together with an electronic device and a storage medium. The generation method comprises: acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample; dividing the low-resolution wide-field image / super-resolution undersampled image pairs into a training set and a test set; training and testing a pre-constructed UR-Net-8 deep learning network on the training and test sets to obtain a target model; and inputting a low-resolution microscope wide-field image into the target model to obtain the target super-resolution microscope undersampled image. The target model accepts WF images of the target structure at any size, so a reconstruction close to a real super-resolution microscope sparse image can be obtained without rescaling the image during testing, avoiding the image distortion and loss of quantifiability that arbitrary rescaling causes in cell molecular biology and molecular imaging applications.

Description

Method and device for generating undersampled image of super-resolution microscope
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for generating an under-sampled image of a super-resolution microscope, electronic equipment and a storage medium.
Background
Fluorescence Microscopy (FM) plays a key role in visualizing biological systems, particularly cell and tissue structures, subcellular macromolecular assemblies, and intermolecular interactions. For example, molecule-specific and multicolor imaging techniques allow researchers to observe cellular tissue morphology and interactions between specific molecules via FM, as in live-cell imaging. However, an ordinary optical microscope is diffraction-limited: lateral resolution is about 200-300 nm and axial resolution about 300-500 nm. Although Electron Microscopy (EM) pushes resolution to the nanometer scale, EM has drawbacks such as a short imaging wavelength and the need to observe samples in vacuum, which limit its widespread use in cell molecular biology research.
The development of Super-Resolution Microscopy (SRM) breaks the diffraction limit of optical microscope resolution, allowing microscopic imaging to reach levels of detail that EM can observe, while retaining, to a certain extent, FM's advantages in sample preparation, imaging flexibility and target specificity. Representative SRM techniques include STORM, STED, PALM and SIM. SRM nonetheless has limitations: 1) it relies on precise and expensive optical microscope components and settings, limiting its wide application; 2) per-method application and analysis, complicated sample preparation, long imaging acquisition times, difficulty with multicolor imaging, and phototoxicity and photobleaching damage further restrict its use.
To address these limitations, deep learning networks have in recent years been applied in the SRM field, using artificial intelligence for the task of reconstructing low-resolution microscopic images into super-resolution microscopic images. For example, to counter the long imaging times of PALM and STORM and the ease with which sample fluorescence quenches, the ANNA-PALM deep learning model, built on A-Net, reconstructs a super-resolution image from a sparsely sampled super-resolution microscopic image (sparse) and/or a low-resolution microscopic image (wide-field image, Widefield, WF). On the same data set, however, ANNA-PALM reconstruction from a WF image alone is not ideal. Good SRM reconstruction requires a large number of WF and SRM image pairs, which is difficult to obtain in practice.
The sparse image acquired by the super-resolution microscope plays an essential guiding role in the SRM image reconstruction process. To obtain a good SRM image with the ANNA-PALM deep learning model, the network must therefore be given an undersampled sparse image acquired by a super-resolution microscope and/or a WF image acquired by an ordinary fluorescence microscope; with only the WF image as input, SRM image reconstruction is difficult to achieve.
Disclosure of Invention
The embodiments of the invention provide a method and a device for generating an undersampled image of a super-resolution microscope, an electronic device and a storage medium, realizing a reconstruction strategy and method from a low-resolution WF image to a super-resolution microscope undersampled image.
In a first aspect, an embodiment of the present invention provides a method for generating an undersampled image of a super-resolution microscope, where the method for generating the undersampled image includes:
acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample;
splitting the super-resolution sampling image set into a super-resolution undersampled image set, and dividing the matched low-resolution wide-field image / super-resolution undersampled image pairs into a training set and a test set;
training and testing a pre-constructed UR-Net-8 deep learning network based on the training set and the testing set to obtain a target model;
and inputting the wide-field image of the low-resolution microscope to be generated into the target model to obtain the undersampled image of the target super-resolution microscope.
As one possible implementation, the acquiring the low-resolution wide-field image set and the super-resolution sampling image set of the imaging sample includes:
carrying out super-resolution immunofluorescence staining on subcellular structures and macromolecular complexes of a plurality of mammalian cell lines to obtain an imaging sample;
and respectively shooting the imaging sample through a wide-field microscope and a super-resolution microscope to obtain a low-resolution wide-field image set and a super-resolution sampling image set.
As a possible implementation, the pre-constructed UR-Net-8 deep learning network comprises an input-scale-adaptive generator and an input-scale-adaptive discriminator; the generator obtains the target model through adversarial training against the discriminator.
As a possible implementation manner, the discriminator includes:
4 stacked convolutional layers, 1 spatial pyramid pooling layer for adaptive sampling and 1 fully-connected layer for classification discrimination.
As a possible implementation, the target model is trained by stochastic gradient descent, and the training process comprises a pre-training stage with fixed input scale and a fine-tuning stage with adaptive input scale.
In one possible implementation, the encoder and decoder of the generator pass information between corresponding convolutional and deconvolutional layers through skip residual connections; each convolutional layer, deconvolutional layer and corresponding skip connection forms a U-shaped network.
As one possible implementation, the loss function of the target model's adversarial network adopts cross-entropy loss, L1 loss and MS-SSIM loss, where the MS-SSIM loss is 1 minus the MS-SSIM value between the real super-resolution microscope undersampled image and the reconstructed one; the target model parameters are updated by back-propagating the loss values, and the discriminator updates its parameters by back-propagating the cross-entropy loss.
In a second aspect, an embodiment of the present invention provides an apparatus for generating an undersampled image of a super-resolution microscope, where the apparatus for generating an undersampled image includes:
the data acquisition module is used for acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample;
the training set dividing module is used for splitting the super-resolution sampling image set into a super-resolution undersampled image set, and dividing the matched low-resolution wide-field image / super-resolution undersampled image pairs into a training set and a test set;
the training module is used for training and testing a pre-constructed UR-Net-8 deep learning network based on the training set and the testing set to obtain a target model;
and the generating module is used for inputting the wide-field image of the low-resolution microscope to be generated into the target model to obtain the under-sampled image of the target super-resolution microscope.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program thereon, and the processor implements the method according to any one of the first aspect when executing the program.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium on which is stored a computer program which, when executed by a processor, implements the method of any one of the first aspects.
The invention provides a method and a device for generating an undersampled image of a super-resolution microscope, an electronic device and a storage medium. The generation method comprises: acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample; splitting the super-resolution sampling image set into an undersampled image set, and dividing the low-resolution wide-field image / super-resolution undersampled image pairs into a training set and a test set; training and testing a pre-constructed UR-Net-8 deep learning network on the training and test sets to obtain a target model; and inputting a low-resolution microscope wide-field image into the target model to obtain the target super-resolution microscope undersampled image. The target model accepts WF images of the target structure at any size, so a reconstruction close to a real super-resolution microscope sparse image can be obtained without rescaling during testing, avoiding the image distortion and loss of quantifiability that arbitrary rescaling causes in cell molecular biology and molecular imaging applications. A super-resolution sparse image can be reconstructed in seconds from a single acquired WF image of the target structure input to the UR-Net-8 model; the model is robust to WF images from different cell lines; and the undersampled image generated by UR-Net-8 accurately simulates a sparse image acquired by a real super-resolution microscope, which can guide the further use of deep learning models to reconstruct super-resolution images directly from WF images.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of any embodiment of the invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to illustrate the solutions of one or more embodiments of the present specification or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some of the embodiments of this specification; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 shows a flow chart of a method of generating a super-resolution microscope undersampled image of an embodiment of the present invention;
FIG. 2 is a schematic diagram of an input scale adaptive generator according to an embodiment of the present invention;
FIG. 3 is a diagram of the components of a UR unit and a UR-Net according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an input scale adaptive discriminator according to an embodiment of the present invention;
FIG. 5 is an input low-resolution wide-field microscope image and a reconstructed super-resolution microscope sparse image of an embodiment of the invention;
FIG. 6 is a schematic structural diagram of an apparatus for generating an under-sampled image of a super-resolution microscope according to an embodiment of the present invention;
fig. 7 shows a block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order that those skilled in the art may better understand the technical solutions in one or more embodiments of the present disclosure, those solutions will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure; all other embodiments obtainable by a person skilled in the art without inventive effort shall fall within the scope of protection of this document.
The invention provides a reconstruction strategy and method from a low-resolution WF image to a super-resolution microscope sparse image, based on a newly built UR-Net-8 deep learning network. The network design follows the encoder-decoder idea of deep learning image reconstruction, with U-shaped residual skip connections passing information between each encoding layer and the corresponding decoding layer. Super-resolution sparse images acquired by SRM and WF images acquired by an ordinary optical microscope are used as training pairs for deep learning model training with the UR-Net-8 network. After training, the model reconstructs a low-resolution WF image of any size into a super-resolution microscope sparse image, yielding a result close to a real super-resolution microscope undersampled image. The newly built UR-Net-8 deep learning framework can predict an undersampled (sparse) image resembling a super-resolution microscope acquisition directly from a WF image using a small data set: a generated-sparse UR-Net-8 model for a target structure (such as microtubules) is established by training and testing on a small-scale data set. This innovation creates the necessary conditions for using generated sparse images to guide WF images toward the correct pixel distribution and distribution range in SRM image reconstruction, enabling accurate SRM reconstruction.
It should be noted that, the description of the embodiment of the present invention is only for clearly illustrating the technical solutions of the embodiment of the present invention, and does not limit the technical solutions provided by the embodiment of the present invention.
Fig. 1 illustrates a method for generating an undersampled image of a super-resolution microscope according to an embodiment of the present invention, where the method for generating the undersampled image includes:
s20, acquiring a low-resolution wide-field image set and a super-resolution sampling image set of the imaging sample;
specifically, the acquiring the low-resolution wide-field image set and the super-resolution sampling image set of the imaging sample comprises:
carrying out super-resolution immunofluorescence staining on subcellular structures and macromolecular complexes of a plurality of mammalian cell lines to obtain an imaging sample;
and respectively shooting the imaging sample through a wide-field microscope and a super-resolution microscope to obtain a low-resolution wide-field image set and a super-resolution sampling image set.
Building the new UR-Net-8 deep learning network model that converts WF images into super-resolution sparse images requires a training set and a test set. An imaging sample is obtained by super-resolution immunofluorescence staining (GFP fluorescence and other fluorescent probe labels) of subcellular structures and macromolecular complexes, such as cytoskeletal microtubules, of various mammalian cell lines such as U373MG and U2-OS;
s40, splitting the super-resolution sampling image set into a super-resolution undersampling image set, and splitting the low-resolution wide-field image and the super-resolution undersampling matching image pair into a training set and a test set;
specifically, an imaging sample is shot by using a wide-field microscope and a super-resolution microscope respectively to obtain a low-resolution WF image set and a super-resolution sampling image set, the collected super-resolution sampling image set can be divided into a spark image set, and then the WF image-super-resolution spark image pair is randomly divided into a training set and a testing set.
S60, training and testing a pre-constructed UR-Net-8 deep learning network based on the training set and the testing set to obtain a target model;
and S80, inputting the low-resolution microscope wide-field image to be generated into the target model to obtain a target super-resolution microscope undersampled image.
The embodiment of the invention provides a method for generating an undersampled image of a super-resolution microscope, comprising: acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample; splitting the super-resolution sampling image set into an undersampled image set, and dividing the low-resolution wide-field image / super-resolution undersampled image pairs into a training set and a test set; training and testing a pre-constructed UR-Net-8 deep learning network on the training and test sets to obtain a target model; and inputting a low-resolution microscope wide-field image into the target model to obtain the target super-resolution microscope undersampled image. The target model accepts WF images of the target structure at any size, so a reconstruction close to a real super-resolution microscope sparse image can be obtained without rescaling during testing, avoiding the image distortion and loss of quantifiability that arbitrary rescaling causes in cell molecular biology and molecular imaging applications. A super-resolution sparse image can be reconstructed in seconds from a single acquired WF image of the target structure input to the UR-Net-8 model. The target model is robust to WF images from different cell lines, and its generated undersampled images accurately simulate real sparse images acquired by a super-resolution microscope, which can guide the further use of deep learning models to reconstruct super-resolution images directly from WF images.
Specifically, the pre-constructed UR-Net-8 deep learning network comprises an input-scale-adaptive generator and an input-scale-adaptive discriminator; the generator obtains the target model through adversarial training against the discriminator.
Specifically, the generative adversarial network of the target model comprises an input-scale-adaptive generator and an input-scale-adaptive discriminator. The generator combines an encoder of 8 convolutional layers with a decoder of 8 deconvolutional layers; input-scale adaptivity is achieved by automatically computing the feature-map size of each convolutional and deconvolutional layer from the input image size. The generator is trained adversarially against the discriminator to obtain a reconstruction model for super-resolution microscope sparse images; a low-resolution microscope WF image input to the trained generator yields a super-resolution microscope sparse image.
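Because every encoder layer halves the spatial size (stride 2) and every decoder layer mirrors it, the per-layer feature-map sizes can be derived from the input size alone. A sketch of that computation, assuming "same"-padded stride-2 convolutions with ceiling division (the padding rule is an assumption; the patent only states that sizes are computed automatically):

```python
def encoder_feature_sizes(h, w, n_layers=8, stride=2):
    """Spatial size of each of the 8 encoder feature maps for an
    h x w input, using ceiling division as 'same' padding would."""
    sizes = []
    for _ in range(n_layers):
        h = (h + stride - 1) // stride  # ceil(h / stride)
        w = (w + stride - 1) // stride
        sizes.append((h, w))
    return sizes
```

For a 512 × 512 input the encoder bottoms out at 2 × 2 and the decoder traverses the list in reverse; a 576 × 576 input instead ends at 3 × 3, which is why the sizes must be recomputed per input rather than hard-coded.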
The input-scale-adaptive discriminator consists of 4 stacked convolutional layers, 1 spatial pyramid pooling layer for adaptive sampling, and 1 fully connected layer for classification. The discriminator is used only during the adversarial training phase. Its inputs are a low-resolution WF image paired with a real super-resolution microscope sparse image, and a low-resolution WF image paired with a sparse image reconstructed by the input-scale-adaptive generator. Its output is 0 or 1, indicating respectively that the input pair contains a generator-reconstructed sparse image or a real super-resolution microscope sparse image.
Specifically, the new deep learning generative adversarial network (UR-Net-8) can be trained by stochastic gradient descent, with pre-training on fixed-scale input images followed by fine-tuning on scale-adaptive input images. The trained input-scale-adaptive generator then reconstructs an input low-resolution WF image into an output super-resolution microscope sparse image.
Further, the low-resolution WF image and the super-resolution microscope sparse image are each scaled to 576 × 576, 512 × 512 sub-images are randomly cropped from the 576 × 576 images, and a random horizontal flip is applied. The cropped and randomly flipped low-resolution WF image is input to the generator;
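The crop-and-flip augmentation can be sketched by sampling its parameters (the function name is illustrative, and the 0.5 flip probability is an assumed convention — the patent specifies a random horizontal flip without giving the probability):

```python
import random

def augment_params(src=576, crop=512, seed=None):
    """Sample augmentation parameters for one training image: a random
    512 x 512 crop origin inside a 576 x 576 image, plus a
    horizontal-flip flag."""
    rng = random.Random(seed)
    top = rng.randint(0, src - crop)   # 0..64 inclusive
    left = rng.randint(0, src - crop)
    flip = rng.random() < 0.5          # assumed flip probability
    return top, left, flip
```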
specifically, in the encoder and decoder of the input scale adaptive generator, jump residual connection is adopted between the corresponding convolutional layer and the corresponding deconvolution layer for information transmission, and the convolutional layer, the deconvolution layer and the corresponding jump connection form a U-type network, and the U-type network comprises 8U-type networks in total. The method comprises the following specific steps:
s601: the convolution layer characteristic diagram corresponding to each encoder uses a convolution kernel with the size of 5 multiplied by 5, the step size is 2, convolution is carried out, the deconvolution layer characteristic diagram corresponding to each decoder uses a convolution kernel with the size of 5 multiplied by 5, and the step size is 2, deconvolution is carried out. The corresponding convolution and deconvolution in this step are both activated by the relu function and normalized by the batch normalization (bn) operation.
S602: the convolutional-layer input feature map and the deconvolutional-layer output feature map of the corresponding U-shaped residual structure are concatenated along the channel dimension and convolved with a 3 × 3 kernel at stride 1. This convolution is activated with the lrelu function (threshold parameter 0.2), halves the number of output channels, and its output is normalized with batch normalization.
S603: the output feature map obtained in step S602 for one U-shaped residual structure is taken as the decoder input of the next cascaded U-shaped residual structure.
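The channel arithmetic of step S602 — concatenate the encoder input and decoder output along the channel dimension, then halve the count with the 3 × 3 convolution — implies that when the two sides carry equal channel counts, the UR unit preserves that count. A minimal sketch of the bookkeeping (the function name is illustrative):

```python
def ur_unit_out_channels(enc_channels, dec_channels):
    """Output channels of one UR unit: channel-wise concatenation
    followed by a 3x3 convolution that halves the channel count."""
    concatenated = enc_channels + dec_channels  # concat along channels
    return concatenated // 2                    # halved by the 3x3 conv
```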
Further, the input-scale-adaptive discriminator is composed of 4 stacked convolutional layers, 1 spatial pyramid pooling layer for adaptive sampling, and 1 fully connected layer for classification. The specific steps include:
inputting the pairs — the low-resolution WF image plus the real super-resolution microscope sparse image, and the low-resolution WF image plus the sparse image reconstructed by the input-scale-adaptive generator — into the discriminator's convolutional layers;
where each of the 4 convolutional layers has a 5 × 5 kernel and stride 2, and each is followed by batch normalization (bn) and relu activation;
the spatial pyramid is composed of 1 × 1, 2 × 2, 3 × 3 and 4 × 4 grids and samples the features into a fixed-length vector (30 times the number of input feature maps of the spatial pyramid);
and the fully connected layer outputs 0 or 1, indicating respectively that the input pair contains a sparse image reconstructed by the input-scale-adaptive generator or a real super-resolution microscope sparse image.
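The fixed-length vector produced by the spatial pyramid pooling layer follows from the grid sizes alone: each channel is pooled into 1 + 4 + 9 + 16 = 30 values, matching the "30 times the number of input feature maps" stated above. A sketch of the count (the function name is illustrative):

```python
def spp_output_length(n_channels, grids=(1, 2, 3, 4)):
    """Length of the SPP output vector: one pooled value per grid cell
    per channel, over 1x1, 2x2, 3x3 and 4x4 grids."""
    return n_channels * sum(g * g for g in grids)  # 30 per channel
```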
Further, the loss function for training the target model adopts cross-entropy loss, L1 loss and MS-SSIM loss, where the MS-SSIM loss is 1 minus the MS-SSIM value between the real and reconstructed super-resolution microscope sparse images; the model parameters are updated by back-propagating the loss values. The discriminator updates its parameters by back-propagating the cross-entropy loss;
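The composite generator loss can be sketched as follows (the weights `w_l1` and `w_ms` are illustrative assumptions — the patent names the three terms but not their weighting — and `ms_ssim_value` is assumed to be precomputed elsewhere):

```python
import math

def generator_loss(d_fake_prob, l1_distance, ms_ssim_value,
                   w_l1=100.0, w_ms=10.0):
    """Cross-entropy (adversarial) term pushing the discriminator's
    output on reconstructed images toward 1, plus weighted L1 and
    MS-SSIM terms; the MS-SSIM loss is 1 minus the MS-SSIM value."""
    adv = -math.log(max(d_fake_prob, 1e-12))  # cross-entropy vs. label 1
    return adv + w_l1 * l1_distance + w_ms * (1.0 - ms_ssim_value)
```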
The new deep learning generative adversarial network is pre-trained with fixed-size 512 × 512 input images and fine-tuned with scale-adaptive input images. During training, the generator parameters are updated 4 times for every single update of the discriminator parameters.
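The 4:1 update ratio can be sketched as a simple alternating schedule (names are illustrative):

```python
def update_schedule(n_rounds, g_per_d=4):
    """One discriminator update followed by g_per_d generator updates
    per round, matching the 4 generator updates per discriminator
    update used during training."""
    ops = []
    for _ in range(n_rounds):
        ops.append("D")
        ops.extend("G" * g_per_d)  # g_per_d generator updates
    return ops
```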
Training and testing the UR-Net-8 deep learning network on a large number of collected data set images of different cell structures (such as microtubules) yields a UR-Net-8 model of high robustness and accuracy that converts WF images into super-resolution sparse images.
The advantageous effects of the invention are explained below with a preferred embodiment:
Imaging samples were obtained by super-resolution immunofluorescence staining (GFP fluorescence and other fluorescent probe labeling) of subcellular structures and macromolecular complexes, such as cytoskeletal microtubules, of various mammalian cell lines such as U373MG and U2-OS, using the following materials and sources:
U2-OS and U373MG cells (ATCC); DMEM (Gibco, 11965092);
PBS (Gibco, 20012050); Trypsin-EDTA (Gibco, 25200072);
Paraformaldehyde (PFA) (Sigma-Aldrich, 158127);
Glutaraldehyde (Sigma-Aldrich, G6257); NaBH4 (Sigma-Aldrich, 71320);
Triton X-100 (Sigma-Aldrich, T8787); BSA (Sigma-Aldrich, V900933);
Goat serum (Solarbio, SL038);
Mouse anti-α-tubulin (Sigma-Aldrich, T5168); Goat anti-mouse IgG
Alexa Fluor 647
(Invitrogen, A21236); Sodium chloride (Sigma-Aldrich, S9888);
Tris-Cl (Sigma-Aldrich, 10708976001);
Glucose (Sigma-Aldrich, D9434);
HCl (Sigma-Aldrich, 258148);
Glucose oxidase (Sigma-Aldrich, G7141); Catalase (Sigma-Aldrich, C9322);
β-mercaptoethanol (Sigma-Aldrich, M3148);
NaOH (Sigma-Aldrich, S588);
MgCl2 (Sigma-Aldrich, M2393).
The specific staining steps are as follows:
2 × 10^5 U373MG cells were plated on a 35 mm glass-bottom dish (glass diameter 23 mm) and incubated overnight. The cells were fixed with 4% paraformaldehyde and 0.02% glutaraldehyde for 10 min; the waste solution was discarded and the sample was reduced with 0.01% NaBH4 for 10 min, then washed 3 times with PBS. Blocking was performed with blocking solution (10% goat serum, 3% BSA, 0.2% Triton X-100), followed by primary antibody incubation (antibody diluent containing 0.5% BSA and 0.1% Triton X-100). The primary antibody dilution ratio was 1:50000 for mouse anti-α-tubulin. After incubation, residual primary antibody was removed with wash buffer (containing 0.2% BSA and 0.1% Triton X-100), washing 5 times for 15 min each. The sample was then incubated with the corresponding secondary antibody (Goat anti-mouse Alexa Fluor 647) at a typical dilution of 1:200 to 1:500, and washed 3 times (15 min each) after incubation. The washed sample was blocked again for 10 min and washed 3 times with PBS (5 min each). The finished sample was soaked in PBS and imaged within one week. Super-resolution imaging requires the assistance of an imaging buffer, whose major components are 50 mM Tris-Cl (pH 8.0), 10 mM NaCl, 10% (w/v) glucose, 0.56 mg/mL glucose oxidase, 0.17 mg/mL catalase and 0.14 M β-mercaptoethanol.
The imaging samples were then photographed with a wide-field microscope and a super-resolution microscope, respectively, to obtain a low-resolution wide-field image set and a super-resolution sampled image set, which were randomly divided into a training set and a test set.
The microtubule-stained samples of U373MG cells were imaged with a wide-field microscope and a super-resolution microscope, with the following acquisition parameters:
the wide-field microscope exposure time was 50 ms; the super-resolution microscope single-frame exposure time was 10–20 ms; the imaging field size was 512 × 512 or 256 × 256 pixels; the lens was set to 0.4x; the 647 nm laser was set to 2 kW/cm² and the 405 nm laser to 0–1 W/cm²; and 50000–60000 frames were acquired. After 30 FOVs (fields of view) were collected, the best-quality ROIs (regions of interest) were obtained by manual cropping and randomly divided into a training set and a test set.
The image synthesis and analysis parameters were as follows: the acquired images were Gaussian-fitted with NIS-Elements AR Analysis (Nikon) software; emitting fluorescent molecules were identified by analyzing each frame and their localization precision was determined; the peak profile of each fluorophore was identified from the point spread function (PSF); and a reconstructed STORM super-resolution image was finally obtained.
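The per-frame localization step (performed here with NIS-Elements) can be illustrated with a toy NumPy sketch that estimates an emitter's position and PSF width from an isolated spot via intensity-weighted moments; this is an illustrative stand-in for, not a reproduction of, the commercial software's Gaussian-fitting routine:

```python
import numpy as np

def localize_spot(img):
    """Estimate emitter centre (row, col) and isotropic PSF width sigma
    from a single background-subtracted spot via intensity-weighted moments."""
    img = np.clip(img, 0.0, None)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cy = (ys * img).sum() / total                     # centroid row
    cx = (xs * img).sum() / total                     # centroid column
    var = (((ys - cy) ** 2 + (xs - cx) ** 2) * img).sum() / total
    return cy, cx, np.sqrt(var / 2.0)                 # 2D: var = 2 * sigma^2

# Synthetic Gaussian spot centred at (8.0, 11.0) with sigma = 1.5
ys, xs = np.indices((21, 21))
spot = np.exp(-((ys - 8.0) ** 2 + (xs - 11.0) ** 2) / (2 * 1.5 ** 2))
cy, cx, sigma = localize_spot(spot)
```

A full STORM pipeline would repeat such a fit for every detected spot in every frame and accumulate the localizations into the super-resolution image.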
The training-set images are defined as follows: the wide-field (WF) images involved in the invention all come from a wide-field microscope, and an undersampled image (sparse, K = 10000) is defined as the image reconstructed from 10000 frames randomly drawn from the total number of frames acquired by the super-resolution microscope; each UR-Net-8 training pair consists of 1 WF image and 30 undersampled images.
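The sparse (K = 10000) definition, 10000 frames drawn at random from the full acquisition, can be sketched as follows; the frame counts come from the text, while the reconstruction of each frame subset into an image is left abstract:

```python
import random

def make_undersampled_sets(total_frames=50000, k=10000, n_images=30, seed=0):
    """Draw n_images random subsets of k frame indices out of a
    total_frames-frame acquisition; each subset would be reconstructed
    into one sparse image, giving the 1 WF : 30 sparse training pair."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(total_frames), k))
            for _ in range(n_images)]

subsets = make_undersampled_sets()
```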
A UR-Net-8 deep learning network (target model) with a novel architecture, capable of reconstructing super-resolution microscope undersampled images from low-resolution microscope images, is designed and built, together with its training strategy and method.
As shown in figs. 2 to 5, the reconstruction strategy and method for generating super-resolution microscope undersampled images from low-resolution microscope images based on the novel deep-learning generative adversarial network UR-Net-8 include the following steps (S):
s100: the novel deep-learning generative adversarial network comprises an input-scale-adaptive generator and an input-scale-adaptive discriminator. The input-scale-adaptive generator combines an encoder composed of 8 convolutional layers with a decoder composed of 8 deconvolution layers; input-scale adaptation is achieved by automatically calculating the feature-map size of each convolutional and deconvolution layer from the size of the input image. The input-scale-adaptive generator is trained adversarially against the input-scale-adaptive discriminator to obtain a reconstruction model for super-resolution microscope sparse images; a low-resolution WF image input to the trained generator yields a super-resolution microscope sparse image.
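The input-scale adaptation, computing every layer's feature-map size from the input size, can be sketched as below, assuming each stride-2 convolution halves the spatial size with ceiling rounding ("same" padding) and the decoder mirrors the encoder; the rounding convention is an assumption made for illustration:

```python
import math

def encoder_decoder_sizes(h, w, depth=8):
    """Feature-map sizes for an encoder of `depth` stride-2 convolutions
    and a mirror-image decoder, computed from the input size so the
    network adapts to any input scale (assumes 'same' padding: each
    conv halves the size with ceiling rounding)."""
    enc = [(h, w)]
    for _ in range(depth):
        h, w = math.ceil(h / 2), math.ceil(w / 2)
        enc.append((h, w))
    dec = list(reversed(enc[:-1]))   # decoder upsamples back level by level
    return enc, dec

enc, dec = encoder_decoder_sizes(512, 512)
```

For a 512 × 512 input the 8 encoder stages shrink the map to 2 × 2, and the decoder recovers 512 × 512; for an arbitrary input size the same computation yields a matching, per-layer size table.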
S200: the input scale adaptive discriminator in S100 is composed of 4 stacked convolutional layers, 1 spatial pyramid pooling layer for adaptive sampling, and 1 fully-connected layer for implementing classification discrimination. The discriminator is used only for the confrontational training phase in S100. The input of the discriminator is a low-resolution WF image + a real super-resolution microscope spark image, a low-resolution WF image + a super-resolution microscope spark image reconstructed by the input scale adaptive generator in the S100. The output of the discriminator is 0 or 1, and the corresponding input image respectively contains the super-resolution microscope spark image reconstructed by the input scale adaptive generator in the S100 or the real super-resolution microscope spark image.
S300: and training the novel deep learning generation countermeasure network (UR-Net-8) by adopting a random gradient descent method, wherein the training process adopts a mode of image pre-training with fixed input scale and image fine adjustment with self-adaptive input scale. And reconstructing the input low-resolution WF image to an output super-resolution microscope sparse image by using the trained novel deep learning generation countermeasure network input scale self-adaptive generator.
Specifically, as shown in fig. 2, the WF image input in S100 is uniformly scaled to 512 × 512 and randomly horizontally flipped, and the processed image is then fed to the generator. The generator is a cascade of 8 U-shaped residual networks; each U-shaped residual network combines a U-shaped network with a residual network, and the deconvolution output feature map in a U-shaped residual network has the same size as that network's input feature map:
each encoder convolutional layer performs convolution with a 5 × 5 kernel and stride 2, and each decoder deconvolution layer performs deconvolution with a 5 × 5 kernel and stride 2. Both the convolutions and the deconvolutions are activated with the ReLU function and normalized with batch normalization (BN).
The convolutional-layer input feature map and the deconvolution-layer output feature map corresponding to the U-shaped residual structure are concatenated in the channel dimension and convolved with a 3 × 3 kernel and stride 1. This convolution is activated with the LReLU function (threshold parameter 0.2), halves the number of output channels, and normalizes the output with batch normalization.
The deconvolution-layer output feature map is subtracted from the encoder input feature map of the corresponding U-shaped residual structure, and the difference serves as the decoder input of the next cascaded U-shaped residual structure.
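The data flow of one cascaded U-shaped residual block described above (channel-dimension concatenation, channel-halving fusion, and the subtraction that forms the next block's input) can be sketched at the array level in NumPy. The `fake_conv`, `fake_deconv` and `fuse` bodies are shape-preserving stand-ins invented for illustration, not trained layers:

```python
import numpy as np

def fake_conv(x):
    """Stand-in for a 5x5 stride-2 convolution: halve H and W."""
    return x[:, ::2, ::2]

def fake_deconv(x):
    """Stand-in for a 5x5 stride-2 deconvolution: double H and W."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

def fuse(skip, up):
    """Skip connection: concatenate along the channel axis (axis 0),
    then a stand-in for the 3x3 conv that halves the channel count."""
    cat = np.concatenate([skip, up], axis=0)
    half = cat.shape[0] // 2
    return 0.5 * (cat[:half] + cat[half:])

def u_residual_block(x):
    """One U-shaped residual block: encode, decode, fuse with the skip,
    then subtract the result from the block input so the difference
    feeds the next cascaded block."""
    up = fake_deconv(fake_conv(x))
    return x - fuse(x, up)

y = u_residual_block(np.ones((16, 64, 64)))   # (channels, H, W)
```

The point of the sketch is the shape bookkeeping: the fused output matches the block input in all dimensions, so the subtraction is well defined and the 8 blocks can be chained freely.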
The loss function for training the novel deep-learning generative adversarial network combines cross-entropy loss, L1 loss and MS-SSIM loss; the MS-SSIM loss is the difference between 1 and the MS-SSIM value between the real and the reconstructed super-resolution microscope sparse image. The model parameters are updated by back-propagating the loss values, and the discriminator updates its parameters by back-propagating the cross-entropy loss.
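The composite generator loss can be sketched with a simplified single-scale SSIM computed from global image statistics standing in for the full windowed, multi-scale MS-SSIM; the weights and the simplification are assumptions made for illustration, and the adversarial cross-entropy term is omitted since it requires the discriminator output:

```python
import numpy as np

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM from global image statistics, a simplified
    stand-in for the windowed, multi-scale MS-SSIM."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def generator_loss(real, fake, w_l1=1.0, w_ssim=1.0):
    """L1 loss plus (1 - SSIM), as in the text; the adversarial
    cross-entropy term of the full objective would be added from
    the discriminator output."""
    l1 = np.abs(real - fake).mean()
    return w_l1 * l1 + w_ssim * (1.0 - global_ssim(real, fake))

img = np.random.default_rng(0).random((64, 64))
```

The loss is zero when the reconstruction equals the target and grows with both pixel-wise error (L1) and structural dissimilarity (1 - SSIM).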
specifically, as shown in fig. 4, the discriminator in S200 is composed of 4 stacked convolutional layers, 1 spatial pyramid pooling layer for adaptive sampling, and 1 fully connected layer for classification:
the low-resolution WF image paired with a real super-resolution microscope sparse image, or the low-resolution WF image paired with a super-resolution microscope sparse image reconstructed by the input-scale-adaptive generator of S100, is input to the convolutional layers of the discriminator;
wherein each of the 4 convolutional layers uses a 5 × 5 kernel with stride 2, and each is followed by batch normalization (BN) and ReLU activation;
wherein the spatial pyramid in step S200 is composed of 1 × 1, 2 × 2, 3 × 3 and 4 × 4 grids, and pools the features into a fixed-length vector (30 times the number of input feature maps of the spatial pyramid). S400: the fully connected layer in S200 outputs 0 or 1, indicating respectively that the input image contains the super-resolution microscope sparse image reconstructed by the input-scale-adaptive generator of S100 or a real super-resolution microscope sparse image.
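The 1 × 1 + 2 × 2 + 3 × 3 + 4 × 4 pyramid (1 + 4 + 9 + 16 = 30 bins per channel) can be sketched in NumPy with adaptive max pooling; the rounding scheme for the bin edges is an assumption, since the text does not specify it:

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 3, 4)):
    """Spatial pyramid pooling: max-pool each channel over 1x1, 2x2,
    3x3 and 4x4 grids (1 + 4 + 9 + 16 = 30 bins), yielding a vector of
    fixed length 30 * channels regardless of the spatial input size."""
    c, h, w = feature_map.shape
    out = []
    for n in levels:
        # Adaptive bin edges so the n x n grid covers the whole map.
        hs = [int(round(i * h / n)) for i in range(n + 1)]
        ws = [int(round(j * w / n)) for j in range(n + 1)]
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(cell.max(axis=(1, 2)))
    return np.concatenate(out)

vec = spp(np.random.default_rng(1).random((8, 37, 53)))
```

Because the bin edges scale with the input, the output length depends only on the channel count, which is what lets the fully connected layer accept inputs of any spatial size.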
In implementation, the novel deep-learning generative adversarial network is first pre-trained with input low-resolution microscope images of a fixed 512 × 512 size to obtain a model that reconstructs 512 × 512 super-resolution microscope sparse images; the network is then fine-tuned with scale-adaptive input low-resolution WF images to obtain a model that reconstructs accurately from input WF images of any size. The method effectively improves reconstruction from low-resolution WF images to super-resolution microscope sparse images at different scales, solves the information loss caused by image scaling during microscope image reconstruction, and has high application value.
The UR-Net-8 deep learning network is trained and tested on a large collection of data-set images of different cell structures (such as microtubules), yielding a UR-Net-8 model of high robustness and high accuracy that converts wide-field images into super-resolution undersampled images.
UR-Net-8 is trained to obtain a model that converts WF images into super-resolution sparse images. The model was tested as shown in fig. 4: the first image is a low-resolution WF image of microtubule staining of U2-OS cells; the second is a sparse image (K = 10000) acquired by a real super-resolution microscope; the third is the super-resolution microscope sparse image reconstructed by the corresponding generator model; and the fourth is a fully sampled super-resolution image (perfect, K = 50000) actually acquired by the super-resolution microscope. The images generated by the model closely simulate the undersampled images acquired by a real super-resolution microscope.
Based on the same inventive concept, an embodiment of the present invention further provides a device for generating super-resolution microscope undersampled images, which can be used to implement the method for generating super-resolution microscope undersampled images described in the above embodiments, as described below. Since the principle by which the generating device solves the problem is similar to that of the generating method, the implementation of the device may refer to the implementation of the method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. While the system described in the embodiments below is preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
Fig. 6 shows an apparatus for generating an undersampled image of a super-resolution microscope according to an embodiment of the present invention, the apparatus for generating an undersampled image includes:
a data acquisition module 20 for acquiring a low-resolution wide-field image set and a super-resolution sampling image set of the imaging sample;
a training-set dividing module 40, configured to split the super-resolution sampled image set into a super-resolution undersampled image set, match the low-resolution wide-field images with the super-resolution undersampled images into image pairs, and divide the pairs into a training set and a test set;
the training module 60 is used for training and testing a pre-constructed UR-Net-8 deep learning network based on the training set and the testing set to obtain a target model;
and the generating module 80 is configured to input the low-resolution microscope wide-field image to be generated into the target model to obtain a target super-resolution microscope undersampled image.
The embodiment of the invention provides a device for generating super-resolution microscope undersampled images: the data acquisition module 20 acquires a low-resolution wide-field image set and a super-resolution sampled image set of imaging samples; the training-set dividing module 40 splits the super-resolution sampled image set into an undersampled image set and divides the low-resolution wide-field image/super-resolution undersampled image pairs into a training set and a test set; the training module 60 trains and tests the pre-constructed UR-Net-8 deep learning network on the training and test sets to obtain a target model; and the generating module 80 inputs the low-resolution microscope wide-field image to be processed into the target model to obtain the target super-resolution microscope undersampled image. The target model supports WF-image input of any size for a target structure; a reconstruction result close to a real super-resolution microscope sparse image can be obtained without scaling the image during model testing, avoiding the image distortion and loss of quantifiability caused by arbitrary scaling in cell and molecular biology and molecular imaging applications. A super-resolution sparse image can be reconstructed within seconds from a single acquired WF image of the target structure input to the UR-Net-8 model. The target model is robust to WF images of different cell lines, and the undersampled images it generates accurately simulate real sparse images acquired by a super-resolution microscope, which can guide further work on directly reconstructing super-resolution images from WF images with deep learning models.
Fig. 7 is a schematic structural diagram of an electronic device to which an embodiment of the present invention can be applied, and as shown in fig. 7, the electronic device includes a Central Processing Unit (CPU)701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present invention further provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus for generating a super-resolution microscope undersampled image in the above embodiment; or it may be a computer-readable storage medium that exists separately and is not built into the electronic device. The computer readable storage medium stores one or more programs for use by one or more processors in performing a method for generating a super-resolution microscope undersampled image as described in the present invention.
The foregoing description is only exemplary of the preferred embodiments of the invention and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features and (but not limited to) features having similar functions disclosed in the present invention are mutually replaced to form the technical solution.

Claims (10)

1. A method for generating an undersampled image of a super-resolution microscope is characterized by comprising the following steps:
acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample;
splitting the super-resolution sampled image set into a super-resolution undersampled image set, matching the low-resolution wide-field images and the super-resolution undersampled images into image pairs, and randomly dividing the image pairs into a training set and a test set;
training and testing a pre-constructed UR-Net-8 deep learning network based on the training set and the testing set to obtain a target model;
and inputting the wide-field image of the low-resolution microscope to be generated into the target model to obtain the undersampled image of the target super-resolution microscope.
2. The method of generating an undersampled image of claim 1, wherein said acquiring a low resolution wide field image set and a super-resolution sampled image set of an imaging sample comprises:
carrying out super-resolution immunofluorescence staining on a plurality of mammal cell line subcellular structures and macromolecular compounds to obtain an imaging sample;
and respectively shooting the imaging sample through a wide-field microscope and a super-resolution microscope to obtain a low-resolution wide-field image set and a super-resolution sampling image set.
3. The method of generating an undersampled image of claim 2, wherein said pre-constructed UR-Net-8 deep learning network comprises an input-scale-adaptive generator and an input-scale-adaptive discriminator; and the generator obtains the target model through adversarial training with the discriminator.
4. The method of generating an undersampled image according to claim 3, wherein said discriminator includes:
4 stacked convolutional layers, 1 spatial pyramid pooling layer for adaptive sampling and 1 fully-connected layer for classification discrimination.
5. The method of claim 4, wherein the target model is trained by a stochastic gradient descent method, and the training process comprises an image pre-training phase with fixed input scale and an image fine-tuning phase with adaptive input scale.
6. The method of claim 3, wherein the encoder and decoder of the generator communicate information between the corresponding convolutional layer and deconvolution layer using a jump residual connection, and the convolutional layer, the deconvolution layer, and the corresponding jump connection form a U-type network.
7. The method of generating an undersampled image according to claim 3,
the loss function of the adversarial network of the target model adopts cross-entropy loss, L1 loss and MS-SSIM loss; wherein the MS-SSIM loss is the difference between 1 and the MS-SSIM value between the real super-resolution microscope undersampled image and the reconstructed super-resolution microscope undersampled image; the target model parameters are updated through loss-value back propagation, and the discriminator updates the target model parameters through cross-entropy-loss back propagation.
8. An apparatus for generating an undersampled image of a super-resolution microscope, the apparatus comprising:
the data acquisition module is used for acquiring a low-resolution wide-field image set and a super-resolution sampling image set of an imaging sample;
the training set dividing module is used for dividing the super-resolution sampling image set into a super-resolution undersampled image set, and dividing the low-resolution wide-field image and the super-resolution undersampled matched image pair into a training set and a test set;
the training module is used for training and testing a pre-constructed UR-Net-8 deep learning network based on the training set and the testing set to obtain a target model;
and the generating module is used for inputting the wide-field image of the low-resolution microscope to be generated into the target model to obtain the under-sampled image of the target super-resolution microscope.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the computer program, implements the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202111600840.8A 2021-12-24 2021-12-24 Method and device for generating undersampled image of super-resolution microscope Active CN114387196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111600840.8A CN114387196B (en) 2021-12-24 2021-12-24 Method and device for generating undersampled image of super-resolution microscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111600840.8A CN114387196B (en) 2021-12-24 2021-12-24 Method and device for generating undersampled image of super-resolution microscope

Publications (2)

Publication Number Publication Date
CN114387196A true CN114387196A (en) 2022-04-22
CN114387196B CN114387196B (en) 2024-08-27

Family

ID=81198889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111600840.8A Active CN114387196B (en) 2021-12-24 2021-12-24 Method and device for generating undersampled image of super-resolution microscope

Country Status (1)

Country Link
CN (1) CN114387196B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681595A (en) * 2023-08-01 2023-09-01 长春理工大学 Remote computing super-resolution imaging device based on multimodal PSF
WO2024044981A1 (en) * 2022-08-30 2024-03-07 深圳华大智造科技股份有限公司 Super-resolution analysis system and method, and corresponding imaging device and model training method

Citations (5)

Publication number Priority date Publication date Assignee Title
CN107977682A (en) * 2017-12-19 2018-05-01 南京大学 Lymph class cell sorting method and its device based on the enhancing of polar coordinate transform data
CN110349237A (en) * 2019-07-18 2019-10-18 华中科技大学 Quick body imaging method based on convolutional neural networks
CN111052173A (en) * 2017-07-31 2020-04-21 巴斯德研究所 Method, apparatus and computer program for improving reconstruction of dense super-resolution images from diffraction limited images acquired from single molecule positioning microscopy
CN111524064A (en) * 2020-03-11 2020-08-11 浙江大学 Fluorescence microscopic image super-resolution reconstruction method based on deep learning
CN113383225A (en) * 2018-12-26 2021-09-10 加利福尼亚大学董事会 System and method for propagating two-dimensional fluorescence waves onto a surface using deep learning

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN111052173A (en) * 2017-07-31 2020-04-21 巴斯德研究所 Method, apparatus and computer program for improving reconstruction of dense super-resolution images from diffraction limited images acquired from single molecule positioning microscopy
CN107977682A (en) * 2017-12-19 2018-05-01 南京大学 Lymph class cell sorting method and its device based on the enhancing of polar coordinate transform data
CN113383225A (en) * 2018-12-26 2021-09-10 加利福尼亚大学董事会 System and method for propagating two-dimensional fluorescence waves onto a surface using deep learning
CN110349237A (en) * 2019-07-18 2019-10-18 华中科技大学 Quick body imaging method based on convolutional neural networks
CN111524064A (en) * 2020-03-11 2020-08-11 浙江大学 Fluorescence microscopic image super-resolution reconstruction method based on deep learning

Non-Patent Citations (1)

Title
ANTHONY BARSIC: "Three-dimensional super-resolution and localization of dense clusters of single molecules", SCIENTIFIC REPORTS, vol. 4, no. 4, 23 June 2014 (2014-06-23), pages 1 - 8 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2024044981A1 (en) * 2022-08-30 2024-03-07 深圳华大智造科技股份有限公司 Super-resolution analysis system and method, and corresponding imaging device and model training method
CN116681595A (en) * 2023-08-01 2023-09-01 长春理工大学 Remote computing super-resolution imaging device based on multimodal PSF
CN116681595B (en) * 2023-08-01 2023-11-03 长春理工大学 Remote computing super-resolution imaging device based on multimodal PSF

Also Published As

Publication number Publication date
CN114387196B (en) 2024-08-27

Similar Documents

Publication Publication Date Title
de Haan et al. Deep-learning-based image reconstruction and enhancement in optical microscopy
Qiao et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy
CN114387196B (en) Method and device for generating undersampled image of super-resolution microscope
Zhao et al. Isotropic super-resolution light-sheet microscopy of dynamic intracellular structures at subsecond timescales
CN114252423B (en) Method and device for generating full sampling image of super-resolution microscope
JP6791245B2 (en) Image processing device, image processing method and image processing program
JPH10509817A (en) Signal restoration method and apparatus
CN114331840B (en) Method and device for reconstructing high-fidelity super-resolution microscopic image
CN112633248B (en) Deep learning full-in-focus microscopic image acquisition method
US20220237783A1 (en) Slide-free histological imaging method and system
WO2020081125A1 (en) Analyzing complex single molecule emission patterns with deep learning
CN106204466A (en) A kind of self-adaptive solution method for Fourier lamination micro-imaging technique
US20240281933A1 (en) Systems and methods for image processing
CN114387264B (en) HE staining pathological image data expansion and enhancement method
US20220343463A1 (en) Changing the size of images by means of a neural network
Fazel et al. Analysis of super-resolution single molecule localization microscopy data: A tutorial
CN110785709B (en) Generating high resolution images from low resolution images for semiconductor applications
Kölln et al. Label2label: training a neural network to selectively restore cellular structures in fluorescence microscopy
Prigent et al. SPITFIR (e): a supermaneuverable algorithm for fast denoising and deconvolution of 3D fluorescence microscopy images and videos
JP2023532755A (en) Computer-implemented method, computer program product, and system for processing images
Zhang et al. Spatiotemporal coherent modulation imaging for dynamic quantitative phase and amplitude microscopy
Dai et al. Exceeding the limit for microscopic image translation with a deep learning-based unified framework
CN114897693A (en) Microscopic image super-resolution method based on mathematical imaging theory and generation countermeasure network
Gil et al. Segmenting quantitative phase images of neurons using a deep learning model trained on images generated from a neuronal growth model
Price et al. Introduction and historical perspective

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant