CN114926562A - Hyperspectral image virtual staining method based on deep learning - Google Patents

Hyperspectral image virtual staining method based on deep learning

Info

Publication number
CN114926562A
CN114926562A (application CN202210556266.9A)
Authority
CN
China
Prior art keywords
deep learning
sample
image
picture
staining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210556266.9A
Other languages
Chinese (zh)
Inventor
王毅
朱若华
何海洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou Medical University
Original Assignee
Wenzhou Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou Medical University filed Critical Wenzhou Medical University
Priority to CN202210556266.9A priority Critical patent/CN114926562A/en
Publication of CN114926562A publication Critical patent/CN114926562A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a hyperspectral image virtual staining method based on deep learning, which comprises the following steps: stacking bright-field pictures of a tissue section to be stained, captured at a plurality of spectral wavelengths, into a spectral cube; using the spectral cube as the input of a pre-trained UNet deep learning network model, extracting key boundary information from the spectral cube progressively during down-sampling, and reconstructing a virtual stained image through up-sampling and an attention mechanism. Because the spectral cube of the bright-field pictures is used as the input of the deep learning network, the boundaries between tissue components can be inferred from the spectral information of the different components, which improves the accuracy and reconstruction quality of the generated virtual stained picture.

Description

Hyperspectral image virtual staining method based on deep learning
Technical Field
The invention relates to the technical field of virtual staining, and in particular to a hyperspectral image virtual staining method based on deep learning.
Background
Hyperspectral imaging (HSI) is an emerging biomedical visualization technology. It can simultaneously acquire two-dimensional spatial information and one-dimensional spectral information of biological tissue, covers spectral ranges including visible, infrared and ultraviolet light, and offers high spectral resolution. It can provide diagnostic information related to tissue physiology, morphology and biochemical composition, supplying more detailed spectral characteristics for histological research and, in turn, additional auxiliary information for medical pathological diagnosis. Hyperspectral imaging has been applied to cancer, heart disease, retinal disease, diabetic foot, shock, histopathology, image-guided surgery, and other areas.
Producing the clinical gold-standard stained images of tissue sections is time-consuming and labor-intensive; the procedure requires multiple reagents and irreversibly alters the tissue. Various studies have attempted to change this workflow with different imaging modalities, for example nonlinear microscopy based on two-photon fluorescence, second-harmonic generation, third-harmonic generation and Raman scattering, or multimodal imaging with a controlled supercontinuum source. These methods require ultrafast lasers or supercontinuum light sources and, because the optical signal is weak, relatively long scan times. Such microscope imaging systems are also expensive, complex to implement on a microscope, and currently unavailable in most settings.
In addition, deep learning-based methods that virtually convert images of unlabeled tissue sections into various stained images have been widely developed in recent years. In these applications a single bright-field image or a bright-field holographic image is used as the network input, so the generated staining result scarcely accounts for the boundaries between different tissue components, and the reconstruction quality is poor.
Therefore, those skilled in the art urgently need a deep learning-based hyperspectral image virtual staining method that can extract boundary structure and tissue feature information from the spectrum and deliver better reconstruction results.
Disclosure of Invention
In view of the above, the invention provides a hyperspectral image virtual staining method based on deep learning that uses the spectral cube of bright-field pictures as the input of a deep learning network. The boundary information of tissue components can be inferred from the spectral information of the different components, improving the accuracy and reconstruction quality of the generated virtual stained picture.
To achieve the above object, the invention adopts the following technical solution:
a hyperspectral image virtual staining method based on deep learning comprises the following steps:
splicing bright field pictures of a tissue section to be dyed, which are shot under a plurality of spectral wavelengths, into a spectral cube;
the spectrum cube is used as the input of a pre-trained UNet deep learning network model, key boundary information in the spectrum cube is extracted in a step-by-step increasing mode through a down-sampling process, and a virtual staining image is reconstructed through an up-sampling process and an attention mechanism.
Further, the training process of the UNet deep learning network model comprises the following steps:
obtaining a plurality of tissue sections;
imaging each tissue section at a plurality of spectral wavelengths to obtain multiple sample bright-field pictures per tissue section;
performing H&E staining on each tissue section and imaging it to obtain a sample real stained picture;
registering the sample bright-field pictures and the sample real stained picture of the same tissue section, and stacking the registered multi-wavelength sample bright-field pictures of that section into a sample spectral cube;
normalizing the sample real stained picture;
forming a sample pair from the sample spectral cube and the normalized sample real stained picture of the same tissue section;
and training a pre-constructed UNet deep learning network model with a plurality of sample pairs until a preset constraint condition is met.
Further, the spectral cube or the sample spectral cube is composed of bright-field pictures at 65 wavelengths, captured in 5 nm steps over the 400 nm-725 nm spectral band.
Further, during image registration, the sample bright-field picture and the sample real stained picture of the same tissue section are registered with the deep learning image registration network VoxelMorph.
Further, the normalization process includes:
unifying the features of the sample real stained pictures into a normal distribution with a normalization function;
and normalizing the staining style of the sample real stained pictures with a cycle adversarial network (CycleGAN).
Further, when training the UNet deep learning network model, the method further comprises:
performing data enhancement on the plurality of sample pairs input into the UNet deep learning network model.
Further, the data enhancement comprises: image flipping, padding, random cropping and adding random noise.
Further, the preset constraint condition is as follows: evaluating the correlation between the reconstructed virtual stained image and the sample real stained image through the peak signal-to-noise ratio and the structural similarity, and iteratively updating the parameter values of the UNet deep learning network model by backpropagation according to the correlation result.
According to the above technical solution, compared with the prior art, the invention discloses a hyperspectral image virtual staining method based on deep learning that uses a UNet-GAN network as the main deep learning framework and can convert a bright-field image under a microscope into an H&E-stained image in a single step, accelerating and improving the microscopy imaging steps of the workflow. Meanwhile, the spectral cube of the bright-field pictures is used as the training-set input, and the boundary information of tissue components can be inferred from the spectral information of the different components, improving the accuracy of the generated virtual stained picture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a hyperspectral image virtual staining method based on deep learning provided by the invention;
FIG. 2 is an architecture diagram of the UNet deep learning network model provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, an embodiment of the invention discloses a hyperspectral image virtual staining method based on deep learning, which comprises the following steps:
stacking the bright-field pictures of a tissue section to be stained, captured at a plurality of spectral wavelengths, into a spectral cube (a minimal construction sketch is given after these steps);
using the spectral cube as the input of a pre-trained UNet deep learning network model, extracting key boundary information from the spectral cube progressively during down-sampling, and reconstructing a virtual stained image through the up-sampling process and an attention mechanism.
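By way of illustration only, the following minimal Python sketch shows one way such a spectral cube could be assembled from per-wavelength bright-field images; the file naming and the use of OpenCV/NumPy are assumptions of this sketch, not part of the patent.

    # Minimal sketch (assumed tooling: OpenCV + NumPy): stack per-wavelength
    # bright-field images into a spectral cube of shape (H, W, num_wavelengths).
    import glob
    import cv2
    import numpy as np

    def build_spectral_cube(folder):
        # hypothetical naming: one grayscale image per wavelength, e.g. 400nm.tif ... 720nm.tif
        paths = sorted(glob.glob(folder + "/*.tif"))
        bands = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
        cube = np.stack(bands, axis=-1).astype(np.float32)
        return cube / 255.0  # scale pixel values to [0, 1]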
As shown in FIG. 2, the UNet deep learning network model architecture of the embodiment of the invention comprises an input layer, a down-sampling convolutional network, an up-sampling convolutional network, attention mechanism blocks, concatenation layers and an output layer. The input is the spectral cube and the output is a 3-channel virtual stained image; each concatenation layer joins the outputs of the down-sampling and up-sampling convolutional networks along the channel dimension.
The invention realizes the mapping from bright-field spectral images to H&E-stained images. The bright-field pictures at different wavelengths across the 400 nm-725 nm band are combined into a spectral cube used as the network input; the down-sampling process extracts the key features of the tissue section, and as the sampling network iterates, finer tissue-component features are obtained from them. A virtual stained image is then reconstructed by the up-sampling process, and because the skip connections use an attention mechanism, the key features extracted during down-sampling are attended to while the stained image is restored, so the original cell structure is preserved. The network combines the UNet structure with attention blocks: the skip concatenations of the UNet allow deep layers to receive shallow-layer features, so the boundary information of tissue components and the overall tissue structure receive more attention and are not destroyed, while the added attention blocks act as weight balancers; placing them at the skip concatenations strengthens the features shared by the shallow and deep layers, namely the boundary and structural features.
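As a concrete illustration of such an attention-weighted skip connection, the following PyTorch sketch shows one common form of attention gate; the layer sizes and gating design are assumptions for illustration and are not necessarily the exact architecture of the patent.

    # Hedged PyTorch sketch of an attention gate applied to a UNet skip connection.
    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        """Re-weights encoder (skip) features by their agreement with decoder features."""
        def __init__(self, enc_ch, dec_ch, inter_ch):
            super().__init__()
            self.w_enc = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
            self.w_dec = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)
            self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
            self.relu = nn.ReLU(inplace=True)

        def forward(self, enc_feat, dec_feat):
            # enc_feat and dec_feat are assumed to share the same spatial size here
            attn = self.psi(self.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
            return enc_feat * attn  # emphasizes the shared boundary/structure features

    # The gated skip features are then concatenated channel-wise with the upsampled
    # decoder features before the next convolution block, as in a standard UNet.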
In a specific embodiment, the training process of the UNet deep learning network model comprises the following steps:
Obtaining a plurality of tissue sections; the tissue sections used for training are all clinical tissue sections.
Imaging each tissue section at a plurality of spectral wavelengths to obtain multiple sample bright-field pictures per section. Using an Olympus microscope together with a spectrometer, bright-field pictures at 65 wavelengths covering the 400 nm-725 nm spectral band in 5 nm steps are captured and stacked into a spectral cube.
Performing H&E staining on each tissue section and imaging it to obtain a sample real stained picture.
Registering the sample bright-field pictures and the sample real stained picture of the same tissue section, and stacking the registered multi-wavelength sample bright-field pictures of that section into a sample spectral cube.
Normalizing the sample real stained picture.
Forming a sample pair from the sample spectral cube and the normalized sample real stained picture of the same tissue section.
Training the pre-constructed UNet deep learning network model with a plurality of sample pairs until a preset constraint condition is met (a minimal training-loop sketch follows these steps).
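A minimal training-loop sketch consistent with these steps is given below; the L1 reconstruction loss, batch size and DataLoader wiring are illustrative assumptions, and the UNet-GAN framework mentioned in the summary would add an adversarial loss term on top of this supervised part.

    # Hedged sketch of the paired training loop implied by the steps above
    # (supervised term only; any adversarial/GAN term is omitted for brevity).
    import torch
    from torch.utils.data import DataLoader

    def train(model, paired_dataset, epochs=100, lr=1e-4, device="cuda"):
        loader = DataLoader(paired_dataset, batch_size=4, shuffle=True)  # yields (cube, he_image) pairs
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        l1 = torch.nn.L1Loss()
        model.to(device).train()
        for _ in range(epochs):
            for cube, he_target in loader:  # cube: (B, 65, H, W); he_target: (B, 3, H, W)
                cube, he_target = cube.to(device), he_target.to(device)
                loss = l1(model(cube), he_target)
                optimizer.zero_grad()
                loss.backward()  # backpropagation through the UNet
                optimizer.step()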
In one embodiment, during image registration, the deep learning image registration network VoxelMorph is used to register the sample bright-field picture and the sample real stained picture of the same tissue section.
Specifically, because the tissue section must be manually stained between the bright-field acquisition and the stained image, and the manual handling may introduce deformation of the tissue, the bright-field and stained images must be registered before being fed into the training network. The method uses the deep learning image registration network VoxelMorph to register the bright-field picture and the real stained picture of a sample: a neural network extracts image features, the similarity between the images is computed, an optimization algorithm drives the similarity measure toward its global optimum, and the resulting optimal transformation achieves the registration.
The image-similarity measure is the structural similarity (SSIM) between the registered image and the reference image (i.e. the bright-field picture and the real stained picture); the closer the value is to 1, the better the two images are registered. At each iteration the registration network produces a transformation and the corresponding registered image, the SSIM between the registered image and the reference is computed, and the transformation is refined until the registered image is structurally consistent with the reference. Compared with traditional methods, deep learning-based registration is faster and has lower computational cost.
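For illustration, the SSIM check described above could be computed with scikit-image as in the following sketch; it assumes two single-channel float images of identical size with values in [0, 1].

    # Sketch of the SSIM-based registration quality check (assumed library: scikit-image).
    from skimage.metrics import structural_similarity as ssim

    def registration_quality(reference_img, registered_img):
        # both inputs: 2-D float arrays in [0, 1]; a value closer to 1 means better alignment
        return ssim(reference_img, registered_img, data_range=1.0)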
In another embodiment, the normalization process comprises:
unifying the features of the sample real stained pictures into a normal distribution with a normalization function;
and normalizing the staining style of the sample real stained pictures with a cycle adversarial network (CycleGAN).
Specifically, because tissue staining is performed manually and the illumination is uneven when the bright-field pictures of tissue sections are captured, the pixel values differ considerably between the images of the training set. Normalizing the data set therefore speeds up the convergence of the UNet deep learning network model, improves the output accuracy and determines the output style of the results. During neural network training, features whose distributions differ, and may be skewed, produce both very large and very small feature values during gradient descent and backpropagation; the gradient updates then cannot accommodate the different descent behaviour of features across dimensions and scales, which makes training difficult and causes the loss to oscillate continuously. In addition, different staining procedures yield stained images with inconsistent styles, some darker and some lighter, so without style normalization the network output cannot be uniform and the results are poor.
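As a minimal illustration of the first normalization step (pushing the image features toward a standard normal distribution), a per-channel standardization could look like the sketch below; the CycleGAN-based style normalization is a separately trained model and is not shown.

    # Minimal sketch: per-channel standardization of a stained RGB image (NumPy).
    import numpy as np

    def normalize_stained_image(img):
        # img: float array of shape (H, W, 3); returns zero-mean, unit-variance channels
        img = img.astype(np.float32)
        mean = img.mean(axis=(0, 1), keepdims=True)
        std = img.std(axis=(0, 1), keepdims=True) + 1e-8
        return (img - mean) / std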
In one embodiment, because the amount of collected medical data is small, data enhancement is applied with the transforms module of the torchvision library of the PyTorch deep learning framework before the data are passed into the network; the enhancement comprises flipping, padding, random cropping and adding random noise to the input images. The stacked 65-wavelength spectral cube serves as the first-layer input of the generator, with 65 channels; the number of channels increases progressively during down-sampling to extract the key boundary information in the image and decreases progressively during up-sampling to restore the staining, and the network finally outputs a three-channel RGB result. Residual connections are added to each part of the network, summing shallow-layer and deep-layer features of the neural network; this alleviates the vanishing- and exploding-gradient problems of overly deep networks and improves robustness while increasing depth. The common Adam optimizer is used to accelerate the convergence of the network.
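The torchvision-based data enhancement described above could be composed as in the sketch below; the crop size, padding amount and noise level are illustrative assumptions, and for paired training the same random state would of course have to be applied to the spectral cube and its target image.

    # Hedged sketch of the data-enhancement pipeline (torchvision transforms on tensors).
    import torch
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.Pad(16, padding_mode="reflect"),
        transforms.RandomCrop(256),
        transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),  # additive random noise
    ])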
Analysis of the network training results: the virtual stained picture output by the UNet deep learning network model is compared with the real stained picture, the correlation between the two is computed, and the accuracy of the virtual staining result is evaluated. The distance between the virtual stained picture and the real stained picture can be measured by the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), and the parameter values of each layer of the network are updated backwards accordingly, improving the overall accuracy.
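The PSNR/SSIM comparison described here could be computed, for example, with scikit-image as sketched below; the sketch assumes the predicted and ground-truth H&E images are registered RGB arrays of identical size with values in [0, 255] (channel_axis requires a recent scikit-image version).

    # Sketch of the PSNR/SSIM evaluation of a virtual stained image against the real one.
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate(pred_rgb, true_rgb):
        psnr = peak_signal_noise_ratio(true_rgb, pred_rgb, data_range=255)
        ssim_val = structural_similarity(true_rgb, pred_rgb, channel_axis=-1, data_range=255)
        return psnr, ssim_val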
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed in the embodiment corresponds to the method disclosed in the embodiment, so that the description is simple, and the relevant points can be referred to the description of the method part.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A hyperspectral image virtual staining method based on deep learning is characterized by comprising the following steps:
stacking bright field pictures of a tissue section to be stained, which are shot under a plurality of spectral wavelengths, into a spectral cube;
the spectral cube is used as the input of a pre-trained UNet deep learning network model, the key boundary information in the spectral cube is extracted in a step-by-step incremental mode through a down-sampling process, and a virtual staining image is reconstructed through an up-sampling process and an attention mechanism.
2. The hyperspectral image virtual staining method based on deep learning according to claim 1, wherein the training process of the UNet deep learning network model comprises the following steps:
obtaining a plurality of tissue slices;
shooting each tissue slice under a plurality of spectral wavelengths respectively to obtain a plurality of sample bright field pictures of one tissue slice;
H&E staining is respectively carried out on each tissue slice, and shooting is carried out, so that a real staining picture of the sample is obtained;
carrying out image registration on the sample bright field picture and the sample real staining picture of the same tissue slice, and splicing the sample bright field pictures with multiple wavelengths of the same tissue slice after registration into a sample spectrum cube;
carrying out normalization processing on the sample real staining picture;
forming a sample pair by the sample spectrum cube of the same tissue section and the normalized sample real staining picture;
and training a pre-constructed UNet deep learning network model by using a plurality of sample pairs until a preset constraint condition is met.
3. The hyperspectral image virtual staining method based on deep learning according to claim 2, wherein the spectral cube or the sample spectral cube is composed of bright-field pictures at 65 wavelengths, captured in 5 nm steps over the 400 nm-725 nm spectral band.
4. The hyperspectral image virtual staining method based on deep learning of claim 2, wherein in image registration, a sample bright field picture and a sample real staining picture of the same tissue section are subjected to image registration by adopting a deep learning image registration network VoxelMorph.
5. The hyperspectral image virtual staining method based on deep learning according to claim 2, wherein the normalization process comprises:
unifying the features of the sample real staining pictures into a normal distribution by adopting a normalization function;
and normalizing the staining style of the sample real staining pictures through a cycle adversarial network.
6. The hyperspectral image virtual staining method based on deep learning according to claim 2, wherein when the UNet deep learning network model is trained, the method further comprises:
performing data enhancement processing on a plurality of the sample pairs input into the UNet deep learning network model.
7. The hyperspectral image virtual staining method based on deep learning of claim 6, wherein the data enhancement comprises: image flipping, padding, random cropping and adding random noise.
8. The hyperspectral image virtual staining method based on deep learning according to claim 2, wherein the preset constraint condition is as follows: evaluating the correlation between the reconstructed virtual staining image and the sample real staining image through the peak signal-to-noise ratio and the structural similarity, and iteratively updating the parameter values of the UNet deep learning network model by backpropagation according to the correlation result.
CN202210556266.9A 2022-05-20 2022-05-20 Hyperspectral image virtual staining method based on deep learning Pending CN114926562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210556266.9A CN114926562A (en) 2022-05-20 2022-05-20 Hyperspectral image virtual staining method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210556266.9A CN114926562A (en) 2022-05-20 2022-05-20 Hyperspectral image virtual staining method based on deep learning

Publications (1)

Publication Number Publication Date
CN114926562A true CN114926562A (en) 2022-08-19

Family

ID=82809915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210556266.9A Pending CN114926562A (en) 2022-05-20 2022-05-20 Hyperspectral image virtual staining method based on deep learning

Country Status (1)

Country Link
CN (1) CN114926562A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012838A (en) * 2022-12-30 2023-04-25 创芯国际生物科技(广州)有限公司 Artificial intelligence-based organoid activity recognition method and system
CN116012838B (en) * 2022-12-30 2023-11-07 创芯国际生物科技(广州)有限公司 Artificial intelligence-based organoid activity recognition method and system

Similar Documents

Publication Publication Date Title
Wang et al. Identification of melanoma from hyperspectral pathology image using 3D convolutional networks
US11798300B2 (en) Method of characterizing and imaging microscopic objects
Vahadane et al. Structure-preserved color normalization for histological images
Liu et al. Deep learning‐based color holographic microscopy
US20220206434A1 (en) System and method for deep learning-based color holographic microscopy
Pande et al. Automated analysis of fluorescence lifetime imaging microscopy (FLIM) data based on the Laguerre deconvolution method
Pradhan et al. Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning
JP6590928B2 (en) Image processing apparatus, imaging system, image processing method, and image processing program
Hidalgo-Gavira et al. Variational Bayesian blind color deconvolution of histopathological images
CN114926562A (en) Hyperspectral image virtual staining method based on deep learning
WO2021198247A1 (en) Optimal co-design of hardware and software for virtual staining of unlabeled tissue
US20180146847A1 (en) Image processing device, imaging system, image processing method, and computer-readable recording medium
Pardo et al. Context-free hyperspectral image enhancement for wide-field optical biomarker visualization
Bozkurt et al. Skin strata delineation in reflectance confocal microscopy images using recurrent convolutional networks with attention
Septiana et al. Elastic and collagen fibers discriminant analysis using H&E stained hyperspectral images
Pan et al. Image restoration and color fusion of digital microscopes
Liu et al. Using hyperspectral imaging automatic classification of gastric cancer grading with a shallow residual network
Goutham et al. Brain tumor classification using Efficientnet-B0 model
Falahkheirkhah et al. A deep learning framework for morphologic detail beyond the diffraction limit in infrared spectroscopic imaging
Ma et al. Light-field tomographic fluorescence lifetime imaging microscopy
Yoshida et al. Noise reduction from chromophore images and reliability improvement by successive minimization of intermixture in the modified Lambert-Beer law
Greenfield et al. Convolutional neural networks in advanced biomedical imaging applications
Liu et al. Colorimetrical evaluation of color normalization methods for H&E-stained images
Mannam et al. Improving fluorescence lifetime imaging microscopy phasor accuracy using convolutional neural networks
Quintana-Quintana et al. Blur-specific image quality assessment of microscopic hyperspectral images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination