WO2022121160A1 - A deep-learning-based method for enhancing CT image quality and resolution - Google Patents

A deep-learning-based method for enhancing CT image quality and resolution

Info

Publication number
WO2022121160A1
Authority
WO
WIPO (PCT)
Prior art keywords
quality
image
resolution
deep learning
network
Prior art date
Application number
PCT/CN2021/083307
Other languages
English (en)
French (fr)
Inventor
龚南杰
王嘉宸
项磊
Original Assignee
苏州深透智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州深透智能科技有限公司
Priority to US18/255,608 (US20240037732A1)
Publication of WO2022121160A1

Classifications

    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T7/0012 Biomedical image inspection
    • G06T3/4046 Scaling of whole images or parts thereof using neural networks
    • G06T3/4053 Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/60 Rotation of whole images or parts thereof
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G16H30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H30/40 ICT specially adapted for processing medical images, e.g. editing
    • G16H50/20 ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/70 ICT for mining of medical data, e.g. analysing previous cases of other patients
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20132 Image cropping
    • G06T2211/424 Computed tomography; iterative reconstruction

Definitions

  • The invention relates to the technical field of image processing, in particular to a method for enhancing CT image quality and resolution based on deep learning.
  • Computed tomography (CT) is one of the most important imaging and diagnostic modalities in modern hospitals and clinics.
  • The X-rays used during CT scans may cause genetic damage and induce cancer with a probability related to the radiation dose. Therefore, to improve the quality and resolution of CT images while avoiding or reducing health risks to patients during scanning, low-noise, high-resolution, high-quality images must be reconstructed from clinical CT data that is noisy and of low resolution.
  • Methods for CT super-resolution generally fall into two categories: (A) methods based on model reconstruction, and (B) methods based on learning.
  • The first category explicitly models and regularizes the image degradation process and reconstructs the data according to the characteristics of the projections; its effectiveness depends on the accuracy of the assumed model.
  • Learning-based methods likewise suffer from loss of image detail and block artifacts.
  • Deep learning methods often use simulated datasets for training and evaluation, which frequently fail to reflect performance on real clinical data. In particular, for super-resolution tasks the super-resolution factor of clinical data is not fixed, unlike the fixed factors in deep learning datasets. It follows that current image enhancement (denoising) and super-resolution tasks for CT images cannot yet produce images of high quality with realistic detail.
  • The purpose of the present invention is to solve the prior-art problems of image denoising and of image detail being lost after super-resolution processing, by proposing a method for enhancing CT image quality and resolution based on deep learning.
  • The method for enhancing the quality and resolution of CT images based on deep learning proposed by the present invention includes the following steps: S1, preprocess the collected clinical data to obtain a dataset; S2, construct a deep learning model comprising a generator network, a discriminator network, and a perceptual network; S3, construct a loss function; S4, use the dataset and the loss function to iteratively update the parameters of the generator network to obtain a trained deep learning model; S5, input low-quality, low-resolution images into the trained deep learning model to obtain high-quality, high-resolution images.
  • The preprocessing of the clinical data in step S1 includes the following steps: S11, obtain low-quality CT images at low radiation dose and low resolution and high-quality CT images at normal radiation dose and high resolution; S12, crop the low-quality CT images according to the metadata of the medical images so that the cropped low-quality CT images correspond to the physical-space information of the high-quality CT images, yielding data pairs with the same physical-space information; S13, crop the data pairs into small-block data pairs and apply a threshold test, retaining the small-block pairs that satisfy the threshold condition; S14, apply pixel-value clipping and normalization to the retained small-block pairs; S15, apply data augmentation to the small-block pairs processed in step S14 to obtain the dataset for training the deep learning model.
  • The method of cropping the data pairs into small-block pairs in step S13 is as follows: the high-quality CT image in a data pair is cropped every fixed number of pixels/slices, and the corresponding number of pixels/slices of the low-quality CT image is scaled so as to correspond to the same physical space as the high-quality CT image.
  • The threshold condition in step S13 is that the similarity index between the high-quality CT image and the scaled low-quality CT image in a small-block pair is above a threshold.
  • The data augmentation method in step S15 is flipping and rotating the images.
  • The loss function is a combined loss function of mean absolute error loss, perceptual loss, and generative adversarial loss.
  • The perceptual loss is computed by feeding the generator output and the real high-quality CT image separately into the perceptual network and taking the MSE between the two outputs.
  • The generative adversarial loss is one of, but not limited to, GAN loss, WGAN loss, WGAN-GP loss, or rGAN loss.
  • The generator network includes a feature extraction module and an upsampling module. The feature extraction module consists of a convolutional layer, cascaded convolution blocks, and a further convolutional layer, finally producing a low-resolution feature map from the low-quality CT image; each block in the cascade includes at least two convolutional layers (of 3*3*64 or another specification) with a ReLU layer in between. The upsampling module includes a fully connected network and a convolutional layer: the position information of each pixel of the desired high-quality CT image is fed into the fully connected network, and the output is applied to the low-resolution feature map to obtain a high-quality, high-resolution image.
  • The Adam optimizer may be used to optimize the generator network and the discriminator network, although the choice of optimizer is not limited to Adam.
  • The beneficial effects of the present invention include: the invention builds a deep learning model and preprocesses clinical data to obtain a dataset, which reduces the impact of spatial misalignment between data collected at different times due to patient displacement or other causes; combined with the loss function, the deep learning model performs the two tasks of CT image quality improvement and super-resolution end to end, directly producing the final result. A further beneficial effect is that the upsampling module in the generator network enables upsampling at arbitrary scales.
  • FIG. 1 is a flow chart of the main steps of the deep-learning-based method for enhancing CT image quality and resolution of the present invention.
  • FIG. 2 is a flow chart of preprocessing clinical data in an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a generation network in a deep learning model according to an embodiment of the present invention.
  • This embodiment proposes a method for enhancing the quality and resolution of CT images based on deep learning, which mainly includes the following steps:
  • The preprocessing of the clinical data in step S1 includes the following contents:
  • Low-radiation-dose, low-resolution global CT images (abbreviated as low-quality CT images) and normal-radiation-dose, high-resolution CT images (abbreviated as high-quality CT images) are acquired in quick succession, i.e., by two fast scans with different scanning parameters.
  • The high-quality CT image need not be a global image; it can be a local image whose physical content is contained within the low-quality CT image (e.g., a low-quality whole-lung CT image and a high-quality local lung CT image). To ensure a clearly visible resolution difference, the resolution ratio can be required to exceed three.
  • Crop the low-quality CT image according to the metadata of the medical CT image (usually of DICOM type) containing spatial physical-quantity information, so that it corresponds to the physical-space information of the (local) high-quality CT image, yielding data pairs with the same physical-space information.
  • The metadata includes indicators such as PixelSpacing (pixel spacing), SpacingBetweenSlices (slice spacing), ImagePositionPatients (patient coordinate position), and ImageOrientationPatients (patient coordinate orientation).
  • A threshold test is applied to the cropped small-block data pairs (patches); the condition is: if the similarity index (including but not limited to PSNR and SSIM) between the high-quality patch and the scaled low-quality patch in a pair is above a certain value (the threshold), the pair is retained; otherwise it is discarded.
  • The threshold is determined specifically according to the super-resolution factor and the difference in radiation dose.
  • Pixel-value clipping and normalization are performed on the retained small-block pairs.
  • Clipping avoids an overly sparse pixel distribution within a patch, and normalization facilitates the subsequent neural network training (for example, if the pixel values represent CT values and the image is a lung CT image, the upper threshold can be set to 1500; normalization then linearly maps pixel values in [-1024, 1500] to [-1, 1]).
  • Step S15: data augmentation is applied to the small-block pairs processed in step S14 to obtain the dataset for training the deep learning model.
  • Data augmentation can be performed by image flipping and rotation, among other means.
  • The input of the deep learning model is batch data composed of low-quality patches, and the expected output should match, as closely as possible, the batch data composed of the corresponding high-quality patches.
  • After training, the obtained deep learning model can turn input low-quality, low-resolution CT images into high-quality, high-resolution CT images.
  • The deep learning model includes a generator network, a discriminator network, and a perceptual network.
  • The generator network includes the following:
  • Feature extraction module: preliminary feature extraction is performed by a convolutional layer (in this embodiment, 64 convolution kernels of size 3*3 with stride 1 produce 64 feature maps); the main computing unit is then formed by cascaded basic convolution blocks and a final convolutional layer (in this embodiment, 16 basic blocks are cascaded, each comprising two 3*3*64 convolutional layers with a ReLU layer in between), finally producing a low-resolution feature map from the low-quality CT image.
  • The output of the main computing unit is added to the result of the preliminary feature extraction to form a residual structure.
  • The above constitutes the feature extraction module of the generator network.
  • Upsampling module: to achieve super-resolution at any size, the upsampling module learns the number of convolution kernels and the weight parameters for upsampling at different factors. It consists of a fully connected network (which can be composed of a 256-node fully connected layer + ReLU + a 256-node fully connected layer) and a corresponding convolutional layer. The input of the fully connected network is the pixel position information of the high-resolution image (i.e., for each pixel coordinate of the high-resolution image, the relative offset between the rounded value and the actual value of the corresponding low-resolution pixel coordinate) together with the scale factor; the output is a set of filter kernels equal in number to the pixels of the high-resolution image.
  • The implementation of the upsampling module is: for each pixel of the high-resolution image, its position information is fed into the above fully connected network to obtain a filter kernel; each output filter kernel is then applied to the corresponding position of the low-resolution feature map (the output of the computing unit), the corresponding position being the mapping of the high-resolution pixel position onto the low-resolution feature map, which yields the corresponding pixel value of the high-resolution image. Traversing all pixel positions of the high-resolution image produces the high-resolution image.
  • The discriminator network forms a GAN structure with the generator network to improve training quality, so that the generator's output has richer and more realistic detail.
  • The discriminator network can use a variety of binary classification network structures. The following structure can be used in experiments: a basic unit composed of a convolutional layer, a batch normalization layer, and a ReLU layer, with seven basic units cascaded to form the feature extraction part; in every other basic unit, the convolution stride is set to 2 and the number of convolution kernels is doubled.
  • After the feature extraction part comes a classification module, which obtains a single numerical result through a 1024-node fully connected layer + ReLU + a fully connected layer; the result represents the probability that the input image is a high-quality, high-resolution image.
  • The perceptual network can use the VGG16 or VGG19 network.
  • In step S4 the deep learning model is trained; the loss function used during training is a combined loss function of mean absolute error loss, perceptual loss, and generative adversarial loss.
  • The perceptual loss is obtained by feeding the generator output and the real high-quality CT image separately into the perceptual network and computing the MSE loss between the two outputs.
  • The generative adversarial loss is one of, but not limited to, GAN loss, WGAN loss, WGAN-GP loss, or rGAN loss.
  • The Adam optimizer may be used to optimize the generator network and the discriminator network, but the choice is not limited to Adam.
  • The discriminator loss is computed from the chosen GAN loss (which may be the basic GAN loss, the WGAN-GP loss, the rGAN loss, etc.).
  • With the parameters of the generator network and the perceptual network fixed, the discriminator network is optimized with the discriminator loss using the Adam optimizer.
  • In another embodiment, the collected clinical data is preprocessed into the dataset by resizing the low-quality CT image to the same size as the high-quality CT image with three-dimensional interpolation and then cropping into small-block data pairs.
  • The generator network can adopt, but is not limited to, the U-Net structure, and the discriminator can be, but is not limited to, a PatchGAN.
  • Using a PatchGAN takes into account the influence of different parts of the image and solves the imprecision of the output image caused by having only a single output for each input.
  • The present invention obtains a realistic training dataset by preprocessing real clinical data, so that the deep learning model can be applied in clinical practice.
  • By using the generative adversarial network framework and combining perceptual loss with pixel-level loss, the deep learning model of the present invention can optimize low-radiation, low-resolution medical images of no clinical value into high-quality, high-resolution medical images; the tasks of low-radiation CT image denoising and low-resolution CT image super-resolution are accomplished simultaneously, so that the generated high-quality images have realistic detail.
  • By changing the dataset, the deep learning model proposed in the present invention can also be used for other image enhancement tasks, such as denoising or super-resolution of natural images.
  • By introducing an upsampling module for an arbitrary scale factor, super-resolution at an arbitrary scale factor is achieved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method for enhancing CT image quality and resolution, comprising the following steps: S1, preprocess the collected clinical data to obtain a dataset; S2, construct a deep learning model comprising a generator network, a discriminator network, and a perceptual network; S3, construct a loss function; S4, use the dataset and the loss function to iteratively update the parameters of the generator network, obtaining a trained deep learning model; S5, input low-quality, low-resolution images into the trained deep learning model to obtain high-quality, high-resolution images. The invention builds a deep learning model based on deep learning and preprocesses clinical data into a dataset, which reduces the impact of spatial misalignment between data collected at different times due to patient displacement or other causes; combined with the loss function, the deep learning model performs the two tasks of CT image quality improvement and super-resolution end to end, directly producing the final result.

Description

A deep-learning-based method for enhancing CT image quality and resolution

Technical Field
The present invention relates to the technical field of image processing, and in particular to a deep-learning-based method for enhancing CT image quality and resolution.
Background Art
Computed tomography (CT) is one of the most important imaging and diagnostic modalities in modern hospitals and clinics. Obtaining high-quality, high-resolution CT images directly during scanning requires more expensive scanning equipment and a higher radiation dose. However, according to related research, the X-rays used in CT scanning can cause genetic damage and induce cancer with a probability related to the radiation dose. Therefore, to improve CT image quality and resolution while avoiding or reducing health risks to patients during scanning, low-noise, high-resolution, high-quality images must be reconstructed from clinical CT data that is noisy and of low resolution.
Methods for CT denoising and image enhancement generally fall into three categories: (A) sinogram filtering before reconstruction, (B) iterative reconstruction, and (C) image post-processing after reconstruction. However, the sinogram data required by category (A) is rarely provided to users directly, and such methods may suffer from resolution loss and edge blurring. Category (B) methods greatly improve image quality, but they incur a high computational cost, and their results may still lose some detail and be affected by residual artifacts. Before deep-learning-based post-processing, many post-processing methods were proposed, such as the NLM and K-SVD methods for CT denoising, as well as the BM3D algorithm; because CT noise is not uniformly distributed, however, they all exhibit over-smoothing. Recently, deep convolutional networks have shown promising results for CT denoising, but because they are trained end to end with only a pixel-level MSE loss, their results inevitably ignore subtle image textures that are crucial to human perception, leading to over-smoothed edges and loss of detail.
CT super-resolution methods generally fall into two categories: (A) methods based on model reconstruction and (B) methods based on learning. The first category explicitly models and regularizes the image degradation process and reconstructs the data according to the characteristics of the projections; its effectiveness depends on the accuracy of the assumed model. Learning-based methods likewise face the problems of losing image detail and producing block artifacts.
In the CT enhancement and super-resolution tasks described above, deep learning methods are usually trained and evaluated on simulated datasets, which often fail to reflect performance on real clinical data. This is especially true for super-resolution, where the super-resolution factor of clinical data is not fixed, unlike the fixed factors of deep learning datasets. Thus, current CT denoising and super-resolution tasks cannot yet produce images with high quality and realistic detail.
Summary of the Invention
The purpose of the present invention is to solve the prior-art problems of image denoising and of image detail being lost after super-resolution processing, by proposing a deep-learning-based method for enhancing CT image quality and resolution.
The deep-learning-based method for enhancing CT image quality and resolution proposed by the present invention comprises the following steps: S1, preprocessing the collected clinical data to obtain a dataset; S2, constructing a deep learning model comprising a generator network, a discriminator network, and a perceptual network; S3, constructing a loss function; S4, using the dataset and the loss function to iteratively update the parameters of the generator network, obtaining a trained deep learning model; S5, inputting low-quality, low-resolution images into the trained deep learning model to obtain high-quality, high-resolution images.
Preferably, the preprocessing of the clinical data in step S1 comprises the following steps: S11, obtaining low-quality CT images at low radiation dose and low resolution and high-quality CT images at normal radiation dose and high resolution; S12, cropping the low-quality CT images according to the metadata of the medical images so that the cropped low-quality CT images correspond to the physical-space information of the high-quality CT images, obtaining data pairs with the same physical-space information; S13, cropping the data pairs into small-block data pairs and applying a threshold test, retaining the small-block pairs that satisfy the threshold condition; S14, applying pixel-value clipping and normalization to the retained small-block pairs; S15, applying data augmentation to the small-block pairs processed in step S14 to obtain the dataset for training the deep learning model.
Preferably, the method of cropping the data pairs into small-block pairs in step S13 is as follows: the high-quality CT image in a data pair is cropped every fixed number of pixels/slices, and the corresponding number of pixels/slices of the low-quality CT image is scaled so as to correspond to the physical space of the high-quality CT image.
Preferably, the threshold condition in step S13 is that the similarity index between the high-quality CT image and the scaled low-quality CT image in a small-block pair is above a threshold.
Preferably, the data augmentation in step S15 is performed by flipping and rotating the images.
Preferably, the loss function is a combined loss function of mean absolute error loss, perceptual loss, and generative adversarial loss.
Preferably, the perceptual loss is computed by feeding the generator output and the real high-quality CT image separately into the perceptual network and taking the MSE between the two outputs.
Preferably, the generative adversarial loss is one of, but not limited to, GAN loss, WGAN loss, WGAN-GP loss, or rGAN loss.
Preferably, the generator network comprises a feature extraction module and an upsampling module. The feature extraction module consists of a convolutional layer, cascaded convolution blocks, and a further convolutional layer, finally producing a low-resolution feature map from the low-quality CT image; each block in the cascade comprises at least two convolutional layers (of 3*3*64 or another specification) with a ReLU layer in between. The upsampling module comprises a fully connected network and a convolutional layer: the position information of each pixel of the desired high-quality CT image is fed into the fully connected network, and the output is applied to the low-resolution feature map to obtain a high-quality, high-resolution image.
Preferably, the Adam optimizer may be used to optimize the generator network and the discriminator network, although the choice of optimizer is not limited to Adam.
The beneficial effects of the present invention include: the invention builds a deep learning model based on deep learning and preprocesses clinical data into a dataset, which reduces the impact of spatial misalignment between data collected at different times due to patient displacement or other causes; combined with the loss function, the deep learning model performs the two tasks of CT image quality improvement and super-resolution end to end, directly producing the final result. A further beneficial effect is that the upsampling module in the generator network enables upsampling at arbitrary scales.
Brief Description of the Drawings
Fig. 1 is a flowchart of the main steps of the deep-learning-based method for enhancing CT image quality and resolution of the present invention.
Fig. 2 is a flowchart of the preprocessing of clinical data in an embodiment of the present invention.
Fig. 3 is a schematic diagram of the structure of the generator network in the deep learning model according to an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its applications.
Non-limiting and non-exclusive embodiments will be described with reference to the following drawings, in which identical reference numerals denote identical parts unless otherwise specified.
Embodiment 1
As shown in Fig. 1, this embodiment proposes a deep-learning-based method for enhancing CT image quality and resolution, which mainly comprises the following steps:
S1. Preprocess the collected clinical data to obtain a dataset.
S2. Construct a deep learning model comprising a generator network, a discriminator network, and a perceptual network.
S3. Construct a loss function.
S4. Use the dataset and the loss function to iteratively update the parameters of the generator network, obtaining a trained deep learning model.
S5. Input low-quality, low-resolution images into the trained deep learning model to obtain high-quality, high-resolution images.
Specifically, the preprocessing of the clinical data in step S1 comprises the following:
S11. Obtain low-quality CT images at low radiation dose and low resolution and high-quality CT images at normal radiation dose and high resolution.
Low-radiation-dose, low-resolution global CT images (abbreviated as low-quality CT images) and normal-radiation-dose, high-resolution CT images (abbreviated as high-quality CT images) are acquired in quick succession, i.e., two fast scans are performed with different scanning parameters. To reduce the time the patient is exposed to radiation, the high-quality CT image need not be a global image; it can be a local image whose physical content is contained within the low-quality CT image (for example, a low-quality whole-lung CT image and a high-quality local lung CT image). To ensure a clearly visible resolution difference, the resolution ratio can be required to exceed three.
S12. Crop the low-quality CT images according to the metadata of the medical images so that the cropped low-quality CT images correspond to the physical-space information of the high-quality CT images, obtaining data pairs in the same physical space.
The low-quality CT image is cropped according to the metadata of the medical CT image (usually of DICOM type) containing spatial physical-quantity information, so that it corresponds to the physical-space information of the (local) high-quality CT image, yielding data pairs with the same physical-space information. The metadata includes indicators such as PixelSpacing (pixel spacing), SpacingBetweenSlices (slice spacing), ImagePositionPatients (patient coordinate position), and ImageOrientationPatients (patient coordinate orientation).
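As a rough illustration, the following Python sketch crops a low-quality volume to the physical extent of a high-quality volume using DICOM metadata read with pydicom. It assumes axis-aligned acquisitions (so the orientation attribute can be ignored) and single-series volumes already stacked into numpy arrays; the standard DICOM keywords are ImagePositionPatient/ImageOrientationPatient, which the spellings above appear to refer to, and the function and argument names are hypothetical.

```python
import numpy as np
import pydicom  # e.g. low_ds = pydicom.dcmread("low/slice0.dcm")

def crop_to_physical_extent(low_ds, low_vol, high_ds, high_shape):
    """Crop the low-quality volume (z, y, x) so it spans the same physical
    region as the high-quality volume, using DICOM spatial metadata.
    Assumes both series are axis-aligned (identity orientation)."""
    # Physical origins (x, y, z, in mm) of the first voxel of each series.
    lo_org = np.array(low_ds.ImagePositionPatient, dtype=float)
    hi_org = np.array(high_ds.ImagePositionPatient, dtype=float)
    # Voxel spacing in (z, y, x) order; PixelSpacing is (row, col) in mm.
    lo_sp = np.array([float(low_ds.SpacingBetweenSlices),
                      *map(float, low_ds.PixelSpacing)])
    hi_sp = np.array([float(high_ds.SpacingBetweenSlices),
                      *map(float, high_ds.PixelSpacing)])
    # Index of the high-quality origin inside the low-quality grid
    # ((x, y, z) offset reversed into (z, y, x) index order).
    start = np.round((hi_org - lo_org)[::-1] / lo_sp).astype(int)
    # Number of low-quality voxels covering the high-quality extent.
    count = np.round(np.array(high_shape) * hi_sp / lo_sp).astype(int)
    z0, y0, x0 = start
    dz, dy, dx = count
    return low_vol[z0:z0 + dz, y0:y0 + dy, x0:x0 + dx]
```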
S13. Crop the data pairs into small-block data pairs and apply a threshold test, retaining the small-block pairs that satisfy the threshold condition.
Each pair of data with the same physical-space information is traversed synchronously, and each pair is cropped into small-block data pairs (for the high-resolution data, a crop can be taken every fixed number of pixels/slices, e.g., a 96*96*3 block every 48 pixels/2 slices; for the low-resolution data, the corresponding number of pixels or slices is scaled to match the physical space of the high-resolution data). A threshold test is applied to the cropped small-block pairs (patches): if the similarity index (including but not limited to PSNR and SSIM) between the high-quality patch and the scaled low-quality patch is above a certain value (the threshold), the pair is retained; otherwise it is discarded. The threshold is determined specifically according to the super-resolution factor and the difference in radiation dose.
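A minimal sketch of this patch extraction and similarity filtering, using scikit-image's PSNR/SSIM implementations, might look as follows. The in-plane-only scaling, the SSIM-on-central-slice shortcut (SSIM's default window is larger than a 3-slice stack), and the threshold values are assumptions for illustration:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.transform import resize

def make_patch_pairs(hi_vol, lo_vol, patch=(3, 96, 96), stride=(2, 48, 48),
                     scale=3, psnr_thr=20.0, ssim_thr=0.5):
    """Crop aligned (low, high) patch pairs; keep pairs whose similarity
    clears the thresholds. Assumes both volumes span the same physical
    space, with lo_vol coarser in-plane by `scale` and equal slice count."""
    pairs = []
    dz, dy, dx = patch
    for z in range(0, hi_vol.shape[0] - dz + 1, stride[0]):
        for y in range(0, hi_vol.shape[1] - dy + 1, stride[1]):
            for x in range(0, hi_vol.shape[2] - dx + 1, stride[2]):
                hi = hi_vol[z:z + dz, y:y + dy, x:x + dx].astype(np.float32)
                lo = lo_vol[z:z + dz,
                            y // scale:(y + dy) // scale,
                            x // scale:(x + dx) // scale].astype(np.float32)
                # Scale the low patch up to HR size before comparing.
                lo_up = resize(lo, hi.shape, order=1,
                               preserve_range=True).astype(np.float32)
                rng = float(hi.max() - hi.min()) or 1.0
                psnr = peak_signal_noise_ratio(hi, lo_up, data_range=rng)
                ssim = structural_similarity(hi[dz // 2], lo_up[dz // 2],
                                             data_range=rng)
                if psnr > psnr_thr and ssim > ssim_thr:
                    pairs.append((lo, hi))
    return pairs
```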
S14. Apply pixel-value clipping and normalization to the retained small-block pairs.
Pixel-value clipping and normalization are performed on the retained small-block pairs. Clipping avoids an overly sparse pixel distribution within a patch, and normalization facilitates the subsequent neural network training (for example, if the pixel values represent CT values and the image is a lung CT image, the threshold can be set to 1500; normalization then linearly maps pixel values in [-1024, 1500] to [-1, 1]).
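The clipping and normalization step amounts to a two-line numpy transform; a sketch with the lung-CT bounds quoted above:

```python
import numpy as np

def clip_and_normalize(patch, lo=-1024.0, hi=1500.0):
    """Clip CT values to [lo, hi] and map them linearly onto [-1, 1]."""
    patch = np.clip(patch, lo, hi)
    return 2.0 * (patch - lo) / (hi - lo) - 1.0
```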
S15. Apply data augmentation to the small-block pairs processed in step S14 to obtain the dataset for training the deep learning model. Data augmentation can be performed by image flipping and rotation, among other means.
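A sketch of flip/rotation augmentation applied identically to both members of a patch pair; rotating by multiples of 90° in the axial plane is an assumption, since the text only names flipping and rotation:

```python
import numpy as np

def augment(lo, hi):
    """Yield the 8 flip/rotation variants of an aligned (low, high) pair."""
    for flipped in (False, True):
        a, b = (np.flip(lo, axis=-1), np.flip(hi, axis=-1)) if flipped else (lo, hi)
        for k in range(4):  # 0, 90, 180, 270 degrees in the (y, x) plane
            yield (np.rot90(a, k, axes=(-2, -1)).copy(),
                   np.rot90(b, k, axes=(-2, -1)).copy())
```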
The idea of the present invention is that, when training the deep learning model, the input of the model is batch data composed of low-quality patches, and the expected output should match, as closely as possible, the batch data composed of the corresponding high-quality patches. After training, the resulting deep learning model can turn input low-quality, low-resolution CT images into high-quality, high-resolution CT images.
In this embodiment, the deep learning model comprises a generator network, a discriminator network, and a perceptual network. The generator network includes the following:
(a) Feature extraction module: preliminary feature extraction is performed by a convolutional layer (in this embodiment, 64 convolution kernels of size 3*3 with stride 1 produce 64 feature maps); the main computing unit is then formed by cascaded basic convolution blocks and a final convolutional layer (in this embodiment, 16 basic convolution blocks are cascaded, each comprising two 3*3*64 convolutional layers with a ReLU layer in between), finally producing a low-resolution feature map from the low-quality CT image. The output of the main computing unit is added to the result of the preliminary feature extraction to form a residual structure. The above constitutes the feature extraction module of the generator network.
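A PyTorch sketch of this feature extraction module, treating patches as single-channel 2-D images (an assumption; the class and variable names are illustrative):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """One cascaded basic block: two 3*3*64 convolutions with a ReLU between."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class FeatureExtractor(nn.Module):
    """64-kernel 3*3 stride-1 head, 16 cascaded basic blocks plus a final
    conv layer (the main computing unit), and the global residual add."""
    def __init__(self, in_ch=1, ch=64, n_blocks=16):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.unit = nn.Sequential(*[BasicBlock(ch) for _ in range(n_blocks)],
                                  nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        shallow = self.head(x)               # preliminary feature extraction
        return shallow + self.unit(shallow)  # residual structure
```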
(b) Upsampling module: to achieve super-resolution at any size, the upsampling module can learn the number of convolution kernels and the weight parameters for upsampling at different factors. It consists of a fully connected network (which can be composed of a 256-node fully connected layer + ReLU + a 256-node fully connected layer) and a corresponding convolutional layer. The input of the fully connected network is the pixel position information of the high-resolution image (i.e., for each pixel coordinate of the high-resolution image, the relative offset between the rounded value and the actual value of the corresponding low-resolution pixel coordinate) together with the scale factor; the output is a set of filter kernels equal in number to the pixels of the high-resolution image. The implementation of the upsampling module is as follows: for each pixel of the high-resolution image, its position information is fed into the above fully connected network to obtain a filter kernel; each output filter kernel is then applied to the corresponding position of the low-resolution feature map (the output of the computing unit), the corresponding position being the mapping of the high-resolution pixel position onto the low-resolution feature map, which yields the corresponding pixel value of the high-resolution image. Traversing all pixel positions of the high-resolution image produces the high-resolution image.
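This module is close in spirit to meta-upscaling schemes; below is a PyTorch sketch that predicts one k*k filter kernel per high-resolution pixel from its fractional offset and the scale factor, and applies it at the mapped position of the feature map. Feeding 1/scale rather than scale to the FC network, folding the "corresponding convolutional layer" into the kernel application, and the single-channel output are assumptions; the names are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaUpsample(nn.Module):
    """Predict a per-pixel filter kernel from (offset_y, offset_x, 1/scale)
    with a 256 + ReLU + 256 FC trunk, then apply it to the k*k
    neighbourhood of the mapped low-resolution position."""
    def __init__(self, in_ch=64, k=3):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, in_ch * k * k),  # one kernel per HR pixel
        )

    def forward(self, feat, scale):
        b, c, h, w = feat.shape
        H, W = int(h * scale), int(w * scale)
        ys = torch.arange(H, device=feat.device, dtype=feat.dtype)
        xs = torch.arange(W, device=feat.device, dtype=feat.dtype)
        # Relative offset between each HR coordinate's true LR position
        # and its rounded-down integer position, plus the scale factor.
        oy = ys / scale - (ys / scale).floor()
        ox = xs / scale - (xs / scale).floor()
        pos = torch.stack([oy[:, None].expand(H, W),
                           ox[None, :].expand(H, W),
                           torch.full((H, W), 1.0 / scale,
                                      device=feat.device, dtype=feat.dtype)],
                          dim=-1)                             # (H, W, 3)
        kernels = self.mlp(pos.view(-1, 3))                   # (H*W, c*k*k)
        # k*k neighbourhood of every LR pixel, gathered at mapped positions.
        patches = F.unfold(feat, self.k, padding=self.k // 2)  # (b, c*k*k, h*w)
        iy = (ys / scale).long().clamp(max=h - 1)
        ix = (xs / scale).long().clamp(max=w - 1)
        idx = (iy[:, None] * w + ix[None, :]).reshape(-1)      # (H*W,)
        patches = patches[:, :, idx]                           # (b, c*k*k, H*W)
        out = (patches * kernels.t().unsqueeze(0)).sum(dim=1)  # (b, H*W)
        return out.view(b, 1, H, W)
```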
The discriminator network forms a GAN structure with the generator network to improve training quality, so that the generator's output has richer and more realistic detail. The discriminator network can use a variety of binary classification network structures; the following structure can be used in experiments: a basic unit is composed of a convolutional layer, a batch normalization layer, and a ReLU layer, and seven basic units are cascaded to form the feature extraction part, where in every other basic unit the convolution stride is set to 2 and the number of convolution kernels is doubled. After the feature extraction part comes a classification module, which obtains a single numerical result through a 1024-node fully connected layer + ReLU + a fully connected layer; this result represents the probability that the input image is a high-quality, high-resolution image. The perceptual network can use the VGG16 or VGG19 network.
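A sketch of such a discriminator for the 96*96 patches used in this embodiment. The exact channel schedule and the placement of the stride-2/doubled-kernel units are assumptions consistent with the description, and the final layer returns a logit that a sigmoid inside the loss turns into the stated probability:

```python
import torch.nn as nn

def d_unit(in_ch, out_ch, stride):
    # Basic unit: convolution + batch normalization + ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class Discriminator(nn.Module):
    """Seven cascaded basic units; every other unit uses stride 2 and
    doubles the kernel count, then FC(1024) + ReLU + FC(1)."""
    def __init__(self, in_ch=1):
        super().__init__()
        chans = [64, 128, 128, 256, 256, 512, 512]
        units, prev = [], in_ch
        for i, ch in enumerate(chans):
            units.append(d_unit(prev, ch, stride=2 if i % 2 == 1 else 1))
            prev = ch
        self.features = nn.Sequential(*units)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 12 * 12, 1024),  # 96 -> 12 after 3 stride-2 units
            nn.ReLU(inplace=True),
            nn.Linear(1024, 1),
        )

    def forward(self, x):  # x: (B, 1, 96, 96)
        return self.classifier(self.features(x))
```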
Specifically, the deep learning model is trained in step S4; the loss function used during training is a combined loss function of mean absolute error loss, perceptual loss, and generative adversarial loss. The perceptual loss is obtained by feeding the generator output and the real high-quality CT image separately into the perceptual network and computing the MSE loss between the two outputs. The generative adversarial loss is one of, but not limited to, GAN loss, WGAN loss, WGAN-GP loss, or rGAN loss. The Adam optimizer may be used to optimize the generator network and the discriminator network, although the choice is not limited to Adam.
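A sketch of the combined objective with a VGG19-based perceptual term (one of the VGG16/VGG19 options named above) and a vanilla non-saturating GAN generator term; the loss weights lambda_p and lambda_g are illustrative, since the text does not fix them:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class CombinedLoss(nn.Module):
    """Mean absolute error + perceptual loss (MSE between perceptual-network
    features) + generative adversarial loss."""
    def __init__(self, lambda_p=0.1, lambda_g=1e-3):
        super().__init__()
        # Perceptual network: VGG19 features (pass pretrained ImageNet
        # weights in practice, e.g. weights="IMAGENET1K_V1").
        self.vgg = vgg19(weights=None).features[:36].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.l1, self.mse = nn.L1Loss(), nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()
        self.lambda_p, self.lambda_g = lambda_p, lambda_g

    def forward(self, fake, real, d_fake_logits):
        # VGG expects 3 channels; repeat the single CT channel.
        pf = self.vgg(fake.repeat(1, 3, 1, 1))
        pr = self.vgg(real.repeat(1, 3, 1, 1))
        perceptual = self.mse(pf, pr)
        adversarial = self.bce(d_fake_logits, torch.ones_like(d_fake_logits))
        return (self.l1(fake, real)
                + self.lambda_p * perceptual
                + self.lambda_g * adversarial)
```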
In more detail, the training process of the deep learning model is as follows:
(1) Feed the low-quality patches of a batch of the dataset into the generator network, where one batch is set to 16 small-block data pairs.
(2) Compare the super-resolution result of (1) with the high-quality patches of the same batch and compute the mean absolute error loss.
(3) Feed the super-resolution result of (1) and the high-quality patches of the same batch into the perceptual network, obtain the corresponding output feature maps, and compute the mean absolute error loss over the feature maps to obtain the perceptual loss.
(4) Feed the super-resolution result of (1) and the high-quality patches of the same batch into the discriminator network and obtain the corresponding output values (the output value represents the probability that the discriminator network judges the input to be a high-quality image); compute the generator part of the corresponding GAN loss (which can be the basic GAN loss, the WGAN-GP loss, the rGAN loss, etc.) to obtain the generative adversarial loss.
(5) Fix the parameters of the discriminator network and the perceptual network, and update the parameters of the generator network with the Adam optimizer according to the mean absolute error loss, the perceptual loss, and the generator part of the GAN loss (i.e., the generative adversarial loss);
(6) Feed the super-resolution result of (1) and the high-quality patches of the same batch into the discriminator network, obtain the corresponding output values, and compute the discriminator part of the corresponding GAN loss (which can be the basic GAN loss, the WGAN-GP loss, the rGAN loss, etc.). Fix the parameters of the generator network and the perceptual network, and optimize the discriminator network with the Adam optimizer using the discriminator loss.
(7) Repeat (1)-(6) until the mean absolute error loss and the perceptual loss converge, completing the training.
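Steps (1)-(7) correspond to the standard alternating GAN update; a condensed sketch, where the learning rates and the plain BCE discriminator loss are assumptions, with G, D, and loss_fn as in the sketches above:

```python
import torch

def train(G, D, loss_fn, loader, epochs, device="cuda"):
    """Alternate: update G with the combined loss while D's gradients are
    discarded, then update D with the discriminator loss on detached fakes."""
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for lo, hi in loader:  # batches of 16 patch pairs
            lo, hi = lo.to(device), hi.to(device)
            # Steps (1)-(5): generator update, discriminator held fixed.
            fake = G(lo)
            g_loss = loss_fn(fake, hi, D(fake))
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()
            # Step (6): discriminator update, generator held fixed.
            d_real, d_fake = D(hi), D(fake.detach())
            d_loss = (bce(d_real, torch.ones_like(d_real))
                      + bce(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()
```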
Embodiment 2
Unlike Embodiment 1, in this embodiment the collected clinical data is preprocessed into the dataset by resizing the low-quality CT image to the same size as the high-quality CT image using three-dimensional interpolation, and then cropping it into small-block data pairs.
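The three-dimensional interpolation of this embodiment can be sketched with scipy; cubic interpolation (order=3) is an assumption, as the text only specifies that the interpolation is three-dimensional:

```python
from scipy.ndimage import zoom

def upsample_to_match(lo_vol, hi_shape):
    """Resize the low-quality volume to the high-quality volume's shape
    by 3-D interpolation before cropping into small-block pairs."""
    factors = [t / s for t, s in zip(hi_shape, lo_vol.shape)]
    return zoom(lo_vol, factors, order=3)
```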
The generator network can adopt, but is not limited to, the U-Net structure, and the discriminator can be, but is not limited to, a PatchGAN. Using a PatchGAN takes into account the influence of different parts of the image and solves the imprecision of the output image caused by having only a single output for each input.
Unlike the prior art, which uses simulated training sets, the present invention obtains a realistic training dataset by preprocessing real clinical data, so that the deep learning model can be applied in clinical practice. By using the generative adversarial network framework and combining perceptual loss with pixel-level loss, the deep learning model of the present invention can, end to end, optimize low-radiation, low-resolution medical images of no clinical value into high-quality, high-resolution medical images, accomplishing low-radiation CT image denoising and low-resolution CT image super-resolution at the same time, so that the generated high-quality images have realistic detail. By changing the dataset, the proposed deep learning model can also be used for other image enhancement tasks, such as denoising or super-resolution of natural images. In a further preferred embodiment, an upsampling module with an arbitrary scale factor is introduced, achieving super-resolution at an arbitrary scale factor.
Those skilled in the art will recognize that many variations of the above description are possible, and that the embodiments and drawings merely describe one or more particular implementations.
Although what are regarded as exemplary embodiments of the present invention have been described and illustrated, those skilled in the art will understand that various changes and substitutions can be made without departing from the spirit of the invention. In addition, many modifications can be made to adapt a particular situation to the teachings of the invention without departing from the central concept described herein. Therefore, the invention is not limited to the specific embodiments disclosed here, but may include all embodiments and their equivalents falling within the scope of the invention.

Claims (10)

  1. A deep-learning-based method for enhancing CT image quality and resolution, characterized by comprising the following steps:
    S1, preprocessing the collected clinical data to obtain a dataset;
    S2, constructing a deep learning model comprising a generator network, a discriminator network, and a perceptual network;
    S3, constructing a loss function;
    S4, using the dataset and the loss function to iteratively update the parameters of the generator network, obtaining a trained deep learning model;
    S5, inputting low-quality, low-resolution images into the trained deep learning model to obtain high-quality, high-resolution images.
  2. The deep-learning-based method for enhancing CT image quality and resolution according to claim 1, characterized in that the preprocessing of the clinical data in step S1 comprises the following steps:
    S11, obtaining low-quality CT images at low radiation dose and low resolution and high-quality CT images at normal radiation dose and high resolution;
    S12, cropping the low-quality CT images according to the metadata of the medical images so that the cropped low-quality CT images correspond to the physical-space information of the high-quality CT images, obtaining data pairs with the same physical-space information;
    S13, cropping the data pairs into small-block data pairs and applying a threshold test, retaining the small-block pairs that satisfy the threshold condition;
    S14, applying pixel-value clipping and normalization to the retained small-block pairs;
    S15, applying data augmentation to the small-block pairs processed in step S14 to obtain the dataset for training the deep learning model.
  3. The deep-learning-based method for enhancing CT image quality and resolution according to claim 2, characterized in that the method of cropping the data pairs into small-block data pairs in step S13 comprises the following: the high-quality CT image in a data pair is cropped every fixed number of pixels/slices, and the corresponding number of pixels/slices of the low-quality CT image is scaled so as to correspond to the physical space of the high-quality CT image.
  4. The deep-learning-based method for enhancing CT image quality and resolution according to claim 3, characterized in that the threshold condition in step S13 is that the similarity index between the high-quality CT image and the scaled low-quality CT image in a small-block pair is above a threshold.
  5. The deep-learning-based method for enhancing CT image quality and resolution according to claim 2, characterized in that the data augmentation in step S15 is performed by flipping and rotating the images.
  6. The deep-learning-based method for enhancing CT image quality and resolution according to claim 1, characterized in that the loss function is a combined loss function of mean absolute error loss, perceptual loss, and generative adversarial loss.
  7. The deep-learning-based method for enhancing CT image quality and resolution according to claim 6, characterized in that the perceptual loss is computed by feeding the generator output and the real high-quality CT image separately into the perceptual network and taking the MSE between the two outputs.
  8. The deep-learning-based method for enhancing CT image quality and resolution according to claim 6, characterized in that the generative adversarial loss is one of GAN loss, WGAN loss, WGAN-GP loss, or rGAN loss.
  9. The deep-learning-based method for enhancing CT image quality and resolution according to claim 1, characterized in that the generator network comprises a feature extraction module and an upsampling module; the feature extraction module consists of a convolutional layer, cascaded convolution blocks, and a further convolutional layer, finally producing a low-resolution feature map from the low-quality CT image; each block in the cascade comprises at least two convolutional layers with a ReLU layer in between; the upsampling module comprises a fully connected network and a convolutional layer, wherein the position information of each pixel of the desired high-quality CT image is fed into the fully connected network and the output is applied to the low-resolution feature map to obtain a high-quality, high-resolution image.
  10. The deep-learning-based method for enhancing CT image quality and resolution according to claim 1, characterized in that the Adam optimizer is used to optimize the generator network and the discriminator network.
PCT/CN2021/083307 2020-12-07 2021-03-26 A deep-learning-based method for enhancing CT image quality and resolution WO2022121160A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/255,608 US20240037732A1 (en) 2020-12-07 2021-03-26 Method for enhancing quality and resolution of ct images based on deep learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011456115.3 2020-12-07
CN202011456115.3A CN112435309A (zh) A deep-learning-based method for enhancing CT image quality and resolution

Publications (1)

Publication Number Publication Date
WO2022121160A1 (zh)

Family

Family ID: 74691403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/083307 WO2022121160A1 (zh) 2020-12-07 2021-03-26 A deep-learning-based method for enhancing CT image quality and resolution

Country Status (3)

Country Link
US (1) US20240037732A1 (zh)
CN (1) CN112435309A (zh)
WO (1) WO2022121160A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435309A (zh) * 2020-12-07 2021-03-02 苏州深透智能科技有限公司 A deep-learning-based method for enhancing CT image quality and resolution
CN113205461B (zh) * 2021-04-02 2023-07-07 上海慧虎信息科技有限公司 Low-dose CT image denoising model training method, denoising method, and apparatus
CN114241077B (zh) * 2022-02-23 2022-07-15 南昌睿度医疗科技有限公司 CT image resolution optimization method and apparatus
CN114565515B (zh) * 2022-03-01 2022-11-25 佛山读图科技有限公司 Construction method of a system for projection-data denoising and resolution restoration
CN114331921A (zh) * 2022-03-09 2022-04-12 南昌睿度医疗科技有限公司 Low-dose CT image noise-reduction method and apparatus
CN115639605B (zh) * 2022-10-28 2024-05-28 中国地质大学(武汉) Deep-learning-based automatic identification method and apparatus for high-resolution faults
CN117670961B (zh) * 2024-02-01 2024-04-16 深圳市规划和自然资源数据管理中心(深圳市空间地理信息中心) Deep-learning-based multi-view stereo matching method and system for low-altitude remote sensing images

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190333219A1 (en) * 2018-04-26 2019-10-31 Elekta, Inc. Cone-beam ct image enhancement using generative adversarial networks
CN110675461A (zh) * 2019-09-03 2020-01-10 天津大学 CT image restoration method based on unsupervised learning
CN110728727A (zh) * 2019-09-03 2020-01-24 天津大学 Restoration method for low-dose spectral CT projection data
US20200111194A1 (en) * 2018-10-08 2020-04-09 Rensselaer Polytechnic Institute Ct super-resolution gan constrained by the identical, residual and cycle learning ensemble (gan-circle)
CN112435309A (zh) * 2020-12-07 2021-03-02 苏州深透智能科技有限公司 A deep-learning-based method for enhancing CT image quality and resolution


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468704B (zh) * 2023-04-24 2023-10-10 哈尔滨市科佳通用机电股份有限公司 Deep-learning-based manual brake shaft chain detection method
CN116468704A (zh) * 2023-04-24 2023-07-21 哈尔滨市科佳通用机电股份有限公司 Deep-learning-based manual brake shaft chain detection method
CN116797457A (zh) * 2023-05-20 2023-09-22 北京大学 Method and system for simultaneously achieving magnetic resonance image super-resolution and artifact removal
CN116797457B (zh) * 2023-05-20 2024-05-14 北京大学 Method and system for simultaneously achieving magnetic resonance image super-resolution and artifact removal
CN116844192B (zh) * 2023-07-19 2024-04-12 滁州学院 Enhancement processing method for low-quality fingerprint images
CN116612206A (zh) * 2023-07-19 2023-08-18 中国海洋大学 Method and system for reducing CT scanning time using a convolutional neural network
CN116612206B (zh) * 2023-07-19 2023-09-29 中国海洋大学 Method and system for reducing CT scanning time using a convolutional neural network
CN116844192A (zh) * 2023-07-19 2023-10-03 滁州学院 Enhancement processing method for low-quality fingerprint images
CN117011316A (zh) * 2023-10-07 2023-11-07 之江实验室 Method and system for identifying the internal structure of soybean stems based on CT images
CN117011316B (zh) * 2023-10-07 2024-02-06 之江实验室 Method and system for identifying the internal structure of soybean stems based on CT images
CN117391984B (zh) * 2023-11-02 2024-04-05 中国人民解放军空军军医大学 Method for improving CBCT image quality
CN117391984A (zh) * 2023-11-02 2024-01-12 中国人民解放军空军军医大学 Method for improving CBCT image quality
CN117274067A (zh) * 2023-11-22 2023-12-22 浙江优众新材料科技有限公司 Reinforcement-learning-based blind super-resolution processing method and system for light-field images

Also Published As

Publication number Publication date
US20240037732A1 (en) 2024-02-01
CN112435309A (zh) 2021-03-02

Similar Documents

Publication Publication Date Title
WO2022121160A1 (zh) A deep-learning-based method for enhancing CT image quality and resolution
CN108898642B (zh) Sparse-angle CT imaging method based on a convolutional neural network
Huang et al. CaGAN: A cycle-consistent generative adversarial network with attention for low-dose CT imaging
CN108492269B (zh) Low-dose CT image denoising method based on a gradient-regularized convolutional neural network
US20170372193A1 (en) Image Correction Using A Deep Generative Machine-Learning Model
CN112258415B (zh) Chest X-ray super-resolution and denoising method based on a generative adversarial network
Jiang et al. Low-dose CT lung images denoising based on multiscale parallel convolution neural network
CN115953494B (zh) Multi-task high-quality CT image reconstruction method based on low dose and super-resolution
CN107862665B (zh) Enhancement method and apparatus for CT image sequences
CN113870138A (zh) Low-dose CT image denoising method and system based on three-dimensional U-net
CN113516586A (zh) Low-dose CT image super-resolution denoising method and apparatus
CN108038840B (zh) Image processing method and apparatus, image processing device, and storage medium
Zhao et al. Sparse-view CT reconstruction via generative adversarial networks
CN110070510A (zh) CNN medical image denoising method based on VGG-19 feature extraction
CN116645283A (zh) Low-dose CT image denoising method based on a multi-scale convolutional neural network with self-supervised perceptual loss
CN110599530B (zh) MVCT image texture enhancement method based on dual regularization constraints
Li et al. Learning non-local perfusion textures for high-quality computed tomography perfusion imaging
Trung et al. Dilated residual convolutional neural networks for low-dose CT image denoising
Liu et al. MRCON-Net: Multiscale reweighted convolutional coding neural network for low-dose CT imaging
CN117575915A (zh) Image super-resolution reconstruction method, terminal device, and storage medium
Kim et al. CNN-based CT denoising with an accurate image domain noise insertion technique
CN114298920B (zh) Extended-field-of-view CT image reconstruction model training and reconstruction method
Bera et al. Axial consistent memory GAN with interslice consistency loss for low dose computed tomography image denoising
Zhang et al. Deep residual network based medical image reconstruction
CN114926383A (zh) Medical image fusion method based on a detail-enhanced decomposition model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21901877

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18255608

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21901877

Country of ref document: EP

Kind code of ref document: A1