WO2024066711A1 - A CT angiography intelligent imaging method based on focused learning

A CT angiography intelligent imaging method based on focused learning

Info

Publication number
WO2024066711A1
Authority
WO
WIPO (PCT)
Prior art keywords
generator
normalized
image
layer
discriminator
Prior art date
Application number
PCT/CN2023/109843
Other languages
English (en)
French (fr)
Inventor
娄昕
杨明亮
吕晋浩
Original Assignee
中国人民解放军总医院第一医学中心 (First Medical Center of Chinese PLA General Hospital)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国人民解放军总医院第一医学中心 (First Medical Center of Chinese PLA General Hospital)
Publication of WO2024066711A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • The present invention relates to the field of artificial intelligence technology, and in particular to a CT angiography intelligent imaging method based on focused learning.
  • CT angiography (CTA) requires contrast agents in its workflow, so the associated round-trip CT scanning takes up considerable time and increases related costs; relevant techniques or means are therefore needed to solve these problems.
  • NCCT: non-contrast CT (plain-scan CT).
  • the present invention provides a CT angiography intelligent imaging method based on focused learning, which adopts the following technical solution:
  • a CT angiography intelligent imaging method based on focused learning, characterized by comprising the following steps:
  • Step 1: collecting NCCT images and corresponding real CTA images and normalizing them, taking the normalized NCCT images and the corresponding normalized real CTA images as sample pairs, and dividing the sample pairs into a training set, a validation set, and a test set;
  • Step 2: constructing an adversarial network model, which includes a generator, a corrector, and a discriminator;
  • Step 3: constructing the joint focused learning loss function of the generator and the corrector, and constructing the discriminator loss function;
  • Step 4: using the training set to train the adversarial network model, and using the validation set to verify the trained adversarial network model;
  • Step 5: inputting the sample pairs of the test set into the generator to generate the corresponding normalized synthetic CTA images, and testing and evaluating the obtained normalized synthetic CTA images to obtain the generator with the best test performance;
  • Step 6: loading the generator obtained in step 5, taking the normalized NCCT image to be processed as the generator input, and outputting the normalized synthetic CTA image.
  • the generator includes an input layer, an encoder, a central residual module, a decoder, and an output layer; in the generator:
  • the normalized NCCT image is input to the input layer
  • the encoder includes multiple downsampling convolutional layers,
  • the central residual module includes multiple residual blocks,
  • the decoder includes multiple upsampling convolutional layers,
  • the output layer performs a 2D convolution operation on the output of the upsampling convolution layer and outputs a normalized synthetic CTA image through the activation function.
  • the corrector includes an encoder, a central residual module, a decoder and an output end.
  • the output end includes a refinement module and an output layer.
  • the normalized synthetic CTA image output by the generator and the normalized real CTA image are input to the encoder,
  • the encoder includes multiple downsampling convolutional layers,
  • the central residual module includes multiple residual blocks,
  • the decoder includes multiple upsampling convolutional layers,
  • the refinement module includes residual blocks and convolutional layers.
  • the downsampling convolutional layers of the encoder and the corresponding upsampling convolutional layers of the decoder are connected by skip connections,
  • except for the refinement module and the output layer at the output end, the downsampling convolutional layers of the encoder, the residual blocks of the central residual module, and the upsampling convolutional layers of the decoder all use normalization and activation functions, and the output layer outputs the correction space matrix.
  • the discriminator includes multiple layers of downsampling convolutional layers and a 2D convolutional output layer.
  • the input of the discriminator is a normalized real CTA image or a normalized synthetic CTA image.
  • the discriminator outputs a single-channel image matrix block. After average pooling, the single-channel image matrix block obtains the corresponding pooling value.
  • the joint focused learning loss function L_GR of the generator and the corrector in step 3 as described above is defined as:
    L_GR = L_GAN(G,D) + Σ_{i=1}^{m} b_i·L_Corr^i + γ·L_Smooth
    L_GAN(G,D) = E_x[(1 − D(G(x)))²]
    where L_GAN(G,D) is the adversarial loss function; D is the discriminator; G is the generator; m is the number of focus scales; b_i is the weighting coefficient of the i-th correction loss function L_Corr^i; γ is the weighting coefficient of L_Smooth; L_Smooth is the smoothing loss function; E(·) is the expectation operator, with the subscript denoting the input variable; x is the normalized NCCT image input to the generator G; y is the normalized real CTA image; ∘ corresponds to the resampling operation; R is the corrector; ∇ is the gradient operator; ‖·‖₁ is the L1 distance operator.
  • the discriminator loss function L_Adv(G,D) in step 3 as described above is defined as:
    min L_Adv(G,D) = E_y[(1 − D(y))²] + E_x[D(G(x))²].
  • the training of the adversarial network model in step 4 as described above specifically includes the following steps:
  • first, the discriminator parameters are held fixed, and the joint focused learning loss function L_GR is minimized to update the parameters of the generator and the corrector;
  • then, the parameters of the generator and the corrector are held fixed, and the discriminator loss function L_Adv(G,D) is minimized to optimize and update the discriminator parameters.
  • the test performance in step 5 includes the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) of the normalized synthetic CTA image, and also includes the structural similarity (SSIM) between the normalized synthetic CTA image and the normalized real CTA image.
  • the present invention has the following beneficial effects:
  • the present invention provides an intelligent imaging method for CT angiography based on focused learning, which reduces the necessity of using contrast agents;
  • the present invention constructs a joint focused learning loss function of the generator and the corrector, so that the CTA images synthesized by the generator better highlight vascular tissue;
  • the present invention introduces a corrector to achieve better registration and alignment between NCCT images and CTA images, thereby better establishing the mapping relationship between NCCT images and CTA images and yielding higher-quality synthetic CTA images;
  • the present invention has good robustness and extensibility, and is easy to integrate modularly and use in a distributed manner.
  • FIG. 1 is a schematic diagram of the network architecture of the adversarial network model of the present invention.
  • FIG. 2 is a schematic diagram of the network architecture of the generator G of the present invention.
  • FIG. 3 is a schematic diagram of the network architecture of the corrector R of the present invention.
  • FIG. 4 is a schematic diagram of the network architecture of the discriminator D of the present invention.
  • a CT angiography intelligent imaging method based on focused learning includes the following steps:
  • Step 1: Acquire NCCT images and corresponding real CTA images and perform quality inspection and normalization in sequence.
  • the normalized NCCT images and the corresponding normalized real CTA images are used as sample pairs, and the sample pairs are divided into training, validation, and test sets:
  • the quality inspection criteria follow one or more of the following inclusion/exclusion rules: (1) the scanning interval between the NCCT image and the corresponding real CTA image does not exceed 1 month; (2) the slice thickness and number of slices of the NCCT image and the corresponding real CTA image are consistent, and the slices correspond; (3) the NCCT image and the corresponding real CTA image are stored in the standard manner; (4) there are no severe artifacts in the NCCT image or the real CTA image; (5) the NCCT image and the real CTA image are scanned normally and the images are complete; (6) the artery has not undergone surgery, such as aneurysm surgery;
  • the normalized NCCT images and the corresponding real CTA images are used as sample pairs, and the sample pairs are randomly divided into a training set, a validation set, and a test set in a ratio of 6:1:3 for model training, validation, and testing.
  • Step 2 Construct an adversarial network model based on nonlinear combination theory: Use convolutional networks to construct the generator, corrector, and discriminator respectively:
  • Step 2.1 constructs a generator.
  • the generator model framework described in this embodiment is shown in FIG. 2.
  • the generator structure includes an input layer, an encoder, a central residual module, a decoder, and an output layer in sequence.
  • the encoder includes 2 layers of downsampling convolutional layers
  • the central residual module includes 9 residual blocks
  • the decoder includes 2 layers of upsampling convolutional layers.
  • the number of channels in the input layer changes from 1 to 64
  • the number of channels in the two downsampling convolution layers of the encoder changes from 64 to 128 and 128 to 256
  • the number of channels in each residual block in the central residual module is 256
  • the number of channels in the two upsampling convolution layers of the decoder changes from 256 to 128 and 128 to 64
  • the number of channels in the output layer changes from 64 to 1.
  • the convolution kernels of the input and output layers of the generator are 7 ⁇ 7
  • the convolution step is 1
  • the number of convolution zero padding is 3.
  • the convolution kernels of the encoder and decoder are both 3 ⁇ 3, the convolution step is 2, and the number of convolution zero padding is 1.
  • the convolution kernels of each residual block in the central residual module are all 3×3, and the convolution stride and zero padding are both 1. Except for the output layer, the input layer, the downsampling convolutional layers, the residual blocks, and the upsampling convolutional layers all use InstanceNorm2d normalization and the ReLU activation function. Finally, the output layer performs a 2D convolution operation on the output of the upsampling convolutional layers and outputs a normalized synthetic CTA image through the tanh activation function.
  • the dimensions of the input layer and the output layer are both the number of sample batches ⁇ the number of image channels ⁇ the image width ⁇ the image height.
  • the number of sample batches trained at one time is 1, the number of image channels input to the input layer and the number of image channels output by the output layer are both 1, the image width is 512, and the image height is 512; the input of the input layer is a normalized NCCT image, and the output of the output layer is a normalized synthetic CTA image.
  • the encoder encodes the input normalized NCCT image into deep features.
  • the central residual module performs multiple convolution operations on the encoded deep features to obtain deep features that are closer to the target image.
  • the decoder decodes the features output by the central residual module into the target image.
  • Step 2.2 Construct the corrector.
  • the corrector model framework described in this embodiment is shown in FIG. 3.
  • the corrector backbone network includes an encoder, a central residual module, a decoder, and an output end.
  • the output end includes a refinement module and an output layer.
  • the input of the corrector is the normalized synthetic CTA image output by the generator and the normalized real CTA image, the output of which is the correction space matrix between the normalized synthetic CTA image and the normalized real CTA image.
  • the encoder includes multiple layers of downsampling convolutional layers, and the decoder includes multiple layers of upsampling convolutional layers.
  • the number of layers of the downsampling convolutional layers is the same as the number of layers of the upsampling convolutional layers.
  • the downsampling convolutional layers of the encoder and the corresponding upsampling convolutional layers of the decoder are connected by skip connections.
  • the encoder includes 7 downsampling convolutional layers
  • the central residual module includes 3 residual blocks
  • the decoder includes 7 upsampling convolutional layers
  • the refinement module includes 1 residual block and a convolutional layer.
  • a convolutional layer with a 1×1 convolution kernel, a stride of 1, and no zero padding is provided before and after the central residual module, respectively.
  • the convolution kernels of the downsampling convolutional layer, the residual block of the central residual module, the upsampling convolutional layer, and the residual block of the refinement module are all 3 ⁇ 3, the convolution step is 1, the convolution zero padding is 1, and the activation function is LeakyReLU.
  • the convolution kernel of the convolutional layer in the refinement module is 1 ⁇ 1, the step is 1, and the convolution zero padding is 0.
  • the convolution kernel of the output layer is 3 ⁇ 3, the convolution step is 1, the number of zero padding is 1, and there is no activation function.
  • the input sources of each upsampling convolutional layer of the decoder include two parts: the output of the previous level and the output of the encoder downsampling convolutional layer corresponding to the current upsampling convolutional layer, so the number of input channels of a decoder upsampling convolutional layer takes the form c1+c2:
  • c1 is the number of output channels of the previous level; for example, the c1 value of the first-level decoder upsampling convolutional layer is the number of output channels of the convolutional layer after the central residual module, the c1 value of the second-level decoder upsampling convolutional layer is the number of output channels of the first-level decoder upsampling convolutional layer, and so on;
  • c2 is the number of output channels of the corresponding encoder downsampling convolutional layer.
  • the channel counts of the 7 downsampling convolutional layers of the encoder change as 2->32, 32->64, 64->64, 64->64, 64->64, 64->64, and 64->64.
  • the input of the encoder is the normalized synthetic CTA image and the normalized real CTA image output by the generator, so the number of channels of the first-level downsampling convolutional layer input is 2, the number of channels of the convolutional layer before the central residual module changes from 64->128, the number of channels of each residual block in the central residual module is 128, and the number of channels of the convolutional layer after the central residual module changes from 128->64.
  • the channel counts of the 7 upsampling convolutional layers of the decoder change as 64+64->64, 64+64->64, 64+64->64, 64+64->64, 64+64->64, 64+64->64, and 64+32->32.
  • the number of channels of the refinement module is 32.
  • the number of channels in the output layer changes from 32 to 2.
  • the downsampling convolutional layers of the encoder, the residual blocks of the central residual module, and the upsampling convolutional layers of the decoder all use InstanceNorm2d normalization and the LeakyReLU activation function, and the output layer finally outputs the correction space matrix.
  • the dimension of the correction space matrix output in this embodiment is [number of sample batches, number of output-layer channels, image width, image height], which is [1, 2, 512, 512] in this embodiment.
  • Step 2.3: Construct the discriminator, which is used to determine whether a given image is a normalized real CTA image.
  • the discriminator model framework of this embodiment is shown in FIG. 4.
  • the discriminator includes four layers of downsampling convolutional layers and one 2D convolution output layer.
  • each downsampling convolutional layer uses the LeakyReLU activation function and InstanceNorm2d normalization, and all convolution operations of the discriminator use 4×4 convolution kernels.
  • the first three downsampling convolutions have a step size of 2 and a zero padding of 1.
  • the convolution step size of the 4th downsampling and output convolution layer is 1 and the zero padding is 1.
  • the input of the discriminator is a normalized real CTA image or a normalized synthetic CTA image.
  • after the multi-layer convolution operations of the discriminator, a 62×62 single-channel image matrix block is output.
  • the single-channel image matrix block is average-pooled by the avg_pool2d function (pooling layer) of torch to obtain the corresponding pooling value.
  • Step 3: For the constructed adversarial network model, design the joint focused learning loss function of the generator and corrector, and the discriminator loss function. Through the focused design of the correction loss, the joint learning of the generator and corrector is concentrated on the target region.
  • the joint focused learning loss function L_GR of the generator and corrector is defined as:
    L_GR = L_GAN(G,D) + Σ_{i=1}^{m} b_i·L_Corr^i + γ·L_Smooth
    L_GAN(G,D) = E_x[(1 − D(G(x)))²]
  • where L_GAN(G,D) is the adversarial loss function; D is the discriminator; G is the generator; m is the number of focus scales, m = 2 in this embodiment; b_i is the weighting coefficient of the i-th correction loss function L_Corr^i, with b_1 = 20 and b_2 = 2 in this embodiment; different values of i correspond to different focus regions: for i = 1 the full-image loss between y and the corrected synthetic image is computed, and for i = 2 the region-filtered image loss is computed, the filtering region (i.e., the region-filtered image) being defined as the region where the normalized image HU value within the default window of the DICOM file of the real CTA image is greater than the threshold 0.65;
  • γ is the weighting coefficient of L_Smooth, γ = 10 in this embodiment, and L_Smooth is the smoothing loss function;
  • E(·) is the expectation operator, with the subscript denoting the input variable;
  • x is the normalized NCCT image input to the generator G, and G(x) is the output of the generator, i.e., the normalized synthetic CTA image;
  • y is the normalized real CTA image; ∘ corresponds to the grid_sample() resampling operation in the torch library;
  • R is the corrector, and R(G(x), y) is the correction space matrix output by corrector training, which is used to correct the generator output G(x) to obtain the corrected normalized synthetic CTA image; ∇ is the gradient operator, and ‖·‖₁ is the L1 distance operator.
  • the discriminator loss function L_Adv(G,D) is defined as:
    min L_Adv(G,D) = E_y[(1 − D(y))²] + E_x[D(G(x))²].
  • Step 4: Use the training set to train the constructed adversarial network model, and use the validation set to verify the intermediate training models.
  • the specific steps are:
  • first, the discriminator parameters are held fixed, and the joint focused learning loss function L_GR, computed from the normalized synthetic CTA image, the corrected normalized synthetic CTA image, and the normalized real CTA image, is minimized to update the parameters of the generator and the corrector.
  • then, the parameters of the generator and the corrector are held fixed; the normalized synthetic CTA image and the normalized real CTA image are respectively fed into the constructed discriminator to compute the discriminator loss function L_Adv(G,D), which is minimized, and the resulting loss value is used to optimize and update the discriminator parameters.
  • finally, the validation set data are used to verify the intermediate model after each round of training and updating, to evaluate the correctness and effectiveness of the iterative model updates.
  • the experimental platform of this embodiment is a Linux server with an NVIDIA GeForce RTX 3090Ti GPU and 64 GB of memory, with Python 3.8; the model is built with PyTorch as the deep learning framework, the optimizer is Adam, the initial learning rates of the generator, corrector, and discriminator are all 0.0001 with no decay strategy, and the number of training iterations is epoch = 80.
  • Step 5: Use the test set to test and evaluate the generator of the adversarial network model obtained in step 4: input the normalized NCCT images into that generator to obtain normalized synthetic CTA images, and test and evaluate them against the normalized real CTA images.
  • the model with the best test performance is selected as the final model for use.
  • the performance test indicators include the mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) of the normalized synthetic CTA image, and also include the structural similarity (SSIM) between the normalized synthetic CTA image and the normalized real CTA image.
  • Step 6: the generator obtained in step 5 is loaded, and the normalized NCCT image to be processed is used as the input of the generator; the output is the normalized synthetic CTA image.
  • the normalized [-1, 1] synthetic CTA image output by the generator is reconstructed back to the original gray-level range [-1024, 3071] by inverting the normalization, giving a synthetic image at the original gray levels.
  • the synthetic image in the original gray-level range is converted to binary format and assigned to PixelData in the DICOM header file; the other DICOM header information is kept consistent with the header file of the NCCT image, thereby obtaining the synthetic CTA image.
  • a focused learning-based CT angiography intelligent imaging device comprises a first module, a second module, a third module, a fourth module, a fifth module and a sixth module, wherein the above steps 1 to 6 are respectively implemented by the first to sixth modules.
  • the present invention is not limited to the above embodiments; the above embodiments merely describe preferred embodiments of the present invention and do not limit its concept.
  • the implementation schemes in the above embodiments can be further combined or replaced, and various changes and improvements made to the technical solutions of the present invention by those skilled in the art all fall within the protection scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a CT angiography intelligent imaging method based on focused learning. NCCT images and corresponding real CTA images are collected and normalized; the normalized NCCT images and the corresponding real CTA images serve as sample pairs, which are divided into a training set, a validation set, and a test set. An adversarial network model including a generator, a corrector, and a discriminator is constructed; a joint focused learning loss function of the generator and the corrector and a discriminator loss function are constructed; the constructed adversarial network model is trained on the training set and verified on the validation set; and the test set is used to obtain the generator with the best test performance. The present invention constructs a joint focused learning loss function so that the CTA images synthesized by the generator better highlight target regions such as vascular tissue, and introduces a corrector so that NCCT images and CTA images are better registered and aligned.

Description

A CT angiography intelligent imaging method based on focused learning

Technical Field

The present invention relates to the field of artificial intelligence technology, and in particular to a CT angiography intelligent imaging method based on focused learning.

Background Art

Because CT angiography (CT angiography, CTA) requires contrast agents in its workflow, the round-trip CT scanning takes up considerable time and increases related costs. Relevant techniques or means are therefore needed to solve these problems. The idea considered here is to use artificial intelligence: by constructing a focused-learning adversarial network model, image translation from non-contrast CT (NCCT) to CTA is realized, thereby shortening the CTA examination workflow and providing a faster and more economical imaging option.

In recent years, with the development of artificial intelligence, image translation models represented by the Pix2pix network [Isola P, et al. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1125-1134.] have emerged and achieve good modality translation between paired images. Facing the practical difficulty of obtaining large quantities of high-quality paired medical images, researchers have tried to apply the cycleGAN model [Zhu J Y, Proceedings of the IEEE International Conference on Computer Vision. 2017: 2223-2232.] to unpaired medical image modality translation, but with limited success. Addressing the pain points that strictly paired medical image data are difficult to obtain and that unsupervised learning on unpaired data is of limited effect, medical image modality translation models represented by RegGAN [Kong L, et al. Advances in Neural Information Processing Systems, 2021, 34: 1964-1978.] have recently been developed. Because current related models do not consider the differing importance of different tissue regions, models trained under these conditions cannot highlight the image data of important regions.
Summary of the Invention

To solve the above technical problems, the present invention provides a CT angiography intelligent imaging method based on focused learning, adopting the following technical solution:

A CT angiography intelligent imaging method based on focused learning, characterized by comprising the following steps:

Step 1: Collect NCCT images and corresponding real CTA images and normalize them; take the normalized NCCT images and the corresponding normalized real CTA images as sample pairs, and divide the sample pairs into a training set, a validation set, and a test set;

Step 2: Construct an adversarial network model, which includes a generator, a corrector, and a discriminator;

Step 3: Construct the joint focused learning loss function of the generator and the corrector, and construct the discriminator loss function;

Step 4: Use the training set to train the adversarial network model, and use the validation set to verify the trained adversarial network model;

Step 5: Input the sample pairs of the test set into the generator to generate the corresponding normalized synthetic CTA images, and test and evaluate the obtained normalized synthetic CTA images to obtain the generator with the best test performance;

Step 6: Load the generator obtained in step 5, take the normalized NCCT image to be processed as the generator input, and output the normalized synthetic CTA image.
The generator described above includes an input layer, an encoder, a central residual module, a decoder, and an output layer; in the generator:

the normalized NCCT image is input to the input layer,

the encoder includes multiple downsampling convolutional layers,

the central residual module includes multiple residual blocks,

the decoder includes multiple upsampling convolutional layers,

and except for the output layer, the input layer, the downsampling convolutional layers, the residual blocks, and the upsampling convolutional layers all use normalization and activation functions; the output layer performs a 2D convolution operation on the output of the upsampling convolutional layers and outputs a normalized synthetic CTA image through an activation function.

The corrector described above includes an encoder, a central residual module, a decoder, and an output end; the output end includes a refinement module and an output layer; in the corrector:

the normalized synthetic CTA image output by the generator and the normalized real CTA image are input to the encoder,

the encoder includes multiple downsampling convolutional layers,

the central residual module includes multiple residual blocks,

the decoder includes multiple upsampling convolutional layers,

the refinement module includes a residual block and a convolutional layer,

the downsampling convolutional layers of the encoder and the corresponding upsampling convolutional layers of the decoder are connected by skip connections,

and except for the refinement module and the output layer at the output end, the downsampling convolutional layers of the encoder, the residual blocks of the central residual module, and the upsampling convolutional layers of the decoder all use normalization and activation functions; the output layer outputs the correction space matrix.

The discriminator described above includes multiple downsampling convolutional layers and one 2D convolutional output layer; the input of the discriminator is a normalized real CTA image or a normalized synthetic CTA image; the discriminator outputs a single-channel image matrix block, and the single-channel image matrix block is average-pooled to obtain the corresponding pooling value.
The joint focused learning loss function L_GR of the generator and the corrector in step 3 described above is defined as:

L_GR = L_GAN(G,D) + Σ_{i=1}^{m} b_i·L_Corr^i + γ·L_Smooth

L_GAN(G,D) = E_x[(1 − D(G(x)))²]

where L_GAN(G,D) is the adversarial loss function, D is the discriminator, G is the generator, m is the number of focus scales, b_i is the weighting coefficient of the i-th correction loss function L_Corr^i, γ is the weighting coefficient of L_Smooth, and L_Smooth is the smoothing loss function; E(·) is the expectation operator, with the subscript denoting the input variable; x is the normalized NCCT image input to the generator G; y is the normalized real CTA image; ∘ corresponds to the resampling operation; R is the corrector; ∇ is the gradient operator; ‖·‖₁ is the L1 distance operator.

The discriminator loss function L_Adv(G,D) in step 3 described above is defined as:

min L_Adv(G,D) = E_y[(1 − D(y))²] + E_x[D(G(x))²].
The training of the adversarial network model in step 4 described above specifically includes the following steps:

first, the discriminator parameters are held fixed, and the joint focused learning loss function L_GR is minimized to update the parameters of the generator and the corrector;

then, the parameters of the generator and the corrector are held fixed, and the discriminator loss function L_Adv(G,D) is minimized to optimize and update the discriminator parameters.

The test performance in step 5 described above includes the mean absolute error (MAE) and the peak signal-to-noise ratio (PSNR) of the normalized synthetic CTA image, as well as the structural similarity (SSIM) between the normalized synthetic CTA image and the normalized real CTA image.
Compared with the prior art, the present invention has the following beneficial effects:

1. The present invention provides a CT angiography intelligent imaging method based on focused learning, which reduces the necessity of using contrast agents;

2. The present invention constructs a joint focused learning loss function of the generator and the corrector, so that the CTA images synthesized by the generator better highlight vascular tissue;

3. The present invention introduces a corrector to achieve better registration and alignment between NCCT images and CTA images, thereby better establishing the mapping relationship between NCCT images and CTA images and yielding higher-quality synthetic CTA images;

4. The present invention has good robustness and extensibility, and is easy to integrate modularly and use in a distributed manner.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the network architecture of the adversarial network model of the present invention;

FIG. 2 is a schematic diagram of the network architecture of the generator G of the present invention;

FIG. 3 is a schematic diagram of the network architecture of the corrector R of the present invention;

FIG. 4 is a schematic diagram of the network architecture of the discriminator D of the present invention.
具体实施方式
为了便于本领域普通技术人员理解和实施本发明,下面结合实例对本发明作进一步的详细描述,此处所描述的实施示例仅用于说明和解释本发明,并非是对本发明的限制。
实施例1
As shown in FIG. 1, a CT angiography intelligent imaging method based on focused learning includes the following steps:

Step 1: Collect NCCT images and corresponding real CTA images and perform quality inspection and normalization in sequence; take the normalized NCCT images and the corresponding normalized real CTA images as sample pairs, and divide the sample pairs into a training set, a validation set, and a test set:

The quality inspection criteria follow one or more of the following inclusion/exclusion rules: (1) the scanning interval between the NCCT image and the corresponding real CTA image does not exceed one month; (2) the slice thickness and number of slices of the NCCT image and the corresponding real CTA image are consistent, and the slices correspond; (3) the NCCT image and the corresponding real CTA image are stored in the standard manner; (4) there are no severe artifacts in the NCCT image or the real CTA image; (5) the NCCT image and the real CTA image are scanned normally and the images are complete; (6) the artery has not undergone surgery, such as aneurysm surgery;

The original gray-level range of the NCCT images and the corresponding real CTA images is normalized from [-1024, 3071] to [-1, 1] to accelerate the convergence of model training;

The normalized NCCT images and the corresponding real CTA images are taken as sample pairs, and the sample pairs are randomly divided into a training set, a validation set, and a test set in a ratio of 6:1:3 for model training, validation, and testing.
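By way of illustration, a minimal sketch of this preprocessing step (the helper names and the dataset size are illustrative, not from the patent):

```python
import numpy as np

HU_MIN, HU_MAX = -1024.0, 3071.0  # original gray-level range used in the patent

def normalize(img: np.ndarray) -> np.ndarray:
    """Map HU values from [-1024, 3071] to [-1, 1]."""
    img = np.clip(img, HU_MIN, HU_MAX)
    return 2.0 * (img - HU_MIN) / (HU_MAX - HU_MIN) - 1.0

def denormalize(img: np.ndarray) -> np.ndarray:
    """Inverse of normalize(): map [-1, 1] back to [-1024, 3071] (used in step 6)."""
    return (img + 1.0) / 2.0 * (HU_MAX - HU_MIN) + HU_MIN

# Random 6:1:3 split of sample-pair indices
num_pairs = 100  # illustrative dataset size
rng = np.random.default_rng(seed=0)
idx = rng.permutation(num_pairs)
n_train, n_val = int(0.6 * num_pairs), int(0.1 * num_pairs)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]
```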
Step 2: Construct an adversarial network model based on nonlinear combination theory, using convolutional networks to build the generator, the corrector, and the discriminator respectively:

Step 2.1: Construct the generator. The generator model framework of this embodiment is shown in FIG. 2; the generator structure includes, in sequence, an input layer, an encoder, a central residual module, a decoder, and an output layer. Further, the encoder includes 2 downsampling convolutional layers, the central residual module includes 9 residual blocks, and the decoder includes 2 upsampling convolutional layers.

The channel count of the input layer changes as 1->64; the channel counts of the 2 downsampling convolutional layers of the encoder change as 64->128 and 128->256; the channel count of each residual block in the central residual module is 256; the channel counts of the 2 upsampling convolutional layers of the decoder change as 256->128 and 128->64; and the channel count of the output layer changes as 64->1. The convolution kernels of the input and output layers of the generator are 7×7, with a stride of 1 and a zero padding of 3. The convolution kernels of the encoder and decoder are all 3×3, with a stride of 2 and a zero padding of 1. The convolution kernels of each residual block in the central residual module are all 3×3, with a stride and zero padding of 1. Except for the output layer, the input layer, the downsampling convolutional layers, the residual blocks, and the upsampling convolutional layers all use InstanceNorm2d normalization and the ReLU activation function; finally, the output layer performs a 2D convolution operation on the output of the upsampling convolutional layers and outputs a normalized synthetic CTA image through the tanh activation function.

The dimensions of the input layer and the output layer are both number of sample batches × number of image channels × image width × image height. In this embodiment, the batch size for one training pass is 1, the number of image channels input to the input layer and output by the output layer is 1, the image width is 512, and the image height is 512; the input of the input layer is the normalized NCCT image, and the output of the output layer is the normalized synthetic CTA image.

The encoder encodes the input normalized NCCT image into deep features; the central residual module performs multiple convolution operations on the encoded deep features to obtain deep features closer to the target image; and the decoder decodes the features output by the central residual module into the target image.
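A hedged PyTorch sketch of this generator with the stated kernel sizes, strides, and channel widths; the patent does not say how the upsampling convolutions are realized, so transposed convolutions are assumed here:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """3x3 residual block with InstanceNorm2d + ReLU, stride 1, padding 1."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Input layer -> 2 downsampling convs -> 9 residual blocks -> 2 upsampling convs -> output layer."""
    def __init__(self):
        super().__init__()
        def down(cin, cout):  # 3x3 kernel, stride 2, padding 1
            return nn.Sequential(nn.Conv2d(cin, cout, 3, 2, 1),
                                 nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))
        def up(cin, cout):    # upsampling realized as a transposed conv (assumption)
            return nn.Sequential(nn.ConvTranspose2d(cin, cout, 3, 2, 1, output_padding=1),
                                 nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))
        self.inp = nn.Sequential(nn.Conv2d(1, 64, 7, 1, 3),
                                 nn.InstanceNorm2d(64), nn.ReLU(inplace=True))
        self.enc = nn.Sequential(down(64, 128), down(128, 256))
        self.mid = nn.Sequential(*[ResBlock(256) for _ in range(9)])
        self.dec = nn.Sequential(up(256, 128), up(128, 64))
        self.out = nn.Sequential(nn.Conv2d(64, 1, 7, 1, 3), nn.Tanh())

    def forward(self, x):  # x: [1, 1, 512, 512] normalized NCCT image
        return self.out(self.dec(self.mid(self.enc(self.inp(x)))))
```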
Step 2.2: Construct the corrector. The corrector model framework of this embodiment is shown in FIG. 3; the corrector backbone network includes an encoder, a central residual module, a decoder, and an output end, where the output end includes one refinement module and an output layer. The input of the corrector is the normalized synthetic CTA image output by the generator together with the normalized real CTA image, and its output is the correction space matrix between the normalized synthetic CTA image and the normalized real CTA image.

The encoder includes multiple downsampling convolutional layers and the decoder includes multiple upsampling convolutional layers; the number of downsampling layers equals the number of upsampling layers, and the encoder downsampling layers are connected to the corresponding decoder upsampling layers by skip connections. In this embodiment, the encoder includes 7 downsampling convolutional layers, the central residual module includes 3 residual blocks, the decoder includes 7 upsampling convolutional layers, and the refinement module includes 1 residual block and one convolutional layer. A convolutional layer with a 1×1 kernel, a stride of 1, and no zero padding is placed before and after the central residual module, respectively. The convolution kernels of the downsampling convolutional layers, the residual blocks of the central residual module, the upsampling convolutional layers, and the residual block of the refinement module are all 3×3, with a stride of 1 and a zero padding of 1, and the activation function is LeakyReLU. The convolutional layer in the refinement module has a 1×1 kernel, a stride of 1, and a zero padding of 0. The output layer has a 3×3 kernel, a stride of 1, a zero padding of 1, and no activation function.

As shown in FIG. 3, because the encoder downsampling layers and the corresponding decoder upsampling layers are connected by skip connections, the input of each decoder upsampling convolutional layer comes from two sources: the output of the previous level and the output of the encoder downsampling layer corresponding to the current upsampling layer; hence the number of input channels of a decoder upsampling layer takes the form c1+c2, where c1 is the number of output channels of the previous level (for example, the c1 value of the first-level decoder upsampling layer is the number of output channels of the convolutional layer after the central residual module, the c1 value of the second-level decoder upsampling layer is the number of output channels of the first-level decoder upsampling layer, and so on), and c2 is the number of output channels of the encoder downsampling layer corresponding to the current decoder upsampling layer; the number of output channels of a decoder upsampling layer equals the number of output channels of its corresponding encoder downsampling layer.
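The c1+c2 input channel count is ordinary channel concatenation; a brief illustration with assumed shapes:

```python
import torch

prev_out = torch.randn(1, 64, 8, 8)   # c1: output of the previous decoder level
skip_out = torch.randn(1, 64, 8, 8)   # c2: output of the matching encoder layer
dec_in = torch.cat([prev_out, skip_out], dim=1)  # channels: c1 + c2 = 128
print(dec_in.shape)  # torch.Size([1, 128, 8, 8])
```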
Specifically, in this embodiment, the channel counts of the 7 downsampling convolutional layers of the encoder change as 2->32, 32->64, 64->64, 64->64, 64->64, 64->64, 64->64; since the input of the encoder is the normalized synthetic CTA image output by the generator and the normalized real CTA image, the number of input channels of the first-level downsampling convolutional layer is 2. The channel count of the convolutional layer before the central residual module changes as 64->128, the channel count of each residual block in the central residual module is 128, and the channel count of the convolutional layer after the central residual module changes as 128->64. The channel counts of the 7 upsampling convolutional layers of the decoder change as 64+64->64, 64+64->64, 64+64->64, 64+64->64, 64+64->64, 64+64->64, 64+32->32. The channel count of the refinement module is 32, and the channel count of the output layer changes as 32->2. Except for the refinement module and the output layer at the output end, the downsampling convolutional layers of the encoder, the residual blocks of the central residual module, and the upsampling convolutional layers of the decoder all use InstanceNorm2d normalization and the LeakyReLU activation function, and the output layer finally outputs the correction space matrix.

The dimension of the correction space matrix output in this embodiment is [number of sample batches, number of output-layer channels, image width, image height], i.e., [1, 2, 512, 512] in this embodiment.
Step 2.3: Construct the discriminator, which is used to judge whether a given image is a normalized real CTA image.

The discriminator model framework of this embodiment is shown in FIG. 4; the discriminator includes 4 downsampling convolutional layers and one 2D convolutional output layer. Each downsampling convolutional layer uses the LeakyReLU activation function and InstanceNorm2d normalization, and all convolution operations of the discriminator use 4×4 kernels. The first three downsampling convolutions have a stride of 2 and a zero padding of 1; the 4th downsampling layer and the output convolutional layer have a stride of 1 and a zero padding of 1. The input of the discriminator is a normalized real CTA image or a normalized synthetic CTA image; after the multi-layer convolution operations of the discriminator, a 62×62 single-channel image matrix block is output. The single-channel image matrix block is average-pooled by the avg_pool2d function (pooling layer) of torch to obtain the corresponding pooling value.
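A hedged PyTorch sketch consistent with these specifications; the intermediate channel widths (64/128/256/512) are assumptions, since the patent does not list them for the discriminator:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """4 downsampling convs + one 2D output conv; all kernels 4x4 (PatchGAN-style)."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(nn.Conv2d(cin, cout, 4, stride, 1),
                                 nn.InstanceNorm2d(cout),
                                 nn.LeakyReLU(0.2, inplace=True))
        self.net = nn.Sequential(
            block(1, 64, 2), block(64, 128, 2), block(128, 256, 2),  # first three: stride 2
            block(256, 512, 1),                                      # 4th downsampling: stride 1
            nn.Conv2d(512, 1, 4, 1, 1),                              # output conv: stride 1, padding 1
        )

    def forward(self, x):                 # x: [1, 1, 512, 512] real or synthetic CTA
        patch = self.net(x)               # -> [1, 1, 62, 62] single-channel matrix block
        return F.avg_pool2d(patch, patch.shape[2:])  # average pooling to a single value
```

With a 512×512 input, the three stride-2 layers give 256 -> 128 -> 64, and the two final stride-1 4×4 convolutions give 63 -> 62, matching the 62×62 matrix block described above.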
Step 3: For the constructed adversarial network model, design the joint focused learning loss function of the generator and the corrector, and the discriminator loss function. Through the focused design of the correction loss, the joint learning of the generator and the corrector is concentrated on the target region.

The joint focused learning loss function L_GR of the generator and the corrector is defined as:

L_GR = L_GAN(G,D) + Σ_{i=1}^{m} b_i·L_Corr^i + γ·L_Smooth

L_GAN(G,D) = E_x[(1 − D(G(x)))²]

where:

L_GAN(G,D) is the adversarial loss function; D is the discriminator; G is the generator; m is the number of focus scales, m = 2 in this embodiment; b_i is the weighting coefficient of the i-th correction loss function L_Corr^i, with b_1 = 20 and b_2 = 2 in this embodiment; different values of i correspond to different focus regions, and i takes the values 1 and 2 in this embodiment: for i = 1 the full-image loss between y and the corrected normalized synthetic CTA image is computed, and for i = 2 the region-filtered image loss between them is computed, the filtering region (i.e., the region-filtered image) being defined as the region where the normalized image HU value within the default window of the DICOM file of the real CTA image is greater than the threshold 0.65; γ is the weighting coefficient of L_Smooth, γ = 10 in this embodiment, and L_Smooth is the smoothing loss function; E(·) is the expectation operator, with the subscript denoting the input variable; x is the normalized NCCT image input to the generator G, and G(x) is the output of the generator, i.e., the normalized synthetic CTA image; y is the normalized real CTA image; ∘ corresponds to the grid_sample() resampling operation in the torch library; R is the corrector, and R(G(x), y) is the correction space matrix output by corrector training, which is used to correct the generator output G(x) to obtain the corrected normalized synthetic CTA image G(x) ∘ R(G(x), y); ∇ is the gradient operator; ‖·‖₁ is the L1 distance operator.
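A hedged sketch of how these terms could be assembled. The patent does not write out L_Corr and L_Smooth explicitly, so the L1 correction term, the gradient-based smoothness term, and the treatment of the correction space matrix as a displacement field added to an identity grid are all RegGAN-style assumptions; mask denotes the HU > 0.65 filtering-region mask derived from the real CTA image:

```python
import torch
import torch.nn.functional as F

def warp(g_x: torch.Tensor, field: torch.Tensor) -> torch.Tensor:
    """Resample G(x) with the correction space matrix R(G(x), y) via grid_sample.
    field: [N, 2, H, W]; grid_sample expects a [N, H, W, 2] sampling grid."""
    n, _, h, w = field.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).to(field)  # identity grid
    grid = base + field.permute(0, 2, 3, 1)                      # add predicted offsets (assumption)
    return F.grid_sample(g_x, grid, align_corners=True)

def joint_focused_loss(D, g_x, y, field, mask, b=(20.0, 2.0), gamma=10.0):
    """L_GR = L_GAN + b1*L_Corr^1 + b2*L_Corr^2 + gamma*L_Smooth (sketch)."""
    y_hat = warp(g_x, field)                         # corrected normalized synthetic CTA
    l_gan = ((1.0 - D(g_x)) ** 2).mean()             # E_x[(1 - D(G(x)))^2]
    l_corr1 = F.l1_loss(y_hat, y)                    # i=1: full-image L1 loss
    l_corr2 = F.l1_loss(y_hat * mask, y * mask)      # i=2: loss on the HU > 0.65 region
    dy = field[:, :, 1:, :] - field[:, :, :-1, :]    # spatial gradients of the field
    dx = field[:, :, :, 1:] - field[:, :, :, :-1]
    l_smooth = dy.abs().mean() + dx.abs().mean()     # smoothness of the correction matrix
    return l_gan + b[0] * l_corr1 + b[1] * l_corr2 + gamma * l_smooth
```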
The discriminator loss function L_Adv(G,D) is defined as:

min L_Adv(G,D) = E_y[(1 − D(y))²] + E_x[D(G(x))²],

where the symbols have the same meanings as in the loss function of the generator and the corrector.
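This discriminator objective translates directly into a least-squares GAN loss; a brief sketch (G(x) is detached so that only D is updated):

```python
def discriminator_loss(D, y, g_x):
    """min L_Adv(G,D) = E_y[(1 - D(y))^2] + E_x[D(G(x))^2] (sketch)."""
    return ((1.0 - D(y)) ** 2).mean() + (D(g_x.detach()) ** 2).mean()
```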
Step 4: Use the training set to train the constructed adversarial network model, and use the validation set to verify the intermediate training models. The specific steps are:

First, the discriminator parameters are held fixed, and the joint focused learning loss function L_GR, computed from the normalized synthetic CTA image, the corrected normalized synthetic CTA image, and the normalized real CTA image, is minimized to update the parameters of the generator and the corrector.

Second, the parameters of the generator and the corrector are held fixed; the normalized synthetic CTA image and the normalized real CTA image are respectively fed into the constructed discriminator to compute the minimized discriminator loss function L_Adv(G,D), and the resulting loss value is used to optimize and update the discriminator parameters.

Finally, the validation set data are used to verify and test the intermediate model after each round of training and updating, to evaluate the correctness and effectiveness of the iterative model updates.

The experimental platform of this embodiment is a Linux server with an NVIDIA GeForce RTX 3090Ti GPU and 64 GB of memory; the Python version is 3.8.

The model is built with PyTorch as the deep learning framework; the optimizer is Adam, the initial learning rates of the generator, the corrector, and the discriminator are all 0.0001 with no decay strategy, and the number of training iterations is epoch = 80.
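A hedged sketch of this alternating update, assuming the Generator and Discriminator sketches above, a hypothetical Corrector module, the loss helpers sketched in step 3, and a train_loader yielding (x, y, mask) batches:

```python
import itertools
import torch

G, R, D = Generator(), Corrector(), Discriminator()  # Corrector is a hypothetical module here
opt_gr = torch.optim.Adam(itertools.chain(G.parameters(), R.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

for epoch in range(80):
    for x, y, mask in train_loader:  # normalized NCCT, real CTA, HU > 0.65 focus mask
        # 1) D fixed: minimize L_GR to update G and R
        g_x = G(x)
        field = R(torch.cat([g_x, y], dim=1))  # corrector input: 2-channel concatenation
        loss_gr = joint_focused_loss(D, g_x, y, field, mask)
        opt_gr.zero_grad(); loss_gr.backward(); opt_gr.step()

        # 2) G and R fixed: minimize L_Adv to update D
        loss_d = discriminator_loss(D, y, G(x))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # verify the intermediate model on the validation set here
```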
Step 5: Model testing

Use the test set to test and evaluate the generator of the adversarial network model obtained in step 4: input the normalized NCCT images into that generator to obtain normalized synthetic CTA images, test and evaluate them against the normalized real CTA images, and select the model with the best test performance as the final model for use.

The performance test metrics include the mean absolute error (MAE) and the peak signal-to-noise ratio (PSNR) of the normalized synthetic CTA image, as well as the structural similarity (SSIM) between the normalized synthetic CTA image and the normalized real CTA image.
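A minimal sketch of these metrics; the use of scikit-image is an assumption (the patent does not name an evaluation library):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred: np.ndarray, ref: np.ndarray) -> dict:
    """MAE / PSNR / SSIM between a normalized synthetic CTA image and the real one.
    Both arrays lie in [-1, 1], so the data range is 2.0."""
    return {
        "MAE": float(np.mean(np.abs(pred - ref))),
        "PSNR": peak_signal_noise_ratio(ref, pred, data_range=2.0),
        "SSIM": structural_similarity(ref, pred, data_range=2.0),
    }
```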
Step 6: Model use

Load the generator obtained in step 5, take the normalized NCCT image to be processed as the input of the generator, and the output is the normalized synthetic CTA image.

The normalized [-1, 1] synthetic CTA image output by the generator is reconstructed back to the original gray-level range [-1024, 3071] by inverting the normalization, giving a synthetic image at the original gray levels.

The synthetic image in the original gray-level range is converted to binary format and assigned to PixelData in the DICOM header file; the other DICOM header information is kept consistent with the header file of the NCCT image, thereby obtaining the synthetic CTA image.
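A minimal sketch of this export step using pydicom; the library choice, the int16 cast, and the rescale handling are assumptions — the patent only specifies writing PixelData and reusing the NCCT header:

```python
import numpy as np
import pydicom

def write_synthetic_cta(ncct_path: str, synth_norm: np.ndarray, out_path: str) -> None:
    """Rebuild HU values from the normalized output and store them in a copy of the NCCT header."""
    ds = pydicom.dcmread(ncct_path)                   # reuse the NCCT DICOM header
    hu = denormalize(synth_norm)                      # [-1, 1] -> [-1024, 3071] (see the step 1 sketch)
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    stored = np.round((hu - intercept) / slope).astype(np.int16)
    ds.PixelData = stored.tobytes()                   # binary pixel payload
    ds.Rows, ds.Columns = stored.shape
    ds.save_as(out_path)
```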
A CT angiography intelligent imaging device based on focused learning includes a first module, a second module, a third module, a fourth module, a fifth module, and a sixth module, where steps 1 to 6 above are implemented by the first to sixth modules, respectively.

The present invention is not limited to the above embodiments; the above embodiments merely describe preferred embodiments of the present invention and do not limit its concept. The implementation schemes in the above embodiments can be further combined or replaced, and various changes and improvements made to the technical solutions of the present invention by those skilled in the art all fall within the protection scope of the present invention.

Claims (6)

  1. A CT angiography intelligent imaging method based on focused learning, characterized by comprising the following steps:
    Step 1: collecting NCCT images and corresponding real CTA images and normalizing them, taking the normalized NCCT images and the corresponding normalized real CTA images as sample pairs, and dividing the sample pairs into a training set, a validation set, and a test set;
    Step 2: constructing an adversarial network model, the adversarial network model including a generator, a corrector, and a discriminator;
    Step 3: constructing a joint focused learning loss function of the generator and the corrector, and constructing a discriminator loss function;
    Step 4: using the training set to train the adversarial network model, and using the validation set to verify the trained adversarial network model;
    Step 5: inputting the sample pairs of the test set into the generator to generate corresponding normalized synthetic CTA images, and testing and evaluating the obtained normalized synthetic CTA images to obtain the generator with the best test performance;
    Step 6: loading the generator obtained in step 5, taking the normalized NCCT image to be processed as the generator input, and outputting the normalized synthetic CTA image,
    wherein the joint focused learning loss function L_GR of the generator and the corrector in step 3 is defined as:
    L_GR = L_GAN(G,D) + Σ_{i=1}^{m} b_i·L_Corr^i + γ·L_Smooth
    L_GAN(G,D) = E_x[(1 − D(G(x)))²]
    where L_GAN(G,D) is the adversarial loss function, D is the discriminator, G is the generator, m is the number of focus scales, b_i is the weighting coefficient of the i-th correction loss function L_Corr^i, γ is the weighting coefficient of L_Smooth, and L_Smooth is the smoothing loss function; E(·) is the expectation operator, with the subscript denoting the input variable; x is the normalized NCCT image input to the generator G; ∘ corresponds to the resampling operation; R is the corrector; ∇ is the gradient operator; ‖·‖₁ is the L1 distance operator; R(G(x), y) is the correction space matrix output by corrector training; G(x) is the output of the generator; the corrected normalized synthetic CTA image is G(x) ∘ R(G(x), y); y is the normalized real CTA image; i takes the values 1 and 2: for i = 1 the full-image loss between y and the corrected normalized synthetic CTA image is computed, and for i = 2 the region-filtered image loss between them is computed,
    and the discriminator loss function L_Adv(G,D) in step 3 is defined as:
    min L_Adv(G,D) = E_y[(1 − D(y))²] + E_x[D(G(x))²].
  2. The CT angiography intelligent imaging method based on focused learning according to claim 1, characterized in that the generator includes an input layer, an encoder, a central residual module, a decoder, and an output layer; in the generator:
    the normalized NCCT image is input to the input layer,
    the encoder includes multiple downsampling convolutional layers,
    the central residual module includes multiple residual blocks,
    the decoder includes multiple upsampling convolutional layers,
    and except for the output layer, the input layer, the downsampling convolutional layers, the residual blocks, and the upsampling convolutional layers all use normalization and activation functions; the output layer performs a 2D convolution operation on the output of the upsampling convolutional layers and outputs a normalized synthetic CTA image through an activation function.
  3. The CT angiography intelligent imaging method based on focused learning according to claim 2, characterized in that the corrector includes an encoder, a central residual module, a decoder, and an output end, the output end including a refinement module and an output layer; in the corrector:
    the normalized synthetic CTA image output by the generator and the normalized real CTA image are input to the encoder,
    the encoder includes multiple downsampling convolutional layers,
    the central residual module includes multiple residual blocks,
    the decoder includes multiple upsampling convolutional layers,
    the refinement module includes a residual block and a convolutional layer,
    the downsampling convolutional layers of the encoder and the corresponding upsampling convolutional layers of the decoder are connected by skip connections,
    and except for the refinement module and the output layer at the output end, the downsampling convolutional layers of the encoder, the residual blocks of the central residual module, and the upsampling convolutional layers of the decoder all use normalization and activation functions; the output layer outputs the correction space matrix.
  4. The CT angiography intelligent imaging method based on focused learning according to claim 3, characterized in that the discriminator includes multiple downsampling convolutional layers and one 2D convolutional output layer; the input of the discriminator is a normalized real CTA image or a normalized synthetic CTA image; the discriminator outputs a single-channel image matrix block, and the single-channel image matrix block is average-pooled to obtain the corresponding pooling value.
  5. The CT angiography intelligent imaging method based on focused learning according to claim 1, characterized in that the training of the adversarial network model in step 4 specifically includes the following steps:
    first, the discriminator parameters are held fixed, and the joint focused learning loss function L_GR is minimized to update the parameters of the generator and the corrector;
    then, the parameters of the generator and the corrector are held fixed, and the discriminator loss function L_Adv(G,D) is minimized to optimize and update the discriminator parameters.
  6. The CT angiography intelligent imaging method based on focused learning according to claim 1, characterized in that the test performance in step 5 includes the mean absolute error (MAE) and the peak signal-to-noise ratio (PSNR) of the normalized synthetic CTA image, as well as the structural similarity (SSIM) between the normalized synthetic CTA image and the normalized real CTA image.
PCT/CN2023/109843 2022-09-26 2023-07-28 A CT angiography intelligent imaging method based on focused learning WO2024066711A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211178939.8A CN115512182B (zh) 2022-09-26 2022-09-26 A CT angiography intelligent imaging method based on focused learning
CN202211178939.8 2022-09-26

Publications (1)

Publication Number Publication Date
WO2024066711A1 true WO2024066711A1 (zh) 2024-04-04

Family

ID=84506223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/109843 WO2024066711A1 (zh) 2022-09-26 2023-07-28 A CT angiography intelligent imaging method based on focused learning

Country Status (2)

Country Link
CN (1) CN115512182B (zh)
WO (1) WO2024066711A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115512182B (zh) * 2022-09-26 2023-07-04 中国人民解放军总医院第一医学中心 一种基于聚焦学习的ct血管造影智能成像方法
CN116385329B (zh) * 2023-06-06 2023-08-29 之江实验室 基于特征融合的多层知识蒸馏医学影像生成方法和装置
CN117115064B (zh) * 2023-10-17 2024-02-02 南昌大学 一种基于多模态控制的图像合成方法
CN117745856B (zh) * 2023-12-18 2024-07-12 中国人民解放军总医院 基于平扫ct的cta图像生成方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020001217A1 (zh) * 2018-06-27 2020-01-02 东南大学 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法
CN112101523A (zh) * 2020-08-24 2020-12-18 复旦大学附属华山医院 基于深度学习的cbct图像跨模态预测cta图像的卒中风险筛查方法和系统
US20220148301A1 (en) * 2020-06-10 2022-05-12 West China Hospital Of Sichuan University An Auxiliary Diagnostic Model and an Image Processing Method for Detecting Acute Ischemic Stroke
CN114494482A (zh) * 2021-12-24 2022-05-13 中国人民解放军总医院第一医学中心 一种基于平扫ct生成ct血管成像的方法
CN115512182A (zh) * 2022-09-26 2022-12-23 中国人民解放军总医院第一医学中心 一种基于聚焦学习的ct血管造影智能成像方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460726B (zh) * 2018-03-26 2020-08-11 厦门大学 一种基于增强递归残差网络的磁共振图像超分辨重建方法
WO2020102546A1 (en) * 2018-11-15 2020-05-22 The Regents Of The University Of California System and method for transforming holographic microscopy images to microscopy images of various modalities
CN110503187B (zh) * 2019-07-26 2024-01-16 深圳万知达科技有限公司 一种用于功能核磁共振成像数据生成的生成对抗网络模型的实现方法
CN114066798B (zh) * 2020-07-29 2024-05-14 复旦大学 一种基于深度学习的脑肿瘤核磁共振影像数据合成方法
CN114049939A (zh) * 2021-11-25 2022-02-15 江苏科技大学 一种基于UNet-GAN网络的肺炎CT图像生成方法
CN114862982A (zh) * 2022-04-28 2022-08-05 中国兵器科学研究院宁波分院 一种基于生成对抗网络的混合域无监督有限角ct重建方法
CN114926382A (zh) * 2022-05-18 2022-08-19 深圳大学 用于融合图像的生成对抗网络、图像融合方法及终端设备


Also Published As

Publication number Publication date
CN115512182A (zh) 2022-12-23
CN115512182B (zh) 2023-07-04

Similar Documents

Publication Publication Date Title
WO2024066711A1 (zh) 一种基于聚焦学习的ct血管造影智能成像方法
CN112348936B (zh) 一种基于深度学习的低剂量锥束ct图像重建方法
CN109146988B (zh) 基于vaegan的非完全投影ct图像重建方法
CN112396672B (zh) 一种基于深度学习的稀疏角度锥束ct图像重建方法
CN111932461B (zh) 一种基于卷积神经网络的自学习图像超分辨率重建方法及系统
CN107958471B (zh) 基于欠采样数据的ct成像方法、装置、ct设备及存储介质
WO2011033890A1 (ja) 診断処理装置、診断処理システム、診断処理方法、診断処理プログラム及びコンピュータ読み取り可能な記録媒体、並びに、分類処理装置
CN109584164B (zh) 基于二维影像迁移学习的医学图像超分辨率三维重建方法
CN109102550A (zh) 基于卷积残差网络的全网络低剂量ct成像方法及装置
CN109360152A (zh) 基于稠密卷积神经网络的三维医学图像超分辨率重建方法
CN112837244B (zh) 一种基于渐进式生成对抗网络的低剂量ct图像降噪及去伪影方法
CN113160380B (zh) 三维磁共振影像超分辨重建方法、电子设备和存储介质
WO2024022485A1 (zh) 基于多尺度判别的计算机血管造影成像合成方法
CN110060315B (zh) 一种基于人工智能的图像运动伪影消除方法及系统
CN114677263B (zh) Ct图像与mri图像的跨模态转换方法和装置
CN114998458A (zh) 基于参考图像和数据修正的欠采样磁共振图像重建方法
Li et al. Dual-domain collaborative diffusion sampling for multi-source stationary computed tomography reconstruction
CN117475018A (zh) 一种ct运动伪影去除方法
WO2021189383A1 (zh) 生成高能ct图像模型的训练及生成方法、设备、存储介质
CN112529980B (zh) 一种基于极大极小化的多目标有限角ct图像重建方法
CN117726705B (zh) 一种同时低剂量ct重建与金属伪影校正的深度学习方法
CN112053292B (zh) 医学图像的处理方法、处理装置及计算机可读存储介质
Chen et al. “One-Shot” Reduction of Additive Artifacts in Medical Images
Zou et al. Conversion-based reconstruction: a discretized clinical convergence generative network for CT metal artifact reduction
CN115482167A (zh) 降低脑部核磁共振影像莱斯噪声的多域优化模型构建方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23869957

Country of ref document: EP

Kind code of ref document: A1