WO2022000183A1 - CT image noise reduction system and method - Google Patents

CT image noise reduction system and method Download PDF

Info

Publication number
WO2022000183A1
WO2022000183A1 PCT/CN2020/098910 CN2020098910W WO2022000183A1 WO 2022000183 A1 WO2022000183 A1 WO 2022000183A1 CN 2020098910 W CN2020098910 W CN 2020098910W WO 2022000183 A1 WO2022000183 A1 WO 2022000183A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
module
dose
noise reduction
generator
Prior art date
Application number
PCT/CN2020/098910
Other languages
English (en)
French (fr)
Inventor
郑海荣
李彦明
江洪伟
万丽雯
Original Assignee
深圳高性能医疗器械国家研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳高性能医疗器械国家研究院有限公司
Priority to PCT/CN2020/098910 priority Critical patent/WO2022000183A1/zh
Publication of WO2022000183A1 publication Critical patent/WO2022000183A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • the present application belongs to the technical field of medical CT imaging, and in particular relates to a CT image noise reduction system and method.
  • Computed Tomography (CT) is a non-invasive imaging detection method that obtains tomographic images of the patient's body through computers and X-rays. It has the advantages of short scanning time, low cost and a wide range of disease monitoring, making it suitable for early disease screening and routine physical examinations. However, a large amount of X-ray exposure produces a cumulative radiation-dose effect, which greatly increases the possibility of various diseases, thereby affecting the physiological functions of the human body, damaging human tissues and organs, and even endangering the life of the patient.
  • the rational application of low-dose CT imaging technology requires reducing the X-ray radiation dose to the patient as much as possible while meeting the clinical diagnostic requirements of CT images. Therefore, the research and development of CT imaging with higher image quality under low-dose conditions has important scientific significance and broad application prospects in the field of medical diagnosis.
  • image noise reduction (image denoising) is a technical term in image processing.
  • real digital images are often affected by interference from imaging equipment and external environmental noise during digitization and transmission; such images are called noisy images.
  • the process of reducing noise in digital images is called image noise reduction, or sometimes image denoising.
  • the present application provides a CT image noise reduction system and method.
  • the present application provides a CT image noise reduction system, the system includes a generative adversarial network, and the generative adversarial network is used to realize the mapping between the low-dose CT image and the normal-dose CT image and to determine the authenticity of the generated image;
  • the generative adversarial network includes an attention module and an adaptive moment estimation optimizer; the attention module is used to apply different weights to each channel of the feature map of the image, making full use of the high-dimensional and low-dimensional features of the image as well as local and non-local information;
  • the adaptive moment estimation optimizer is used to optimize the generative adversarial network.
  • in another embodiment provided by the present application, the attention module described in step 1 is embedded in the generative adversarial network, and the attention module includes a channel attention sub-module and a criss-cross self-attention sub-module;
  • the channel attention sub-module is used to assign different weights to different feature maps in the channel direction;
  • the criss-cross self-attention sub-module is used to improve the utilization of non-local information, and can acquire non-local features along the horizontal and vertical directions.
  • the channel attention sub-module applies different weights to each channel of the feature map of the image, and can then be used to fuse high-dimensional and low-dimensional features; the criss-cross self-attention sub-module obtains an attention map through learning, making full use of the local and non-local information of the image.
  • the generative adversarial network includes a first generator, a second generator, a first discriminator, and a second discriminator;
  • the first generator is used to complete the low-dose CT image noise reduction task
  • the second generator is used to complete the noise simulation process from normal dose CT to low dose CT;
  • the first discriminator is used to encourage the first generator to generate a normal dose CT image from the low dose CT image
  • the second discriminator is used to encourage the second generator to generate low-dose CT images from normal-dose CT images.
  • the first generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter;
  • the second generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter.
  • the feature extraction sub-module consists of 12 groups of 3×3 convolutions and LeakyReLU activation functions; the outputs of all convolution layers are merged along the channel direction at the end, and each channel is then autonomously weighted via the channel attention sub-module.
  • the first discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3;
  • the second discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3.
  • Another embodiment provided by the present application further includes a joint loss function module, and the joint loss function module is used to further improve the image quality.
  • the joint loss function module includes an adversarial loss sub-module, a cycle consistency loss sub-module and a structure restoration loss sub-module.
  • the present application also provides a CT image noise reduction method, the method comprising:
  • applying different weights to each channel of the feature map of the image includes performing a global average pooling operation on the input feature map to obtain a 1×1×C vector, then using 1×1 convolution operations to compress and restore the vector along the channel direction; after two 1×1 convolution operations followed by a sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output.
  • the CT image noise reduction system provided in this application is a computed tomography (CT) system for the medical and industrial fields.
  • the CT image noise reduction method provided by this application achieves CT image noise reduction based on a multi-attention cycle-consistency generative adversarial network; while improving the image peak signal-to-noise ratio and structural similarity, it also enhances image detail information, thereby obtaining CT images that better meet diagnostic needs.
  • the CT image noise reduction method provided by the present application solves the problems of poor CT imaging quality and abundant noise artifacts under low-dose conditions.
  • the CT image noise reduction system provided in this application is a multi-attention-based cycle-consistency generative adversarial network for improving the quality of low-dose CT imaging.
  • the attention mechanism can greatly improve the reuse of low-dimensional and high-dimensional information and the fusion of local and non-local information, thereby enhancing the performance of traditional convolution operations, and can largely eliminate the noise and artifacts of low-dose CT.
  • the CT image noise reduction method provided by the present application specially designs a joint loss function to improve the quality of the CT image, and further ensures that the generated CT image meets the requirements of medical diagnosis by combining multiple loss functions.
  • the CT image noise reduction system provided by this application, based on multiple attention mechanisms, strives to extract image features more effectively, starting from high-dimensional and low-dimensional features as well as local and non-local information; this is realized through two different attention mechanisms and greatly improves the detail expression of the generated CT images.
  • the general CT noise reduction method uses a single loss function, which cannot guarantee the quality of the generated image.
  • the CT image noise reduction system provided by the present application effectively guarantees the quality of the output image by combining multiple loss functions.
  • the CT image noise reduction system provided by the present application adds residual connection with mean filter, which effectively improves the convergence speed of the network and improves the training efficiency of the network.
  • FIG. 1 is a schematic structural diagram of the channel attention sub-module of the present application.
  • FIG. 2 is a schematic structural diagram of the criss-cross self-attention sub-module of the present application.
  • Fig. 3 is the first generator structure schematic diagram of the present application.
  • FIG. 4 is a schematic structural diagram of a feature extraction unit of the present application.
  • FIG. 5 is a schematic structural diagram of an image reconstruction unit of the present application.
  • FIG. 6 is a schematic structural diagram of the first discriminator of the present application.
  • FIG. 7 is a schematic diagram of a generative adversarial network of the present application.
  • FIG. 8 is a schematic diagram of the comparison of results of different methods of the present application.
  • the final output of the generator is the input image minus the convolution output of the generator's last layer, which yields the denoised image;
  • the discriminator part uses 3×3×3 convolution kernels, LeakyReLU activation functions and batch normalization, and finally outputs the prediction through a fully connected layer and a Sigmoid activation function.
  • the generator part of the network consists of 8 convolution operations and ReLU activation functions;
  • the discriminator consists of 6 convolution operations and ReLU activation functions. The generator and the discriminator are trained at the same time, and the discriminator can push the generator to generate images that better meet the requirements.
  • the present application provides a CT image noise reduction system, the system includes a generative adversarial network, and the generative adversarial network is used to realize the mapping between the low-dose CT image and the normal-dose CT image and to determine whether the generated image is real or fake;
  • the generative adversarial network includes an attention module and an adaptive moment estimation optimizer; the attention module is used to apply different weights to each channel of the feature map of the image, making full use of the high-dimensional and low-dimensional features of the image as well as local and non-local information;
  • the adaptive moment estimation optimizer is used to optimize the generative adversarial network.
  • Generative adversarial network is the overall structure of the network, including generator and discriminator.
  • the generator learns the mapping between low-dose CT images and normal CT images, and the discriminator learns to determine whether the input image is a real image.
  • the so-called adversarial relationship refers to the generator and the discriminator confronting each other.
  • the generator learns the feature distribution of the real data.
  • the discriminator discriminates between real data and the data generated by the generator. The generator tries to generate data that deceives the discriminator as much as possible, while the discriminator tries to identify the data generated by the generator, thereby forming an adversarial game. The two keep playing against each other, learn together, and gradually reach a Nash equilibrium.
  • eventually the data generated by the generator is realistic enough to pass for real, so that the discriminator cannot distinguish real from fake;
  • the attention module is a sub-module embedded in the generator and the discriminator, used to improve the performance of the generator and the discriminator;
  • the Adam optimizer is the gradient update method used for backpropagation; it ensures that network training proceeds normally and effectively improves the convergence speed of the network.
  • the attention module is embedded in the generative adversarial network, and the attention module includes a channel attention sub-module and a criss-cross self-attention sub-module;
  • the channel attention sub-module is used to assign different weights to different feature maps in the channel direction;
  • the criss-cross self-attention sub-module is used to improve the utilization of non-local information, and can acquire non-local features along the horizontal and vertical directions.
  • the attention module contains two different attention mechanisms: channel attention and criss-cross self-attention.
  • a traditional convolutional neural network obtains higher-dimensional information of the image by continuously stacking convolution operations, but it often lacks sufficient flexibility in using feature information from low-dimensional layers.
  • channel attention mainly assigns different weights to different feature maps in the channel direction, so that feature information from both low and high dimensions can be used more fully. The weights are not set manually but are learned by the network, which further increases the autonomy of the network.
  • Channel attention is shown in Figure 1.
  • the input of the channel attention sub-module is a feature map of size H×W×C.
  • to apply a different weight to each channel, a 1×1×C weight vector needs to be obtained, whose values are the weights of the different channels.
  • to obtain the weight vector, a global average pooling operation is first performed on the input feature map to obtain a 1×1×C vector, and 1×1 convolution operations then compress and restore the vector along the channel direction so as to better fuse information between channels;
  • after two 1×1 convolution operations followed by a sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output.
  • the channel attention operation can be expressed by the following formula: s = f(W_U δ(W_D G(z)))
  • z represents the input feature map of size H×W×C;
  • G represents the global average pooling operation;
  • W_D and W_U represent the two 1×1 convolution operations;
  • δ and f represent the ReLU and sigmoid activation functions, respectively.
  • a criss-cross self-attention sub-module that can acquire non-local features along the horizontal and vertical directions is designed. This module first calculates the correlations of the current pixel position along the horizontal and vertical directions, and the global correlation is then obtained by stacking the criss-cross self-attention sub-module twice, thereby effectively utilizing global non-local feature information.
  • assuming the current pixel x_i ∈ x is the pixel at the i-th position in the feature map, one vector consists of the pixel values of f(x) along the channel direction at the current pixel position, and another vector consists of the pixel values of g(x) along the horizontal and vertical directions at the current pixel position, then:
  • v′(x) is first expanded in dimension by a 1×1 convolution, the output attention map is computed by softmax, and the obtained attention map is multiplied element-wise with h(x):
  • the criss-cross self-attention sub-module described above calculates correlations only in the vertical and horizontal directions; when the sub-module is stacked twice, the global correlation can be computed indirectly, which greatly reduces the amount of computation compared with computing the global correlation directly.
  • the generative adversarial network includes a first generator, a second generator, a first discriminator and a second discriminator;
  • the first generator is used to complete the low-dose CT image noise reduction task
  • the second generator is used to complete the noise simulation process from normal dose CT to low dose CT;
  • the first discriminator is used to encourage the first generator to generate a normal dose CT image from the low dose CT image
  • the second discriminator is used to encourage the second generator to generate low-dose CT images from normal-dose CT images.
  • the first generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter;
  • the second generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter.
  • the generative adversarial network involved in this application is mainly composed of a generator and a discriminator.
  • the generator mainly realizes the mapping between low-dose CT images and normal-dose CT images, and the discriminator is used to judge the authenticity of the images generated by the generator; the two are trained synchronously and improve together, so that the images generated by the generator gradually become indistinguishable from real ones.
  • the generator is mainly composed of two parts: a feature extraction unit and an image reconstruction unit.
  • the input passes through the feature extraction network to extract features from the image, and the extracted features are then reconstructed by the image reconstruction network.
  • at the same time, the input is connected to the output through the residual connection unit.
  • the residual connection unit is intended to solve the problems of gradient vanishing and gradient explosion during training.
  • the mean filter can further improve the convergence speed of the network.
  • the feature extraction sub-module is composed of 12 groups of 3×3 convolutions and LeakyReLU activation functions, and the outputs of all convolution layers are merged along the channel direction at the end; the channel attention sub-module then autonomously assigns a weight to each channel, so that feature information from both high and low dimensions can be used more effectively.
  • the image reconstruction unit first extracts features through three branches of convolution operations of different sizes, among which the 1×1 convolutions can be used for channel compression, reducing the number of parameters and improving information fusion between channels.
  • a criss-cross self-attention sub-module is added at the end of each of the three branches to improve the utilization of non-local features.
  • the feature maps obtained by the three branches are finally merged along the channel direction, and the final output is obtained through three groups of convolution operations.
  • the first discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3;
  • the second discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3.
  • the number and stride of the convolution kernels are shown in the figure, where n represents the number of convolution kernels and s represents the stride; the network ends with two convolution layers, and the final output determines whether the input image is real.
  • a joint loss function module is also included, and the joint loss function module is used to further improve the image quality.
  • the joint loss function module includes an adversarial loss sub-module, a cycle consistency loss sub-module and a structure restoration loss sub-module.
  • the architecture mainly includes two pairs of generators (G_ab and G_ba) and discriminators (D_a and D_b);
  • the discriminator D_b encourages the generator G_ab to generate normal-dose CT images from low-dose CT images;
  • the discriminator D_a encourages the generator G_ba to generate low-dose CT images from normal-dose CT images;
  • the generator G_ab can complete the low-dose CT image noise reduction task;
  • the generator G_ba can complete the noise simulation process from normal-dose CT to low-dose CT.
  • in the test phase, the application mainly uses the trained generator G_ab to perform image noise reduction.
  • a single loss function has certain limitations in image generation. In order to improve the quality of the generated image, it is necessary to use a joint loss to further improve the image quality.
  • the joint loss function can be expressed as follows:
  • ⁇ 1 and ⁇ 2 are weight coefficients, respectively.
  • the whole network is based on the idea of generating adversarial networks, so the adversarial loss is one of its core loss functions, and its objective function is:
  • the least squares adversarial loss function is used:
  • cycle consistency generative adversarial network in addition to the adversarial loss, the cycle consistency is also added.
  • the use of the adversarial loss to train the function alone cannot guarantee that the generated image details can meet the requirements. Therefore, the cycle consistency loss is added to further constrain the output:
  • the adversarial loss and the cycle consistency loss cooperate with each other to constrain each other and ensure that the output result is as close to the real as possible, but it is difficult to ensure the peak signal-to-noise ratio and structural similarity of the output image.
  • the structural restoration loss is added. The loss can further improve the image peak signal-to-noise ratio and structural similarity.
  • Structural similarity measures the similarity of two images in terms of brightness, contrast and structure, and its calculation formula is:
  • the present application also provides a CT image noise reduction method, the method comprising:
  • applying different weights to each channel of the feature map of the image includes performing a global average pooling operation on the input feature map to obtain a 1×1×C vector, then using 1×1 convolution operations to compress and restore the vector along the channel direction; after two 1×1 convolution operations followed by the sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output.
  • the method of the present application can effectively improve the peak signal-to-noise ratio and structural similarity of the image, and at the same time, can restore the image detail information to a certain extent.
  • the method can be applied to other types of medical image noise reduction; besides noise reduction, the method can also be applied to the field of image super-resolution after appropriate changes; the attention mechanism can be regarded as a plug-and-play module that can be added to any traditional convolutional neural network workflow to improve the performance of the network.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A CT image noise reduction system and method, belonging to the technical field of medical CT imaging. Since reducing X-ray radiation during CT imaging causes a large amount of quantum noise and metal artifacts in the reconstructed image, a CT image noise reduction system is provided. The system includes a generative adversarial network, which is used to realize the mapping between low-dose CT images and normal-dose CT images and to determine the authenticity of the generated image; the generative adversarial network includes an attention module and an adaptive moment estimation optimizer; the attention module is used to apply different weights to each channel of the feature map of the image, making full use of the high-dimensional and low-dimensional features of the image as well as local and non-local information; the adaptive moment estimation optimizer is used to optimize the generative adversarial network. Image detail information is enhanced, so that CT images that better meet diagnostic needs are obtained.

Description

CT image noise reduction system and method
Technical Field
The present application belongs to the technical field of medical CT imaging, and in particular relates to a CT image noise reduction system and method.
Background Art
Computed Tomography (CT) is a non-invasive imaging detection method that obtains tomographic images of a patient's body through computers and X-rays. It has the advantages of short scanning time, low cost and a wide range of disease monitoring, and is suitable for early disease screening and routine physical examinations. However, a large amount of X-ray exposure produces a cumulative radiation-dose effect, which greatly increases the possibility of various diseases, affects the physiological functions of the human body, damages human tissues and organs, and may even endanger the patient's life. The rational application of low-dose CT imaging technology requires reducing the X-ray radiation dose to the patient as much as possible while meeting the clinical diagnostic requirements of CT images. Therefore, researching and developing CT imaging with higher image quality under low-dose conditions has important scientific significance and broad application prospects for the field of medical diagnosis.
Image noise reduction (Image Denoising) is a technical term in image processing. Real digital images are often affected by interference from imaging equipment and external environmental noise during digitization and transmission; such images are called noisy images. The process of reducing noise in digital images is called image noise reduction, or sometimes image denoising.
Reducing X-ray radiation during CT imaging causes a large amount of quantum noise and metal artifacts in the reconstructed image; normal CT imaging requires a large amount of data to be collected, resulting in slow image reconstruction; and the long scanning time leads to artifacts caused by the patient's unavoidable physiological motion.
Summary of the Invention
1. Technical problem to be solved
Existing CT image reconstruction techniques cannot reconstruct clinically acceptable CT images under sparsely sampled, low-dose conditions, and reconstructing sparse low-dose CT images with traditional algorithms introduces obvious image artifacts and interfering information that seriously affect subsequent clinical diagnosis. To address these problems, the present application provides a CT image noise reduction system and method.
2. Technical solution
To achieve the above purpose, the present application provides a CT image noise reduction system. The system includes a generative adversarial network, which is used to realize the mapping between low-dose CT images and normal-dose CT images and to determine the authenticity of the generated image;
the generative adversarial network includes an attention module and an adaptive moment estimation optimizer; the attention module is used to apply different weights to each channel of the feature map of the image, making full use of the high-dimensional and low-dimensional features of the image as well as local and non-local information;
the adaptive moment estimation optimizer is used to optimize the generative adversarial network.
In another embodiment provided by the present application, the attention module described in step 1 is embedded in the generative adversarial network, and the attention module includes a channel attention sub-module and a criss-cross self-attention sub-module;
the channel attention sub-module is used to assign different weights to different feature maps in the channel direction;
the criss-cross self-attention sub-module is used to improve the utilization of non-local information and can acquire non-local features along the horizontal and vertical directions.
The channel attention sub-module applies different weights to each channel of the feature map of the image and can then be used to fuse high-dimensional and low-dimensional features; the criss-cross self-attention sub-module obtains an attention map through learning, making full use of the local and non-local information of the image.
In another embodiment provided by the present application, the generative adversarial network includes a first generator, a second generator, a first discriminator and a second discriminator;
the first generator is used to complete the low-dose CT image noise reduction task;
the second generator is used to complete the noise simulation process from normal-dose CT to low-dose CT;
the first discriminator is used to encourage the first generator to generate normal-dose CT images from low-dose CT images;
the second discriminator is used to encourage the second generator to generate low-dose CT images from normal-dose CT images.
In another embodiment provided by the present application, the first generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter; the second generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter.
In another embodiment provided by the present application, the feature extraction sub-module is composed of 12 groups of 3×3 convolutions and LeakyReLU activation functions, and the outputs of all convolution layers are merged along the channel direction at the end; the channel attention sub-module then autonomously assigns a weight to each channel.
In another embodiment provided by the present application, the first discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3; the second discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3.
In another embodiment provided by the present application, the system further includes a joint loss function module, and the joint loss function module is used to further improve the image quality.
In another embodiment provided by the present application, the joint loss function module includes an adversarial loss sub-module, a cycle consistency loss sub-module and a structure restoration loss sub-module.
The present application also provides a CT image noise reduction method, the method comprising:
1) processing images with the CT image noise reduction system according to any one of claims 1 to 7;
2) extracting image patches from a low-dose CT image dataset as input, and extracting the corresponding image patches from a normal-dose CT image dataset as reference;
3) training the generative adversarial network until it gradually reaches a converged state.
In another embodiment provided by the present application, applying different weights to each channel of the feature map of the image includes performing a global average pooling operation on the input feature map to obtain a 1×1×C vector, then using 1×1 convolution operations to compress and restore the vector along the channel direction; after two 1×1 convolution operations followed by a sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output.
3. Beneficial effects
Compared with the prior art, the beneficial effects of the CT image noise reduction system and method provided by the present application are as follows:
The CT image noise reduction system provided by the present application is a computed tomography (CT) system for the medical and industrial fields.
The CT image noise reduction method provided by the present application achieves CT image noise reduction based on a multi-attention cycle-consistency generative adversarial network; while improving the image peak signal-to-noise ratio and structural similarity, it also enhances image detail information, thereby obtaining CT images that better meet diagnostic needs.
The CT image noise reduction method provided by the present application solves the problems of poor CT imaging quality and abundant noise artifacts under low-dose conditions.
The CT image noise reduction system provided by the present application is a multi-attention-based cycle-consistency generative adversarial network for improving the quality of low-dose CT imaging. The attention mechanism can greatly improve the reuse of low-dimensional and high-dimensional information and the fusion of local and non-local information, thereby enhancing the performance of traditional convolution operations and largely eliminating the noise and artifacts of low-dose CT.
The CT image noise reduction method provided by the present application specially designs a joint loss function to improve CT image quality, and further ensures that the generated CT images meet the requirements of medical diagnosis by combining multiple loss functions.
The CT image noise reduction system provided by the present application is based on multiple attention mechanisms and strives to extract image features more effectively, starting from high-dimensional and low-dimensional features as well as local and non-local information; this is realized through two different attention mechanisms and greatly improves the detail expression of the generated CT images.
General CT noise reduction methods use a single loss function and cannot guarantee the quality of the generated image; the CT image noise reduction system provided by the present application effectively guarantees the quality of the output image by combining multiple loss functions.
The CT image noise reduction system provided by the present application adds a residual connection with a mean filter, which effectively improves the convergence speed of the network and the training efficiency of the network.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the channel attention sub-module of the present application;
FIG. 2 is a schematic structural diagram of the criss-cross self-attention sub-module of the present application;
FIG. 3 is a schematic structural diagram of the first generator of the present application;
FIG. 4 is a schematic structural diagram of the feature extraction unit of the present application;
FIG. 5 is a schematic structural diagram of the image reconstruction unit of the present application;
FIG. 6 is a schematic structural diagram of the first discriminator of the present application;
FIG. 7 is a schematic diagram of the generative adversarial network of the present application;
FIG. 8 is a schematic diagram comparing the results of different methods of the present application.
Detailed Description of Embodiments
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. From these detailed descriptions, those skilled in the art can clearly understand and implement the present application. Without departing from the principles of the present application, features in different embodiments may be combined to obtain new embodiments, or certain features in some embodiments may be replaced to obtain other preferred embodiments.
Jelmer M. Wolterink et al. published the article "Generative Adversarial Networks for Noise Reduction in Low-Dose CT" in IEEE Transactions on Medical Imaging in 2017, successfully applying generative adversarial networks (GAN) to low-dose CT imaging. The generator uses 3×3×3 convolution kernels, with the number of kernels gradually increasing from 32 to 64 and finally to 128; pooling operations are removed, and LeakyReLU activation functions are used after all convolution layers to improve training stability. In addition, to ensure that the generator learns the noise component of the low-dose CT image, the final output of the generator is the input image minus the convolution output of the generator's last layer, which yields the denoised image. The discriminator uses 3×3×3 convolution kernels, LeakyReLU activation functions and batch normalization, and finally outputs the prediction through a fully connected layer and a Sigmoid activation function.
Yang et al. published the article "Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss" in IEEE Transactions on Medical Imaging in 2018, adding a perceptual loss on top of the generative adversarial network so that the generated images contain more detail and have better visual quality. The generator of this network consists of 8 convolution operations and ReLU activation functions, and the discriminator consists of 6 convolution operations and ReLU activation functions. The generator and the discriminator are trained simultaneously, and the discriminator can push the generator to generate images that better meet the requirements.
Referring to FIGS. 1 to 8, the present application provides a CT image noise reduction system. The system includes a generative adversarial network, which is used to realize the mapping between low-dose CT images and normal-dose CT images and to determine the authenticity of the generated image;
the generative adversarial network includes an attention module and an adaptive moment estimation optimizer; the attention module is used to apply different weights to each channel of the feature map of the image, making full use of the high-dimensional and low-dimensional features of the image as well as local and non-local information;
the adaptive moment estimation optimizer is used to optimize the generative adversarial network.
The generative adversarial network is the overall structure of the network and contains a generator and a discriminator. The generator learns the mapping between low-dose CT images and normal CT images, and the discriminator learns to determine whether the input image is a real image. The so-called adversarial relationship refers to the generator and the discriminator confronting each other: the generator learns the feature distribution of the real data, and the discriminator distinguishes real data from data generated by the generator. The generator tries to generate data that deceives the discriminator as much as possible, while the discriminator tries to identify the generated data, thereby forming an adversarial game. The two keep playing against each other during generation and discrimination, learn together, and gradually reach a Nash equilibrium; eventually the generated data are realistic enough that the discriminator cannot distinguish real from fake. The attention module is a sub-module embedded in the generator and the discriminator and is used to improve their performance. The Adam optimizer is the gradient update method used for backpropagation; it ensures that network training proceeds normally and effectively improves the convergence speed of the network.
Further, the attention module described in step 1 is embedded in the generative adversarial network, and the attention module includes a channel attention sub-module and a criss-cross self-attention sub-module;
the channel attention sub-module is used to assign different weights to different feature maps in the channel direction;
the criss-cross self-attention sub-module is used to improve the utilization of non-local information and can acquire non-local features along the horizontal and vertical directions.
The attention module contains two different attention mechanisms: channel attention and criss-cross self-attention. A traditional convolutional neural network obtains higher-dimensional information of the image by continuously stacking convolution operations, but often lacks sufficient flexibility in using feature information from low-dimensional layers. Channel attention mainly assigns different weights to different feature maps in the channel direction, so that feature information from both low and high dimensions can be used more fully; the weights are not set manually but are learned by the network, which further increases the autonomy of the network. Channel attention is shown in FIG. 1.
As shown in FIG. 1, the input of the channel attention sub-module is a feature map of size H×W×C. To apply a different weight to each channel, a 1×1×C weight vector needs to be obtained, whose values are the weights of the different channels. To obtain the weight vector, a global average pooling operation is first performed on the input feature map to obtain a 1×1×C vector, and 1×1 convolution operations then compress and restore the vector along the channel direction so as to better fuse information between channels. After two 1×1 convolution operations followed by a sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output. The channel attention operation can be expressed by the following formula:
s = f(W_U δ(W_D G(z)))     (1)
where z represents the input feature map of size H×W×C, G represents the global average pooling operation, W_D and W_U represent the two 1×1 convolution operations, and δ and f represent the ReLU and sigmoid activation functions, respectively.
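As an illustrative sketch only (not part of the published application), the channel attention of formula (1) can be written in PyTorch roughly as follows; the channel reduction ratio of the compression convolution is an assumption, since the application does not specify it:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention of formula (1): global average pooling, two 1x1 convolutions,
    a sigmoid gate, then channel-wise rescaling of the input feature map."""
    def __init__(self, channels: int, reduction: int = 4):  # reduction ratio is assumed
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                        # G: H x W x C -> 1 x 1 x C
        self.down = nn.Conv2d(channels, channels // reduction, 1)  # W_D (compress)
        self.up = nn.Conv2d(channels // reduction, channels, 1)    # W_U (restore)
        self.relu = nn.ReLU(inplace=True)                          # delta

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        s = torch.sigmoid(self.up(self.relu(self.down(self.pool(z)))))  # f(W_U d(W_D G(z)))
        return z * s                                               # weight each channel of the input
```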
In addition, traditional convolution operations perform cross-correlation calculations only in local regions, which greatly limits the use of non-local information; fusing the non-local features of the image helps to improve the quality of the final generated image. To improve the utilization of non-local information, a criss-cross self-attention sub-module that can acquire non-local features along the horizontal and vertical directions is designed. This module first calculates the correlations of the current pixel position along the horizontal and vertical directions, and the global correlation is then obtained by stacking the criss-cross self-attention sub-module twice, thereby effectively utilizing global non-local feature information.
As shown in the figure, assume that the input of this module is a feature map of size C×H×W, and three 1×1 convolutions respectively produce f(x) = W_f·x, g(x) = W_g·x and h(x) = W_h·x ∈ R^(C×H×W), where the number of channels of f(x) and g(x) is lower than that of h(x); the channels are compressed to promote information fusion between channels. Assume that the current pixel x_i ∈ x is the pixel at the i-th position of the feature map, that one vector consists of the pixel values of f(x) along the channel direction at the current pixel position, and that another vector consists of the pixel values of g(x) along the horizontal and vertical directions at the current pixel position; then:
[formula (2), given as an image in the original publication, defines the correlation map v′(x) from these two vectors]
v′(x) is then first expanded in dimension by a 1×1 convolution, the output attention map is computed by softmax, and the obtained attention map is multiplied element-wise with h(x):
[formula (3), given as an image in the original publication, defines the attention output o(x)]
Finally, the output is multiplied by a learnable parameter γ and added to the input to obtain the final output:
o_final(x) = γ·o(x) + x      (4)
The criss-cross self-attention sub-module described above calculates correlations only in the vertical and horizontal directions; when the sub-module is stacked twice, the global correlation can be computed indirectly, which greatly reduces the amount of computation compared with computing the global correlation directly.
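Continuing the illustrative sketches, a simplified criss-cross self-attention block along the lines described above is shown below. It computes horizontal and vertical attention separately and sums them, rather than taking a joint softmax over the whole cross as formulas (2) and (3) do, and the channel reduction ratio is again an assumption:

```python
class CrissCrossAttention(nn.Module):
    """Simplified criss-cross self-attention: attention along each row and each column
    is computed separately, summed, scaled by a learnable gamma, and added to the input."""
    def __init__(self, in_ch: int, reduction: int = 8):        # reduction ratio is assumed
        super().__init__()
        self.query = nn.Conv2d(in_ch, in_ch // reduction, 1)   # f(x) = W_f x
        self.key = nn.Conv2d(in_ch, in_ch // reduction, 1)     # g(x) = W_g x
        self.value = nn.Conv2d(in_ch, in_ch, 1)                # h(x) = W_h x
        self.gamma = nn.Parameter(torch.zeros(1))               # learnable scale of formula (4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)
        # vertical direction: every column is treated as a sequence of length h
        q_v = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        k_v = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        v_v = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        att_v = torch.softmax(q_v @ k_v.transpose(1, 2), dim=-1)
        out_v = (att_v @ v_v).reshape(b, w, h, c).permute(0, 3, 2, 1)
        # horizontal direction: every row is treated as a sequence of length w
        q_h = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        k_h = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        v_h = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        att_h = torch.softmax(q_h @ k_h.transpose(1, 2), dim=-1)
        out_h = (att_h @ v_h).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.gamma * (out_v + out_h) + x
```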
Further, the generative adversarial network includes a first generator, a second generator, a first discriminator and a second discriminator;
the first generator is used to complete the low-dose CT image noise reduction task;
the second generator is used to complete the noise simulation process from normal-dose CT to low-dose CT;
the first discriminator is used to encourage the first generator to generate normal-dose CT images from low-dose CT images;
the second discriminator is used to encourage the second generator to generate low-dose CT images from normal-dose CT images.
Further, the first generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter; the second generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter.
The generative adversarial network involved in the present application mainly consists of a generator and a discriminator. The generator mainly realizes the mapping between low-dose CT images and normal-dose CT images, and the discriminator is used to judge whether the images generated by the generator are real or fake; the two are trained synchronously and improve together, so that the images generated by the generator gradually become indistinguishable from real ones. The generator mainly consists of two parts: a feature extraction unit and an image reconstruction unit. The input passes through the feature extraction network to extract features from the image, and the extracted features are then reconstructed by the image reconstruction network; at the same time, the input is connected to the output through a residual connection unit with a mean filter. The residual connection unit is intended to solve the problems of gradient vanishing and gradient explosion during training while promoting information transfer within the network, and the mean filter can further improve the convergence speed of the network.
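For illustration only, the generator described above can be organized roughly as follows; the 3×3 size of the mean (box) filter is an assumption, and FeatureExtraction and ReconstructionUnit stand for the units sketched after the next two paragraphs:

```python
import torch.nn.functional as F

class Generator(nn.Module):
    """Generator skeleton: feature extraction, image reconstruction, and a residual
    connection in which the input is smoothed by a mean filter before being added
    to the reconstructed output."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.extract = FeatureExtraction(in_ch)                                   # sketched below
        self.reconstruct = ReconstructionUnit(self.extract.out_channels, in_ch)   # sketched below

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.reconstruct(self.extract(x))
        residual = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)  # mean-filtered residual (size assumed)
        return out + residual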
Further, the feature extraction sub-module is composed of 12 groups of 3×3 convolutions and LeakyReLU activation functions, and the outputs of all convolution layers are merged along the channel direction at the end; the channel attention sub-module then autonomously assigns a weight to each channel, so that feature information from both high and low dimensions can be used more effectively.
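A minimal sketch of this feature extraction unit, assuming 32 filters per 3×3 convolution group (the per-layer width is not stated in the application) and reusing the ChannelAttention sketch above:

```python
class FeatureExtraction(nn.Module):
    """12 groups of 3x3 convolution + LeakyReLU; all intermediate outputs are
    concatenated along the channel direction and re-weighted by channel attention."""
    def __init__(self, in_ch: int = 1, width: int = 32, depth: int = 12):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(depth):
            blocks.append(nn.Sequential(nn.Conv2d(ch, width, 3, padding=1),
                                        nn.LeakyReLU(0.2, inplace=True)))
            ch = width
        self.blocks = nn.ModuleList(blocks)
        self.out_channels = width * depth
        self.attention = ChannelAttention(self.out_channels)   # from the earlier sketch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)                       # keep every layer's output
        merged = torch.cat(feats, dim=1)          # merge along the channel direction
        return self.attention(merged)             # learn a weight for each channel
```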
The image reconstruction unit first extracts features through three branches of convolution operations of different sizes, among which the 1×1 convolutions can be used for channel compression, reducing the number of parameters and improving information fusion between channels. A criss-cross self-attention sub-module is added at the end of each of the three branches to improve the utilization of non-local features. The feature maps obtained by the three branches are finally merged along the channel direction, and the final output is obtained through three groups of convolution operations.
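A minimal sketch of the reconstruction unit under the same caveats; the 1×1/3×3/5×5 branch kernel sizes and the branch width are assumptions, since the application only states that the three branches use convolutions of different sizes:

```python
class ReconstructionUnit(nn.Module):
    """Three parallel branches with different kernel sizes, each ending in criss-cross
    self-attention; the branch outputs are concatenated and fused by three convolutions."""
    def __init__(self, in_ch: int, out_ch: int = 1, branch_ch: int = 64):
        super().__init__()
        def branch(k):
            return nn.Sequential(nn.Conv2d(in_ch, branch_ch, k, padding=k // 2),
                                 nn.LeakyReLU(0.2, inplace=True),
                                 CrissCrossAttention(branch_ch))   # from the earlier sketch
        self.branches = nn.ModuleList([branch(k) for k in (1, 3, 5)])  # assumed kernel sizes
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * branch_ch, branch_ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(branch_ch, branch_ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(branch_ch, out_ch, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```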
Further, the first discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3; the second discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3. The number and stride of the convolution kernels are shown in the figure, where n represents the number of convolution kernels and s represents the stride; the network ends with two convolution layers, and the final output determines whether the input image is real.
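A minimal discriminator sketch; the filter counts and strides below are assumptions, since the text only fixes six 3×3 convolution groups with LeakyReLU followed by two final convolution layers (the actual values are given in FIG. 6):

```python
class Discriminator(nn.Module):
    """Six 3x3 convolution + LeakyReLU groups followed by two convolution layers
    that map to a single real/fake score map."""
    def __init__(self, in_ch: int = 1, base: int = 64):
        super().__init__()
        chans = [in_ch, base, base, 2 * base, 2 * base, 4 * base, 4 * base]  # assumed widths
        layers = []
        for i in range(6):
            stride = 2 if i % 2 else 1                    # assumed alternating strides
            layers += [nn.Conv2d(chans[i], chans[i + 1], 3, stride=stride, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers += [nn.Conv2d(chans[-1], 4 * base, 3, padding=1),   # the two final conv layers
                   nn.LeakyReLU(0.2, inplace=True),
                   nn.Conv2d(4 * base, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```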
Further, the system also includes a joint loss function module, and the joint loss function module is used to further improve the image quality.
Further, the joint loss function module includes an adversarial loss sub-module, a cycle consistency loss sub-module and a structure restoration loss sub-module.
Before designing the joint loss function, the overall architecture of the cycle-consistency generative adversarial network is first described. As shown in the figure, the architecture mainly contains two pairs of generators (G_ab and G_ba) and discriminators (D_a and D_b). The discriminator D_b encourages the generator G_ab to generate normal-dose CT images from low-dose CT images, and the discriminator D_a encourages the generator G_ba to generate low-dose CT images from normal-dose CT images. The generator G_ab can complete the low-dose CT image noise reduction task, and the generator G_ba can complete the noise simulation process from normal-dose CT to low-dose CT. In the test phase, the present application mainly uses the trained generator G_ab to perform image noise reduction.
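As an illustrative sketch only, one cycle of this architecture (both mappings and their reconstructions) might be wired up as follows, reusing the module sketches above; the variable names are not taken from the application:

```python
# Two generators and two discriminators of the cycle-consistency architecture.
G_ab, G_ba = Generator(), Generator()          # low-dose -> normal-dose, and back
D_a, D_b = Discriminator(), Discriminator()    # judge low-dose / normal-dose realism

def forward_cycle(low_dose: torch.Tensor, normal_dose: torch.Tensor):
    """One forward pass through both cycles, returning everything the losses need."""
    fake_normal = G_ab(low_dose)               # denoised image
    rec_low = G_ba(fake_normal)                # cycle back to the low-dose domain
    fake_low = G_ba(normal_dose)               # simulated low-dose image
    rec_normal = G_ab(fake_low)                # cycle back to the normal-dose domain
    return fake_normal, rec_low, fake_low, rec_normal
```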
A single loss function has certain limitations for image generation; to improve the quality of the generated image, it is necessary to use a joint loss to further improve image quality. The joint loss function can be expressed as follows:
[joint loss formula, given as an image in the original publication, combining the adversarial loss, the cycle consistency loss and the structure restoration loss]
where λ_1 and λ_2 are weight coefficients.
First, the whole network is based on the idea of generative adversarial networks, so the adversarial loss is one of its core loss functions, and its objective function is:
[adversarial objective, given as an image in the original publication]
To alleviate the gradient vanishing and mode collapse problems of generative adversarial networks during training, the least-squares adversarial loss function is used:
[least-squares discriminator and generator losses, given as images in the original publication]
where E denotes the expectation, α and β denote the input and target data, P_* denotes the data distribution, and a, b and c denote manually set hyperparameters, with a = 0, b = 1 and c = 1.
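As an illustrative sketch only, a least-squares adversarial loss with a = 0, b = 1 and c = 1 can be written as follows (the 0.5 scaling factor follows the usual least-squares GAN convention and is an assumption here):

```python
def d_loss_lsgan(d_real: torch.Tensor, d_fake: torch.Tensor,
                 a: float = 0.0, b: float = 1.0) -> torch.Tensor:
    """Discriminator loss: push real outputs towards b and fake outputs towards a."""
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def g_loss_lsgan(d_fake: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Generator loss: push the discriminator's output on generated images towards c."""
    return 0.5 * ((d_fake - c) ** 2).mean()
```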
In the cycle-consistency generative adversarial network, a cycle consistency loss is added in addition to the adversarial loss. Training with the adversarial loss alone cannot guarantee that the details of the generated image meet the requirements, so the cycle consistency loss is added to further constrain the output:
[cycle consistency loss formula, given as an image in the original publication]
where ||·||_1 denotes the L1 norm.
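A minimal sketch of an L1 cycle consistency loss over both cycles, assuming the standard cycle-consistency form since the formula itself is only given as an image:

```python
def cycle_consistency_loss(low_dose, normal_dose, rec_low, rec_normal) -> torch.Tensor:
    """L1 distance between each input and its reconstruction after a full cycle."""
    return (rec_low - low_dose).abs().mean() + (rec_normal - normal_dose).abs().mean()
```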
The adversarial loss and the cycle consistency loss cooperate to constrain each other and ensure that the output is as close to real as possible, but it is difficult to guarantee the peak signal-to-noise ratio and structural similarity of the output image. To further improve image quality, a structure restoration loss is added; this loss can further improve the image peak signal-to-noise ratio and structural similarity. First, past experience shows that although the L2 loss function can improve the peak signal-to-noise ratio of the image to a certain extent, it tends to smooth the image and lose details, so the L1 loss function is adopted:
[L1 loss formula, given as an image in the original publication]
Structural similarity measures the similarity of two images in terms of brightness, contrast and structure, and its calculation formula is:
[SSIM formula, given as an image in the original publication]
where μ and σ denote the mean and standard deviation of the image, respectively, and C_1 = (k_1·L)^2 and C_2 = (k_2·L)^2 are two small constant terms that prevent the denominator from being zero, where L denotes the maximum pixel value of the image. The closer the structural similarity is to 1, the more similar the two images are; since gradient descent is usually used in network training, the following loss function is designed:
[SSIM-based loss formula, given as an image in the original publication]
Combining L_1 and L_SSIM gives:
L_str = μ·L_1 + (1 − μ)·L_SSIM     (15)
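As an illustrative sketch only, a structure restoration loss combining L1 with an SSIM term can be written as follows; a single global SSIM over the whole patch (rather than the usual windowed SSIM), the constants k_1 = 0.01, k_2 = 0.03, L = 1, and L_SSIM taken as 1 − SSIM are simplifying assumptions:

```python
def ssim_global(x: torch.Tensor, y: torch.Tensor,
                k1: float = 0.01, k2: float = 0.03, L: float = 1.0) -> torch.Tensor:
    """Simplified SSIM computed over the whole tensor instead of local windows."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def structure_restoration_loss(pred: torch.Tensor, target: torch.Tensor,
                               mu: float = 0.5) -> torch.Tensor:
    """L_str = mu * L_1 + (1 - mu) * L_SSIM, per formula (15)."""
    l1 = (pred - target).abs().mean()
    l_ssim = 1.0 - ssim_global(pred, target)
    return mu * l1 + (1.0 - mu) * l_ssim
```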
The present application also provides a CT image noise reduction method, the method comprising:
1) processing images with the CT image noise reduction system according to any one of claims 1 to 7;
2) extracting image patches from a low-dose CT image dataset as input, and extracting the corresponding image patches from a normal-dose CT image dataset as reference;
3) training the generative adversarial network until it gradually reaches a converged state; an illustrative training-step sketch is given below.
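As an illustrative sketch only, one training step tying the pieces above together with Adam (the adaptive moment estimation optimizer) might look as follows; the learning rate, the loss weights λ_1 and λ_2, and the way the structure restoration loss is applied to the two directions are all assumptions, since the application does not specify them:

```python
opt_g = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(list(D_a.parameters()) + list(D_b.parameters()), lr=1e-4)
lam1, lam2 = 10.0, 1.0                         # assumed loss weights

def train_step(low_dose: torch.Tensor, normal_dose: torch.Tensor):
    # update the generators with the joint loss
    fake_normal, rec_low, fake_low, rec_normal = forward_cycle(low_dose, normal_dose)
    loss_g = (g_loss_lsgan(D_b(fake_normal)) + g_loss_lsgan(D_a(fake_low))
              + lam1 * cycle_consistency_loss(low_dose, normal_dose, rec_low, rec_normal)
              + lam2 * (structure_restoration_loss(fake_normal, normal_dose)
                        + structure_restoration_loss(fake_low, low_dose)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # update the discriminators on real patches and detached generated patches
    loss_d = (d_loss_lsgan(D_b(normal_dose), D_b(fake_normal.detach()))
              + d_loss_lsgan(D_a(low_dose), D_a(fake_low.detach())))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```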
Further, applying different weights to each channel of the feature map of the image includes performing a global average pooling operation on the input feature map to obtain a 1×1×C vector, then using 1×1 convolution operations to compress and restore the vector along the channel direction; after two 1×1 convolution operations followed by a sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output.
As can be seen from FIG. 8, the method of the present application can effectively improve the peak signal-to-noise ratio and structural similarity of the image, and at the same time can restore image detail information to a certain extent.
In addition to CT image noise reduction, the method can be applied to other types of medical image noise reduction; besides noise reduction, the method can also be applied to the field of image super-resolution after appropriate changes; the attention mechanism can be regarded as a plug-and-play module that can be added to any traditional convolutional neural network workflow to improve the performance of the network.
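As an illustrative sketch only, the plug-and-play use of the attention modules could look like the following, where the backbone stands for any existing convolutional network and its output channel count is a parameter the caller must supply:

```python
class AttentionAugmented(nn.Module):
    """Wrap an existing convolutional backbone with the attention sketches above."""
    def __init__(self, backbone: nn.Module, channels: int):
        super().__init__()
        self.backbone = backbone
        self.channel_att = ChannelAttention(channels)
        self.cross_att = CrissCrossAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                  # features from the original network
        return self.cross_att(self.channel_att(feats))
```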
Although the present application has been described above with reference to specific embodiments, those skilled in the art should understand that many modifications may be made to the configurations and details disclosed herein within the principles and scope of the present application. The scope of protection of the present application is determined by the appended claims, which are intended to cover all modifications falling within the literal meaning or range of equivalents of the technical features of the claims.

Claims (10)

  1. A CT image noise reduction system, characterized in that: the system includes a generative adversarial network, and the generative adversarial network is used to realize the mapping between low-dose CT images and normal-dose CT images and to determine the authenticity of the generated image;
    the generative adversarial network includes an attention module and an adaptive moment estimation optimizer; the attention module is used to apply different weights to each channel of the feature map of the image, making full use of the high-dimensional and low-dimensional features of the image as well as local and non-local information;
    the adaptive moment estimation optimizer is used to optimize the generative adversarial network.
  2. The CT image noise reduction system according to claim 1, characterized in that: the attention module is embedded in the generative adversarial network, and the attention module includes a channel attention sub-module and a criss-cross self-attention sub-module;
    the channel attention sub-module is used to assign different weights to different feature maps in the channel direction;
    the criss-cross self-attention sub-module is used to improve the utilization of non-local information and can acquire non-local features along the horizontal and vertical directions.
  3. The CT image noise reduction system according to claim 2, characterized in that: the generative adversarial network includes a first generator, a second generator, a first discriminator and a second discriminator;
    the first generator is used to complete the low-dose CT image noise reduction task;
    the second generator is used to complete the noise simulation process from normal-dose CT to low-dose CT;
    the first discriminator is used to encourage the first generator to generate normal-dose CT images from low-dose CT images;
    the second discriminator is used to encourage the second generator to generate low-dose CT images from normal-dose CT images.
  4. The CT image noise reduction system according to claim 3, characterized in that: the first generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter; the second generator includes a feature extraction unit, an image reconstruction unit and a residual connection unit, the residual connection unit including a mean filter.
  5. The CT image noise reduction system according to claim 4, characterized in that: the feature extraction sub-module is composed of 12 groups of 3×3 convolutions and LeakyReLU activation functions, and the outputs of all convolution layers are merged along the channel direction at the end; the channel attention sub-module then autonomously assigns a weight to each channel.
  6. The CT image noise reduction system according to claim 3, characterized in that: the first discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3; the second discriminator is composed of 6 groups of convolutions and LeakyReLU activation functions, wherein the size of the convolution kernel is 3×3.
  7. The CT image noise reduction system according to claim 1, characterized in that: it further includes a joint loss function module, and the joint loss function module is used to further improve the image quality.
  8. The CT image noise reduction system according to claim 7, characterized in that: the joint loss function module includes an adversarial loss sub-module, a cycle consistency loss sub-module and a structure restoration loss sub-module.
  9. A CT image noise reduction method, characterized in that the method includes:
    1) processing images with the CT image noise reduction system according to any one of claims 1 to 7;
    2) extracting image patches from a low-dose CT image dataset as input, and extracting the corresponding image patches from a normal-dose CT image dataset as reference;
    3) training the generative adversarial network until it gradually reaches a converged state.
  10. The CT image noise reduction method according to claim 8, characterized in that: applying different weights to each channel of the feature map of the image includes performing a global average pooling operation on the input feature map to obtain a 1×1×C vector, then using 1×1 convolution operations to compress and restore the vector along the channel direction; after two 1×1 convolution operations followed by a sigmoid function, the required weight vector is obtained, and finally the input feature map is multiplied by the weight vector to obtain the final output.
PCT/CN2020/098910 2020-06-29 2020-06-29 CT image noise reduction system and method WO2022000183A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/098910 WO2022000183A1 (zh) 2020-06-29 2020-06-29 CT image noise reduction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/098910 WO2022000183A1 (zh) 2020-06-29 2020-06-29 CT image noise reduction system and method

Publications (1)

Publication Number Publication Date
WO2022000183A1 true WO2022000183A1 (zh) 2022-01-06

Family

ID=79315061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098910 WO2022000183A1 (zh) 2020-06-29 2020-06-29 CT image noise reduction system and method

Country Status (1)

Country Link
WO (1) WO2022000183A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903228A (zh) * 2019-02-28 2019-06-18 合肥工业大学 Image super-resolution reconstruction method based on a convolutional neural network
CN110766632A (zh) * 2019-10-22 2020-02-07 广东启迪图卫科技股份有限公司 Image denoising method based on a channel attention mechanism and feature pyramid
CN110930318A (zh) * 2019-10-31 2020-03-27 中山大学 Low-dose CT image restoration and denoising method
CN110930418A (zh) * 2019-11-27 2020-03-27 江西理工大学 Retinal vessel segmentation method fusing W-net and a conditional generative adversarial network

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114692509A (zh) * 2022-04-21 2022-07-01 南京邮电大学 Strong-noise single-photon 3D reconstruction method based on a multi-stage degradation neural network
CN114998466A (zh) * 2022-05-31 2022-09-02 华中科技大学 Low-dose cone-beam CT reconstruction method based on an attention mechanism and deep learning
CN115409733A (zh) * 2022-09-02 2022-11-29 山东财经大学 Low-dose CT image noise reduction method based on image enhancement and a diffusion model
CN115984106A (zh) * 2022-12-12 2023-04-18 武汉大学 Line-scan image super-resolution method based on a bilateral generative adversarial network
CN115984106B (zh) * 2022-12-12 2024-04-02 武汉大学 Line-scan image super-resolution method based on a bilateral generative adversarial network
CN115908204A (zh) * 2023-02-21 2023-04-04 北京唯迈医疗设备有限公司 Noise reduction processing method, apparatus and medium for medical images acquired with radiographic imaging equipment
CN116523800A (zh) * 2023-07-03 2023-08-01 南京邮电大学 Image noise reduction model and method based on a residual dense network and attention mechanism
CN116523800B (zh) * 2023-07-03 2023-09-22 南京邮电大学 Image noise reduction model and method based on a residual dense network and attention mechanism
CN117372879A (zh) * 2023-12-07 2024-01-09 山东建筑大学 Lightweight remote-sensing image change detection method and system based on self-supervised enhancement
CN117372879B (zh) * 2023-12-07 2024-03-26 山东建筑大学 Lightweight remote-sensing image change detection method and system based on self-supervised enhancement

Similar Documents

Publication Publication Date Title
WO2022000183A1 (zh) CT image noise reduction system and method
WO2021077997A1 (zh) Multi-generator generative adversarial network learning method for image denoising
CN111861910A (zh) CT image noise reduction system and method
Zreik et al. Deep learning analysis of coronary arteries in cardiac CT angiography for detection of patients requiring invasive coronary angiography
CN110728729B Unsupervised CT projection-domain data restoration method based on an attention mechanism
CN115953494B Multi-task high-quality CT image reconstruction method based on low dose and super-resolution
Jafari et al. Semi-supervised learning for cardiac left ventricle segmentation using conditional deep generative models as prior
Leclerc et al. RU-Net: A refining segmentation network for 2D echocardiography
CN111709446B Chest X-ray classification device based on an improved densely connected network
Habijan et al. Whole heart segmentation from CT images using 3D U-net architecture
Kita Elastic-model driven analysis of several views of a deformable cylindrical object
CN111696042B Image super-resolution reconstruction method based on sample learning
Lin et al. BATFormer: Towards boundary-aware lightweight transformer for efficient medical image segmentation
Huang et al. Bone feature segmentation in ultrasound spine image with robustness to speckle and regular occlusion noise
CN116645283A Low-dose CT image denoising method based on a self-supervised perceptual-loss multi-scale convolutional neural network
CN114119635B Fatty liver CT image segmentation method based on dilated convolution
Wang et al. A self-supervised guided knowledge distillation framework for unpaired low-dose CT image denoising
Chinkamol et al. OCTAve: 2D en face optical coherence tomography angiography vessel segmentation in weakly-supervised learning with locality augmentation
Yin et al. CoT-UNet++: A medical image segmentation method based on contextual Transformer and dense connection
CN115330600A Lung CT image super-resolution method based on improved SRGAN
CN114049334A Super-resolution MR imaging method taking CT images as input
Selim et al. CT Image Standardization Using Deep Image Synthesis Models
Kumar et al. Abnormality detection in smartphone-captured chest radiograph using multi-pretrained models
Shi et al. The Study of Echocardiography of Left-Ventricle Segmentation Combining Transformer and CNN
US20230169659A1 (en) Image segmentation and tracking based on statistical shape model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20942772

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.05.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20942772

Country of ref document: EP

Kind code of ref document: A1