WO2022016461A1 - Image metal artifact suppression method - Google Patents

Image metal artifact suppression method

Info

Publication number
WO2022016461A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
domain
artifacts
identifier
metal
Prior art date
Application number
PCT/CN2020/103844
Other languages
English (en)
French (fr)
Inventor
李彦明
郑海荣
江洪伟
万丽雯
Original Assignee
深圳高性能医疗器械国家研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳高性能医疗器械国家研究院有限公司 filed Critical 深圳高性能医疗器械国家研究院有限公司
Priority to PCT/CN2020/103844 priority Critical patent/WO2022016461A1/zh
Publication of WO2022016461A1 publication Critical patent/WO2022016461A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/60: Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10081: Computed x-ray tomography [CT]
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2211/00: Image generation
    • G06T2211/40: Computed tomography
    • G06T2211/408: Dual energy
    • G06T2211/441: AI-based methods, deep learning or artificial neural networks

Definitions

  • The present application belongs to the field of image technology and in particular relates to a method for suppressing metal artifacts in images.
  • In computed tomography (CT), metal implants in a patient's body, such as dental fillings, hip prostheses, and coils, can cause metal artifacts in the images.
  • Metal objects strongly attenuate the intensity of X-rays, or even block their penetration entirely, so the detector receives corrupted or incomplete projection data; reconstructing images from such data produces bright and dark radial streaks that obscure important structural information, which can lead to misdiagnosis by doctors or measurement errors in the size of the target volume. Therefore, it is of great clinical significance to suppress metal artifacts in CT images with fast and effective algorithms and thereby improve image quality.
  • Dual-energy CT imaging can solve many problems of traditional CT imaging, such as motion artifacts, beam hardening, streak artifacts caused by incomplete scanning, and noise under low-dose conditions. It is also more convenient to operate and exposes the patient to a relatively low radiation dose, so it is now widely used in clinical practice.
  • The existing metal artifact reduction (MAR) algorithms can be divided into three categories: metal artifact suppression methods based on projection-domain interpolation, metal artifact suppression algorithms based on iterative reconstruction, and metal artifact suppression methods based on deep learning. Because metal artifacts usually appear as non-local bright and dark streaks, it is very difficult to model them in the image domain, so before the rise of deep learning most of the work was done in the projection domain. For example, the area affected by metal is missing in the projection domain, and these algorithms use different methods to interpolate the missing data; however, since the projections are taken from a single object under a fixed geometry, the completed sinogram should satisfy physical constraints, otherwise severe secondary artifacts are introduced into the reconstructed CT image.
  • The metal artifact suppression algorithm based on iterative reconstruction uses an optimization algorithm in the image domain to minimize the error between the image and the ground truth, thereby obtaining high-quality artifact-free images. This type of algorithm can usually suppress metal artifacts effectively, but the computational cost is very high, the hardware requirements are demanding, and the processing time is long.
  • Wang et al. applied the pix2pix model to reduce metal artifacts in CT images in the image domain.
  • Zhang et al. first estimated a prior image with a convolutional neural network (CNN) and then, based on the prior image, filled the metal-corrupted regions of the sinogram with surrogate data to reduce secondary artifacts.
  • Park et al. applied U-Net to directly recover the sinogram of metal damage.
  • The metal artifact suppression method based on projection-domain interpolation has the advantages of simple theory, fast computation, and easy implementation. However, it can only handle simple metal objects, and for metals with special shapes it is difficult to satisfy the physical constraints, which introduces severe secondary artifacts into the reconstructed CT images.
  • The metal artifact suppression algorithm based on iterative reconstruction can effectively suppress artifacts and noise, but the computational load is very large and the speed is slow, making it difficult to use in practice. Current deep-learning MAR algorithms perform artifact suppression on single-dose CT images and have not been applied to dual-energy CT images. However, in the high- and low-energy images of dual-energy CT imaging, the shape and severity of metal artifacts differ; current methods do not use the intrinsic relationship between the two images for reconstruction, so there is still much room for improvement in performance.
  • The present application provides an image metal artifact suppression method.
  • The present application provides a method for suppressing image metal artifacts, the method comprising the following steps:
  • Step 1: divide the image domain of the images into different subdomains;
  • Step 2: extract the input image and the domain identifier corresponding to the image, and then convert the image into a target-domain image according to the target-domain identifier to obtain a generated image;
  • Step 3: obtain a reconstructed image from the generated image, and calculate the reconstruction loss;
  • Step 4: input the image and the reconstructed image into the discriminator to obtain the discrimination result and the domain classification result; calculate the adversarial loss and the domain classification loss, and train the deep neural network;
  • Step 5: use the trained neural network to obtain artifact-suppressed images.
  • In step 1, the dual-energy CT image domain is divided into four subdomains according to the energy level and the presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
  • In step 2, the input image and the domain identifier corresponding to the image are extracted, and a target domain is selected, i.e. the domain of the target image into which the image is to be converted; the image is then fed into the generator network according to the target-domain identifier and converted into the target-domain image to obtain the generated image.
  • Step 2 includes the embedding of the identifiers.
  • In step 3, a loss function is used to constrain the error between the original input image and the reconstructed image.
  • The identifier embedding expands the domain identifier and the target-domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
  • The domain identifier is a binary identifier.
  • The target-domain identifier is a binary identifier.
  • The two-channel map consists of two all-black channels, two all-white channels, or one black and one white channel.
  • In another embodiment provided by the present application, the generator is trained by back-propagating the reconstruction loss, and the discriminator is trained by back-propagating the adversarial loss and the domain classification loss, yielding a model for multi-domain conversion.
  • The generator in the model is a commonly used network.
  • The discriminator has a dual-output structure.
  • The image metal artifact suppression method provided by the present application is a novel CT image metal artifact suppression technique based on multi-space image conversion.
  • The image metal artifact suppression method provided by the present application, based on multi-space image conversion, can be used to improve the image quality of dual-energy CT imaging.
  • The image metal artifact suppression method provided by the present application is applicable to various metal artifacts and is more robust.
  • Compared with traditional iterative-reconstruction metal artifact suppression algorithms, once offline learning is completed the artifact removal of the method provided by the present application runs very fast while achieving better image quality.
  • The image metal artifact suppression method provided by the present application can further improve the MAR effect by using the related information in the high- and low-energy images of dual-energy CT imaging.
  • The image metal artifact suppression method provided by the present application is a novel generative adversarial network based on multi-space image conversion, used for metal artifact suppression of dual-energy CT images.
  • The image domain of dual-energy CT can be divided into four subdomains according to the two attributes of energy level and presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
  • The network adopts the idea of adversarial learning.
  • The generator in the network performs domain conversion on an input image from any subdomain and generates images of the other domains, and the discriminator judges the generated results. The two play against each other until the conversion between all domains works well, so that for an input image from any subdomain the generator can produce the corresponding energy-state artifact-suppressed image.
  • Based on the idea of generative adversarial networks, the image metal artifact suppression method provided by this application introduces the concept of multi-domain image conversion and trains only one pair of generator and discriminator to realize the mutual conversion of dual-energy CT images between different domains, thereby achieving metal artifact suppression of CT images.
  • FIG. 1 is a schematic diagram of the image metal artifact suppression principle of the present application.
  • the present application provides a method for suppressing metal artifacts in an image, and the method includes the following steps:
  • Step 1: divide the image domain of dual-energy CT into four subdomains, and define a domain identifier for each subdomain;
  • Step 2: extract the input image and the domain identifier corresponding to the image, select a target domain, i.e. the domain of the target image into which the image is to be converted, then feed the image into the generator network according to the target-domain identifier and convert it into the target-domain image to obtain the generated image;
  • Step 3: take the domain of the original input image as the target domain, feed the generated image of step 2 into the generator network to produce a reconstructed image, and compare it with the original image to calculate the reconstruction loss;
  • Step 4: input the input image and the reconstructed image into the discriminator network to obtain the discrimination result and the domain classification result; calculate the adversarial loss and the domain classification loss, and train the deep neural network;
  • Step 5: use the trained neural network to obtain artifact-suppressed images.
  • The first four steps train a usable network model and constitute an offline learning process; in step 5, after the model has been learned, an image with metal artifacts is fed into the model to produce an image without artifacts, i.e. artifact removal in practical use.
  • The dual-energy CT image domain is divided into four subdomains according to the energy level and the presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
  • the dual-energy CT system can use two different energies of X-rays to image the object, and can accurately obtain the composition ratio of the object.
  • the generator extracts the input image and the domain identifier corresponding to the image.
  • The input image is fed into the generator; the domain identifier is paired with the input image. For example, if the identifiers of the four energy states are 00, 01, 10 and 11 and a low-energy image with artifacts is input, then its domain identifier is 01, the identifier corresponding to "low energy state - with artifacts".
  • Step 2 includes the embedding of the identifiers.
  • The error between the target-domain image and the reconstructed image is constrained by a loss function.
  • The identifier embedding expands the domain identifier of the input image and the target-domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
  • The domain identifier is a binary identifier.
  • The target-domain identifier is a binary identifier.
  • The two-channel map obtained by expanding the binary identifier consists of two all-black channels, two all-white channels, or one black and one white channel.
  • A reverse reconstruction process is introduced: a reconstructed image is generated from the generated image, and a loss function constrains the error between the original input image and the reconstructed image, thereby increasing the robustness of the network; this process corresponds to the part shown in Figure 1(b).
  • The identifier embedding in the figure expands the binary identifiers of the source domain and the target domain into two-channel maps of the same size as the input image (two all-black channels, two all-white channels, or one black and one white channel), which are then concatenated to the input image along the channel dimension.
  • The generator is trained by back-propagating the reconstruction loss.
  • The discriminator is trained by back-propagating the adversarial loss and the domain classification loss, yielding a model for multi-domain conversion.
  • The generator in the model is a commonly used network.
  • The discriminator has a dual-output structure.
  • The generator can be implemented with a commonly used network, such as U-Net, or a similar network can be designed according to the data.
  • The discriminator adopts a dual-output structure: a shared stack of convolutional layers produces the feature vector, and two different sets of fully connected layers then produce the discrimination result and the domain classification result, respectively.
  • The data fed into the discriminator D contain not only real image data but also images output by the generator G.
  • The discriminator has two tasks: while judging whether the input image is a real image or a generated image, it must also give the domain identifier to which the input image belongs. This step is shown in Figure 1(c).
  • The losses are calculated by the above steps and back-propagated to train the generator G and the discriminator D respectively, yielding a model G that can be used for multi-domain conversion.
  • This application uses the high- and low-energy dual-energy CT images to train the network, so that the network obtains more feature information and achieves metal artifact suppression. Four domains are constructed from the two attributes of energy state and artifact state, and domain conversion is used to solve the artifact suppression problem. During domain conversion, domain identifiers allow the generator and discriminator to be shared, avoiding the need for a separate pair of generator and discriminator for every conversion between two domains. The application also covers the process of training the network with dual-energy CT data and the loss design scheme.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application belongs to the field of image technology, and in particular relates to a method for suppressing metal artifacts in images. In the high- and low-energy images of dual-energy CT imaging, the shape and severity of metal artifacts differ; current methods do not exploit the intrinsic relationship between the two images for reconstruction, so there is still much room for improvement in performance. The present application provides an image metal artifact suppression method comprising: dividing the image domain of the images into different subdomains; extracting an input image and the domain identifier corresponding to the image, and then converting the image into a target-domain image according to a target-domain identifier to obtain a generated image; obtaining a reconstructed image from the generated image and computing a reconstruction loss; inputting the image and the reconstructed image into a discriminator to obtain a discrimination result and a domain classification result; computing an adversarial loss and a domain classification loss and training the deep neural network; and using the trained neural network to obtain artifact-suppressed images. The method is applicable to various metal artifacts and is more robust.

Description

Image metal artifact suppression method
Technical Field
The present application belongs to the field of image technology, and in particular relates to a method for suppressing metal artifacts in images.
Background Art
In computed tomography (CT), metal implants in a patient's body, including dental fillings, hip prostheses, coils, and the like, may cause metal artifacts in the images. Metal objects strongly attenuate the intensity of X-rays, or even block their penetration entirely, so that the detector receives corrupted or incomplete projection data. Reconstructing images from such data produces bright and dark radial streaks that obscure important structural information, which can lead to misdiagnosis by physicians or measurement errors in the size of the target volume. It is therefore of great clinical significance to suppress metal artifacts in CT images with fast and effective algorithms and thereby improve image quality. Dual-energy CT imaging can solve many problems of conventional CT imaging, such as motion artifacts, beam hardening, streak artifacts caused by incomplete scanning, and noise under low-dose conditions. It is also more convenient to operate and exposes the patient to a relatively low radiation dose, so it is now widely used in clinical practice.
At present, existing metal artifact reduction (MAR) algorithms can be divided into three categories: metal artifact suppression methods based on projection-domain interpolation, metal artifact suppression algorithms based on iterative reconstruction, and metal artifact suppression methods based on deep learning. Because metal artifacts usually appear as non-local bright and dark streaks, they are very difficult to model in the image domain, so before the rise of deep learning most work was done in the projection domain. For example, the region affected by metal is missing in the projection domain, and these algorithms interpolate the missing data in different ways. However, since the projections are acquired from a single object under a fixed geometry, the completed sinogram must satisfy physical constraints; otherwise severe secondary artifacts are introduced into the reconstructed CT image. Metal artifact suppression algorithms based on iterative reconstruction use optimization in the image domain to minimize the error between the image and the ground truth, thereby obtaining high-quality artifact-free images. Such algorithms can usually suppress metal artifacts effectively, but the computational cost is very high, the hardware requirements are demanding, and the processing time is long.
Recently, deep learning has also made considerable progress in metal artifact suppression. Wang et al. applied the pix2pix model to reduce metal artifacts of CT images in the image domain. Zhang et al. first estimated a prior image with a convolutional neural network (CNN) and then, based on the prior image, filled the metal-corrupted regions of the sinogram with surrogate data to reduce secondary artifacts. Park et al. applied U-Net to directly restore the metal-corrupted sinogram.
Metal artifact suppression methods based on projection-domain interpolation have the advantages of simple theory, fast computation, and easy implementation, but they can only handle simple metal objects; for metals with special shapes it is difficult to satisfy the physical constraints, and severe secondary artifacts are introduced into the reconstructed CT images. Metal artifact suppression algorithms based on iterative reconstruction can effectively suppress artifacts and noise, but the computational load is very large and the speed is slow, making them difficult to use in practice. Current deep-learning MAR algorithms all perform artifact suppression on single-dose CT images and have not been applied to dual-energy CT images. Yet in the high- and low-energy images of dual-energy CT imaging, the shape and severity of metal artifacts differ; current methods do not exploit the intrinsic relationship between the two images for reconstruction, leaving considerable room for improvement in performance.
Summary of the Invention
1. Technical problem to be solved
In the high- and low-energy images of dual-energy CT imaging, the shape and severity of metal artifacts differ, and current methods do not exploit the intrinsic relationship between the two images for reconstruction, so there is still much room for improvement in performance. To address this problem, the present application provides an image metal artifact suppression method.
2. Technical solution
To achieve the above objective, the present application provides an image metal artifact suppression method, the method comprising the following steps:
Step 1: divide the image domain of the images into different subdomains;
Step 2: extract the input image and the domain identifier corresponding to the image, and then convert the image into a target-domain image according to the target-domain identifier to obtain a generated image;
Step 3: obtain a reconstructed image from the generated image and calculate the reconstruction loss;
Step 4: input the image and the reconstructed image into the discriminator to obtain the discrimination result and the domain classification result; calculate the adversarial loss and the domain classification loss, and train the deep neural network;
Step 5: use the trained neural network to obtain artifact-suppressed images.
In another embodiment provided by the present application, in step 1 the dual-energy CT image domain is divided into four subdomains according to the energy level and the presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
In another embodiment provided by the present application, in step 2 the input image and the domain identifier corresponding to the image are extracted, a target domain is selected, i.e. the domain of the target image into which the image is to be converted, and the image is then fed into the generator network according to the target-domain identifier and converted into the target-domain image to obtain the generated image.
In another embodiment provided by the present application, step 2 includes the embedding of the identifiers.
In another embodiment provided by the present application, in step 3 a loss function is used to constrain the error between the original input image and the reconstructed image.
In another embodiment provided by the present application, the identifier embedding expands the domain identifier and the target-domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
In another embodiment provided by the present application, the domain identifier is a binary identifier and the target-domain identifier is a binary identifier.
In another embodiment provided by the present application, the two-channel map consists of two all-black channels, two all-white channels, or one black and one white channel.
In another embodiment provided by the present application, the generator is trained by back-propagating the reconstruction loss, and the discriminator is trained by back-propagating the adversarial loss and the domain classification loss, yielding a model for multi-domain conversion.
In another embodiment provided by the present application, the generator in the model is a commonly used network, and the discriminator has a dual-output structure.
3. Beneficial effects
Compared with the prior art, the image metal artifact suppression method provided by the present application has the following beneficial effects:
The image metal artifact suppression method provided by the present application is a novel CT image metal artifact suppression technique based on multi-space image conversion.
The image metal artifact suppression method provided by the present application, being a metal artifact suppression technique based on multi-space image conversion, can be used to improve the image quality of dual-energy CT imaging.
Compared with traditional projection-domain interpolation MAR algorithms, the method provided by the present application is applicable to various metal artifacts and is more robust.
Compared with traditional iterative-reconstruction metal artifact suppression algorithms, once offline learning is completed the artifact removal of the method provided by the present application runs very fast while achieving better image quality.
Compared with existing deep-learning methods, the method provided by the present application can exploit the related information in the high- and low-energy images of dual-energy CT imaging to further improve the MAR effect.
The image metal artifact suppression method provided by the present application is a novel generative adversarial network based on multi-space image conversion, used for metal artifact suppression of dual-energy CT images.
In the method provided by the present application, the image domain of dual-energy CT can be divided into four subdomains according to the two attributes of energy level and presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
In the method provided by the present application, the network adopts the idea of adversarial learning: the generator performs domain conversion on an input image from any subdomain and generates images of the other domains, while the discriminator judges the generated results; the two play against each other until the conversion between all domains works well, so that for an input image from any subdomain the generator can produce the corresponding energy-state artifact-suppressed image.
Based on the idea of generative adversarial networks, the method provided by the present application introduces the concept of multi-domain image conversion and trains only one pair of generator and discriminator to realize the mutual conversion of dual-energy CT images between different domains, thereby achieving metal artifact suppression of CT images.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the image metal artifact suppression principle of the present application.
Detailed Description of Embodiments
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, from which those skilled in the art can clearly understand and implement the present application. Without departing from the principles of the present application, features of different embodiments may be combined to obtain new embodiments, or certain features of certain embodiments may be replaced to obtain other preferred embodiments.
Referring to FIG. 1, the present application provides an image metal artifact suppression method, the method comprising the following steps:
Step 1: divide the image domain of dual-energy CT into four subdomains and define a domain identifier for each subdomain;
Step 2: extract the input image and the domain identifier corresponding to the image, select a target domain, i.e. the domain of the target image into which the image is to be converted, then feed the image into the generator network according to the target-domain identifier and convert it into the target-domain image to obtain the generated image;
Step 3: take the domain of the original input image as the target domain, feed the generated image of step 2 into the generator network to produce a reconstructed image, and compare it with the original image to calculate the reconstruction loss;
Step 4: input the input image and the reconstructed image into the discriminator network to obtain the discrimination result and the domain classification result; calculate the adversarial loss and the domain classification loss, and train the deep neural network;
Step 5: use the trained neural network to obtain artifact-suppressed images.
The first four steps train a usable network model and constitute an offline learning process; in step 5, after the model has been learned, an image with metal artifacts is fed into the model to produce an image without artifacts, i.e. artifact removal in practical use.
Further, in step 1 the dual-energy CT image domain is divided into four subdomains according to the energy level and the presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
A dual-energy CT system can image an object with X-rays of two different energies and can accurately determine the composition of the object.
It has two main advantages:
First, the limitation of single-energy imaging is that objects of different composition may exhibit similar attenuation characteristics, in which case CT values alone cannot distinguish different materials.
Second, thanks to advanced technology, the radiation dose of existing dual-energy CT is lower than that of conventional single-energy CT, making it safer.
Further, in step 2 the generator extracts the input image and the domain identifier corresponding to the image.
The input image is fed into the generator; the domain identifier is paired with the input image. For example, if the identifiers of the four energy states are 00, 01, 10 and 11 and a low-energy image with artifacts is input, then its domain identifier is 01, the identifier corresponding to "low energy state - with artifacts".
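As a concrete illustration of this pairing, the following minimal Python sketch (illustrative only; the function and constant names are not taken from the patent) encodes the four subdomains and the two-bit identifier rule c(x) = binary(i - 1) used later in this description.

```python
# Minimal sketch: mapping the four dual-energy CT subdomains to two-bit
# binary domain identifiers c(x) = binary(i - 1), with i the 1-based domain index.
DOMAINS = [
    ("high-energy", "with-artifacts"),  # Domain 1 -> binary(0) = (0, 0)
    ("low-energy",  "with-artifacts"),  # Domain 2 -> binary(1) = (0, 1)
    ("high-energy", "no-artifacts"),    # Domain 3 -> binary(2) = (1, 0)
    ("low-energy",  "no-artifacts"),    # Domain 4 -> binary(3) = (1, 1)
]

def domain_identifier(domain_index: int) -> tuple[int, int]:
    """Return c(x) as a two-bit tuple for the 1-based domain index."""
    value = domain_index - 1
    return (value >> 1) & 1, value & 1

# Example: a low-energy image with artifacts belongs to Domain 2 -> identifier (0, 1), i.e. "01".
assert domain_identifier(2) == (0, 1)
```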
Further, step 2 includes the embedding of the identifiers.
Further, in step 3 a loss function is used to constrain the error between the target-domain image and the reconstructed image.
Further, the identifier embedding expands the domain identifier of the input image and the target-domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
Further, the domain identifier is a binary identifier, and the target-domain identifier is a binary identifier.
Further, the two-channel map obtained by expanding a binary identifier consists of two all-black channels, two all-white channels, or one black and one white channel.
The image domain of dual-energy CT can be divided into four subdomains according to the two attributes of energy level and presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts, denoted Domain i, i = 1, 2, 3, 4. When training the network, the training data must contain data from all four domains in comparable quantities. Denote the image fed into the generator G as x and the image to be produced by the generator as $\hat{x}$. The domain identifier is c(x) = binary(i - 1), where i is the domain index of x and binary(·) converts its input into a two-bit binary representation. The generator G can then take the input image x together with its domain identifier c(x) and, according to the target-domain identifier $\hat{c}$, convert it into the target-domain image $\hat{x} = G(x, \hat{c})$. This process is illustrated in FIG. 1(a). At the same time, similar to the idea of CycleGAN, a reverse reconstruction process is introduced: from the generated image $\hat{x}$ a reconstructed image $x_{rec} = G(\hat{x}, c(x))$ is generated, and a loss function constrains the error between x and $x_{rec}$, thereby increasing the robustness of the network; this process corresponds to FIG. 1(b). The identifier embedding in the figure expands the binary identifiers c(x) and $\hat{c}$ into two-channel maps of the same size as the input image (two all-black channels, two all-white channels, or one black and one white channel), which are then concatenated to the input image along the channel dimension.
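The identifier embedding and the forward/reverse conversion can be sketched in PyTorch as follows. This is a minimal sketch under stated assumptions: the tensor layout, the helper name embed_identifiers, and the L1 choice for the reconstruction loss are illustrative and not prescribed by the patent.

```python
import torch

def embed_identifiers(image: torch.Tensor,
                      c_src: tuple[int, int],
                      c_tgt: tuple[int, int]) -> torch.Tensor:
    """Expand the source-domain identifier c(x) and the target-domain identifier
    into constant channels (0 = black, 1 = white) of the same spatial size as
    `image`, and concatenate them to the image along the channel dimension.

    image: (N, C, H, W) tensor; returns (N, C + 4, H, W).
    """
    n, _, h, w = image.shape
    bits = list(c_src) + list(c_tgt)
    planes = [torch.full((n, 1, h, w), float(b), dtype=image.dtype, device=image.device)
              for b in bits]
    return torch.cat([image] + planes, dim=1)

# Forward conversion and reverse reconstruction with a generator G (e.g. a U-Net
# whose first convolution accepts C + 4 input channels):
#   x_hat = G(embed_identifiers(x, c_src, c_tgt))      # convert x into the target domain
#   x_rec = G(embed_identifiers(x_hat, c_tgt, c_src))  # reverse reconstruction of x
#   rec_loss = torch.nn.functional.l1_loss(x_rec, x)   # one possible reconstruction loss
```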
Further, the generator is trained by back-propagating the reconstruction loss, and the discriminator is trained by back-propagating the adversarial loss and the domain classification loss, yielding a model that can be used for multi-domain conversion.
Further, the generator in the model is a commonly used network, and the discriminator has a dual-output structure. In this network model, the generator can be implemented with a commonly used network such as U-Net, or a similar network can be designed according to the data; the discriminator adopts a dual-output structure, in which a shared stack of convolutional layers produces the feature vector and two different sets of fully connected layers then produce the discrimination result and the domain classification result, respectively.
The data fed into the discriminator D include not only real image data but also images output by the generator G. The discriminator has two tasks: while judging whether the input image is a real image or a generated image, it must also give the domain identifier to which the input image belongs. This step is shown in FIG. 1(c).
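A minimal PyTorch sketch of such a dual-output discriminator is given below. The layer sizes, the global pooling step, and the use of linear heads are assumptions made for illustration; the patent does not fix a specific architecture.

```python
import torch
import torch.nn as nn

class DualOutputDiscriminator(nn.Module):
    """Shared convolutional trunk with two heads: `src_head` scores real vs. generated,
    `cls_head` predicts which of the four subdomains the input image belongs to."""

    def __init__(self, in_channels: int = 1, base: int = 32, num_domains: int = 4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),   # pool to (N, base*4, 1, 1)
            nn.Flatten(),              # shared feature vector
        )
        self.src_head = nn.Linear(base * 4, 1)            # real / generated score
        self.cls_head = nn.Linear(base * 4, num_domains)  # domain classification logits

    def forward(self, x: torch.Tensor):
        features = self.trunk(x)
        return self.src_head(features), self.cls_head(features)
```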
In one iteration of network training, a pair of high- and low-energy images $x_1$, $x_2$ is first drawn from the dataset, their domain identifiers $c(x_1)$, $c(x_2)$ are extracted, and two target-domain identifiers $\hat{c}_1$, $\hat{c}_2$ are generated at random. The pairs $(x_1, \hat{c}_1)$ and $(x_2, \hat{c}_2)$ are fed into the generator to obtain the generated images $\hat{x}_1$, $\hat{x}_2$, which are fed into the generator again with the original identifiers to obtain the reconstructed images, and the reconstruction loss $L_{rec}$ is computed. Then $x_1$, $x_2$, $\hat{x}_1$, $\hat{x}_2$ are fed into the discriminator to obtain the discrimination results $D_{src}(x_1)$, $D_{src}(x_2)$, $D_{src}(\hat{x}_1)$, $D_{src}(\hat{x}_2)$ and the domain classification results $D_{cls}(x_1)$, $D_{cls}(x_2)$, $D_{cls}(\hat{x}_1)$, $D_{cls}(\hat{x}_2)$, from which the adversarial loss $L_{adv}$ and the domain classification losses $L_{cls}^{real}$ and $L_{cls}^{syn}$ are computed, where $L_{cls}^{real}$ denotes the domain classification loss on real data and $L_{cls}^{syn}$ denotes the domain classification loss on generated (synthetic) data. The loss function for training the generator G then combines the adversarial loss, the domain classification loss on generated data, and the reconstruction loss, while the loss function for training the discriminator D combines the adversarial loss and the domain classification loss on real data.
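The iteration just described can be written out as follows. Because the exact loss expressions appear in the original only as figure references, this sketch substitutes standard choices (binary cross-entropy adversarial loss, cross-entropy domain classification, L1 reconstruction loss, unit loss weights) and assumes the generator takes integer domain labels and performs the identifier embedding internally; all of these are assumptions, not the patent's prescriptions.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, x1, x2, c1, c2, c1_tgt, c2_tgt):
    """One training iteration on a paired high/low-energy sample.
    x1, x2: (1, C, H, W) tensors; c1, c2, c1_tgt, c2_tgt: integer domain indices in [0, 4).
    Loss forms and weights are illustrative assumptions."""
    real = torch.cat([x1, x2])                                   # batch of the two real images
    real_lbl = torch.tensor([c1, c2], device=real.device)        # c(x1), c(x2)
    tgt_lbl = torch.tensor([c1_tgt, c2_tgt], device=real.device) # random target identifiers

    fake = G(real, tgt_lbl)      # generated images x1_hat, x2_hat
    rec = G(fake, real_lbl)      # reconstructed images

    # Discriminator update: adversarial loss + domain classification on real data.
    d_real_src, d_real_cls = D(real)
    d_fake_src, _ = D(fake.detach())
    loss_adv_d = (F.binary_cross_entropy_with_logits(d_real_src, torch.ones_like(d_real_src))
                  + F.binary_cross_entropy_with_logits(d_fake_src, torch.zeros_like(d_fake_src)))
    loss_cls_real = F.cross_entropy(d_real_cls, real_lbl)
    loss_D = loss_adv_d + loss_cls_real
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator update: adversarial + domain classification on generated data + reconstruction.
    d_fake_src, d_fake_cls = D(fake)
    loss_adv_g = F.binary_cross_entropy_with_logits(d_fake_src, torch.ones_like(d_fake_src))
    loss_cls_syn = F.cross_entropy(d_fake_cls, tgt_lbl)
    loss_rec = F.l1_loss(rec, real)
    loss_G = loss_adv_g + loss_cls_syn + loss_rec
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item(), loss_D.item()
```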
For each pair of inputs $x_1$, $x_2$, the losses are computed by the above steps and back-propagated to train the generator G and the discriminator D respectively, which yields a model G that can be used for multi-domain conversion. When testing and using the model, the data only need to be fed into the generator G in the form (image with metal artifacts, original-domain identifier, target-domain identifier), i.e. $(x, c(x), \hat{c})$, where $\hat{c}$ is the identifier of the artifact-free domain with the same energy state as c(x) (for example, high-energy with artifacts corresponds to high-energy without artifacts); the artifact-suppressed image of the corresponding energy state is then obtained.
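For completeness, a minimal inference sketch under the same assumptions as above (integer domain indices 0 to 3 in the order Domain 1 to Domain 4; the mapping table and helper name are illustrative):

```python
import torch

# high-energy/with-artifacts (0) -> high-energy/clean (2); low-energy/with-artifacts (1) -> low-energy/clean (3)
ARTIFACT_TO_CLEAN = {0: 2, 1: 3}

def remove_artifacts(G, x: torch.Tensor, c_src: int) -> torch.Tensor:
    """Map an artifact-corrupted image to the artifact-free domain of the same energy state."""
    c_tgt = ARTIFACT_TO_CLEAN[c_src]
    return G(x, torch.tensor([c_tgt], device=x.device))
```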
The present application uses the high- and low-energy dual-energy CT images to train the network, so that the network acquires more feature information and achieves metal artifact suppression. Four domains are constructed from the two attributes of energy state and artifact state, and domain conversion is used to solve the artifact suppression problem. During domain conversion, domain identifiers allow the generator and discriminator to be shared, avoiding the need for a separate pair of generator and discriminator for every conversion between two domains. The application also covers the process of training the network with dual-energy CT data and the loss design scheme.
Other deep-learning methods target metal artifact removal for single images and offer no scheme for dual-energy CT, so a new model has to be retrained for each energy state with data of that energy state. The present application introduces the concept of multi-domain conversion and uses domain identifiers so that the network can learn the features of both energy states simultaneously, achieving artifact suppression for both energy states with a single model.
Other deep-learning methods cannot fully exploit the information of the two energy states. In the present application, the images of the four domains share one generator, which can fully learn the features of different energy states and different artifact states and thus achieves a better artifact suppression effect.
Although the present application has been described above with reference to specific embodiments, those skilled in the art should understand that many modifications can be made to the configurations and details disclosed herein within the principles and scope of the present application. The scope of protection of the present application is determined by the appended claims, which are intended to cover all modifications within the literal meaning or the scope of equivalents of the technical features of the claims.

Claims (10)

  1. An image metal artifact suppression method, characterized in that the method comprises the following steps:
    Step 1: dividing the image domain of the images into different subdomains;
    Step 2: extracting an input image and the domain identifier corresponding to the image, and then converting the image into a target-domain image according to a target-domain identifier to obtain a generated image;
    Step 3: obtaining a reconstructed image from the generated image and calculating a reconstruction loss;
    Step 4: inputting the image and the reconstructed image into a discriminator to obtain a discrimination result and a domain classification result; calculating an adversarial loss and a domain classification loss, and training a deep neural network;
    Step 5: using the trained neural network to obtain artifact-suppressed images.
  2. The image metal artifact suppression method according to claim 1, characterized in that in step 1 the dual-energy CT image domain is divided into four subdomains according to the energy level and the presence of metal artifacts: high-energy with artifacts, low-energy with artifacts, high-energy without artifacts, and low-energy without artifacts.
  3. The image metal artifact suppression method according to claim 1, characterized in that in step 2 the input image and the domain identifier corresponding to the image are extracted, a target domain is selected, i.e. the domain of the target image into which the image is to be converted, and the image is then fed into the generator network according to the target-domain identifier and converted into the target-domain image to obtain the generated image.
  4. The image metal artifact suppression method according to claim 1, characterized in that step 2 includes the embedding of the identifiers.
  5. The image metal artifact suppression method according to claim 1, characterized in that in step 3 a loss function is used to constrain the error between the original input image and the reconstructed image.
  6. The image metal artifact suppression method according to claim 5, characterized in that the identifier embedding expands the domain identifier of the input image and the target-domain identifier into two-channel maps of the same size as the input image, which are then concatenated to the input image along the channel dimension.
  7. The image metal artifact suppression method according to claim 6, characterized in that the domain identifier is a binary identifier and the target-domain identifier is a binary identifier.
  8. The image metal artifact suppression method according to claim 7, characterized in that the two-channel map obtained by expanding the binary identifiers consists of two all-black channels, two all-white channels, or one black and one white channel.
  9. The image metal artifact suppression method according to any one of claims 1 to 8, characterized in that the generator is trained by back-propagating the reconstruction loss and the discriminator is trained by back-propagating the adversarial loss and the domain classification loss, so as to obtain a model for multi-domain conversion.
  10. The image metal artifact suppression method according to claim 9, characterized in that the generator in the model is a commonly used network and the discriminator has a dual-output structure.
PCT/CN2020/103844 2020-07-23 2020-07-23 Image metal artifact suppression method WO2022016461A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/103844 WO2022016461A1 (zh) 2020-07-23 2020-07-23 Image metal artifact suppression method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/103844 WO2022016461A1 (zh) 2020-07-23 2020-07-23 Image metal artifact suppression method

Publications (1)

Publication Number Publication Date
WO2022016461A1 true WO2022016461A1 (zh) 2022-01-27

Family

ID=79729948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103844 WO2022016461A1 (zh) 2020-07-23 2020-07-23 Image metal artifact suppression method

Country Status (1)

Country Link
WO (1) WO2022016461A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679642A (zh) CT image metal artifact correction method and device, and CT equipment
US20190066346A1 (en) * 2017-08-30 2019-02-28 Korea Advanced Institute Of Science And Technology Apparatus and method for reconstructing image using extended neural network
US20190377047A1 (en) * 2018-06-07 2019-12-12 Siemens Healthcare Gmbh Artifact Reduction by Image-to-Image Network in Magnetic Resonance Imaging
CN110675461A (zh) CT image restoration method based on unsupervised learning
CN110570492A (zh) Neural network training method and device, image processing method and device, and medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998471A (zh) Magnetic particle imaging reconstruction method based on the RecNet model
US11835602B1 (en) 2022-06-22 2023-12-05 Institute Of Automation, Chinese Academy Of Sciences Magnetic particle imaging (MPI) reconstruction method based on RecNet model
CN115690255A (zh) CT image artifact removal method, device and system based on a convolutional neural network
CN116738911A (zh) Wiring congestion prediction method, apparatus, and computer device
CN116738911B (zh) Wiring congestion prediction method, apparatus, and computer device

Similar Documents

Publication Publication Date Title
WO2022016461A1 (zh) Image metal artifact suppression method
CN109146988B (zh) Incomplete-projection CT image reconstruction method based on VAEGAN
Gjesteby et al. Reducing metal streak artifacts in CT images via deep learning: Pilot results
CN103247061B (zh) Augmented Lagrangian iterative reconstruction method for X-ray CT images
CN110675461A (zh) CT image restoration method based on unsupervised learning
CN110570492A (zh) Neural network training method and device, image processing method and device, and medium
CN109949215B (zh) Low-dose CT image simulation method
CN110728729A (zh) Unsupervised CT projection-domain data recovery method based on an attention mechanism
CN115953494B (zh) Multi-task high-quality CT image reconstruction method based on low dose and super-resolution
CN109146994A (zh) Metal artifact correction method for multi-energy-spectrum X-ray CT imaging
CN112348936A (zh) Low-dose cone-beam CT image reconstruction method based on deep learning
Zhou et al. Limited angle tomography reconstruction: synthetic reconstruction via unsupervised sinogram adaptation
CN103034989A (zh) Low-dose CBCT image denoising method based on a high-quality prior image
Zhu et al. Metal artifact reduction for X-ray computed tomography using U-net in image domain
CN110060315A (zh) Image motion artifact elimination method and system based on artificial intelligence
Du et al. Reduction of metal artefacts in CT with Cycle-GAN
Mostafavi et al. E2sri: Learning to super-resolve intensity images from events
US20220164927A1 (en) Method and system of statistical image restoration for low-dose ct image using deep learning
Chen et al. A C-GAN denoising algorithm in projection domain for micro-CT
CN117611695A (zh) Dental cone-beam CT metal artifact correction method and device based on a diffusion model
Ding et al. Ultrasound image super-resolution with two-stage zero-shot cyclegan
CN111145096A (zh) Super-resolution image reconstruction method and system based on a recursive very-deep network
CN111862258B (zh) Image metal artifact suppression method
CN111862258A (zh) Image metal artifact suppression method
Zhu et al. CT metal artifact correction assisted by the deep learning-based metal segmentation on the projection domain

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20946435

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20946435

Country of ref document: EP

Kind code of ref document: A1