CN116894783A - Metal artifact removal method based on adversarial generative network model with time-varying constraints - Google Patents
- Publication number: CN116894783A (application CN202310878651.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/0475—Generative networks
- G06N3/09—Supervised learning
- G06N3/094—Adversarial learning
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
Description
Technical field
The invention relates to a metal artifact removal method, and specifically to a metal artifact removal method based on an adversarial generative network model with time-varying constraints; it belongs to the technical field of improving CT image quality.
Background
Metal implants cause severe metal artifacts in CT images, seriously degrading image quality. Over the past few decades, many methods have been proposed to reduce metal artifacts. Although these traditional methods achieve a degree of artifact reduction, they fall well short of practical requirements. In recent years, with the rise of deep learning, deep-learning models have increasingly been applied to metal artifact reduction (MAR) with good results. In references [1, 2], Park et al. used a U-net to learn the mapping from the geometry of the metal-trace region in the projection data to the corresponding beam-hardening factors, thereby correcting the inaccurate projections. The CNNMAR method combines the uncorrected image, the LI (linear interpolation) image, and the BHC (beam-hardening correction) image into a three-channel input to a convolutional neural network that generates a prior image; the projection of this prior then guides the interpolation process to produce an artifact-removed image. Based on the idea of conditional generative adversarial networks (cGAN), Wang et al. [3] proposed cGANMAR, which learns the mapping from artifact-affected images to artifact-free images and uses PatchGAN as the discriminator. All of the above are supervised methods and require well-matched paired data for training. Many researchers have therefore turned to unsupervised learning, removing the need for paired data in the metal artifact removal problem. Building on CycleGAN, Lee et al. [4] proposed an attention-guided β-CycleGAN that uses attention mechanisms to focus on the distinctive features of metal artifacts in the spatial and channel domains; trained in an unsupervised manner, it proves very robust. In reference [5], Liao et al. creatively proposed the ADN network, which uses the concept of a latent space: one encoder maps an artifact-affected image into a content space containing only image-content features, while another encoder maps it into an artifact space containing only artifact features, decoupling artifacts from tissue details. Unsupervised methods eliminate the need for paired data but cope poorly with the complex artifacts and tissue details found in clinical data. Current mainstream supervised methods outperform unsupervised ones, yet their metal artifact removal and image-detail fidelity still cannot meet the needs of practical applications.
References:
[1] Park H S, Chung Y E, Lee S M, et al. Sinogram-consistency learning in CT for metal artifact reduction [J]. arXiv preprint arXiv:00607, 2017.
[2] Park H S, Lee S M, Kim H P, et al. CT sinogram-consistency learning for metal-induced beam hardening correction [J]. Medical Physics, 2018, 45(12): 5376-5384.
[3] Wang J, Zhao Y, Noble J H, et al. Conditional generative adversarial networks for metal artifact reduction in CT images of the ear [C]. International Conference on Medical Image Computing and Computer-Assisted Intervention, 2018: 3-11.
[4] Lee J, Gu J, Ye J C. Unsupervised CT metal artifact learning using attention-guided β-CycleGAN [J]. IEEE Transactions on Medical Imaging, 2021, 40(12): 3932-3944.
[5] Liao H, Lin W-A, Zhou S K, et al. ADN: artifact disentanglement network for unsupervised metal artifact reduction [J]. IEEE Transactions on Medical Imaging, 2019, 39(3): 634-643.
Summary of the invention
Purpose of the invention: In view of the problems and shortcomings of the existing technology, the present invention provides a metal artifact removal method based on an adversarial generative network model with time-varying constraints. A GAN-based metal artifact removal model with time-varying constraints (MARGANVAC) is constructed. By introducing a time-varying constraint term, the model applies adaptive fidelity constraints to every part of the image, so that the generator is trained more effectively to produce images with better detail fidelity and better metal artifact removal. Compared with current mainstream models, the method not only removes artifacts more effectively but also applies to a wider range of scenarios.
Technical solution: a metal artifact removal method based on an adversarial generative network model with time-varying constraints. A GAN-based metal artifact removal model with time-varying constraints (MARGANVAC) is constructed; on top of a standard GAN, it introduces a time-varying constraint term that applies adaptive fidelity constraints to every part of the image.
The MARGANVAC model contains three modules: a generator G, a discriminator D, and a registration network R. The CT image to be processed first undergoes a random affine transformation and is then fed into the generator G, whose output goes to both the registration network R and the discriminator D. The registration network R performs random sampling, and the neighborhood of the sampled pixels gradually shrinks as the number of iterations increases; R adjusts its parameters adaptively, without manual intervention, so that the generator produces more realistic images.
Further, the training process of the MARGANVAC model is as follows:
First, a random affine transformation is applied to each input image x_a containing metal artifacts, and another random affine transformation is applied to its artifact-free reference image x, yielding the transformed images x_a^T and x^T. After x_a^T passes through the generator G, an artifact-removed image x̂ is obtained, and x̂ and x^T are fed into the registration network R. The physical meaning of the registration network can be expressed by the function φ(G(x_a; θ_G)_j, σ_t): the first part, G(x_a; θ_G), is the output of the generator sub-network, where θ_G denotes the generator parameters and j is a pixel index; the second part, φ, is an abstract sampling function that samples a pixel within the neighborhood of pixel x_j, and σ_t is a parameter controlling the size of that neighborhood. As training iterations proceed, σ_t gradually converges to zero. At the beginning of training, the registration network performs poorly, so its output x̂ deviates in position from the affine-transformed ground-truth image x^T. These deviations are variable: every pixel of every input image in every iteration has its own independent deviation, which helps simulate the random sampling process of φ(x_j, σ_t). As training progresses, the registration performance improves, meaning σ_t decreases; if the whole model converges well, σ_t drops to zero.
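The sampling function φ(x_j, σ_t) with a shrinking neighborhood can be illustrated with a minimal NumPy sketch. The linear decay schedule and the function names (`sigma_schedule`, `sample_neighborhood`) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sigma_schedule(t, t_max, sigma0=3.0):
    """Neighborhood radius sigma_t that decays linearly to zero over training."""
    return sigma0 * max(0.0, 1.0 - t / t_max)

def sample_neighborhood(img, j, sigma, rng):
    """Sample one pixel uniformly from the window of radius ceil(sigma) around
    index j; with sigma == 0 this degenerates to the pixel itself."""
    h, w = img.shape
    r = int(np.ceil(sigma))
    y, x = j
    yy = int(np.clip(y + rng.integers(-r, r + 1), 0, h - 1))
    xx = int(np.clip(x + rng.integers(-r, r + 1), 0, w - 1))
    return img[yy, xx]
```

With a large σ_t the fidelity target may be any pixel near x_j; as σ_t approaches zero the comparison collapses to the exact pixel, recovering a strict pixel-wise constraint.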
The trained MARGANVAC model is then applied to a CT image containing metal artifacts to generate the corresponding image with metal artifacts removed.
Further, residual learning is introduced into the generator G, and self-reconstruction is introduced as a constraint to regularize it. The generator consists of an encoder and a decoder. The encoder maps image samples from the image domain into a latent space in which content features and metal-artifact features are separated; the decoder reconstructs the separated content information into an artifact-free image. Between the encoder and the decoder, a deep sub-network composed of 21 Inception-ResNet modules is inserted to improve the separation of artifact features and content information in the latent space.
Further, during training of the MARGANVAC model, self-reconstruction is introduced as a constraint to regularize the generator so that metal artifacts and content information are better distinguished. In the self-reconstruction branch, the artifact-free image y is duplicated and concatenated into [y, y] before being fed to the generator. Before concatenation, a random affine transformation is applied to y, giving A_3(y), and the concatenated input is [A_3(y), A_3(y)]; correspondingly, the generator output in self-reconstruction becomes G([A_3(y), A_3(y)]; θ_G). The artifact-free reference image corresponding to y is y itself.
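The construction of the self-reconstruction input can be sketched as follows. The toy `random_affine` (a random translation) stands in for the full random affine transform A_3 and is an assumption for illustration:

```python
import numpy as np

def random_affine(img, rng, max_shift=2):
    """Toy stand-in for the random affine transform A(.): a small random translation."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def self_reconstruction_input(y, rng):
    """Duplicate the transformed artifact-free image along the channel axis:
    [A3(y), A3(y)], shape (2, H, W)."""
    a = random_affine(y, rng)
    return np.stack([a, a], axis=0)
```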
Further, for an image x_a, the linear interpolation (LI) method is used in the sinogram domain to estimate the metal-trace portion of the projection data, and the LI-corrected reconstruction x_[LI]a is obtained by the FBP or FDK method. The LI-corrected image x_[LI]a and the artifact-affected image x_a each undergo a random affine transformation, giving A_1(x_[LI]a) and A_1(x_a), which are concatenated along the channel dimension as the input of the generator G.
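The LI step on a single sinogram row can be sketched as below: bins flagged as metal trace are replaced by linear interpolation from their unaffected neighbors. This is a 1-D simplification for illustration; the full method operates on the whole sinogram and is followed by FBP/FDK reconstruction:

```python
import numpy as np

def li_correct_row(sino_row, metal_mask):
    """Linearly interpolate the metal-trace bins of one sinogram row (LI step)."""
    idx = np.arange(sino_row.size)
    keep = ~metal_mask
    # np.interp fills masked positions from the surrounding kept samples.
    return np.interp(idx, idx[keep], sino_row[keep])
```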
Further, residual learning is introduced into the generator G, so the artifact-removed image can be expressed as x̂ = G([x_a, x_[LI]a]; θ_G) + x_[LI]a, where [x_a, x_[LI]a] denotes the channel-wise concatenation of x_a and x_[LI]a; in self-reconstruction the generator output becomes ŷ accordingly. The concatenation of x̂ and A_2(x) is then taken as the input of the registration network R, where A_2(x) denotes a random affine transformation of the reference image x. The output of R is a deformation vector field T_x = R([x̂, A_2(x)]; θ_R), where θ_R are the registration network parameters. After obtaining T_x, the resampled image x̂^re is produced by applying T_x to the input image x̂. Following the same steps used to generate T_x, the deformation vector field T_y is obtained with A_4(y), a random affine transformation of the reference image y, together with the corresponding resampled image ŷ^re in the self-reconstruction branch. The generator output x̂ and A_2(x) serve as inputs of the discriminator D; likewise, in self-reconstruction, ŷ and A_4(y) serve as the discriminator inputs.
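Applying a deformation vector field T to resample an image can be sketched with a nearest-neighbor warp. The actual registration network presumably uses differentiable (e.g. bilinear) resampling; this simplification is for illustration only:

```python
import numpy as np

def warp_nearest(img, field):
    """Resample img with a dense displacement field T of shape (2, H, W), in
    pixels, using nearest-neighbor lookup; border positions are clamped."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    yy = np.clip(np.rint(ys + field[0]).astype(int), 0, h - 1)
    xx = np.clip(np.rint(xs + field[1]).astype(int), 0, w - 1)
    return img[yy, xx]
```

A zero field leaves the image unchanged; a constant vertical field of +1 pulls each output pixel from the row below.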
Further, the MARGANVAC model contains four loss functions: the artifact-correction loss L_cor, the self-reconstruction loss L_rec, the adversarial loss L_adv, and the smoothing loss L_smooth used to constrain the deformation vector fields. The total loss is expressed as the weighted sum L = λ_cor L_cor + λ_rec L_rec + λ_adv L_adv + λ_smooth L_smooth, where each λ is the weight coefficient of the corresponding loss.
Correction loss L_cor: the registration network R and the generator G are trained simultaneously, and the correction loss is designed as

L_cor = E_{x_a ∼ D_a} [ ‖ x̂^re − A_2(x) ‖_1 ],

where the L1 norm lets the generator preserve more detail while reducing metal artifacts, E denotes expectation, x_a ∼ D_a means that x_a is drawn from the dataset, and D_a is the domain of CT images affected by metal artifacts.
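Per batch, this L1 term reduces to a mean absolute difference between the resampled generator output and the affine-transformed reference; a minimal sketch (function name illustrative):

```python
import numpy as np

def correction_loss(resampled_output, affine_gt):
    """L1 fidelity between the registration-resampled generator output and the
    affine-transformed ground truth; the pixel mean approximates the expectation."""
    return np.mean(np.abs(resampled_output - affine_gt))
```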
Self-reconstruction loss L_rec: the training goal is to make the generator learn to remove metal artifacts, while additional constraints make it retain more content information; that is, the generator should remove metal artifacts as far as possible when metal and artifacts are present, and preserve all image content information when they are absent:

L_rec = E_{y ∼ D} [ ‖ ŷ^re − A_4(y) ‖_1 ],

where E denotes expectation, y ∼ D means that y is drawn from the dataset, and D is the domain of images unaffected by artifacts.
Adversarial loss L_adv: adversarial learning drives the generator G to produce more realistic artifact-free images. To achieve this, the generator should be able to distinguish artifact information from content information in the latent space. To strengthen this ability, two adversarial learning strategies are introduced: one improves the ability of the latent space to identify metal artifacts; the other improves its ability to preserve content information. Their input data are, respectively, images affected by metal artifacts and images without metal artifacts. The two strategies are executed simultaneously during adversarial learning, and their respective losses can be written as

L_adv^a = E_x [ log D(A_2(x)) ] + E_{x_a} [ log(1 − D(x̂)) ],
L_adv^y = E_y [ log D(A_4(y)) ] + E_y [ log(1 − D(ŷ)) ],

where the base of log is 2 and D(·) denotes the output of the discriminator D. The total adversarial loss is therefore

L_adv = L_adv^a + L_adv^y.
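With the base-2 logarithm stated above, one batch of the adversarial objective can be sketched as follows. The split into a discriminator term and a generator term follows the standard GAN formulation and is an assumption here:

```python
import numpy as np

def adversarial_losses(d_real, d_fake, eps=1e-12):
    """Discriminator scores lie in (0, 1); logs are base 2 as stated in the text.
    Returns (discriminator loss, generator loss) for one batch."""
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    d_loss = -np.mean(np.log2(d_real)) - np.mean(np.log2(1 - d_fake))
    g_loss = -np.mean(np.log2(d_fake))  # non-saturating generator objective
    return d_loss, g_loss
```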
Smoothing loss L_smooth: when minimizing the combination of the correction loss and the self-reconstruction loss, the resulting deformation vector fields may become non-smooth and physically unrealistic. To address this, a diffusion regularizer on the gradient of the deformation vector field is introduced into the original registration network to constrain the field and keep it smooth. Because the generator has two outputs x̂ and ŷ, there are two corresponding regularization terms, so the total smoothing loss can be expressed as

L_smooth = ‖ ∇T_x ‖² + ‖ ∇T_y ‖²,

where ∇T is the gradient of the deformation vector field T. In practice, the differences between neighboring pixels are used to approximate the gradient at each pixel of the deformation vector field.
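Approximating the gradient of the deformation field by pixel differences, the diffusion regularizer for one field can be sketched as:

```python
import numpy as np

def smooth_loss(field):
    """Diffusion regularizer: mean squared forward differences of the displacement
    field T (shape (2, H, W)) approximate ||grad T||^2."""
    dy = np.diff(field, axis=1)  # vertical finite differences
    dx = np.diff(field, axis=2)  # horizontal finite differences
    return np.mean(dy ** 2) + np.mean(dx ** 2)
```

A constant (including zero) field incurs no penalty; any spatial variation in the displacements is penalized quadratically.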
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the metal artifact removal method based on an adversarial generative network model with time-varying constraints described above.
A computer-readable storage medium storing a computer program for executing the metal artifact removal method based on an adversarial generative network model with time-varying constraints described above.
Beneficial effects: compared with the prior art, the present invention constructs a GAN-based metal artifact removal model with time-varying constraints (MARGANVAC). By introducing a time-varying constraint term, the model applies adaptive fidelity constraints to every part of the image, training the generator more effectively to produce images with better detail fidelity and better metal artifact removal. Compared with current mainstream models, the method not only removes artifacts more effectively but also applies to a wider range of scenarios.
Description of the drawings
Figure 1 is a schematic diagram of a conventional GAN model;
Figure 2 shows results of a MAR method based on the conventional GAN architecture;
Figure 3 is a schematic diagram of a GAN that introduces a registration network as the sampling function;
Figure 4 illustrates affine transformation: (a) is the original image, (b) is the image after applying an affine transformation to (a), and (c) is the difference between (a) and (b);
Figure 5 is a schematic diagram of the overall architecture of the MARGANVAC model according to an embodiment of the present invention;
Figure 6 is a schematic diagram of the generator structure;
Figure 7 is a schematic diagram of the basic building blocks of the generator;
Figure 8 is a schematic diagram of the discriminator structure;
Figure 9 shows a qualitative comparison of different methods on the DeepLesion dataset;
Figure 10 shows a qualitative comparison of different methods on the micro-CT synthetic artifact dataset;
Figure 11 shows qualitative comparison results of different methods on a cone-beam micro-CT dataset containing real metal artifacts.
Detailed description
The present invention is further clarified below with reference to specific embodiments. It should be understood that these embodiments are only used to illustrate the present invention and not to limit its scope; after reading this disclosure, modifications by those skilled in the art to various equivalent forms of the invention all fall within the scope defined by the claims appended to this application.
Let the domain of CT images affected by metal artifacts be D_a, and the domain of artifact-free images be D. The goal of a metal artifact removal network is to find the mapping function f(x_a) = x, where x_a ∈ D_a and x ∈ D. Given a paired dataset {(x_a, x)}, a supervised network can be trained on it to learn f.
A GAN has better fitting ability than a common convolutional neural network (CNN) and can generate images with more anatomical detail, which makes the GAN model a very good base network for MAR. The drawback of the traditional GAN model is that the generated details may be inconsistent with the ground truth. A conventional GAN network is shown in Figure 1. It consists of three parts: the first is the generator sub-network G(x_a; θ_G); the second is the discriminator sub-network D(x_a||x; θ_D); and the third is a fidelity loss based on the L1 or L2 norm or on high-level features extracted by some specific network. In general, it is quite straightforward to design a powerful generator G(x_a; θ_G) and a matching discriminator D(x_a||x; θ_D) that remove metal artifacts well, because the features of metal artifacts differ greatly from those of the image content. Figure 2 shows an artifact-removed image obtained by a traditional GAN-based MAR model. The characteristic textures of metal artifacts, such as streaks and shadows, are eliminated, and the generator produces an image that appears free of metal artifacts; in other words, a powerful generator successfully converts an image x_a in the domain D_a into an image in the domain D. However, the generated image is clearly inconsistent with the ground-truth image x in terms of content. This preliminary experiment shows that the generative adversarial network can be trained well enough to convert between images of different domains, but the fidelity loss is insufficient to guide the generator toward outputs that closely resemble the ground-truth image. A common fidelity loss is
L_p = ||G(x_a; θ_G) − x||_p,  (1.1)
where p is 1 or 2, denoting the L1 or L2 norm. Equation (1.1) is a pixel-level fidelity constraint and is therefore a strong constraint. Many previous studies, such as the well-known ResNet model, show that encoding residual vectors is more efficient than encoding the original vectors, because a residual vector carries less information to encode and thus lightens the network's learning burden. This residual encoding can be called spatial encoding. The present invention extends the idea of residual encoding to the time domain, letting the network learn content features step by step. More specifically, the network first learns the low-frequency features of the image (the relatively smooth parts of a CT image) and then captures the high-frequency features (structures such as edges). The learning process is thus progressive and the network can evolve gradually. To achieve this, the present invention designs a time-varying loss function to replace the time-invariant loss function.
L_var = F_t(G(x_a; θ_G), x)  (1.2)
where F_t is a time-varying function of the generated image G(x_a; θ_G) and the ground truth image x. An image consists of low-frequency and high-frequency components, meaning that an image x can be represented as a combination of the two. Because the low-frequency component of an image is relatively smooth, it can be regarded as a piecewise-constant function, so a pixel x_j has a high probability of being similar to its neighboring pixels. To let the GAN model first learn to generate the low-frequency part, the pixel-level fidelity constraint is relaxed by introducing a neighborhood-similarity constraint, as follows:
L_p = Σ_j ||G(x_a; θ_G)_j − φ(x_j, σ)||_p,  (1.3)
where j is the pixel index, φ is a sampling function that samples a pixel within the neighborhood of pixel x_j, and σ is a parameter controlling the size of that neighborhood. The physical meaning of this loss L_p is that pixel x_j is similar to its neighboring pixels, so G(x_a; θ_G) learns the low-frequency component of x. If the parameter σ is changed dynamically so that it becomes a function of time, a time-varying loss function L_var is obtained:
L_var = Σ_j ||G(x_a; θ_G)_j − φ(x_j, σ_t)||_p.  (1.4)
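The relaxed neighborhood constraint and its degeneration to the ordinary pixel-level loss can be sketched as follows. This is a minimal NumPy illustration only; the sampling function φ is assumed here to draw one uniformly random pixel from a (2σ+1)×(2σ+1) window, which is just one of many valid choices.

```python
import numpy as np

def relaxed_fidelity_l1(gen, gt, sigma, rng):
    """L1 loss where each ground-truth pixel x_j is replaced by a pixel
    sampled uniformly from its (2*sigma+1)^2 neighborhood (phi(x_j, sigma)).
    With sigma == 0 this reduces to the ordinary pixel-level L1 loss."""
    h, w = gt.shape
    if sigma == 0:
        sampled = gt
    else:
        # Random per-pixel offsets within the neighborhood, clipped at borders.
        dy = rng.integers(-sigma, sigma + 1, size=(h, w))
        dx = rng.integers(-sigma, sigma + 1, size=(h, w))
        ys = np.clip(np.arange(h)[:, None] + dy, 0, h - 1)
        xs = np.clip(np.arange(w)[None, :] + dx, 0, w - 1)
        sampled = gt[ys, xs]
    return np.abs(gen - sampled).mean()
```

With σ = 0 the sampled pixel is x_j itself, recovering the strong constraint of Equation (1.1) with p = 1.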
The function φ(x_j, σ_t) samples from the region near pixel x_j; as the iterations proceed, the parameter σ_t gradually converges to zero, so the loss function eventually degenerates into
L_var = ||G(x_a; θ_G) − x||_p,  (1.5)
which is the common fidelity loss, meaning the constraint is being tightened so as to push the GAN to learn the high-frequency components. Next, a function φ(x_j, σ_t) is instantiated to satisfy the above conditions, i.e., the function performs sampling and its parameter σ decreases over time. In theory there are many ways to construct such a function, and similar extensions all fall within the scope of the present invention. The deformable registration network designed as an example in the present invention is a good solution; its advantage is that the network parameters can be adjusted adaptively without manual intervention, so that the generator produces more realistic artifact-removed images. Figure 3 shows a schematic diagram of the GAN network with the registration network introduced. First, a random affine transformation is applied to each input image x_a, and another random affine transformation is applied to its reference image x, yielding the transformed images x_aT and x_T. After x_aT passes through the generator, an artifact-removed image x̂_T is obtained. Since x̂_T and x_T are not pixel-wise corresponding images, in order to compare their differences, x̂_T and x_T are fed into the registration network, which partially corrects the deformation previously introduced by the random affine transformations. As shown in Figure 4, the amplitude of the random affine transformations is small and within a controllable range, so the registration network can correct these random deviations.
At the beginning of training, the registration network performs poorly, so there are positional deviations between its output and the affine-transformed ground truth image x_T. These deviations are variable: every pixel of every input image in every epoch has its own independent deviation. This property helps the network simulate the random sampling process of the function φ(x_j, σ_t). Moreover, as training progresses, the registration performance of the registration network improves, i.e., the parameter σ_t decreases. If the whole model converges well, σ_t drops to zero.
The overall architecture of the proposed MARGANVAC model is shown in Figure 5. MARGANVAC contains three modules: the generator G, the discriminator D, and the registration network R. In addition to the novel training mechanism with time-varying constraints described above, several effective techniques are introduced to train the generator more effectively. First, for the input image x_a, linear interpolation is used to estimate the metal-trace portion of the projection image in the sinogram domain, and the LI-corrected reconstruction x_[LI]a is obtained by the FBP or FDK method. The LI-corrected image x_[LI]a and the artifact-affected image x_a are concatenated along the channel dimension as the input of the generator G. Second, to further reduce the generator's learning burden, residual learning is introduced in G. The image after metal artifact removal is then expressed as
x̂ = x_a + G([x_[LI]a, x_a]; θ_G),  (1.6)
where [m, n] denotes the concatenation of images m and n. To better distinguish metal artifacts from content information, self-reconstruction is introduced as a constraint to regularize the generator. In the self-reconstruction process, an artifact-free image y is concatenated into [y, y] and fed to the generator, so the self-reconstructed image is expressed as
ŷ = y + G([y, y]; θ_G).  (1.7)
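The channel-wise concatenation and residual formulation above can be sketched as follows. This is an illustrative NumPy sketch only: the stub generator and the choice of adding (rather than subtracting) the predicted residual are assumptions for the sketch, not the trained network of the invention.

```python
import numpy as np

def remove_artifacts(x_li, x_a, generator):
    """Concatenate [x_LI, x_a] along the channel axis and add the
    generator's predicted residual to x_a (residual learning)."""
    inp = np.stack([x_li, x_a], axis=0)  # shape (2, H, W): two input channels
    return x_a + generator(inp)

# Stub standing in for G: a perfectly trained residual branch would
# output zero when the input already contains no artifacts.
def zero_generator(inp):
    return np.zeros_like(inp[1])
```

For an artifact-free input, the residual branch ideally predicts zero, so the image passes through unchanged; this is the same intuition behind the self-reconstruction constraint.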
For the registration network R to work properly, a random affine transformation is applied to each input artifact image x_a and ground-truth image x; denote the corresponding affine-transformed images by A1(x_a) and A2(x), respectively. In this way, the image with metal artifacts removed is expressed as
x̂_T = A1(x_a) + G([A1(x_[LI]a), A1(x_a)]; θ_G).  (1.8)
Correspondingly, in self-reconstruction the output becomes
ŷ_T = A3(y) + G([A3(y), A3(y)]; θ_G),  (1.9)
where A3(y) denotes the random affine transformation applied to the artifact-free image y in the self-reconstruction branch.
Then the concatenation of x̂_T and A2(x) is taken as the input of the registration network R, whose output is a deformation vector field T_x:
T_x = R([x̂_T, A2(x)]; θ_R).  (1.10)
After the deformation vector field T_x is obtained, the resampled image x̃_T is obtained by applying T_x to the input image x̂_T:
x̃_T = x̂_T ∘ T_x,  (1.11)
where ∘ denotes warping (resampling) by the deformation field.
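Applying a deformation vector field to resample an image can be sketched as follows. Nearest-neighbor sampling is used here for brevity; the actual registration network would use differentiable bilinear interpolation.

```python
import numpy as np

def warp(image, field):
    """Resample `image` with deformation field `field` (shape (2, H, W),
    per-pixel row/column displacements), using nearest-neighbor sampling."""
    h, w = image.shape
    rows = np.clip(np.round(np.arange(h)[:, None] + field[0]).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.arange(w)[None, :] + field[1]).astype(int), 0, w - 1)
    return image[rows, cols]
```

A zero field is the identity warp; a constant field shifts the image rigidly.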
Following the same steps used to generate T_x, the deformation vector field T_y is obtained:
T_y = R([ŷ_T, A4(y)]; θ_R),  (1.12)
and, in the self-reconstruction case, the corresponding resampled image is
ỹ_T = ŷ_T ∘ T_y.  (1.13)
To enable the generator to separate content information from metal artifacts well, a specially designed generator is introduced, as shown in Figure 6. The generator consists of an encoder and a decoder. The encoder maps image samples from the image domain into a latent space in which the features of content information and metal artifacts are separated; the decoder reconstructs the separated content information into an artifact-free image. Between the encoder and decoder, a deep subnetwork composed of 21 Inception-ResNet modules is introduced to improve the separation of artifact features and content information in the latent space. The Inception structure proposed in GoogLeNet makes full use of convolution kernels of different sizes and therefore extracts features well. The Inception-ResNet module is formed by introducing a residual structure into the Inception structure, so it is easy to build a deep network composed of multiple Inception-ResNet modules that strengthens the separation of content information and artifact features without worrying about convergence. The structure of the Inception-ResNet module used here is shown in Figure 7(c); a 1×1 convolution operator is added to reduce the dimensionality of the input feature maps and thus the amount of computation. In the encoder, the core module is the downsampling module shown in Figure 7(b), which consists of a convolutional layer, an instance normalization layer, and a ReLU activation. Instance normalization is chosen over batch normalization because it has been shown to perform better for image generation tasks with small batches. In the decoder, the core module is the upsampling module shown in Figure 7(e); it is similar to the downsampling module except that a transposed convolution replaces the convolution. The discriminator D is built on the PatchGAN discriminator; its structure is shown in Figure 8. The registration network R is based on the U-Net structure but is deeper than a conventional U-Net, so features at all levels can be exploited to effectively facilitate registration.
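The dimensionality-reduction role of the 1×1 convolution can be sketched as follows: it is a per-pixel linear map over channels. This is an illustrative NumPy sketch, not the PyTorch module used in the network.

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution: x has shape (C_in, H, W), w has shape (C_out, C_in).
    Each spatial position is mapped independently from C_in to C_out channels."""
    return np.tensordot(w, x, axes=([1], [0]))  # shape (C_out, H, W)

def conv3x3_mults(c_in, c_out, h, w):
    """Multiply count of a 3x3 convolution on an HxW feature map."""
    return 9 * c_in * c_out * h * w
```

For example, a direct 256→256 3×3 convolution costs 9·256·256·H·W multiplies, while a 1×1 reduction to 64 channels followed by a 64→256 3×3 convolution is much cheaper, which is the motivation stated above.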
Loss function design
The learning process of the model encourages the generator to reduce metal artifacts while preserving content information. In each adversarial learning iteration, the generator G outputs an image with reduced metal artifacts; as the performance of both G and D improves, the output images look more like artifact-free images. The role of the introduced registration network is to help the generator G learn content features progressively, so the network weights θ_R and θ_G must be updated simultaneously during training. As shown in Figure 5, MARGANVAC contains four losses: the artifact correction loss L_Corr, the self-reconstruction loss L_Rec, the adversarial loss L_Adv, and the smoothing loss L_Smooth used to constrain the deformation vector fields. The total loss is the weighted sum of these losses:
L_total = λ_Corr L_Corr + λ_Rec L_Rec + λ_Adv L_Adv + λ_smooth L_Smooth,
where the λ terms are hyperparameters that balance the importance of each loss during training.
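The weighted combination of the four losses can be sketched as follows; the default weights are the values given in the implementation details later in this document.

```python
def total_loss(l_corr, l_rec, l_adv, l_smooth,
               lam_corr=20.0, lam_rec=20.0, lam_adv=1.0, lam_smooth=10.0):
    """Weighted sum of the four MARGANVAC loss terms."""
    return (lam_corr * l_corr + lam_rec * l_rec
            + lam_adv * l_adv + lam_smooth * l_smooth)
```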
Correction loss L_Corr. Since the whole model is based on supervised learning, the most direct and effective constraint is the correction loss, which minimizes the difference between the artifact-removed image and the ground truth image. However, after the random affine transformation of the input image, a strictly pixel-wise corresponding ground truth image no longer exists. To solve this problem, the registration network R and the generator G are trained simultaneously, and the correction loss is designed as
L_Corr = ||x̃_T − A2(x)||_1,
where x̃_T is the resampled image obtained by Equation (1.11) and A2(x) is the affine-transformed ground truth image shown in Equation (1.10). In many image-to-image translation tasks the L1-norm loss has proven more effective at recovering image detail, so the L1 norm is used here to push the generator to preserve more detail while reducing metal artifacts.
Self-reconstruction loss L_Rec. The training objective is to make the generator learn to remove metal artifacts. At the same time, additional constraints are needed to encourage the generator to keep existing content information unchanged. The core goal is for the network to learn to tell metal artifacts apart from content information, i.e., to reduce metal artifacts when metal and artifacts are present, and to retain all image content when they are not. The self-reconstruction loss is therefore introduced as
L_Rec = ||ỹ_T − A4(y)||_1,
where ỹ_T is the resampled image obtained by Equation (1.13) and A4(y) is the affine transformation of the artifact-free image y.
Adversarial loss L_Adv. Adversarial learning encourages the generator G to produce more realistic artifact-free images. To achieve this, the generator should be able to distinguish artifact information from content information in the latent space. To strengthen this ability, two adversarial learning strategies are introduced: one improves the latent space's ability to identify metal artifacts, and the other improves its ability to preserve content information. Their input data are, respectively, images affected by metal artifacts and images without metal artifacts. The two strategies are carried out simultaneously during adversarial learning, and each contributes its own adversarial loss term.
The total adversarial loss L_Adv is therefore the sum of these two terms.
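The source does not spell out the exact adversarial objective. As one common choice, a least-squares GAN formulation over PatchGAN score maps can be sketched as follows; this formulation is an assumption for illustration, not necessarily the objective used by the invention.

```python
import numpy as np

def lsgan_losses(d_real, d_fake):
    """Least-squares GAN objective. d_real / d_fake are discriminator
    outputs, e.g. PatchGAN score maps; real targets 1, fake targets 0."""
    d_loss = 0.5 * (np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))
    g_loss = 0.5 * np.mean((d_fake - 1.0) ** 2)  # generator wants fakes scored as real
    return d_loss, g_loss
```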
Smoothing loss L_Smooth. When minimizing the combination of the correction loss and the self-reconstruction loss, the resulting deformation vector fields may become non-smooth and physically unrealistic. To address this, a diffusion regularizer on the gradient of the deformation vector field, as in the original registration network, is introduced to constrain the field and keep it smooth. Because the generator has two outputs, x̂_T and ŷ_T, there are two corresponding deformation-field regularization terms, so the total smoothing loss can be expressed as
L_Smooth = Σ ||∇T_x||² + Σ ||∇T_y||²,
where ∇T is the gradient of the deformation vector field T. In practice, the gradient at each pixel of the deformation vector field is approximated by differences between neighboring pixels.
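The diffusion regularizer approximated by pixel differences can be sketched as:

```python
import numpy as np

def smoothness_loss(field):
    """Diffusion regularizer: mean squared finite difference of a
    deformation field (shape (2, H, W)), approximating ||grad T||^2."""
    dy = np.diff(field, axis=1)  # differences along rows
    dx = np.diff(field, axis=2)  # differences along columns
    return (dy ** 2).mean() + (dx ** 2).mean()
```

A constant (rigid-shift) field costs nothing, while rapidly varying, physically unrealistic fields are penalized.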
Obviously, those skilled in the art should understand that the steps of the above metal artifact removal method based on an adversarial generative network model with time-varying constraints according to the embodiments of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network of multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that given here, or they may be fabricated separately as individual integrated circuit modules, or several of their modules or steps may be fabricated as a single integrated circuit module. Thus, embodiments of the present invention are not limited to any specific combination of hardware and software.
Experimental preparation
(1) Dataset composition
Dataset 1: DeepLesion simulated metal artifact dataset
4,000 artifact-free CT images were selected from the DeepLesion dataset, and 90 of the 100 binarized metal masks of different shapes and sizes provided by CNNMAR were used to synthesize metal artifacts, yielding a total of 360,000 paired CT images as the training dataset. In addition, another 200 artifact-free CT images from DeepLesion and the remaining 10 binarized metal masks were used to generate a test dataset of 2,000 paired CT images. Fan-beam projection is used, the reconstruction method is FBP, and the image size is 256×256.
Dataset 2: real artifact dataset
The real artifact dataset is a Micro CT dataset: a collection of real CT projections of bone samples with and without metal implants. Paired training data are synthesized with a synthesis method based on real metal artifacts. After the synthetic projection data with inserted metal traces are obtained, the corresponding reconstructed images are obtained by the FDK method. The training set contains 3,868 images and the test set contains 537 images; the image size is 364×364.
(2) Implementation and training details
The network is built on the PyTorch deep learning framework and runs on a computer equipped with an Nvidia 2080Ti GPU. During training, the Adam optimizer with parameters (β1, β2) = (0.5, 0.999) is used to optimize the loss function. For the DeepLesion dataset, the batch size is set to 2, the learning rate to 0.0001, and the number of training epochs to 5. For the cone-beam Micro CT dataset, the batch size is set to 1, the learning rate to 0.0001, and 70 epochs are trained in total. The loss weights are λ_smooth = 10, λ_Adv = 1, λ_Corr = 20, and λ_Rec = 20.
(3) Evaluation metrics
On the paired datasets with synthetic artifacts, the structural similarity index (SSIM), the peak signal-to-noise ratio (PSNR), and the standard deviation are used to quantitatively evaluate the performance of all MAR methods, including the proposed method and other classical MAR methods. Specifically, higher SSIM and PSNR values indicate better artifact removal and better preservation of content information. The standard deviation evaluates the different methods along the dimension of result stability, since beyond SSIM and PSNR it is also necessary to examine how stable each model's results are across different data. On the real cone-beam Micro CT dataset with real metal artifacts, only a qualitative evaluation based on visual assessment is performed, owing to the lack of paired artifact-free CT images.
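For reference, PSNR, one of the two fidelity metrics used here, can be computed as follows; this is a standard NumPy sketch, and libraries such as scikit-image provide equivalent implementations.

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means the artifact-removed
    image is closer to the reference."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```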
Validation of the MARGANVAC model
To demonstrate the excellent artifact-removal performance of the proposed method, several representative MAR methods were tested for comparison: the traditional LI and FSMAR algorithms; the supervised algorithms CNNMAR, cGANMAR, and U-Net; the dual-domain method InDuDoNet; and the unsupervised ADN, CycleGAN, and attention-guided β-CycleGAN algorithms. All methods were run on the public DeepLesion dataset and the private Micro CT dataset. Except for the model proposed in the present invention, all methods are based on publicly available code or on the models described in the published papers.
Because training a dual-domain network involves projecting and reconstructing CT images, and the FDK reconstruction of cone-beam CT consumes a large amount of computing resources, previous dual-domain networks have all been trained with fan-beam CT. The Micro CT data are cone-beam CT, so the dual-domain experiments are conducted only on the DeepLesion dataset and not on the Micro CT dataset.
Evaluation on the DeepLesion dataset
Table 1.1 Comparison of PSNR (dB) and SSIM of different methods on the DeepLesion dataset
In this part of the experiments, the various MAR methods are analyzed qualitatively and quantitatively on the paired DeepLesion dataset. The quantitative results are computed over the 2,000 test images. As Table 1.1 shows, the deep learning methods outperform the traditional methods, and the supervised methods generally outperform the unsupervised ones. Among all the reproduced supervised methods, the dual-domain model InDuDoNet performs best. The PSNR and SSIM scores of the proposed model are second only to InDuDoNet, but InDuDoNet is only applicable to fan-beam CT, whereas the proposed model can handle both fan-beam and cone-beam CT.
The artifact-affected image, its corresponding artifact-free image, and the images produced by the various MAR methods are shown in Figure 9, which presents an example chest slice. In the metal-artifact-affected image in Figure 9, almost no tissue or organ structure can be seen in the region adjacent to the metal implant; streak artifacts run through the entire image, and the image content near the implant is severely corrupted by metal artifacts. In the artifact-removed images, most of the streak artifacts far from the metal are eliminated, and the deep learning methods outperform the traditional LI and FSMAR methods in this respect. After processing by the traditional LI and FSMAR methods, the tissue structure remains blurred and secondary artifacts appear. The supervised methods CNNMAR, cGANMAR, InDuDoNet, and U-Net outperform the traditional methods, but none of them recovers the missing content information well. In contrast, the unsupervised methods tend to generate rich detail automatically, much of which is inconsistent with the structures that are actually missing. As a result, structures resembling tissues, organs, or lesions that do not actually exist can be seen. β-CycleGAN and ADN improve on this but cannot avoid it entirely, because unsupervised methods lack fidelity constraints. In the artifact-removed images produced by the proposed MARGANVAC method, several vessel lumens are clearly restored and the vertebral body boundary is intact; such crisp details are not visible in the other images. These results show that the proposed method performs better at reducing metal artifacts while preserving content information. Although InDuDoNet is slightly better than the proposed method on the SSIM and PSNR metrics, Figure 9 shows that the textures recovered by the proposed method are closer to the real image than those of InDuDoNet.
The standard deviations (Std) of SSIM and PSNR across the different MAR methods can be used to compare their stability. As Table 1.1 shows, the proposed method ranks third in PSNR Std and first in SSIM Std. Therefore, compared with the other MAR methods, the proposed method yields more stable results while maintaining good image quality.
Evaluation on the cone-beam Micro CT artifact-transfer dataset
To verify the performance of the proposed MAR method in real-world applications, a dataset acquired with cone-beam Micro CT was prepared. Paired data were generated by transferring artifact-affected metal traces into projections without metal implants. The quantitative results of all MAR methods are shown in Table 1.2. In terms of SSIM and PSNR, the quality of the original images contaminated by metal artifacts is much worse than in the simulation experiments, indicating that introducing real metal artifacts to synthesize artifact-affected images can cause more severe image degradation. The quantitative results in Table 1.2 show that the performance of all MAR methods drops, and the proposed method outperforms the other MAR methods; the lower values are probably due to the artifact-affected original images being of lower quality than those of the simulation experiments. The qualitative comparison of the different MAR methods is shown in Figure 10, which presents a cross-sectional bone image affected by artifacts. In Figure 10, a metal implant is inserted into the bone, causing severe image degradation and loss of content information. The shadow artifacts are stronger than in the simulation experiments, and part of the bone-tissue detail is completely lost: some trabeculae disappear, and the intact cortical bone is split into several pieces. The results of the unsupervised methods CycleGAN and ADN show that the gaps between the segmented pieces of cortical bone are not well filled and the missing trabeculae are not well restored; β-CycleGAN does slightly better than CycleGAN and ADN in both respects. Compared with the unsupervised methods, the supervised methods, especially CNNMAR, improve considerably in these two respects, while U-Net tends to over-smooth and blur the image. Comparing the images obtained by the proposed method and by CNNMAR, the cortical bone boundary restored by the proposed method is sharper and closer to the ground truth image. Soft-tissue regions near metal implants are more easily contaminated by metal artifacts because their CT values are much smaller than those of bone. In clinical applications, however, correct soft-tissue imaging is crucial for diagnosis, so soft-tissue recovery should be a key indicator of metal artifact reduction. Comparing the soft tissue shows that the proposed method achieves better soft-tissue recovery while removing metal artifacts. In summary, all the deep-learning-based MAR methods above remove most metal artifacts, but they differ considerably in how well they preserve content information: the supervised methods outperform the unsupervised ones, and the proposed method performs best.
Table 1.2 Comparison of PSNR (dB) and SSIM of different methods on the Micro CT synthetic artifact dataset
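The PSNR (in dB) and SSIM figures reported in Table 1.2 follow standard definitions. A minimal sketch of how such metrics can be computed is given below; the function names are illustrative, and the SSIM here is a simplified global variant (no sliding window) rather than the windowed SSIM typically used in published benchmarks.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a ground-truth and a corrected image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified global SSIM (single window over the whole image), for illustration only."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    x = reference.astype(np.float64)
    y = test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Higher PSNR and SSIM indicate that the artifact-corrected image is closer to the ground truth, which is how the methods in Table 1.2 are ranked.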
Performance evaluation on cone-beam Micro CT real artifact images
The method proposed in the present invention was further tested on a Micro CT dataset affected by real metal artifacts to assess its performance in practical applications. Since no ground-truth images are available, performance can only be evaluated qualitatively. All MAR models were first trained on the paired Micro CT training set generated with the artifact-transfer method, and then tested on real artifact images from cone-beam Micro CT. Figure 11 shows a real artifact-affected image and the corresponding artifact-removed images produced by the different MAR methods. The real artifacts are visually very similar to the synthesized artifacts in Figure 10. Comparing magnified regions of the result images shows that the supervised methods remove artifacts better than the unsupervised ones, indicating that the training of the supervised methods was successful; in other words, the paired dataset synthesized by artifact transfer successfully extends supervised methods to practical applications. Among all compared methods, CNNMAR and the method of the present invention clearly perform better in both removing metal artifacts and restoring image content information. Closer inspection of the magnified details shows that the trabecular anatomy obtained by the present method is sharper, almost no residual artifacts remain in the soft-tissue region around the metal, and the soft-tissue boundaries are smoother. This result is consistent with the simulation experiments above.
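The projection-domain artifact-transfer step used to build the paired training set can be sketched as follows. This is a minimal illustration of the idea described above (copying the metal-trace region of an artifact-affected sinogram into a metal-free sinogram); all array names, shapes, and the mask convention are assumptions, and the resulting sinogram would still need to be reconstructed (e.g. by FBP) to obtain the paired image.

```python
import numpy as np

def transfer_metal_trace(clean_sino: np.ndarray,
                         artifact_sino: np.ndarray,
                         metal_trace_mask: np.ndarray) -> np.ndarray:
    """Copy projection values inside the metal-trace mask from an
    artifact-affected sinogram into a clean (metal-free) sinogram,
    yielding a synthetic artifact-affected sinogram paired with the
    clean one."""
    assert clean_sino.shape == artifact_sino.shape == metal_trace_mask.shape
    paired = clean_sino.copy()  # leave the clean input untouched
    paired[metal_trace_mask] = artifact_sino[metal_trace_mask]
    return paired
```

Because the transfer only replaces values inside the metal trace, the reconstructed image inherits realistic metal artifacts while its ground truth (the reconstruction of the clean sinogram) remains available for supervised training.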
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310878651.XA CN116894783A (en) | 2023-07-18 | 2023-07-18 | Metal artifact removal method based on adversarial generative network model with time-varying constraints |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116894783A true CN116894783A (en) | 2023-10-17 |
Family
ID=88314602
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310878651.XA Pending CN116894783A (en) | 2023-07-18 | 2023-07-18 | Metal artifact removal method based on adversarial generative network model with time-varying constraints |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116894783A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117914656A (en) * | 2024-03-13 | 2024-04-19 | 北京航空航天大学 | End-to-end communication system design method based on neural network |
CN117914656B (en) * | 2024-03-13 | 2024-05-10 | 北京航空航天大学 | A design method for end-to-end communication system based on neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110827216B (en) | Multi-generator generation countermeasure network learning method for image denoising | |
CN110728729B (en) | An Attention Mechanism Based Unsupervised CT Projection Domain Data Restoration Method | |
CN109754403A (en) | A method and system for automatic tumor segmentation in CT images | |
CN110930416A (en) | MRI image prostate segmentation method based on U-shaped network | |
CN116739899B (en) | Image super-resolution reconstruction method based on SAUGAN network | |
CN112017131B (en) | CT image metal artifact removing method and device and computer readable storage medium | |
WO2022246677A1 (en) | Method for reconstructing enhanced ct image | |
CN114863225B (en) | Image processing model training method, image processing model generation device, image processing model equipment and image processing model medium | |
CN111814891A (en) | Medical image synthesis method, device and storage medium | |
Niu et al. | Low-dimensional manifold-constrained disentanglement network for metal artifact reduction | |
CN106909947A (en) | CT image metal artifacts removing method and elimination system based on Mean Shift algorithms | |
Chan et al. | An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction | |
KR102505908B1 (en) | Medical Image Fusion System | |
CN108038840B (en) | Image processing method and device, image processing equipment and storage medium | |
CN116894783A (en) | Metal artifact removal method based on adversarial generative network model with time-varying constraints | |
CN116645283A (en) | Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network | |
Li et al. | MARGANVAC: metal artifact reduction method based on generative adversarial network with variable constraints | |
CN116703750A (en) | Image defogging method and system based on edge attention and multi-order differential loss | |
CN111899315A (en) | Method for reconstructing low-dose image by using multi-scale feature perception depth network | |
CN117726706B (en) | CT metal artifact correction and super-resolution method for unsupervised deep dictionary learning | |
CN118628599A (en) | A new method for removing metal artifacts from CT images | |
CN116524191B (en) | Blood vessel segmentation method using deep learning network integrated with geodesic voting algorithm | |
CN118154451A (en) | Deep learning CT image denoising method based on structure non-alignment pairing data set | |
CN109658464B (en) | Sparse angle CT image reconstruction method based on minimum weighted nuclear norm | |
Zhu et al. | CT metal artifact correction assisted by the deep learning-based metal segmentation on the projection domain |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |