CN115984401A - A dynamic PET image reconstruction method based on model-driven deep learning - Google Patents


Info

Publication number
CN115984401A
CN115984401A
Authority
CN
China
Prior art keywords
dynamic
dynamic pet
model
image reconstruction
pet image
Prior art date
Legal status: Pending
Application number
CN202310042621.5A
Other languages
Chinese (zh)
Inventor
刘华锋
胡睿
Current Assignee: Zhejiang University (ZJU)
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202310042621.5A
Publication of CN115984401A


Landscapes

  • Nuclear Medicine (AREA)

Abstract

The invention discloses a dynamic PET image reconstruction method based on model-driven deep learning. It uses 3D spatiotemporal convolutions to extract the temporal and spatial correlations of dynamic projection data simultaneously, and integrates the forward- and back-projection operators into the reconstruction network, giving the method strong physical constraints and interpretability. The invention splits the dynamic PET image reconstruction problem into several cascaded reconstruction blocks, each comprising a primal network that updates the primal image-domain variable and a dual network that updates the dual measurement-domain variable. The invention can reconstruct a high-quality dynamic PET tracer activity distribution map from ultra-low-count dynamic PET projection data, addressing the poor interpretability and poor reconstruction quality of current mainstream methods.

Description

A dynamic PET image reconstruction method based on model-driven deep learning

Technical Field

The present invention belongs to the technical field of PET imaging, and in particular relates to a dynamic PET image reconstruction method based on model-driven deep learning.

Background Art

Dynamic PET imaging can quantitatively characterize physiological parameters in biological tissue and plays an indispensable role in tumor detection, cardiac disease characterization, and drug development. However, image reconstruction from dynamic measurement data is very challenging, because the PET reconstruction problem is ill-posed and single frames of dynamic PET data are low-count; in the early time frames especially, the per-frame counts are very low and the reconstructed images are severely corrupted by noise.

In addition, with the continuous improvement of imaging technology and detectors, ultra-fast time-frame imaging has become feasible in hardware, but existing reconstruction algorithms cannot produce high-quality reconstructions from ultra-low-count projection data. Traditional dynamic image reconstruction algorithms such as filtered back-projection and maximum-likelihood expectation maximization (MLEM) cannot model the temporal signal, so the reconstructed images differ greatly across frames.

With the development of deep learning, a number of new deep-learning-based solutions have emerged in PET image reconstruction, such as direct learning and model-based learning. The reference [G. Wang and J. Qi, "PET image reconstruction using kernel method," IEEE Transactions on Medical Imaging, vol. 34, no. 1, pp. 61–71, 2014] improves reconstruction quality by introducing a temporal prior on top of the kernel method; in practice, however, it does not perform well on ultra-fast time-frame, ultra-low-count data, partly because its prior information is obtained from a single reconstruction and does not exploit the advantages of data-driven methods. The reference [B. Wang and H. Liu, "FBP-Net for direct reconstruction of dynamic PET images," Physics in Medicine & Biology, vol. 65, no. 23, p. 235008, 2020] achieves fairly good dynamic reconstruction by combining the traditional filtered back-projection algorithm with a denoising neural network, but because it does not account for the system-matrix constraints tied to the physical characteristics of the PET scanner, it has poor interpretability and generalization.

It can be seen that existing techniques either fail to exploit the temporal correlation of dynamic projection data or lack physical constraints, leading to unstable reconstructions and poor performance at ultra-low counts, which limits the development of ultra-fast time-frame PET imaging. To obtain good reconstruction quality, existing dynamic reconstruction methods often require projection data acquired over a long time, yet long scan times inevitably introduce motion artifacts, further degrading the reconstructed image quality.

Summary of the Invention

In view of the above, the present invention provides a dynamic PET image reconstruction method based on model-driven deep learning that can reconstruct high-quality dynamic PET tracer activity distribution maps from ultra-low-count dynamic projection data; it effectively exploits the spatiotemporal correlation of the dynamic projection data, incorporates the constraint of the physical projection matrix, and therefore has strong interpretability.

A dynamic PET image reconstruction method based on model-driven deep learning comprises the following steps:

(1) Use a detector to scan biological tissue injected with a radiopharmaceutical and acquire the corresponding dynamic sinogram projection data Y;

(2) Reconstruct the dynamic sinogram projection data Y to obtain the corresponding dynamic PET tracer activity distribution map X;

(3) Execute steps (1) and (2) multiple times to obtain a large number of samples, each containing dynamic sinogram projection data Y and its corresponding dynamic PET tracer activity distribution map X, and divide all samples into a training set, a validation set, and a test set;

(4) Convert the dynamic reconstruction problem into a Poisson log-likelihood optimization problem with a regularization term according to the dynamic PET measurement equation, and use the properties of the dual variable to transform this optimization problem into the corresponding saddle-point problem;

(5) Solve the saddle-point problem by using a primal-dual network to alternately update the primal and dual variables, thereby constructing the STPD-Net model for dynamic PET image reconstruction; the model is a cascade of several reconstruction blocks, each consisting of a primal network connected to a dual network;

(6) Train the STPD-Net model using Y from the training-set samples as input and X as the label, obtaining the final dynamic PET image reconstruction model;

(7) Feed Y from a test-set sample into the dynamic PET image reconstruction model to directly reconstruct and output the corresponding dynamic PET tracer activity distribution map.

Furthermore, the dynamic PET measurement equation is expressed as follows:

Y = G·X + R

where G is the system response matrix and R is the random-and-scatter noise term.

Furthermore, the Poisson log-likelihood optimization problem with a regularization term in step (4) is expressed as follows:

$$\hat{X}=\arg\min_{X}\;-L(Y|X)+\lambda R(X)$$

where L(Y|X) is the Poisson likelihood term,

$$L(Y|X)=\sum_{i=1}^{I}\sum_{t=1}^{T}\left(Y_{i,t}\log\bar{Y}_{i,t}-\bar{Y}_{i,t}\right)$$

R(·) denotes the regularization term, I is the total number of detectors, T is the total number of time frames, λ is the regularization penalty coefficient, Y_{i,t} is the element in row i and column t of the dynamic sinogram projection data Y, and \bar{Y}_{i,t} is the element in row i and column t of the expected sinogram \bar{Y}, with

$$\bar{Y}=E[Y]=G\cdot X+R$$

where E(·) is the expectation function.
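The likelihood term can be evaluated directly on tensors; the sketch below is an illustration, with `poisson_log_likelihood` a hypothetical helper name and the small `eps` an added numerical-stability assumption not in the formula.

```python
import torch

def poisson_log_likelihood(Y: torch.Tensor, Y_bar: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    # L(Y|X) = sum_{i,t} ( Y_{i,t} * log(Ybar_{i,t}) - Ybar_{i,t} ),
    # where Ybar = E[Y] = G·X + R is the expected sinogram.
    return (Y * torch.log(Y_bar + eps) - Y_bar).sum()

# Each term y*log(m) - m is maximized at m = y, so for fixed data Y the
# likelihood peaks when the expected sinogram matches the measurements:
Y = torch.tensor([[3.0, 5.0], [2.0, 7.0]])
assert poisson_log_likelihood(Y, Y) > poisson_log_likelihood(Y, Y + 1.0)
assert poisson_log_likelihood(Y, Y) > poisson_log_likelihood(Y, 0.5 * Y)
```

Maximizing L(Y|X) (equivalently, minimizing −L plus the regularizer) therefore drives G·X + R toward the measured counts.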

Furthermore, the saddle-point problem in step (4) is expressed as follows:

$$\min_{X}\;\sup_{h}\;\langle G\cdot X,\,h\rangle-L^{*}(Y|h)+\lambda R(X)$$

where sup denotes the supremum over the dual variable h, and L^{*}(Y|·) denotes the convex conjugate of L(Y|·).
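As a brief aside (a standard convex-analysis step, not spelled out in the patent text), the saddle-point form follows from the Fenchel biconjugate: writing the negative likelihood as a convex function f of the projection G·X and using f = f**,

```latex
f(GX)\;=\;\sup_{h}\,\langle GX,\,h\rangle-f^{*}(h)
\quad\Longrightarrow\quad
\min_{X}\,-L(Y|X)+\lambda R(X)
\;=\;\min_{X}\,\sup_{h}\,\langle G\cdot X,\,h\rangle-L^{*}(Y|h)+\lambda R(X)
```

which is the form whose primal and dual variables the cascaded network alternately updates.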

Furthermore, in step (5) the primal-dual network alternately updates the primal and dual variables to solve the saddle-point problem as follows:

$$h_{i,t}^{k}=D\!\left(h_{i,t}^{k-1},\,[G\cdot X^{k-1}]_{i,t},\,Y_{i,t}\right)$$

$$X_{j,t}^{k}=P\!\left(X_{j,t}^{k-1},\,[G^{*}h^{k}]_{j,t}\right)$$

where P(·) denotes the primal network and D(·) denotes the dual network; X_{j,t}^{k} is the element in row j and column t of the dynamic PET tracer activity distribution map X^{k} at the k-th iteration, and X_{j,t}^{k-1} the corresponding element of X^{k-1} at the (k−1)-th iteration; h_{i,t}^{k} is the element in row i and column t of the dual variable h^{k} at the k-th iteration, and h_{i,t}^{k-1} the corresponding element of h^{k-1} at the (k−1)-th iteration; G^{*} denotes the conjugate (adjoint) of the system response matrix G; k is a natural number greater than 0; j is a natural number with 1 ≤ j ≤ N, where N is the total number of pixels in the PET tracer activity distribution map.
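The alternating updates above can be sketched as an unrolled loop. This is a minimal illustration only: the callables `D` and `P` below are hypothetical gradient-like stand-ins, not the patent's learned 3D convolutional networks, and `G` is a toy identity operator.

```python
import torch

def unrolled_primal_dual(Y, G, dual_nets, primal_nets):
    """One unrolled pass through K cascaded blocks, mirroring
        h^k = D(h^{k-1}, G X^{k-1}, Y)   (dual update, sinogram domain)
        X^k = P(X^{k-1}, G* h^k)         (primal update, image domain)."""
    n_pix = G.shape[1]
    X = torch.zeros(n_pix)        # primal variable initialized to zero
    h = torch.zeros_like(Y)       # dual variable initialized to zero
    for D, P in zip(dual_nets, primal_nets):
        h = D(h, G @ X, Y)        # dual update sees the forward projection
        X = P(X, G.T @ h)         # G.T plays the role of the adjoint G*
    return X

# Hypothetical single-step stand-ins for the learned networks:
step = 0.1
D = lambda h, GX, Y: h + step * (GX - Y)
P = lambda X, bp: X - step * bp

G = torch.eye(4)                  # toy "system matrix"
Y = torch.tensor([1.0, 2.0, 3.0, 4.0])
X = unrolled_primal_dual(Y, G, [D] * 50, [P] * 50)
assert X.shape == (4,) and torch.isfinite(X).all()
```

In STPD-Net each block has its own learned parameters; the loop structure, zero initialization, and the roles of G and G.T are the parts taken from the text.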

Furthermore, the inputs of the dual network are the dual variable h^{k-1}, the dynamic PET tracer activity distribution map X^{k-1}, and the dynamic sinogram projection data Y, and its output is the iteratively updated dual variable h^{k}. X^{k-1} first undergoes a forward-projection operation and is then concatenated with h^{k-1} and Y along the channel dimension; the concatenated result is passed through four 3D spatiotemporal convolution layers in sequence for spatiotemporal feature extraction; finally, the extracted features are concatenated with h^{k-1} along the channel dimension to produce the output h^{k}.

Furthermore, the inputs of the primal network are the dual variable h^{k} and the dynamic PET tracer activity distribution map X^{k-1}, and its output is the iteratively updated map X^{k}. h^{k} first undergoes a back-projection operation and is then concatenated with X^{k-1} along the channel dimension; the concatenated result is passed through four 3D spatiotemporal convolution layers in sequence for spatiotemporal feature extraction; finally, the extracted features are concatenated with X^{k-1} along the channel dimension to produce the output X^{k}.

Furthermore, the 3D spatiotemporal convolution layers use a kernel size of 3×3×3.
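The two network blocks can be sketched in PyTorch as below. Only the ingredients stated in the text are taken from the patent (inputs, channel-dimension concatenation, four 3×3×3 Conv3d layers, final concatenation with the previous iterate); the channel width, the PReLU activations, and the 1×1 fusion convolution that maps the final concatenation back to one channel are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class DualNet(nn.Module):
    """Sketch of one dual (measurement-domain) update for inputs shaped
    (batch, channel, T, H, W): concatenate [h, forward-projected X, Y],
    run four 3x3x3 Conv3d layers, then fuse with h again."""
    def __init__(self, feat: int = 32):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv3d(3, feat, 3, padding=1), nn.PReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.PReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.PReLU(),
            nn.Conv3d(feat, 1, 3, padding=1),
        )
        self.fuse = nn.Conv3d(2, 1, 1)  # assumed 1x1 merge of [features, h]

    def forward(self, h, fp_x, y):
        z = torch.cat([h, fp_x, y], dim=1)           # channel concat
        return self.fuse(torch.cat([self.convs(z), h], dim=1))

class PrimalNet(nn.Module):
    """Sketch of one primal (image-domain) update: concatenate
    [back-projected h, X], four 3x3x3 Conv3d layers, then fuse with X."""
    def __init__(self, feat: int = 32):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv3d(2, feat, 3, padding=1), nn.PReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.PReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.PReLU(),
            nn.Conv3d(feat, 1, 3, padding=1),
        )
        self.fuse = nn.Conv3d(2, 1, 1)

    def forward(self, x, bp_h):
        z = torch.cat([bp_h, x], dim=1)
        return self.fuse(torch.cat([self.convs(z), x], dim=1))

# Shape check on a tiny dummy stack (T=4, 8x8 "sinogram"/image):
h = torch.zeros(1, 1, 4, 8, 8)
out = DualNet()(h, torch.rand_like(h), torch.rand_like(h))
out2 = PrimalNet()(h, torch.rand_like(h))
assert out.shape == h.shape and out2.shape == h.shape
```

Because the convolutions slide over the frame axis as well as the two spatial axes, each layer mixes information across neighboring time frames, which is how the 3×3×3 kernels capture the temporal correlation the text emphasizes.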

Furthermore, the process of training the model in step (6) is as follows:

6.1 Initialize the model parameters, including the bias vector and weight matrix of each layer, the learning rate, and the optimizer;

6.2 Feed the dynamic sinogram projection data Y from the training-set samples into the model, forward-propagate to obtain the corresponding dynamic PET tracer activity distribution map, and compute the loss function between this result and the label;

6.3 Using the optimizer, iteratively update the model parameters by gradient descent according to the loss function until the loss converges, completing the training.
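Steps 6.1–6.3 amount to a standard supervised training loop. A sketch follows, using the Adam optimizer and MSE loss named in the embodiment; the fixed epoch budget and the tiny linear stand-in model are simplifications for illustration, not the patent's network or stopping rule.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs: int = 200, lr: float = 1e-4):
    """Sketch of steps 6.1-6.3: Adam optimizer, MSE loss between the
    reconstructed activity maps and the labels, gradient-based updates."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for Y, X_label in loader:
            opt.zero_grad()
            loss = loss_fn(model(Y), X_label)  # forward pass + loss vs. label
            loss.backward()                    # backpropagate gradients
            opt.step()                         # parameter update
    return model

# Smoke test with a linear stand-in "reconstruction model":
model = nn.Linear(6, 6, bias=False)
data = [(torch.rand(2, 6), torch.rand(2, 6))]
w0 = model.weight.detach().clone()
train(model, data, epochs=5)
assert not torch.equal(w0, model.weight)   # parameters were updated
```

In the full method, `model` would be the cascaded STPD-Net and `loader` would yield (sinogram, activity-map) pairs from the training set.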

Furthermore, after training is completed, the model is validated with the validation-set samples, and the model that performs best on the validation set is taken as the final dynamic PET image reconstruction model.

The present invention proposes using 3D spatiotemporal convolutions to extract the spatiotemporal correlations of dynamic PET projection data simultaneously. Compared with other reconstruction methods based on 2D convolutional neural networks, it models the dependencies between different time frames of the dynamic projection data well, restores single-frame images effectively, and also captures the structural similarity between PET images at different time frames, outperforming existing deep-learning-based dynamic reconstruction methods.

The present invention proposes replacing the proximal operators with a primal-dual network for updating the primal and dual variables, unrolling the primal-dual hybrid gradient algorithm into a model-based deep neural network. This guarantees a degree of interpretability through the mathematical derivation while retaining strong representation-learning capacity, and is the first attempt at a model-based deep learning method in dynamic PET image reconstruction.

The present invention performs very well on ultra-low-count dynamic projection data and, in practical applications, can substantially reduce the long patient waiting times of dynamic acquisition. In our experiments, we validated the proposed method on both simulated data and clinical mouse scan data and found that it reconstructs well from dynamic data with only a few thousand counts per frame, indicating that the proposed method is particularly suitable for whole-body dynamic PET imaging and parametric PET imaging.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic flow chart of the steps of the dynamic PET image reconstruction method of the present invention.

FIG. 2 is a schematic diagram of the overall structure of the STPD-Net model of the present invention.

FIG. 3 compares the reconstruction results of different methods on different slices of low-count dynamic PET projection data; from left to right: the MLEM reconstruction, the KEM-ST reconstruction, the LPD reconstruction, the FBPnet reconstruction, the reconstruction of the present invention, and the ground-truth label; from top to bottom: the third, eighth, and fifteenth frames.

DETAILED DESCRIPTION

To describe the present invention more specifically, the technical solution of the present invention is explained in detail below with reference to the accompanying drawings and specific embodiments.

As shown in FIG. 1, the dynamic PET image reconstruction method based on model-driven deep learning of the present invention comprises the following steps:

Training phase

(1) Inject a radiopharmaceutical into the target tissue and scan it, acquire dynamic sinogram projection data Y, and reconstruct the corresponding PET tracer activity distribution map X.

(2) According to the principle of dynamic PET imaging, establish the measurement equation model:

Y = G·X + R

where G is the system response matrix and R is the random-and-scatter noise term; the system response matrix is computed with a ray-driven simulation method.
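A toy simulation of this measurement model is sketched below. The dense random nonnegative matrix is only a placeholder for G (a real system matrix comes from a ray-driven simulation of the scanner geometry and is sparse), and the constant R term is an assumption; the Poisson draw reflects the count statistics the likelihood model assumes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Y = G·X + R with Poisson counting noise.
N, I, T = 16, 24, 5                       # pixels, detector bins, time frames
G = rng.uniform(0.0, 1.0, size=(I, N))    # placeholder system matrix
X = rng.uniform(0.0, 10.0, size=(N, T))   # dynamic activity, one column per frame
R = np.full((I, T), 0.5)                  # assumed random+scatter mean term

Y_bar = G @ X + R                         # expected (mean) sinogram
Y = rng.poisson(Y_bar).astype(float)      # measured counts are Poisson-distributed

assert Y.shape == (I, T) and (Y >= 0).all()
```

Each column of Y is one time frame's sinogram; stacking the T columns gives the dynamic sinogram data that the reconstruction network takes as input.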

The above imaging inverse problem is solved via Poisson log-likelihood optimization with a regularization term:

$$\hat{X}=\arg\min_{X}\;-L(Y|X)+\lambda R(X)$$

where

$$L(Y|X)=\sum_{i=1}^{I}\sum_{t=1}^{T}\left(Y_{i,t}\log\bar{Y}_{i,t}-\bar{Y}_{i,t}\right)$$

is the Poisson likelihood term, I is the total number of detectors, T is the total number of time frames, \bar{Y} = G·X + R is the mean of the dynamic sinogram projection data, R(·) denotes the regularization term, and λ is the regularization penalty coefficient.

Using the properties of the dual problem, this problem is transformed into saddle-point form:

$$\min_{X}\;\sup_{h}\;\langle G\cdot X,\,h\rangle-L^{*}(Y|h)+\lambda R(X)$$

The saddle-point problem is solved by alternately updating the primal and dual variables, i.e., STPD-Net. As shown in FIG. 2, STPD-Net is a cascade of several reconstruction blocks, each containing a primal network P that updates the primal variable X and a dual network D that updates the introduced dual variable:

$$h^{k}=D\!\left(h^{k-1},\,G\cdot X^{k-1},\,Y\right)$$

$$X^{k}=P\!\left(X^{k-1},\,G^{*}h^{k}\right)$$

where h denotes the dual variable, k is the iteration index, and G^{*} denotes the conjugate operator of the system matrix, i.e., the back-projection operation, computed with the transpose of the system matrix.
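That the transpose implements the conjugate operator is just the matrix adjoint identity ⟨Gx, y⟩ = ⟨x, Gᵀy⟩; a small numerical check (with a placeholder random matrix standing in for the real system matrix) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Back-projection as the adjoint of forward projection: for a matrix
# system model G, the conjugate G* is the transpose G.T, so that
# <G x, y> = <x, G.T y> for every image x and sinogram y.
G = rng.random((24, 16))     # placeholder system matrix (I x N)
x = rng.random(16)           # image-domain vector
y = rng.random(24)           # measurement-domain vector

forward = G @ x              # projection into the sinogram domain
back = G.T @ y               # back-projection into the image domain
assert np.isclose(forward @ y, x @ back)
```

This adjoint pairing is what lets the dual network operate in the measurement domain and the primal network in the image domain while keeping the two domains consistently coupled.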

The inputs of the dual network are the dual variable h^{k-1}, the dynamic PET image X^{k-1}, and the projection data Y, and its output is the updated dual variable h^{k}. The dynamic PET image X^{k-1} first undergoes a forward-projection operation and is then concatenated with the dual variable h^{k-1} and the projection data Y along the channel dimension; the result then passes through four 3D spatiotemporal convolution layers for spatiotemporal feature extraction, with a 3D kernel size of 3×3×3; finally, it is concatenated with the dual variable h^{k-1} along the channel dimension to output the updated dual variable h^{k}.

The inputs of the primal network are the dynamic PET image X^{k-1} and the dual variable h^{k} updated by the dual network. The updated dual variable h^{k} first undergoes a back-projection operation and is then concatenated with the dynamic PET image X^{k-1} along the channel dimension; the result then passes through four 3D spatiotemporal convolution layers for spatiotemporal feature extraction; finally, it is concatenated with the dynamic PET image X^{k-1} along the channel dimension to output the reconstructed PET image X^{k}.

(3) In the training phase, train the reconstruction model with the dynamic PET tracer activity distribution maps as labels and the dynamic sinogram data as input.

First, initialize the model parameters, including network parameter initialization and reconstructed-image initialization; the network parameters use Kaiming initialization, and both the initial image and the initial dual variable are initialized to zero.
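A sketch of this initialization follows. The patent does not say which Kaiming variant is used; the normal variant below is an assumption, as is applying it to every convolution and linear layer.

```python
import torch
import torch.nn as nn

def init_weights(m: nn.Module) -> None:
    """Kaiming-initialize weights and zero the biases (assumed variant)."""
    if isinstance(m, (nn.Conv3d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1),
                    nn.Conv3d(8, 1, 3, padding=1))
net.apply(init_weights)

# The image and dual-variable iterates are zero-initialized:
X0 = torch.zeros(1, 1, 4, 8, 8)
h0 = torch.zeros(1, 1, 4, 8, 8)
assert all(m.bias.abs().sum() == 0 for m in net if isinstance(m, nn.Conv3d))
```

`nn.Module.apply` walks every submodule, so the same initializer covers all cascaded reconstruction blocks at once.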

Then feed the dynamic sinogram projection data Y of the training samples into the STPD-Net model; forward propagation yields the output of the last cascaded block as the model's reconstruction result.

Next, compute the MSE loss between the model output and the dynamic PET tracer activity distribution map, together with the gradients of the loss with respect to each variable, and use the Adam optimizer to update all learnable parameters of the model until the loss value essentially stops changing, at which point training ends.

Finally, validate with the validation-set samples and take the model that performs best on the validation set as the final dynamic PET image reconstruction model.

Inference phase

(1) Measure or simulate dynamic sinogram projection data.

(2) With the dynamic sinogram data and the initial image as input, the trained reconstruction model directly outputs the dynamic PET tracer activity distribution map.

Below we run experiments on simulated ultra-low-count dynamic PET data to validate the effectiveness of this embodiment. The dataset contains 40 3D brain phantoms, each of size 128×128×40×18, with 18 frames acquired in total; the simulated tracer is 18F-FDG, and the corresponding dynamic sinogram data are obtained by simulating projections and adding noise, with about 1e4 counts per single-frame sinogram. Of these, 33 samples are used for training, 2 for validation, and the remaining 5 for testing.

STPD-Net is implemented with PyTorch 1.7.0 and trained on an Ubuntu server with a TITAN-X GPU; the optimizer is Adam, the initial learning rate is 0.0001, the batch size is 4, and training runs for 200 epochs in total; the model that performs best on the validation set is used for testing.

FIG. 3 shows reconstructions by the present invention and other methods on different slices of low-count projection data, where KEM-ST and FBPnet are the current mainstream methods. Both the MLEM and KEM-ST reconstructions exhibit severe noise; the LPD method ignores the correlation of the data across time frames, so the image structure varies greatly between frames; FBPnet controls noise well, but structural information is not recovered well and its accuracy needs improvement. The method proposed by the present invention not only recovers structure best but also achieves the best noise suppression and accuracy.

The above description of the embodiments is intended to help those of ordinary skill in the art understand and apply the present invention. Those familiar with the art can clearly make various modifications to the above embodiments and apply the general principles described here to other embodiments without creative effort. Therefore, the present invention is not limited to the above embodiments, and improvements and modifications made by those skilled in the art based on this disclosure shall fall within the protection scope of the present invention.

Claims (10)

1. A dynamic PET image reconstruction method based on model-driven deep learning, comprising the following steps:
(1) detecting biological tissue injected with a radiopharmaceutical by using a detector, and acquiring corresponding dynamic sinogram projection data Y;
(2) reconstructing the dynamic sinogram projection data Y to obtain a corresponding dynamic PET tracer activity distribution map X;
(3) executing steps (1) and (2) multiple times to obtain a large number of samples, wherein each sample comprises dynamic sinogram projection data Y and the corresponding dynamic PET tracer activity distribution map X, and dividing all the samples into a training set, a validation set, and a test set;
(4) converting the dynamic reconstruction problem into a Poisson log-likelihood optimization problem with a regularization term according to a dynamic PET measurement equation, and converting the optimization problem into a corresponding saddle-point problem by using the properties of a dual variable;
(5) alternately updating a primal variable and a dual variable by using a primal-dual network to solve the saddle-point problem, thereby constructing an STPD-Net model for dynamic PET image reconstruction, wherein the model is formed by cascading a plurality of reconstruction blocks, and each reconstruction block is formed by connecting a primal network and a dual network;
(6) training the STPD-Net model by using Y in the training-set samples as input and X as labels, thereby obtaining a final dynamic PET image reconstruction model;
(7) inputting Y of a test-set sample into the dynamic PET image reconstruction model, and directly reconstructing and outputting the corresponding dynamic PET tracer activity distribution map.
2. The dynamic PET image reconstruction method according to claim 1, characterized in that the dynamic PET measurement equation is expressed as follows:
Y = G·X + R
wherein: G is the system response matrix, and R is the random-and-scatter noise term.
3. The dynamic PET image reconstruction method according to claim 2, characterized in that the Poisson log-likelihood optimization problem with a regularization term in step (4) is expressed as follows:

$$\hat{X}=\arg\min_{X}\;-L(Y|X)+\lambda R(X)$$

wherein: L(Y|X) is the Poisson likelihood term,

$$L(Y|X)=\sum_{i=1}^{I}\sum_{t=1}^{T}\left(Y_{i,t}\log\bar{Y}_{i,t}-\bar{Y}_{i,t}\right)$$

R(·) denotes the regularization term, I is the total number of detectors, T is the total number of time frames, λ is the regularization penalty coefficient, Y_{i,t} is the element in row i and column t of the dynamic sinogram projection data Y, \bar{Y}_{i,t} is the element in row i and column t of the expected sinogram \bar{Y} = E[Y] = G·X + R, and E(·) is the expectation function.
4. The dynamic PET image reconstruction method according to claim 3, characterized in that the saddle-point problem in step (4) is expressed as follows:

$$\min_{X}\;\sup_{h}\;\langle G\cdot X,\,h\rangle-L^{*}(Y|h)+\lambda R(X)$$

wherein: sup denotes the supremum over the set, and L^{*}(Y|·) denotes the conjugate of L(Y|·).
5. The dynamic PET image reconstruction method according to claim 4, characterized in that: in step (5), the primal-dual network solves the saddle point problem by alternately updating the primal and dual variables as follows:

$$h^{k} = D\big(h^{k-1},\; G X^{k-1},\; Y\big)$$

$$X^{k} = P\big(X^{k-1},\; G^{*} h^{k}\big)$$

wherein P(·) denotes the primal network and D(·) denotes the dual network; X^{k}_{j,t} and X^{k-1}_{j,t} are the elements in the j-th row and t-th column of the dynamic PET tracer activity distribution maps X^{k} and X^{k-1} at the k-th and (k−1)-th iterations, respectively; h^{k}_{i,t} and h^{k-1}_{i,t} are the elements in the i-th row and t-th column of the dual variables h^{k} and h^{k-1} at the k-th and (k−1)-th iterations, respectively; G* denotes the adjoint (conjugate) of the system response matrix G; k is a natural number greater than 0; j is a natural number with 1 ≤ j ≤ N; and N is the total number of pixels in the PET tracer activity distribution map.
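The alternating update scheme of claim 5 can be sketched as follows. The learned networks P(·) and D(·) are replaced here by simple hand-written proximal/gradient updates, and all shapes, data and step sizes are illustrative assumptions, so this demonstrates only the iteration structure, not STPD-Net itself:

```python
import numpy as np

rng = np.random.default_rng(1)
I, N, T = 8, 5, 3
G = rng.random((I, N))              # hypothetical system response matrix
X_true = rng.random((N, T)) * 5
Y = G @ X_true                      # noiseless data for this sketch

sigma, tau = 0.05, 0.05             # dual / primal step sizes (hypothetical)
h = np.zeros((I, T))                # dual variable in the projection domain
X = np.zeros((N, T))                # primal variable (activity distribution)

def D(h_prev, GX, Y):
    # stand-in for the dual network: proximal ascent on the dual variable
    return (h_prev + sigma * (GX - Y)) / (1 + sigma)

def P(X_prev, Gstar_h):
    # stand-in for the primal network: gradient step + nonnegativity clip
    return np.maximum(X_prev - tau * Gstar_h, 0.0)

for k in range(200):
    h = D(h, G @ X, Y)              # h^k = D(h^{k-1}, G X^{k-1}, Y)
    X = P(X, G.T @ h)               # X^k = P(X^{k-1}, G* h^k), G* -> G.T here

print(np.linalg.norm(G @ X - Y) < np.linalg.norm(Y))  # residual has shrunk
```

In the patented method each of these two updates is a learned network applied at one unrolled iteration, so the loop above corresponds to stacking k stages of the reconstruction model.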
6. The dynamic PET image reconstruction method according to claim 5, characterized in that: the inputs of the dual network comprise the dual variable h^{k-1}, the dynamic PET tracer activity distribution map X^{k-1} and the dynamic sinogram projection data Y, and the output is the iteratively updated dual variable h^{k}; X^{k-1} is first forward-projected and then concatenated with h^{k-1} and Y along the channel dimension, the concatenated result is passed sequentially through four 3D spatio-temporal convolution layers for spatio-temporal feature extraction, and the extracted features are finally concatenated with h^{k-1} along the channel dimension to produce the output h^{k}.
7. The dynamic PET image reconstruction method according to claim 5, characterized in that: the inputs of the primal network comprise the dual variable h^{k} and the dynamic PET tracer activity distribution map X^{k-1}, and the output is the iteratively updated dynamic PET tracer activity distribution map X^{k}; h^{k} is first back-projected and then concatenated with X^{k-1} along the channel dimension, the concatenated result is passed sequentially through four 3D spatio-temporal convolution layers for spatio-temporal feature extraction, and the extracted features are finally concatenated with X^{k-1} along the channel dimension to produce the output X^{k}.
8. The dynamic PET image reconstruction method according to claim 6 or 7, characterized in that: each 3D spatio-temporal convolution layer uses a convolution kernel of size 3 × 3.
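The data flow of claims 6 and 7 (projection, channel concatenation, four 3D spatio-temporal convolutions, final concatenation) can be sketched for the dual-network branch as follows. The naive single-kernel convolution and the channel-mixing step are stand-ins for the unspecified STPD-Net layers (which have learned weights, multiple channels and activations), and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
T, H, W = 4, 6, 6                       # time frames x sinogram height x width

def conv3d_same(x, k):
    """Naive 'same' 3D convolution of a (T, H, W) volume with a 3x3x3 kernel."""
    T_, H_, W_ = x.shape
    xp = np.pad(x, 1)                   # zero-pad all three dimensions by 1
    out = np.zeros_like(x)
    for t in range(T_):
        for i in range(H_):
            for j in range(W_):
                out[t, i, j] = np.sum(xp[t:t+3, i:i+3, j:j+3] * k)
    return out

h_prev = rng.random((T, H, W))          # dual variable h^{k-1}
fp_X   = rng.random((T, H, W))          # forward projection of X^{k-1}
Y      = rng.random((T, H, W))          # dynamic sinogram projection data

x = np.stack([h_prev, fp_X, Y])         # concatenate along the channel dimension
x = x.mean(axis=0)                      # collapse channels (stand-in for a mixing conv)
kernel = rng.random((3, 3, 3)) / 27.0
for _ in range(4):                      # four 3D spatio-temporal conv layers
    x = conv3d_same(x, kernel)

h_new = np.stack([x, h_prev])           # final concatenation with h^{k-1}
print(h_new.shape)                      # (2, 4, 6, 6)
```

The primal-network branch of claim 7 follows the same pattern with back-projection in place of forward projection and X^{k-1} in place of h^{k-1}.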
9. The dynamic PET image reconstruction method according to claim 1, characterized in that: the model in step (6) is trained as follows:
6.1 Initialize the model parameters, including the bias vector and weight matrix of each layer, the learning rate and the optimizer;
6.2 Input the dynamic sinogram projection data Y of the training set samples into the model, obtain the corresponding dynamic PET tracer activity distribution map from the forward-propagation output of the model, and compute the loss function between this result and the label;
6.3 Use the optimizer to iteratively update the model parameters by gradient descent according to the loss function until the loss function converges, at which point training is complete.
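The training procedure of steps 6.1-6.3 can be sketched with a toy linear reconstruction model trained by plain gradient descent on an MSE loss; the model, loss, learning rate and synthetic data are illustrative stand-ins for STPD-Net and its actual training configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
I, N, T, n_samples = 8, 5, 3, 20
G = rng.random((I, N))                              # hypothetical system matrix

# Synthetic training pairs: labels X and inputs Y = G @ X (per-sample)
X_train = rng.random((n_samples, N, T))
Y_train = np.einsum('in,snt->sit', G, X_train)

# 6.1: initialize parameters and learning rate
W = np.zeros((N, I))                                # toy linear "reconstruction model"
lr = 0.01

def loss_fn(W):
    X_hat = np.einsum('ni,sit->snt', W, Y_train)    # forward propagation
    return np.mean((X_hat - X_train) ** 2)          # MSE against the labels

for epoch in range(500):
    # 6.2: forward pass and loss; 6.3: gradient-descent parameter update
    X_hat = np.einsum('ni,sit->snt', W, Y_train)
    grad = 2 * np.einsum('snt,sit->ni', X_hat - X_train, Y_train) / X_hat.size
    W -= lr * grad

print(loss_fn(W) < loss_fn(np.zeros((N, I))))       # loss fell during training
```

In the patented method the same loop applies with STPD-Net as the model, its loss between reconstructed maps and label maps, and an optimizer such as Adam in place of the raw gradient step.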
10. The dynamic PET image reconstruction method according to claim 9, characterized in that: after training is completed, the model is verified with the validation set samples, and the model that performs best on the validation set is taken as the final dynamic PET image reconstruction model.
CN202310042621.5A 2023-01-28 2023-01-28 A dynamic PET image reconstruction method based on model-driven deep learning Pending CN115984401A (en)


Publications (1)

Publication Number Publication Date
CN115984401A true CN115984401A (en) 2023-04-18

Family

ID=85974095




Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437152A (en) * 2023-12-21 2024-01-23 之江实验室 PET iterative reconstruction method and system based on diffusion model
CN117437152B (en) * 2023-12-21 2024-04-02 之江实验室 A PET iterative reconstruction method and system based on diffusion model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination