WO2023272491A1 - PET image reconstruction method based on joint dictionary learning and deep network - Google Patents

PET image reconstruction method based on joint dictionary learning and deep network

Info

Publication number
WO2023272491A1
Authority
WO
WIPO (PCT)
Prior art keywords
dose
dictionary
low
standard
sample vector
Prior art date
Application number
PCT/CN2021/103141
Other languages
English (en)
French (fr)
Inventor
郑海荣
李彦明
万丽雯
张娜
徐英杰
Original Assignee
深圳高性能医疗器械国家研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳高性能医疗器械国家研究院有限公司
Priority to PCT/CN2021/103141 priority Critical patent/WO2023272491A1/zh
Publication of WO2023272491A1 publication Critical patent/WO2023272491A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation

Definitions

  • This application relates to the field of medical image processing, in particular to a PET image reconstruction method based on joint dictionary learning and deep network.
  • A PET-MR system combines magnetic resonance imaging (MRI) and positron emission tomography (PET).
  • This application provides a PET image reconstruction method based on joint dictionary learning and deep network: the training samples and the constructed joint dictionary are used to obtain the sparse coefficients corresponding to the low dose, MR and standard dose respectively, and the sparse coefficients are introduced as sample vectors into the DNN network, which maps low-dose sample vectors to standard-dose sample vectors. This avoids the image noise and loss of detail caused by reducing the PET tracer dose during PET imaging, and at the same time realizes the prediction of standard-dose PET images from low-dose PET images.
  • the PET image reconstruction method based on joint dictionary learning and deep network includes the following steps:
  • training samples include low-dose patches and corresponding MR patches and standard-dose patches
  • the joint dictionary includes a low dose dictionary, an MR dictionary and a standard dose dictionary
  • the low-dose sample vector, MR sample vector and standard-dose sample vector are respectively the sparse coefficients of the low-dose patch, MR patch and standard-dose patch under their corresponding low-dose dictionary, MR dictionary and standard-dose dictionary;
  • the low-dose PET image and its corresponding MR image are preprocessed, and the standard-dose PET image is obtained by prediction using the acquired joint dictionary and the trained DNN network.
  • the step of training the DNN network according to the low-dose sample vector, the MR sample vector and the standard-dose sample vector until convergence, and obtaining a mapping model from the low-dose sample vector to the standard-dose sample vector specifically includes:
  • the low-dose sample vector and MR sample vector are used as the input of the DNN network, the standard-dose sample vector is used as the target, and the DNN network is trained until convergence to obtain the mapping model from the low-dose sample vector to the standard-dose sample vector.
  • the DNN network includes an input layer, a hidden layer and an output layer, and the hidden layer adopts a 3-layer network, and the number of neurons in each layer is 2048.
  • the dictionary learning adopts the method of alternately obtaining sparse coefficients and updating the joint dictionary, and the specific steps include:
  • the sparse coefficients and joint dictionary are updated iteratively until convergence.
  • the acquisition of the first sparse coefficient adopts the OMP method
  • the dictionary update adopts the KSVD method.
  • before the step of constructing the DNN network, the method also includes: preprocessing the low-dose sample vector, the MR sample vector and the standard-dose sample vector;
  • the preprocessing step includes: taking the sparse indices corresponding to the non-zero entries of the low-dose patch, the MR sample vector and the standard-dose patch as the low-dose sample vector, the MR sample vector and the standard-dose sample vector respectively; where a sparse index is a vector formed by concatenating the non-zero indices with their corresponding sparse coefficients.
  • the step of acquiring training samples, wherein the training samples include low-dose patches and corresponding MR patches and standard-dose patches specifically includes:
  • before the step of obtaining the joint dictionary from the training samples by dictionary learning, the method also includes:
  • Duplicate low-dose patches and their corresponding MR patches and standard-dose patches are removed from the training samples.
  • the specific steps of preprocessing the low-dose PET image and its corresponding MR image, and using the acquired joint dictionary and the trained DNN network to predict the standard-dose PET image include:
  • before the step of inputting it as a low-dose sample vector into the trained DNN network model to obtain the predicted standard-dose sample vector, the method also includes: preprocessing the obtained sparse coefficients;
  • the preprocessing step includes: taking the sparse indices corresponding to the non-zero entries of the low-dose patch as the low-dose sample vector; where a sparse index is a vector formed by concatenating the non-zero indices with their corresponding sparse coefficients.
  • compared with the prior art, the present application has the following beneficial effects: based on joint dictionary learning and the DNN network, the standard-dose PET image can be restored from the low-dose PET image, which overcomes the drawback that traditional denoising methods cannot preserve detail; the present application does not combine the low-dose sparse coefficient matrix directly with the standard-dose dictionary for image prediction, but instead combines the sparse coefficient matrix acquired by dictionary learning with the DNN network to obtain the standard-dose sparse coefficient matrix, which is then combined with the standard-dose dictionary for image prediction, improving the similarity between the predicted image and the real standard-dose PET image; at the same time, this technique introduces an MR image prior, which effectively improves the quality of the predicted image.
  • Fig. 1 is a schematic diagram of the dictionary learning principle;
  • Fig. 2 is a schematic diagram of the dictionary update of this application;
  • Fig. 3 is a schematic diagram of sample vector acquisition in this application;
  • Fig. 4 is a schematic diagram of the DNN network;
  • Fig. 5 is a comparison of a low-dose PET image, a standard-dose PET image and the image reconstructed by the method of the present application;
  • Fig. 6 is a flow chart of the reconstruction method of the present application.
  • "First" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features. In the description of the present application, unless otherwise specified, "plurality" means two or more.
  • the embodiment of the present application provides a PET image reconstruction method based on joint dictionary learning and deep network, and the specific steps include:
  • S101 Acquire training PET images, where the training PET images include low-dose PET images, MR images and standard-dose PET images; a PET image is a set of consecutive tomographic images of the human body.
  • S102 Randomly select a small block from the low-dose PET image and unfold it into a one-dimensional vector to form a low-dose patch, and at the same time select small blocks at the corresponding positions in the MR image and the standard-dose image and unfold them into one-dimensional vectors to form the MR patch and standard-dose patch.
  • S103 Remove repeated patches, and remove patches that are not conducive to obtaining the joint dictionary according to the variance; then use the processed low-dose patches and their corresponding MR patches and standard-dose patches as training samples.
  • the joint dictionary is obtained by alternating between sparse coding and dictionary updates, where the joint dictionary includes a low-dose dictionary, an MR dictionary and a standard-dose dictionary.
  • the principle of dictionary learning is shown in Figure 1.
  • X represents the sparse coefficients;
  • D represents the feature matrix (the dictionary);
  • the main purpose of dictionary learning is to find the D and X that minimize the dictionary learning objective (the reconstruction error ||Y - DX|| under the sparsity constraint).
  • the joint dictionary is initialized with the K-means clustering algorithm: K cluster centers are obtained from the training samples and used as the initialized joint dictionary, which is then normalized.
  • the acquisition of sparse coefficients adopts the OMP method
  • the update of the dictionary adopts the KSVD method.
  • the sparsity of the sparse coefficients of the low-dose patch, MR patch and standard-dose patch under the corresponding low-dose dictionary, MR dictionary and standard-dose dictionary is 3, that is, each coefficient vector has at most 3 non-zero values.
  • the obtained sparse coefficients can be used directly as sample vectors; alternatively, the non-zero sparse indices can be taken as a vector and concatenated with the corresponding sparse coefficients to form a vector, which is used as the sample vector.
  • the specific method is shown in Figure 3.
  • the DNN network includes an input layer, a hidden layer and an output layer.
  • the hidden layer adopts a 3-layer network, and the number of neurons in each layer is 2048.
  • the block vector is combined with the trained low-dose dictionary and MR dictionary to obtain the corresponding sparse coefficients, which are input as the low-dose sample vector into the trained DNN network model to obtain the predicted standard-dose sample vector;
  • the obtained sparse coefficients can be used directly as the low-dose sample vector;
  • alternatively, the non-zero sparse indices can be taken as a vector and concatenated with the corresponding sparse coefficients to form a vector, which is used as the low-dose sample vector;
  • the predicted standard dose sample vector is combined with the standard dose dictionary to obtain the standard dose image block, and the standard dose image block is combined according to the set step size to obtain the predicted standard dose PET image.
  • the average PSNR increased from 29.65 to 30.86. It can be seen that the reconstructed image is closer to the standard-dose PET image than the original low-dose PET image, achieving an ideal effect.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a PET image reconstruction method based on joint dictionary learning and deep network, relating to the field of medical image processing. The method includes the following steps: acquiring training samples and obtaining a joint dictionary from the training samples by dictionary learning, where the training samples include low-dose patches and their corresponding MR patches and standard-dose patches; constructing a DNN network; training the DNN network to convergence with the low-dose sample vectors and standard-dose sample vectors to obtain a mapping model, where the low-dose sample vectors, MR sample vectors and standard-dose sample vectors are respectively the sparse coefficients of the low-dose patches, MR patches and standard-dose patches under their corresponding low-dose dictionary, MR dictionary and standard-dose dictionary; and preprocessing a low-dose PET image and its corresponding MR image, and predicting the standard-dose PET image using the obtained joint dictionary and the trained DNN network. This application is used to reduce noise in low-dose PET images and enhance image details.

Description

PET image reconstruction method based on joint dictionary learning and deep network

Technical Field

This application relates to the field of medical image processing, and in particular to a PET image reconstruction method based on joint dictionary learning and deep network.

Background Art

A PET-MR system combines magnetic resonance imaging (MRI) and positron emission tomography (PET). It provides both PET and MR examination capabilities and offers high sensitivity, good accuracy and low radiation exposure. However, the radioactivity of a standard dose of PET tracer carries a significant health risk, and its cumulative effect increases the likelihood of various diseases.

Reducing the PET tracer dose is a feasible solution, but its biggest problem is that low-dose PET images are noisy and lose detail, which is unfavorable for diagnosis. At present there are two main routes for predicting standard-dose PET images from low-dose PET images: traditional algorithms based on sparse representation, dominated by dictionary learning, and deep learning based on neural networks, dominated by CNN and GAN adversarial networks. Existing methods, however, still suffer from image noise and loss of detail, and, because PET images have high resolution, the prediction process is computationally heavy and iterates slowly.

Summary of the Invention

This application provides a PET image reconstruction method based on joint dictionary learning and deep network. Using the training samples and the constructed joint dictionary, the sparse coefficients corresponding to the low-dose, MR and standard-dose data are obtained separately and fed as sample vectors into a DNN network that maps low-dose sample vectors to standard-dose sample vectors. This avoids the image noise and loss of detail caused by reducing the PET tracer dose during PET imaging, while realizing the prediction of standard-dose PET images from low-dose PET images.

To achieve the above objective, the PET image reconstruction method based on joint dictionary learning and deep network includes the following steps:

acquiring training samples, where the training samples include low-dose patches and their corresponding MR patches and standard-dose patches;

obtaining a joint dictionary from the training samples by dictionary learning, where the joint dictionary includes a low-dose dictionary, an MR dictionary and a standard-dose dictionary;

acquiring low-dose sample vectors, MR sample vectors and standard-dose sample vectors, where the low-dose sample vectors, MR sample vectors and standard-dose sample vectors are respectively the sparse coefficients of the low-dose patches, MR patches and standard-dose patches under their corresponding low-dose dictionary, MR dictionary and standard-dose dictionary;

constructing a DNN network;

training the DNN network with the low-dose sample vectors, MR sample vectors and standard-dose sample vectors until convergence, to obtain a mapping model from low-dose sample vectors to standard-dose sample vectors;

preprocessing a low-dose PET image and its corresponding MR image, and predicting the standard-dose PET image using the obtained joint dictionary and the trained DNN network.
Further, the step of training the DNN network with the low-dose sample vectors, MR sample vectors and standard-dose sample vectors until convergence to obtain a mapping model from low-dose sample vectors to standard-dose sample vectors specifically includes:

taking the low-dose sample vectors and MR sample vectors as the input of the DNN network and the standard-dose sample vectors as the target, and training the DNN network until convergence to obtain the mapping model from low-dose sample vectors to standard-dose sample vectors.

Further, in the step of constructing the DNN network, the DNN network includes an input layer, a hidden layer and an output layer; the hidden layer uses a 3-layer network with 2048 neurons per layer.

Further, the dictionary learning alternates between obtaining the sparse coefficients and updating the joint dictionary, with the following specific steps:

constructing an initialized joint dictionary;

obtaining the sparse coefficients from the initialized joint dictionary;

splitting the initialized joint dictionary into a low-dose dictionary, an MR dictionary and a standard-dose dictionary, updating the low-dose dictionary, MR dictionary and standard-dose dictionary separately using the obtained sparse coefficients, and merging the updated low-dose dictionary, MR dictionary and standard-dose dictionary into a joint dictionary;

iteratively updating the sparse coefficients and the joint dictionary until convergence.

Further, when obtaining the joint dictionary, the first sparse coefficients are obtained with the OMP method and the dictionary is updated with the KSVD method.

Further, before the step of constructing the DNN network, the method also includes: preprocessing the low-dose sample vectors, MR sample vectors and standard-dose sample vectors;

the preprocessing step includes: taking the sparse indices corresponding to the non-zero entries of the low-dose patches, MR sample vectors and standard-dose patches as the low-dose sample vectors, MR sample vectors and standard-dose sample vectors, respectively; where a sparse index is a vector formed by concatenating the non-zero indices with their corresponding sparse coefficients.

Further, the step of acquiring training samples, where the training samples include low-dose patches and their corresponding MR patches and standard-dose patches, specifically includes:

acquiring a low-dose PET image and its corresponding MR image and standard-dose PET image;

randomly selecting small blocks from the low-dose PET image and unfolding them into one-dimensional vectors as the low-dose patches, and at the same time selecting small blocks at the corresponding positions in the MR image and the standard-dose image and unfolding them into one-dimensional vectors as the MR patches and standard-dose patches.

Further, before the step of obtaining the joint dictionary from the training samples by dictionary learning, the method also includes:

removing duplicate low-dose patches and their corresponding MR patches and standard-dose patches from the training samples.

Further, the specific steps of preprocessing the low-dose PET image and its corresponding MR image and predicting the standard-dose PET image using the obtained joint dictionary and the trained DNN network include:

dividing the low-dose PET image and the MR image into blocks with a certain stride, and unfolding the blocks into one-dimensional block vectors;

combining the block vectors with the low-dose dictionary and the MR dictionary to obtain the corresponding sparse coefficients, and inputting them as low-dose sample vectors into the trained DNN network model to obtain predicted standard-dose sample vectors;

combining the obtained standard-dose sample vectors with the standard-dose dictionary to obtain standard-dose image blocks, and assembling the standard-dose image blocks according to the set stride to obtain the predicted standard-dose PET image.

Further, before the step of inputting them as low-dose sample vectors into the trained DNN network model to obtain the predicted standard-dose sample vectors, the method also includes: preprocessing the obtained sparse coefficients;

the preprocessing step includes: taking the sparse indices corresponding to the non-zero entries of the low-dose patches as the low-dose sample vectors; where a sparse index is a vector formed by concatenating the non-zero indices with their corresponding sparse coefficients.

Compared with the prior art, this application has the following beneficial effects: based on joint dictionary learning and a DNN network, a standard-dose PET image is restored from a low-dose PET image, overcoming the drawback that traditional denoising methods cannot preserve detail; this application does not combine the low-dose sparse coefficient matrix directly with the standard-dose dictionary for image prediction, but instead passes the sparse coefficient matrix obtained by dictionary learning through the DNN network to obtain the standard-dose sparse coefficient matrix, which is then combined with the standard-dose dictionary for image prediction, improving the similarity between the predicted image and the real standard-dose PET image; in addition, the technique introduces an MR image prior, which effectively improves the quality of the predicted image.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the dictionary learning principle;

Fig. 2 is a schematic diagram of the dictionary update of this application;

Fig. 3 is a schematic diagram of sample vector acquisition in this application;

Fig. 4 is a schematic diagram of the DNN network;

Fig. 5 is a comparison of a low-dose PET image, a standard-dose PET image and the image reconstructed by the method of this application;

Fig. 6 is a flow chart of the reconstruction method of this application.

Detailed Description

The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of this application rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.

The terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of this application, unless otherwise specified, "plurality" means two or more.
As shown in Fig. 6, an embodiment of this application provides a PET image reconstruction method based on joint dictionary learning and deep network, with the following specific steps:

S1: Acquire training samples

S101: Acquire training PET images, which include low-dose PET images, MR images and standard-dose PET images. A PET image is a set of consecutive tomographic images of the human body.

S102: Randomly select small blocks from the low-dose PET image and unfold them into one-dimensional vectors to form the low-dose patches, and at the same time select small blocks at the corresponding positions in the MR image and the standard-dose image and unfold them into one-dimensional vectors to form the MR patches and standard-dose patches.

S103: Remove duplicate patches, and remove patches that are unfavorable for obtaining the joint dictionary according to their variance; then use the processed low-dose patches and their corresponding MR patches and standard-dose patches as the training samples.
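As a concrete illustration of steps S101-S103, the following is a minimal NumPy sketch of patch extraction and filtering; the patch size (8x8), the number of sampled patches and the variance threshold are assumed values, since the text does not state them.

```python
import numpy as np

def extract_patches(low, mr, std, patch_size=8, n_patches=10000, var_thresh=1e-3, seed=0):
    """Sample co-located patches from low-dose, MR and standard-dose slices, flatten them
    to 1-D vectors, and drop nearly flat patches and exact duplicates."""
    rng = np.random.default_rng(seed)
    h, w = low.shape
    lows, mrs, stds = [], [], []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        p_low = low[y:y + patch_size, x:x + patch_size].ravel()
        if p_low.var() < var_thresh:      # low-variance patches add little to the dictionary
            continue
        lows.append(p_low)
        mrs.append(mr[y:y + patch_size, x:x + patch_size].ravel())
        stds.append(std[y:y + patch_size, x:x + patch_size].ravel())
    Y_low, Y_mr, Y_std = map(np.asarray, (lows, mrs, stds))
    _, keep = np.unique(Y_low, axis=0, return_index=True)   # remove duplicate patches
    keep = np.sort(keep)
    return Y_low[keep], Y_mr[keep], Y_std[keep]
```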
S2: Obtain the joint dictionary from the training samples by alternating between sparse coding and dictionary updates, where the joint dictionary includes a low-dose dictionary, an MR dictionary and a standard-dose dictionary. The dictionary learning principle is shown in Fig. 1.
The dictionary learning problem is formulated as

    min_{D,X} ||Y - DX||_F^2   s.t.   ||x_i||_0 <= T_0 for every coefficient column x_i (here T_0 = 3, see S3),

where X represents the sparse coefficients and D represents the feature matrix (the dictionary); the main purpose of dictionary learning is to find the D and X that minimize the above expression.
S201: Initialize the joint dictionary. The joint dictionary is initialized with the K-means clustering algorithm: K cluster centers are obtained from the training samples and used as the initialized joint dictionary, which is then normalized.
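A minimal sketch of the K-means initialization in S201, assuming scikit-learn's KMeans and an assumed dictionary size of 512 atoms; each training sample is the concatenation of a low-dose, MR and standard-dose patch, and every atom is normalized to unit l2 norm.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_joint_dictionary(Y_low, Y_mr, Y_std, n_atoms=512, seed=0):
    """Use K cluster centers of the concatenated (low | MR | standard) patches as the
    initial joint dictionary, with atoms stored as unit-norm columns."""
    Y_joint = np.hstack([Y_low, Y_mr, Y_std])                # (n_samples, 3 * patch_dim)
    km = KMeans(n_clusters=n_atoms, n_init=10, random_state=seed).fit(Y_joint)
    D_joint = km.cluster_centers_.T                          # (3 * patch_dim, n_atoms)
    D_joint /= np.linalg.norm(D_joint, axis=0, keepdims=True) + 1e-12
    return D_joint
```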
S202: Dictionary update

Referring to Fig. 2, in order to improve the sparse consistency of the dictionaries obtained from training, the sparse coefficients are first obtained with the joint dictionary; the joint dictionary is then split into the low-dose dictionary, MR dictionary and standard-dose dictionary, and the sparse coefficients are used to update the low-dose dictionary, MR dictionary and standard-dose dictionary separately; finally the three updated dictionaries are combined into a new joint dictionary. This loop is iterated until convergence.

The sparse coefficients are obtained with the OMP method, and the dictionary is updated with the KSVD method.
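A minimal sketch of the alternation in S201-S202, assuming scikit-learn's OrthogonalMatchingPursuit for the sparse-coding step and a hand-written rank-1-SVD atom update for KSVD (scikit-learn does not ship a KSVD solver). For brevity the stacked joint dictionary is updated in one pass, which keeps a single shared coefficient matrix; the split-update-merge of the three sub-dictionaries described above can be layered on top of this scheme.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_code(Y, D, sparsity=3):
    """OMP sparse coding of the columns of Y under dictionary D.
    Y: (dim, n_samples), D: (dim, n_atoms) -> X: (n_atoms, n_samples)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(D, Y)
    return omp.coef_.T

def ksvd_update(Y, D, X):
    """One KSVD pass: refit each atom and its coefficients with a rank-1 SVD of the
    residual restricted to the samples currently using that atom."""
    for k in range(D.shape[1]):
        used = np.flatnonzero(X[k, :])
        if used.size == 0:
            continue
        E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k, used] = s[0] * Vt[0, :]
    return D, X

def learn_joint_dictionary(Y_joint, D_joint, n_iter=50, sparsity=3):
    """Alternate OMP sparse coding and KSVD updates until convergence.
    Y_joint: samples as columns, shape (3 * patch_dim, n_samples)."""
    for _ in range(n_iter):
        X = sparse_code(Y_joint, D_joint, sparsity)
        D_joint, X = ksvd_update(Y_joint, D_joint, X)
        D_joint /= np.linalg.norm(D_joint, axis=0, keepdims=True) + 1e-12
    return D_joint
```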
S3: Obtain the low-dose sample vectors and standard-dose sample vectors

Compute the sparse coefficients of the low-dose patches, MR patches and standard-dose patches under their corresponding low-dose dictionary, MR dictionary and standard-dose dictionary. The sparsity of the sparse coefficients is 3, that is, each coefficient vector has at most 3 non-zero values.

The obtained sparse coefficients can be used directly as the sample vectors; alternatively, the non-zero sparse indices can be taken as a vector and concatenated with the corresponding sparse coefficients to form a vector that is used as the sample vector. The specific method is shown in Fig. 3.
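A minimal sketch of the compact sample-vector construction in S3; the exact index/value layout shown in Fig. 3 is not described in the text, so the ordering below is an assumption.

```python
import numpy as np

def to_sample_vector(x, sparsity=3):
    """Turn one sparse coefficient column x (n_atoms,) into a compact vector
    [idx_1..idx_s, val_1..val_s], zero-padded if fewer than `sparsity` entries are non-zero."""
    nz = np.flatnonzero(x)[:sparsity]
    idx = np.zeros(sparsity, dtype=np.float32)
    val = np.zeros(sparsity, dtype=np.float32)
    idx[:nz.size] = nz
    val[:nz.size] = x[nz]
    return np.concatenate([idx, val])
```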
S4: Referring to Fig. 4, construct a DNN network to realize the mapping from the low-dose sparse coefficients to the standard-dose sparse coefficients.

The DNN network includes an input layer, a hidden layer and an output layer; the hidden layer uses a 3-layer network with 2048 neurons per layer.

S5: Take the low-dose sample vectors and MR sample vectors as the input of the DNN network and the standard-dose sample vectors as the target, and train the DNN network until convergence to obtain the mapping model from low-dose sample vectors to standard-dose sample vectors.
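A minimal PyTorch sketch of S4-S5 matching the stated three hidden layers of 2048 neurons; the ReLU activation, MSE loss, Adam optimizer and learning rate are assumptions, as they are not specified in the text.

```python
import torch
import torch.nn as nn

class CoeffMapper(nn.Module):
    """Maps a concatenated (low-dose, MR) sample vector to a standard-dose sample vector."""
    def __init__(self, in_dim, out_dim, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=100, lr=1e-4):
    """Plain MSE regression from (low-dose, MR) sample vectors to standard-dose sample vectors."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x_low_mr, y_std in loader:        # x_low_mr = concat(low-dose vector, MR vector)
            opt.zero_grad()
            loss = loss_fn(model(x_low_mr), y_std)
            loss.backward()
            opt.step()
    return model
```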
S6: Reconstruction of the standard-dose PET image

Divide the low-dose PET image and the MR image into blocks with a stride of 1, and unfold the blocks into one-dimensional block vectors.

Split the joint dictionary into the low-dose dictionary, the MR dictionary and the standard-dose dictionary.

Combine the block vectors with the trained low-dose dictionary and MR dictionary to obtain the corresponding sparse coefficients, and input them as low-dose sample vectors into the trained DNN network model to obtain the predicted standard-dose sample vectors. The obtained sparse coefficients can be used directly as the low-dose sample vectors; alternatively, the non-zero sparse indices can be taken as a vector and concatenated with the corresponding sparse coefficients to form a vector that is used as the low-dose sample vector.

Combine the predicted standard-dose sample vectors with the standard-dose dictionary to obtain the standard-dose image blocks, and assemble the standard-dose image blocks according to the set stride to obtain the predicted standard-dose PET image.
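A minimal sketch of the S6 prediction pipeline, reusing the sparse_code and CoeffMapper sketches above; the patch size and the overlap-averaging reassembly are assumptions consistent with the stride-1 blocking described here.

```python
import numpy as np
import torch

def predict_standard_dose(low_img, mr_img, D_low, D_mr, D_std, model, patch_size=8, sparsity=3):
    """Slide over the low-dose/MR pair with stride 1, map the sparse codes with the DNN,
    synthesize standard-dose patches with D_std, and average the overlapping pixels."""
    h, w = low_img.shape
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    model.eval()
    with torch.no_grad():
        for y in range(h - patch_size + 1):
            for x in range(w - patch_size + 1):
                p_low = low_img[y:y + patch_size, x:x + patch_size].reshape(-1, 1)
                p_mr = mr_img[y:y + patch_size, x:x + patch_size].reshape(-1, 1)
                x_low = sparse_code(p_low, D_low, sparsity).ravel()
                x_mr = sparse_code(p_mr, D_mr, sparsity).ravel()
                inp = torch.tensor(np.concatenate([x_low, x_mr]), dtype=torch.float32)
                x_std = model(inp).numpy()                    # predicted standard-dose code
                p_std = (D_std @ x_std).reshape(patch_size, patch_size)
                out[y:y + patch_size, x:x + patch_size] += p_std
                weight[y:y + patch_size, x:x + patch_size] += 1.0
    return out / np.maximum(weight, 1.0)
```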
Performance test of the reconstruction method: 1000 low-dose head PET images were used as samples, and the joint dictionary was obtained after 50 training iterations. The image reconstructed without the deep-learning mapping of the sparse matrix is shown in Fig. 5; it can be seen that the noise in the image reconstructed by this method is markedly reduced and that, compared with the low-dose PET image, the reconstructed image is more similar to the standard-dose PET image.

On a test of 18 PET images, the average PSNR increased from 29.65 to 30.86, showing that the reconstructed image is closer to the standard-dose PET image than the original low-dose PET image and achieving a satisfactory result.
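For reference, a minimal sketch of the PSNR metric used for this comparison, assuming images normalized to a known peak value (the dynamic range actually used is not stated).

```python
import numpy as np

def psnr(pred, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((pred.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```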
The above are only specific embodiments of this application, but the protection scope of this application is not limited thereto; any change or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (10)

  1. A PET image reconstruction method based on joint dictionary learning and deep network, characterized in that it includes the following steps:
    acquiring training samples, where the training samples include low-dose patches and their corresponding MR patches and standard-dose patches;
    obtaining a joint dictionary from the training samples by dictionary learning, where the joint dictionary includes a low-dose dictionary, an MR dictionary and a standard-dose dictionary;
    acquiring low-dose sample vectors, MR sample vectors and standard-dose sample vectors, where the low-dose sample vectors, MR sample vectors and standard-dose sample vectors are respectively the sparse coefficients of the low-dose patches, MR patches and standard-dose patches under their corresponding low-dose dictionary, MR dictionary and standard-dose dictionary;
    constructing a DNN network;
    training the DNN network with the low-dose sample vectors, MR sample vectors and standard-dose sample vectors until convergence, to obtain a mapping model from low-dose sample vectors to standard-dose sample vectors;
    preprocessing a low-dose PET image and its corresponding MR image, and predicting the standard-dose PET image using the obtained joint dictionary and the trained DNN network.
  2. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 1, characterized in that the step of training the DNN network with the low-dose sample vectors, MR sample vectors and standard-dose sample vectors until convergence to obtain a mapping model from low-dose sample vectors to standard-dose sample vectors specifically includes:
    taking the low-dose sample vectors and MR sample vectors as the input of the DNN network and the standard-dose sample vectors as the target, and training the DNN network until convergence to obtain the mapping model from low-dose sample vectors to standard-dose sample vectors.
  3. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 2, characterized in that, in the step of constructing the DNN network, the DNN network includes an input layer, a hidden layer and an output layer, the hidden layer uses a 3-layer network, and the number of neurons in each layer is 2048.
  4. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 2, characterized in that the dictionary learning alternates between obtaining the sparse coefficients and updating the joint dictionary, with the following specific steps:
    constructing an initialized joint dictionary;
    obtaining the sparse coefficients from the initialized joint dictionary;
    splitting the initialized joint dictionary into a low-dose dictionary, an MR dictionary and a standard-dose dictionary, updating the low-dose dictionary, MR dictionary and standard-dose dictionary separately using the obtained sparse coefficients, and merging the updated low-dose dictionary, MR dictionary and standard-dose dictionary into a joint dictionary;
    iteratively updating the sparse coefficients and the joint dictionary until convergence.
  5. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 4, characterized in that, when obtaining the joint dictionary, the first sparse coefficients are obtained with the OMP method and the dictionary is updated with the KSVD method.
  6. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 5, characterized in that, before the step of constructing the DNN network, the method further includes: preprocessing the low-dose sample vectors, MR sample vectors and standard-dose sample vectors;
    the preprocessing step includes: taking the sparse indices corresponding to the non-zero entries of the low-dose patches, MR sample vectors and standard-dose patches as the low-dose sample vectors, MR sample vectors and standard-dose sample vectors, respectively; where a sparse index is a vector formed by concatenating the non-zero indices with their corresponding sparse coefficients.
  7. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 1, characterized in that the step of acquiring training samples, where the training samples include low-dose patches and their corresponding MR patches and standard-dose patches, specifically includes:
    acquiring a low-dose PET image and its corresponding MR image and standard-dose PET image;
    randomly selecting small blocks from the low-dose PET image and unfolding them into one-dimensional vectors as the low-dose patches, and at the same time selecting small blocks at the corresponding positions in the MR image and the standard-dose image and unfolding them into one-dimensional vectors as the MR patches and standard-dose patches.
  8. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 7, characterized in that, before the step of obtaining the joint dictionary from the training samples by dictionary learning, the method further includes:
    removing duplicate low-dose patches and their corresponding MR patches and standard-dose patches from the training samples.
  9. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 2, characterized in that the specific steps of preprocessing the low-dose PET image and its corresponding MR image and predicting the standard-dose PET image using the obtained joint dictionary and the trained DNN network include:
    dividing the low-dose PET image and the MR image into blocks with a certain stride, and unfolding the blocks into one-dimensional block vectors;
    combining the block vectors with the low-dose dictionary and the MR dictionary to obtain the corresponding sparse coefficients, and inputting them as low-dose sample vectors into the trained DNN network model to obtain predicted standard-dose sample vectors;
    combining the obtained standard-dose sample vectors with the standard-dose dictionary to obtain standard-dose image blocks, and assembling the standard-dose image blocks according to the set stride to obtain the predicted standard-dose PET image.
  10. The PET image reconstruction method based on joint dictionary learning and deep network according to claim 9, characterized in that, before the step of inputting them as low-dose sample vectors into the trained DNN network model to obtain the predicted standard-dose sample vectors, the method further includes: preprocessing the obtained sparse coefficients;
    the preprocessing step includes: taking the sparse indices corresponding to the non-zero entries of the low-dose patches as the low-dose sample vectors; where a sparse index is a vector formed by concatenating the non-zero indices with their corresponding sparse coefficients.
PCT/CN2021/103141 2021-06-29 2021-06-29 PET image reconstruction method based on joint dictionary learning and deep network WO2023272491A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/103141 WO2023272491A1 (zh) 2021-06-29 2021-06-29 PET image reconstruction method based on joint dictionary learning and deep network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/103141 WO2023272491A1 (zh) 2021-06-29 2021-06-29 PET image reconstruction method based on joint dictionary learning and deep network

Publications (1)

Publication Number Publication Date
WO2023272491A1 true WO2023272491A1 (zh) 2023-01-05

Family

ID=84689742

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103141 WO2023272491A1 (zh) 2021-06-29 2021-06-29 基于联合字典学习和深度网络的pet图像重建方法

Country Status (1)

Country Link
WO (1) WO2023272491A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489203A (zh) * 2013-01-31 2014-01-01 清华大学 Image coding method and system based on dictionary learning
WO2016033458A1 (en) * 2014-08-29 2016-03-03 The University Of North Carolina At Chapel Hill Restoring image quality of reduced radiotracer dose positron emission tomography (pet) images using combined pet and magnetic resonance (mr)
CN105931179A (zh) * 2016-04-08 2016-09-07 武汉大学 Image super-resolution method and system combining sparse representation and deep learning
CN112488949A (zh) * 2020-12-08 2021-03-12 深圳先进技术研究院 Low-dose PET image restoration method, system, device and medium
CN112258642A (zh) * 2020-12-21 2021-01-22 之江实验室 Three-dimensional iterative update reconstruction method for low-dose PET data based on deep learning


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21947451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE