CN110363797B - PET and CT image registration method based on excessive deformation inhibition - Google Patents

PET and CT image registration method based on excessive deformation inhibition

Info

Publication number
CN110363797B
CN110363797B
Authority
CN
China
Prior art keywords
pet
image
registration
sequence
excessive deformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910634301.2A
Other languages
Chinese (zh)
Other versions
CN110363797A (en)
Inventor
姜慧研
康鸿健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201910634301.2A priority Critical patent/CN110363797B/en
Publication of CN110363797A publication Critical patent/CN110363797A/en
Application granted granted Critical
Publication of CN110363797B publication Critical patent/CN110363797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to the technical field of medical image registration and provides a PET and CT image registration method based on excessive deformation suppression. First, two-dimensional PET/CT sequence images are acquired to obtain a PET/CT sequence image set, which is preprocessed to obtain a PET/CT image block training set. Then a PET/CT registration network is constructed on the basis of the 3D U-Net convolutional neural network, and a cost function is constructed by combining an image similarity constraint term with an excessive deformation suppression term. The neural network weight parameters are initialized, the hyperparameters are set, and the PET/CT image block training set is input into the PET/CT registration network for iterative training. Finally, the PET/CT images to be registered are input into the trained PET/CT registration network to generate registered PET image blocks. The invention can realize PET/CT elastic registration, improve registration efficiency and accuracy, and reduce the computational cost of suppressing excessive deformation.

Description

A PET and CT Image Registration Method Based on Excessive Deformation Suppression

Technical Field

The present invention relates to the technical field of medical image registration, and in particular to a PET and CT image registration method based on excessive deformation suppression.

Background Art

Medical image registration plays an important role in many medical image processing tasks. Image registration is usually formulated as an optimization problem that seeks a spatial transformation establishing pixel/voxel correspondences between a fixed image and a moving image by maximizing a surrogate measure of their spatial correspondence (for example, the image intensity correlation between the registered images).

Positron emission tomography (PET) uses a cyclotron to produce radioactive isotopes such as 18F and 13N, which participate in the metabolism of the human body after intravenous injection. Tissues or lesions with a high metabolic rate appear on PET as distinctly bright, hypermetabolic signals; tissues or lesions with a low metabolic rate appear as dark, hypometabolic signals. Computed tomography (CT) scans a part of the human body with X-ray beams in slices of a given thickness; when the X-rays pass through human tissue, part of the radiation is absorbed by the tissue and part passes through the body and is received by detectors, producing signals that allow structures in the image to be localized accurately.

PET/CT enables same-scanner fusion of functional and anatomical images and is an important advance in imaging medicine. Multimodal image registration exploits the characteristics of different imaging modalities to provide complementary information, increasing the amount of image information; it helps to understand the nature of a lesion and its relationship to the surrounding anatomy more comprehensively, and provides an effective means of localization for clinical diagnosis and treatment. Fusing PET/CT images of two different modalities to obtain a fused image containing both structural and functional information is of great significance for medical image analysis and diagnosis. Because the pixel-intensity similarity between PET and CT images is low, registration tends to produce excessive deformation, which makes PET/CT registration a challenging task.

Most existing PET and CT image registration methods are based on iterative optimization. The most common approach is to cast registration as an optimization problem that minimizes a cost function. Commonly used cost functions include mean squared error (MSE), mutual information (MI), normalized mutual information (NMI), normalized cross-correlation (NCC), and gradient correlation (GC). These similarity metrics compare images directly at the pixel level and cannot capture higher-level structure in the images. Global optimization methods such as simulated annealing and genetic algorithms exist, but they require comprehensive sampling of the parameter space, which leads to prohibitively high computational costs. As a result, existing PET and CT registration methods suppress excessive deformation at high computational cost and register images with low efficiency and accuracy.

Summary of the Invention

In view of the problems in the prior art, the present invention provides a PET and CT image registration method based on excessive deformation suppression, which can realize PET/CT elastic registration, improve registration efficiency and accuracy, and reduce the computational cost of suppressing excessive deformation.

The technical solution of the present invention is as follows:

A PET and CT image registration method based on excessive deformation suppression, characterized by comprising the following steps:

Step 1: Acquire two-dimensional PET sequence images and two-dimensional CT sequence images of m patients to obtain a PET/CT sequence image set;

Step 2: Preprocess the PET/CT sequence image set to obtain a PET/CT image block training set; the preprocessing includes computing SUV and Hu values and limiting their threshold ranges, adjusting the image resolution, and generating and normalizing image blocks;

Step 3: Construct a PET/CT registration network based on the 3D U-Net convolutional neural network;

Step 4: Combine an image similarity constraint term with an excessive deformation suppression term to construct the cost function of the PET/CT registration network; the similarity constraint term is the normalized cross-correlation (NCC), and the excessive deformation suppression term is the sum of penalty terms based on the differences between displacement vector field elements and Gaussian distribution function elements;

Step 5: Initialize the neural network weight parameters; set the batch size N, the regularization weight λ, the maximum number of iterations COUNT, the network learning rate, and the optimizer; and adopt a learning rate decay strategy;

Step 6: Feed the PET/CT image block training set into the PET/CT registration network to output a displacement vector field; input the displacement vector field together with the PET image block into a spatial transformer to obtain the registered PET image block; compute the similarity constraint term from the CT image block and the registered PET image block, and the excessive deformation suppression term from the displacement vector field; update the neural network weight parameters via the cost function by backpropagation; and iteratively train the PET/CT registration network in this way until the maximum number of iterations COUNT is reached, obtaining the trained PET/CT registration network;

Step 7: Apply the preprocessing described in step 2 to the PET/CT image pair to be registered, input the resulting PET/CT image block pair into the trained PET/CT registration network, generate the registered PET image block, and visualize it.

Step 2 comprises the following steps:

Step 2.1: Compute the SUV values of the two-dimensional PET sequence images as SUV = Pixels_PET × LBM × 1000 / injected dose

and the Hu values of the two-dimensional CT sequence images as Hu = Pixels_CT × slopes + intercepts

where Pixels_PET is the pixel value of the PET sequence image, LBM is the lean body mass, and injected dose is the injected tracer dose; Pixels_CT is the pixel value of the CT sequence image, slopes is the rescale slope, and intercepts is the rescale intercept;

Step 2.2: Enhance the image contrast of the two-dimensional PET sequence images and two-dimensional CT sequence images: adjust the Hu window width and window level to [a1, b1] and clip the SUV values to [a2, b2], where a1, b1, a2, and b2 are constants;

Step 2.3: Adjust the resolution of the two-dimensional CT sequence images, resizing the 512×512 CT images to the size of the two-dimensional PET sequence images, H×W = 128×128;

Step 2.4: For the i-th patient, assemble the two-dimensional PET and CT sequence images into three-dimensional volumes [H, W, D_PET,i] and [H, W, D_CT,i], and transform the three-dimensional volumes into five-dimensional data [N, H, W, D_i, C]. Crop the five-dimensional data along the Z direction with a sampling interval of d pixels to generate multiple pairs of image blocks of size H×W×D, normalize the image blocks to obtain an image block set, and randomly select l pairs of PET and CT image blocks from the set to form the PET/CT image block training set; where i ∈ {1, 2, …, m}, D_PET,i is the number of slices of the i-th patient's PET sequence images, D_CT,i is the number of slices of the i-th patient's CT sequence images, D_PET,i = D_CT,i = D_i; N is the batch size, and C is the number of channels of the network input, C = 2.

In step 2, [a1, b1] = [−90, 300], [a2, b2] = [0, 5], d = 32, and D = 64.

In step 2.4, the image blocks are normalized by

x* = (x − μ) / σ

so that the data of each image block follow a normal distribution with mean 0 and standard deviation 1, where x and x* are the pixel values before and after normalization, and μ and σ are the mean and standard deviation of all pixels in the image block.

In step 3, the PET/CT registration network constructed on the basis of the 3D U-Net convolutional neural network comprises an encoding path and a decoding path, each with 4 resolution levels. The encoding path has n1 layers, each consisting of a convolutional layer with a 3×3×3 kernel and stride 2, followed by a BN layer and a ReLU layer. The decoding path has n2 layers, each consisting of a deconvolutional layer with a 3×3×3 kernel and stride 2, followed by a BN layer and a ReLU layer. Shortcut connections pass the layers of the same resolution from the encoding path to the decoding path, providing the decoding path with the original high-resolution features. The last layer of the PET/CT registration network is a 3×3×3 convolutional layer, and the final number of output channels is 3.

In step 4, combining the image similarity constraint term with the excessive deformation suppression term, the cost function of the PET/CT registration network is constructed as

L(F, M, D_v) = −L_sim(F, M) + λ · L_deform(D_v, N(μ, θ))

where F and M are the CT image block and the PET image block, respectively, D_v is the displacement vector field matrix, N(μ, θ) is a Gaussian distribution function with mean μ and standard deviation θ, and λ is the regularization weight;

The similarity constraint term L_sim is

L_sim(S, T) = Σ_(s,t) [S(s,t) − E(S)] · [T(s,t) − E(T)] / √( Σ_(s,t) [S(s,t) − E(S)]² · Σ_(s,t) [T(s,t) − E(T)]² )

where S is the subimage, T is the template image, (s, t) is the coordinate index, S(s, t) and T(s, t) are the pixel values of the subimage and the template image, and E(S) and E(T) are the average gray values of the subimage and the template image, respectively;

The excessive deformation suppression term L_deform is

L_deform(D_v, N(μ, θ)) = Σ_(i∈D_v) f(i, j, θ)

where i is an element of the displacement vector field matrix D_v, j is a random number drawn from the Gaussian distribution function N(μ, θ), and f(i, j, θ) is the penalty term

f(i, j, θ) = |i − j|^k if |i − j| > θ, and f(i, j, θ) = 0 otherwise.

The beneficial effects of the present invention are as follows:

The present invention constructs a PET/CT registration network based on the 3D U-Net convolutional neural network, predicts the displacement vector field with an unsupervised, end-to-end 3D elastic registration neural network based on deep learning, performs voxel-wise displacement prediction for the images to be registered, and constructs the cost function of the PET/CT registration network using normalized cross-correlation as the similarity constraint term combined with an excessive deformation suppression term that restrains image deformation. It can solve the excessive deformation problem caused by the low intrinsic similarity between PET and CT images, realize PET/CT elastic registration, improve registration efficiency and accuracy, and reduce the computational cost of suppressing excessive deformation.

Brief Description of the Drawings

Fig. 1 is a flowchart of the PET and CT image registration method based on excessive deformation suppression of the present invention;

Fig. 2 is a schematic diagram of the structure of the PET/CT registration network in the PET and CT image registration method based on excessive deformation suppression of the present invention.

Detailed Description of Embodiments

The present invention is further described below with reference to the accompanying drawings and specific embodiments.

Fig. 1 shows a flowchart of the PET and CT image registration method based on excessive deformation suppression of the present invention. The method comprises the following steps:

Step 1: Acquire two-dimensional PET sequence images and two-dimensional CT sequence images of m patients to obtain a PET/CT sequence image set.

Step 2: Preprocess the PET/CT sequence image set to obtain a PET/CT image block training set; the preprocessing includes computing SUV and Hu values and limiting their threshold ranges, adjusting the image resolution, and generating and normalizing image blocks.

Step 2 comprises the following steps:

Step 2.1: Compute the SUV values of the two-dimensional PET sequence images as SUV = Pixels_PET × LBM × 1000 / injected dose

and the Hu values of the two-dimensional CT sequence images as Hu = Pixels_CT × slopes + intercepts

where Pixels_PET is the pixel value of the PET sequence image, LBM is the lean body mass, and injected dose is the injected tracer dose; Pixels_CT is the pixel value of the CT sequence image, slopes is the rescale slope, and intercepts is the rescale intercept;
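The two conversions in step 2.1 can be written directly in NumPy. The sketch below is illustrative only: the function names and the example LBM, injected dose, slope, and intercept values are assumptions, not values disclosed in the patent.

```python
import numpy as np

def pet_to_suv(pixels_pet, lbm_kg, injected_dose_bq):
    """SUV = Pixels_PET * LBM * 1000 / injected dose (formula of step 2.1)."""
    return pixels_pet * lbm_kg * 1000.0 / injected_dose_bq

def ct_to_hu(pixels_ct, slope, intercept):
    """Hu = Pixels_CT * slopes + intercepts (formula of step 2.1)."""
    return pixels_ct * slope + intercept

# Illustrative pixel values and scan parameters.
pet_slice = np.array([[120.0, 240.0]])
ct_slice = np.array([[1000.0, 2000.0]])
suv = pet_to_suv(pet_slice, lbm_kg=55.0, injected_dose_bq=3.7e6)
hu = ct_to_hu(ct_slice, slope=1.0, intercept=-1024.0)
```

In practice the slope, intercept, weight, and dose would be read from the DICOM headers of the series.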

Step 2.2: Enhance the image contrast of the two-dimensional PET sequence images and two-dimensional CT sequence images: adjust the Hu window width and window level to [a1, b1] and clip the SUV values to [a2, b2], where a1, b1, a2, and b2 are constants;
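Step 2.2 amounts to clipping both value ranges. A minimal NumPy sketch using the embodiment's ranges [a1, b1] = [−90, 300] and [a2, b2] = [0, 5] (the example arrays are illustrative):

```python
import numpy as np

# Example Hu and SUV values; the clipping bounds come from the embodiment.
hu = np.array([-500.0, -90.0, 100.0, 400.0])
suv = np.array([-0.5, 0.0, 2.5, 7.0])

hu_windowed = np.clip(hu, -90.0, 300.0)   # Hu window [a1, b1] = [-90, 300]
suv_clipped = np.clip(suv, 0.0, 5.0)      # SUV range [a2, b2] = [0, 5]
```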

Step 2.3: Adjust the resolution of the two-dimensional CT sequence images, resizing the 512×512 CT images to the size of the two-dimensional PET sequence images, H×W = 128×128;
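The 512×512 to 128×128 resizing of step 2.3 can be sketched with simple 4×4 block averaging. The patent does not specify the interpolation scheme, so this is only one plausible choice; bilinear resampling would work equally well.

```python
import numpy as np

def downsample_ct(ct_slice, out_hw=128):
    """Resize a square CT slice to out_hw x out_hw by block averaging.
    For 512 -> 128 each output pixel is the mean of a 4x4 input block."""
    h, w = ct_slice.shape
    f = h // out_hw  # integer downsampling factor (4 for 512 -> 128)
    return ct_slice.reshape(out_hw, f, out_hw, f).mean(axis=(1, 3))

ct = np.arange(512 * 512, dtype=np.float64).reshape(512, 512)
small = downsample_ct(ct)
```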

Step 2.4: For the i-th patient, assemble the two-dimensional PET and CT sequence images into three-dimensional volumes [H, W, D_PET,i] and [H, W, D_CT,i], and transform the three-dimensional volumes into five-dimensional data [N, H, W, D_i, C]. Crop the five-dimensional data along the Z direction with a sampling interval of d pixels to generate multiple pairs of image blocks of size H×W×D, normalize the image blocks to obtain an image block set, and randomly select l pairs of PET and CT image blocks from the set to form the PET/CT image block training set; where i ∈ {1, 2, …, m}, D_PET,i is the number of slices of the i-th patient's PET sequence images, D_CT,i is the number of slices of the i-th patient's CT sequence images, D_PET,i = D_CT,i = D_i; N is the batch size, and C is the number of channels of the network input, C = 2.
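The Z-direction cropping of step 2.4 can be sketched as follows, using the embodiment's block depth D = 64 and sampling interval d = 32; the example volume depth of 160 slices is an assumption for illustration.

```python
import numpy as np

def extract_patches_z(volume, depth=64, stride=32):
    """Crop an (H, W, D_total) volume into overlapping (H, W, depth) blocks
    along the Z direction with the given sampling interval."""
    h, w, d_total = volume.shape
    starts = range(0, d_total - depth + 1, stride)
    return [volume[:, :, s:s + depth] for s in starts]

vol = np.zeros((128, 128, 160))       # one patient's volume (depth illustrative)
patches = extract_patches_z(vol)      # blocks start at z = 0, 32, 64, 96
```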

In this embodiment, m = 176; the processed SUV and Hu value images of the PET and CT series are assembled into three-dimensional volume data stored as ndarrays; in step 2, [a1, b1] = [−90, 300], [a2, b2] = [0, 5], d = 32, and D = 64.

From all 176 patients, l = 900 pairs of SUV and Hu value image blocks generated from the volume data of 141 randomly selected patients are used as the PET/CT image block training set, and 259 pairs generated from the volume data of the remaining 35 patients are used as the validation set.
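The patient-level split above can be sketched as follows; only the 176/141/35 patient counts come from the embodiment, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
patient_ids = np.arange(176)                              # m = 176 patients
train_ids = rng.choice(patient_ids, size=141, replace=False)
val_ids = np.setdiff1d(patient_ids, train_ids)            # remaining 35 patients
```

Splitting at the patient level (rather than the block level) keeps blocks from the same patient out of both sets.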

In step 2.4, the image blocks are normalized by

x* = (x − μ) / σ

so that the data of each image block follow a normal distribution with mean 0 and standard deviation 1, where x and x* are the pixel values before and after normalization, and μ and σ are the mean and standard deviation of all pixels in the image block.
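The normalization above can be sketched as:

```python
import numpy as np

def normalize_block(block):
    """x* = (x - mu) / sigma over all pixels of one image block (step 2.4)."""
    mu = block.mean()
    sigma = block.std()
    return (block - mu) / sigma

rng = np.random.default_rng(1)
block = rng.uniform(0.0, 5.0, size=(128, 128, 64))  # one H x W x D block
normed = normalize_block(block)
```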

Step 3: Construct a PET/CT registration network based on the 3D U-Net convolutional neural network, as shown in Fig. 2.

The PET/CT registration network of the present invention comprises: (1) a 3D U-Net used to regress the displacement vector field; and (2) a spatial transformer component that performs the spatial transformation. In this embodiment, the PET/CT registration network constructed on the basis of the 3D U-Net convolutional neural network comprises an encoding path and a decoding path, each with 4 resolution levels. The encoding path has n1 layers, each consisting of a convolutional layer with a 3×3×3 kernel and stride 2, followed by a BN layer and a ReLU layer. The decoding path has n2 layers, each consisting of a deconvolutional layer with a 3×3×3 kernel and stride 2, followed by a BN layer and a ReLU layer. Shortcut connections pass the layers of the same resolution from the encoding path to the decoding path, providing the decoding path with the original high-resolution features. The last layer of the PET/CT registration network is a 3×3×3 convolutional layer, and the final number of output channels is 3.

Here, the BN layer is a batch normalization layer, the ReLU layer is a rectified linear unit layer, and the shortcut is a skip connection. The last layer is a 3×3×3 convolutional layer that reduces the number of output channels; the final number of output channels is 3, representing the x, y, and z directions.

Step 4: Combine an image similarity constraint term with an excessive deformation suppression term to construct the cost function of the PET/CT registration network; the similarity constraint term is the normalized cross-correlation (NCC), and the excessive deformation suppression term is the sum of penalty terms based on the differences between displacement vector field elements and Gaussian distribution function elements.

Here, the "excessive deformation suppression" measure is defined on the basis of the degree of deformation of the 3D deformation field, and an "excessive deformation suppression term" is introduced into the cost function to optimize the registration network.

In step 4, combining the image similarity constraint term with the excessive deformation suppression term, the cost function of the PET/CT registration network is constructed as

L(F, M, D_v) = −L_sim(F, M) + λ · L_deform(D_v, N(μ, θ))

where F and M are the CT image block and the PET image block, respectively (F is the fixed image block and M is the floating image block), D_v is the displacement vector field matrix, N(μ, θ) is a Gaussian distribution function with mean μ and standard deviation θ, and λ is the regularization weight;

The similarity constraint term L_sim is

L_sim(S, T) = Σ_(s,t) [S(s,t) − E(S)] · [T(s,t) − E(T)] / √( Σ_(s,t) [S(s,t) − E(S)]² · Σ_(s,t) [T(s,t) − E(T)]² )

where S is the subimage, T is the template image, (s, t) is the coordinate index, S(s, t) and T(s, t) are the pixel values of the subimage and the template image, and E(S) and E(T) are the average gray values of the subimage and the template image, respectively;
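The similarity constraint term can be sketched in NumPy as follows; `ncc` is a hypothetical helper name and the arrays are arbitrary examples, not data from the patent.

```python
import numpy as np

def ncc(s, t):
    """Normalized cross-correlation between a subimage S and a template T,
    as in the similarity constraint term: mean-center both images, then
    divide their inner product by the product of their norms."""
    sc = s - s.mean()
    tc = t - t.mean()
    return (sc * tc).sum() / np.sqrt((sc ** 2).sum() * (tc ** 2).sum())

a = np.array([[1.0, 2.0], [3.0, 4.0]])
```

NCC is invariant to affine intensity changes: ncc(a, 2*a + 3) equals ncc(a, a), which is why it is better suited than MSE to images whose intensities are differently scaled.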

The excessive deformation suppression term L_deform is

L_deform(D_v, N(μ, θ)) = Σ_(i∈D_v) f(i, j, θ)

where i is an element of the displacement vector field matrix D_v, j is a random number drawn from the Gaussian distribution function N(μ, θ), and f(i, j, θ) is the penalty term

f(i, j, θ) = |i − j|^k if |i − j| > θ, and f(i, j, θ) = 0 otherwise.

In this embodiment, k is empirically set to 2: when |i − j| > θ, the penalty is |i − j|^k, i.e., the penalty term is amplified by raising it to the k-th power.
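A minimal NumPy sketch of the penalty and the excessive deformation suppression term follows. The helper names and the pairing of each displacement element i with a single Gaussian draw j are assumptions based on the description above, not the patent's exact implementation.

```python
import numpy as np

def penalty(i, j, theta, k=2):
    """f(i, j, theta) = |i - j|**k when |i - j| > theta, else 0 (k = 2 here)."""
    diff = np.abs(i - j)
    return np.where(diff > theta, diff ** k, 0.0)

def excess_deformation_term(dv, theta, mu=0.0, rng=None):
    """Sum of penalties between displacement-field elements and random draws
    from N(mu, theta): displacements that stray too far from the Gaussian
    are penalized quadratically, suppressing excessive deformation."""
    rng = np.random.default_rng(0) if rng is None else rng
    j = rng.normal(mu, theta, size=dv.shape)   # one draw per element (assumed)
    return penalty(dv.ravel(), j.ravel(), theta).sum()
```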

Step 5: Initialize the neural network weight parameters with global variable initialization (global_variables_initializer); set the batch size N = 16, the regularization weight λ = 0.3, the maximum number of iterations COUNT = 1000, the network learning rate 0.001, and the Adam optimizer; and adopt a learning rate decay strategy.
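The learning rate decay strategy is not specified in detail; one common choice, exponential decay, can be sketched as follows. Only the initial rate 0.001 comes from the embodiment; the decay rate and interval are illustrative assumptions.

```python
def decayed_lr(step, base_lr=0.001, decay_rate=0.96, decay_steps=100):
    """Exponential learning-rate decay: the rate is multiplied by
    decay_rate once every decay_steps training iterations."""
    return base_lr * decay_rate ** (step / decay_steps)
```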

Step 6: Feed the PET/CT image block training set into the PET/CT registration network to output a displacement vector field; input the displacement vector field together with the PET image block into a spatial transformer to obtain the registered PET image block; compute the similarity constraint term from the CT image block and the registered PET image block, and the excessive deformation suppression term from the displacement vector field; update the neural network weight parameters via the cost function by backpropagation; and iteratively train the PET/CT registration network in this way until the maximum number of iterations COUNT is reached, obtaining the trained PET/CT registration network.

Specifically, a pair of PET/CT image blocks of size 128×128×64 is fed to the 3D U-Net, which outputs a displacement vector field of the same resolution (128×128×64×3, the three channels corresponding to the displacements in the x, y, and z directions). The displacement vector field and the PET image block are input together to the spatial transformer, which outputs the registered PET image block.
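The role of the spatial transformer can be illustrated with a minimal nearest-neighbour warp, shown in 1-D for brevity. The embodiment uses a differentiable 3-D transformer (typically with trilinear interpolation) inside the network graph, which this sketch does not reproduce.

```python
import numpy as np

def warp_nearest(moving, dv):
    """Resample `moving` at positions x + dv[x] (nearest neighbour, clipped).

    A 1-D stand-in for the spatial transformer: each output voxel reads the
    moving image at its own position offset by the displacement field.
    """
    moving = np.asarray(moving, float)
    idx = np.arange(moving.size) + np.rint(dv).astype(int)
    idx = np.clip(idx, 0, moving.size - 1)  # keep sample positions inside the image
    return moving[idx]

# A displacement of +1 everywhere shifts the intensity profile one voxel.
print(warp_nearest([0.0, 1.0, 2.0, 3.0], [1, 1, 1, 1]).tolist())  # [1.0, 2.0, 3.0, 3.0]
```

In the real 3-D case the same lookup is done per voxel with a three-channel displacement and interpolation, so that gradients can flow back through the warp to the U-Net.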

Step 7: Apply the preprocessing described in step 1 to the PET/CT image pair to be registered, input the resulting PET/CT image-block pair into the trained PET/CT registration network, generate the registered PET image block, and visualize it.

In this embodiment, the PET and CT image registration method based on excessive deformation suppression runs on an Intel-based Windows 10 system and performs medical image registration using Python and the TensorFlow framework. The deep-learning-based registration algorithm adopted by the invention casts image registration as a regression problem over the displacement vector field, i.e., predicting the spatial correspondence between the pixels/voxels of a pair of images. Registration is achieved by a 3D U-Net convolutional neural network that simultaneously optimizes the similarity constraint term between the fixed and floating image pair and the excessive deformation suppression term on the displacement vector field. Quantitative and qualitative results show that 3D PET/CT image registration with this method achieves good results. With a trained model, given a new pair of PET/CT volumes, the registration result is obtained in a single forward pass within 10 seconds.

Apparently, the above embodiments are only some, not all, of the embodiments of the present invention. They serve only to explain the invention and do not limit its scope of protection. Based on the above embodiments, all other embodiments obtained by those skilled in the art without creative effort, that is, all modifications, equivalent replacements, and improvements made within the spirit and principles of this application, fall within the scope of protection claimed by the present invention.

Claims (5)

1. A PET and CT image registration method based on excessive deformation suppression, characterized in that it comprises the following steps:

Step 1: Collect two-dimensional PET sequence images and two-dimensional CT sequence images of m patients to obtain a PET/CT sequence image set.

Step 2: Preprocess the PET/CT sequence image set to obtain a PET/CT image-block training set; the preprocessing includes computing the SUV and HU values and limiting their threshold ranges, adjusting the image resolution, and generating image blocks with normalization.

Step 3: Construct the PET/CT registration network based on a 3D U-Net convolutional neural network.

Step 4: Combine the image similarity constraint term with the excessive deformation suppression term to construct the cost function of the PET/CT registration network; the similarity constraint term is the normalized cross-correlation NCC, and the excessive deformation suppression term is the sum of penalty terms on the differences between the displacement vector field elements and the Gaussian distribution function elements.

The cost function constructed by combining the image similarity constraint term and the excessive deformation suppression term is:
Figure FDA0003947483700000011
where F and M are the CT image block and the PET image block respectively, Dv is the displacement vector field matrix,
Figure FDA0003947483700000012
is a Gaussian distribution function with mean μ and standard deviation θ, and λ is the regularization term weight;
Figure FDA0003947483700000013
is the similarity constraint term
Figure FDA0003947483700000014
where S is the sub-image, T is the template image, (s,t) is the coordinate index, S(s,t) is the pixel value of the sub-image, T(s,t) is the pixel value of the template image, and E(S), E(T) are the average gray values of the sub-image and the template image respectively;
Figure FDA0003947483700000015
is the excessive deformation suppression term
Figure FDA0003947483700000016
where i is an element of the displacement vector field matrix Dv, j is a random number following the Gaussian distribution function
Figure FDA0003947483700000018
and f(i,j,θ) is the penalty term
Figure FDA0003947483700000017
Step 5: Initialize the neural network weight parameters; set the batch size N, the regularization term weight λ, the maximum number of iterations COUNT, the network learning rate, and the optimizer; adopt a learning rate decay strategy.

Step 6: Feed the PET/CT image-block training set into the PET/CT registration network, which outputs a displacement vector field; input the displacement vector field and the PET image block together into the spatial transformer to obtain the registered PET image block; compute the similarity constraint term from the CT image block and the registered PET image block, and the excessive deformation suppression term from the displacement vector field; update the network weight parameters through the cost function with backpropagation; train the PET/CT registration network iteratively in this way until the maximum number of iterations COUNT, obtaining the trained PET/CT registration network.

Step 7: Apply the preprocessing described in step 1 to the PET/CT image pair to be registered, input the resulting PET/CT image-block pair into the trained PET/CT registration network, generate the registered PET image block, and visualize it.
2. The PET and CT image registration method based on excessive deformation suppression according to claim 1, characterized in that step 2 comprises the following steps:

Step 2.1: Compute the SUV value of the two-dimensional PET sequence images as SUV = PixelsPET × LBM × 1000 / injecteddose, and the HU value of the two-dimensional CT sequence images as HU = PixelsCT × slopes + intercepts, where PixelsPET is the pixel value of the PET sequence image, LBM is the lean body mass, and injecteddose is the injected tracer dose; PixelsCT is the pixel value of the CT sequence image, slopes is the slope, and intercepts is the intercept.

Step 2.2: Apply contrast-enhancement processing to the two-dimensional PET and CT sequence images, adjust the HU window width/level to [a1, b1], and limit the SUV value to [a2, b2], where a1, b1, a2, and b2 are all constants.

Step 2.3: Adjust the resolution of the two-dimensional CT sequence images, resizing the 512×512 CT images to the size of the two-dimensional PET sequence images, H×W = 128×128.

Step 2.4: For the i-th patient, generate three-dimensional volume data [H, W, DPET,i] and [H, W, DCT,i] from the two-dimensional PET and CT sequence images respectively; transform the three-dimensional volume data into five-dimensional volume data [N, H, W, Di, C]; crop the five-dimensional volume data along the Z direction with a sampling interval of d pixels to generate multiple pairs of image blocks of size H×W×D; normalize the image blocks to obtain an image-block set; and randomly select l pairs of PET and CT image blocks from the set to form the PET/CT image-block training set. Here i ∈ {1, 2, …, m}, DPET,i is the number of slices of the i-th patient's PET sequence images, DCT,i is the number of slices of the i-th patient's CT sequence images, and DPET,i = DCT,i = Di; N is the batch size, and C is the number of channels of the data input to the network, C = 2.

3. The PET and CT image registration method based on excessive deformation suppression according to claim 2, characterized in that in step 2.2, [a1, b1] = [-90, 300], [a2, b2] = [0, 5], d = 32, and D = 64.

4. The PET and CT image registration method based on excessive deformation suppression according to claim 2, characterized in that in step 2.4, the formula for normalizing an image block is
Figure FDA0003947483700000021
which transforms the data of the image block into a normal distribution with mean 0 and standard deviation 1, where x and x* are the pixel values of the image block before and after normalization respectively, and μ, σ are the mean and standard deviation of all pixels in the image block.

5. The PET and CT image registration method based on excessive deformation suppression according to any one of claims 2 to 4, characterized in that in step 3, the PET/CT registration network constructed on the basis of the 3D U-Net convolutional neural network comprises an encoding path and a decoding path, each with 4 resolution levels. The encoding path has n1 layers, each comprising a convolutional layer with a 3×3×3 kernel and a stride of 2, and each convolutional layer is followed by a BN layer and a ReLU layer. The decoding path has n2 layers, each comprising a deconvolutional layer with a 3×3×3 kernel and a stride of 2, and each deconvolutional layer is followed by a BN layer and a ReLU layer. Through shortcuts, layers of the same resolution in the encoding path are passed to the decoding path, providing the decoding path with the original high-resolution features. The last layer of the PET/CT registration network is a 3×3×3 convolutional layer, and the final number of output channels is 3.
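The SUV/HU conversion and z-score normalization of claims 2-4 can be sketched as follows. The clipping ranges are the constants fixed in claim 3; scan-specific quantities such as LBM, injected dose, slope, and intercept are illustrative placeholders that in practice come from the DICOM headers.

```python
import numpy as np

def pet_suv(pixels_pet, lbm, injected_dose):
    """SUV = PixelsPET * LBM * 1000 / injecteddose, limited to [0, 5] (claim 3)."""
    suv = np.asarray(pixels_pet, float) * lbm * 1000.0 / injected_dose
    return np.clip(suv, 0.0, 5.0)

def ct_hu(pixels_ct, slope, intercept):
    """HU = PixelsCT * slopes + intercepts, windowed to [-90, 300] (claim 3)."""
    hu = np.asarray(pixels_ct, float) * slope + intercept
    return np.clip(hu, -90.0, 300.0)

def normalize(block):
    """Z-score normalization x* = (x - mu) / sigma over all pixels of the block."""
    block = np.asarray(block, float)
    return (block - block.mean()) / block.std()

z = normalize([1.0, 2.0, 3.0])
print(float(z.mean()), float(z.std()))  # mean ~0.0, std 1.0 after normalization
```

Clipping before normalization bounds the very different dynamic ranges of PET and CT so that both modalities enter the network on a comparable scale.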
CN201910634301.2A 2019-07-15 2019-07-15 PET and CT image registration method based on excessive deformation inhibition Active CN110363797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910634301.2A CN110363797B (en) 2019-07-15 2019-07-15 PET and CT image registration method based on excessive deformation inhibition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910634301.2A CN110363797B (en) 2019-07-15 2019-07-15 PET and CT image registration method based on excessive deformation inhibition

Publications (2)

Publication Number Publication Date
CN110363797A CN110363797A (en) 2019-10-22
CN110363797B true CN110363797B (en) 2023-02-14

Family

ID=68219107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910634301.2A Active CN110363797B (en) 2019-07-15 2019-07-15 PET and CT image registration method based on excessive deformation inhibition

Country Status (1)

Country Link
CN (1) CN110363797B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260705B (en) * 2020-01-13 2022-03-15 武汉大学 Prostate MR image multi-task registration method based on deep convolutional neural network
CN111524170B (en) * 2020-04-13 2023-05-26 中南大学 Pulmonary CT image registration method based on unsupervised deep learning
CN112598718B (en) * 2020-12-31 2022-07-12 北京深睿博联科技有限责任公司 An unsupervised multi-view and multi-modal smart glasses image registration method and device
CN113706451B (en) * 2021-07-07 2024-07-12 杭州脉流科技有限公司 Method, apparatus, system and computer readable storage medium for intracranial aneurysm identification detection
CN114511602B (en) * 2022-02-15 2023-04-07 河南工业大学 Medical image registration method based on graph convolution Transformer
CN114820432B (en) * 2022-03-08 2023-04-11 安徽慧软科技有限公司 Radiotherapy effect evaluation method based on PET and CT elastic registration technology
CN114882081B (en) * 2022-06-08 2025-04-25 上海联影医疗科技股份有限公司 Image registration method, device, equipment, storage medium and computer program product
CN115393527A (en) * 2022-09-14 2022-11-25 北京富益辰医疗科技有限公司 Anatomical navigation construction method and device based on multimode image and interactive equipment
CN116740218B (en) * 2023-08-11 2023-10-27 南京安科医疗科技有限公司 Heart CT imaging image quality optimization method, device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507234A (en) * 2017-08-29 2017-12-22 北京大学 Cone beam computed tomography image and x-ray image method for registering
CN108171738A (en) * 2018-01-25 2018-06-15 北京雅森科技发展有限公司 Multimodal medical image registration method based on brain function template
CN109074659A (en) * 2016-05-04 2018-12-21 皇家飞利浦有限公司 Medical image resources registration
CN109272443A (en) * 2018-09-30 2019-01-25 东北大学 PET and CT image registration method based on full convolutional neural network
CN109685811A (en) * 2018-12-24 2019-04-26 北京大学第三医院 PET/CT hypermetabolism lymph node dividing method based on dual path U-net convolutional neural networks
CN109872332A (en) * 2019-01-31 2019-06-11 广州瑞多思医疗科技有限公司 A kind of 3 d medical images method for registering based on U-NET neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0912845D0 (en) * 2009-07-24 2009-08-26 Siemens Medical Solutions Initialisation of registration using an anatomical atlas

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074659A (en) * 2016-05-04 2018-12-21 皇家飞利浦有限公司 Medical image resources registration
CN107507234A (en) * 2017-08-29 2017-12-22 北京大学 Cone beam computed tomography image and x-ray image method for registering
CN108171738A (en) * 2018-01-25 2018-06-15 北京雅森科技发展有限公司 Multimodal medical image registration method based on brain function template
CN109272443A (en) * 2018-09-30 2019-01-25 东北大学 PET and CT image registration method based on full convolutional neural network
CN109685811A (en) * 2018-12-24 2019-04-26 北京大学第三医院 PET/CT hypermetabolism lymph node dividing method based on dual path U-net convolutional neural networks
CN109872332A (en) * 2019-01-31 2019-06-11 广州瑞多思医疗科技有限公司 A kind of 3 d medical images method for registering based on U-NET neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Unsupervised learning for fast probabilistic diffeomorphic registration; Hessam Sokooti et al.; Medical Image Computing and Computer Assisted Intervention - MICCAI 2017; 2017-09-04; 232-239 *
Research on New Algorithms for Rigid Registration of Three-Dimensional Medical Images; Chen Ming; China Doctoral Dissertations Full-text Database, Medicine and Health Sciences; 2003-12-15 (No. 04); E080-5 *

Also Published As

Publication number Publication date
CN110363797A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110363797B (en) PET and CT image registration method based on excessive deformation inhibition
JP7039153B2 (en) Image enhancement using a hostile generation network
CN109272443B (en) PET and CT image registration method based on full convolution neural network
Lei et al. Learning‐based CBCT correction using alternating random forest based on auto‐context model
US12073492B2 (en) Method and system for generating attenuation map from SPECT emission data
US11475535B2 (en) PET-CT registration for medical imaging
US11717233B2 (en) Assessment of abnormality patterns associated with COVID-19 from x-ray images
US11776128B2 (en) Automatic detection of lesions in medical images using 2D and 3D deep learning networks
He et al. Downsampled imaging geometric modeling for accurate CT reconstruction via deep learning
CN107146218B (en) A Method for Dynamic PET Image Reconstruction and Tracking Kinetic Parameter Estimation Based on Image Segmentation
Jin et al. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients
JP7359851B2 (en) Artificial Intelligence (AI)-based standard uptake value (SUV) correction and variation assessment for positron emission tomography (PET)
CN116547699A (en) A method for delineating clinical targets in radiotherapy
CN116630738A (en) Energy spectrum CT imaging method based on depth convolution sparse representation reconstruction network
CN118411435A (en) PET image reconstruction method based on prior image and PET image 3D perception method
Izadi et al. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks
Imran et al. Personalized CT organ dose estimation from scout images
CN110270015B (en) sCT generation method based on multi-sequence MRI
CN115423892A (en) Attenuation-free correction PET reconstruction method based on maximum expectation network
US20230065196A1 (en) Patient-specific organ dose quantification and inverse optimization for ct
US20240161289A1 (en) Deformable image registration using machine learning and mathematical methods
Sun et al. Ct reconstruction from few planar x-rays with application towards low-resource radiotherapy
Cheon et al. Deep learning in radiation oncology
Cao et al. MBST-Driven 4D-CBCT Reconstruction: Leveraging Swin Transformer and Masking for Robust Performance
Texier et al. 3D Unsupervised deep learning method for magnetic resonance imaging-to-computed tomography synthesis in prostate radiotherapy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant