WO2021120069A1 - Low-dose image reconstruction method and system based on anatomical structure difference prior - Google Patents

Low-dose image reconstruction method and system based on anatomical structure difference prior Download PDF

Info

Publication number
WO2021120069A1
WO2021120069A1 (PCT/CN2019/126411; CN2019126411W)
Authority
WO
WIPO (PCT)
Prior art keywords
low
image
dose
dose image
network
Prior art date
Application number
PCT/CN2019/126411
Other languages
English (en)
French (fr)
Inventor
胡战利
梁栋
黄振兴
杨永峰
刘新
郑海荣
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院 filed Critical 深圳先进技术研究院
Priority to PCT/CN2019/126411 priority Critical patent/WO2021120069A1/zh
Priority to US16/878,633 priority patent/US11514621B2/en
Publication of WO2021120069A1 publication Critical patent/WO2021120069A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/006Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/008Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/10X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048Monitoring, verifying, controlling systems and methods
    • A61N5/1049Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • A61N2005/1061Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using an x-ray imaging system having a separate imaging source
    • A61N2005/1062Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using an x-ray imaging system having a separate imaging source using virtual X-ray images, e.g. digitally reconstructed radiographs [DRR]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/10X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048Monitoring, verifying, controlling systems and methods
    • A61N5/1049Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00Image generation
    • G06T2211/40Computed tomography
    • G06T2211/441AI-based methods, deep learning or artificial neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00Image generation
    • G06T2211/40Computed tomography
    • G06T2211/444Low dose acquisition or reduction of radiation dose

Definitions

  • the present invention relates to the technical field of medical image processing, in particular to a low-dose image reconstruction method and system based on anatomical structure difference prior.
  • Computed tomography is an important imaging method to obtain the internal structure information of objects in a non-destructive manner. It has many advantages such as high resolution, high sensitivity, and multiple levels, and is widely used in various medical clinical examination fields.
  • CT Computed tomography
  • ALARA As Low As Reasonably Achievable
  • the main problems of the existing low-dose image reconstruction methods are that full sampling is usually required, resulting in long CT scan times; the large amount of data collected by full sampling makes image reconstruction slow; the long scan time leads to artifacts caused by patient movement; most algorithms are designed around only a few body parts, so their robustness is poor; and the patient's CT radiation dose is high.
  • the existing technical solutions ignore the large differences in the anatomical structures of low-dose images when addressing the problem of low-dose CT imaging. For example, the cranial and abdominal regions differ markedly in anatomical structure, which degrades the clarity of the reconstructed image.
  • the purpose of the present invention is to overcome the above-mentioned shortcomings of the prior art and to provide a low-dose image reconstruction method and system based on differences in anatomical structures. Image reconstruction is completed from sparse projection sampling, and the differences between anatomical structures are introduced into the network design as a form of prior information, thereby ensuring the clarity of the reconstructed image.
  • a low-dose image reconstruction method based on anatomical structure difference a priori includes the following steps:
  • Construct a discriminant network: take the predicted image and the standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as the first optimization target and identifying the different parts of the predicted image as the second optimization target, and jointly train the generation network and the discriminant network to obtain the mapping between the low-dose image and the standard-dose image;
  • the determining of the weights of different parts in the low-dose image according to the prior information on anatomical structure differences includes the following sub-steps: constructing a weight prediction module, which includes multiple convolutional layers and a Sigmoid activation function;
  • one-hot encoding is performed on the different parts of the low-dose image and fed sequentially into the multiple convolutional layers, after which the Sigmoid activation function is used to generate the weights of the different parts.
  • the generation network includes a plurality of cascaded attribute augmentation modules, which are used to multiply the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features, and to fuse the extracted features with the weighted features.
  • each attribute augmentation module includes, in turn, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature joint layer, and a feature fusion layer.
  • the discriminant network includes multiple convolutional layers and two fully connected layers.
  • for a given training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, n is the total number of training samples.
  • the parameters of the generation network are obtained by minimizing the mean-squared-error objective function, expressed as:
  • Θ* = argmin_Θ (1/n) Σ_{i=1}^{n} ‖G(y_i; a_i; Θ) − x_i‖_2^2
  • Θ represents the parameters of the generation network
  • G represents the mapping of the generation network
  • the loss function of the first optimization objective is set as:
  • L_WGAN = −E_x[D_d(x)] + E_y[D_d(G(y; a; Θ))] + β E[(‖∇D_d‖_2 − 1)^2]
  • E represents the expectation
  • β represents the balance factor
  • D_d represents the process of judging authenticity.
  • the loss function of the second optimization target is set as:
  • L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a)
  • E represents the expectation
  • D_a represents the part-attribute discrimination process
  • a low-dose image reconstruction system based on anatomical structure difference prior includes:
  • Weight prediction module: used to determine the weights of different parts in the low-dose image according to the prior information on anatomical structure differences;
  • Network construction and training module: used to construct a generation network that takes low-dose images as input to extract features, fuses the weights of the different parts during feature extraction, and outputs predicted images; and to construct a discriminant network that takes the predicted image and the standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as the first optimization target and identifying the different parts of the predicted image as the second optimization target, jointly training the generation network and the discriminant network to obtain the mapping between the low-dose image and the standard-dose image;
  • Image reconstruction module: used for low-dose image reconstruction using the obtained mapping.
  • the present invention has the advantages of: exploiting the differences in anatomical structure by fusing image content information with part information, which improves the network's ability to generate anatomical structures; and adding attribute constraints on top of the adversarial network, which improves the network's perception of anatomical structures.
  • the invention improves network performance, so that the reconstructed image retains image details well and has a clearer structure.
  • Fig. 1 is a flowchart of a low-dose image reconstruction method based on anatomical structure difference prior according to an embodiment of the present invention
  • Fig. 2 is a schematic diagram of the architecture of a weight prediction module according to an embodiment of the present invention
  • Fig. 3 is a schematic diagram of the architecture of a generative adversarial network according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of a reference standard image according to an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of a sparsely sampled low-dose image according to an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a reconstructed image according to an embodiment of the present invention.
  • the low-dose image reconstruction method based on anatomical structure difference priors considers different anatomical structures of the input image, and adds prior information (attributes) of anatomical parts to the network framework in the form of weights.
  • the same anatomical part has the same weight, and different anatomical parts have different weights.
  • data from multiple locations can be integrated on a unified model framework.
  • the Wasserstein Generative Adversarial Network (WGAN) is introduced, and, considering that the low-dose image and the estimated normal-dose image are derived from the same anatomical part, an attribute loss is proposed to define the distance between the attribute values of the estimated image and the real image. Through multiple loss constraints, the low-dose image reconstruction method of the present invention can obtain clearer images.
  • the low-dose image reconstruction method of the embodiment of the present invention includes the following steps:
  • step S110 the weights of different parts in the low-dose image are determined according to the prior information of the anatomical structure difference.
  • the weights of different parts are determined according to the weight prediction module of FIG. 2.
  • Each input low-dose image has a corresponding attribute (part), and the attribute is first one-hot encoded.
  • the weight prediction module can generate the weights corresponding to each part according to the input attributes.
  • the structural parts referred to herein include, for example, the head, the orbits, the sinuses, the neck, the lungs, the abdomen, the pelvis, the knees, and the lumbar spine.
  • for the weight prediction module in Figure 2, those skilled in the art can make appropriate modifications according to the actual application scenario, for example using more or fewer convolutional layers, using other types of activation function, or setting more or fewer channels according to the number of parts contained in the low-dose images, for example generating a weight mask with 128 channels.
  • step S120 a generative adversarial network is constructed, where the generation network uses low-dose images as input to extract features, and fuses the weights of different parts during feature extraction to output a predicted image.
  • the generative adversarial network as a whole includes two parts: a generation network and a discriminant network.
  • the generative network includes a feature extraction layer 210, a plurality of cascaded attribute augmentation modules (for example, set to 15), and a reconstruction layer 270.
  • Each attribute augmentation module includes a down-sampling layer 220, a ReLU layer 230, an up-sampling layer 240, a feature joint layer 250, and a feature fusion layer 260.
  • the attribute augmentation module completes feature extraction through a down-sampling layer 220, a ReLU layer 230, and an up-sampling layer 240, then obtains the part weights according to step S110, and multiplies the extracted features by the weights to obtain weighted features.
  • to prevent loss of the original extracted features, a joint layer is used to combine the original features and the weighted features, and the final feature fusion layer 260 (such as a convolutional layer) completes the feature fusion.
  • in Figure 3, the symbol ⊕ denotes element-wise (point-wise) addition and the symbol ⊗ denotes element-wise multiplication.
  • the parameter settings of the attribute augmentation module are as shown in Table 1.
  • the input of the generation network constructed by the present invention is a low-dose image
  • the input of the weight prediction module is the attribute corresponding to the low-dose image
  • the output of the weight prediction module is the predicted weight of each part, where each part weight is multiplied inside the generation network by the originally extracted features, and the generation network finally outputs the predicted image.
  • the prior information on anatomical structure differences can thereby be applied to the reconstruction of the low-dose image, maintaining the characteristics of each part and increasing the differences between parts, so that the predicted image is closer to the real image.
  • the present invention does not limit the number of cascaded attribute augmentation modules.
  • Step S130 For the discriminant network in the constructed generative adversarial network, the predicted image and the standard-dose image are used as input, and judging the authenticity of the input image and identifying its attribute value are the optimization targets.
  • the discriminant network needs to identify the attribute value (i.e., the part) of the input image in addition to judging its authenticity.
  • the input to the entire generative adversarial network framework consists of image blocks, of size 64x64, for example.
  • the training set and test set include images of multiple parts, such as the skull, orbits, sinuses, neck, lungs, abdomen, pelvis (male), pelvis (female), knees, and lumbar spine.
  • the discriminant network includes 7 convolutional layers and 2 fully connected layers. For specific parameter settings, see Table 2 below.
  • the input of the discriminant network is the predicted image obtained from the generation network and the normal-dose image.
  • the output of the discriminant network includes two aspects, namely judging the authenticity of the input image and identifying its attribute value.
  • the goal of the discriminant network is to distinguish the predicted images generated by the generation network from real images as far as possible, and to accurately recognize the attributes of the input image.
  • Step S140 training the generative adversarial network to obtain the mapping from low-dose images to standard-dose images.
  • given a training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}
  • x = {x_1, x_2, ..., x_n} are image blocks extracted from low-dose CT images
  • y = {y_1, y_2, ..., y_n} are image blocks extracted from standard-dose CT images (i.e., normal-dose images)
  • a = {a_1, a_2, ..., a_n} are the corresponding attributes, n being the number of training samples.
  • the parameters of the mapping G (the generation network) can be obtained by minimizing the mean-squared-error objective function, expressed as:
  • Θ* = argmin_Θ (1/n) Σ_{i=1}^{n} ‖G(y_i; a_i; Θ) − x_i‖_2^2
  • Θ represents the network parameters (such as weights and biases).
  • the adversarial loss function is introduced to optimize the model and improve the accuracy of identifying the authenticity of the input image.
  • the adversarial loss function is expressed as:
  • L_WGAN = −E_x[D_d(x)] + E_y[D_d(G(y; a; Θ))] + β E[(‖∇D_d‖_2 − 1)^2]
  • E represents the expectation
  • β represents the balance factor that balances the adversarial loss against the gradient penalty, set to 10, for example
  • D_d represents the process of judging the authenticity of the input image.
  • attribute loss is introduced to define the attribute distance between the estimated image and the original image.
  • the attribute loss is expressed as:
  • L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a)
  • E represents the expectation
  • D_a represents the attribute discrimination process
  • an optimizer from the prior art can be used for optimization; for example, for the supervised learning stage (the generation network), the Adam optimizer is used, and for the generative adversarial model, an SGD (stochastic gradient descent) optimizer is used.
  • image block pairings and corresponding attribute values are extracted from the standard-dose CT image and low-dose CT image data sets as the overall network input.
  • the mapping relationship G from the low-dose image to the standard-dose image is obtained, and a new low-dose image can be reconstructed using this mapping, so as to obtain a clear image closer to the real image.
  • the present invention provides a low-dose image reconstruction system based on an anatomical structure difference prior, to implement one or more aspects of the above method.
  • the system includes a weight prediction module, used to determine the weights of different parts in a low-dose image based on the prior information on anatomical structure differences; a network construction and training module, used to construct a generation network that takes the low-dose image as input to extract features, fuses the weights of different parts during feature extraction, and outputs predicted images, and to construct a discriminant network that takes predicted images and standard-dose images as input, with distinguishing the authenticity of the predicted images versus the standard-dose images as the first optimization target and identifying the different parts of the predicted images as the second optimization target, jointly training the generation network and the discriminant network to obtain the mapping between low-dose images and standard-dose images; and an image reconstruction module, used to perform low-dose image reconstruction using the obtained mapping.
  • Each module in the system provided by the present invention can be implemented by a processor or a logic circuit.
  • after appropriate adaptation, the present invention can also be applied to PET (positron emission tomography) or SPECT (single-photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.
  • Figure 4 is a reference standard image
  • Figure 5 is a sparsely sampled low-dose image
  • Figure 6 is a reconstructed image or restored image.
  • the present invention converts attribute values into weight masks through the weight prediction module, and completes the fusion of the original image features and the attribute features by placing attribute augmentation modules in the generation network; the attribute loss is defined on the basis that the original low-dose image and the estimated image share the same attribute value, thereby strengthening the constraints on the generative adversarial network and obtaining more accurate high-definition images.
  • the present invention may be a system, a method and/or a computer program product.
  • the computer program product may include a computer non-transitory readable storage medium loaded with computer readable program instructions for enabling a processor to implement various aspects of the present invention.
  • the computer-readable storage medium may be a tangible device that holds and stores instructions used by the instruction execution device.
  • the computer-readable storage medium may include, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing, for example.
  • a non-exhaustive list of more specific examples of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanical encoding devices such as punch cards or raised structures in grooves with instructions stored thereon.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A low-dose image reconstruction method based on an anatomical structure difference prior. The method comprises: determining weights for different parts in a low-dose image according to prior information on anatomical structure differences (S110); constructing a generative adversarial network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image (S120); for the discriminant network in the constructed generative adversarial network, taking the predicted image and a standard-dose image as input, with discrimination of the authenticity of the input image and identification of its attribute value as the optimization targets (S130); and training the generative adversarial network to obtain the mapping from low-dose images to standard-dose images (S140).

Description

Low-dose image reconstruction method and system based on anatomical structure difference prior

Technical field

The present invention relates to the technical field of medical image processing, and in particular to a low-dose image reconstruction method and system based on an anatomical structure difference prior.

Background

Computed tomography (CT) is an important imaging technique for obtaining the internal structural information of an object in a non-destructive manner. It offers high resolution, high sensitivity, and multi-level imaging, among other advantages, and is widely used in many fields of clinical medical examination. However, because CT scanning requires X-rays, and as the potential harm of radiation has become better understood, the issue of CT radiation dose has received increasing attention. The ALARA (As Low As Reasonably Achievable) principle requires that the radiation dose to the patient be kept as low as possible while still meeting the needs of clinical diagnosis. Developing new low-dose CT imaging methods that preserve CT image quality while reducing harmful radiation dose is therefore of great scientific significance and application value for medical diagnosis.

The main problems of existing low-dose image reconstruction methods are: full sampling is usually required, leading to long CT scan times; the large amount of data collected by full sampling makes image reconstruction slow; long scan times produce artifacts caused by patient motion; most algorithms are designed around only a few body parts, so their robustness is poor; and the patient's CT radiation dose remains high. Moreover, existing technical solutions to the low-dose CT imaging problem ignore the large anatomical differences among low-dose images; for example, the cranium and the abdomen differ markedly in anatomical structure, and ignoring this degrades the clarity of the reconstructed image.
Summary of the invention

The purpose of the present invention is to overcome the above shortcomings of the prior art and to provide a low-dose image reconstruction method and system based on anatomical structure differences. Image reconstruction is completed from sparse projection sampling, the differences between anatomical structures are taken into account, and these differences are introduced into the network design as a form of prior information, thereby ensuring the clarity of the reconstructed image.

According to a first aspect of the present invention, a low-dose image reconstruction method based on an anatomical structure difference prior is provided. The method comprises the following steps:

determining weights for different parts in a low-dose image according to prior information on anatomical structure differences;

constructing a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image;

constructing a discriminant network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target, and jointly training the generation network and the discriminant network to obtain the mapping between low-dose images and standard-dose images;

performing low-dose image reconstruction using the obtained mapping.
In one embodiment, determining the weights of the different parts in the low-dose image according to the prior information on anatomical structure differences comprises the following sub-steps:

constructing a weight prediction module comprising multiple convolutional layers and a Sigmoid activation function;

one-hot encoding the different parts of the low-dose image, feeding the encoding through the multiple convolutional layers in sequence, and then using the Sigmoid activation function to generate the weights of the different parts.

In one embodiment, the generation network comprises multiple cascaded attribute augmentation modules, which multiply the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features and fuse the extracted features with the weighted features, wherein each attribute augmentation module comprises, in sequence, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature joint layer, and a feature fusion layer.

In one embodiment, the discriminant network comprises multiple convolutional layers and two fully connected layers.
In one embodiment, for a given training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where x = {x_1, x_2, ..., x_n} are image blocks extracted from low-dose images, y = {y_1, y_2, ..., y_n} are image blocks extracted from standard-dose images, a = {a_1, a_2, ..., a_n} are the weights corresponding to different parts, and n is the total number of training samples, the parameters of the generation network are obtained during joint training by minimizing a mean-squared-error objective function, expressed as:

Θ* = argmin_Θ (1/n) Σ_{i=1}^{n} ‖G(y_i; a_i; Θ) − x_i‖_2^2

where Θ denotes the parameters of the generation network and G denotes the mapping of the generation network.
In one embodiment, the loss function of the first optimization target is set as:

L_WGAN = −E_x[D_d(x)] + E_y[D_d(G(y; a; Θ))] + β E[(‖∇D_d‖_2 − 1)^2]

where E denotes expectation, β denotes the balance factor, and D_d denotes the authenticity discrimination process.
In one embodiment, the loss function of the second optimization target is set as:

L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a)

where E denotes expectation and D_a denotes the part-attribute discrimination process.
According to a second aspect of the present invention, a low-dose image reconstruction system based on an anatomical structure difference prior is provided. The system comprises:

a weight prediction module: used to determine the weights of different parts in a low-dose image according to prior information on anatomical structure differences;

a network construction and training module: used to construct a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image; and to construct a discriminant network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target, jointly training the generation network and the discriminant network to obtain the mapping between low-dose images and standard-dose images;

an image reconstruction module: used to perform low-dose image reconstruction using the obtained mapping.

Compared with the prior art, the advantages of the present invention are: by exploiting anatomical differences and fusing image content information with part information, the network's ability to generate anatomical structures is improved; and by adding attribute constraints on top of the adversarial network, the network's perception of anatomical structures is improved. The invention improves network performance, so that the reconstructed image retains image details well and has a clearer structure.
Brief description of the drawings

The following drawings merely illustrate and explain the present invention schematically and are not intended to limit its scope, in which:

Fig. 1 is a flowchart of a low-dose image reconstruction method based on an anatomical structure difference prior according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the architecture of a weight prediction module according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of the architecture of a generative adversarial network according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of a reference standard image according to an embodiment of the present invention;

Fig. 5 is a schematic diagram of a sparsely sampled low-dose image according to an embodiment of the present invention;

Fig. 6 is a schematic diagram of a reconstructed image according to an embodiment of the present invention.

Detailed description

To make the purpose, technical solution, design method, and advantages of the present invention clearer, the invention is further described in detail below through specific embodiments with reference to the drawings. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.

In all examples shown and discussed herein, any specific value should be interpreted as merely illustrative, not limiting. Other examples of the exemplary embodiments may therefore have different values.

Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In brief, the low-dose image reconstruction method based on an anatomical structure difference prior provided by embodiments of the present invention takes the different anatomical structures of the input image into account by introducing prior information (attributes) about the anatomical part into the network framework in the form of weights. The same anatomical part receives the same weight, and different anatomical parts receive different weights. In this way, data from multiple parts can be integrated into a unified model framework. To improve the visual quality of the images, a Wasserstein generative adversarial network (WGAN) is introduced, and, considering that the low-dose image and the estimated normal-dose image come from the same anatomical part, an attribute loss is proposed to define the distance between the attribute values of the estimated image and the real image. Through these multiple loss constraints, the low-dose image reconstruction method of the present invention can obtain clearer images.

Specifically, referring to Fig. 1, the low-dose image reconstruction method of the embodiment of the present invention comprises the following steps:

Step S110: determine the weights of different parts in the low-dose image according to prior information on anatomical structure differences.

For example, the weights of the different parts are determined by the weight prediction module of Fig. 2. Each input low-dose image has a corresponding attribute (part), which is first one-hot encoded. Six convolutional layers (1x1 kernels) are used, and finally a Sigmoid activation function generates a weight mask with 64 channels. Similar to the U-net structure, the channels are compressed and then expanded, and skip connections link convolutional layers with the same number of channels to retain more context information; for example, in Fig. 2 the first layer from the bottom (1x1x64) is skip-connected to the fifth layer (1x1x64), and the second layer (1x1x32) to the fourth layer (1x1x32). The weight prediction module can thus generate the weight corresponding to each part from the input attribute.

The structural parts referred to herein include, for example, the head, orbits, sinuses, neck, lungs, abdomen, pelvis, knees, and lumbar spine.

It should be noted that those skilled in the art may modify the weight prediction module of Fig. 2 as appropriate for the actual application scenario, for example by using more or fewer convolutional layers, using other types of activation function, or setting more or fewer channels according to the number of parts contained in the low-dose images, for example generating a weight mask with 128 channels. In addition, in other embodiments, the weights of different parts may simply be set directly, as long as different parts are differentiated.
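The weight-prediction step can be sketched in a few lines. The following is a minimal numpy sketch, not the trained module: the layer sizes (10 parts, one 32-channel hidden layer, a 64-channel mask) and the random matrices standing in for the trained 1x1 convolutions are assumptions for illustration. On a 1x1 spatial input, a 1x1 convolution reduces to a plain matrix multiply.

```python
import numpy as np

def one_hot(part_index, num_parts):
    """One-hot encode an anatomical part (attribute) index."""
    v = np.zeros(num_parts)
    v[part_index] = 1.0
    return v

def predict_weights(attr, layers):
    """Pass the one-hot attribute through 1x1-conv-equivalent matrix
    multiplies (ReLU between layers), then a Sigmoid, producing a
    per-channel weight mask with every entry in (0, 1)."""
    h = attr
    for W in layers[:-1]:
        h = np.maximum(W @ h, 0.0)       # hidden layer + ReLU
    h = layers[-1] @ h
    return 1.0 / (1.0 + np.exp(-h))      # Sigmoid -> weight mask

rng = np.random.default_rng(0)
num_parts, channels = 10, 64
layers = [rng.standard_normal((32, num_parts)),   # hypothetical 10 -> 32
          rng.standard_normal((channels, 32))]    # hypothetical 32 -> 64
mask_head = predict_weights(one_hot(0, num_parts), layers)
mask_lung = predict_weights(one_hot(4, num_parts), layers)
```

Because the input is a one-hot vector, each part index deterministically selects its own 64-channel mask: the same part always receives the same weights, and different parts receive different ones, matching the prior described above.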
Step S120: construct a generative adversarial network, in which the generation network takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image.

Referring to Fig. 3, the generative adversarial network as a whole comprises two parts, a generation network and a discriminant network. The generation network comprises a feature extraction layer 210, multiple cascaded attribute augmentation modules (for example, 15 of them), and a reconstruction layer 270. Each attribute augmentation module comprises a down-sampling layer 220, a ReLU layer 230, an up-sampling layer 240, a feature joint layer 250, and a feature fusion layer 260. The attribute augmentation module performs feature extraction through the down-sampling layer 220, ReLU layer 230, and up-sampling layer 240, obtains the part weights from step S110, and multiplies the extracted features by the weights to obtain weighted features. To prevent loss of the originally extracted features, the joint layer combines the original features with the weighted features, and the final feature fusion layer 260 (e.g., a convolutional layer) completes the feature fusion. In Fig. 3, the symbol ⊕ denotes element-wise addition and the symbol ⊗ denotes element-wise multiplication.

In one embodiment, the parameters of the attribute augmentation module are set as in Table 1.

Table 1: Attribute augmentation module

Unit                  Operation      Parameters
Down-sampling layer   Convolution    3x3x64
Up-sampling layer     Deconvolution  3x3x64
Feature fusion layer  Convolution    1x1x64

The input of the generation network constructed by the present invention is a low-dose image; the input of the weight prediction module is the attribute corresponding to that low-dose image; and the output of the weight prediction module is the predicted weight of each part, where each part weight is multiplied by the originally extracted features inside the generation network, which finally outputs the predicted image.

In the embodiment of the present invention, by providing the attribute augmentation modules and the weight prediction module, the prior information on anatomical structure differences is applied to the reconstruction of the low-dose image, preserving the characteristics of each part and increasing the differences between parts, so that the predicted image is closer to the real image. The present invention does not limit the number of cascaded attribute augmentation modules.
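The multiply-join-fuse step inside one attribute augmentation module can be illustrated with a hedged numpy sketch. The 1x1 fusion convolution is stood in for here by a per-pair channel average, which is an assumption for illustration, not the patent's trained layer:

```python
import numpy as np

def attribute_augment(features, weights):
    """Multiply the extracted feature maps (C, H, W) by the per-channel
    part weights (C,), join the original and weighted features along
    the channel axis, then fuse back to C channels (stand-in for the
    1x1 fusion convolution)."""
    C = features.shape[0]
    weighted = features * weights[:, None, None]           # element-wise multiply
    joined = np.concatenate([features, weighted], axis=0)  # joint layer: (2C, H, W)
    fused = 0.5 * (joined[:C] + joined[C:])                # stand-in fusion back to C
    return fused

feats = np.ones((64, 8, 8))     # toy feature maps
mask = np.full(64, 0.5)         # toy part weights
out = attribute_augment(feats, mask)
```

The joint step keeps the original features alongside the weighted ones, which is exactly the loss-prevention rationale given in the text for the feature joint layer.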
Step S130: for the discriminant network in the constructed generative adversarial network, take the predicted image and the standard-dose image as input, with discrimination of the authenticity of the input image and identification of its attribute value as the optimization targets.

Since the input low-dose image and the final estimated image share the same attribute, the discriminant network must identify the attribute value (i.e., the part) of the input image in addition to judging its authenticity. The input to the whole generative adversarial framework consists of image blocks, of size 64x64, for example. The training and test sets contain images of multiple parts, for example the head, orbits, sinuses, neck, lungs, abdomen, pelvis (male), pelvis (female), knees, and lumbar spine.
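The 64x64 image blocks mentioned above can be tiled from a CT slice as follows. The stride value is an assumption, since the text does not specify how blocks are sampled:

```python
import numpy as np

def extract_blocks(image, block=64, stride=64):
    """Tile a 2-D slice into block x block patches with the given stride
    (non-overlapping by default)."""
    H, W = image.shape
    return np.stack([image[i:i + block, j:j + block]
                     for i in range(0, H - block + 1, stride)
                     for j in range(0, W - block + 1, stride)])

slice_ = np.arange(256 * 256, dtype=float).reshape(256, 256)  # toy slice
blocks = extract_blocks(slice_)  # 4 x 4 grid of 64x64 blocks
```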
In one embodiment, the discriminant network comprises 7 convolutional layers and 2 fully connected layers; the specific parameter settings are given in Table 2 below.

Table 2: Discriminant network parameters

Unit                     Stride   Kernels
Convolutional layer 1    2        64
Convolutional layer 2    1        128
Convolutional layer 3    2        128
Convolutional layer 4    1        256
Convolutional layer 5    2        256
Convolutional layer 6    1        512
Convolutional layer 7    2        512
Fully connected layer 1  -        1
Fully connected layer 2  -        10

The input of the discriminant network is the predicted image obtained from the generation network and the normal-dose image. Its output covers two aspects, namely judging the authenticity of the input image and identifying its attribute value. In other words, the goal of the discriminant network is to distinguish the predicted images produced by the generation network from real images as far as possible, and to accurately identify the attribute of the input image.
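The two outputs in the final rows of Table 2 (a 1-unit and a 10-unit fully connected layer) can be sketched as a two-headed read-out over the convolutional features. The feature size and the random weight matrices below are placeholders for the trained network:

```python
import numpy as np

def discriminator_heads(features, w_auth, w_attr):
    """Apply the two fully connected heads: a single authenticity score
    (real vs. generated) and a 10-way attribute (part) read-out."""
    authenticity = float(w_auth @ features)   # FC layer 1 -> 1 value
    attr_logits = w_attr @ features           # FC layer 2 -> 10 values
    return authenticity, attr_logits

rng = np.random.default_rng(1)
feat = rng.standard_normal(512)               # placeholder conv feature vector
score, logits = discriminator_heads(feat,
                                    rng.standard_normal(512),
                                    rng.standard_normal((10, 512)))
```

The 10-way head corresponds to the ten parts listed for the training and test sets; the scalar head feeds the adversarial loss below.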
Step S140: train the generative adversarial network to obtain the mapping from low-dose images to standard-dose images.

For example, given a training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where x = {x_1, x_2, ..., x_n} are image blocks extracted from low-dose CT images, y = {y_1, y_2, ..., y_n} are image blocks extracted from standard-dose CT images (i.e., normal-dose images), a = {a_1, a_2, ..., a_n} are the corresponding attributes, and n is the number of training samples.

For the pretrained supervised model, the parameters of the mapping G (the generation network) can be obtained by minimizing the mean-squared-error objective function, expressed as:

Θ* = argmin_Θ (1/n) Σ_{i=1}^{n} ‖G(y_i; a_i; Θ) − x_i‖_2^2   (1)

where Θ denotes the network parameters (e.g., weights, biases).
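The objective in Eq. (1) can be checked numerically on toy data: it averages, over training pairs, the squared L2 distance between the generator's output blocks and the reference blocks.

```python
import numpy as np

def mse_objective(pred_blocks, target_blocks):
    """Mean over samples of the squared L2 distance between predicted
    and reference image blocks, as minimized in Eq. (1)."""
    diff = (pred_blocks - target_blocks).reshape(len(pred_blocks), -1)
    return float(np.mean(np.sum(diff ** 2, axis=1)))

pred = np.zeros((2, 2, 2))      # two toy 2x2 predicted blocks
target = np.ones((2, 2, 2))     # two toy 2x2 reference blocks
loss = mse_objective(pred, target)  # each block differs by 1 at 4 pixels
```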
To improve the visual quality, an adversarial loss function is introduced to optimize the model and improve the accuracy of judging the authenticity of the input image. The adversarial loss function is expressed as:

L_WGAN = −E_x[D_d(x)] + E_y[D_d(G(y; a; Θ))] + β E[(‖∇D_d‖_2 − 1)^2]   (2)

where E denotes expectation, β denotes the balance factor that balances the adversarial loss against the gradient penalty term (set to 10, for example), and D_d denotes the process of judging the authenticity of the input image.
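Once the critic scores and gradient norms are given, the adversarial objective can be exercised numerically. This is a hedged sketch of the Wasserstein-plus-gradient-penalty arrangement described above (the original formula image is not reproduced in this text, so the standard WGAN-GP form is assumed):

```python
import numpy as np

def adversarial_loss(d_real, d_fake, grad_norms, beta=10.0):
    """Wasserstein critic term plus a gradient penalty weighted by the
    balance factor beta (set to 10 in the embodiment). grad_norms are
    ||grad D_d|| evaluated at interpolated samples."""
    critic = float(np.mean(d_fake) - np.mean(d_real))
    penalty = beta * float(np.mean((np.asarray(grad_norms) - 1.0) ** 2))
    return critic + penalty

loss = adversarial_loss(d_real=[1.0, 1.0], d_fake=[0.0, 0.0],
                        grad_norms=[1.0, 1.0])
```

With unit gradient norms the penalty vanishes and only the critic gap remains, which is the behavior the balance factor β trades off against.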
Further, for the process of identifying the attribute of the input image, because the input low-dose image and the estimated image share the same attribute, an attribute loss is introduced to define the attribute distance between the estimated image and the original image. The attribute loss is expressed as:

L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a)   (3)

where E denotes expectation and D_a denotes the attribute discrimination process.
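Taken literally, Eq. (3) computes the expected signed deviation of the discriminator's attribute prediction from the true attribute, over both the real blocks and the generated estimates. A direct numpy transcription follows; whether the deviation should be taken in absolute value or squared is not stated in the text, so the signed form is kept:

```python
import numpy as np

def attribute_loss(da_real, da_fake, attr):
    """L_Attribute = E_x(D_a(x) - a) + E_y(D_a(G(y; a; Theta)) - a),
    where da_real / da_fake are the attribute values the discriminator
    predicts for real and generated blocks, and attr is the true value."""
    return float(np.mean(np.asarray(da_real) - attr)
                 + np.mean(np.asarray(da_fake) - attr))

loss = attribute_loss(da_real=[3.0, 3.0], da_fake=[3.0, 3.0], attr=3.0)
```

The loss is zero exactly when both the real block and the estimate are assigned the true attribute, enforcing the shared-part constraint described above.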
It should be noted that during joint training of the generation network and the discriminant network, optimizers from the prior art may be used: for example, the Adam optimizer for the supervised learning stage (the generation network) and an SGD (stochastic gradient descent) optimizer for the generative adversarial model. During training, paired image blocks and the corresponding attribute values are extracted from the standard-dose and low-dose CT image datasets as the overall network input. Other forms of loss function may also be used for training.

After the generative adversarial network has been trained, the mapping G from low-dose images to standard-dose images is obtained; using this mapping, a new low-dose image can be reconstructed to obtain a clear image closer to the real image.
Correspondingly, the present invention provides a low-dose image reconstruction system based on an anatomical structure difference prior, used to implement one or more aspects of the above method. For example, the system comprises: a weight prediction module, used to determine the weights of different parts in a low-dose image according to prior information on anatomical structure differences; a network construction and training module, used to construct a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image, and to construct a discriminant network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target, jointly training the generation network and the discriminant network to obtain the mapping between low-dose images and standard-dose images; and an image reconstruction module, used to perform low-dose image reconstruction using the obtained mapping. The modules of the system provided by the present invention may be implemented by a processor or by logic circuits.

It should be noted that, besides CT image reconstruction, the present invention may also be applied, after appropriate adaptation, to PET (positron emission tomography) or SPECT (single-photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.

It has been verified that image reconstruction using the present invention yields clearer images containing more detail; see Figs. 4 to 6, where Fig. 4 is a reference standard image, Fig. 5 is a sparsely sampled low-dose image, and Fig. 6 is the reconstructed (restored) image.

In summary, the present invention converts attribute values into weight masks through the weight prediction module and fuses the original image features with the attribute features by placing attribute augmentation modules in the generation network; the attribute loss is defined on the basis that the original low-dose image and the estimated image share the same attribute value, thereby strengthening the constraints on the generative adversarial network and obtaining more accurate, high-definition images.

It should be noted that although the steps are described above in a specific order, this does not mean that the steps must be executed in that order; in fact, some of these steps may be executed concurrently or even in a different order, as long as the required functions can be achieved.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer non-transitory readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.

The computer-readable storage medium may be a tangible device that holds and stores instructions used by an instruction execution device. The computer-readable storage medium may include, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing.

The embodiments of the present invention have been described above; the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or improvements over technology in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

  1. A low-dose image reconstruction method based on an anatomical structure difference prior, comprising the following steps:
    determining weights for different parts in a low-dose image according to prior information on anatomical structure differences;
    constructing a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image;
    constructing a discriminant network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target, and jointly training the generation network and the discriminant network to obtain the mapping between low-dose images and standard-dose images;
    performing low-dose image reconstruction using the obtained mapping.
  2. The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein determining the weights of the different parts in the low-dose image according to the prior information on anatomical structure differences comprises the following sub-steps:
    constructing a weight prediction module comprising multiple convolutional layers and a Sigmoid activation function;
    one-hot encoding the different parts of the low-dose image, feeding the encoding through the multiple convolutional layers in sequence, and then using the Sigmoid activation function to generate the weights of the different parts.
  3. The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein the generation network comprises multiple cascaded attribute augmentation modules, which multiply the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features and fuse the extracted features with the weighted features, each attribute augmentation module comprising, in sequence, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature joint layer, and a feature fusion layer.
  4. The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein the discriminant network comprises multiple convolutional layers and two fully connected layers.
  5. The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein, for a given training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where x = {x_1, x_2, ..., x_n} are image blocks extracted from low-dose images, y = {y_1, y_2, ..., y_n} are image blocks extracted from standard-dose images, a = {a_1, a_2, ..., a_n} are the weights corresponding to different parts, and n is the total number of training samples, the parameters of the generation network are obtained during joint training by minimizing a mean-squared-error objective function, expressed as:
    Θ* = argmin_Θ (1/n) Σ_{i=1}^{n} ‖G(y_i; a_i; Θ) − x_i‖_2^2
    where Θ denotes the parameters of the generation network and G denotes the mapping of the generation network.
  6. The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 5, wherein the loss function of the first optimization target is set as:
    L_WGAN = −E_x[D_d(x)] + E_y[D_d(G(y; a; Θ))] + β E[(‖∇D_d‖_2 − 1)^2]
    where E denotes expectation, β denotes the balance factor, and D_d denotes the authenticity discrimination process.
  7. The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 5, wherein the loss function of the second optimization target is set as:
    L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a)
    where E denotes expectation and D_a denotes the part-attribute discrimination process.
  8. A low-dose image reconstruction system based on an anatomical structure difference prior, comprising:
    a weight prediction module, used to determine the weights of different parts in a low-dose image according to prior information on anatomical structure differences;
    a network construction and training module, used to construct a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image, and to construct a discriminant network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image versus the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target, jointly training the generation network and the discriminant network to obtain the mapping between low-dose images and standard-dose images;
    an image reconstruction module, used to perform low-dose image reconstruction using the obtained mapping.
  9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
  10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 7.
PCT/CN2019/126411 2019-12-18 2019-12-18 Low-dose image reconstruction method and system based on anatomical structure difference prior WO2021120069A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/126411 WO2021120069A1 (zh) 2019-12-18 2019-12-18 Low-dose image reconstruction method and system based on anatomical structure difference prior
US16/878,633 US11514621B2 (en) 2019-12-18 2020-05-20 Low-dose image reconstruction method and system based on prior anatomical structure difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/126411 WO2021120069A1 (zh) 2019-12-18 2019-12-18 Low-dose image reconstruction method and system based on anatomical structure difference prior

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/878,633 Continuation-In-Part US11514621B2 (en) 2019-12-18 2020-05-20 Low-dose image reconstruction method and system based on prior anatomical structure difference

Publications (1)

Publication Number Publication Date
WO2021120069A1 true WO2021120069A1 (zh) 2021-06-24

Family

ID=76438638

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126411 WO2021120069A1 (zh) 2019-12-18 2019-12-18 Low-dose image reconstruction method and system based on prior anatomical structure difference

Country Status (2)

Country Link
US (1) US11514621B2 (zh)
WO (1) WO2021120069A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11806175B2 (en) * 2019-09-12 2023-11-07 Rensselaer Polytechnic Institute Few-view CT image reconstruction system
CN113506353A (zh) * 2021-07-22 2021-10-15 深圳高性能医疗器械国家研究院有限公司 Image processing method and system and application thereof
CN115393534B (zh) * 2022-10-31 2023-01-20 深圳市宝润科技有限公司 Deep-learning-based cone-beam three-dimensional DR reconstruction method and system
CN117611731B (zh) * 2023-10-09 2024-05-28 Sichuan University GANs-based craniofacial restoration method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156974A (zh) * 2011-04-22 2011-08-17 Zhejiang University Dynamic PET concentration reconstruction method based on H∞ filtering under anatomical information constraints
US20150342549A1 (en) * 2014-05-29 2015-12-03 Samsung Electronics Co., Ltd. X-ray imaging apparatus and control method for the same
CN108693491A (zh) * 2017-04-07 2018-10-23 Cornell University Robust quantitative susceptibility mapping system and method
CN110084794A (zh) * 2019-04-22 2019-08-02 South China University of Technology Skin cancer image recognition method based on attention convolutional neural network
CN110288671A (zh) * 2019-06-25 2019-09-27 Nanjing University of Posts and Telecommunications Low-dose CBCT image reconstruction method based on a three-dimensional generative adversarial network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11126914B2 (en) * 2017-10-11 2021-09-21 General Electric Company Image generation using machine learning


Also Published As

Publication number Publication date
US11514621B2 (en) 2022-11-29
US20210192806A1 (en) 2021-06-24

Similar Documents

Publication Publication Date Title
Jiang et al. COVID-19 CT image synthesis with a conditional generative adversarial network
WO2021120069A1 (zh) Low-dose image reconstruction method and system based on prior anatomical structure difference
CN111179366B (zh) Low-dose image reconstruction method and system based on prior anatomical structure difference
Hwang et al. Clinical implementation of deep learning in thoracic radiology: potential applications and challenges
US9256965B2 (en) Method and apparatus for generating a derived image using images of different types
Sander et al. Automatic segmentation with detection of local segmentation failures in cardiac MRI
US20200265276A1 (en) Copd classification with machine-trained abnormality detection
Qiu et al. Automatic segmentation of mandible from conventional methods to deep learning—a review
Han et al. Automated pathogenesis-based diagnosis of lumbar neural foraminal stenosis via deep multiscale multitask learning
CN110400617A (zh) 医学成像中的成像和报告的结合
CN112470190A (zh) 用于改进低剂量体积对比增强mri的系统和方法
Koshino et al. Narrative review of generative adversarial networks in medical and molecular imaging
Chen et al. Generative models improve radiomics reproducibility in low dose CTs: a simulation study
CN109741254A (zh) 字典训练及图像超分辨重建方法、系统、设备及存储介质
McDonald et al. Investigation of autosegmentation techniques on T2-weighted MRI for off-line dose reconstruction in MR-linac Adapt to Position workflow for head and neck cancers
WO2020113148A1 (en) Single or a few views computed tomography imaging with deep neural network
Yao et al. W‐Transformer: Accurate Cobb angles estimation by using a transformer‐based hybrid structure
Chen et al. DuSFE: Dual-Channel Squeeze-Fusion-Excitation co-attention for cross-modality registration of cardiac SPECT and CT
Sander et al. Reconstruction and completion of high-resolution 3D cardiac shapes using anisotropic CMRI segmentations and continuous implicit neural representations
Sui et al. Simultaneous image reconstruction and lesion segmentation in accelerated MRI using multitasking learning
Xia et al. Dynamic controllable residual generative adversarial network for low-dose computed tomography imaging
Ichikawa et al. Acquisition time reduction in pediatric 99mTc‐DMSA planar imaging using deep learning
Gerard et al. Direct estimation of regional lung volume change from paired and single CT images using residual regression neural network
Xue et al. Early Pregnancy Fetal Facial Ultrasound Standard Plane‐Assisted Recognition Algorithm
Sun et al. Clinical ultra‐high resolution CT scans enabled by using a generative adversarial network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19956494

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19956494

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.01.2023)
