WO2021120069A1 - Low-dose image reconstruction method and system based on an anatomical structure difference prior - Google Patents
Low-dose image reconstruction method and system based on an anatomical structure difference prior
- Publication number: WO2021120069A1 (PCT application PCT/CN2019/126411)
- Authority: WIPO (PCT)
- Prior art keywords: low, image, dose, dose image, network
- Prior art date: 2019-12-18
Classifications
- G06T11/003 — 2D [Two Dimensional] image generation; reconstruction from projections, e.g. tomography
- G06T11/006 — Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
- G06T11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- A61N5/1049 — Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
- A61N2005/1061 — Verifying the position of the patient using an x-ray imaging system having a separate imaging source
- A61N2005/1062 — Verifying the position of the patient using virtual X-ray images, e.g. digitally reconstructed radiographs [DRR]
- G06T2211/441 — Computed tomography: AI-based methods, deep learning or artificial neural networks
- G06T2211/444 — Computed tomography: low dose acquisition or reduction of radiation dose
Definitions
- the present invention relates to the technical field of medical image processing, in particular to a low-dose image reconstruction method and system based on anatomical structure difference prior.
- Computed tomography (CT) is an important imaging method for obtaining the internal structure of objects in a non-destructive manner. It offers high resolution, high sensitivity, and multi-level imaging, and is widely used in many fields of clinical medical examination.
- In clinical practice, CT radiation dose is governed by the ALARA (As Low As Reasonably Achievable) principle, which motivates low-dose imaging.
- The main problems of existing low-dose image reconstruction methods are: full sampling is usually required, so CT scan times are long; the large amount of data collected by full sampling makes image reconstruction slow; long scan times cause artifacts from patient movement; most algorithms are designed for only a few anatomical parts, so they are not robust; and the patient's CT radiation dose remains high.
- Existing technical solutions ignore the large differences in anatomical structure among low-dose images when addressing low-dose CT imaging. For example, cranial and abdominal anatomy differ markedly, and ignoring this affects the clarity of the reconstructed image.
- The purpose of the present invention is to overcome the above-mentioned shortcomings of the prior art and provide a low-dose image reconstruction method and system based on differences in anatomical structures.
- Image reconstruction is completed from sparse projection sampling, and the differences in anatomical structures are introduced into the network design as a form of prior information to ensure the clarity of the reconstructed image.
- A low-dose image reconstruction method based on an anatomical structure difference prior includes the following steps:
- Determine the weights of different parts in the low-dose image according to the prior information of the anatomical structure difference; construct a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image; construct a discrimination network that takes the predicted image and the standard-dose image as input, with distinguishing the authenticity of the predicted image and the standard-dose image as the first optimization target and identifying different parts of the predicted image as the second optimization target; and jointly train the generation network and the discrimination network to obtain the mapping relationship between low-dose images and standard-dose images.
- Determining the weights of different parts in the low-dose image according to the prior information of the anatomical structure difference includes the following sub-steps:
- One-hot encoding is performed on the different parts of the low-dose image; the codes are input sequentially to multiple convolutional layers, and a Sigmoid activation function then generates the weights for the different parts.
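The sub-steps above can be sketched concretely. The following is a minimal numpy sketch of the weight-prediction idea — a one-hot attribute code passed through small linear layers (standing in for 1x1 convolutions, to which they are equivalent on a 1x1 input) and a Sigmoid — with layer sizes and random parameters chosen purely for illustration, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_PARTS = 10     # e.g. skull, orbits, sinuses, neck, lungs, abdomen, pelvis, knees, ...
NUM_CHANNELS = 64  # one weight per feature channel (illustrative)

# Hypothetical layer parameters; on a 1x1 spatial input, a 1x1 convolution
# reduces to a dense layer, which is what we use here.
W1 = rng.standard_normal((NUM_PARTS, 32)) * 0.1
W2 = rng.standard_normal((32, NUM_CHANNELS)) * 0.1

def one_hot(part_index: int, num_parts: int = NUM_PARTS) -> np.ndarray:
    """One-hot encode the anatomical-part attribute."""
    v = np.zeros(num_parts)
    v[part_index] = 1.0
    return v

def predict_weights(part_index: int) -> np.ndarray:
    """Map a part attribute to a per-channel weight mask in (0, 1)."""
    h = np.maximum(one_hot(part_index) @ W1, 0.0)  # convolutional layer + ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2)))         # Sigmoid activation

w_lungs = predict_weights(4)
w_knees = predict_weights(7)
# Same part -> same weights; different parts -> different weights.
assert np.allclose(w_lungs, predict_weights(4))
assert not np.allclose(w_lungs, w_knees)
```

The same idea scales to, e.g., a 128-channel weight mask simply by changing NUM_CHANNELS.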
- The generation network includes a plurality of cascaded attribute augmentation modules, which multiply the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features, and then fuse the extracted features with the weighted features.
- Each attribute augmentation module includes, in turn, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature joint layer, and a feature fusion layer.
- the discriminant network includes multiple convolutional layers and two fully connected layers.
- The training set is D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where n is the total number of training samples.
- The parameters of the generation network are obtained by minimizing the mean-square-error objective, min_Θ (1/n) Σ_i ||G(x_i; a_i; Θ) − y_i||², where:
- Θ represents the parameters of the generation network
- G represents the mapping of the generation network
- The loss function of the first optimization objective is the adversarial loss; in WGAN form with a gradient penalty it can be written as L_adv = E_y[D_d(y)] − E_x[D_d(G(x; a; Θ))] + λ·E[(||∇D_d||_2 − 1)²], where:
- E represents the expectation calculation
- λ represents the balance factor
- D_d represents the authenticity discrimination process.
- The loss function of the second optimization target is set as L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a), where:
- E represents the expectation calculation
- D_a represents the part-attribute discrimination process
- A low-dose image reconstruction system based on an anatomical structure difference prior includes:
- a weight prediction module, used to determine the weights of different parts in the low-dose image according to the prior information of the anatomical structure difference;
- a network construction and training module, used to construct a generation network that takes low-dose images as input to extract features, fuses the weights of the different parts during feature extraction, and outputs predicted images; and to construct a discrimination network that takes the predicted image and the standard-dose image as input, with distinguishing the authenticity of the predicted image and the standard-dose image as the first optimization target and identifying the different parts of the predicted image as the second optimization target, the generation network and the discrimination network being jointly trained to obtain the mapping relationship between low-dose images and standard-dose images;
- an image reconstruction module, used to perform low-dose image reconstruction using the obtained mapping relationship.
- Compared with the prior art, the present invention has the following advantages: it exploits anatomical structure differences and fuses image content information with location information, improving the network's ability to generate anatomical structures; and, building on the adversarial network, it adds attribute constraints that improve the network's perception of anatomy.
- The invention thereby improves network performance, so that the reconstructed image retains image details well and has a clearer structure.
- Fig. 1 is a flowchart of a low-dose image reconstruction method based on anatomical structure difference prior according to an embodiment of the present invention
- Fig. 2 is a schematic diagram of the architecture of a weight prediction module according to an embodiment of the present invention
- Fig. 3 is a schematic diagram of the architecture of a generative adversarial network according to an embodiment of the present invention.
- Fig. 4 is a schematic diagram of a reference standard image according to an embodiment of the present invention.
- Fig. 5 is a schematic diagram of a sparsely sampled low-dose image according to an embodiment of the present invention.
- Fig. 6 is a schematic diagram of a reconstructed image according to an embodiment of the present invention.
- the low-dose image reconstruction method based on anatomical structure difference priors considers different anatomical structures of the input image, and adds prior information (attributes) of anatomical parts to the network framework in the form of weights.
- the same anatomical part has the same weight, and different anatomical parts have different weights.
- data from multiple locations can be integrated on a unified model framework.
- The Wasserstein Generative Adversarial Network (WGAN) is introduced, and, considering that the low-dose image and the estimated normal-dose image come from the same anatomical part, an attribute loss is proposed to define the distance between the attribute values of the estimated image and the real image. Through these multiple loss constraints, the low-dose image reconstruction method of the present invention obtains clearer images.
- the low-dose image reconstruction method of the embodiment of the present invention includes the following steps:
- step S110 the weights of different parts in the low-dose image are determined according to the prior information of the anatomical structure difference.
- The weights of the different parts are determined by the weight prediction module shown in Fig. 2.
- Each input low-dose image has a corresponding attribute (its anatomical part), which is first one-hot encoded.
- the weight prediction module can generate the weights corresponding to each part according to the input attributes.
- the structural parts referred to herein include, for example, the head, the orbits, the sinuses, the neck, the lungs, the abdomen, the pelvis, the knees, and the lumbar spine.
- For the weight prediction module in Figure 2, those skilled in the art can make appropriate modifications according to the actual application scenario, for example, using more or fewer convolutional layers, using other types of activation functions, or setting more or fewer channels according to the number of parts included in the low-dose images, for example, generating a weight mask with 128 channels.
- Step S120: a generative adversarial network is constructed, in which the generation network uses low-dose images as input to extract features, fuses the weights of different parts in the feature extraction process, and outputs a predicted image.
- The generative adversarial network as a whole includes two parts: a generation network and a discrimination network.
- the generative network includes a feature extraction layer 210, a plurality of cascaded attribute augmentation modules (for example, set to 15), and a reconstruction layer 270.
- Each attribute augmentation module includes a down-sampling layer 220, a ReLU layer 230, an up-sampling layer 240, a feature joint layer 250, and a feature fusion layer 260.
- The attribute augmentation module performs feature extraction through a down-sampling layer 220, a ReLU layer 230, and an up-sampling layer 240, then obtains the part weights from step S110 and multiplies the extracted features by the weights to obtain weighted features.
- A feature joint layer 250 combines the original features with the weighted features, and the final feature fusion layer 260 (such as a convolutional layer) completes the feature fusion.
- In Figure 3, the ⊕ symbol denotes element-wise (point-wise) addition and the ⊗ symbol denotes element-wise multiplication.
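The data flow of one attribute augmentation module can be sketched as follows. This is a simplified numpy sketch under assumptions: the down-sampling/ReLU/up-sampling path is taken to have already produced the feature maps, the 1x1 fusion convolution is written as a per-pixel linear map over channels, and all shapes and parameters are illustrative (the 64-channel width follows Table 1):

```python
import numpy as np

def attribute_augmentation(features, part_weights, fuse_kernel):
    """One attribute augmentation module (simplified).

    features:     (C, H, W) feature maps from the down-/up-sampling path
    part_weights: (C,) per-channel weights from the weight prediction module
    fuse_kernel:  (C, 2*C) weights of a 1x1 fusion convolution
    """
    # Dot multiplication: weight each feature channel by its part weight.
    weighted = features * part_weights[:, None, None]
    # Feature joint layer: concatenate original and weighted features.
    combined = np.concatenate([features, weighted], axis=0)
    # Feature fusion layer: a 1x1 convolution is a per-pixel linear map.
    return np.einsum('oc,chw->ohw', fuse_kernel, combined)

rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 8, 8))
weights = rng.uniform(size=64)
kernel = rng.standard_normal((64, 128)) * 0.05
fused = attribute_augmentation(feats, weights, kernel)
assert fused.shape == (64, 8, 8)
```

Cascading several such modules, as the patent describes, just means feeding each module's fused output into the next.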
- the parameter settings of the attribute augmentation module are as shown in Table 1.
- The input of the generation network constructed by the present invention is a low-dose image, and the input of the weight prediction module is the attribute corresponding to that low-dose image.
- The output of the weight prediction module is the predicted weight of each part; each part's weight is multiplied by the original features extracted by the generation network, and the generation network finally outputs the predicted image.
- In this way, the prior information of anatomical structure differences is applied to low-dose image reconstruction, preserving the characteristics of each part and accentuating the differences between parts, so that the predicted image is closer to the real image.
- the present invention does not limit the number of cascaded attribute augmentation modules.
- Step S130: for the discrimination network in the constructed generative adversarial network, the predicted image and the standard-dose image are used as input, and determining the authenticity of the input image and determining the attribute value of the input image are set as the optimization targets.
- In other words, the discrimination network must determine the attribute value (i.e., the part) of the input image in addition to its authenticity.
- The input to the entire generative adversarial network framework is an image block, whose size is, for example, 64x64.
- The training set and test set include images of multiple parts, such as the skull, orbits, sinuses, neck, lungs, abdomen, pelvis (male), pelvis (female), knees, and lumbar spine.
- the discriminant network includes 7 convolutional layers and 2 fully connected layers. For specific parameter settings, see Table 2 below.
- the input of the discrimination network is the predicted image and the normal dose image obtained by the generation network.
- the output of the discrimination network includes two aspects, namely, the authenticity of the input image and the identification of the attribute value of the input image.
- The goal of the discrimination network is to distinguish the predicted image generated by the generation network from the real image, and to accurately recognize the attributes of the input image.
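The dual output described above can be sketched on top of the discrimination network's final feature vector. In this numpy sketch the seven-layer convolutional trunk is replaced by a random 512-dimensional feature vector, and the head parameters are illustrative assumptions (the 512-kernel last convolutional layer and the 1-unit and 10-unit fully connected heads follow Table 2):

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, NUM_PARTS = 512, 10  # last conv layer width; number of part attributes

# Hypothetical fully connected head parameters.
W_real = rng.standard_normal((FEAT_DIM, 1)) * 0.05          # head 1: authenticity score
W_attr = rng.standard_normal((FEAT_DIM, NUM_PARTS)) * 0.05  # head 2: one logit per part

def discriminator_heads(features: np.ndarray):
    """Return (authenticity score, predicted part index) from shared features."""
    realness = (features @ W_real).item()  # unbounded WGAN-style critic score
    attr_logits = features @ W_attr        # part-attribute logits
    return realness, int(np.argmax(attr_logits))

score, part = discriminator_heads(rng.standard_normal(FEAT_DIM))
assert 0 <= part < NUM_PARTS
```

Sharing one convolutional trunk between the two heads is what lets the authenticity and attribute objectives constrain the same features.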
- Step S140: train the generative adversarial network to obtain the mapping relationship from low-dose images to standard-dose images.
- Given a training data set D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}: x ∈ {x_1, x_2, ..., x_n} are image blocks extracted from low-dose CT images; y ∈ {y_1, y_2, ..., y_n} are image blocks extracted from standard-dose CT images (i.e., normal-dose images); a ∈ {a_1, a_2, ..., a_n} are the corresponding attributes; and n is the number of training samples.
- The generation network implements a mapping G.
- The parameters of the mapping G can be obtained by minimizing the mean-square-error objective min_Θ (1/n) Σ_i ||G(x_i; a_i; Θ) − y_i||², where:
- Θ represents the network parameters (such as weights and biases).
- The adversarial loss function is introduced to optimize the model and improve the accuracy of identifying the authenticity of the input image.
- In WGAN form with a gradient penalty, the adversarial loss can be expressed as L_adv = E_y[D_d(y)] − E_x[D_d(G(x; a; Θ))] + λ·E[(||∇D_d||_2 − 1)²], where:
- E represents the expectation calculation
- λ represents the balance factor that balances the adversarial loss and the gradient penalty, set for example to 10
- D_d represents the process of judging the authenticity of the input image.
- The attribute loss is introduced to define the attribute distance between the estimated image and the original image.
- The attribute loss is expressed as L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a), where:
- E represents the expectation calculation
- D_a represents the attribute discrimination process
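Read literally, the attribute loss above (its formula is given verbatim in claim 6) can be computed as below. Note that the claim states a plain difference D_a(·) − a rather than a squared or absolute distance, so treat this as a sketch of the claimed formula, not a vetted training objective:

```python
import numpy as np

def attribute_loss(attr_on_real, attr_on_generated, a):
    """L_Attribute = E_x(D_a(x) - a) + E_y(D_a(G(y; a; Θ)) - a), per claim 6.

    attr_on_real:      D_a outputs on real (standard-dose) image blocks
    attr_on_generated: D_a outputs on blocks produced by the generation network
    a:                 ground-truth attribute values of the corresponding parts
    """
    attr_on_real = np.asarray(attr_on_real, dtype=float)
    attr_on_generated = np.asarray(attr_on_generated, dtype=float)
    a = np.asarray(a, dtype=float)
    # The expectations E_x and E_y become means over the batch.
    return float(np.mean(attr_on_real - a) + np.mean(attr_on_generated - a))

# A perfect attribute discriminator yields zero loss.
assert attribute_loss([3.0, 7.0], [3.0, 7.0], [3.0, 7.0]) == 0.0
```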
- Standard optimizers can be used: for example, the Adam optimizer for the supervised pre-training of the generation network, and the SGD (stochastic gradient descent) optimizer for the generative adversarial model.
- During training, image block pairs and the corresponding attribute values are extracted from the standard-dose and low-dose CT image data sets as the overall network input.
- After training, the mapping relationship G from low-dose images to standard-dose images is obtained, and a new low-dose image can be reconstructed using this mapping, yielding a clear image closer to the real image.
- The present invention also provides a low-dose image reconstruction system based on the anatomical structure difference prior that realizes one or more aspects of the above method.
- The system includes: a weight prediction module, used to determine the weights of different parts in a low-dose image based on the prior information of anatomical structure differences; a network construction and training module, used to construct a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs predicted images, and to construct a discrimination network that takes the predicted images and standard-dose images as input, with distinguishing the authenticity of the predicted image and the standard-dose image as the first optimization target and identifying the different parts of the predicted image as the second optimization target, the generation network and the discrimination network being jointly trained to obtain the mapping relationship between low-dose images and standard-dose images; and an image reconstruction module, used to perform low-dose image reconstruction with the obtained mapping relationship.
- Each module in the system provided by the present invention can be implemented by a processor or a logic circuit.
- After appropriate adaptation, the present invention can also be applied to PET (positron emission tomography) or SPECT (single-photon emission computed tomography) image reconstruction, or to other image reconstruction based on sparse projection sampling.
- Figure 4 is a reference standard image
- Figure 5 is a sparsely sampled low-dose image
- Figure 6 is a reconstructed image or restored image.
- In summary, the present invention converts attribute values into weight masks through the weight prediction module, and completes the fusion of original image features and attribute features by placing attribute augmentation modules in the generation network; because the original low-dose image and the estimated image share the same attribute value, an attribute loss is defined, strengthening the constraints on the generative adversarial network and yielding more accurate, high-definition images.
- the present invention may be a system, a method and/or a computer program product.
- The computer program product may include a non-transitory computer-readable storage medium loaded with computer-readable program instructions that enable a processor to implement various aspects of the present invention.
- The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction execution device.
- The computer-readable storage medium may include, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, and mechanical encoding devices with instructions stored thereon, such as punch cards.
Description
Table 1. Parameter settings of the attribute augmentation module

| Unit | Operation | Parameters |
| --- | --- | --- |
| Down-sampling layer | Convolution | 3x3x64 |
| Up-sampling layer | Deconvolution | 3x3x64 |
| Feature fusion layer | Convolution | 1x1x64 |
Table 2. Parameter settings of the discrimination network

| Unit | Convolution stride | Kernels |
| --- | --- | --- |
| Convolutional layer 1 | 2 | 64 |
| Convolutional layer 2 | 1 | 128 |
| Convolutional layer 3 | 2 | 128 |
| Convolutional layer 4 | 1 | 256 |
| Convolutional layer 5 | 2 | 256 |
| Convolutional layer 6 | 1 | 512 |
| Convolutional layer 7 | 2 | 512 |
| Fully connected layer 1 | - | 1 |
| Fully connected layer 2 | - | 10 |
Claims (10)
- A low-dose image reconstruction method based on an anatomical structure difference prior, comprising the following steps: determining the weights of different parts in a low-dose image according to prior information on anatomical structure differences; constructing a generation network that takes the low-dose image as input to extract features, fuses the weights of the different parts during feature extraction, and outputs a predicted image; constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image and the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target; jointly training the generation network and the discrimination network to obtain the mapping relationship between low-dose images and standard-dose images; and performing low-dose image reconstruction using the obtained mapping relationship.
- The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein determining the weights of different parts in the low-dose image according to the prior information of the anatomical structure difference comprises the following sub-steps: constructing a weight prediction module comprising multiple convolutional layers and a Sigmoid activation function; one-hot encoding the different parts of the low-dose image, inputting the codes sequentially to the multiple convolutional layers, and then generating the weights of the different parts with the Sigmoid activation function.
- The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein the generation network comprises a plurality of cascaded attribute augmentation modules for multiplying the features extracted from the input low-dose image by the weights of the different parts to obtain weighted features and fusing the extracted features with the weighted features, each attribute augmentation module comprising, in sequence, a down-sampling layer, a ReLU layer, an up-sampling layer, a feature joint layer, and a feature fusion layer.
- The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 1, wherein the discrimination network comprises multiple convolutional layers and two fully connected layers.
- The low-dose image reconstruction method based on an anatomical structure difference prior according to claim 5, wherein the loss function of the second optimization target is set as L_Attribute = E_x(D_a(x) − a) + E_y(D_a(G(y; a; Θ)) − a), where E denotes the expectation calculation and D_a denotes the process of discriminating the part attribute.
- A low-dose image reconstruction system based on an anatomical structure difference prior, comprising: a weight prediction module for determining the weights of different parts in a low-dose image according to prior information on anatomical structure differences; a network construction and training module for constructing a generation network that takes low-dose images as input to extract features, fuses the weights of the different parts during feature extraction, and outputs predicted images, and for constructing a discrimination network that takes the predicted image and a standard-dose image as input, with distinguishing the authenticity of the predicted image and the standard-dose image as a first optimization target and identifying the different parts of the predicted image as a second optimization target, the generation network and the discrimination network being jointly trained to obtain the mapping relationship between low-dose images and standard-dose images; and an image reconstruction module for performing low-dose image reconstruction using the obtained mapping relationship.
- A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
- A computer device comprising a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/126411 WO2021120069A1 (zh) | 2019-12-18 | 2019-12-18 | 基于解剖结构差异先验的低剂量图像重建方法和系统 |
US16/878,633 US11514621B2 (en) | 2019-12-18 | 2020-05-20 | Low-dose image reconstruction method and system based on prior anatomical structure difference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/126411 WO2021120069A1 (zh) | 2019-12-18 | 2019-12-18 | 基于解剖结构差异先验的低剂量图像重建方法和系统 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/878,633 Continuation-In-Part US11514621B2 (en) | 2019-12-18 | 2020-05-20 | Low-dose image reconstruction method and system based on prior anatomical structure difference |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021120069A1 true WO2021120069A1 (zh) | 2021-06-24 |
Family
ID=76438638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/126411 WO2021120069A1 (zh) | 2019-12-18 | 2019-12-18 | 基于解剖结构差异先验的低剂量图像重建方法和系统 |
Country Status (2)
Country | Link |
---|---|
US (1) | US11514621B2 (zh) |
WO (1) | WO2021120069A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11806175B2 (en) * | 2019-09-12 | 2023-11-07 | Rensselaer Polytechnic Institute | Few-view CT image reconstruction system |
CN113506353A (zh) * | 2021-07-22 | 2021-10-15 | 深圳高性能医疗器械国家研究院有限公司 | An image processing method and system, and application thereof |
CN115393534B (zh) * | 2022-10-31 | 2023-01-20 | 深圳市宝润科技有限公司 | Deep-learning-based cone-beam three-dimensional DR reconstruction method and system |
CN117611731B (zh) * | 2023-10-09 | 2024-05-28 | Sichuan University | GAN-based craniofacial restoration method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156974A (zh) * | 2011-04-22 | 2011-08-17 | Zhejiang University | Dynamic PET concentration reconstruction method based on H∞ filtering under anatomical information constraints |
US20150342549A1 (en) * | 2014-05-29 | 2015-12-03 | Samsung Electronics Co., Ltd. | X-ray imaging apparatus and control method for the same |
CN108693491A (zh) * | 2017-04-07 | 2018-10-23 | Cornell University | Robust quantitative susceptibility mapping system and method |
CN110084794A (zh) * | 2019-04-22 | 2019-08-02 | South China University of Technology | Skin cancer image recognition method based on an attention convolutional neural network |
CN110288671A (zh) * | 2019-06-25 | 2019-09-27 | Nanjing University of Posts and Telecommunications | Low-dose CBCT image reconstruction method based on a three-dimensional generative adversarial network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11126914B2 (en) * | 2017-10-11 | 2021-09-21 | General Electric Company | Image generation using machine learning |
-
2019
- 2019-12-18 WO PCT/CN2019/126411 patent/WO2021120069A1/zh active Application Filing
-
2020
- 2020-05-20 US US16/878,633 patent/US11514621B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US11514621B2 (en) | 2022-11-29 |
US20210192806A1 (en) | 2021-06-24 |
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19956494; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19956494; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.01.2023) |