CN113436708B - Delayed CT image generation method based on deep learning algorithm


Info

Publication number
CN113436708B
CN113436708B
Authority
CN
China
Prior art keywords
image
deformation
resolution
delayed
t1ct
Prior art date
Legal status
Active
Application number
CN202110830115.3A
Other languages
Chinese (zh)
Other versions
CN113436708A (en)
Inventor
杨勇
翟明威
孙芳芳
柯常杰
俞宸浩
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202110830115.3A
Publication of CN113436708A
Application granted
Publication of CN113436708B
Legal status: Active

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of medical images, and particularly relates to a delayed CT (T2CT) image generation method based on a deep learning algorithm. The method comprises the following steps: S1, acquiring T2PET, T1PET and T1CT images of a patient; S2, inputting the acquired T2PET, T1PET and T1CT images into the proposed multi-resolution registration convolutional neural network and outputting three deformation fields containing large, medium and small deformation amounts; S3, fusing the three deformation fields output in step S2 into one deformation field; and S4, inputting the fused deformation field and the input T1CT image into a spatial transformer network to generate a T2CT image. By generating a T2CT image, the invention makes it possible to perform attenuation correction in delayed PET scans while avoiding an additional CT scan, thereby reducing the X-ray radiation dose received by the patient.

Description

Delayed CT image generation method based on deep learning algorithm
Technical Field
The invention belongs to the technical field of medical images, and particularly relates to a delayed CT image generation method based on a deep learning algorithm.
Background
Positron emission tomography/computed tomography (PET/CT) systems provide critical information for radiation treatment planning and can assist decision making in tumor diagnosis, prognosis and staging. PET is a noninvasive diagnostic tool that provides information about metabolism and function, while CT is a tomographic technique that provides high-spatial-resolution anatomical structure, including lesions.
Before a PET scan, the patient is injected with an imaging agent, such as a positron-emitting radionuclide, and is then advanced into the detector ring. The radionuclide decays and emits positrons, which annihilate with electrons in the tissue to produce pairs of 511 keV gamma photons flying out in opposite directions. The closed multi-ring detector records each pair of oppositely travelling photons as a projection line; the projection lines are amplified by the electronic front end to form raw sinogram data, which are transmitted to a computer system to reconstruct the PET image. In PET imaging, some gamma photons are absorbed or lose energy in the body and are therefore not detected, a phenomenon called attenuation. Because the number of detected gamma photons is lower than the actual number, image quality is degraded.
The attenuation of gamma rays passing through human tissue can severely affect the accuracy and quality of PET images, so attenuation correction is an important component of PET reconstruction. Accurate PET images require either CT images or CT estimates derived from MR to correct for the loss of annihilation photons, but MR sequence acquisition takes a long time. Therefore, attenuation correction is carried out here by generating a CT image through a registration method.
Delayed scanning means that, some time after the first PET/CT scan (T1PET/T1CT), the selected bed position is scanned again in order to make a more accurate diagnosis of the disease. During a delayed PET scan (T2PET), no additional imaging agent needs to be injected, so the patient does not receive an additional radiation dose. A delayed CT scan (T2CT), however, increases the overall X-ray radiation to the patient. Accurate PET images require an attenuation correction map derived from a CT image during reconstruction.
Therefore, it is necessary to design a delayed CT image generation method based on a deep learning algorithm that takes full advantage of the existing T2PET, T1PET and T1CT images to generate a T2CT image, thereby reducing the X-ray radiation dose received by the patient and solving the problem of performing attenuation correction in T2PET scans.
For example, Chinese patent application No. CN202010125698.5 describes a CT image generation method for PET image attenuation correction, which acquires a CT image and a PET image at time T1 and a PET image at time T2, inputs them into a trained neural network, and obtains a CT image at time T2 that can be used for attenuation correction of the PET image, thereby producing a more accurate PET image. Although that method reduces the X-ray dose to the patient during image acquisition and relieves physical and psychological stress, its drawback is that the accuracy of the generated CT image is insufficient for clinical requirements.
Disclosure of Invention
To overcome the drawback of the prior art that the X-ray radiation received by the patient during delayed PET/CT scanning cannot be reduced, the invention provides a delayed CT image generation method based on a deep learning algorithm. By generating a T2CT image, the method makes it possible to perform attenuation correction in delayed PET/CT scanning while reducing the X-ray radiation dose that the patient would receive from an additional T2CT scan.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a delayed CT image generation method based on a deep learning algorithm comprises the following steps:
s1, acquiring T2PET, T1PET and T1CT images of a patient;
wherein, the T2PET image refers to an image generated by delayed PET scanning, the T1PET image refers to an image generated by first PET scanning, and the T1CT image refers to an image generated by first CT scanning;
s2, inputting the acquired T2PET, T1PET and T1CT images into a multi-resolution registration convolution neural network MRR-CNN, and outputting three deformation fields containing large, medium and small deformation quantities;
s3, fusing the three deformation fields containing the large, medium and small deformation quantities output in the step S2 into one deformation field;
s4, inputting the deformation field and the input T1CT image into a Space Transformation Network (STN) to generate a T2CT image;
wherein, the T2CT image refers to a delayed CT scanning image.
Preferably, the multi-resolution registration convolutional neural network MRR-CNN comprises three convolutional neural networks CNN1, CNN2 and CNN3 connected in parallel; CNN1, CNN2 and CNN3 take as input a low-resolution image group, a medium-resolution image group and an original-resolution image group, respectively.
Preferably, step S2 includes the steps of:
s21, performing down-sampling on input images T1PET, T2PET and T1CT for two times by adopting a convolutional neural network CNN1 to ensure that the resolution of the input images is changed to 1/4 of the original resolution, outputting a deformation field containing large deformation and up-sampling to the original resolution of the input images;
s22, performing down-sampling on the input images T1PET, T2PET and T1CT by adopting a convolutional neural network CNN2 to ensure that the resolution of the input images is 1/2 of the original resolution, outputting a deformation field containing medium deformation and up-sampling to the original resolution of the input images;
s23, inputting original images T1PET, T2PET and T1CT by adopting a convolutional neural network CNN3, and outputting a deformation field containing small deformation quantity.
Preferably, the convolutional neural networks CNN1, CNN2 and CNN3 each comprise an encoder, a decoder and several skip connections; the encoder is used to extract features from the inputs and progressively halves the feature maps during down-sampling.
Preferably, the encoder comprises a number of stacked convolutional layers; the kernel size of each convolutional layer is 3 × 3 × 3, and the stride is 2.
Preferably, the decoder comprises a plurality of deconvolution layers; the kernel size of each deconvolution layer is 3 × 3 × 3, and the stride is 1.
Preferably, the deformation field φ^(t) is defined by the ordinary differential equation
∂φ^(t)/∂t = v_t(φ^(t)), with φ^(0) = Id,
wherein Id is the identity transformation and v_t denotes the velocity field v at time t ∈ [0, 1]; the velocity field v is integrated over unit time using T = 7 time steps to generate the final deformation field φ^(1) = exp(v).
Preferably, each convolutional or deconvolutional layer is followed by a normalization layer and an activation function.
Preferably, the multi-resolution registration convolutional neural network MRR-CNN comprises three convolutional neural networks CNN, but is not limited to three.
Compared with the prior art, the invention has the following beneficial effects: (1) by generating the T2CT image, the invention can reduce the X-ray radiation dose received by the patient, since no additional CT scan is needed; (2) the invention can generate a delayed CT (T2CT) image during a delayed scan so that attenuation correction can be performed on the T2PET image.
Drawings
FIG. 1 is a flowchart of a delayed CT image generation method based on a deep learning algorithm according to the present invention;
FIG. 2 is a schematic structural diagram of the convolutional neural networks CNN1, CNN2 and CNN3 in the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention, the embodiments are described below with reference to the accompanying drawings. It is obvious that the drawings in the following description show only some examples of the invention, and that a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
Example 1:
FIG. 1 shows a delayed CT image generation method based on a deep learning algorithm, which includes the following steps:
s1, acquiring T2PET, T1PET and T1CT images of a patient;
wherein, the T2PET image refers to an image generated by delayed PET scanning, the T1PET image refers to an image generated by first PET scanning, and the T1CT image refers to an image generated by first CT scanning;
s2, inputting the acquired T2PET, T1PET and T1CT images into a multi-resolution registration convolutional neural network (MRR-CNN), and outputting three deformation fields containing large, medium and small deformation quantities;
s3, fusing the three deformation fields containing the large, medium and small deformation quantities output in the step S2 into one deformation field;
s4, inputting the deformation field and the input T1CT image into a Space Transformation Network (STN) to generate a T2CT image;
wherein, the T2CT image refers to a delayed CT scanning image.
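The warping in step S4 can be illustrated with a short PyTorch sketch. The patent only states that the fused deformation field and the T1CT image are fed into a spatial transformer network; the identity-grid construction, the voxel-unit displacement convention, the use of grid_sample and the names spatial_transform, t1ct_volume and fused_field below are assumptions of this sketch rather than the patented implementation.

```python
# Minimal STN-style warping sketch (assumptions noted in the text above).
import torch
import torch.nn.functional as F

def spatial_transform(vol: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp vol (N, C, D, H, W) by a displacement field flow (N, 3, D, H, W) given in voxels."""
    _, _, d, h, w = vol.shape
    # Identity sampling grid in voxel coordinates, channel order (z, y, x).
    zz, yy, xx = torch.meshgrid(torch.arange(d), torch.arange(h), torch.arange(w),
                                indexing="ij")
    grid = torch.stack((zz, yy, xx)).float().unsqueeze(0).to(vol.device)
    new_locs = grid + flow                      # displaced voxel coordinates
    # Normalize each axis to [-1, 1] as grid_sample expects.
    for i, size in enumerate((d, h, w)):
        new_locs[:, i] = 2.0 * new_locs[:, i] / (size - 1) - 1.0
    # grid_sample wants the grid as (N, D, H, W, 3) with the last axis ordered (x, y, z).
    new_locs = new_locs.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(vol, new_locs, mode="bilinear", align_corners=True)

# Hypothetical usage: t2ct_estimate = spatial_transform(t1ct_volume, fused_field)
```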
Further, the multi-resolution registration convolutional neural network MRR-CNN includes three convolutional neural networks CNN1, CNN2 and CNN3 connected in parallel; CNN1, CNN2 and CNN3 take as input the low-resolution image group, the medium-resolution image group and the original-resolution image group, respectively.
The deformation field containing a large deformation amount is generated from the low-resolution images obtained by down-sampling twice; the low-resolution image group is used to capture large deformation information between images. The deformation field containing a medium deformation amount is generated from the medium-resolution images obtained by down-sampling once; the medium-resolution image group is used to capture medium deformation information between images. The deformation field containing a small deformation amount is generated from the original-resolution image group, which is used to capture small deformation information between images. The three deformation fields containing large, medium and small deformation amounts are then fused into one deformation field; the deformation information is accumulated step by step, yielding a deformation field containing multi-scale deformation amounts that is used to accurately generate the T2CT image for attenuation correction of the T2PET image. Each image group contains the T2PET, T1PET and T1CT images of the same patient.
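As a hedged illustration of how the three image groups described above might be prepared, the sketch below concatenates the three volumes of one patient and down-samples them by trilinear interpolation; the patent does not name the down-sampling operator or the exact point in the pipeline where the down-sampling is applied, so both choices and the function name build_image_groups are assumptions.

```python
# Sketch of building the low-, medium- and original-resolution image groups.
import torch
import torch.nn.functional as F

def build_image_groups(t2pet: torch.Tensor, t1pet: torch.Tensor, t1ct: torch.Tensor):
    """Each argument is a (N, 1, D, H, W) volume of the same patient."""
    full = torch.cat((t2pet, t1pet, t1ct), dim=1)                    # original-resolution group
    half = F.interpolate(full, scale_factor=0.5,
                         mode="trilinear", align_corners=False)      # medium-resolution group (1/2)
    quarter = F.interpolate(half, scale_factor=0.5,
                            mode="trilinear", align_corners=False)   # low-resolution group (1/4)
    return quarter, half, full   # fed to CNN1, CNN2 and CNN3, respectively
```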
The step S2 includes the steps of:
s21, performing down-sampling on input images T1PET, T2PET and T1CT for two times by adopting a convolutional neural network CNN1 to ensure that the resolution of the input images is changed to 1/4 of the original resolution, outputting a deformation field containing large deformation and up-sampling to the original resolution of the input images;
s22, performing down-sampling on the input images T1PET, T2PET and T1CT by adopting a convolutional neural network CNN2 to ensure that the resolution of the input images is 1/2 of the original resolution, outputting a deformation field containing medium deformation and up-sampling to the original resolution of the input images;
s23, inputting original images T1PET, T2PET and T1CT by adopting a convolutional neural network CNN3, and outputting a deformation field containing small deformation quantity.
Step S21 focuses on computing a large deformation amount from the low-resolution images; step S22 focuses on computing a medium deformation amount from the medium-resolution images; and step S23 focuses on computing a small deformation amount from the original images.
The multi-resolution registration convolutional neural network MRR-CNN first computes the large deformation amount and then adds more and more deformation information, so that the deformation information increases step by step and a deformation field containing multi-scale deformation amounts is obtained. The fused deformation field and the input T1CT image are then fed into a spatial transformer network (STN) to generate the T2CT image.
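The three-branch forward pass and the fusion of step S3 could then look roughly as follows. The patent states only that the three deformation fields are fused into one; fusing by element-wise summation of the up-sampled fields, scaling the coarse displacements by their up-sampling factor so they stay in voxel units, and the branch modules cnn1, cnn2 and cnn3 (one possible architecture is sketched further below) are assumptions of this sketch.

```python
# Sketch of the MRR-CNN: three parallel registration CNNs and deformation-field fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MRRCNN(nn.Module):
    def __init__(self, cnn1: nn.Module, cnn2: nn.Module, cnn3: nn.Module):
        super().__init__()
        self.cnn1, self.cnn2, self.cnn3 = cnn1, cnn2, cnn3   # parallel branches

    def forward(self, low: torch.Tensor, mid: torch.Tensor, full: torch.Tensor) -> torch.Tensor:
        # Each branch predicts a 3-channel displacement field at its own resolution.
        flow_large = self.cnn1(low)     # large deformations, 1/4 resolution
        flow_medium = self.cnn2(mid)    # medium deformations, 1/2 resolution
        flow_small = self.cnn3(full)    # small deformations, original resolution
        size = full.shape[2:]
        # Up-sample the coarse fields to the original resolution; displacements are
        # multiplied by the up-sampling factor so they remain expressed in voxels (assumption).
        flow_large = 4.0 * F.interpolate(flow_large, size=size,
                                         mode="trilinear", align_corners=False)
        flow_medium = 2.0 * F.interpolate(flow_medium, size=size,
                                          mode="trilinear", align_corners=False)
        # Fuse the three fields into a single multi-scale deformation field (assumed: summation).
        return flow_large + flow_medium + flow_small
```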
Further, as shown in FIG. 2, the convolutional neural networks CNN1, CNN2 and CNN3 each include an encoder, a decoder and several skip connections; the encoder is used to extract features from the inputs and halves the feature maps step by step during down-sampling.
Further, the encoder includes a plurality of stacked convolutional layers; the kernel size of each convolutional layer is 3 × 3 × 3, and the stride is 2.
Further, the decoder includes a plurality of deconvolution layers; the kernel size of each deconvolution layer is 3 × 3 × 3, and the stride is 1.
For the first four deconvolution layers in the decoder, skip connections from the corresponding convolutional layers are added to enhance the robustness of the output. In the decoder, feature information is preserved by combining up-sampled decoder features with the features of the corresponding encoder stage. At the end of the decoder, three additional deconvolution layers are added to better preserve feature information. In addition, each convolutional or deconvolutional layer is followed by a normalization layer and an activation function.
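One registration branch (CNN1, CNN2 or CNN3) consistent with the layout described above could be sketched as follows; the channel widths, the trilinear up-sampling between decoder stages, InstanceNorm and LeakyReLU are assumptions, since the patent specifies only the 3 × 3 × 3 kernels, the strides, the skip connections to the first four deconvolution layers and the three additional deconvolution layers at the end.

```python
# Sketch of a single registration branch: 3D encoder-decoder with skip connections.
# Assumes input volume dimensions divisible by 16 so the skip connections line up.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                         nn.InstanceNorm3d(out_ch), nn.LeakyReLU(0.2, inplace=True))

def deconv_block(in_ch, out_ch):
    return nn.Sequential(nn.ConvTranspose3d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                         nn.InstanceNorm3d(out_ch), nn.LeakyReLU(0.2, inplace=True))

class RegistrationCNN(nn.Module):
    def __init__(self, in_ch: int = 3, width: int = 16):
        super().__init__()
        chs = [width, 2 * width, 4 * width, 8 * width]
        # Encoder: stacked 3x3x3 convolutions with stride 2 (feature maps halved each step).
        self.encoder = nn.ModuleList(
            [conv_block(in_ch if i == 0 else chs[i - 1], chs[i]) for i in range(4)])
        # Decoder: four 3x3x3 stride-1 deconvolutions, each fed with up-sampled features
        # concatenated with the skip connection from the corresponding encoder stage.
        self.decoder = nn.ModuleList([deconv_block(chs[3] + chs[2], chs[2]),
                                      deconv_block(chs[2] + chs[1], chs[1]),
                                      deconv_block(chs[1] + chs[0], chs[0]),
                                      deconv_block(chs[0] + in_ch, width)])
        # Three additional deconvolution layers at the end of the decoder.
        self.refine = nn.Sequential(deconv_block(width, width),
                                    deconv_block(width, width),
                                    deconv_block(width, width))
        self.flow = nn.Conv3d(width, 3, kernel_size=3, padding=1)  # 3-channel field head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips, feat = [x], x
        for enc in self.encoder:
            feat = enc(feat)
            skips.append(feat)
        for i, dec in enumerate(self.decoder):
            feat = F.interpolate(feat, scale_factor=2, mode="trilinear", align_corners=False)
            feat = dec(torch.cat((feat, skips[-(i + 2)]), dim=1))
        return self.flow(self.refine(feat))
```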
In addition, conventional networks usually ignore desirable diffeomorphic properties such as topology preservation and invertible mapping, which leads to folding of the deformation field and reduces the accuracy of the generated image. In order to realize a diffeomorphic deformation model and improve registration accuracy, a diffeomorphic integration layer is implemented in the network. Further, the deformation field φ^(t) is defined by the ordinary differential equation
∂φ^(t)/∂t = v_t(φ^(t)), with φ^(0) = Id,
wherein Id is the identity transformation and v_t denotes the velocity field v at time t ∈ [0, 1]; the velocity field v is integrated over unit time using T = 7 time steps to generate the final deformation field φ^(1) = exp(v).
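The integration layer can be sketched as a scaling-and-squaring loop: the velocity field is divided by 2^T and the resulting small deformation is composed with itself T = 7 times. Scaling and squaring is the usual way such a layer is implemented, but the patent states only that v is integrated over unit time with time step T = 7, so this concrete scheme is an assumption; the sketch reuses the spatial_transform function from the warping sketch above.

```python
# Sketch of the diffeomorphic integration layer (scaling and squaring, T = 7 steps).
import torch

def integrate_velocity(v: torch.Tensor, steps: int = 7) -> torch.Tensor:
    """v: (N, 3, D, H, W) stationary velocity field; returns the displacement of phi^(1) = exp(v)."""
    flow = v / (2 ** steps)                 # phi^(1/2^T) is approximated by Id + v / 2^T
    for _ in range(steps):
        # Squaring step: the displacement of (Id + u) composed with (Id + u) is u plus u warped by u.
        flow = flow + spatial_transform(flow, flow)
    return flow
```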
Further, the multi-resolution registration convolutional neural network MRR-CNN in the present invention includes three convolutional neural networks CNN, but is not limited to three; two or four convolutional neural networks may also be used.
The invention provides a multi-resolution registration convolutional neural network (MRR-CNN) model that makes full use of the existing T2PET, T1PET and T1CT images to generate a T2CT image. By generating the T2CT image, the invention can perform attenuation correction for T2PET while avoiding the additional CT scan that would otherwise increase the X-ray radiation dose received by the patient.
The foregoing has outlined rather broadly the preferred embodiments and principles of the present invention and it will be appreciated that those skilled in the art may devise variations of the present invention that are within the spirit and scope of the appended claims.

Claims (7)

1. A delayed CT image generation method based on a deep learning algorithm is characterized by comprising the following steps:
s1, acquiring T2PET, T1PET and T1CT images of a patient;
the T2PET image refers to an image generated by delayed PET scanning, the T1PET image refers to an image generated by first PET scanning, and the T1CT image refers to an image generated by first CT scanning;
s2, inputting the acquired T2PET, T1PET and T1CT images into a multi-resolution registration convolution neural network MRR-CNN, and outputting three deformation fields containing large, medium and small deformation quantities;
s3, fusing the three deformation fields containing the large, medium and small deformation quantities output in the step S2 into one deformation field;
s4, inputting the deformation field and the input T1CT image into a Space Transformation Network (STN) to generate a T2CT image;
wherein, the T2CT image refers to an image generated by delayed CT scanning;
the multi-resolution registration convolutional neural network MRR-CNN comprises three convolutional neural networks CNN1, CNN2 and CNN3 which are connected in parallel; the CNN1, the CNN2 and the CNN3 are respectively input into a low-resolution image group, a medium-resolution image group and an original-resolution image group;
the step S2 includes the steps of:
s21, performing down-sampling on input images T1PET, T2PET and T1CT for two times by adopting a convolutional neural network CNN1 to ensure that the resolution of the input images is changed to 1/4 of the original resolution, outputting a deformation field containing large deformation and up-sampling to the original resolution of the input images;
s22, performing down-sampling on the input images T1PET, T2PET and T1CT by adopting a convolutional neural network CNN2 to ensure that the resolution of the input images is 1/2 of the original resolution, outputting a deformation field containing medium deformation and up-sampling to the original resolution of the input images;
s23, inputting original images T1PET, T2PET and T1CT by adopting a convolutional neural network CNN3, and outputting a deformation field containing small deformation.
2. The delayed CT image generation method based on a deep learning algorithm as claimed in claim 1, wherein the convolutional neural networks CNN1, CNN2 and CNN3 each comprise an encoder, a decoder and several skip connections; the encoder is used to extract features from the inputs and progressively halves the feature maps during down-sampling.
3. The delayed CT image generation method based on the deep learning algorithm of claim 2, wherein the encoder comprises a plurality of stacked convolution layers; the kernel size of each convolutional layer is 3 × 3 × 3, and the stride is 2.
4. The delayed CT image generation method based on a deep learning algorithm as claimed in claim 2, wherein the decoder comprises several deconvolution layers; the kernel size of each deconvolution layer is 3 × 3 × 3, and the stride is 1.
5. The method of claim 1, wherein the deformation field φ^(t) is defined by the ordinary differential equation
∂φ^(t)/∂t = v_t(φ^(t)), with φ^(0) = Id,
wherein Id is the identity transformation and v_t denotes the velocity field v at time t ∈ [0, 1]; the velocity field v is integrated over unit time using T = 7 time steps to generate the final deformation field φ^(1) = exp(v).
6. The method of claim 3 or 4, wherein each convolution layer or deconvolution layer is followed by a normalization and activation function.
7. The delayed CT image generation method based on deep learning algorithm as claimed in claim 1, wherein the multi-resolution registration convolutional neural network MRR-CNN comprises three convolutional neural networks CNN, but is not limited to three convolutional neural networks CNN.
CN202110830115.3A 2021-07-22 2021-07-22 Delayed CT image generation method based on deep learning algorithm Active CN113436708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110830115.3A CN113436708B (en) 2021-07-22 2021-07-22 Delayed CT image generation method based on deep learning algorithm


Publications (2)

Publication Number Publication Date
CN113436708A CN113436708A (en) 2021-09-24
CN113436708B (en) 2022-10-25

Family

ID=77761356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110830115.3A Active CN113436708B (en) 2021-07-22 2021-07-22 Delayed CT image generation method based on deep learning algorithm

Country Status (1)

Country Link
CN (1) CN113436708B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11529108B2 (en) * 2018-11-30 2022-12-20 Washington University Methods and apparatus for improving the image resolution and sensitivity of whole-body positron emission tomography (PET) imaging
US11324472B2 (en) * 2019-08-26 2022-05-10 Siemens Medical Solutions Usa, Inc. Energy-based scatter correction for PET sinograms

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537393A (en) * 2015-01-04 2015-04-22 大连理工大学 Traffic sign recognizing method based on multi-resolution convolution neural networks
CN107456236A (en) * 2017-07-11 2017-12-12 沈阳东软医疗系统有限公司 A kind of data processing method and medical scanning system
CN109272443A (en) * 2018-09-30 2019-01-25 东北大学 A kind of PET based on full convolutional neural networks and CT method for registering images
CN111436958A (en) * 2020-02-27 2020-07-24 之江实验室 CT image generation method for PET image attenuation correction
CN112070809A (en) * 2020-07-22 2020-12-11 中国科学院苏州生物医学工程技术研究所 Accurate diagnosis system of pancreatic cancer based on two time formation of image of PET/CT
CN112419173A (en) * 2020-11-04 2021-02-26 深圳先进技术研究院 Deep learning framework and method for generating CT image from PET image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A novel supervised learning method to generate CT images for attenuation correction in delayed PET scans; Fan Rao et al.; Computer Methods and Programs in Biomedicine; 2020-12-31; full text *
Study of low-dose PET image recovery using; Kui Zhao et al.; PLOS ONE; 2020-09-04; full text *
Research on lung 4D-CT image registration based on a deep learning framework; Hu Runyue; China Master's Theses Full-text Database; 2021-01-15; full text *

Also Published As

Publication number Publication date
CN113436708A (en) 2021-09-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant