CN112991477B - PET image processing method based on deep learning - Google Patents


Info

Publication number: CN112991477B (application number CN202110116659.3A; other version CN112991477A)
Authority: CN (China)
Prior art keywords: image, deep learning, pet, deconvolution, dose
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 王鑫辉, 叶宏伟
Current and original assignee: Minfound Medical Systems Co Ltd
Application filed by Minfound Medical Systems Co Ltd; priority to CN202110116659.3A

Classifications

    • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 5/20: Image enhancement or restoration by the use of local operators
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/20024: Filtering details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2210/41: Medical
    • Y02T 10/40: Engine management systems

Abstract

The invention provides a PET image processing method based on deep learning, in the technical field of medical image processing and evaluation. It adopts a multi-input deep-learning PET image fusion algorithm that fuses information from several unfiltered and filtered PET images. The algorithm was evaluated using a low-dose IEC phantom and low-dose body patient data and compared with conventional unfiltered and filtered images. PET images processed by the deep-learning image fusion algorithm show less noise, higher contrast, and retained detail, demonstrating the algorithm's potential value for clinical low-dose PET imaging.

Description

PET image processing method based on deep learning
Technical Field
The invention relates to a PET image processing method based on deep learning, and belongs to the technical field of medical image processing and evaluation.
Background
PET (Positron Emission Tomography) is one of the most advanced large-scale medical diagnostic imaging technologies today. By injecting tracers containing radionuclides and observing molecular-level activity within tissues, PET imaging is widely used in oncology, cardiology, and neurology. However, because of the system's limited resolution and inherent noise, PET imaging is greatly limited in noise level, image resolution, and preservation of image detail.
Existing techniques for improving PET image quality include conventional iterative reconstruction with filtering post-processing, and deep-learning post-processing with a single input image. Reference [1] (R. M. Leahy and J. Qi, "Statistical approaches in quantitative positron emission tomography," Statistics and Computing, vol. 10, pp. 147-165, 2000) shows that conventional iterative reconstruction (maximum-likelihood expectation maximization) reduces image bias as the number of iterations increases, but noise grows significantly. To reduce the noise of high-iteration images, reference [2] (J. Dutta, R. M. Leahy, and Q. Li, "Non-local means denoising of dynamic PET images," PLoS ONE, vol. 8, no. 12, e81390, 2013) discloses a post-filtering method, but filtering may smooth and blur important image features (such as the boundaries of organs and lesions), increasing bias and reducing contrast. Reference [3] (P. J. Green, "Bayesian reconstructions from emission tomography data using a modified EM algorithm," IEEE Trans. Med. Imaging, vol. 9, pp. 84-93, 1990) discloses another conventional iterative method, the maximum a posteriori algorithm, which reduces reconstruction noise by adding prior information, but this noise reduction comes at the cost of image detail.
In recent years, deep learning has developed rapidly in the field of medical imaging, and neural networks have proven to be powerful tools for medical image analysis tasks such as noise reduction, segmentation, registration, and diagnosis. However, these applications of neural networks focus on medical images with a single input, as in reference [4] (I. R. Duffy, A. J. Boyle, and N. Vasdev, "Improving PET imaging acquisition and analysis with machine learning," Mol. Imaging, vol. 18, 2019).
In addition, a patent search found the following two patents closest to the technology of the invention:
Patent 1: publication number CN11784788A, application number CN202010501497.0, title: a PET fast imaging method and system based on deep learning;
Patent 2: publication number CN11867474A, application number CN201880090666.7, a PET fast imaging method and system based on deep learning that estimates a full-dose PET image from low-dose PET imaging;
Patent 1 and Patent 2 are both single-input deep-learning methods: they cannot use unfiltered and filtered reconstructed-image information simultaneously, and they require a large amount of data for network training.
To summarize, the drawbacks of the prior art are as follows:
1. Conventional methods: (1) Maximum-likelihood expectation maximization (MLEM) iterative reconstruction plus filtering: MLEM reconstruction noise increases significantly as the number of iterations increases; Gaussian or non-local means filtering reduces the image noise but also reduces the image contrast. (2) Maximum a posteriori (MAP) iterative reconstruction: prior information based on PET, CT, or MR images is added to the iterative reconstruction algorithm, which suppresses high-iteration noise but loses image detail.
2. Deep-learning methods with a single input image: (1) An unfiltered image has small bias but large noise; a single-input deep-learning algorithm on the unfiltered image achieves limited noise reduction and cannot meet the noise-reduction requirements of clinical application. (2) A filtered image has small noise but large bias; a single-input deep-learning algorithm on the filtered image reduces noise but biases the image values, which affects clinical diagnosis.
The present application was made based on this.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a PET image processing method based on a multi-input deep-learning image fusion algorithm.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a PET image processing method based on deep learning comprises the following steps:
(1) Establishing a neural network model of a multi-input deep learning fusion algorithm;
(2) Training a neural network model of a multi-input deep learning fusion algorithm;
(3) Inputting a plurality of non-filtered and filtered PET images into a trained neural network model for processing to obtain a fusion image;
the architecture of the neural network model of the multi-input deep learning fusion algorithm comprises the following steps: the device comprises a stack encoder, a stack decoder and a residual error compensation module; the stack encoder comprises a plurality of convolution layers and a ReLU activation function; the stack decoder comprises a plurality of deconvolution layers and a ReLU activation function; the convolution layers and the deconvolution layers are mutually matched and connected through shortcuts, the convolution layers and the deconvolution layers are same in number and are symmetrically arranged, and a ReLU activation function is arranged behind each convolution layer or each deconvolution layer.
Further, the fused image is obtained by passing the several unfiltered and filtered PET images through convolution, ReLU activation, deconvolution, and ReLU activation, with copy-and-add (shortcut) connections and residual compensation.
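As a concrete illustration of the multi-input idea, the sketch below stacks one unfiltered PET image with several filtered versions into a single multi-channel array. The patent does not fix how the multiple images enter the network, so channel stacking here is an assumption, and all names and values are illustrative.

```python
import numpy as np

def stack_inputs(unfiltered, filtered_versions):
    # Hypothetical pre-step: stack the unfiltered image and its filtered
    # versions along a leading channel axis so the network sees them jointly.
    return np.stack([unfiltered] + list(filtered_versions), axis=0)

img = np.ones((4, 4))     # stand-in for an MLEM-reconstructed slice
weak = 0.9 * img          # stand-in for non-local means weak filtering
strong = 0.8 * img        # stand-in for non-local means strong filtering
multi_input = stack_inputs(img, [weak, strong])
```

Each channel keeps its own bias/noise trade-off, which is what lets the fusion network draw on both the low-bias unfiltered data and the low-noise filtered data.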
Further, the stack encoder is represented as:

E_i(x_i) = ReLU(W_i * x_i + b_i),  i = 0, 1, ..., N,  (1)

where N is the number of convolutional layers, W_i and b_i denote the weight and bias, respectively, * denotes the convolution operator, x_0 is an extracted block of the input image, x_i (i > 0) is the feature extracted by the first i layers of the network, and ReLU(x) = max(0, x) is the activation function.
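A minimal numerical reading of Eq. (1), using a 1-D convolution as a stand-in for the 2-D layer; the weights and input block are made up purely for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def encoder_layer(x, w, b):
    # E_i(x_i) = ReLU(W_i * x_i + b_i): convolve, add bias, clip negatives
    return relu(np.convolve(x, w, mode="same") + b)

x0 = np.array([1.0, -2.0, 3.0, 0.5])   # extracted block of the input image
x1 = encoder_layer(x0, w=np.array([0.25, 0.5, 0.25]), b=0.0)
```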
Further, the stack decoder is represented as:

D_i(y_i) = ReLU(W'_i ⊛ y_i + b'_i),  i = M, M−1, ..., 1,  (2)

where M is the number of deconvolution layers, W'_i and b'_i denote the weight and bias, respectively, ⊛ denotes the deconvolution operator, y_M = x is the feature vector output by the stack encoder, y_i (M > i > 0) is the feature vector reconstructed by the first i deconvolution layers, and y_0 is the reconstructed image block.
Further, the residual compensation module performs residual compensation by: defining an input image as I, an output image as O, and a corresponding residual mapping as F (I) = O-I; after the residual mapping is established, the original mapping R (I) = O = F (I) + I is reconstructed.
Further, the parameters T = {W_i, b_i, W'_i, b'_i} in the convolution and deconvolution layers are estimated by optimizing the loss function L(D; T) between low-dose and full-dose PET images. Given a set of paired full-dose and low-dose PET image blocks P = {(X_1, Y_1), (X_2, Y_2), ..., (X_K, Y_K)}, {X_i} denotes the set of all full-dose image blocks in P, {Y_i} denotes the set of all low-dose image blocks in P, and K is the total number of training samples.
The loss function is defined as the mean square error:

L(D; T) = (1/K) Σ_{k=1}^{K} ||U(Y_k; T) − X_k||²  (3)
compared with the prior art, the invention has the following beneficial technical effects:
the multi-input deep learning neural network (including all neural networks, not limited to the Unet structure network implemented by the scheme) provided by the invention integrates the small deviation in the non-filtering image and the low noise characteristic in the filtering image, and obtains the image which has low noise and low deviation and retains the detail information.
Drawings
FIG. 1 is a diagram of a neural network model architecture of the multi-input deep learning fusion algorithm according to this embodiment;
FIG. 2 is a transaxial view of the IEC phantom reconstructed images of this embodiment: (a) MLEM reconstruction, (b) non-local means weak filtering, (c) non-local means strong filtering, and (d) the multi-input deep-learning fusion algorithm;
FIG. 3 is a transaxial view of the low-dose body patient data images of this embodiment: (a) MLEM reconstruction, (b) non-local means weak filtering, (c) non-local means strong filtering, and (d) the multi-input deep-learning fusion algorithm.
Detailed Description
To disclose the technical means of the invention and the technical effects achieved clearly and completely, an embodiment is provided and described in detail below with reference to the accompanying drawings:
examples
To clearly illustrate the deep-learning-based PET image processing method, this embodiment first introduces the neural network model of the proposed multi-input deep-learning fusion algorithm, then gives the specific design and structure of the deep-learning neural network used in this implementation, then describes the training and testing procedure, and finally presents the evaluation results of the algorithm.
(1) Neural network model for establishing multi-input deep learning fusion algorithm
The neural network model architecture of the deep learning fusion algorithm is shown in fig. 1.
The network consists of 10 layers: 5 convolutional layers and 5 deconvolution layers, arranged symmetrically. Increasing the number of convolution and deconvolution layers leads to overfitting and longer training time, while decreasing it leads to underfitting. Empirically, this embodiment uses a CNN with 5 convolutional layers and 5 deconvolution layers, which reaches a small loss and error in less training time. Matched convolution and deconvolution layers are connected by shortcuts, and each layer is followed by a ReLU (rectified linear unit) activation function. The structure and details of the network are as follows: the deep-learning network includes a stack encoder, a stack decoder, and residual compensation, where the encoder and decoder are of a fully connected layer design, and the shortcut connections of the residual compensation are used to recover structural detail in the image and to ease network training.
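Under the assumptions that the layers are stride-1 "same" convolutions and that each shortcut adds the matched encoder feature before the corresponding decoder layer (the patent states only that matched layers are connected by shortcuts), the 5+5 layer structure can be sketched in 1-D as follows; all weights are random placeholders, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def conv(x, w, b):
    # stride-1 'same' 1-D convolution; with stride 1 and 'same' padding a
    # deconvolution reduces to the same operation, so it stands in for both
    return np.convolve(x, w, mode="same") + b

N_LAYERS = 5
enc = [(rng.normal(size=3) * 0.1, 0.0) for _ in range(N_LAYERS)]
dec = [(rng.normal(size=3) * 0.1, 0.0) for _ in range(N_LAYERS)]

def forward(x):
    skips, h = [], x
    for w, b in enc:                      # stack encoder, Eq. (1)
        h = relu(conv(h, w, b))
        skips.append(h)
    for (w, b), s in zip(dec, reversed(skips)):
        h = relu(conv(h + s, w, b))       # shortcut between matched layers
    return h + x                          # residual compensation R(I) = F(I) + I

out = forward(rng.normal(size=64))
```

The symmetric pairing means the i-th decoder layer always receives features at the same level of abstraction it is trying to undo, which is what the shortcut connections exploit.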
1) Stack encoder
There are two types of layers in the encoder: convolutional layers and ReLU activation functions. The stack encoder E_i(x_i) can be expressed as:

E_i(x_i) = ReLU(W_i * x_i + b_i),  i = 0, 1, ..., N,  (1)

where N is the number of convolutional layers, W_i and b_i denote the weight and bias, respectively, * denotes the convolution operator, x_0 is an extracted block of the input image, x_i (i > 0) is the feature extracted by the first i layers of the network, and ReLU(x) = max(0, x) is the activation function.
2) Stack decoder
There are also two types of layers in the stack decoder: deconvolution layers and ReLU activation functions. The stack decoder can be expressed as:

D_i(y_i) = ReLU(W'_i ⊛ y_i + b'_i),  i = M, M−1, ..., 1,  (2)

where M is the number of deconvolution layers, W'_i and b'_i denote the weight and bias, respectively, ⊛ denotes the deconvolution operator, y_M = x is the feature vector output by the stack encoder, y_i (M > i > 0) is the feature vector reconstructed by the first i deconvolution layers, and y_0 is the reconstructed image block.
3) Residual compensation
Convolution can lose image detail; applying a deconvolution network recovers some of it, but as the number of layers increases, the accumulated loss may not yield a satisfactory image. To solve this problem, we add residual compensation to the proposed network. Defining the input image as I and the output image as O, the corresponding residual mapping can be represented as F(I) = O − I. After building the residual mapping, we can reconstruct the original mapping R(I) = O = F(I) + I.
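The residual relations F(I) = O − I and R(I) = F(I) + I can be checked numerically; the toy "learned residual" below is invented purely to exercise the identity.

```python
import numpy as np

def residual_compensation(I, residual_fn):
    # the network predicts F(I) = O - I; the output is R(I) = F(I) + I
    return residual_fn(I) + I

I = np.array([[4.0, 6.0], [8.0, 2.0]])
toy_residual = lambda img: -0.1 * (img - img.mean())  # zero-mean toy residual
O = residual_compensation(I, toy_residual)
```

Because the toy residual is zero-mean, the output keeps the input's mean intensity, one way to see why residual learning helps preserve quantitative image values.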
(2) Neural network model for training multi-input deep learning fusion algorithm
The deep-learning neural network provided by the invention is an end-to-end mapping from the low-dose PET image end to the full-dose PET image end. When designing the network structure, in order to create the mapping function U (used as the trained neural network model in step (3) to generate the fused image), the parameters T = {W_i, b_i, W'_i, b'_i} in the convolution and deconvolution layers must be estimated. This estimate is obtained by minimizing the loss function L(D; T) between the low-dose and standard full-dose PET images. Given a set of paired full-dose and low-dose PET image blocks P = {(X_1, Y_1), (X_2, Y_2), ..., (X_K, Y_K)}, {X_i} and {Y_i} denote the full-dose and low-dose PET image blocks, respectively, and K is the total number of training samples. The loss function is defined as the mean square error (MSE):

L(D; T) = (1/K) Σ_{k=1}^{K} ||U(Y_k; T) − X_k||²  (3)
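Eq. (3) is simply a mean over paired blocks of squared differences. The sketch below evaluates it for two made-up block pairs, with `predict` standing in for the trained mapping U(·; T); the identity mapping and the toy data are assumptions for illustration only.

```python
import numpy as np

def mse_loss(predict, pairs):
    # L(D; T) = (1/K) * sum_k || U(Y_k; T) - X_k ||^2 over K paired blocks
    K = len(pairs)
    return sum(float(np.sum((predict(Y) - X) ** 2)) for X, Y in pairs) / K

predict = lambda Y: Y            # placeholder for the network's mapping U
pairs = [                        # (full-dose X_k, low-dose Y_k) toy blocks
    (np.zeros((2, 2)), np.full((2, 2), 0.5)),
    (np.ones((2, 2)), np.ones((2, 2))),
]
loss = mse_loss(predict, pairs)
```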
(3) Inputting a plurality of non-filtered and filtered PET images into a trained neural network model U for processing to obtain a fusion image (data and result)
To study and evaluate the proposed multi-input deep-learning fusion algorithm, we applied it to clinical low-dose (50%) IEC phantom and body patient data. The transaxial view of the resulting IEC phantom reconstructed image is shown in FIG. 2(d), and the transaxial view of the low-dose body patient data image is shown in FIG. 3(d). The results are compared with the conventional reconstruction algorithm (Comparative Example 1) and the filtered images (Comparative Examples 2 and 3); the specific results are shown in FIGS. 2 and 3.
Comparative example 1
Using a conventional reconstruction algorithm (MLEM reconstruction)
A transverse axis view of the obtained IEC phantom reconstructed image is shown in FIG. 2 (a), and a transverse axis view of the low-dose body patient data image is shown in FIG. 3 (a).
Comparative example 2
Non-local means weak filtering was used.
A transverse axis view of the resulting IEC phantom reconstructed image is shown in fig. 2 (b), and a transverse axis view of the low dose body patient data image is shown in fig. 3 (b).
Comparative example 3
Non-local means strong filtering was used.
A transverse axis view of the resulting IEC phantom reconstructed image is shown in fig. 2 (c), and a transverse axis view of the low dose body patient data image is shown in fig. 3 (c).
Comparing the proposed method with Comparative Examples 1, 2, and 3:
As seen in FIG. 2, the image processed by the proposed deep-learning fusion algorithm removes more noise, improves the contrast between the small spheres and the background, and retains the detail information of the image.
As seen in FIG. 3, the proposed deep-learning fusion algorithm likewise removes more noise, improves the contrast of the organs in the image, and retains the detail information of the image.
In summary, the invention provides a multi-input deep-learning PET image fusion algorithm that fuses information from several unfiltered and filtered PET images. This embodiment evaluated the algorithm using a low-dose IEC phantom and low-dose body patient data and compared it with conventional unfiltered and filtered images. The deep-learning image fusion algorithm removes more noise, improves image contrast, and retains image detail. These improvements indicate the potential clinical utility of the algorithm in low-dose PET imaging.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments of the invention, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (2)

1. A PET image processing method based on deep learning is characterized by comprising the following steps:
(1) Establishing a neural network model of a multi-input deep learning fusion algorithm;
(2) Training a neural network model of a multi-input deep learning fusion algorithm;
(3) Inputting a plurality of non-filtered and filtered PET images into a trained neural network model for processing to obtain a fused image;
the architecture of the neural network model of the multi-input deep learning fusion algorithm comprises the following steps: the device comprises a stack encoder, a stack decoder and a residual error compensation module; the stack encoder comprises a plurality of convolution layers and a ReLU activation function; the stack decoder comprises a plurality of deconvolution layers and a ReLU activation function; the convolution layers and the deconvolution layers are mutually matched and connected through shortcuts, are the same in number and are symmetrically arranged, and a ReLU activation function is arranged behind each convolution layer or each deconvolution layer;
obtaining the fused image by passing the several unfiltered and filtered PET images through convolution, ReLU activation, deconvolution, and ReLU activation, with copy-and-add (shortcut) connections;
the stack encoder is represented as:

E_i(x_i) = ReLU(W_i * x_i + b_i),  i = 0, 1, ..., N,  (1)

wherein N is the number of convolutional layers, W_i and b_i represent the weight and bias, respectively, * is the convolution operator, x_0 is an extracted block of the input image, x_i (i > 0) is the feature extracted by the first i layers of the network, and ReLU(x) = max(0, x) is the activation function;
the stack decoder is represented as:

D_i(y_i) = ReLU(W'_i ⊛ y_i + b'_i),  i = M, M−1, ..., 1,  (2)

wherein M is the number of deconvolution layers, W'_i and b'_i denote the weight and bias, respectively, ⊛ denotes the deconvolution operator, y_M = x is the feature vector output by the stack encoder, y_i (M > i > 0) is the feature vector reconstructed by the first i deconvolution layers, and y_0 is the reconstructed image block;
the residual error compensation module performs residual error compensation through the following processes: defining an input image as I and an output image as O, and representing the corresponding residual mapping as F (I) = O-I; after the residual mapping is established, the original mapping R (I) = O = F (I) + I is reconstructed.
2. The deep learning-based PET image processing method according to claim 1, wherein the parameters T = {W_i, b_i, W'_i, b'_i} in the convolution and deconvolution layers are estimated via the loss function L(D; T) between low-dose and full-dose PET images; given a set of paired full-dose and low-dose PET image blocks P = {(X_1, Y_1), (X_2, Y_2), ..., (X_K, Y_K)}, where {X_i} and {Y_i} denote the full-dose and low-dose PET image blocks, respectively, and K is the total number of training samples; the loss function is defined as the mean square error:

L(D; T) = (1/K) Σ_{k=1}^{K} ||U(Y_k; T) − X_k||²  (3).
CN202110116659.3A 2021-01-28 2021-01-28 PET image processing method based on deep learning Active CN112991477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110116659.3A CN112991477B (en) 2021-01-28 2021-01-28 PET image processing method based on deep learning


Publications (2)

Publication Number / Publication Date
CN112991477A (en), 2021-06-18
CN112991477B (en), 2023-04-18

Family

ID=76345669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110116659.3A Active CN112991477B (en) 2021-01-28 2021-01-28 PET image processing method based on deep learning

Country Status (1)

Country Link
CN (1) CN112991477B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978778A (en) * 2019-03-06 2019-07-05 浙江工业大学 Convolutional neural networks medicine CT image denoising method based on residual error study

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108205804B (en) * 2016-12-16 2022-05-31 斑马智行网络(香港)有限公司 Image processing method and device and electronic equipment
CN109035356B (en) * 2018-07-05 2020-07-10 四川大学 System and method based on PET (positron emission tomography) graphic imaging
US10762632B2 (en) * 2018-09-12 2020-09-01 Siemens Healthcare Gmbh Analysis of skeletal trauma using deep learning
CN111325686B (en) * 2020-02-11 2021-03-30 之江实验室 Low-dose PET three-dimensional reconstruction method based on deep learning
CN112258642B (en) * 2020-12-21 2021-04-09 之江实验室 Low-dose PET data three-dimensional iterative updating reconstruction method based on deep learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978778A (en) * 2019-03-06 2019-07-05 浙江工业大学 Convolutional neural networks medicine CT image denoising method based on residual error study

Also Published As

Publication number Publication date
CN112991477A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
CN111627082B (en) PET image reconstruction method based on filtering back projection algorithm and neural network
CN112381741B (en) Tomography image reconstruction method based on SPECT data sampling and noise characteristics
CN110223255B (en) Low-dose CT image denoising and recursion method based on residual error coding and decoding network
CN112258642B (en) Low-dose PET data three-dimensional iterative updating reconstruction method based on deep learning
CN113160347B (en) Low-dose double-tracer PET reconstruction method based on attention mechanism
WO2024011797A1 (en) Pet image reconstruction method based on swin-transformer regularization
CN109741254B (en) Dictionary training and image super-resolution reconstruction method, system, equipment and storage medium
CN102184559B (en) Particle filtering-based method of reconstructing static PET (Positron Emission Tomograph) images
CN111340903B (en) Method and system for generating synthetic PET-CT image based on non-attenuation correction PET image
CN111161182B (en) MR structure information constrained non-local mean guided PET image partial volume correction method
Nguyen et al. 3D Unet generative adversarial network for attenuation correction of SPECT images
CN116863014A (en) LDCT image reconstruction method based on depth double-domain joint guide learning
CN114358285A (en) PET system attenuation correction method based on flow model
CN112991477B (en) PET image processing method based on deep learning
CN114463459B (en) Partial volume correction method, device, equipment and medium for PET image
Hashimoto et al. Deep learning-based PET image denoising and reconstruction: a review
Mahmoud et al. Variant Wasserstein Generative Adversarial Network Applied on Low Dose CT Image Denoising.
CN113436118A (en) Low-dose CT image restoration method based on multi-scale convolutional coding network
Corda-D'Incan et al. Iteration-dependent networks and losses for unrolled deep learned FBSEM PET image reconstruction
Liang et al. A model-based deep learning reconstruction for X-Ray CT
CN112529980B (en) Multi-target finite angle CT image reconstruction method based on maximum minimization
Kang et al. Denoising Low-Dose CT Images Using a Multi-Layer Convolutional Analysis-Based Sparse Encoder Network
CN112801886B (en) Dynamic PET image denoising method and system based on image wavelet transformation
Feng et al. Multi-Dimensional Spatial Attention Residual U-Net (Msaru-Net) for Low-Dose Lung Ct Image Restoration

Legal Events

Code / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant