CN115984401A - Dynamic PET image reconstruction method based on model-driven deep learning - Google Patents
- Publication number: CN115984401A (application CN202310042621.5A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Nuclear Medicine (AREA)
Abstract
The invention discloses a dynamic PET image reconstruction method based on model-driven deep learning. The method uses 3D space-time convolution to extract the temporal and spatial correlations of dynamic projection data simultaneously, and integrates the forward- and back-projection operators into the reconstruction network, so that the method has strong physical constraints and interpretability. The invention splits the dynamic PET image reconstruction problem into a plurality of cascaded reconstruction blocks, each comprising a main network for updating the main image-domain variable and a dual network for updating the dual measurement-domain variable. The invention can reconstruct a high-quality dynamic PET tracer activity distribution image from ultra-low-count dynamic PET projection data, and solves the problems of poor interpretability and poor reconstruction quality in current mainstream methods.
Description
Technical Field
The invention belongs to the technical field of PET imaging, and particularly relates to a dynamic PET image reconstruction method based on model-driven deep learning.
Background
Dynamic PET imaging can quantitatively characterize physiological parameters in biological tissue and plays an indispensable role in tumor detection, characterization of heart disease, drug research and development, and the like. However, image reconstruction from dynamic measurement data is very challenging because of the ill-posed nature of PET reconstruction and the low-count characteristic of single-frame dynamic PET data; especially in early time frames, the single-frame counts are low and the reconstructed images suffer from severe noise.
In addition, with continuing advances in imaging technology and detector performance, ultra-fast time-frame imaging has become feasible in hardware, but existing reconstruction algorithms cannot produce high-quality reconstructions from ultra-low-count projection data. Traditional dynamic reconstruction algorithms such as filtered back projection (FBP) and maximum-likelihood expectation maximization (MLEM) do not model the temporal signal, so the reconstructed images differ greatly between frames.
With the development of deep learning, a new family of deep-learning-based solutions, such as direct learning and model-based learning, has emerged in the field of PET image reconstruction. The work [G. Wang and J. Qi, "PET image reconstruction using kernel method," IEEE Transactions on Medical Imaging, vol. 34, no. 1, pp. 61-71, 2014] improves reconstruction by introducing a temporal prior on top of the kernel method, but its performance on ultra-fast time-frame and ultra-low-count data in practical applications is limited, partly because its prior information is obtained from a single reconstructed dataset and the advantages of data-driven methods are not exploited. The work [B. Wang and H. Liu, "FBP-Net for direct reconstruction of dynamic PET images," Physics in Medicine & Biology, vol. 65, no. 23, p. 235008, 2020] achieves a relatively good dynamic reconstruction effect by combining the traditional filtered back projection algorithm with a denoising neural network, but it has poor interpretability and generalization because the system-matrix constraint tied to the physical characteristics of the PET scanner is not considered.
It can be seen that the prior art either fails to exploit the temporal correlation of dynamic projection data or lacks physical constraints, which leads to unstable reconstruction results and poor performance at ultra-low counts, thereby limiting the development of ultra-fast time-frame PET imaging. To obtain better reconstruction quality, existing dynamic reconstruction methods often require projection data with longer acquisition times; however, long scan times inevitably introduce motion artifacts, further degrading the quality of the reconstructed image.
Disclosure of Invention
In view of the above, the invention provides a dynamic PET image reconstruction method based on model-driven deep learning, which can reconstruct a high-quality dynamic PET tracer activity distribution map from ultra-low-count dynamic projection data, effectively exploits the space-time correlation of the dynamic projection data, incorporates the constraint of the physical projection matrix, and has strong interpretability.
A dynamic PET image reconstruction method based on model-driven deep learning comprises the following steps:
(1) Detecting biological tissues injected with the radioactive drugs by using a detector, and acquiring corresponding dynamic sinogram projection data Y;
(2) Reconstructing the dynamic sinogram projection data Y to obtain the corresponding dynamic PET tracer activity distribution map X;
(3) Executing the steps (1) and (2) for multiple times to obtain a large number of samples, wherein each sample comprises dynamic sinogram projection data Y and a dynamic PET tracer activity distribution diagram X corresponding to the dynamic sinogram projection data Y, and further dividing all the samples into a training set, a verification set and a test set;
(4) Converting the dynamic reconstruction problem into a Poisson log-likelihood optimization problem with a regular term according to a dynamic PET measurement equation, and converting the optimization problem into a corresponding saddle point problem by using the property of a dual variable;
(5) Alternately updating a primary variable and a dual variable by using a primary-dual network to solve the saddle point problem, thereby constructing an STPD-Net model for reconstructing a dynamic PET image, wherein the model is formed by cascading a plurality of reconstruction modules, and each reconstruction module is formed by connecting the primary network and the dual network;
(6) Training the STPD-Net model by using Y in the training set sample as the input of the STPD-Net model and using X as a label, thereby obtaining a final dynamic PET image reconstruction model;
(7) And inputting Y in the test set sample into the dynamic PET image reconstruction model, and directly reconstructing and outputting a corresponding dynamic PET tracer activity distribution diagram.
Further, the expression of the dynamic PET measurement equation is as follows:
Y=G·X+R
wherein: g is the system response matrix and R is the random and scattered noise terms.
Further, the Poisson log-likelihood optimization problem with a regularization term in step (4) is expressed as follows:

$$\min_{X}\; L(Y\mid X) + \lambda R(X)$$

wherein: $L(Y\mid X)$ is the Poisson likelihood term, $L(Y\mid X)=\sum_{i=1}^{I}\sum_{t=1}^{T}\left(\bar{Y}_{i,t}-Y_{i,t}\log \bar{Y}_{i,t}\right)$; $R(\cdot)$ represents the regularization term; $I$ is the total number of detectors; $T$ is the total number of time frames; $\lambda$ is the regularization penalty factor; $Y_{i,t}$ represents the element in row $i$, column $t$ of the dynamic sinogram projection data $Y$; $\bar{Y}_{i,t}$ represents the element in row $i$, column $t$ of the expected projection data $\bar{Y}=E(Y)=G\cdot X+R$; and $E(\cdot)$ is the expectation operator.
Further, the saddle point problem in step (4) is expressed as follows:

$$\min_{X}\;\sup_{h}\;\langle G\cdot X,\,h\rangle-L^{*}(Y\mid h)+\lambda R(X)$$

wherein: $\sup$ denotes the supremum over the dual variable $h$, $\langle\cdot,\cdot\rangle$ denotes the inner product, and $L^{*}(Y\mid\cdot)$ denotes the convex conjugate of $L(Y\mid\cdot)$.
Further, the expression in step (5) for solving the saddle point problem by alternately updating the main variable and the dual variable with the primal-dual network is as follows:

$$h^{k}=D\left(h^{k-1},\,G\cdot X^{k-1},\,Y\right),\qquad X^{k}=P\left(X^{k-1},\,G^{*}h^{k}\right)$$

wherein: $P(\cdot)$ denotes the main network and $D(\cdot)$ denotes the dual network; $X^{k}_{j,t}$ denotes the element in row $j$, column $t$ of the dynamic PET tracer activity distribution map $X^{k}$ at the $k$-th iteration, and $X^{k-1}_{j,t}$ the corresponding element of $X^{k-1}$ at iteration $k-1$; $h^{k}_{i,t}$ denotes the element in row $i$, column $t$ of the dual variable $h^{k}$ at the $k$-th iteration, and $h^{k-1}_{i,t}$ the corresponding element of $h^{k-1}$; $G^{*}$ denotes the conjugate (adjoint) of the system response matrix $G$; $k$ is a natural number greater than 0; and $j$ is a natural number with $1\le j\le N$, where $N$ is the total number of pixels of the PET tracer activity distribution map.
Further, the input of the dual network comprises the dual variable $h^{k-1}$, the dynamic PET tracer activity distribution map $X^{k-1}$, and the dynamic sinogram projection data $Y$, and its output is the iteratively updated dual variable $h^{k}$. $X^{k-1}$ first undergoes the forward projection operation and is spliced with $h^{k-1}$ and $Y$ along the channel dimension; the spliced result then passes sequentially through four 3D space-time convolutional layers for space-time feature extraction; finally the extracted features are spliced with $h^{k-1}$ along the channel dimension and output as $h^{k}$.

Further, the input of the main network comprises the dual variable $h^{k}$ and the dynamic PET tracer activity distribution map $X^{k-1}$, and its output is the iteratively updated dynamic PET tracer activity distribution map $X^{k}$. $h^{k}$ first undergoes the back projection operation and is spliced with $X^{k-1}$ along the channel dimension; the spliced result then passes sequentially through four 3D space-time convolutional layers for space-time feature extraction; finally the extracted features are spliced with $X^{k-1}$ along the channel dimension and output as $X^{k}$.
Further, the convolution kernel size adopted by the 3D space-time convolution layer is 3 × 3 × 3.
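A 3 × 3 × 3 kernel with unit padding leaves the space-time volume size unchanged, which is what allows the four convolutional layers to be stacked without altering the sinogram or image dimensions. The sketch below (a plain-Python stand-in, not the patent's actual layer implementation) checks this with an identity kernel:

```python
import numpy as np

# Minimal "same"-padded 3D convolution with a 3x3x3 kernel, showing why this
# kernel size preserves the space-time volume shape. This is a sketch, not
# the patent's actual convolutional layer.
def conv3d_same(vol, kernel):
    assert kernel.shape == (3, 3, 3)
    p = np.pad(vol, 1)                      # pad 1 voxel on every side
    out = np.zeros_like(vol, dtype=float)
    D, H, W = vol.shape
    for d in range(D):
        for h in range(H):
            for w in range(W):
                out[d, h, w] = np.sum(p[d:d+3, h:h+3, w:w+3] * kernel)
    return out

vol = np.arange(4 * 5 * 6, dtype=float).reshape(4, 5, 6)   # (x, y, t) volume
k = np.zeros((3, 3, 3)); k[1, 1, 1] = 1.0                  # identity kernel
out = conv3d_same(vol, k)
print(out.shape == vol.shape, np.allclose(out, vol))       # True True
```
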
Further, the process of training the model in step (6) is as follows:
6.1 initializing model parameters, including a bias vector and a weight matrix of each layer, a learning rate and an optimizer;
6.2, inputting the dynamic sinogram projection data Y in the training set samples into the model, performing forward propagation to output the corresponding dynamic PET tracer activity distribution map, and computing the loss function between this result and the label;
and 6.3, continuously and iteratively updating the model parameters by using an optimizer through a gradient descent method according to the loss function until the loss function is converged, and finishing training.
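Steps 6.1-6.3 can be sketched numerically. In the minimal stand-in below, the full STPD-Net is replaced by a single learnable linear layer W (all sizes, the learning rate, and the epoch count are hypothetical) so that the gradient-descent update of step 6.3 can be written out by hand:

```python
import numpy as np

# Sketch of training steps 6.1-6.3 with the STPD-Net replaced by one
# learnable linear layer W mapping sinograms to activity maps. Sizes,
# learning rate, and epoch count are hypothetical.
rng = np.random.default_rng(2)
N, I, T, S = 8, 12, 4, 32             # pixels, detectors, frames, samples
G = rng.random((I, N)) * 0.2
X_train = rng.random((S, N, T))                    # labels: activity maps
Y_train = np.einsum('in,snt->sit', G, X_train)     # inputs: sinograms

W = rng.normal(0, 0.01, (N, I))       # 6.1: initialize model parameters
lr = 0.5                              # 6.1: learning rate
for epoch in range(500):
    X_hat = np.einsum('ni,sit->snt', W, Y_train)   # 6.2: forward propagation
    err = X_hat - X_train
    loss = np.mean(err ** 2)                       # 6.2: MSE loss vs. labels
    grad = 2 * np.einsum('snt,sit->ni', err, Y_train) / err.size  # dLoss/dW
    W -= lr * grad                                 # 6.3: gradient descent step

print(round(loss, 3))
```

In the actual method the gradient is obtained by backpropagation through all cascaded reconstruction modules rather than by this closed-form expression.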
Further, after the training is finished, the model is verified by using the verification set sample, and the model which is best represented on the verification set is used as a final dynamic PET image reconstruction model.
Compared with other reconstruction methods based on 2D convolutional neural networks, the method models well the dependence of the dynamic projection data across different time frames, achieves good recovery of single-frame images, preserves the structural similarity between PET images of different time frames, and outperforms existing deep-learning-based dynamic reconstruction methods.
The invention proposes updating the main variable and the dual variable with a primal-dual network in place of the proximal operators, unrolling the primal-dual hybrid gradient descent algorithm into a model-based deep neural network. This guarantees a degree of interpretability through the mathematical derivation while retaining strong learning and characterization capability, and is the first attempt at a model-based deep learning method in the field of dynamic PET image reconstruction.
The invention performs well on ultra-low-count dynamic projection data and can effectively alleviate the problem of overlong patient waiting times during dynamic acquisition in practical applications. In experiments, the proposed method was verified on both simulated data and clinical mouse scan data, and achieves good reconstruction on single-frame dynamic data with ultra-low counts of only a few thousand, making it particularly suitable for whole-body dynamic PET imaging and parametric PET imaging of the human body.
Drawings
FIG. 1 is a schematic flow chart of steps of a dynamic PET image reconstruction method according to the present invention.
FIG. 2 is a schematic diagram of the overall structure of the STPD-Net model of the present invention.
FIG. 3 shows a comparison of reconstruction results of different methods on different slices of low-count dynamic PET projection data; from left to right: the MLEM reconstruction, the KEM-ST reconstruction, the LPD reconstruction, the FBPnet reconstruction, the reconstruction of the invention, and the ground-truth label image; from top to bottom: the third, eighth, and fifteenth frames.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
As shown in FIG. 1, the dynamic PET image reconstruction method based on model-driven deep learning of the present invention includes the following steps:
training phase
(1) And injecting a radioactive drug into the target tissue, detecting, collecting to obtain dynamic sinogram projection data Y, and reconstructing to obtain a corresponding PET tracer activity distribution diagram X.
(2) According to the dynamic PET imaging principle, a measurement equation model is established:
Y=G·X+R
wherein: g is a system response matrix, R is a random and scattering noise item, and the system response matrix is obtained by calculation through a ray simulation method.
The imaging inverse problem is solved with a Poisson log-likelihood optimization method with a regularization term:

$$\min_{X}\; L(Y\mid X)+\lambda R(X),\qquad L(Y\mid X)=\sum_{i=1}^{I}\sum_{t=1}^{T}\left(\bar{Y}_{i,t}-Y_{i,t}\log\bar{Y}_{i,t}\right)$$

wherein: $L(Y\mid X)$ is the Poisson likelihood term, $I$ denotes the total number of detectors, $T$ denotes the total number of time frames, $\bar{Y}=G\cdot X+R$ denotes the mean of the dynamic sinogram projection data, $R(\cdot)$ represents the regularization term, and $\lambda$ is the regularization penalty factor.
The problem is converted into the form of a saddle point problem by using the nature of the dual problem:

$$\min_{X}\;\sup_{h}\;\langle G\cdot X,\,h\rangle-L^{*}(Y\mid h)+\lambda R(X)$$
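The conversion rests on the Fenchel conjugate identity from convex analysis; writing the likelihood term through its conjugate (notation as above, with the conjugate taken in the projected-image argument) gives the saddle point form:

```latex
% conjugate of the likelihood term, taken in its projected-image argument:
L^{*}(Y \mid h) \;=\; \sup_{Z}\,\bigl[\langle Z,\,h\rangle - L(Y \mid Z)\bigr]
\quad\Longrightarrow\quad
L(Y \mid X) \;=\; \sup_{h}\,\bigl[\langle G\cdot X,\,h\rangle - L^{*}(Y \mid h)\bigr],
% substituting into the regularized problem yields the saddle point form:
\min_{X}\; L(Y \mid X)+\lambda R(X)
\;=\;\min_{X}\,\sup_{h}\;\langle G\cdot X,\,h\rangle-L^{*}(Y \mid h)+\lambda R(X).
```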
the saddle point problem is solved by adopting a method of alternately updating a main variable and a dual variable, namely STPD-Net, as shown in fig. 2, the STPD-Net is formed by cascading a plurality of reconstruction modules, each reconstruction module comprises a main network for updating a main variable X, and a dual network D for updating an introduced dual variable:
wherein: h represents a dual variable, k is an iteration index, G * The conjugate operator representing the system matrix, i.e. the back projection operation, is calculated using the transpose of the system matrix.
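The alternating update can be traced with a toy stand-in in which the learned networks D(·) and P(·) are replaced by simple hand-written steps (a projected-Landweber-style update); the sizes, data, and step size are hypothetical, and G* is realized as the matrix transpose exactly as stated above:

```python
import numpy as np

# Toy stand-in for the alternating primal-dual iteration. The learned
# networks D(.) and P(.) are replaced by hand-written updates; sizes,
# data, and step size are hypothetical.
rng = np.random.default_rng(1)
N, I, T = 16, 24, 5                   # pixels, detector bins, time frames
G = rng.random((I, N)) * 0.1          # system response matrix
X_true = rng.random((N, T))
Y = G @ X_true                        # noiseless sinogram for this toy

# adjoint check: <G x, h> equals <x, G.T h>, so G.T is the back projection
x, h = rng.random(N), rng.random(I)
assert np.isclose((G @ x) @ h, x @ (G.T @ h))

tau = 0.5                             # step size (assumed)
X = np.zeros((N, T))                  # zero-initialized main variable X^0
for k in range(300):
    h_k = G @ X - Y                                 # crude stand-in for the dual update D
    X = np.maximum(X - tau * (G.T @ h_k), 0.0)      # main update P, using G* h^k

rel_err = np.linalg.norm(G @ X - Y) / np.linalg.norm(Y)
print(round(rel_err, 3))
```

The learned networks replace these fixed steps with trainable 3D space-time convolutions, which is what gives STPD-Net its expressive power while keeping the same alternating structure.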
The input of the dual network is the dual variable $h^{k-1}$, the dynamic PET image $X^{k-1}$, and the projection data $Y$, and the output is the updated dual variable $h^{k}$. The dynamic PET image $X^{k-1}$ first undergoes the forward projection operation and is spliced with the dual variable $h^{k-1}$ and the projection data $Y$ along the channel dimension; space-time features are then extracted through four 3D space-time convolutional layers, each with a 3 × 3 × 3 convolution kernel; finally the result is spliced with the dual variable $h^{k-1}$ along the channel dimension to output the updated dual variable $h^{k}$.

The input of the main network is the dynamic PET image $X^{k-1}$ and the dual variable $h^{k}$ updated by the dual network. $h^{k}$ first undergoes the back projection operation and is spliced with the dynamic PET image $X^{k-1}$ along the channel dimension; space-time features are then extracted through four 3D space-time convolutional layers; finally the result is spliced with $X^{k-1}$ along the channel dimension to output the PET reconstructed image $X^{k}$.
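The channel splicing and skip structure of one reconstruction block can be traced at the level of array shapes. In this sketch the four convolutional layers are replaced by a dummy channel-averaging operation, and the final splice is modeled as a residual addition (an assumption made for brevity; the actual block ends with a learned mapping back to one channel):

```python
import numpy as np

def dummy_convs(v):
    # stand-in for the four 3D space-time conv layers (3x3x3 kernels):
    # a channel-mixing mean that preserves the space-time volume shape
    return v.mean(axis=0, keepdims=True)

B, A, T = 16, 24, 5                   # sinogram bins, angles, time frames (hypothetical)
h_prev = np.zeros((1, B, A, T))       # dual variable h^{k-1}, one channel
fwd_X  = np.ones((1, B, A, T))        # forward projection of X^{k-1}
Y      = np.ones((1, B, A, T))        # measured dynamic sinogram

z = np.concatenate([fwd_X, h_prev, Y], axis=0)   # channel-dim splice -> 3 channels
feat = dummy_convs(z)                            # space-time feature extraction
h_new = h_prev + feat                            # skip with h^{k-1} -> h^k (assumed residual)

print(z.shape, h_new.shape)
```
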
(3) In the training phase, the reconstructed model is trained with the dynamic PET tracer activity distribution map as a label and the dynamic sinogram data as an input.
First, the model parameters are initialized, covering both the network parameters and the reconstructed image: the network parameters use Kaiming initialization, while the initial image and initial dual variables are zero-initialized.
And then inputting the dynamic sinogram projection data Y in the training sample into an STPD-Net model, and performing forward propagation to obtain an output result of the last iteration module as a model reconstruction result.
The MSE loss between the model output and the dynamic PET tracer activity distribution map, together with its gradients with respect to each variable, is then computed, and all learnable parameters in the model are updated with the Adam optimizer until the value of the loss function essentially stops changing, at which point training is complete.
And finally, verifying by using a verification set sample, and taking the model which has the best performance in the verification set as a final dynamic PET image reconstruction model.
Inference phase
(1) Dynamic sinogram projection data are obtained through measurement or simulation.
(2) And taking the dynamic sinogram data and the initialized image as input, and directly outputting the dynamic PET tracer activity distribution diagram by the trained reconstruction model.
Experiments based on simulated ultra-low-count dynamic PET data were performed to verify the effectiveness of this embodiment. The data set contains 40 3D brain templates, each of spatial size 128 × 128 × 40, with 18 time frames collected in total; the simulated tracer is ¹⁸F-FDG. The corresponding dynamic sinogram data are obtained by simulated projection with added noise, with a single-frame sinogram count of about 1 × 10⁴; 33 samples are used as training data, 2 samples as validation data, and the remaining 5 samples for testing.
STPD-Net is implemented in PyTorch 1.7.0 and trained on an Ubuntu server with an NVIDIA TITAN X GPU; the optimizer is Adam with an initial learning rate of 0.0001 and a batch size of 4; 200 epochs are trained in total, and the model performing best on the validation set is used for testing.
FIG. 3 shows reconstructed images of the present invention and other methods on different slices of low-count projection data, where KEM-ST and FBPnet are current mainstream methods. It can be seen that both the MLEM and KEM-ST reconstructions exhibit severe noise, and the LPD method, which does not consider the association of data between time frames, shows large structural differences between frames. FBPnet controls noise better but cannot recover structural information well, and its accuracy needs improvement. The proposed method achieves the best structural recovery as well as the best noise suppression and accuracy.
The foregoing description of the embodiments is provided to enable a person of ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the invention is not limited to the above embodiments, and improvements and modifications made by those skilled in the art based on this disclosure fall within the protection scope of the invention.
Claims (10)
1. A dynamic PET image reconstruction method based on model-driven deep learning comprises the following steps:
(1) Detecting biological tissues injected with the radiopharmaceuticals by using a detector, and acquiring corresponding dynamic sinogram projection data Y;
(2) Reconstructing projection data Y of the dynamic sinogram to obtain a corresponding activity distribution map X of the dynamic PET tracer;
(3) Executing the steps (1) and (2) for multiple times to obtain a large number of samples, wherein each sample comprises dynamic sinogram projection data Y and a dynamic PET tracer activity distribution diagram X corresponding to the dynamic sinogram projection data Y, and further dividing all the samples into a training set, a verification set and a test set;
(4) Converting the dynamic reconstruction problem into a Poisson log-likelihood optimization problem with a regular term according to a dynamic PET measurement equation, and converting the optimization problem into a corresponding saddle point problem by using the property of a dual variable;
(5) Alternately updating a main variable and a dual variable by using a main dual network to solve the saddle point problem, thereby constructing an STPD-Net model for reconstructing the dynamic PET image, wherein the model is formed by cascading a plurality of reconstruction modules, and each reconstruction module is formed by connecting the main network and the dual network;
(6) Training the STPD-Net model by using Y in the training set sample as the input of the STPD-Net model and using X as a label, thereby obtaining a final dynamic PET image reconstruction model;
(7) And inputting Y in the test set sample into the dynamic PET image reconstruction model, and directly reconstructing and outputting a corresponding dynamic PET tracer activity distribution diagram.
2. The dynamic PET image reconstruction method according to claim 1, characterized in that: the expression of the dynamic PET measurement equation is as follows:
Y=G·X+R
wherein: g is the system response matrix and R is the random and scattered noise terms.
3. The dynamic PET image reconstruction method according to claim 2, characterized in that: the Poisson log-likelihood optimization problem with a regularization term in step (4) is expressed as follows:

$$\min_{X}\; L(Y\mid X) + \lambda R(X)$$

wherein: $L(Y\mid X)$ is the Poisson likelihood term, $L(Y\mid X)=\sum_{i=1}^{I}\sum_{t=1}^{T}\left(\bar{Y}_{i,t}-Y_{i,t}\log \bar{Y}_{i,t}\right)$; $R(\cdot)$ represents the regularization term; $I$ is the total number of detectors; $T$ is the total number of time frames; $\lambda$ is the regularization penalty factor; $Y_{i,t}$ represents the element in row $i$, column $t$ of the dynamic sinogram projection data $Y$; $\bar{Y}_{i,t}$ represents the element in row $i$, column $t$ of the expected projection data $\bar{Y}=E(Y)=G\cdot X+R$; and $E(\cdot)$ is the expectation operator.
5. The dynamic PET image reconstruction method according to claim 4, characterized in that: in step (5), the expression for solving the saddle point problem by alternately updating the main variable and the dual variable with the primal-dual network is as follows:

$$h^{k}=D\left(h^{k-1},\,G\cdot X^{k-1},\,Y\right),\qquad X^{k}=P\left(X^{k-1},\,G^{*}h^{k}\right)$$

wherein: $P(\cdot)$ denotes the main network and $D(\cdot)$ denotes the dual network; $X^{k}_{j,t}$ denotes the element in row $j$, column $t$ of the dynamic PET tracer activity distribution map $X^{k}$ at the $k$-th iteration, and $X^{k-1}_{j,t}$ the corresponding element of $X^{k-1}$ at iteration $k-1$; $h^{k}_{i,t}$ denotes the element in row $i$, column $t$ of the dual variable $h^{k}$ at the $k$-th iteration, and $h^{k-1}_{i,t}$ the corresponding element of $h^{k-1}$; $G^{*}$ denotes the conjugate (adjoint) of the system response matrix $G$; $k$ is a natural number greater than 0; and $j$ is a natural number with $1\le j\le N$, where $N$ is the total number of pixels of the PET tracer activity distribution map.
6. The dynamic PET image reconstruction method according to claim 5, characterized in that: the input of the dual network comprises the dual variable $h^{k-1}$, the dynamic PET tracer activity distribution map $X^{k-1}$, and the dynamic sinogram projection data $Y$, and its output is the iteratively updated dual variable $h^{k}$. $X^{k-1}$ first undergoes the forward projection operation and is spliced with $h^{k-1}$ and $Y$ along the channel dimension; the spliced result then passes sequentially through four 3D space-time convolutional layers for space-time feature extraction; finally the extracted features are spliced with $h^{k-1}$ along the channel dimension and output as $h^{k}$.
7. The dynamic PET image reconstruction method according to claim 5, characterized in that: the input of the main network comprises the dual variable $h^{k}$ and the dynamic PET tracer activity distribution map $X^{k-1}$, and its output is the iteratively updated dynamic PET tracer activity distribution map $X^{k}$. $h^{k}$ first undergoes the back projection operation and is spliced with $X^{k-1}$ along the channel dimension; the spliced result then passes sequentially through four 3D space-time convolutional layers for space-time feature extraction; finally the extracted features are spliced with $X^{k-1}$ along the channel dimension and output as $X^{k}$.
8. The dynamic PET image reconstruction method according to claim 6 or 7, characterized in that: the convolution kernel size adopted by the 3D space-time convolutional layers is 3 × 3 × 3.
9. The dynamic PET image reconstruction method according to claim 1, characterized in that: the process of training the model in the step (6) is as follows:
6.1 initializing model parameters, including a bias vector and a weight matrix of each layer, a learning rate and an optimizer;
6.2, inputting the dynamic sinogram projection data Y in the training set samples into the model, performing forward propagation to output the corresponding dynamic PET tracer activity distribution map, and computing the loss function between this result and the label;
and 6.3, continuously and iteratively updating model parameters by using an optimizer according to the loss function through a gradient descent method until the loss function is converged, and finishing training.
10. The dynamic PET image reconstruction method according to claim 9, characterized in that: and after the training is finished, verifying the model by using a verification set sample, and taking the model which best expresses on the verification set as a final dynamic PET image reconstruction model.
Priority Applications (1)
- CN202310042621.5A — priority date 2023-01-28, filed 2023-01-28 — Dynamic PET image reconstruction method based on model-driven deep learning

Publications (1)
- CN115984401A (publication date 2023-04-18)
Family
ID=85974095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310042621.5A Pending CN115984401A (en) | 2023-01-28 | 2023-01-28 | Dynamic PET image reconstruction method based on model-driven deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115984401A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117437152A (en) * | 2023-12-21 | 2024-01-23 | 之江实验室 | PET iterative reconstruction method and system based on diffusion model |
CN117437152B (en) * | 2023-12-21 | 2024-04-02 | 之江实验室 | PET iterative reconstruction method and system based on diffusion model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111627082B (en) | PET image reconstruction method based on filtering back projection algorithm and neural network | |
CN112053412B (en) | Low-dose Sinogram denoising and PET image reconstruction method based on teacher-student generator | |
Burger et al. | EM-TV methods for inverse problems with Poisson noise | |
CN113516210B (en) | Lung adenocarcinoma squamous carcinoma diagnosis model training method and device based on PET/CT | |
CN109636869B (en) | Dynamic PET image reconstruction method based on non-local total variation and low-rank constraint | |
CN113160347B (en) | Low-dose double-tracer PET reconstruction method based on attention mechanism | |
CN104657950B (en) | Dynamic PET (positron emission tomography) image reconstruction method based on Poisson TV | |
WO2024011797A1 (en) | Pet image reconstruction method based on swin-transformer regularization | |
CN109993808B (en) | Dynamic double-tracing PET reconstruction method based on DSN | |
CN108550172B (en) | PET image reconstruction method based on non-local characteristics and total variation joint constraint | |
US20230059132A1 (en) | System and method for deep learning for inverse problems without training data | |
CN106204674A (en) | The dynamic PET images method for reconstructing retrained based on structure dictionary and kinetic parameter dictionary joint sparse | |
Xue et al. | A 3D attention residual encoder–decoder least-square GAN for low-count PET denoising | |
Xu et al. | Deep-learning-based separation of a mixture of dual-tracer single-acquisition PET signals with equal half-lives: a simulation study | |
Feng et al. | Rethinking PET image reconstruction: ultra-low-dose, sinogram and deep learning | |
CN114387236A (en) | Low-dose Sinogram denoising and PET image reconstruction method based on convolutional neural network | |
CN107146263B (en) | A kind of dynamic PET images method for reconstructing based on the constraint of tensor dictionary | |
Wang et al. | 3D multi-modality Transformer-GAN for high-quality PET reconstruction | |
Pan et al. | Full-dose PET synthesis from low-dose PET using high-efficiency diffusion denoising probabilistic model | |
CN115984401A (en) | Dynamic PET image reconstruction method based on model-driven deep learning | |
CN113476064B (en) | BCD-ED-based single-scanning double-tracer PET signal separation method | |
WO2024109762A1 (en) | Pet parameter determination method and apparatus, and device and storage medium | |
Wan et al. | Deep-learning based joint estimation of dual-tracer PET image activity maps and clustering of time activity curves | |
Shen et al. | Unsupervised PET reconstruction from a Bayesian perspective | |
Varnyú et al. | Blood input function estimation in positron emission tomography with deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||