CN109993808A - Dynamic dual-tracer PET reconstruction method based on DSN - Google Patents

Dynamic dual-tracer PET reconstruction method based on DSN

Info

Publication number
CN109993808A
Authority
CN
China
Prior art keywords
tracer
tac
dual
dynamic
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910196556.5A
Other languages
Chinese (zh)
Other versions
CN109993808B (en)
Inventor
刘华锋
卿敏敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201910196556.5A priority Critical patent/CN109993808B/en
Publication of CN109993808A publication Critical patent/CN109993808A/en
Application granted granted Critical
Publication of CN109993808B publication Critical patent/CN109993808B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/003 - Reconstruction from projections, e.g. tomography
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 - Indexing scheme for image generation or computer graphics
    • G06T 2210/41 - Medical

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine (AREA)

Abstract

The invention discloses a dynamic dual-tracer PET reconstruction method based on DSN, which can reconstruct the concentration distributions of two tracers when both are injected simultaneously and has good robustness to noise. The method reconstructs the mixed-tracer dynamic PET concentration distribution images with a deep stacking network. Its main procedure is first to use the mixed-tracer concentration distribution images as input and pre-train the network with a Boltzmann machine, and then to fine-tune the network through implicit stacking using the single-tracer truth values as labels together with an error function, obtaining the trained model. Pre-training the network and fine-tuning it by implicit stacking give the network a larger feature window in the input dimension, so that more robust features are learned and accurate image reconstruction is finally achieved.

Description

Dynamic dual-tracer PET reconstruction method based on DSN
Technical Field
The invention belongs to the technical field of PET imaging, and particularly relates to a dynamic dual-tracer PET reconstruction method based on DSN (Deep Stacking Network).
Background
Positron emission tomography (PET) is a non-invasive in vivo molecular imaging technique widely used in medical fields such as oncology, neurology and cardiology. PET images radioisotope-labeled tracers that are sensitive to changes in different physiological functions; these tracers are mainly macromolecular substances such as glucose, proteins and nucleic acids labeled with 18F, 11C, 13N and other isotopes, so that PET can provide molecular-level information on physiological functions of organs such as glucose metabolism, blood perfusion, hypoxia and cell proliferation, and thus effective information for the early diagnosis and prevention of disease. Given the complexity of disease, the physiological or pathological characteristics of organs need to be described from multiple angles and in multiple respects, so PET scanning with multiple tracers is necessary. In conventional PET imaging each tracer is injected and scanned separately, which inevitably brings problems such as a prolonged scanning time, the need for spatiotemporal registration between the concentration distribution images of the different tracers, and increased cost. A single-scan, simultaneous-injection dual-tracer PET imaging technique is therefore urgently needed; however, the gamma photons produced by the decay of different tracers during PET imaging all have the same energy of 511 keV, so the signals of the two tracers cannot be distinguished from the energy perspective.
At present, dual-tracer PET image reconstruction falls mainly into two categories: one uses tracer prior information and staggered injection to distinguish the signals of the different tracers; the other separates the different tracer images in a data-driven manner using deep learning. The former category generally suffers from the following problems: (1) the tracers are required to have different half-lives or different radioisotopes; (2) a pre-constructed kinetic model is required, which may not be suitable for new tracers; (3) the tracer signal is fitted with only a simple linear model; (4) a specific tracer pair is required. These problems reduce the practical feasibility of such methods, which typically also need staggered injection to assist the separation, further extending the scan time and requiring additional spatiotemporal registration of the two tracer images after separation. The latter category currently consists mainly of a dual-tracer separation algorithm based on an autoencoder; however, that model updates its parameters with only an ordinary gradient descent algorithm, so the learned feature representation is not sufficiently robust to noise, which limits the achievable separation accuracy.
Disclosure of Invention
In view of the above, the invention provides a dynamic dual-tracer PET reconstruction method based on DSN, which can reconstruct the concentration distributions of two tracers when both are injected simultaneously and has good robustness to noise.
A dynamic dual-tracer PET reconstruction method based on DSN comprises the following steps:
(1) simultaneously injecting tracer I and tracer II into biological tissue and performing dynamic PET (positron emission tomography) detection to obtain coincidence counting vectors at different time points, which form a dynamic coincidence counting sequence Y_dual reflecting the mixed distribution of the two tracers;
(2) sequentially injecting tracer I and tracer II into biological tissue and performing dynamic PET detection to obtain coincidence counting vectors of the two single tracers at different time points, which form dynamic coincidence counting sequences Y_I and Y_II reflecting the distributions of tracer I and tracer II respectively;
(3) using a PET image reconstruction algorithm to compute the dynamic PET image sequences X_dual, X_I and X_II corresponding to the dynamic coincidence counting sequences Y_dual, Y_I and Y_II;
(4) letting X_dual, X_I and X_II form one sample, repeatedly executing steps (1) to (3) to obtain a large number of samples, and dividing all samples into a training set and a test set;
(5) extracting the per-pixel TACs of X_dual, X_I and X_II in the training samples, using the TACs of X_dual as the input of the deep stacking network and the TACs of X_I and X_II as the truth labels of the network output, and training the deep stacking network to obtain the dynamic dual-tracer PET reconstruction model;
(6) extracting the per-pixel TACs of X_dual in the test samples and feeding them into the dynamic dual-tracer PET reconstruction model; the model outputs the TAC of each pixel for the two single-tracer dynamic PET image sequences, and these TACs are then assembled into the dynamic PET image sequences X_I and X_II corresponding to tracer I and tracer II.
Further, in step (4) all samples are divided into a training set and a test set; the two sets do not overlap, and the ratio of training samples to test samples is greater than one half.
Further, in step (5) the per-pixel TACs of X_dual, X_I and X_II in the training samples are extracted according to the following expressions:

$$X_{dual}=\left[x_1^{dual},x_2^{dual},\ldots,x_n^{dual}\right]^T,\quad X_{I}=\left[x_1^{I},x_2^{I},\ldots,x_n^{I}\right]^T,\quad X_{II}=\left[x_1^{II},x_2^{II},\ldots,x_n^{II}\right]^T$$

wherein: $x_1^{dual}\sim x_n^{dual}$ are the TACs of the 1st to n-th pixels of X_dual of a training sample, $x_1^{I}\sim x_n^{I}$ are the TACs of the 1st to n-th pixels of X_I, $x_1^{II}\sim x_n^{II}$ are the TACs of the 1st to n-th pixels of X_II, n is the total number of pixels of the PET image, and T denotes transposition.
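As an illustration of this per-pixel TAC extraction, and of the inverse assembly used in step (6), the following is a minimal sketch in Python/NumPy; it assumes each dynamic PET image sequence is stored as an array of shape (S, H, W) with S frames on an H x W grid, and all array and function names are illustrative rather than taken from the patent.

```python
import numpy as np

def extract_tacs(dynamic_image):
    """Reshape a dynamic PET image sequence (S, H, W) into an (n, S) matrix
    whose j-th row is the TAC of the j-th pixel, with n = H * W."""
    S, H, W = dynamic_image.shape
    return dynamic_image.reshape(S, H * W).T

def assemble_images(tacs, H, W):
    """Inverse operation for step (6): fold an (n, S) TAC matrix back into an
    (S, H, W) dynamic image sequence."""
    n, S = tacs.shape
    return tacs.T.reshape(S, H, W)

# Round-trip check on a toy X_dual with 18 frames on a 128 x 128 grid
X_dual = np.random.rand(18, 128, 128)
tac_dual = extract_tacs(X_dual)              # shape (16384, 18)
assert np.allclose(assemble_images(tac_dual, 128, 128), X_dual)
```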
Further, the specific process of training the deep stacking network in step (5) is as follows:
5.1 constructing a deep neural network formed by sequentially connecting an input layer, hidden layers and an output layer, and initializing the network parameters, which include the learning rate, the number of iterations, and the bias vectors and weight matrices between layers;
5.2 taking the TAC of the j-th pixel of X_dual in a training sample as the input of the deep neural network, computing the TAC outputs of that pixel for the two single tracers, computing the error function between these outputs and the truth labels, and correcting and updating the bias vectors and weight matrices between the layers of the neural network by gradient descent according to the error function; wherein the truth labels are the TACs of the j-th pixel of X_I and of X_II in the training sample, j is a natural number with 1 ≤ j ≤ n, and n is the total number of pixels of the PET image;
5.3 executing step 5.2 iteratively multiple times, where the input layer of the deep neural network consists of two parts: one part is the TAC of X_dual in the training sample, and the other part is the result of the output layer, i.e. the TAC outputs for the two single tracers from the previous iteration, with this feedback input initialized to 0 (see the sketch after this list); the deep neural network between two adjacent iterations thus forms an implicit stack, whose number of stacked layers is determined by the number of iterations;
5.4 within each iteration, inputting the TACs of X_dual in the training samples into the deep neural network in batches for training according to steps 5.2-5.3, updating the network parameters until all TACs in the training samples have been traversed; after a certain number of iterations, i.e. after a certain number of layers have been implicitly stacked, the deep neural network implicitly stacked into the deep stacking network that yields the smallest average error function L is taken as the dynamic dual-tracer PET reconstruction model.
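To make the feedback structure of steps 5.2-5.3 concrete, the following is a minimal sketch, assuming S frames per TAC: each network input is the mixed-tracer TAC (length S) concatenated with the previous iteration's output for the two single tracers (length 2S), initialized to zeros before the first iteration. Function and variable names are illustrative.

```python
import numpy as np

def build_stacked_input(tac_dual_batch, prev_output_batch=None):
    """tac_dual_batch: (B, S) mixed-tracer TACs of one batch.
    prev_output_batch: (B, 2S) outputs from the previous iteration, or None
    before the first iteration (feedback initialized to 0)."""
    B, S = tac_dual_batch.shape
    if prev_output_batch is None:
        prev_output_batch = np.zeros((B, 2 * S))
    return np.concatenate([tac_dual_batch, prev_output_batch], axis=1)  # (B, 3S)
```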
Further, in step 5.1 the bias vectors and weight matrices between the layers of the deep neural network are initialized by pre-training with a restricted Boltzmann machine (RBM).
Further, the expression of the average error function L is as follows:

$$L=\frac{1}{B}\sum_{i=1}^{B}\left(\left\|\hat{x}_i^{I}-x_i^{I}\right\|_2^2+\left\|\hat{x}_i^{II}-x_i^{II}\right\|_2^2\right)$$

wherein: B is the number of TACs input into the deep stacking network per batch, $\hat{x}_i^{I}$ and $\hat{x}_i^{II}$ are the TAC outputs for the two single tracers computed by the deep stacking network from the i-th TAC of the batch, $x_i^{I}$ and $x_i^{II}$ are the TAC truth labels of the two single tracers corresponding to the i-th TAC of the batch, and $\|\cdot\|_2$ denotes the 2-norm.
The method reconstructs the mixed-tracer dynamic PET concentration distribution images with a deep stacking network. The main procedure is to use the mixed-tracer concentration distribution images as input and pre-train the network with a Boltzmann machine, and then to fine-tune the network through implicit stacking using the single-tracer truth values as labels together with the error function, yielding the trained model. Pre-training the network and fine-tuning it by implicit stacking give the network a larger feature window in the input dimension, so that more robust features are learned and accurate image reconstruction is finally achieved.
Drawings
FIG. 1 is a schematic diagram of the DSN structure of the present invention.
FIG. 2 is the complex brain template image.
FIG. 3(a) is the true concentration distribution image of frame 10 of 11C-DTBZ.
FIG. 3(b) is the predicted image for frame 10 of 11C-DTBZ with ML-EM as the reconstruction algorithm.
FIG. 3(c) is the predicted image for frame 10 of 11C-DTBZ with ADMM as the reconstruction algorithm.
FIG. 4(a) is the true concentration distribution image of frame 10 of 11C-FMZ.
FIG. 4(b) is the predicted image for frame 10 of 11C-FMZ with ML-EM as the reconstruction algorithm.
FIG. 4(c) is the predicted image for frame 10 of 11C-FMZ with ADMM as the reconstruction algorithm.
FIG. 5(a) is the bias-variance plot of the 11C-DTBZ prediction results over all frames with ADMM as the reconstruction algorithm.
FIG. 5(b) is the bias-variance plot of the 11C-DTBZ prediction results over all frames with ML-EM as the reconstruction algorithm.
FIG. 5(c) is the bias-variance plot of the 11C-FMZ prediction results over all frames with ADMM as the reconstruction algorithm.
FIG. 5(d) is the bias-variance plot of the 11C-FMZ prediction results over all frames with ML-EM as the reconstruction algorithm.
Detailed Description
In order to describe the present invention more specifically, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a dynamic dual-tracer PET reconstruction method based on DSN, which comprises the following steps:
(1) Preparing the experimental data.
1.1 injecting a mixed dual tracer consisting of two different tracers (tracer I and tracer II) into biological tissue and performing dynamic PET detection, collecting coincidence counting vectors at different time points in chronological order, which form a dynamic coincidence counting sequence Y_dual reflecting the distribution of the mixed dual tracer;
1.2 injecting tracer I and tracer II into biological tissue one after the other and performing dynamic PET detection to obtain coincidence counting vectors of the two single tracers at different time points, which form three-dimensional dynamic coincidence counting sequences Y_I and Y_II reflecting the distributions of tracer I and tracer II respectively;
1.3 using a PET image reconstruction algorithm to compute the three-dimensional dynamic PET image sequences X_dual, X_I and X_II corresponding to the three-dimensional dynamic coincidence counting sequences Y_dual, Y_I and Y_II;
1.4 repeatedly executing steps 1.1-1.3 to obtain a large number of dynamic PET image sequences X_dual, X_I and X_II.
(2) Dividing the data set.
X_dual, X_I and X_II are split at a ratio of about 2:1: two thirds of the data are extracted as the training set, consisting of the inputs $X_{dual}^{train}$ and the labels $X_{I}^{train}$ and $X_{II}^{train}$, and the remaining one third is used as the test set $X_{dual}^{test}$ together with its truth values $X_{I}^{test}$ and $X_{II}^{test}$ for the later evaluation of the reconstruction effect. The dynamic PET image sequences $X_{dual}$, $X_{I}$ and $X_{II}$ have the specific form

$$X_{dual}=\left[x_1^{dual},\ldots,x_N^{dual}\right]^T,\quad X_{I}=\left[x_1^{I},\ldots,x_N^{I}\right]^T,\quad X_{II}=\left[x_1^{II},\ldots,x_N^{II}\right]^T$$

In the above expressions, $x_j^{dual}$, $x_j^{I}$ and $x_j^{II}$ respectively denote the curve of the concentration value of the j-th pixel of the mixed-tracer, single-tracer-I and single-tracer-II dynamic PET concentration distribution maps as a function of time, i.e. the TAC, and N is the total number of pixels of the PET image; a TAC can be further written as

$$x_j=\left[c_{j,1},c_{j,2},\ldots,c_{j,S}\right]^T$$

wherein $c_{j,k}$ denotes the concentration value of the k-th frame of the j-th pixel of the dynamic PET concentration distribution map, the superscript indicates the injected tracer (mixed tracer, single tracer I or tracer II), and S is the total number of frames acquired in the dynamic PET image sequence; the labels and truth values are accordingly arranged per pixel as the two single-tracer TACs $x_j^{I}$ and $x_j^{II}$ stacked into one column vector of length 2S.
(3) Building the DSN.
A DNN as shown in FIG. 1 is constructed, composed of an input layer, hidden layers and an output layer. The number of input-layer nodes equals the length of the column vector formed by concatenating the original input and the label, and the number of output-layer nodes equals the length of the label column vector.
(4) Setting and initializing network parameters.
First, a restricted Boltzmann machine is used to pre-train the network, initializing the bias vectors of all layers and the weight coefficients between layers. In the present embodiment, the learning rate is set to 0.01, the number of hidden layers to 3 with 60, 40 and 30 nodes respectively, the activation function is the sigmoid function, and the batch size is 32.
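The layer sizes above can be sketched as follows in PyTorch, assuming S frames per TAC; the input layer takes the mixed TAC concatenated with the fed-back previous output (3S values) and the output layer produces the two single-tracer TACs (2S values). The restricted-Boltzmann-machine pre-training of the weights is not shown; plain random initialization is used in its place, so this is only an approximate sketch of the embodiment, with illustrative names.

```python
import torch.nn as nn

def build_dnn(S):
    """DNN with three sigmoid hidden layers of 60, 40 and 30 nodes."""
    return nn.Sequential(
        nn.Linear(3 * S, 60), nn.Sigmoid(),   # input: [x_dual ; previous output]
        nn.Linear(60, 40), nn.Sigmoid(),
        nn.Linear(40, 30), nn.Sigmoid(),
        nn.Linear(30, 2 * S),                 # output: [x_I_hat ; x_II_hat]
    )

model = build_dnn(S=18)  # e.g. 18 frames per TAC
```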
(5) Network training.
Implicitly stacked training of the constructed DNN is performed under the guidance of the truth labels: the TACs extracted from the j-th pixels of the dynamic PET image sequence $X_{dual}^{train}$ are input into the network in batches, the corresponding outputs of the batch are computed, and the error L with respect to the truth labels extracted from the j-th pixels of $X_{I}^{train}$ and $X_{II}^{train}$ is obtained, where j is a natural number, 1 ≤ j ≤ N, and N is the total number of pixels of the PET concentration image. According to the resulting error L, the weight parameters between the input layer, the hidden layers and the output layer of the whole network are corrected by the gradient descent algorithm, and the TACs of the next group of pixels of the dynamic PET image sequence $X_{dual}^{train}$ are then fed into the corrected DNN.
The training set is input into the network, and the weight parameters and bias vectors between layers are continuously corrected by implicitly stacked training after each iteration, where the back-propagated error function L is:

$$L=\frac{1}{batch\_size}\sum_{i=1}^{batch\_size}\left(\left\|\hat{x}_i^{I}-x_i^{I}\right\|_2^2+\left\|\hat{x}_i^{II}-x_i^{II}\right\|_2^2\right)$$

wherein: $\hat{x}_i^{I}$ and $\hat{x}_i^{II}$ are the predictions of the DSN for tracer I and tracer II respectively, $x_i^{I}$ and $x_i^{II}$ are the truth values for tracer I and tracer II respectively, batch_size is the batch size, and the batches are indexed by n = 1, 2, ..., N/batch_size.
The implicitly stacked DNN obtained at the last iteration is taken as the DSN, which serves as the dual-tracer PET image reconstruction model.
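The implicitly stacked training described above can be sketched as the following loop, assuming the model sketched earlier, TAC tensors tacs_dual of shape (N, S) and labels of shape (N, 2S), a learning rate of 0.01 and a batch size of 32; the mean-squared-error loss used here is proportional to, not identical with, the error function L, and all names are illustrative.

```python
import torch
import torch.nn as nn

def train_dsn(model, tacs_dual, labels, n_iters=10, lr=0.01, batch_size=32):
    opt = torch.optim.SGD(model.parameters(), lr=lr)   # plain gradient descent
    loss_fn = nn.MSELoss()
    n, S = tacs_dual.shape
    feedback = torch.zeros(n, 2 * S)                   # previous outputs, initialized to 0
    for _ in range(n_iters):                           # each pass adds one implicit layer
        new_feedback = torch.zeros_like(feedback)
        for start in range(0, n, batch_size):
            sl = slice(start, start + batch_size)
            x = torch.cat([tacs_dual[sl], feedback[sl]], dim=1)  # (B, 3S)
            pred = model(x)
            loss = loss_fn(pred, labels[sl])
            opt.zero_grad()
            loss.backward()
            opt.step()
            new_feedback[sl] = pred.detach()
        feedback = new_feedback                        # fed back at the next pass
    return model
```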
(6) Evaluating the results.
In order to quantitatively evaluate the reconstruction effect, two indices, bias and variance, are mainly used; they are computed over each region of interest from $x_i$ and $\bar{x}_i$, the predicted and true values of the i-th pixel of the concentration distribution map, the average predicted value $\mu$ over the region of interest, and R, the total number of pixels in the region of interest.
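Since the exact normalization of the two indices is not reproduced above, the following is only a hedged sketch of one common convention: bias as the mean relative deviation of the prediction from the truth over the ROI, and variance as the mean squared relative deviation of the prediction from its ROI-average predicted value.

```python
import numpy as np

def roi_bias_variance(pred, truth):
    """pred, truth: 1-D arrays of the R pixel values inside one ROI.
    The formulas below are an assumed convention, not quoted from the patent."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    mu = pred.mean()                              # average predicted value over the ROI
    bias = np.mean(np.abs(pred - truth) / truth)
    variance = np.mean(((pred - mu) / truth) ** 2)
    return bias, variance
```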
The accuracy of the invention is verified by a simulation experiment in which a complex brain template is selected for Monte Carlo simulation to generate the data set; the chosen tracer pair is 11C-FMZ and 11C-DTBZ, and the template consists of regions of interest (ROIs) corresponding to different tissue sites. FIG. 2 shows the complex brain template, which contains 4 ROIs. The simulated PET scanner is a Biograph 16HR (Siemens, USA) with 3 crystal rings; 24336 LSO crystals are uniformly distributed in 13 × 13 arrays over the 48 detector modules on each ring, and the diameter of the crystal ring is 824 mm. From the generated data set, 2/3 is extracted as training data and the remaining 1/3 as test data, i.e. a 2:1 split. To observe the effect of different reconstruction algorithms on the DSN, the sinograms of the training set are reconstructed into radioactivity concentration distributions with the ADMM reconstruction algorithm, while for part of the test set the classical ML-EM algorithm is also used to reconstruct the mixed dual-tracer radioactivity concentration distributions.
FIGS. 3(a) to 3(c) show, for frame 10 of 11C-DTBZ, the true radioactivity concentration distribution, the concentration distribution predicted by the DSN when the test-set reconstruction algorithm is ML-EM, and the concentration distribution predicted by the DSN when the reconstruction algorithm is ADMM; FIGS. 4(a) to 4(c) show the same for frame 10 of 11C-FMZ: the true radioactivity concentration distribution, the DSN prediction with ML-EM reconstruction, and the DSN prediction with ADMM reconstruction. Table 1 gives the reconstruction results for each region of interest at different frame numbers for the two tracers when the test-set reconstruction algorithm is ADMM, and FIGS. 5(a) to 5(d) compare, by means of bias-variance plots, the reconstruction results over the whole region of interest at different frame numbers for the two tracers obtained with the SAE and DSN models on the same training and test sets.
TABLE 1
Comparing the true concentration distribution maps with the concentration distribution maps predicted by the DSN in the figures, and the bias and variance between the true and predicted values in each region of interest given in Table 1, shows that the invention completes dual-tracer PET image reconstruction well, which verifies its accuracy; meanwhile, the comparison with the prior SAE algorithm through the bias-variance plots verifies the robustness of the DSN to noise.
The embodiments described above are presented to enable a person having ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments, and improvements and modifications made by those skilled in the art based on the disclosure of the present invention fall within the protection scope of the present invention.

Claims (6)

1. A dynamic dual-tracer PET reconstruction method based on DSN, comprising the following steps:
(1) simultaneously injecting tracer I and tracer II into biological tissue and performing dynamic PET (positron emission tomography) detection to obtain coincidence counting vectors at different time points, which form a dynamic coincidence counting sequence Y_dual reflecting the mixed distribution of the two tracers;
(2) sequentially injecting tracer I and tracer II into biological tissue and performing dynamic PET detection to obtain coincidence counting vectors of the two single tracers at different time points, which form dynamic coincidence counting sequences Y_I and Y_II reflecting the distributions of tracer I and tracer II respectively;
(3) using a PET image reconstruction algorithm to compute the dynamic PET image sequences X_dual, X_I and X_II corresponding to the dynamic coincidence counting sequences Y_dual, Y_I and Y_II;
(4) letting X_dual, X_I and X_II form one sample, repeatedly executing steps (1) to (3) to obtain a large number of samples, and dividing all samples into a training set and a test set;
(5) extracting the per-pixel TACs of X_dual, X_I and X_II in the training samples, using the TACs of X_dual as the input of the deep stacking network and the TACs of X_I and X_II as the truth labels of the network output, and training the deep stacking network to obtain the dynamic dual-tracer PET reconstruction model;
(6) extracting the per-pixel TACs of X_dual in the test samples and feeding them into the dynamic dual-tracer PET reconstruction model; the model outputs the TAC of each pixel for the two single-tracer dynamic PET image sequences, and these TACs are then assembled into the dynamic PET image sequences X_I and X_II corresponding to tracer I and tracer II.
2. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein: in step (4) all samples are divided into a training set and a test set; the two sets do not overlap, and the ratio of training samples to test samples is greater than one half.
3. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein: in step (5) the per-pixel TACs of X_dual, X_I and X_II in the training samples are extracted according to the following expressions:

$$X_{dual}=\left[x_1^{dual},x_2^{dual},\ldots,x_n^{dual}\right]^T,\quad X_{I}=\left[x_1^{I},x_2^{I},\ldots,x_n^{I}\right]^T,\quad X_{II}=\left[x_1^{II},x_2^{II},\ldots,x_n^{II}\right]^T$$

wherein: $x_1^{dual}\sim x_n^{dual}$ are the TACs of the 1st to n-th pixels of X_dual of a training sample, $x_1^{I}\sim x_n^{I}$ are the TACs of the 1st to n-th pixels of X_I, $x_1^{II}\sim x_n^{II}$ are the TACs of the 1st to n-th pixels of X_II, n is the total number of pixels of the PET image, and T denotes transposition.
4. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein the specific process of training the deep stacking network in step (5) is as follows:
5.1 constructing a deep neural network formed by sequentially connecting an input layer, hidden layers and an output layer, and initializing the network parameters, which include the learning rate, the number of iterations, and the bias vectors and weight matrices between layers;
5.2 taking the TAC of the j-th pixel of X_dual in a training sample as the input of the deep neural network, computing the TAC outputs of that pixel for the two single tracers, computing the error function between these outputs and the truth labels, and correcting and updating the bias vectors and weight matrices between the layers of the neural network by gradient descent according to the error function; wherein the truth labels are the TACs of the j-th pixel of X_I and of X_II in the training sample, j is a natural number with 1 ≤ j ≤ n, and n is the total number of pixels of the PET image;
5.3 executing step 5.2 iteratively multiple times, where the input layer of the deep neural network consists of two parts: one part is the TAC of X_dual in the training sample, and the other part is the result of the output layer, i.e. the TAC outputs for the two single tracers from the previous iteration, with this feedback input initialized to 0; the deep neural network between two adjacent iterations thus forms an implicit stack, whose number of stacked layers is determined by the number of iterations;
5.4 within each iteration, inputting the TACs of X_dual in the training samples into the deep neural network in batches for training according to steps 5.2-5.3, updating the network parameters until all TACs in the training samples have been traversed; after a certain number of iterations, i.e. after a certain number of layers have been implicitly stacked, the deep neural network implicitly stacked into the deep stacking network that yields the smallest average error function L is taken as the dynamic dual-tracer PET reconstruction model.
5. The dynamic dual-tracer PET reconstruction method according to claim 4, wherein: in step 5.1 the bias vectors and weight matrices between the layers of the deep neural network are initialized with a restricted Boltzmann machine.
6. The dynamic dual-tracer PET reconstruction method according to claim 4, wherein the expression of the average error function L is as follows:

$$L=\frac{1}{B}\sum_{i=1}^{B}\left(\left\|\hat{x}_i^{I}-x_i^{I}\right\|_2^2+\left\|\hat{x}_i^{II}-x_i^{II}\right\|_2^2\right)$$

wherein: B is the number of TACs input into the deep stacking network per batch, $\hat{x}_i^{I}$ and $\hat{x}_i^{II}$ are the TAC outputs for the two single tracers computed by the deep stacking network from the i-th TAC of the batch, $x_i^{I}$ and $x_i^{II}$ are the TAC truth labels of the two single tracers corresponding to the i-th TAC of the batch, and $\|\cdot\|_2$ denotes the 2-norm.
CN201910196556.5A 2019-03-15 2019-03-15 Dynamic double-tracing PET reconstruction method based on DSN Active CN109993808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910196556.5A CN109993808B (en) 2019-03-15 2019-03-15 Dynamic double-tracing PET reconstruction method based on DSN


Publications (2)

Publication Number Publication Date
CN109993808A true CN109993808A (en) 2019-07-09
CN109993808B CN109993808B (en) 2020-11-10

Family

ID=67129686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910196556.5A Active CN109993808B (en) 2019-03-15 2019-03-15 Dynamic double-tracing PET reconstruction method based on DSN

Country Status (1)

Country Link
CN (1) CN109993808B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106887025A (en) * 2017-01-16 2017-06-23 浙江大学 A kind of method that mixing tracer dynamic PET concentration distributed image based on stack self-encoding encoder is rebuild
CN107133997A (en) * 2017-04-11 2017-09-05 浙江大学 A kind of dual tracer PET method for reconstructing based on deep neural network
CN109009179A (en) * 2018-08-02 2018-12-18 浙江大学 Identical isotope labelling dual tracer PET separation method based on depth confidence network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU J, LIU H: "Deep-learning-based Separation of a Mixture of Dual-tracer Single-acquisition PET Signals with Equal Half-lives: a Simulation Study", IEEE Transactions on Radiation and Plasma Medical Sciences *
ZHANG Junchao, YUE Maoxiong, LIU Huafeng: "Dynamic PET image reconstruction with structural prior constraints", Journal of Zhejiang University (Engineering Science) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109009179A (en) * 2018-08-02 2018-12-18 浙江大学 Identical isotope labelling dual tracer PET separation method based on depth confidence network
CN111166368A (en) * 2019-12-19 2020-05-19 浙江大学 Single-scanning double-tracer PET signal separation method based on pre-training GRU
CN111476859A (en) * 2020-04-13 2020-07-31 浙江大学 Dynamic double-tracing PET imaging method based on 3D Unet
CN111476859B (en) * 2020-04-13 2022-09-16 浙江大学 Dynamic double-tracing PET imaging method based on 3D Unet
CN111920436A (en) * 2020-07-08 2020-11-13 浙江大学 Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network
CN113057653A (en) * 2021-03-19 2021-07-02 浙江科技学院 Channel mixed convolution neural network-based motor electroencephalogram signal classification method

Also Published As

Publication number Publication date
CN109993808B (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN109993808B (en) Dynamic double-tracing PET reconstruction method based on DSN
US11445992B2 (en) Deep-learning based separation method of a mixture of dual-tracer single-acquisition PET signals with equal half-lives
US10765382B2 (en) Method for mixed tracers dynamic PET concentration image reconstruction based on stacked autoencoder
CN107133997B (en) A kind of dual tracer PET method for reconstructing based on deep neural network
CN109615674B (en) Dynamic double-tracing PET reconstruction method based on mixed loss function 3D CNN
CN113160347B (en) Low-dose double-tracer PET reconstruction method based on attention mechanism
CN104657950B (en) Dynamic PET (positron emission tomography) image reconstruction method based on Poisson TV
CN106204674B (en) The dynamic PET images method for reconstructing constrained based on structure dictionary and kinetic parameter dictionary joint sparse
CN111627082A (en) PET image reconstruction method based on filtering back projection algorithm and neural network
CN105678821B (en) A kind of dynamic PET images method for reconstructing based on self-encoding encoder image co-registration
CN105894550B (en) A kind of dynamic PET images and tracer kinetics parameter synchronization method for reconstructing based on TV and sparse constraint
CN108550172B (en) PET image reconstruction method based on non-local characteristics and total variation joint constraint
CN108986916B (en) Dynamic PET image tracer agent dynamics macro-parameter estimation method based on stacked self-encoder
Shao et al. A learned reconstruction network for SPECT imaging
CN107346556A (en) A kind of PET image reconstruction method based on block dictionary learning and sparse expression
CN107146263B (en) A kind of dynamic PET images method for reconstructing based on the constraint of tensor dictionary
Wang et al. IGNFusion: an unsupervised information gate network for multimodal medical image fusion
CN111166368B (en) Single-scanning double-tracer PET signal separation method based on pre-training GRU
Qing et al. Separation of dual-tracer PET signals using a deep stacking network
CN111920436A (en) Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network
CN111476859B (en) Dynamic double-tracing PET imaging method based on 3D Unet
CN113379863B (en) Dynamic double-tracing PET image joint reconstruction and segmentation method based on deep learning
CN115984401A (en) Dynamic PET image reconstruction method based on model-driven deep learning
CN112927132B (en) PET image reconstruction method for improving spatial resolution uniformity of PET system
CN116206005A (en) Dual-tracking PET imaging method based on image blocking converter network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant