CN109993808B - Dynamic double-tracing PET reconstruction method based on DSN - Google Patents


Info

Publication number
CN109993808B
CN109993808B (application CN201910196556.5A)
Authority
CN
China
Prior art keywords
tracer
tac
dynamic
dual
network
Prior art date
Legal status
Active
Application number
CN201910196556.5A
Other languages
Chinese (zh)
Other versions
CN109993808A (en)
Inventor
Liu Huafeng
Qing Minmin
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN201910196556.5A
Publication of CN109993808A
Application granted
Publication of CN109993808B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/41: Medical

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine (AREA)

Abstract

The invention discloses a dynamic dual-tracer PET reconstruction method based on a DSN (deep stacking network), which can reconstruct the concentration distribution maps of two tracers injected simultaneously and is robust to noise. The method reconstructs the dynamic PET concentration distribution images of a mixed tracer with a deep stacking network: the mixed-tracer concentration distribution image is taken as input and the network is pre-trained with a restricted Boltzmann machine; the network is then fine-tuned by implicit stacking, using the single-tracer truth values as labels together with an error function, to obtain the trained model. Pre-training combined with implicit-stacking fine-tuning gives the network a larger feature window in the input dimension, so that more robust features are learned and accurate image reconstruction is finally achieved.

Description

Dynamic double-tracing PET reconstruction method based on DSN
Technical Field
The invention belongs to the technical field of PET imaging, and particularly relates to a dynamic double-tracing PET reconstruction method based on DSN (deep stack network).
Background
Positron Emission Tomography (PET) is a non-invasive in vivo molecular imaging technique widely used in medical fields such as oncology, neurology and cardiology. PET images tracers that are labeled with radioisotopes such as 18F, 11C and 13N and are sensitive to different physiological changes; these tracers mainly involve macromolecular substances such as glucose, proteins and nucleic acids. PET can therefore provide molecular-level information on organ physiology, such as glucose metabolism, blood perfusion, hypoxia and cell proliferation, offering effective information for the early diagnosis and prevention of disease. Given the complexity of disease, the physiological or pathological characteristics of organs need to be described from multiple angles and in multiple directions, so PET imaging with multiple tracers is necessary. In traditional PET imaging each tracer is injected and scanned separately, which inevitably prolongs scanning time, requires spatio-temporal registration of the tracer concentration distribution images, and increases cost. A single-scan, simultaneous-injection dual-tracer PET imaging technique is therefore urgently needed; however, the gamma photons produced by the decay of different tracers all have the same energy, 511 keV, so the signals of the two tracers cannot be distinguished on energy grounds.
At present, dual-tracer PET image reconstruction falls into two main classes: one uses tracer prior information and staggered injection to distinguish the signals of the different tracers; the other uses deep learning to separate the tracer images in a data-driven manner. The former class generally suffers from the following problems: (1) the tracers must have different half-lives or different radioisotopes; (2) a pre-constructed kinetic model is required, which may not suit new tracers; (3) the tracer signal is fitted with only a simple linear model; (4) a specific tracer pair is required. These problems reduce the practical feasibility of such methods, which in addition typically rely on staggered injection to assist the separation, further extending the scan time and requiring extra spatio-temporal registration of the two separated tracer images. The latter class currently consists mainly of a dual-tracer separation algorithm based on an autoencoder; however, that model updates its parameters with only a plain gradient descent algorithm, so the learned feature representation is insufficiently robust to noise and the achievable separation accuracy is limited.
Disclosure of Invention
In view of the above, the invention provides a dynamic dual-tracer PET reconstruction method based on DSN, which can reconstruct concentration profiles of two tracers under the condition of injecting the two tracers simultaneously, and has better robustness to noise.
A dynamic double-tracing PET reconstruction method based on DSN comprises the following steps:
(1) simultaneously injecting tracer I and tracer II into biological tissue and carrying out dynamic PET (positron emission tomography) detection to obtain coincidence counting vectors at different moments, forming a dynamic coincidence counting sequence Y_dual reflecting the mixed distribution of the two tracers;
(2) injecting tracer I and tracer II into biological tissue one after the other and carrying out dynamic PET detection to obtain the coincidence counting vectors of the two single tracers at different moments, forming dynamic coincidence counting sequences Y_I and Y_II reflecting the distributions of tracer I and tracer II respectively;
(3) computing, with a PET image reconstruction algorithm, the dynamic PET image sequences X_dual, X_I and X_II corresponding to the dynamic coincidence counting sequences Y_dual, Y_I and Y_II;
(4) letting X_dual, X_I and X_II form one sample, repeating steps (1) to (3) many times to obtain a large number of samples, and dividing all samples into a training set and a test set;
(5) extracting the pixel-wise TACs (time activity curves) of X_dual, X_I and X_II in the training-set samples, taking the TACs of X_dual as the input of the deep stacking network and the TACs of X_I and X_II as its true-value output labels, and training the deep stacking network to obtain the dynamic dual-tracer PET reconstruction model;
(6) extracting the pixel-wise TACs of X_dual in a test-set sample and inputting them into the dynamic dual-tracer PET reconstruction model; the model outputs the TAC of every pixel of the two single-tracer dynamic PET image sequences, which are then recombined into the dynamic PET image sequences X_I and X_II corresponding to tracer I and tracer II.
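For illustration only (not part of the claimed method), the pixel-wise TAC extraction of step (5) and the recombination of step (6) amount to reshaping between an (S, H, W) dynamic image sequence and an (n, S) TAC matrix. A minimal numpy sketch, with all array shapes assumed:

```python
import numpy as np

def images_to_tacs(x):
    """Flatten a dynamic PET image sequence of shape (S, H, W), S frames,
    into an (n, S) matrix with one TAC per pixel, n = H * W."""
    s, h, w = x.shape
    return x.reshape(s, h * w).T

def tacs_to_images(tacs, h, w):
    """Inverse operation: reassemble an (n, S) TAC matrix into (S, H, W) images."""
    n, s = tacs.shape
    return tacs.T.reshape(s, h, w)

# toy example: a 3-frame 4x4 mixed-tracer sequence
x_dual = np.random.rand(3, 4, 4)
tacs = images_to_tacs(x_dual)                      # (16, 3): one TAC per pixel
assert np.allclose(tacs_to_images(tacs, 4, 4), x_dual)  # lossless round trip
```

The same round trip applies to the model outputs: the predicted single-tracer TAC matrices are passed through `tacs_to_images` to form X_I and X_II.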
Further, in step (4) all samples are divided into a training set and a test set; the two sets do not overlap, and the training set contains more than half of the samples.
Further, in step (5) the pixel-wise TACs of X_dual, X_I and X_II in a training-set sample are extracted according to the following expressions:

X_dual = [x_dual^1, x_dual^2, …, x_dual^n]^T
X_I = [x_I^1, x_I^2, …, x_I^n]^T
X_II = [x_II^1, x_II^2, …, x_II^n]^T

wherein x_dual^1 to x_dual^n are the TACs of the 1st to nth pixels of X_dual of the training-set sample, x_I^1 to x_I^n are the TACs of the 1st to nth pixels of X_I, x_II^1 to x_II^n are the TACs of the 1st to nth pixels of X_II, n is the total number of pixels of the PET image, and T denotes transposition.
Further, the specific process of training the deep stack network in step (5) is as follows:
5.1 constructing a deep neural network formed by sequentially connecting an input layer, hidden layers and an output layer, and initializing its parameters, including the learning rate, the number of iterations, and the bias vectors and weight matrices between layers;
5.2 taking the TAC of the jth pixel of X_dual in a training-set sample as input to the deep neural network and computing the network outputs (x̂_I^j, x̂_II^j), the predicted TACs of the two single tracers at that pixel; evaluating the error function between (x̂_I^j, x̂_II^j) and the true-value labels (x_I^j, x_II^j), and correcting and updating the bias vectors and weight matrices between layers by gradient descent; here x_I^j and x_II^j are the TACs of the jth pixel of X_I and X_II in the training-set sample, j is a natural number with 1 ≤ j ≤ n, and n is the total number of pixels of the PET image;
5.3 iterating step 5.2 several times, the input layer of the deep neural network consisting of two parts: one part is the TAC of X_dual in the training-set sample, the other part is the result of the output layer, i.e. the TAC outputs of the two single tracers from the previous iteration, with the fed-back input initialized to 0; the deep neural network between two adjacent iterations thus forms an implicit stack whose number of layers is determined by the number of iterations;
5.4 within the current iteration, inputting the TACs of X_dual in the training-set samples into the deep neural network in batches according to steps 5.2-5.3 to update the network parameters, until all TACs in the training-set samples have been traversed; after a certain number of iterations, i.e. a certain number of implicitly stacked layers, taking the deep neural network implicitly stacked into the deep stacking network with the minimum average error function L as the dynamic dual-tracer PET reconstruction model.
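A minimal sketch of the implicit-stacking idea of steps 5.2-5.4, for illustration only: a network whose input concatenates the mixed TAC with the previous iteration's output (initialized to 0), trained by plain gradient descent. The single hidden layer, all sizes, and the random initialization (standing in for the RBM pre-training of step 5.1) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 8                                 # frames per TAC (illustrative)
IN, H1, OUT = S + 2 * S, 16, 2 * S    # input = mixed TAC + fed-back outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer for brevity (the embodiment uses three)
W1 = rng.normal(0, 0.1, (H1, IN)); b1 = np.zeros(H1)
W2 = rng.normal(0, 0.1, (OUT, H1)); b2 = np.zeros(OUT)

def forward(tac_dual, feedback):
    x = np.concatenate([tac_dual, feedback])  # input layer: two parts (step 5.3)
    h = sigmoid(W1 @ x + b1)
    return W2 @ h + b2, h, x

def train_step(tac_dual, feedback, label, lr=0.01):
    """One gradient-descent update of 0.5 * ||y - label||^2 (step 5.2)."""
    global W1, b1, W2, b2
    y, h, x = forward(tac_dual, feedback)
    e = y - label
    gW2 = np.outer(e, h); gb2 = e
    dh = (W2.T @ e) * h * (1 - h)             # sigmoid backprop
    gW1 = np.outer(dh, x); gb1 = dh
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return y

# implicit stacking: the previous iteration's output is fed back in,
# initialized to 0 on the first pass; stack depth = iteration count
tac = rng.random(S)
label = rng.random(OUT)                       # concatenated single-tracer TACs
feedback = np.zeros(2 * S)
for _ in range(5):
    feedback = train_step(tac, feedback, label)
```

The feedback loop is what makes the stack "implicit": rather than instantiating a new module per stacking layer, the same network is re-entered with its own previous output appended to the input.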
Further, in the step 5.1, the initialization of the bias vectors and the weight matrix between the deep neural network layers is performed by pre-training of a Restricted Boltzmann Machine (RBM).
Further, the expression of the average error function L is as follows:

L = (1/B) Σ_{i=1}^{B} ( ||x̂_I^i − x_I^i||_2^2 + ||x̂_II^i − x_II^i||_2^2 )

wherein B is the number of TACs input into the deep stacking network per batch, x̂_I^i and x̂_II^i are the TAC outputs for the two single tracers computed by inputting the ith TAC of each batch into the deep stacking network, x_I^i and x_II^i are the TAC true-value labels of the two single tracers corresponding to the ith TAC of each batch, and ||·||_2 denotes the 2-norm.
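The average error function L can be sketched as below, assuming the squared-2-norm form implied by the text; the array shapes (B TACs of S frames per tracer) are illustrative:

```python
import numpy as np

def average_error(pred_I, pred_II, true_I, true_II):
    """Average error L over a batch of B TACs: mean over the batch of the
    squared 2-norm prediction errors, summed over the two single tracers."""
    B = pred_I.shape[0]
    per_sample = (np.linalg.norm(pred_I - true_I, axis=1) ** 2
                  + np.linalg.norm(pred_II - true_II, axis=1) ** 2)
    return per_sample.sum() / B

B, S = 4, 8
rng = np.random.default_rng(1)
L = average_error(rng.random((B, S)), rng.random((B, S)),
                  rng.random((B, S)), rng.random((B, S)))
```

As a sum of squared norms, L is non-negative and reaches 0 only when both tracers' TACs are predicted exactly.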
The method reconstructs the dynamic PET concentration distribution images of a mixed tracer with a deep stacking network: the mixed-tracer concentration distribution image is taken as input and the network is pre-trained with a restricted Boltzmann machine; the network is then fine-tuned by implicit stacking, using the single-tracer truth values as labels together with an error function, to obtain the trained model. Pre-training combined with implicit-stacking fine-tuning gives the network a larger feature window in the input dimension, so that more robust features are learned and accurate image reconstruction is finally achieved.
Drawings
FIG. 1 is a schematic diagram of the DSN structure of the present invention.
Fig. 2 is a complex brain template image.
FIG. 3(a) is the true concentration distribution image of frame 10 of 11C-DTBZ.
FIG. 3(b) is the predicted image of frame 10 of 11C-DTBZ when the reconstruction algorithm is ML-EM.
FIG. 3(c) is the predicted image of frame 10 of 11C-DTBZ when the reconstruction algorithm is ADMM.
FIG. 4(a) is the true concentration distribution image of frame 10 of 11C-FMZ.
FIG. 4(b) is the predicted image of frame 10 of 11C-FMZ when the reconstruction algorithm is ML-EM.
FIG. 4(c) is the predicted image of frame 10 of 11C-FMZ when the reconstruction algorithm is ADMM.
FIG. 5(a) is the bias-variance plot of the 11C-DTBZ prediction results over all frames when the reconstruction algorithm is ADMM.
FIG. 5(b) is the bias-variance plot of the 11C-DTBZ prediction results over all frames when the reconstruction algorithm is ML-EM.
FIG. 5(c) is the bias-variance plot of the 11C-FMZ prediction results over all frames when the reconstruction algorithm is ADMM.
FIG. 5(d) is the bias-variance plot of the 11C-FMZ prediction results over all frames when the reconstruction algorithm is ML-EM.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
The invention relates to a dynamic double-tracing PET reconstruction method based on DSN, which comprises the following steps:
(1) experimental data were prepared.
1.1 Inject a mixed dual tracer consisting of two different tracers (tracer I and tracer II) into biological tissue for dynamic PET detection, collect the coincidence counting vectors at different moments in time order, and form the dynamic coincidence counting sequence Y_dual reflecting the distribution of the mixed dual tracer.
1.2 Inject tracer I and tracer II into biological tissue one after the other and carry out dynamic PET detection to obtain two groups of single-tracer coincidence counting vectors at different moments, forming the three-dimensional dynamic coincidence counting sequences Y_I and Y_II reflecting the distributions of tracer I and tracer II respectively.
1.3 Compute, with a PET image reconstruction algorithm, the three-dimensional dynamic PET image sequences X_dual, X_I and X_II corresponding to the three-dimensional dynamic coincidence counting sequences Y_dual, Y_I and Y_II.
1.4 Repeat steps 1.1-1.3 many times to obtain a large number of dynamic PET image sequences X_dual, X_I and X_II.
(2) Dividing the data set.
X_dual, X_I and X_II are split at a ratio of about 2:1: 2/3 of the data are extracted as the training set X_dual^train with labels (X_I^train, X_II^train), and the remaining 1/3 serves as the test set X_dual^test with truth values (X_I^test, X_II^test), used later to evaluate the reconstruction. The dynamic PET image sequences X_dual, X_I and X_II have the specific form:

X_dual = [x_dual^1, x_dual^2, …, x_dual^N]^T
X_I = [x_I^1, x_I^2, …, x_I^N]^T
X_II = [x_II^1, x_II^2, …, x_II^N]^T
In the above expressions x_dual^j, x_I^j and x_II^j denote the curve of the concentration value of the jth pixel over time, i.e. the TAC, of the dynamic PET concentration distribution of the mixed tracer, single tracer I and tracer II respectively, and N is the total number of pixels of the PET image. A TAC may be further specified as:

x_dual^j = [c_dual^j(1), c_dual^j(2), …, c_dual^j(S)]^T
x_I^j = [c_I^j(1), c_I^j(2), …, c_I^j(S)]^T
x_II^j = [c_II^j(1), c_II^j(2), …, c_II^j(S)]^T

wherein c^j(k) denotes the concentration value of the kth frame at the jth pixel of the dynamic PET concentration distribution, the subscript indicating the injected tracer (mixed tracer, single tracer I or tracer II), and S is the total number of frames acquired for the dynamic PET image sequence. The labels and truth values additionally take the form of the tracer-I and tracer-II TACs concatenated per pixel, i.e. [x_I^j; x_II^j].
(3) Building the DSN.
Construct the DNN shown in FIG. 1, comprising an input layer, hidden layers and an output layer; the number of input-layer nodes is consistent with the size of the column vector formed by concatenating the original input x_dual^j and the fed-back output (which has the size of the label), and the number of output-layer nodes is consistent with the size of the label column vector [x_I^j; x_II^j].
(4) Setting and initializing network parameters.
First, a restricted Boltzmann machine is used to pre-train the network and initialize the bias vectors of each layer and the weight coefficients between layers. In this embodiment the learning rate is set to 0.01, the number of hidden layers to 3, the node numbers of the hidden layers to 60, 40 and 30 respectively, the activation function to sigmoid, and the batch-size to 32.
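A sketch of the embodiment's network shape (three hidden layers of 60, 40 and 30 nodes with sigmoid activations), for illustration only; random initialization stands in here for the RBM pre-training, and the input/output sizes (frames per TAC) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_dnn(n_in, n_out, hidden=(60, 40, 30)):
    """Weight/bias pairs for the embodiment's layer sizes; random init
    stands in for the RBM pre-training described in the text."""
    sizes = (n_in, *hidden, n_out)
    return [(rng.normal(0, 0.1, (b, a)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = W @ x + b
        if i < len(params) - 1:   # sigmoid on hidden layers, linear output
            x = sigmoid(x)
    return x

S = 18                                    # frames per TAC (illustrative)
net = make_dnn(n_in=3 * S, n_out=2 * S)   # input: mixed TAC + fed-back outputs
y = forward(net, np.zeros(3 * S))         # output: two single-tracer TACs
```

The output layer is kept linear so the network can produce unbounded concentration values; bounding it with a sigmoid would be an alternative design if the TACs were normalized to [0, 1].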
(5) And (5) network training.
Implicit stacked training of the constructed DNN is performed under the direction of the truth labels: the TAC extracted at the jth pixel of the dynamic PET image sequence X_dual^train, x_dual^j, is input into the network in batches, and for each batch the corresponding outputs (x̂_I^j, x̂_II^j) are computed, together with the true-value labels (x_I^j, x_II^j) extracted at the jth pixel; j is a natural number with 1 ≤ j ≤ N, where N is the total number of pixels of the PET concentration image. According to the resulting error L, the weight parameters among the input layer, hidden layers and output layer of the whole network are corrected by a gradient descent algorithm, and the corrected DNN is then fed with the TACs of the next group of pixels of the dynamic PET image sequence X_dual^train.
Inputting the training set into a network, and continuously correcting the weight parameters and the offset vectors among layers after each iteration by using implicit stacked training, wherein the error function L of back propagation is as follows:
L = (1/batch_size) Σ_{i=1}^{batch_size} ( ||x̂_I^i − x_I^i||_2^2 + ||x̂_II^i − x_II^i||_2^2 )

wherein x̂_I and x̂_II are the DSN predictions for tracer I and tracer II, x_I and x_II are the true values for tracer I and tracer II, batch_size is the batch size, and the batch index runs n = 1, 2, …, N/batch_size.
The DNN implicitly stacked at the last iteration is taken as the DSN and serves as the dual-tracer PET image reconstruction model.
(6) And (6) evaluating the result.
In order to evaluate the reconstruction quantitatively, two indexes, bias and variance, are mainly used, with the normalized expressions:

bias = (1/R) Σ_{i=1}^{R} |x̂_i − x_i| / x_i
variance = (1/R) Σ_{i=1}^{R} ( (x̂_i − x̄) / x̄ )^2

wherein x̂_i and x_i are the predicted and true values of the ith pixel of the concentration distribution, x̄ is the mean predicted value over the region of interest, and R is the total number of pixels of the region of interest.
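The two indexes can be computed as below; the normalized forms used here are an assumption consistent with the quantities defined in the text (the original formulas are rendered only as images):

```python
import numpy as np

def bias_variance(pred, true):
    """Bias and variance over an ROI, assuming the normalized forms:
    bias = (1/R) * sum(|pred_i - true_i| / true_i)
    var  = (1/R) * sum(((pred_i - mean(pred)) / mean(pred)) ** 2)"""
    pred = np.asarray(pred, float)
    true = np.asarray(true, float)
    r = pred.size                       # R: total pixels in the ROI
    bias = np.sum(np.abs(pred - true) / true) / r
    mean_pred = pred.mean()
    var = np.sum(((pred - mean_pred) / mean_pred) ** 2) / r
    return bias, var

b, v = bias_variance([1.0, 1.1, 0.9], [1.0, 1.0, 1.0])
```

Both indexes are dimensionless, so they can be compared across ROIs and frames as in the bias-variance plots of FIG. 5.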
The accuracy of the invention is verified by simulation experiments, in which a complex brain template is selected for Monte Carlo simulation to generate a data set; the tracer pair is set to 11C-FMZ and 11C-DTBZ, and the template consists of regions of interest (ROIs) corresponding to different tissue sites. Fig. 2 shows the complex brain template containing 4 ROIs. The simulated PET scanner is a Siemens Biograph 16 HR (Siemens, USA), which has 3 crystal rings; 24336 LSO crystals are uniformly distributed in array form over the 48 detector modules of each ring, with a crystal array size of 13 × 13 and a crystal-ring diameter of 824 mm. The generated data set is split 2:1, with 2/3 extracted as training data and the remaining 1/3 as test data. To observe the effect of different reconstruction algorithms on the DSN, the sinograms of the training set were reconstructed into radioactivity concentration distributions with the ADMM reconstruction algorithm, while in part of the test set the classical ML-EM algorithm was also used to reconstruct the mixed dual-tracer radioactivity concentration distributions.
FIG. 3(a) to FIG. 3(c) show, respectively, the true radioactivity concentration distribution of frame 10 of 11C-DTBZ, the DSN-predicted concentration distribution when the test-set reconstruction algorithm is ML-EM, and the DSN-predicted concentration distribution when the reconstruction algorithm is ADMM; FIGS. 4(a) to 4(c) show the same for 11C-FMZ. Table 1 gives the reconstruction quality for each region of interest under different frame numbers of the two tracers when the test-set reconstruction algorithm is ADMM, and FIGS. 5(a) to 5(d) compare, via bias-variance plots, the reconstruction of the total region of interest under different frame numbers of the two tracers using the SAE and DSN models on the same training and test sets.
TABLE 1
(Table 1 is reproduced as an image in the source: bias and variance of each region of interest for the two tracers under different frame numbers, test-set reconstruction algorithm ADMM.)
From the comparison between the true concentration distributions shown in the figures and the distributions predicted by the DSN network, and from the bias and variance between true and predicted values in each region of interest given in Table 1, it can be seen that the invention completes the reconstruction of dual-tracer PET images well, verifying its accuracy; meanwhile, the comparison with the prior SAE algorithm in the bias-variance plots verifies the robustness of the DSN to noise.
The embodiments described above are presented to enable a person of ordinary skill in the art to make and use the invention. It will be readily apparent to those skilled in the art that various modifications to these embodiments may be made, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments; improvements and modifications made by those skilled in the art on the basis of this disclosure fall within the protection scope of the present invention.

Claims (5)

1. A dynamic dual-tracer PET reconstruction method based on DSN, comprising the following steps:
(1) simultaneously injecting tracer I and tracer II into biological tissue and carrying out dynamic PET (positron emission tomography) detection to obtain coincidence counting vectors at different moments, forming a dynamic coincidence counting sequence Y_dual reflecting the mixed distribution of the two tracers;
(2) injecting tracer I and tracer II into biological tissue one after the other and carrying out dynamic PET detection to obtain the coincidence counting vectors of the two single tracers at different moments, forming dynamic coincidence counting sequences Y_I and Y_II reflecting the distributions of tracer I and tracer II respectively;
(3) computing, with a PET image reconstruction algorithm, the dynamic PET image sequences X_dual, X_I and X_II corresponding to the dynamic coincidence counting sequences Y_dual, Y_I and Y_II;
(4) letting X_dual, X_I and X_II form one sample, repeating steps (1) to (3) many times to obtain a large number of samples, and dividing all samples into a training set and a test set;
(5) extracting the pixel-wise TACs of X_dual, X_I and X_II in the training-set samples, taking the TACs of X_dual as the input of the deep stacking network and the TACs of X_I and X_II as its true-value output labels, and training the deep stacking network to obtain the dynamic dual-tracer PET reconstruction model, the specific process being as follows:
5.1 constructing a deep neural network formed by sequentially connecting an input layer, hidden layers and an output layer, and initializing its parameters, including the learning rate, the number of iterations, and the bias vectors and weight matrices between layers;
5.2 taking the TAC of the jth pixel of X_dual in a training-set sample as input to the deep neural network and computing the network outputs (x̂_I^j, x̂_II^j), the predicted TACs of the two single tracers at that pixel; evaluating the error function between (x̂_I^j, x̂_II^j) and the true-value labels (x_I^j, x_II^j), and correcting and updating the bias vectors and weight matrices between layers by gradient descent, wherein x_I^j and x_II^j are the TACs of the jth pixel of X_I and X_II in the training-set sample, j is a natural number with 1 ≤ j ≤ n, and n is the total number of pixels of the PET image;
5.3 iterating step 5.2 several times, the input layer of the deep neural network consisting of two parts: one part being the TAC of X_dual in the training-set sample, the other part being the result of the output layer, i.e. the TAC outputs of the two single tracers from the previous iteration, with the fed-back input initialized to 0, so that the deep neural network between two adjacent iterations forms an implicit stack whose number of layers is determined by the number of iterations;
5.4 within the current iteration, inputting the TACs of X_dual in the training-set samples into the deep neural network in batches according to steps 5.2-5.3 to update the network parameters, until all TACs in the training-set samples have been traversed; after a certain number of iterations, i.e. a certain number of implicitly stacked layers, taking the deep neural network implicitly stacked into a deep stacking network at the last iteration as the dynamic dual-tracer PET reconstruction model;
(6) extracting the pixel-wise TACs of X_dual in a test-set sample and inputting them into the dynamic dual-tracer PET reconstruction model, obtaining from the model output the TAC of every pixel of the two single-tracer dynamic PET image sequences, and recombining them into the dynamic PET image sequences X_I and X_II corresponding to tracer I and tracer II.
2. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein: in step (4) all samples are divided into a training set and a test set; the two sets do not overlap, and the training set contains more than half of the samples.
3. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein: in step (5) the pixel-wise TACs of X_dual, X_I and X_II in a training-set sample are extracted according to the following expressions:

X_dual = [x_dual^1, x_dual^2, …, x_dual^n]^T
X_I = [x_I^1, x_I^2, …, x_I^n]^T
X_II = [x_II^1, x_II^2, …, x_II^n]^T

wherein x_dual^1 to x_dual^n are the TACs of the 1st to nth pixels of X_dual of the training-set sample, x_I^1 to x_I^n are the TACs of the 1st to nth pixels of X_I, x_II^1 to x_II^n are the TACs of the 1st to nth pixels of X_II, n is the total number of pixels of the PET image, and T denotes transposition.
4. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein: and 5.1, initializing the bias vector and the weight matrix between layers of the deep neural network by using a limited Boltzmann machine.
5. The dynamic dual-tracer PET reconstruction method according to claim 1, wherein: in step 5.4, during training of the deep neural network, the weight parameters among the input layer, hidden layers and output layer of the whole network are corrected by a gradient descent algorithm according to the following average error function L:

L = (1/B) Σ_{i=1}^{B} ( ||x̂_I^i − x_I^i||_2^2 + ||x̂_II^i − x_II^i||_2^2 )

wherein B is the number of TACs input into the deep stacking network per batch, x̂_I^i and x̂_II^i are the TAC outputs for the two single tracers computed by inputting the ith TAC of each batch into the deep stacking network, x_I^i and x_II^i are the TAC true-value labels of the two single tracers corresponding to the ith TAC of each batch, and ||·||_2 denotes the 2-norm.
CN201910196556.5A 2019-03-15 2019-03-15 Dynamic double-tracing PET reconstruction method based on DSN Active CN109993808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910196556.5A CN109993808B (en) 2019-03-15 2019-03-15 Dynamic double-tracing PET reconstruction method based on DSN


Publications (2)

Publication Number Publication Date
CN109993808A CN109993808A (en) 2019-07-09
CN109993808B (en) 2020-11-10

Family

ID=67129686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910196556.5A Active CN109993808B (en) 2019-03-15 2019-03-15 Dynamic double-tracing PET reconstruction method based on DSN

Country Status (1)

Country Link
CN (1) CN109993808B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109009179B (en) * 2018-08-02 2020-09-18 浙江大学 Same isotope labeling double-tracer PET separation method based on deep belief network
CN111166368B (en) * 2019-12-19 2021-07-23 浙江大学 Single-scanning double-tracer PET signal separation method based on pre-training GRU
CN111476859B (en) * 2020-04-13 2022-09-16 浙江大学 Dynamic double-tracing PET imaging method based on 3D Unet
CN111920436A (en) * 2020-07-08 2020-11-13 浙江大学 Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network
CN113057653B (en) * 2021-03-19 2022-11-04 浙江科技学院 Channel mixed convolution neural network-based motor electroencephalogram signal classification method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106887025B (en) * 2017-01-16 2019-06-11 浙江大学 A method of the mixing tracer dynamic PET concentration distributed image based on stack self-encoding encoder is rebuild
CN107133997B (en) * 2017-04-11 2019-10-15 浙江大学 A kind of dual tracer PET method for reconstructing based on deep neural network
CN109009179B (en) * 2018-08-02 2020-09-18 浙江大学 Same isotope labeling double-tracer PET separation method based on deep belief network

Also Published As

Publication number Publication date
CN109993808A (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN109993808B (en) Dynamic double-tracing PET reconstruction method based on DSN
US11445992B2 (en) Deep-learning based separation method of a mixture of dual-tracer single-acquisition PET signals with equal half-lives
US10765382B2 (en) Method for mixed tracers dynamic PET concentration image reconstruction based on stacked autoencoder
CN107133997B (en) A kind of dual tracer PET method for reconstructing based on deep neural network
CN109615674B (en) Dynamic double-tracing PET reconstruction method based on mixed loss function 3D CNN
CN104657950B (en) Dynamic PET (positron emission tomography) image reconstruction method based on Poisson TV
CN106204674B (en) The dynamic PET images method for reconstructing constrained based on structure dictionary and kinetic parameter dictionary joint sparse
CN113160347B (en) Low-dose double-tracer PET reconstruction method based on attention mechanism
CN105678821B (en) A kind of dynamic PET images method for reconstructing based on self-encoding encoder image co-registration
CN105894550B (en) A kind of dynamic PET images and tracer kinetics parameter synchronization method for reconstructing based on TV and sparse constraint
CN108986916B (en) Dynamic PET image tracer agent dynamics macro-parameter estimation method based on stacked self-encoder
CN108550172B (en) PET image reconstruction method based on non-local characteristics and total variation joint constraint
CN107346556A (en) A kind of PET image reconstruction method based on block dictionary learning and sparse expression
Shao et al. A learned reconstruction network for SPECT imaging
CN111166368B (en) Single-scanning double-tracer PET signal separation method based on pre-training GRU
CN107146263B (en) A kind of dynamic PET images method for reconstructing based on the constraint of tensor dictionary
CN107146218A (en) It is a kind of to be rebuild and tracer kinetics method for parameter estimation based on the dynamic PET images that image is split
CN111920436A (en) Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network
Qing et al. Separation of dual-tracer PET signals using a deep stacking network
CN111476859B (en) Dynamic double-tracing PET imaging method based on 3D Unet
CN115984401A (en) Dynamic PET image reconstruction method based on model-driven deep learning
CN113379863B (en) Dynamic double-tracing PET image joint reconstruction and segmentation method based on deep learning
CN112927132B (en) PET image reconstruction method for improving spatial resolution uniformity of PET system
CN118674809A (en) PET image reconstruction method based on Kalman-like filtering algorithm and neural network
CN116206005A (en) Dual-tracking PET imaging method based on image blocking converter network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant