CN113476064A - Single-scanning double-tracer PET signal separation method based on BCD-ED
- Publication number: CN113476064A (application CN202110840914.9A, China)
- Legal status: Granted
Classifications
- A61B6/5235: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data, combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
- A61B6/481: Diagnostic techniques involving the use of contrast agents
Abstract
The invention discloses a single-scan dual-tracer PET signal separation method based on BCD-ED, which combines a traditional iterative reconstruction algorithm with deep learning and, in a data-driven manner, can accurately separate two single-tracer PET images from a mixed dual-tracer PET image. The BCD-ED framework adopted by the invention has three modules: a reconstruction module, a denoising module and a separation module. The mixed-tracer sinogram collected by PET is reconstructed using a maximum-likelihood estimation algorithm, and the reconstructed image is denoised with a low-rank regularization model, so that the shape of the reconstructed mixed concentration map is better constrained and the noise is lower; an encoding-decoding model then learns the mapping between the mixed-tracer concentration map and the two full-dose single-tracer concentration maps, so that the detail of each single-tracer concentration map can be recovered clearly.
Description
Technical Field
The invention belongs to the technical field of PET signal separation, and particularly relates to a single-scan dual-tracer PET signal separation method based on BCD-ED (block coordinate descent - encoder-decoder).
Background
Positron emission tomography (PET) is a typical emission computed tomography technique, usually used together with tracers labeled by the isotopes 11C, 18F, 15O and 13N, and has the advantages of high sensitivity to the tracer and being non-invasive. The dynamic change of the tracer during a PET scan can characterize and quantify tissue function in vivo, yielding physiological indices of the region such as glucose metabolism, blood flow and hypoxia; these are used in research on tumors, heart disease, diabetes, neurological disorders and various other diseases. Nuclides commonly used for labeling can be divided into short, medium and long half-life nuclides according to the radioactive half-life; the half-life affects the synthesis, transport, dose, scan duration and performance of a tracer, as well as the sensitivity required of the PET detector, so the nuclides must be chosen with these trade-offs balanced against the actual situation. Short half-life nuclides such as 82Rb, 15O, 13N, 62Cu and 11C allow multiple scans within a short time, but require a laboratory equipped with a cyclotron, high injection doses, short synthesis times, or more sensitive PET detectors. Long half-life nuclides such as 64Cu and 124I can be used for longitudinal studies of physiological activity over long periods and suit experimental sites far from a cyclotron. The medium half-life nuclides are mainly 18F and 68Ga, whose moderate half-life makes them the most frequently used. Compared with other nuclides, 18F has lower positron energy and range, a medium half-life, a higher branching ratio, and easy labeling of biological molecules, making it the most widely used nuclide in research and the clinic.
Compared with multi-tracer PET, single-tracer PET imaging can only capture one aspect of physiological activity; the information is limited and a disease cannot always be judged accurately. By imaging radioactive tracers sensitive to different physiological changes, multi-tracer PET can provide complementary information that characterizes a more complete disease state, reducing the possibility of misdiagnosis and guiding the doctor toward a more effective treatment plan. Early dual-tracer imaging often acquired the two tracers separately, i.e., a dual-injection/dual-scan mode, which keeps the two tracers from interfering with each other during the corresponding decay periods, but causes great discomfort to the patient because the scans take a long time. To solve this problem, Koeppe et al. proposed a dual-injection/single-scan mode, i.e., one combined scan of both tracers: the signal superposition of the two tracers is reduced by injecting them at a short interval, e.g. 10-20 minutes, and the different tracer signals are separated by analyzing the pixel time-activity curve (TAC) or by non-linear least squares (NLS) modeling of the region of interest (ROI). Although this mode merges two scans into one and shortens the total scan time to a certain extent, the required injection interval means it is still not a perfect scanning mode.
To realize a completely gapless scanning mode, many researchers have invested considerable effort. At present, most gapless dual-tracer imaging approaches use prior information, such as TAC data and compartment-model data, to separate the different tracers. However, separation based on prior information places high demands on the accuracy of that prior information and on the signal-to-noise ratio of the dual-tracer data, which limits its practical application. How to distinguish tracers by their intrinsic features has therefore become one of the important research directions in tracer imaging.
Disclosure of Invention
In view of the above, the present invention provides a BCD-ED based single-scan dual-tracer PET signal separation method that, using deep learning as a powerful feature-extraction tool, can accurately separate two single-tracer PET images from a mixed dual-tracer PET image in a data-driven manner.
A single-scanning double-tracer PET signal separation method based on BCD-ED comprises the following steps:
(1) injecting a mixed dual tracer into a biological tissue and performing one dynamic PET scan to obtain the corresponding PET dynamic sinogram sequence y; the mixed dual tracer consists of two isotopically labeled tracers, tracer I and tracer II;
(2) injecting tracer I and tracer II separately into the same biological tissue and performing separate dynamic PET scans to obtain the corresponding PET dynamic sinogram sequences y_I and y_II;
(3) using a PET reconstruction algorithm to compute the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II, and superimposing x̂_I and x̂_II to obtain the ground-truth mixed dual-tracer PET dynamic image sequence x_true;
(4) repeating the above steps many times to obtain a large number of samples and dividing them into a training set and a test set, each sample group containing y, y_I, y_II, x̂_I, x̂_II and x_true;
(5) constructing a BCD-ED network consisting of a reconstruction module, a denoising module and a separation module, and training it with the training-set samples to obtain a joint reconstruction-separation model of the dynamic dual-tracer PET signal;
(6) feeding the test-set samples into the joint model one by one: reconstruction yields the mixed dual-tracer PET dynamic image sequence, and after denoising, separation yields the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II.
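For illustration only (not part of the claimed method), the superposition in step (3) can be sketched in numpy with hypothetical array sizes:

```python
import numpy as np

# Hypothetical dimensions: d dynamic frames of 4x4 images per tracer.
d, h, w = 3, 4, 4
rng = np.random.default_rng(0)

# Stand-ins for the reconstructed single-tracer sequences x_I and x_II.
x_I = rng.uniform(0.0, 1.0, size=(d, h, w))
x_II = rng.uniform(0.0, 1.0, size=(d, h, w))

# The mixed dual-tracer ground truth is the frame-wise superposition,
# since counts from the two isotopes are additive.
x_true = x_I + x_II
print(x_true.shape)
```

Because the two single-tracer scans are acquired separately, their reconstructions can simply be added frame by frame to synthesize the mixed truth used as the training label.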
Further, the reconstruction module of the BCD-ED network solves for the PET dynamic image sequence x corresponding to the y in an input sample using a maximum-likelihood estimation algorithm; the reconstruction problem is constrained by adding a regularization term, and during the solution several neural-network convolution kernels convolve x in different directions to generate sparse images X_k that possess the low-rank property.
Further, the denoising module of the BCD-ED network first decomposes the sparse image X_k obtained by the reconstruction module into a low-rank matrix L_k and a Poisson noise matrix W_k, and then solves the following objective function with a singular value thresholding algorithm:

min_{L_k} Σ_{k=1}^{K} [ λ_k ‖L_k‖_* + (β/2) ‖c_k * x − L_k‖_2^2 ]

wherein: c_k denotes the k-th convolution kernel; λ_k is a threshold parameter controlling the sparsity of L_k; β is a hyper-parameter controlling the smoothness of the image; K is the number of convolution kernels used in the reconstruction; ‖·‖_* is the nuclear norm.

Obtaining:

L_k^{i+1} = SVT_{λ_k/β}(c_k * x^i),  x^{i+1} = argmin_x −L(y|x) + (β/2) Σ_{k=1}^{K} ‖c_k * x − L_k^{i+1}‖_2^2,  u^{i+1} = (1/K) Σ_{k=1}^{K} c̃_k * L_k^{i+1}

wherein: c̃_k is the inverse of c_k, used for deconvolution; −L(y|x) denotes converting maximization of the likelihood of the PET dynamic image sequence x given the observed data y into minimization of the negative log-likelihood; ‖·‖_2 is the 2-norm; u is the denoised PET dynamic image sequence that is output; superscripts i and i+1 denote iteration numbers, i being a natural number.
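For illustration, the singular value thresholding step used by the denoising module can be sketched as a generic numpy routine (the sizes and the rank-1 test matrix are illustrative assumptions, not the patent's data):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-shrink the singular values of X
    by tau, returning the low-rank estimate U diag((sigma - tau)_+) V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # (x)_+ = max(x, 0)

# Hypothetical sparse image X_k = L_k + W_k: a rank-1 matrix plus small noise.
rng = np.random.default_rng(1)
L_true = np.outer(rng.standard_normal(8), rng.standard_normal(8))
X_k = L_true + 0.01 * rng.standard_normal((8, 8))

L_k = svt(X_k, tau=0.5)  # tau plays the role of lambda_k / beta
```

Shrinking every singular value by the threshold (and clipping at zero) is what makes the nuclear-norm term tractable inside the block coordinate descent.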
Furthermore, the separation module of the BCD-ED network integrates the encoder-decoder concept with same-level skip connections and comprises an encoding part and a decoding part. The encoding part consists, from input to output, of down-sampling block D1, pooling layer C1, down-sampling block D2, pooling layer C2, down-sampling block D3, pooling layer C3 and down-sampling block D4 connected in sequence; the decoding part consists, from input to output, of up-sampling block U1, deconvolution layer E1, up-sampling block U2, deconvolution layer E2, up-sampling block U3 and deconvolution layer E3 connected in sequence, wherein:
each of the lower sampling blocks D1-D4 comprises three layers connected in sequence: the first layer is a convolution layer, the size of a convolution kernel of the convolution layer is 3 multiplied by 3, and features are extracted through the convolution kernel; the second layer is a BatchNorm layer, and the output of the previous layer is subjected to normalization processing; the third layer is a Relu layer, and the output of the previous layer is subjected to activation function processing; D1-D4 respectively generate 64, 128, 256 and 512 Feature maps; after the encoding part processes, the number of channels reaches the maximum after the network reaches the bottom layer, and at the moment, the original image is down-sampled to be very small, so that a large amount of original characteristic information is extracted.
The pooling layers C1-C3 all use 2×2 kernels to halve the size of the input feature map and reduce the amount of convolution computation; because the feature map shrinks, convolution kernels of the same size cover a larger region of the original image, giving greater robustness and resistance to overfitting against small disturbances such as shifts and rotations of the image.
The up-sampling blocks U1-U3 each comprise three sequentially connected layers: the first is a convolution layer with 3×3 kernels that extracts features; the second is a BatchNorm layer that normalizes the previous layer's output; the third is a ReLU layer that applies the activation function. The up-sampling block U3 additionally contains a fourth layer, a convolution layer with 1×1 kernels, which reduces the number of channels to the required number of output channels.
The input of U1 is the channel-wise concatenation of the outputs of D3 and D4; the input of U2 is the concatenation of the outputs of D2 and E1; the input of U3 is the concatenation of the outputs of D1 and E2. U1-U3 produce 256, 128 and 64 feature maps respectively. Because down-sampling in the encoding stage loses part of the image information and image details are hard to recover during decoding, same-level skip connections introduce the encoder feature maps of matching size during decoding; the two feature maps are concatenated to fuse their features, so that the network can also use the original information not discarded by the pooling layers to recover a clearer image.
The deconvolution layers E1-E3 double the size of the input feature map, restoring its resolution and counteracting the shrinkage caused by the preceding convolution and pooling operations.
In the encoding stage of the BCD-ED network, the image size is reduced by the down-sampling blocks and pooling layers and shallow features are extracted; in the decoding stage, deeper features are obtained through the deconvolution layers and up-sampling blocks. Through the skip connections between down-sampling and up-sampling blocks, the feature maps of the encoding stage are combined with those of the decoding stage, merging deep and shallow features, refining the image, and enabling prediction and separation from the resulting feature maps; the high-resolution information passed directly from the encoder to the same-level decoder through the skip connections provides finer features for the separation.
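As a plain-Python shape check (assuming, for illustration, a 128×128 input; the patent does not state the input size), the channel and spatial progression of the encoder described above can be traced as follows:

```python
def trace_encoder(h, w):
    """Spatial size and channel count after each encoder stage: each
    down-sampling block Dk keeps the spatial size (3x3 conv + BN + ReLU)
    and each 2x2 pooling layer Ck halves it."""
    channels = [64, 128, 256, 512]  # feature maps produced by D1..D4
    stages = []
    size = (h, w)
    for k, c in enumerate(channels):
        stages.append((f"D{k+1}", c, size))
        if k < 3:  # pooling layers C1..C3 halve the feature map
            size = (size[0] // 2, size[1] // 2)
    return stages

for name, c, (fh, fw) in trace_encoder(128, 128):
    print(name, c, fh, fw)
```

With a 128×128 input this yields D1 at 64 channels and 128×128 down to D4 at 512 channels and 16×16, matching the description of a maximal channel count at the bottom of the network.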
Further, the training process of the BCD-ED network structure in step (5) is as follows:
5.1 initializing the network parameters, including the bias vectors and weight matrices between network layers, the learning rate, and the maximum number of iterations;
5.2 taking y from a training-set sample as the input of the reconstruction module and, in combination with the denoising module, computing the denoised PET dynamic image sequence u; then computing the difference between u and the truth x_true with the loss function loss1;
5.3 inputting u into the separation module and outputting the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II; then computing the differences between S_I and x̂_I and between S_II and x̂_II with the loss function loss2;
5.4 supervising the whole network with the combined loss function loss = loss1 + loss2, using the mean squared error (MSE) as the loss to drive back-propagation and gradient descent, until loss converges or the maximum number of iterations is reached; training is then complete, yielding the joint reconstruction-separation model of the dynamic dual-tracer PET signal.
Further, the expression of the loss function loss1 is as follows:

loss1 = (1/N) Σ_{n=1}^{N} ( u_n^{i+1} − x_{true,n} )²

wherein: u_n^{i+1} is the concentration value at the n-th pixel of the PET dynamic image sequence u obtained at iteration i+1; N is the number of pixels in the PET dynamic image sequence; x_{true,n} is the concentration value at the n-th pixel of the ground-truth sequence x_true.
Further, the expression of the loss function loss2 is as follows:

loss2 = (1/N) Σ_{n=1}^{N} [ ( S_{I,n} − x̂_{I,n} )² + ( S_{II,n} − x̂_{II,n} )² ]

wherein: S_{I,n} and S_{II,n} are the concentration values at the n-th pixel of the PET dynamic image sequences S_I and S_II; x̂_{I,n} and x̂_{II,n} are the concentration values at the n-th pixel of x̂_I and x̂_II; N is the number of pixels in the PET dynamic image sequence.
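The two loss terms and their combination can be sketched in numpy (the arrays below are random stand-ins for the quantities named above):

```python
import numpy as np

def mse(a, b):
    """Mean squared error over the N pixels of an image sequence."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(2)
x_true = rng.uniform(size=(4, 4))                   # mixed-tracer truth
u = x_true + 0.1 * rng.standard_normal((4, 4))      # denoised reconstruction
x_I_hat = rng.uniform(size=(4, 4))                  # single-tracer truths
x_II_hat = rng.uniform(size=(4, 4))
S_I = x_I_hat + 0.05 * rng.standard_normal((4, 4))  # separated outputs
S_II = x_II_hat + 0.05 * rng.standard_normal((4, 4))

loss1 = mse(u, x_true)                              # reconstruction term
loss2 = mse(S_I, x_I_hat) + mse(S_II, x_II_hat)     # separation term
loss = loss1 + loss2                                # joint supervision
print(loss)
```

Summing the two MSE terms lets one optimizer step supervise the reconstruction-denoising path and the separation path at the same time.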
The invention reconstructs and separates the mixed-tracer dynamic PET concentration distribution image through the BCD-ED network, jointly reconstructing and separating the mixed dynamic dual-tracer PET signal from the dynamic sinogram sequence. The BCD-ED network is based on a traditional low-rank regularization model and an encoder-decoder structure and can recover more single-tracer image detail with fewer parameters: the mixed-tracer sinogram acquired by PET is reconstructed with a maximum-likelihood estimation algorithm, the reconstructed image is denoised with the low-rank regularization model, and finally the encoder-decoder model learns the mapping between the mixed-tracer concentration map and the two full-dose single-tracer concentration maps, recovering the detail of each single-tracer concentration map clearly.
The invention is a direct separation algorithm. Its advantage is that the traditional iterative reconstruction algorithm is combined with deep learning: the shape of the reconstructed mixed concentration map is better constrained and the noise is lower, and the encoder-decoder separation module learns the mapping between the mixed concentration map and the single-tracer concentration maps, freeing the separation of dynamic dual-tracer PET signals from traditional reconstruction algorithms. Joint reconstruction and separation are performed directly from the sinogram, bringing dual tracers closer to clinical application.
Drawings
FIG. 1 is a schematic flow chart of the dynamic dual tracer PET signal separation method of the present invention.
FIG. 2 is a schematic diagram of a BCD-ED network framework according to the present invention.
FIG. 3(a) is the true concentration distribution image of frame 21 of the mixed tracers 18F-BCPP-FE + 18F-FDG.
FIG. 3(b) is the frame-21 prediction of the mixed tracers 18F-BCPP-FE + 18F-FDG under the BCD-ED network.
FIG. 3(c) is the frame-21 prediction of the mixed tracers 18F-BCPP-FE + 18F-FDG under the FBP algorithm.
FIG. 3(d) is the frame-21 prediction of the mixed tracers 18F-BCPP-FE + 18F-FDG under the MLEM algorithm.
FIG. 3(e) is the frame-21 prediction of the mixed tracers 18F-BCPP-FE + 18F-FDG under the UNET network.
FIG. 3(f) is the frame-21 prediction of the mixed tracers 18F-BCPP-FE + 18F-FDG under the FBP-CNN network.
FIG. 4(a) is the true concentration distribution image of frame 21 of 18F-FDG.
FIG. 4(b) is the frame-21 prediction of 18F-FDG under the BCD-ED network.
FIG. 4(c) is the frame-21 prediction of 18F-FDG under the UNET network.
FIG. 4(d) is the frame-21 prediction of 18F-FDG under the FBP-CNN network.
FIG. 5(a) is the true concentration distribution image of frame 21 of 18F-BCPP-FE.
FIG. 5(b) is the frame-21 prediction of 18F-BCPP-FE under the BCD-ED network.
FIG. 5(c) is the frame-21 prediction of 18F-BCPP-FE under the UNET network.
FIG. 5(d) is the frame-21 prediction of 18F-BCPP-FE under the FBP-CNN network.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
As shown in FIG. 1, the single-scan dynamic dual-tracer PET signal separation method based on the pre-trained BCD-ED network of the invention comprises the following steps:
(1) training set data is prepared.
1.1 injecting a mixed dual tracer into a biological tissue and performing a dynamic PET scan to obtain the corresponding PET dynamic sinogram sequence y, the mixed dual tracer consisting of tracer I and tracer II labeled with two isotopes;
1.2 injecting tracer I and tracer II separately into the same biological tissue and performing separate dynamic PET scans to obtain the corresponding PET dynamic sinogram sequences y_I and y_II;
1.3 using a PET reconstruction algorithm to compute the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II, and superimposing x̂_I and x̂_II to obtain the ground-truth mixed dual-tracer PET dynamic image sequence x_true;
1.4 repeating the above steps many times to obtain many PET dynamic sinogram sequences y, y_I and y_II and PET dynamic image sequences x_true, x̂_I and x̂_II;
wherein: y_1-y_d are the mixed dual-tracer sinograms of frames 1 to d in y; y_{I,1}-y_{I,d} are the single-tracer sinograms of frames 1 to d in y_I; y_{II,1}-y_{II,d} are the single-tracer sinograms of frames 1 to d in y_II; x_{true,1}-x_{true,d} are the true mixed dual-tracer concentration maps of frames 1 to d in x_true; x̂_{I,1}-x̂_{I,d} and x̂_{II,1}-x̂_{II,d} are the true single-tracer concentration maps of frames 1 to d in x̂_I and x̂_II; d is the number of PET dynamic scan frames.
(2) Training set and test set data are prepared.
From the sample groups (y, x_true, x̂_I, x̂_II), 4/5 are randomly selected as the training set and the remaining 1/5 forms the test set; no sample in the test set appears in the training set.
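A minimal sketch of such a 4/5 : 1/5 random split (the sample count is hypothetical):

```python
import numpy as np

n_samples = 100                      # hypothetical number of sample groups
rng = np.random.default_rng(0)
idx = rng.permutation(n_samples)     # shuffle sample indices

split = int(n_samples * 4 / 5)
train_idx, test_idx = idx[:split], idx[split:]

# No test sample appears in the training set.
assert set(train_idx).isdisjoint(test_idx)
print(len(train_idx), len(test_idx))
```

Splitting by shuffled indices, rather than slicing in acquisition order, keeps the two sets disjoint while avoiding any ordering bias in the samples.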
(3) Constructing the BCD-ED network shown in FIG. 2; the network framework has three modules, namely a reconstruction module, a denoising module and a separation module, introduced in detail as follows:
in the initialization process, a dynamic sinogram sequence y and a system matrix G of a mixed dual tracer agent collected from PET are input, a maximum likelihood estimation algorithm is selected in a reconstruction module to solve a dynamic concentration map sequence x of the PET, and the expectation of the reconstructed concentration map x is obtained as follows:
the frame strengthens the constraint of the reconstruction problem by adding the regularization term, and the difference between adjacent pixels in the reconstructed image can be reduced after the constraint is added, so that the reconstructed image is smoother; k neural network convolution kernels c are used in the processkConvolving image X in different directions to produce sparse image Xk(ii) a Sparse feature images can be better learned by using a large amount of data, and the image matrixes have low rank property.
Xk=ck*x
In the actual case, XkUsually contains some noise, X can be setkDecomposed into low rank matrix LkAnd poisson noise matrix WkAdding nuclear norm | | | | luminance*The purpose of image denoising can be achieved.
Xk=Lk+Wk
With the hyper-parameter β controlling the smoothness of the image and λ_k the threshold parameter controlling the sparsity of L_k, after initializing λ_k and β the following problem is solved with the singular value thresholding method:

min_{L_k} λ_k ‖L_k‖_* + (β/2) ‖X_k − L_k‖_2^2

wherein: L_k can be represented through its singular values; σ_p is the p-th largest singular value; i is the iteration number; (x)_+ = max(x, 0) performs the soft-threshold shrinkage:

L_k^{i+1} = Σ_p ( σ_p − λ_k/β )_+ u_p v_p^T,  with c_k * x^i = Σ_p σ_p u_p v_p^T

The concentration map x^{i+1} estimated by the algorithm can then be expressed as follows. The convolution kernel filters c_k of the neural network extract image features, and since the features of the concentration map x extracted by the K kernels c_k can be expressed equivalently with x, we obtain

x^{i+1} = argmin_x −L(y|x) + (β/2) Σ_{k=1}^{K} ‖c_k * x − L_k^{i+1}‖_2^2

The concentration map x^{i+1} is thus obtained using the soft threshold, after which deconvolution with c̃_k yields the denoised image u^{i+1}:

u^{i+1} = (1/K) Σ_{k=1}^{K} c̃_k * L_k^{i+1}

wherein: c̃_k is the inverse of c_k, used for deconvolution; −L(y|x) here means that maximizing the likelihood of the PET concentration map x given the observed data y is converted into minimizing the negative log-likelihood for estimation. At this point u^{i+1} is the denoised concentration map: after feature maps are extracted from the concentration map x by convolution, they are sparsified by soft-threshold shrinkage, and the map obtained by deconvolution is the denoised concentration map.
(4) Inputting a training set into the network for training, wherein the training process comprises the following steps:
4.1 initializing the BCD-ED network, including setting the numbers of input, hidden and output layers; the initialized network parameters include the number of iterations, the number of convolution kernels and the learning rate.
4.2 Input the mixed dual-tracer dynamic sinogram sequence y acquired from the PET system, together with the system matrix G, into the denoising-reconstruction module of the BCD-ED network for training, and compute the error between x_true,n and u^(i+1) with the loss function below; this error function corrects and updates the bias vectors and inter-layer weight matrices of the neural network by gradient descent.
wherein u^(i+1) is the denoised mixed-tracer concentration map obtained by reconstruction, x_true,n is the ground-truth mixed-tracer concentration at the n-th pixel of the image, and N is the total number of pixels in the image.
4.3 Input the three-dimensional reconstructed concentration map with time-frame information, obtained from the denoising-reconstruction module, into the separation module of the BCD-ED network. This module has a symmetric structure; every convolutional layer except the first and the last comprises a batch normalization (BN) layer and a ReLU activation layer. Within each level, the network extracts image feature information through a convolutional layer and then halves the feature map size through a 2 × 2 max-pooling layer to reduce the amount of convolution computation. Because the feature map shrinks, convolution kernels of the same size cover a larger region of the original image, giving the network stronger robustness to small perturbations of the image such as offset and rotation, and better resistance to overfitting. The up-sampling module uses 2 × 2 deconvolution to decode the feature maps back to the original image size. Since down-sampling discards part of the image information during encoding, making it difficult for the decoder to recover image details, same-layer skip connections are used in this module to bring in the feature maps of the encoder blocks of matching size; the two feature maps are concatenated to achieve feature fusion, so that during decoding the network can also use the original information not discarded by the pooling layers and recover a clearer image.
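As an illustrative sketch — not the patented implementation — the encoder-decoder with same-layer skip connections described above can be expressed in PyTorch. The channel widths (64/128/256/512), 3 × 3 convolutions with BN and ReLU, 2 × 2 max pooling, 2 × 2 deconvolution, and channel-wise concatenation follow the text; the block depth, layer ordering, and two-channel output (one map per tracer) are simplifying assumptions:

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Sequential):
    # 3x3 conv -> BatchNorm -> ReLU, as in blocks D1-D4 / U1-U3
    def __init__(self, c_in, c_out):
        super().__init__(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class SeparationED(nn.Module):
    """Minimal encoder-decoder sketch with same-layer skip connections."""
    def __init__(self, c_in=1, c_out=2):   # c_out=2: one map per tracer
        super().__init__()
        self.d1, self.d2 = ConvBNReLU(c_in, 64), ConvBNReLU(64, 128)
        self.d3, self.d4 = ConvBNReLU(128, 256), ConvBNReLU(256, 512)
        self.pool = nn.MaxPool2d(2)                           # halve size
        self.e1 = nn.ConvTranspose2d(512, 256, 2, stride=2)   # decode
        self.u1 = ConvBNReLU(256 + 256, 256)
        self.e2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.u2 = ConvBNReLU(128 + 128, 128)
        self.e3 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.u3 = nn.Sequential(ConvBNReLU(64 + 64, 64),
                                nn.Conv2d(64, c_out, 1))      # 1x1 conv out

    def forward(self, x):
        f1 = self.d1(x)
        f2 = self.d2(self.pool(f1))
        f3 = self.d3(self.pool(f2))
        f4 = self.d4(self.pool(f3))
        u = self.u1(torch.cat([self.e1(f4), f3], dim=1))  # same-layer skip
        u = self.u2(torch.cat([self.e2(u), f2], dim=1))
        u = self.u3(torch.cat([self.e3(u), f1], dim=1))
        return u

x = torch.randn(1, 1, 64, 64)
y = SeparationED()(x)   # output: two separated concentration maps
```

The concatenation in each decoder level is the "feature fusion" the text describes: it restores original-resolution information that the pooling layers discarded.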
As shown in the following formula, the separation module uses the mean squared error (MSE) as the loss to guide the back propagation and gradient descent of the network, and finally outputs the two separated, denoised tracer concentration maps.
In the formula: si,nAndrespectively representing the predicted concentration value and the true concentration value of the tracer i at the nth pixel point in the image.
4.4 Obtain the loss function loss2 of the separation module and add it to the loss function loss1 from step 4.2 to form the joint loss function; then jointly train the denoising part, based on the block-coordinate-descent neural network, together with the separation module, based on the encoding-decoding network. After M epochs the training ends, the model parameters are retained, and the trained network is used to separate the tracers in the test set.
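The joint training objective of step 4.4 — loss = loss1 + loss2, both terms mean squared errors — can be sketched as follows (the array shapes are illustrative assumptions):

```python
import numpy as np

def joint_loss(u, x_true, S, S_true):
    """Joint loss of step 4.4: MSE of the denoising-reconstruction
    output u against the ground truth x_true (loss1), plus MSE of the
    separated tracer maps S against their ground truths (loss2)."""
    loss1 = np.mean((u - x_true) ** 2)   # reconstruction/denoising term
    loss2 = np.mean((S - S_true) ** 2)   # separation term, both tracers
    return loss1 + loss2

# toy shapes: (frames, H, W) for u, (tracers, frames, H, W) for S
u, x_true = np.zeros((4, 8, 8)), np.ones((4, 8, 8))
S = S_true = np.zeros((2, 4, 8, 8))
loss = joint_loss(u, x_true, S, S_true)  # loss1 contributes 1.0, loss2 is 0
```

Summing the two terms lets gradients from the separation error also flow back into the denoising-reconstruction parameters, which is the point of the joint training.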
(5) And (6) evaluating the result.
The reconstruction-separation results are generally evaluated with the peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM) indices.
In the formulas, the means of the predicted image and the ground-truth image, their standard deviations, and their covariance appear as the first- and second-order statistics; K is the total number of image blocks, MAX is the maximum value in the image, C1 = (0.01·MAX)^2 and C2 = (0.03·MAX)^2 are constants, Si,n and its ground-truth counterpart respectively denote the predicted and true concentration values of tracer i at the n-th pixel of the image, and N is the total number of pixels in the image.
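A sketch of the two evaluation indices, using the constants given above; this is a single-window SSIM over the whole image, whereas MSSIM would average it over the K image blocks:

```python
import numpy as np

def psnr(pred, truth):
    """Peak signal-to-noise ratio in dB, with MAX the image maximum."""
    mse = np.mean((pred - truth) ** 2)
    return 10.0 * np.log10(truth.max() ** 2 / mse)

def ssim_global(pred, truth):
    """Single-window SSIM with C1 = (0.01*MAX)^2, C2 = (0.03*MAX)^2.
    MSSIM averages this quantity over K image blocks."""
    MAX = truth.max()
    C1, C2 = (0.01 * MAX) ** 2, (0.03 * MAX) ** 2
    mu_p, mu_t = pred.mean(), truth.mean()
    var_p, var_t = pred.var(), truth.var()
    cov = np.mean((pred - mu_p) * (truth - mu_t))
    return ((2 * mu_p * mu_t + C1) * (2 * cov + C2)) / \
           ((mu_p**2 + mu_t**2 + C1) * (var_p + var_t + C2))

truth = np.random.default_rng(1).random((32, 32))
# a smaller perturbation should yield a higher PSNR
better, worse = psnr(truth + 0.01, truth), psnr(truth + 0.1, truth)
```

An identical prediction gives SSIM = 1, and PSNR rises as the perturbation shrinks, which is the behavior the evaluation relies on.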
(6) And acquiring and comparing experimental data.
In the real experiment, five male rhesus monkeys (macaques) weighing 4.7–8.7 kg underwent dynamic PET scanning on a high-resolution small-animal PET scanner (SHR-38000; Hamamatsu Photonics K.K., Hamamatsu, Japan). Before the first scan the monkeys received an intravenous dose of approximately 150 MBq of 18F-FDG in the right lower limb; before the second scan approximately 240 MBq of 18F-BCPP-FE was injected, with more than one week between scans to ensure complete metabolism of the tracer in vivo. During scanning, to activate the hand region of the somatosensory cortex of the left hemisphere, a vibrator (mini MASSAGER G-2; Kawasaki-Seiji Co., Ltd., Tokyo, Japan) applied a 93 ± 2 Hz tactile stimulus to the monkey's right forepaw. Scanning lasted 120 minutes in total, with a sampling protocol of 6 × 10 s, 2 × 30 s, 8 × 60 s, 10 × 300 s and 6 × 600 s, finally yielding 32 frames of dynamic PET data with an image size of 124 × 148 × 108, from which 80 slices were selected. After the two tracer concentration maps were superimposed, the mixed concentration map was projected into a sinogram; this was done with a simple strip-integral system model in the Michigan Image Reconstruction Toolbox (Fessler 1994), with 200 projection angles and 200 detector bins, yielding a 200 × 200 sinogram to which Poisson noise was added. The brain data of one of the five monkeys was randomly selected as the test set and the remaining four served as the training set, i.e., a training-to-test ratio of 4:1.
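The sampling protocol quoted above is internally consistent, as a quick arithmetic check confirms:

```python
# sanity check of the dynamic sampling protocol:
# 6x10s, 2x30s, 8x60s, 10x300s, 6x600s should give 32 frames over 120 min
protocol = [(6, 10), (2, 30), (8, 60), (10, 300), (6, 600)]  # (frames, frame length s)
n_frames = sum(n for n, _ in protocol)
total_seconds = sum(n * s for n, s in protocol)
print(n_frames, total_seconds)  # 32 frames, 7200 s = 120 minutes
```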
Figures 3(a)–3(f) compare the reconstruction results of the conventional methods and the neural networks. Owing to the introduction of the basic reconstruction model and the system matrix, the BCD-ED network and the two conventional methods, FBP and MLEM, constrain the image contour better than the other two neural-network methods and are closer to the ground-truth map, although the conventional reconstructions are comparatively noisy. The recently proposed FBP-CNN network incorporates a deep learning method, but it requires a large number of training parameters, is difficult to train, and its final reconstruction clearly contains more noise and lacks image detail. The UNET results also do not perform well in image detail, and its concentration values run high. By comparison, the BCD-ED network has the fewest training parameters, its reconstruction is closer to the truth in both image detail and concentration value, and the image is smoother thanks to the denoising module.
As Figs. 4(a)–4(d) and 5(a)–5(d) show, although the separation modules of the three networks all use the basic encoder-decoder structure, the BCD-ED network is closer to the ground truth in shape detail and concentration value than the other two methods, because model constraints are introduced in the preceding reconstruction module and the reconstructed mixed concentration map fed into the separation module is therefore of higher quality. Moreover, the separation module of the BCD-ED network has same-layer skip connections; compared with the FBP-CNN network, which lacks skip connections, this feature-fusion approach is more conducive to recovering image details.
The embodiments described above are presented to enable a person having ordinary skill in the art to make and use the invention. It will be readily apparent to those skilled in the art that various modifications to the above-described embodiments may be made, and the generic principles defined herein may be applied to other embodiments without the use of inventive faculty. Therefore, the present invention is not limited to the above embodiments, and those skilled in the art should make improvements and modifications to the present invention based on the disclosure of the present invention within the protection scope of the present invention.
Claims (7)
1. A single-scanning double-tracer PET signal separation method based on BCD-ED comprises the following steps:
(1) injecting mixed double tracers into the biological tissue, and simultaneously carrying out one-time dynamic PET scanning to obtain a PET dynamic sinogram sequence y corresponding to the mixed double tracers; the mixed double tracer consists of two isotopically labeled tracers I and II;
(2) respectively injecting tracer I and tracer II into the same biological tissue and separately performing dynamic PET scanning to obtain the PET dynamic sinogram sequences y_I and y_II corresponding to tracer I and tracer II;
(3) computing, with a PET reconstruction algorithm, the PET dynamic image sequences x_I and x_II corresponding to y_I and y_II, and superimposing x_I and x_II to obtain the ground-truth mixed dual-tracer PET dynamic image sequence x_true;
(4) repeating the above steps multiple times to obtain a large number of samples and dividing the samples into a training set and a test set, each sample group containing y, y_I, y_II, x_I, x_II and x_true;
(5) Constructing a BCD-ED network consisting of a reconstruction module, a denoising module and a separation module, and training the network structure by using a training set sample to obtain a reconstruction-separation combined model of the dynamic double-tracing PET signal;
(6) inputting the test-set samples one by one into the joint model, reconstructing the PET dynamic image sequence of the mixed dual tracer, and separating it to obtain the denoised PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II.
2. The single scan dual tracer PET signal separation method of claim 1, wherein: the reconstruction module of the BCD-ED network solves the PET dynamic image sequence x corresponding to y in the input sample with a maximum likelihood estimation algorithm, strengthens the constraint of the reconstruction problem by adding a regularization term, and during the solution convolves x in different directions with several neural-network convolution kernels to generate sparse images Xk having the low-rank property.
3. The single scan dual tracer PET signal separation method of claim 2, wherein: the denoising module of the BCD-ED network first decomposes the sparse images Xk obtained by the reconstruction module into a low-rank matrix Lk and a Poisson noise matrix Wk, and then solves the following objective function with a singular value thresholding algorithm;
wherein ck denotes the k-th convolution kernel, λk is a threshold parameter controlling the sparsity of Lk, β is a hyperparameter controlling the smoothness of the image, K is the number of convolution kernels used in the reconstruction solution, and ||·||* is the nuclear norm;
obtaining:
wherein the inverse of ck is used for deconvolution, the likelihood term denotes the estimate obtained by converting the maximum likelihood function of the PET dynamic image sequence x given the observed data y into the negative logarithm of the likelihood to be minimized, ||·||_2 is the 2-norm, u is the denoised PET dynamic image sequence that is output, and the superscripts i and i+1 denote the iteration number, i being a natural number.
4. The single scan dual tracer PET signal separation method of claim 1, wherein: the separation module of the BCD-ED network comprises an encoding part and a decoding part, wherein the encoding part is formed by sequentially connecting a down-sampling block D1, a pooling layer C1, a down-sampling block D2, a pooling layer C2, a down-sampling block D3, a pooling layer C3 and a down-sampling block D4 from input to output, and the decoding part is formed by sequentially connecting an up-sampling block U1, a deconvolution layer E1, an up-sampling block U2, a deconvolution layer E2, an up-sampling block U3 and a deconvolution layer E3 from input to output, wherein:
each of the down-sampling blocks D1–D4 comprises three sequentially connected layers: the first layer is a convolutional layer with a 3 × 3 convolution kernel that extracts features; the second layer is a BatchNorm layer that normalizes the output of the previous layer; the third layer is a ReLU layer that applies the activation function to the output of the previous layer; D1–D4 generate 64, 128, 256 and 512 feature maps respectively;
the convolution kernels of the pooling layers C1–C3 are all 2 × 2 and halve the size of the input feature map to reduce the amount of convolution computation;
the up-sampling blocks U1–U3 each comprise three sequentially connected layers: the first layer is a convolutional layer with a 3 × 3 convolution kernel that extracts features; the second layer is a BatchNorm layer that normalizes the output of the previous layer; the third layer is a ReLU layer that applies the activation function to the output of the previous layer; the up-sampling block U3 further comprises a fourth layer, a convolutional layer with a 1 × 1 kernel, which reduces the number of channels to the specified number for the output result;
the input of U1 is the concatenation, in the channel dimension, of the outputs of D3 and D4; the input of U2 is the concatenation, in the channel dimension, of the outputs of D2 and E1; the input of U3 is the concatenation, in the channel dimension, of the outputs of D1 and E2; U1–U3 generate 256, 128 and 64 feature maps respectively;
the deconvolution layers E1–E3 enlarge the input feature image to restore the feature-map size.
5. The single scan dual tracer PET signal separation method of claim 3, wherein: the process of training the BCD-ED network structure in the step (5) is as follows:
5.1 initializing network parameters including bias vectors and weight matrixes among network layers, learning rate and maximum iteration times;
5.2 taking y in the training-set sample as the input of the reconstruction module, computing the denoised PET dynamic image sequence u in combination with the denoising module, and then computing the difference between u and the ground truth x_true with the loss function loss1;
5.3 inputting u into the separation module and outputting the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II, then computing with the loss function loss2 the differences between S_I and x_I and between S_II and x_II;
and 5.4, carrying out supervision training on the whole network by using a combined loss function loss which is loss1+ loss2, and guiding the network to carry out back propagation and gradient descent by using a root Mean Square Error (MSE) as a loss error until the loss function loss converges or reaches the maximum iteration number, thereby completing training to obtain the reconstruction-separation combined model of the dynamic double-tracing PET signal.
6. The single scan dual tracer PET signal separation method of claim 5, wherein: the expression of the loss function loss1 is as follows:
wherein u^(i+1)_n is the concentration value at the n-th pixel of the PET dynamic image sequence u obtained at the (i+1)-th iteration, N is the number of pixels in the PET dynamic image sequence, and x_true,n is the concentration value at the n-th pixel of the ground-truth PET dynamic image sequence x_true.
7. The single scan dual tracer PET signal separation method of claim 5, wherein: the expression of the loss function loss2 is as follows:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110840914.9A CN113476064B (en) | 2021-07-23 | 2021-07-23 | BCD-ED-based single-scanning double-tracer PET signal separation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113476064A true CN113476064A (en) | 2021-10-08 |
CN113476064B CN113476064B (en) | 2023-09-01 |
Family
ID=77943715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110840914.9A Active CN113476064B (en) | 2021-07-23 | 2021-07-23 | BCD-ED-based single-scanning double-tracer PET signal separation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113476064B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114998249A (en) * | 2022-05-30 | 2022-09-02 | 浙江大学 | Space-time attention mechanism constrained dual-tracer PET imaging method |
US11540798B2 (en) | 2019-08-30 | 2023-01-03 | The Research Foundation For The State University Of New York | Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140119628A1 (en) * | 2012-10-28 | 2014-05-01 | Technion Research & Development Foundation Limited | Image reconstruction in computed tomography |
US20150287223A1 (en) * | 2014-04-04 | 2015-10-08 | The Board Of Trustees Of The University Of Illinois | Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms |
CN109009179A (en) * | 2018-08-02 | 2018-12-18 | 浙江大学 | Identical isotope labelling dual tracer PET separation method based on depth confidence network |
CN109993825A (en) * | 2019-03-11 | 2019-07-09 | 北京工业大学 | A kind of three-dimensional rebuilding method based on deep learning |
CN110490832A (en) * | 2019-08-23 | 2019-11-22 | 哈尔滨工业大学 | A kind of MR image reconstruction method based on regularization depth image transcendental method |
CN111127356A (en) * | 2019-12-18 | 2020-05-08 | 清华大学深圳国际研究生院 | Image blind denoising system |
CN111166368A (en) * | 2019-12-19 | 2020-05-19 | 浙江大学 | Single-scanning double-tracer PET signal separation method based on pre-training GRU |
CN111640075A (en) * | 2020-05-23 | 2020-09-08 | 西北工业大学 | Underwater image occlusion removing method based on generation countermeasure network |
Non-Patent Citations (2)
Title |
---|
QING Minmin: "Dual-tracer PET image reconstruction based on deep learning", China Excellent Master's and Doctoral Dissertations, vol. 2021, no. 02 *
YE Huajun et al.: "A review of positron emission tomography reconstruction algorithms", Journal of Biomedical Engineering, no. 19 *
Also Published As
Publication number | Publication date |
---|---|
CN113476064B (en) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111627082B (en) | PET image reconstruction method based on filtering back projection algorithm and neural network | |
CN111325686B (en) | Low-dose PET three-dimensional reconstruction method based on deep learning | |
Yuan et al. | SIPID: A deep learning framework for sinogram interpolation and image denoising in low-dose CT reconstruction | |
CN113516210B (en) | Lung adenocarcinoma squamous carcinoma diagnosis model training method and device based on PET/CT | |
CN109636869B (en) | Dynamic PET image reconstruction method based on non-local total variation and low-rank constraint | |
US11508101B2 (en) | Dynamic dual-tracer PET reconstruction method based on hybrid-loss 3D convolutional neural networks | |
CN109993808B (en) | Dynamic double-tracing PET reconstruction method based on DSN | |
CN113160347B (en) | Low-dose double-tracer PET reconstruction method based on attention mechanism | |
WO2024011797A1 (en) | Pet image reconstruction method based on swin-transformer regularization | |
Shao et al. | SPECTnet: a deep learning neural network for SPECT image reconstruction | |
CN114387236A (en) | Low-dose Sinogram denoising and PET image reconstruction method based on convolutional neural network | |
Feng et al. | Rethinking PET image reconstruction: ultra-low-dose, sinogram and deep learning | |
CN116645283A (en) | Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network | |
CN113476064B (en) | BCD-ED-based single-scanning double-tracer PET signal separation method | |
CN114358285A (en) | PET system attenuation correction method based on flow model | |
CN116503506B (en) | Image reconstruction method, system, device and storage medium | |
CN115984401A (en) | Dynamic PET image reconstruction method based on model-driven deep learning | |
CN116245969A (en) | Low-dose PET image reconstruction method based on deep neural network | |
CN116152373A (en) | Low-dose CT image reconstruction method combining neural network and convolutional dictionary learning | |
CN113379863B (en) | Dynamic double-tracing PET image joint reconstruction and segmentation method based on deep learning | |
US20240062061A1 (en) | Methods for training a cnn and for processing an inputted perfusion sequence using said cnn | |
CN115423892A (en) | Attenuation-free correction PET reconstruction method based on maximum expectation network | |
Wan et al. | Deep-learning based joint estimation of dual-tracer PET image activity maps and clustering of time activity curves | |
CN111920436A (en) | Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network | |
CN108765318A (en) | A kind of dynamic PET images factor treatment based on dynamics cluster |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |