CN113476064A - Single-scanning double-tracer PET signal separation method based on BCD-ED - Google Patents

Single-scanning double-tracer PET signal separation method based on BCD-ED

Info

Publication number
CN113476064A
Authority
CN
China
Prior art keywords
pet
tracer
layer
network
bcd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110840914.9A
Other languages
Chinese (zh)
Other versions
CN113476064B (en)
Inventor
刘华锋
童珺怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110840914.9A priority Critical patent/CN113476064B/en
Publication of CN113476064A publication Critical patent/CN113476064A/en
Application granted granted Critical
Publication of CN113476064B publication Critical patent/CN113476064B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5235 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/48 Diagnostic techniques
    • A61B6/481 Diagnostic techniques involving the use of contrast agents


Abstract

The invention discloses a single-scanning double-tracer PET signal separation method based on BCD-ED, which combines a traditional iterative reconstruction algorithm with deep learning and can accurately separate two single-tracer PET images from a mixed dual-tracer PET image in a data-driven manner. The BCD-ED framework adopted by the invention has three modules: a reconstruction module, a denoising module and a separation module. The mixed-tracer sinogram collected by PET is reconstructed with a maximum likelihood estimation algorithm, and the reconstructed image is denoised with a low-rank regularization model, so that the shape of the reconstructed mixed concentration map is better constrained and the noise is lower; an encoder-decoder model then learns the mapping between the mixed-tracer concentration map and the two full-dose single-tracer concentration maps, so that the detail information of the single-tracer concentration maps can be recovered clearly.

Description

Single-scanning double-tracer PET signal separation method based on BCD-ED
Technical Field
The invention belongs to the technical field of PET signal separation, and particularly relates to a single-scan dual-tracer PET signal separation method based on BCD-ED (block coordinate descent with an encoder-decoder).
Background
Positron Emission Tomography (PET) is a typical emission computed tomography technique, usually used in combination with tracers labeled by isotopes such as ¹¹C, ¹⁸F, ¹⁵O and ¹³N, and has the advantages of high sensitivity to the tracer, non-invasiveness and so on. The dynamic change of the tracer during a PET scan can characterize and quantify tissue function in vivo, yielding physiological indexes of the region such as glucose metabolism, blood flow and hypoxia, which are used in research on various diseases such as tumors, heart disease, diabetes and neurological diseases. The common labeling nuclides can be divided into short, medium and long half-life nuclides according to the radioactive half-life; the half-life affects the synthesis, transportation, dose, scan duration and performance of a tracer as well as the sensitivity required of the PET detector, so the nuclides must be weighed against the actual situation. Short half-life nuclides such as ⁸²Rb, ¹⁵O, ¹³N, ⁶²Cu and ¹¹C allow multiple scans within a short time, but require a laboratory equipped with a cyclotron, high injection doses, short synthesis times, or more sensitive PET detectors. Long half-life nuclides such as ⁶⁴Cu and ¹²⁴I can be used for longitudinal, long-duration studies of physiological activity and suit experimental sites far from a cyclotron. The medium half-life nuclides are mainly ¹⁸F and ⁶⁸Ga, and their moderate half-life makes them frequently used; since ¹⁸F has, compared with other nuclides, lower positron energy and range, a medium half-life, a higher branching ratio and easy labeling of biological molecules, it is the most widely used nuclide in scientific research and in the clinic.
Compared with multi-tracer PET, single-tracer PET imaging can only capture the physiological activity characteristics of one aspect; the information is limited and diseases cannot be judged accurately. By imaging radioactive tracers sensitive to different physiological changes, multi-tracer PET can provide complementary information to represent a more complete disease state, reducing the possibility of misdiagnosis and guiding the doctor toward a more effective treatment plan. Early dual-tracer imaging often acquired the two tracers separately, i.e., a dual-injection-dual-scan mode, which lets each tracer decay without interference from the other, but causes great discomfort to the patient because it requires a long time. To solve this problem, Koeppe et al. proposed a dual-injection-single-scan mode, i.e., a unified scan of both tracers: the signal superposition effect of the two tracers is reduced by injecting them at a short interval, e.g., 10-20 minutes, and the different tracer signals are separated by analyzing pixel time activity curves (TAC) or modeling the target region (ROI) with non-linear least squares (NLS). Although this mode merges two scans into one and reduces the scanning time to a certain extent, the 10-20 minute injection interval is still far from ideal.
In order to realize a completely gapless scanning mode, many researchers have invested great effort. At present, most gapless dual-tracer imaging approaches use prior information, such as TAC data and compartment model data, to separate the different tracers. However, separation based on prior information places high demands on the accuracy of that information and on the signal-to-noise ratio of the dual-tracer data, which limits its practical application. Therefore, how to distinguish tracers by their essential features has become one of the important research directions in tracer imaging.
Disclosure of Invention
In view of the above, the present invention provides a BCD-ED based single-scan dual-tracer PET signal separation method that can accurately separate two single-tracer PET images from a mixed dual-tracer PET image in a data-driven manner, by means of deep learning, a powerful feature-extraction tool.
A single-scanning double-tracer PET signal separation method based on BCD-ED comprises the following steps:
(1) injecting mixed double tracers into the biological tissue, and simultaneously carrying out one-time dynamic PET scanning to obtain a PET dynamic sinogram sequence y corresponding to the mixed double tracers; the mixed double tracer consists of two isotopically labeled tracers I and II;
(2) respectively injecting tracer I and tracer II into the same biological tissue, and separately carrying out dynamic PET scanning to obtain the PET dynamic sinogram sequences y_I and y_II corresponding to tracer I and tracer II;
(3) calculating the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II by using a PET reconstruction algorithm, and superposing x̂_I and x̂_II to obtain the ground-truth PET dynamic image sequence x_true of the mixed dual tracer;
(4) repeatedly executing the above steps to obtain a large number of samples, and dividing the samples into a training set and a test set, wherein each group of samples comprises y, y_I, y_II, x̂_I, x̂_II and x_true;
(5) constructing a BCD-ED network consisting of a reconstruction module, a denoising module and a separation module, and training the network structure with the training set samples to obtain a joint reconstruction-separation model of the dynamic dual-tracer PET signal;
(6) inputting the test set samples into the joint model one by one, reconstructing the PET dynamic image sequence of the mixed dual tracer, and separating, after denoising, the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II.
Further, the reconstruction module of the BCD-ED network adopts a maximum likelihood estimation algorithm to solve the PET dynamic image sequence x corresponding to y in an input sample; the constraint of the reconstruction problem is strengthened by adding a regularization term, and meanwhile a plurality of neural network convolution kernels are used to convolve x in different directions during the reconstruction, generating sparse images X_k that have the low-rank property.
Further, the denoising module of the BCD-ED network first decomposes the sparse images X_k obtained by the reconstruction module into a low-rank matrix L_k and a Poisson noise matrix W_k, and then solves the following objective function with a singular value thresholding algorithm:

$$\min_{x,\{L_k\}} f(x) + \sum_{k=1}^{K}\left(\frac{\beta}{2}\left\|c_k * x - L_k\right\|_2^2 + \lambda_k\left\|L_k\right\|_*\right)$$

wherein c_k denotes the k-th convolution kernel, λ_k is a threshold parameter controlling the sparsity of L_k, β is a hyper-parameter controlling the smoothness of the image, K is the number of convolution kernels used in the reconstruction, and $\|\cdot\|_*$ is the nuclear norm;
obtaining:

$$L_k^{i+1} = \mathrm{SVT}_{\lambda_k/\beta}\left(c_k * x^{i}\right)$$

$$x^{i+1} = \arg\min_{x} f(x) + \frac{\beta}{2}\sum_{k=1}^{K}\left\|c_k * x - L_k^{i+1}\right\|_2^2$$

$$u^{i+1} = \frac{1}{K}\sum_{k=1}^{K}\tilde{c}_k * L_k^{i+1}$$

wherein $\mathrm{SVT}_{\tau}(\cdot)$ denotes singular value thresholding, i.e. soft-thresholding of the singular values with threshold τ; $\tilde{c}_k$ is the inverse of c_k, used for deconvolution; $f(x) = -\ln p(y \mid x)$ converts the maximization of the likelihood of the PET dynamic image sequence x given the observed data y into the minimization of the negative log-likelihood; $\|\cdot\|_2$ is the 2-norm; u is the denoised PET dynamic image sequence that is output; and the superscripts i and i+1 denote the iteration number, i being a natural number.
Furthermore, the separation module of the BCD-ED network integrates the encoder-decoder concept with a same-layer skip-connection structure and comprises an encoding part and a decoding part. The encoding part is formed by sequentially connecting a downsampling block D1, a pooling layer C1, a downsampling block D2, a pooling layer C2, a downsampling block D3, a pooling layer C3 and a downsampling block D4 from input to output, and the decoding part is formed by sequentially connecting an upsampling block U1, a deconvolution layer E1, an upsampling block U2, a deconvolution layer E2, an upsampling block U3 and a deconvolution layer E3 from input to output, wherein:
each of the downsampling blocks D1-D4 comprises three layers connected in sequence: the first layer is a convolution layer with 3 × 3 convolution kernels, which extracts features; the second layer is a BatchNorm layer, which normalizes the output of the previous layer; the third layer is a ReLU layer, which applies the activation function to the output of the previous layer. D1-D4 generate 64, 128, 256 and 512 feature maps respectively. After the encoding part, the number of channels reaches its maximum at the bottom of the network, and by then the original image has been downsampled to a very small size, so a large amount of the original feature information has been extracted.
The pooling layers C1-C3 all use 2 × 2 kernels and halve the size of the input feature image to reduce the amount of convolution computation. Because the feature map shrinks, convolution kernels of the same size can extract features over a larger region of the original image, giving higher robustness and resistance to overfitting against small disturbances such as shifts and rotations of the image.
The upsampling blocks U1-U3 each comprise three layers connected in sequence: the first layer is a convolution layer with 3 × 3 convolution kernels, which extracts features; the second layer is a BatchNorm layer, which normalizes the output of the previous layer; the third layer is a ReLU layer, which applies the activation function to the output of the previous layer. The upsampling block U3 further includes a fourth layer, a convolution layer with 1 × 1 kernels, which reduces the number of channels to the required number for the output.
The input of U1 is the concatenation of the outputs of D3 and D4 along the channel dimension, the input of U2 is the concatenation of the outputs of D2 and E1 along the channel dimension, and the input of U3 is the concatenation of the outputs of D1 and E2 along the channel dimension; U1-U3 generate 256, 128 and 64 feature maps respectively. Because downsampling in the encoding stage loses part of the image information and image details are hard to recover during decoding, feature maps of the encoding part with the same layer size are introduced during decoding via same-layer skip connections; concatenating the two feature maps achieves feature fusion, so the network can also exploit, during decoding, original information that was not discarded by the pooling layers and thus recover a clearer image.
The deconvolution layers E1-E3 double the size of the input feature image and restore the feature map size, solving the problem that the resolution of the feature image becomes small after a series of convolution operations.
In the encoding stage of the BCD-ED network, the image size is reduced through downsampling blocks and pooling layers and shallow features are extracted; in the decoding stage, deep features are obtained through deconvolution layers and upsampling blocks. Meanwhile, through the skip-layer operations between the downsampling and upsampling blocks, the feature maps obtained in the encoding stage are combined with those obtained in the decoding stage, fusing deep and shallow features and refining the image, and prediction and separation are performed according to the resulting feature maps; the high-resolution information passed directly from the encoding module to the decoding module at the same level via the skip-layer operation provides finer features for the separation.
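A minimal PyTorch sketch of such a separation module is given below. The 3 × 3 convolutions with BatchNorm and ReLU, the 64/128/256/512 channel counts, the 2 × 2 pooling and deconvolution layers, the final 1 × 1 convolution and the same-layer skip concatenations follow the description above; the exact wiring around the bottleneck and all identifiers are assumptions of this sketch, not the claimed implementation:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 convolution -> BatchNorm -> ReLU, as in the blocks D1-D4 / U1-U3."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SeparationED(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.d1, self.d2 = conv_block(in_ch, 64), conv_block(64, 128)
        self.d3, self.d4 = conv_block(128, 256), conv_block(256, 512)
        self.pool = nn.MaxPool2d(2)                           # 2x2 pooling halves the size
        self.e1 = nn.ConvTranspose2d(512, 256, 2, stride=2)   # 2x2 deconv doubles the size
        self.e2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.e3 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.u1 = conv_block(256 + 256, 256)                  # skip from D3
        self.u2 = conv_block(128 + 128, 128)                  # skip from D2
        self.u3 = nn.Sequential(conv_block(64 + 64, 64),      # skip from D1
                                nn.Conv2d(64, out_ch, kernel_size=1))

    def forward(self, x):
        f1 = self.d1(x)
        f2 = self.d2(self.pool(f1))
        f3 = self.d3(self.pool(f2))
        f4 = self.d4(self.pool(f3))                           # bottleneck, 512 maps
        g = self.u1(torch.cat([self.e1(f4), f3], dim=1))      # same-layer skip connection
        g = self.u2(torch.cat([self.e2(g), f2], dim=1))
        return self.u3(torch.cat([self.e3(g), f1], dim=1))
```

With d dynamic frames stacked in the channel dimension, `SeparationED(in_ch=d, out_ch=2 * d)` would map a mixed sequence to the two separated single-tracer sequences; this channel layout is likewise an assumption of the sketch.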
Further, the training process of the BCD-ED network structure in step (5) is as follows:
5.1 initializing network parameters including bias vectors and weight matrixes among network layers, learning rate and maximum iteration times;
5.2 taking y in the training set sample as the input of the reconstruction module, computing the denoised PET dynamic image sequence u in combination with the denoising module, and then computing the difference between u and the truth value x_true via the loss function loss1;
5.3 inputting u into the separation module and outputting the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II, and then computing, via the loss function loss2, the differences between S_I and x̂_I and between S_II and x̂_II;
5.4 carrying out supervised training of the whole network with the combined loss function loss = loss1 + loss2, using the mean squared error (MSE) as the loss to guide back-propagation and gradient descent in the network until the loss function converges or the maximum number of iterations is reached, thereby completing training and obtaining the joint reconstruction-separation model of the dynamic dual-tracer PET signal; an illustrative sketch of one such training step follows.
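The sketch below hedges every name: `bcd_reconstruct` (the reconstruction and denoising modules) and `separator` are placeholders for the networks described above, and the channel-wise split of the separator output into S_I and S_II is an assumption:

```python
import torch

mse = torch.nn.MSELoss()

def train_step(y, x_true, x_I, x_II, bcd_reconstruct, separator, optimizer):
    optimizer.zero_grad()
    u = bcd_reconstruct(y)                   # denoised mixed concentration map
    s = separator(u)                         # stacked single-tracer predictions
    s_I, s_II = torch.chunk(s, 2, dim=1)     # assumed channel-wise split
    loss1 = mse(u, x_true)                   # reconstruction-denoising loss
    loss2 = mse(s_I, x_I) + mse(s_II, x_II)  # separation loss
    loss = loss1 + loss2                     # combined loss = loss1 + loss2
    loss.backward()                          # back-propagation
    optimizer.step()                         # gradient descent update
    return loss.item()
```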
Further, the expression of the loss function loss1 is as follows:

$$\mathrm{loss1} = \frac{1}{N}\sum_{n=1}^{N}\left(u_n^{i+1} - x_{true,n}\right)^2$$

wherein $u_n^{i+1}$ is the concentration value at the n-th pixel of the PET dynamic image sequence u obtained at the (i+1)-th iteration, N is the number of pixels of the PET dynamic image sequence, and $x_{true,n}$ is the concentration value at the n-th pixel of the ground-truth PET dynamic image sequence x_true.
Further, the expression of the loss function loss2 is as follows:

$$\mathrm{loss2} = \frac{1}{N}\sum_{n=1}^{N}\left[\left(S_{I,n} - \hat{x}_{I,n}\right)^2 + \left(S_{II,n} - \hat{x}_{II,n}\right)^2\right]$$

wherein $S_{I,n}$ and $S_{II,n}$ are the concentration values at the n-th pixel of the PET dynamic image sequences S_I and S_II respectively, $\hat{x}_{I,n}$ and $\hat{x}_{II,n}$ are the concentration values at the n-th pixel of the sequences x̂_I and x̂_II respectively, and N is the number of pixels of the PET dynamic image sequence.
The invention realizes the reconstruction and separation of the mixed-tracer dynamic PET concentration distribution image through the BCD-ED network, jointly reconstructing and separating the mixed dynamic dual-tracer PET signal from the dynamic sinogram sequence. The BCD-ED network adopted by the invention is based on a traditional low-rank regularization model and an encoder-decoder structure and can recover more single-tracer image detail with fewer parameters: it reconstructs the mixed-tracer sinogram acquired by PET with a maximum likelihood estimation algorithm, denoises the reconstructed image with a low-rank regularization model, and finally learns the mapping between the mixed-tracer concentration map and the two full-dose single-tracer concentration maps with an encoder-decoder model, so that the detail information of the single-tracer concentration maps can be recovered clearly.
The invention is a direct separation algorithm. Its advantages are that the traditional iterative reconstruction algorithm is combined with deep learning, so the shape of the reconstructed mixed concentration map is better constrained and the noise is lower; the encoder-decoder separation module can learn the mapping between the mixed concentration map and the single-tracer concentration maps, freeing the separation of dynamic dual-tracer PET signals from traditional reconstruction algorithms; and joint reconstruction and separation are carried out directly from the sinogram, bringing dual tracers closer to clinical application.
Drawings
FIG. 1 is a schematic flow chart of the dynamic dual tracer PET signal separation method of the present invention.
FIG. 2 is a schematic diagram of a BCD-ED network framework according to the present invention.
FIG. 3(a) is the true concentration distribution image of frame 21 of the mixed tracer ¹⁸F-BCPP-FE + ¹⁸F-FDG.
FIG. 3(b) is the predicted image of frame 21 of the mixed tracer ¹⁸F-BCPP-FE + ¹⁸F-FDG under the BCD-ED network.
FIG. 3(c) is the predicted image of frame 21 of the mixed tracer ¹⁸F-BCPP-FE + ¹⁸F-FDG under the FBP algorithm.
FIG. 3(d) is the predicted image of frame 21 of the mixed tracer ¹⁸F-BCPP-FE + ¹⁸F-FDG under the MLEM algorithm.
FIG. 3(e) is the predicted image of frame 21 of the mixed tracer ¹⁸F-BCPP-FE + ¹⁸F-FDG under the UNET network.
FIG. 3(f) is the predicted image of frame 21 of the mixed tracer ¹⁸F-BCPP-FE + ¹⁸F-FDG under the FBP-CNN network.
FIG. 4(a) is the true concentration distribution image of frame 21 of ¹⁸F-FDG.
FIG. 4(b) is the predicted image of frame 21 of ¹⁸F-FDG under the BCD-ED network.
FIG. 4(c) is the predicted image of frame 21 of ¹⁸F-FDG under the UNET network.
FIG. 4(d) is the predicted image of frame 21 of ¹⁸F-FDG under the FBP-CNN network.
FIG. 5(a) is the true concentration distribution image of frame 21 of ¹⁸F-BCPP-FE.
FIG. 5(b) is the predicted image of frame 21 of ¹⁸F-BCPP-FE under the BCD-ED network.
FIG. 5(c) is the predicted image of frame 21 of ¹⁸F-BCPP-FE under the UNET network.
FIG. 5(d) is the predicted image of frame 21 of ¹⁸F-BCPP-FE under the FBP-CNN network.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
As shown in FIG. 1, the single-scan dynamic dual-tracer PET signal separation method based on the pre-trained BCD-ED network of the invention comprises the following steps:
(1) training set data is prepared.
1.1, injecting a mixed double tracer into a biological tissue, and simultaneously carrying out dynamic PET scanning to obtain a PET dynamic sinogram sequence y corresponding to the mixed double tracer, wherein the mixed double tracer consists of a tracer I and a tracer II which are marked by two isotopes;
1.2 respectively injecting tracer I and tracer II into the same biological tissue, and separately carrying out dynamic PET scanning to obtain the PET dynamic sinogram sequences y_I and y_II corresponding to tracer I and tracer II;
1.3 calculating the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II by using a PET reconstruction algorithm, and superposing x̂_I and x̂_II to obtain the ground-truth PET dynamic image sequence x_true of the mixed dual tracer:

$$x_{true} = \hat{x}_I + \hat{x}_{II}$$
1.4 repeating the above steps many times to obtain multiple PET dynamic sinogram sequences y, y_I and y_II and PET dynamic image sequences x_true, x̂_I and x̂_II:

$$y = \left[y_1, y_2, \ldots, y_d\right]$$

$$y_I = \left[y_{I,1}, y_{I,2}, \ldots, y_{I,d}\right], \quad y_{II} = \left[y_{II,1}, y_{II,2}, \ldots, y_{II,d}\right]$$

$$x_{true} = \left[x_{true,1}, \ldots, x_{true,d}\right], \quad \hat{x}_I = \left[\hat{x}_{I,1}, \ldots, \hat{x}_{I,d}\right], \quad \hat{x}_{II} = \left[\hat{x}_{II,1}, \ldots, \hat{x}_{II,d}\right]$$

wherein y_1 ~ y_d are the mixed dual-tracer sinograms of frames 1 to d in y; y_{I,1} ~ y_{I,d} and y_{II,1} ~ y_{II,d} are the single-tracer sinograms of frames 1 to d in y_I and y_II; x_{true,1} ~ x_{true,d} are the ground-truth mixed dual-tracer concentration maps of frames 1 to d; x̂_{I,1} ~ x̂_{I,d} and x̂_{II,1} ~ x̂_{II,d} are the ground-truth single-tracer concentration maps of frames 1 to d; and d is the number of PET dynamic scan frames.
(2) Training set and test set data are prepared.

From the samples (y, x_true, x̂_I, x̂_II), 4/5 are randomly selected as the training set and the remaining 1/5 as the test set; no sample in the test set appears in the training set.
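As a simple illustration, the split can be done by shuffling sample indices once, which also guarantees that no test sample appears in the training set; the sample container and random seed are assumptions of the sketch:

```python
import numpy as np

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle sample indices once and split 4/5 : 1/5 into train and test."""
    idx = np.random.default_rng(seed).permutation(len(samples))
    n_train = int(train_frac * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    test = [samples[i] for i in idx[n_train:]]
    return train, test
```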
(3) Construct the BCD-ED network shown in FIG. 2. The network framework has three modules, namely a reconstruction module, a denoising module and a separation module, introduced in detail as follows:
in the initialization process, a dynamic sinogram sequence y and a system matrix G of a mixed dual tracer agent collected from PET are input, a maximum likelihood estimation algorithm is selected in a reconstruction module to solve a dynamic concentration map sequence x of the PET, and the expectation of the reconstructed concentration map x is obtained as follows:
Figure BDA00031768249300000810
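For reference, a plain NumPy sketch of this multiplicative MLEM update is shown below; the dense system matrix G and the iteration count are assumptions of the example (a practical implementation would use projector operators rather than an explicit matrix):

```python
import numpy as np

def mlem(G, y, n_iter=50, eps=1e-12):
    """Classic MLEM: multiplicative updates keep the concentration map non-negative."""
    x = np.ones(G.shape[1])                 # uniform initial image
    sens = G.T @ np.ones(G.shape[0])        # sensitivity image, sum_m G_mj
    for _ in range(n_iter):
        ratio = y / np.maximum(G @ x, eps)  # measured / expected counts
        x = x * (G.T @ ratio) / np.maximum(sens, eps)
    return x
```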
The framework strengthens the constraint of the reconstruction problem by adding a regularization term; after the constraint is added, the difference between adjacent pixels in the reconstructed image is reduced, making the reconstructed image smoother. In this process, K neural network convolution kernels c_k convolve the image x in different directions to produce sparse images X_k; the sparse feature images can be learned better from a large amount of data, and these image matrices have the low-rank property:
$$X_k = c_k * x$$
In practice, X_k usually contains some noise. X_k can be decomposed into a low-rank matrix L_k and a Poisson noise matrix W_k, and adding the nuclear norm $\|\cdot\|_*$ achieves image denoising:

$$X_k = L_k + W_k$$

$$\min_{x,\{L_k\}} f(x) + \sum_{k=1}^{K}\left(\frac{\beta}{2}\left\|c_k * x - L_k\right\|_2^2 + \lambda_k\left\|L_k\right\|_*\right)$$
In the above formula, the hyper-parameter β controls the smoothness of the image and λ_k is a threshold parameter controlling the sparsity of L_k. After initializing the parameters λ_k and β, the above equation can be solved by the singular value thresholding method:

$$L_k^{i+1} = \sum_p \left(\sigma_p - \frac{\lambda_k}{\beta}\right)_+ u_p v_p^{T}$$
wherein: l iskCan be represented by the sum of singular values, σpFor the pth maximum singular value, i is the number of iterations, (x)+Max (x,0) can be used for soft threshold shrinkage, concentration map x estimated by the algorithmi+1Can be expressed as the following formula, convolution kernel filter c in neural networkkImage features can be extracted due to the fact that the K convolution kernels c are passedkThe extracted concentration diagram x characteristic can be equivalently expressed with x, so that the characteristic can be obtained
Figure BDA0003176824930000092
Figure BDA0003176824930000093
Then obtaining a concentration graph x by using a soft threshold valuei+1Then using deconvolution
Figure BDA0003176824930000094
Obtaining a denoised image ui+1
Figure BDA0003176824930000095
Figure BDA0003176824930000096
Figure BDA0003176824930000097
Figure BDA0003176824930000098
wherein $\tilde{c}_k$ is the inverse of c_k used for deconvolution, and $f(x) = -\ln p(y \mid x)$ converts the maximization of the likelihood of the PET concentration map x given the observed data y into the minimization of the negative log-likelihood. At this point u^{i+1} is the denoised concentration map: the concentration map x is convolved to extract feature maps, sparsified by soft-threshold shrinkage, and recovered by deconvolution.
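Putting the pieces together, one outer iteration of the block coordinate descent could be sketched as below, reusing the `svt` helper from the earlier sketch; `convolve`, `deconvolve` and `ml_gradient` stand in for the learned kernels c_k, their inverses c̃_k and the gradient of f(x) = -ln p(y|x), and replacing the exact x-update with a single gradient step on the surrogate f(x) + (β/2)‖x − u‖² is an assumption of this sketch:

```python
def bcd_iteration(x, kernels, lambdas, beta, convolve, deconvolve, ml_gradient,
                  step=1e-3):
    """One outer BCD pass: low-rank denoising of the feature maps, then an ML step."""
    # L_k^{i+1}: SVT of each feature map X_k = c_k * x with threshold lambda_k / beta
    L = [svt(convolve(x, c_k), lam / beta)
         for c_k, lam in zip(kernels, lambdas)]
    # u^{i+1}: deconvolve the low-rank parts back to image space and average
    u = sum(deconvolve(L_k, c_k) for L_k, c_k in zip(L, kernels)) / len(kernels)
    # x^{i+1}: one gradient step on the surrogate f(x) + beta/2 * ||x - u||^2
    x_new = x - step * (ml_gradient(x) + beta * (x - u))
    return x_new, u
```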
(4) Inputting a training set into the network for training, wherein the training process comprises the following steps:
4.1 initializing the BCD-ED network, including setting the numbers of input, hidden and output layers; the parameters for initializing the network include the number of iterations, the number of convolution kernels and the learning rate.
4.2 inputting the dynamic sinogram sequence y of the mixed dual tracer collected by PET and the system matrix G into the reconstruction-denoising part of the BCD-ED network for training, and computing the error between x_{true,n} and u^{i+1} with the loss function below, which corrects and updates the bias vectors and weight matrices between the layers of the neural network by gradient descent:

$$\mathrm{loss1} = \frac{1}{N}\sum_{n=1}^{N}\left(u_n^{i+1} - x_{true,n}\right)^2$$

wherein u^{i+1} is the denoised mixed-tracer concentration map obtained by reconstruction, x_{true,n} is the true mixed-tracer concentration value at the n-th pixel of the image, and N is the total number of pixels in the image.
4.3 inputting the three-dimensional reconstructed concentration map with time-frame information obtained from the reconstruction-denoising part into the separation module of the BCD-ED network. The module has a symmetric structure; except for the first and last convolution layers, every convolution layer is followed by a BatchNorm (BN) layer and a ReLU activation layer. Within each level the network extracts image feature information through a convolution layer and then halves the feature image size through a 2 × 2 max-pooling layer to reduce the amount of convolution computation; at the same time, because the feature map shrinks, convolution kernels of the same size can extract features over a larger region of the original image, giving higher robustness and resistance to overfitting against small disturbances such as shifts and rotations. The upsampling module decodes the feature image back to the original size with 2 × 2 deconvolutions. Because downsampling during encoding loses part of the image information and decoding has difficulty recovering image details, same-layer skip connections introduce the feature maps of the encoding module with the same layer size, and the two feature maps are concatenated for feature fusion, so the network can also exploit during decoding the original information not discarded by the pooling layers and recover a clearer image.
As shown in the following formula, the separation module uses the mean squared error (MSE) as the loss to guide back-propagation and gradient descent in the network, and finally outputs the two separated, denoised tracer concentration maps:

$$\mathrm{loss2} = \frac{1}{N}\sum_{i \in \{I,II\}}\sum_{n=1}^{N}\left(S_{i,n} - \hat{x}_{i,n}\right)^2$$

wherein S_{i,n} and x̂_{i,n} denote the predicted and true concentration values of tracer i at the n-th pixel of the image.
4.4 obtaining the loss function loss2 of the separation module, adding it to the loss function loss1 from step 4.2 to obtain the combined loss function, and jointly training the denoising part of the block-coordinate-descent network and the encoder-decoder separation module; after M epochs the training ends, the model parameters are saved, and the trained network is used to separate the tracers in the test set.
(5) Evaluate the results.

The reconstruction-separation results are evaluated using the peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM) indicators:
$$\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^2}{\frac{1}{N}\sum_{n=1}^{N}\left(S_{i,n} - \hat{x}_{i,n}\right)^2}$$

$$\mathrm{MSSIM} = \frac{1}{K}\sum_{k=1}^{K}\frac{\left(2\mu_S \mu_{\hat{x}} + C_1\right)\left(2\sigma_{S\hat{x}} + C_2\right)}{\left(\mu_S^2 + \mu_{\hat{x}}^2 + C_1\right)\left(\sigma_S^2 + \sigma_{\hat{x}}^2 + C_2\right)}$$

wherein $\mu_S$ and $\mu_{\hat{x}}$ are the means of the predicted image and the truth image, $\sigma_S$ and $\sigma_{\hat{x}}$ are their standard deviations, $\sigma_{S\hat{x}}$ is their covariance, K is the total number of image blocks, MAX is the maximum value in the image, $C_1 = (0.01\,\mathrm{MAX})^2$ and $C_2 = (0.03\,\mathrm{MAX})^2$ are constants, S_{i,n} and x̂_{i,n} denote the predicted and true concentration values of tracer i at the n-th pixel of the image, and N is the total number of pixels in the image.
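An illustrative NumPy version of both metrics is sketched below; the block decomposition used for MSSIM (a list of K index windows) is an assumption of the example:

```python
import numpy as np

def psnr(pred, truth):
    """Peak signal-to-noise ratio in dB, with MAX taken from the truth image."""
    mse = np.mean((pred - truth) ** 2)
    return 10.0 * np.log10(truth.max() ** 2 / mse)

def mssim(pred, truth, blocks):
    """Average SSIM over K image blocks; `blocks` is a list of index windows."""
    MAX = truth.max()
    C1, C2 = (0.01 * MAX) ** 2, (0.03 * MAX) ** 2
    vals = []
    for b in blocks:
        p, t = pred[b], truth[b]
        cov = np.mean((p - p.mean()) * (t - t.mean()))  # covariance term
        num = (2 * p.mean() * t.mean() + C1) * (2 * cov + C2)
        den = (p.mean() ** 2 + t.mean() ** 2 + C1) * (p.var() + t.var() + C2)
        vals.append(num / den)
    return float(np.mean(vals))
```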
(6) Acquire and compare experimental data.
In real experiments, five male rhesus monkeys (macaques) weighing 4.7-8.7 kg underwent dynamic PET scanning with a high-resolution small-animal PET scanner (SHR-38000; Hamamatsu Photonics K.K., Hamamatsu, Japan). Each monkey received an intravenous injection in the right lower limb of approximately 150 MBq of ¹⁸F-FDG before the first scan and about 240 MBq of ¹⁸F-BCPP-FE before the second scan, with more than one week between scans to ensure complete metabolism of the tracer in vivo. During scanning, to activate the hand region of the somatosensory cortex of the left hemisphere, a vibrator (mini MASSAGER G-2; Kawasaki-Seiji Co., Ltd, Tokyo, Japan) applied a 93 ± 2 Hz tactile stimulus to the monkey's right forepaw. Each scan lasted 120 minutes in total with a sampling protocol of 6 × 10 s, 2 × 30 s, 8 × 60 s, 10 × 300 s and 6 × 600 s, finally yielding 32 frames of dynamic PET data with image size 124 × 148 × 108, from which 80 slices were selected. After the two tracer concentration maps were superposed, the mixed concentration map was projected into a sinogram; this was done with a simple strip-integral system model in the Michigan Image Reconstruction Toolbox (Fessler, 1994), with 200 projection angles and 200 detectors, giving a 200 × 200 sinogram to which Poisson noise was added. The brain data of one of the five monkeys was randomly selected as the test set and the remaining four as the training set, a 4:1 ratio of training to test data.
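The simulation step can be illustrated with the following sketch, where G stands in for the 200-angle × 200-bin strip-integral projector (an arbitrary placeholder here, not the toolbox API):

```python
import numpy as np

def simulate_sinogram(x_mixed, G, n_angles=200, n_bins=200, seed=0):
    """Project a mixed concentration map and add Poisson counting noise.
    Assumes G has shape (n_angles * n_bins, n_pixels)."""
    y_bar = G @ x_mixed.ravel()                     # noiseless projection
    y = np.random.default_rng(seed).poisson(y_bar)  # Poisson noise
    return y.reshape(n_angles, n_bins).astype(float)
```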
Comparing the reconstruction results of the conventional methods and the neural networks in FIGS. 3(a)-3(f): thanks to the introduction of the basic reconstruction model and the system matrix, the reconstructions of the BCD-ED network and the two conventional methods (FBP and MLEM) constrain the image contour better than the other two neural network methods and are closer to the truth map, but the results of the conventional methods are relatively noisy. Although the recently proposed FBP-CNN network incorporates deep learning, it needs many parameters for training and is difficult to train, and its final reconstruction clearly has more noise and lacks image detail. The UNET results do not perform well in image detail, and its concentration values run high. In comparison, the BCD-ED network has the fewest training parameters, its reconstruction is closer to the truth in both image detail and concentration value, and the image is smoother thanks to the denoising module.
It is evident from FIGS. 4(a)-4(d) and 5(a)-5(d) that although the separation modules of the three networks all use the basic encoder-decoder structure, the BCD-ED network is closer to the truth in image shape detail and concentration value than the other two methods, because the model constraint introduced in the preceding reconstruction module gives the separation module a higher-quality mixed concentration map as input; moreover, the separation module of the BCD-ED network has same-layer skip connections, and compared with the FBP-CNN network, which lacks skip connections, this feature-fusion approach is more conducive to recovering image details.
The embodiments described above are presented to enable a person of ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the embodiments above; improvements and modifications made by those skilled in the art based on this disclosure fall within the protection scope of the present invention.

Claims (7)

1. A single-scanning double-tracer PET signal separation method based on BCD-ED comprises the following steps:
(1) injecting mixed double tracers into the biological tissue, and simultaneously carrying out one-time dynamic PET scanning to obtain a PET dynamic sinogram sequence y corresponding to the mixed double tracers; the mixed double tracer consists of two isotopically labeled tracers I and II;
(2) respectively injecting tracer I and tracer II into the same biological tissue, and separately carrying out dynamic PET scanning to obtain the PET dynamic sinogram sequences y_I and y_II corresponding to tracer I and tracer II;
(3) calculating the PET dynamic image sequences x̂_I and x̂_II corresponding to y_I and y_II by using a PET reconstruction algorithm, and superposing x̂_I and x̂_II to obtain the ground-truth PET dynamic image sequence x_true of the mixed dual tracer;
(4) repeatedly executing the above steps to obtain a large number of samples, and dividing the samples into a training set and a test set, wherein each group of samples comprises y, y_I, y_II, x̂_I, x̂_II and x_true;
(5) constructing a BCD-ED network consisting of a reconstruction module, a denoising module and a separation module, and training the network structure with the training set samples to obtain a joint reconstruction-separation model of the dynamic dual-tracer PET signal;
(6) inputting the test set samples into the joint model one by one, reconstructing the PET dynamic image sequence of the mixed dual tracer, and separating, after denoising, the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II.
2. The single scan dual tracer PET signal separation method of claim 1, wherein: the reconstruction module of the BCD-ED network adopts a maximum likelihood estimation algorithm to solve the PET dynamic image sequence x corresponding to y in an input sample; the constraint of the reconstruction problem is strengthened by adding a regularization term, and meanwhile a plurality of neural network convolution kernels are used to convolve x in different directions during the reconstruction, generating sparse images X_k that have the low-rank property.
3. The single scan dual tracer PET signal separation method of claim 2, wherein: the denoising module of the BCD-ED network first decomposes the sparse images X_k obtained by the reconstruction module into a low-rank matrix L_k and a Poisson noise matrix W_k, and then solves the following objective function with a singular value thresholding algorithm:

$$\min_{x,\{L_k\}} f(x) + \sum_{k=1}^{K}\left(\frac{\beta}{2}\left\|c_k * x - L_k\right\|_2^2 + \lambda_k\left\|L_k\right\|_*\right)$$

wherein c_k denotes the k-th convolution kernel, λ_k is a threshold parameter controlling the sparsity of L_k, β is a hyper-parameter controlling the smoothness of the image, K is the number of convolution kernels used in the reconstruction, and $\|\cdot\|_*$ is the nuclear norm;
obtaining:

$$L_k^{i+1} = \mathrm{SVT}_{\lambda_k/\beta}\left(c_k * x^{i}\right)$$

$$x^{i+1} = \arg\min_{x} f(x) + \frac{\beta}{2}\sum_{k=1}^{K}\left\|c_k * x - L_k^{i+1}\right\|_2^2$$

$$u^{i+1} = \frac{1}{K}\sum_{k=1}^{K}\tilde{c}_k * L_k^{i+1}$$

wherein $\mathrm{SVT}_{\tau}(\cdot)$ denotes singular value thresholding, i.e. soft-thresholding of the singular values with threshold τ; $\tilde{c}_k$ is the inverse of c_k, used for deconvolution; $f(x) = -\ln p(y \mid x)$ converts the maximization of the likelihood of the PET dynamic image sequence x given the observed data y into the minimization of the negative log-likelihood; $\|\cdot\|_2$ is the 2-norm; u is the denoised PET dynamic image sequence that is output; and the superscripts i and i+1 denote the iteration number, i being a natural number.
4. The single scan dual tracer PET signal separation method of claim 1, wherein: the separation module of the BCD-ED network comprises an encoding part and a decoding part, wherein the encoding part is formed by sequentially connecting a down-sampling block D1, a pooling layer C1, a down-sampling block D2, a pooling layer C2, a down-sampling block D3, a pooling layer C3 and a down-sampling block D4 from input to output, and the decoding part is formed by sequentially connecting an up-sampling block U1, a deconvolution layer E1, an up-sampling block U2, a deconvolution layer E2, an up-sampling block U3 and a deconvolution layer E3 from input to output, wherein:
each of the downsampling blocks D1-D4 comprises three layers connected in sequence: the first layer is a convolution layer with 3 × 3 convolution kernels, which extracts features; the second layer is a BatchNorm layer, which normalizes the output of the previous layer; the third layer is a ReLU layer, which applies the activation function to the output of the previous layer; D1-D4 generate 64, 128, 256 and 512 feature maps respectively;
the pooling layers C1-C3 all use 2 × 2 kernels and halve the size of the input feature image to reduce the amount of convolution computation;
the upsampling blocks U1-U3 each comprise three layers connected in sequence: the first layer is a convolution layer with 3 × 3 convolution kernels, which extracts features; the second layer is a BatchNorm layer, which normalizes the output of the previous layer; the third layer is a ReLU layer, which applies the activation function to the output of the previous layer; the upsampling block U3 further comprises a fourth layer, a convolution layer with 1 × 1 kernels, which reduces the number of channels to the required number for the output;
the input of U1 is the concatenation of the outputs of D3 and D4 along the channel dimension, the input of U2 is the concatenation of the outputs of D2 and E1 along the channel dimension, the input of U3 is the concatenation of the outputs of D1 and E2 along the channel dimension, and U1-U3 generate 256, 128 and 64 feature maps respectively;
the deconvolution layers E1-E3 double the size of the input feature image and restore the feature map size.
5. The single scan dual tracer PET signal separation method of claim 3, wherein: the process of training the BCD-ED network structure in the step (5) is as follows:
5.1 initializing network parameters including bias vectors and weight matrixes among network layers, learning rate and maximum iteration times;
5.2 taking y in the training set sample as the input of the reconstruction module, computing the denoised PET dynamic image sequence u in combination with the denoising module, and then computing the difference between u and the truth value x_true via the loss function loss1;
5.3 inputting u into the separation module and outputting the PET dynamic image sequences S_I and S_II corresponding to tracer I and tracer II, and then computing, via the loss function loss2, the differences between S_I and x̂_I and between S_II and x̂_II;
5.4 carrying out supervised training of the whole network with the combined loss function loss = loss1 + loss2, using the mean squared error (MSE) as the loss to guide back-propagation and gradient descent in the network until the loss function converges or the maximum number of iterations is reached, thereby completing training and obtaining the joint reconstruction-separation model of the dynamic dual-tracer PET signal.
6. The single scan dual tracer PET signal separation method of claim 5, wherein: the expression of the loss function loss1 is as follows:

$$\mathrm{loss1} = \frac{1}{N}\sum_{n=1}^{N}\left(u_n^{i+1} - x_{true,n}\right)^2$$

wherein $u_n^{i+1}$ is the concentration value at the n-th pixel of the PET dynamic image sequence u obtained at the (i+1)-th iteration, N is the number of pixels of the PET dynamic image sequence, and $x_{true,n}$ is the concentration value at the n-th pixel of the ground-truth PET dynamic image sequence x_true.
7. The single scan dual tracer PET signal separation method of claim 5, wherein: the expression of the loss function loss2 is as follows:

$$\mathrm{loss2} = \frac{1}{N}\sum_{n=1}^{N}\left[\left(S_{I,n} - \hat{x}_{I,n}\right)^2 + \left(S_{II,n} - \hat{x}_{II,n}\right)^2\right]$$

wherein $S_{I,n}$ and $S_{II,n}$ are the concentration values at the n-th pixel of the PET dynamic image sequences S_I and S_II respectively, $\hat{x}_{I,n}$ and $\hat{x}_{II,n}$ are the concentration values at the n-th pixel of the sequences x̂_I and x̂_II respectively, and N is the number of pixels of the PET dynamic image sequence.
CN202110840914.9A 2021-07-23 2021-07-23 BCD-ED-based single-scanning double-tracer PET signal separation method Active CN113476064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110840914.9A CN113476064B (en) 2021-07-23 2021-07-23 BCD-ED-based single-scanning double-tracer PET signal separation method


Publications (2)

Publication Number Publication Date
CN113476064A true CN113476064A (en) 2021-10-08
CN113476064B CN113476064B (en) 2023-09-01

Family

ID=77943715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110840914.9A Active CN113476064B (en) 2021-07-23 2021-07-23 BCD-ED-based single-scanning double-tracer PET signal separation method

Country Status (1)

Country Link
CN (1) CN113476064B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140119628A1 (en) * 2012-10-28 2014-05-01 Technion Research & Development Foundation Limited Image reconstruction in computed tomography
US20150287223A1 (en) * 2014-04-04 2015-10-08 The Board Of Trustees Of The University Of Illinois Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms
CN109009179A (en) * 2018-08-02 2018-12-18 浙江大学 Identical isotope labelling dual tracer PET separation method based on depth confidence network
CN109993825A (en) * 2019-03-11 2019-07-09 北京工业大学 A kind of three-dimensional rebuilding method based on deep learning
CN110490832A (en) * 2019-08-23 2019-11-22 哈尔滨工业大学 A kind of MR image reconstruction method based on regularization depth image transcendental method
CN111127356A (en) * 2019-12-18 2020-05-08 清华大学深圳国际研究生院 Image blind denoising system
CN111166368A (en) * 2019-12-19 2020-05-19 浙江大学 Single-scanning double-tracer PET signal separation method based on pre-training GRU
CN111640075A (en) * 2020-05-23 2020-09-08 西北工业大学 Underwater image occlusion removing method based on generation countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
卿敏敏 (QING Minmin): Dual-tracer PET image reconstruction based on deep learning (基于深度学习的双示踪PET图像重建), China Excellent Master's and Doctoral Theses, vol. 2021, no. 02
叶华俊 (YE Huajun) et al.: A review of positron emission tomography reconstruction algorithms (正电子发射断层成像重建算法评述), Journal of Biomedical Engineering, no. 19

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising

Also Published As

Publication number Publication date
CN113476064B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN111627082B (en) PET image reconstruction method based on filtering back projection algorithm and neural network
CN111325686B (en) Low-dose PET three-dimensional reconstruction method based on deep learning
CN113516210B (en) Lung adenocarcinoma squamous carcinoma diagnosis model training method and device based on PET/CT
CN109636869B (en) Dynamic PET image reconstruction method based on non-local total variation and low-rank constraint
Yuan et al. SIPID: A deep learning framework for sinogram interpolation and image denoising in low-dose CT reconstruction
US11508101B2 (en) Dynamic dual-tracer PET reconstruction method based on hybrid-loss 3D convolutional neural networks
CN112435164B (en) Simultaneous super-resolution and denoising method for generating low-dose CT lung image based on multiscale countermeasure network
CN109993808B (en) Dynamic double-tracing PET reconstruction method based on DSN
CN113160347B (en) Low-dose double-tracer PET reconstruction method based on attention mechanism
WO2024011797A1 (en) Pet image reconstruction method based on swin-transformer regularization
Shao et al. SPECTnet: a deep learning neural network for SPECT image reconstruction
CN114387236A (en) Low-dose Sinogram denoising and PET image reconstruction method based on convolutional neural network
Feng et al. Rethinking PET image reconstruction: ultra-low-dose, sinogram and deep learning
CN116645283A (en) Low-dose CT image denoising method based on self-supervision perceptual loss multi-scale convolutional neural network
CN113476064B (en) BCD-ED-based single-scanning double-tracer PET signal separation method
CN114358285A (en) PET system attenuation correction method based on flow model
CN116503506B (en) Image reconstruction method, system, device and storage medium
CN115984401A (en) Dynamic PET image reconstruction method based on model-driven deep learning
CN116245969A (en) Low-dose PET image reconstruction method based on deep neural network
CN116152373A (en) Low-dose CT image reconstruction method combining neural network and convolutional dictionary learning
CN115423892A (en) Attenuation-free correction PET reconstruction method based on maximum expectation network
Wan et al. Deep-learning based joint estimation of dual-tracer PET image activity maps and clustering of time activity curves
CN111920436A (en) Dual-tracer PET (positron emission tomography) separation method based on multi-task learning three-dimensional convolutional coding and decoding network
CN108765318A (en) A kind of dynamic PET images factor treatment based on dynamics cluster
Xue et al. PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant