CN115564666A - Low-dose PET image denoising method based on contrast learning - Google Patents
Low-dose PET image denoising method based on contrast learning
- Publication number: CN115564666A (application CN202211107388.6A)
- Authority
- CN
- China
- Prior art keywords
- dose
- low
- pet image
- denoising
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/70
- G06N3/02 — Neural networks
- G06N3/08 — Learning methods
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T2207/10104 — Positron emission tomography [PET]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses a low-dose PET image denoising method based on contrast learning, in which the information of the low-dose PET image and the information of the standard-dose PET image are used as a negative sample and a positive sample, respectively. A contrast regularization (CR) ensures that the denoised PET image is pulled closer to the standard-dose PET image and pushed further from the low-dose PET image in the representation space. In addition, to balance parameters and performance, a dense denoising network based on an autoencoder-like framework is developed, in which an adaptive Mixup operation adaptively fuses the information flow between the high- and low-resolution spaces of the up- and down-sampling layers and enlarges the receptive field, so as to improve the transformation capability of the network.
Description
Technical Field
The invention belongs to the technical field of biomedical image denoising, and particularly relates to a low-dose PET image denoising method based on contrast learning.
Background
Positron Emission Tomography (PET) is a nuclear medicine technique for in vivo functional imaging, can provide functional information of organs and lesions at a molecular level, and plays an irreplaceable role in diagnosis and treatment of heart diseases, brain diseases and malignant tumors.
Image denoising is a common computer vision task whose objective is to remove the noise from a corrupted image and recover the underlying clean image. Denoising is of particular interest in medical imaging, since noise may confound disease diagnosis and affect subsequent clinical decisions. Positron Emission Tomography (PET) is a molecular imaging modality that provides metabolic and functional information. Denoising algorithms for this modality aim to break the trade-off between scan time, radiation intensity and image quality. In PET scanning, it is also desirable to reconstruct a high-quality image from a small number of coincidence events, reducing ionizing-radiation exposure.
A number of methods have been proposed to improve PET image quality. Image restoration can be applied at various stages, including raw-data preprocessing before reconstruction, reconstruction-algorithm design, and post-processing after reconstruction. Image post-processing operates directly on the low-dose PET images and can be easily integrated into existing clinical procedures; many researchers therefore prefer to attack low-dose PET denoising at this stage. In the field of PET image denoising, conventional image-processing algorithms include NLM, BM3D, diffusion filters, etc. Although these algorithms reduce noise to varying degrees, they tend to over-smooth the denoised image.
Recently, deep learning has shown excellent ability in PET imaging tasks. In the field of low-dose PET denoising, Sano et al. modified U-Net to reconstruct a standard-dose PET image from a low-dose PET image; Kim et al. optimized DnCNN to perform PET image denoising; Zhou et al. proposed a unified motion-correction and denoising adversarial network (DPET) to simultaneously denoise and motion-estimate low-dose PET; in addition, there are some low-dose PET denoising works based on CycleGAN and Wasserstein GAN.
The above methods have demonstrated their ability to perform the task of denoising PET images, but most focus on optimizing the network structure or designing different loss functions. There are the following problems:
(1) Denoising is mainly guided by the information of the positive sample, while the information of the negative sample is not fully exploited. Denoising guided only in this forward direction yields a limited denoising effect;
(2) Most existing methods use the standard-dose image (ground truth) as the positive sample to guide the training of the denoising network through an L1/L2-based image reconstruction loss without any regularization; however, the image reconstruction loss alone cannot effectively handle image details, which may cause image color distortion;
(3) Most works focus on increasing the depth and width of the denoising network, resulting in huge computation and memory requirements.
Disclosure of Invention
In view of this, the present invention proposes a Contrast Regularization (CR) based on contrast learning, which uses the low-dose PET image and the standard-dose PET image as the negative sample and the positive sample, respectively. CR ensures that the denoised PET image is pulled closer to the standard-dose PET image and pushed further from the low-dose PET image in the representation space. In addition, to balance parameters and performance, a dense denoising network based on an autoencoder-like framework is developed, in which the adaptive Mixup operation adaptively fuses the information flow between the high- and low-resolution spaces of up- and down-sampling and enlarges the receptive field, so as to improve the transformation capability of the network.
The invention is realized by the following technical scheme:
the invention discloses a low-dose PET image denoising method based on contrast learning, which comprises the following steps:
1) Acquiring simulation-generated Sinogram projection data with standard dose, and performing down-sampling on the Sinogram projection data with the standard dose by utilizing Poisson distribution to obtain low-dose Sinogram projection data;
2) Reconstructing the standard-dose Sinogram projection data through an OSEM algorithm to obtain the corresponding standard-dose PET image data; reconstructing the low-dose Sinogram projection data through an FBP algorithm to obtain the corresponding low-dose PET image data;
3) Obtaining a large number of samples according to the step 2), wherein each sample comprises low-dose PET image data and standard-dose PET image data, and dividing all samples into a training set, a verification set and a test set;
4) In order to balance denoising performance and model parameters, a lightweight PET basic denoising network based on an autoencoder-like structure is built; low-dose PET image data are input into the denoising network, and the model outputs a denoised PET image; in order to better restore the image, a new contrast learning loss function is designed based on the contrast-learning approach of constructing positive and negative samples, with the standard-dose PET image and the low-dose PET image used as the positive and negative samples respectively, and the denoised PET image used as the anchor sample;
5) In the training stage, the low-dose PET image data in the training set are input into the network of step 4) and trained with the contrast learning loss function of step 4); forward propagation and back-propagation are repeated under the principle of minimizing the loss function, and the parameters are continuously updated until the loss value is small enough and the model converges; during training, the low-dose PET image data in the verification set are input into the model for validation, monitoring the effectiveness of model training so that the training-stage parameters can be adjusted in time;
6) In the testing stage, the low-dose PET image data in the test set are input into the trained PET basic denoising network, and a high-quality PET image is obtained directly.
As a further improvement, the down-sampling in step 1) of the present invention is implemented as follows: for the standard-dose Sinogram projection data, a random-number matrix of the same size as the Sinogram matrix is first generated with a Python built-in library function based on the Poisson distribution; the mean of the random-number matrix can be set to different values through different normalization coefficients, and the mean of the Sinogram matrix is then changed to 1/n of its original value through an element-wise (dot-product) matrix operation, obtaining the low-dose Sinogram projection data, where n is the down-sampling factor.
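As an illustrative sketch only (not code from the patent): the 1/n-dose simulation described above is often implemented by scaling the expected counts and redrawing each sinogram bin from a Poisson distribution, which likewise reduces the mean to 1/n while keeping Poisson counting statistics. The function name and the toy sinogram below are hypothetical.

```python
import numpy as np

def downsample_sinogram(sinogram: np.ndarray, n: float, seed=None) -> np.ndarray:
    """Simulate a 1/n-dose sinogram from a standard-dose one.

    Scales the expected counts by 1/n and redraws each bin from a
    Poisson distribution, so the output keeps Poisson statistics at
    the reduced count level (n is the down-sampling factor).
    """
    rng = np.random.default_rng(seed)
    scaled = np.clip(sinogram, 0, None) / n   # expected counts at low dose
    return rng.poisson(scaled).astype(np.float64)

full = np.full((180, 128), 200.0)             # toy standard-dose sinogram
low = downsample_sinogram(full, n=4, seed=0)  # mean drops to ~1/4 of original
```

Any Poisson sampler with per-bin means works here; the patent's multiplicative random-matrix formulation achieves the same 1/n mean reduction.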
As a further improvement, the PET basic denoising network in step 4) has the following specific structure: inspired by the FABlock proposed in "FFA-Net: Feature Fusion Attention Network for Single Image Dehazing", 4 FABlocks are used as the basic blocks of the proposed denoising network; a 4× down-sampling operation is first adopted so that the FABlocks learn feature representations in a low-resolution space, and a corresponding 4× up-sampling operation plus a convolution operation then recover the high-resolution image. Low-level features (such as edges and contours) are usually captured by the shallow layers of a convolutional neural network; however, as the number of layers increases, the shallow features gradually degrade. To avoid losing the shallow features, the model adds an adaptive Mixup operation between the down-sampling and up-sampling layers, which can be expressed as:
f_{↑2} = Mix(f_{↓1}, f_{↑1}) = σ(θ₁)·f_{↓1} + (1 − σ(θ₁))·f_{↑1},
f_{↑} = Mix(f_{↓2}, f_{↑2}) = σ(θ₂)·f_{↓2} + (1 − σ(θ₂))·f_{↑2},
where f_{↓i} and f_{↑i} are the feature maps of the i-th down-sampling layer and up-sampling layer respectively, f_{↑} is the final output, and σ(θᵢ), i = 1, 2, is a learnable factor fusing the i-th down-sampling layer and the i-th up-sampling layer, learned through an attention mechanism.
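A minimal PyTorch sketch of such a fusion gate follows. This is a simplifying assumption: the patent learns σ(θᵢ) through an attention mechanism, whereas here each gate is a single learnable scalar passed through a sigmoid.

```python
import torch
import torch.nn as nn

class AdaptiveMixup(nn.Module):
    """Mix(f_down, f_up) = sigmoid(theta) * f_down + (1 - sigmoid(theta)) * f_up,
    with theta a learnable scalar (initialized to 0, so the gate starts at 0.5)."""
    def __init__(self, init: float = 0.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(init))

    def forward(self, f_down: torch.Tensor, f_up: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.theta)
        return gate * f_down + (1.0 - gate) * f_up

mix = AdaptiveMixup()
f_down = torch.ones(1, 16, 32, 32)    # feature map from a down-sampling layer
f_up = torch.zeros(1, 16, 32, 32)     # feature map from an up-sampling layer
out = mix(f_down, f_up)               # theta = 0, so every element is 0.5
```

Because θ is a parameter of the module, the fusion ratio is updated by back-propagation together with the rest of the network.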
As a further improvement, in the contrast learning loss function of step 4) of the present invention, the standard-dose and low-dose PET image data are used as the positive and negative samples respectively, the denoised PET image output by the PET basic denoising network is used as the anchor sample, and Contrast Regularization (CR) is added to the L1 Loss to form the contrast loss; for the latent feature space, common intermediate features are extracted by the same fixed pre-trained model VGG19, and end-to-end PET image denoising is expressed as:
min_w ||J − φ(I, w)||₁ + β·ρ(φ(I, w)),
where I is the low-dose PET image, J is the standard-dose PET image, φ(I, w) is the PET basic denoising network, w is the network model parameter, ||J − φ(I, w)||₁ is the data-fidelity term (the L1 image reconstruction loss), the parameter β balances the data-fidelity term and the regularization term, and ρ(φ(I, w)) is the regularization term, expressed with the contrast regularization CR:
ρ(φ(I, w)) = Σ_{i=1}^{n} ω_i · D(G_i(φ(I, w)), G_i(J)) / D(G_i(φ(I, w)), G_i(I)),
where G_i, i = 1, 2, …, n, denotes the i-th hidden-layer feature extracted from the pre-trained VGG19 (only the 1st, 3rd, 5th, 9th and 13th convolutional layers of VGG19 are used as the feature-extraction part), ω_i are the weight parameters, each set to 1, and D(x, y) denotes the L1 distance between x and y.
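The loss above can be sketched in PyTorch as follows. This is a hedged illustration: the toy convolutional extractors below merely stand in for the fixed pre-trained VGG19 slices (convolutional layers 1, 3, 5, 9, 13), and the small ε added to the denominator is an implementation assumption to avoid division by zero.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrast_regularization(feat_layers, anchor, positive, negative, weights, eps=1e-7):
    """CR = sum_i w_i * L1(G_i(anchor), G_i(positive)) / L1(G_i(anchor), G_i(negative)).

    feat_layers are cascaded slices of a frozen feature extractor G
    (the patent uses conv layers of a pre-trained VGG19; the toy
    layers defined below only stand in for them)."""
    cr = anchor.new_tensor(0.0)
    a, p, n = anchor, positive, negative
    for g, w in zip(feat_layers, weights):
        a, p, n = g(a), g(p), g(n)              # deepen features slice by slice
        cr = cr + w * F.l1_loss(a, p) / (F.l1_loss(a, n) + eps)
    return cr

def total_loss(denoised, standard, low_dose, feat_layers, weights, beta=0.1):
    """L1 data-fidelity term plus beta-weighted contrast regularization."""
    return F.l1_loss(denoised, standard) + beta * contrast_regularization(
        feat_layers, denoised, standard, low_dose, weights)

torch.manual_seed(0)
# toy frozen extractors standing in for the VGG19 slices
layers = [nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.ReLU()) for _ in range(3)]
for layer in layers:
    for param in layer.parameters():
        param.requires_grad_(False)

denoised = torch.rand(1, 1, 16, 16)   # anchor (network output)
standard = torch.rand(1, 1, 16, 16)   # positive sample
low_dose = torch.rand(1, 1, 16, 16)   # negative sample
loss = total_loss(denoised, standard, low_dose, layers, weights=[1.0, 1.0, 1.0])
```

In a real setup the slices would come from `torchvision`'s pre-trained VGG19 with gradients disabled, exactly as the feature extractors here are frozen.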
As a further improvement, the specific process of training the network model in step 5) of the present invention is as follows:
6.1 Kaiming initialization is adopted to initialize the parameters of the denoising model, and the parameters of each layer are set;
6.2 The low-dose PET image data in the training-set samples are input into the denoising network model, the constructed PET basic denoising network is trained, and the output of each layer is computed through forward propagation to obtain the final output of the network model, the anchor sample;
6.3 VGG19 extracts the features of the positive sample, the negative sample and the anchor sample respectively; following the principle of minimizing the loss function, the partial derivatives of the contrast learning loss function are computed, the gradients are back-propagated, and the learnable parameters of the network model are updated through the Adam algorithm;
6.4 repeating steps 6.2-6.3 until the parameters of the whole network model converge.
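Steps 6.1-6.4 can be sketched as the following PyTorch training loop. Everything here is a stand-in: a two-layer toy network replaces the FABlock-based denoising network and a plain L1 loss replaces the full contrast learning loss, so only the loop structure (Kaiming initialization, forward pass, backward pass, Adam update) mirrors the text.

```python
import torch
import torch.nn as nn

def kaiming_init(model: nn.Module) -> None:
    """Step 6.1: Kaiming initialization of every convolutional layer."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
            if m.bias is not None:
                nn.init.zeros_(m.bias)

torch.manual_seed(0)
# toy stand-in for the FABlock-based PET denoising network
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
kaiming_init(net)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)  # step 6.3: Adam updates
loss_fn = nn.L1Loss()             # stand-in for the full contrast learning loss

low_dose = torch.rand(4, 1, 32, 32)   # training batch (negative samples)
standard = torch.rand(4, 1, 32, 32)   # ground truth (positive samples)

losses = []
for step in range(20):                # steps 6.2-6.4: repeat until convergence
    optimizer.zero_grad()
    anchor = net(low_dose)            # forward pass: denoised output = anchor
    loss = loss_fn(anchor, standard)
    loss.backward()                   # back-propagate gradients
    optimizer.step()                  # update learnable parameters
    losses.append(loss.item())
```

In practice the loop would iterate over a DataLoader for many epochs (the embodiment trains 300) and periodically evaluate on the verification set.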
The invention has the following beneficial effects:
the invention discloses a contrast learning-based low-dose PET image denoising method, which is Contrast Regularization (CR) based on contrast learning, and the information of a low-dose PET image and a full-dose PET image is respectively used as a negative sample and a positive sample. Contrast regularization CR ensures that the denoised PET image is closer to the standard-dose PET image and further away from the low-dose PET image in the representation space. In addition, in consideration of parameter and performance balance, a lightweight denoising network based on an automatic encoder-like framework is developed, wherein the adaptive hybrid Mixup operation can adaptively perform information flow in high and low resolution spaces of up and down sampling and enlarge the receptive field so as to improve the conversion capability of the network. The performance of the model has been verified on PET datasets, including both simulated and real datasets. The result shows that the strategy is superior to the most advanced low-dose PET image denoising algorithm in the aspects of visual quality and quantitative index, and shows strong generalization capability. This simple and effective strategy shows the promise of the task of denoising PET images, and may have significant clinical impact in the future.
Drawings
FIG. 1 is an algorithmic flow chart of the method of the present invention;
FIG. 2 is a schematic structural diagram of a network model CLPD-Net of the present invention;
FIG. 3 is a comparison of the denoised-image results of the ablation experiments on the individual modules for low-dose ¹⁸F-FDG in different frames;
FIG. 4 is a comparison of the ablation-experiment quality indexes of the individual modules for low-dose ¹⁸F-FDG in different frames;
FIG. 5 is a comparison of the denoised-image results of the comparison experiments between different methods for low-dose ¹⁸F-FDG in different frames;
FIG. 6 is a comparison of the comparison-experiment quality indexes of different methods for low-dose ¹⁸F-FDG in different frames.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
The invention relates to a low-dose PET image denoising method based on contrast learning, and FIG. 1 is the algorithm flow chart of the method, which comprises the following steps:
1) Acquiring simulation-generated Sinogram projection data with standard dose, and performing down-sampling on the Sinogram projection data with the standard dose by utilizing Poisson distribution to obtain low-dose Sinogram projection data;
2) Reconstructing the standard-dose Sinogram projection data through an OSEM algorithm to obtain the corresponding standard-dose PET image data; reconstructing the low-dose Sinogram projection data through an FBP algorithm to obtain the corresponding low-dose PET image data;
3) Obtaining a large number of samples according to the step 2), wherein each sample comprises low-dose PET image data and standard-dose PET image data, and dividing all samples into a training set, a verification set and a test set;
4) In order to balance the denoising performance and the model parameters, a lightweight PET basic denoising network based on a quasi-autoencoder is built; inputting low-dose PET image data into a denoising network, and outputting a denoised PET image by a model; in order to better restore the image, a new contrast learning loss function ContrastLoss is designed based on a mode of constructing positive and negative samples through contrast learning, a standard dose PET image and a low dose PET image are respectively used as the positive and negative samples, and a denoised PET image is used as an anchor sample;
5) In the training stage, the low-dose PET image data in the training set are input into the PET basic denoising network of step 4) and trained with the contrast learning loss function ContrastLoss of step 4); forward propagation and back-propagation are repeated under the principle of minimizing the loss function, and the parameters are continuously updated until the loss value is small enough and the model converges; during training, the low-dose PET image data in the verification set are input into the model for validation, monitoring the effectiveness of model training so that the training-stage parameters can be adjusted in time;
6) In the testing stage, the low-dose PET image data in the test set are input into the trained PET basic denoising network, and a high-quality PET image is obtained directly.
The method specifically comprises the following steps:
(1) And (5) collecting data.
The GATE toolkit is adopted to generate the simulation data required for the experiments: 3D dynamic Sinograms and standard-dose PET data (OSEM reconstruction) are generated with Monte Carlo simulation; the standard-dose Sinogram projection data are Poisson down-sampled to obtain low-dose Sinogram projection data, which are reconstructed with the FBP algorithm to obtain the low-dose PET image data. The procedure was repeated more than 10 times to obtain a large number of samples, each comprising low-dose and standard-dose PET image data.
(2) Loss functions and sample forms.
A contrast learning loss function is adopted, with the standard-dose and low-dose PET image data as the positive and negative samples respectively, and the denoised PET image output by the PET basic denoising network as the anchor sample. Contrast Regularization (CR) is added to the L1 Loss to form the contrast learning loss function Contrast Loss. For the latent feature space, common intermediate features are extracted by the same fixed pre-trained model VGG19. End-to-end PET image denoising uses the contrast learning loss function, which can be expressed as:
min_w ||J − φ(I, w)||₁ + β·ρ(φ(I, w)),
where I is the low-dose PET image, J is the standard-dose PET image, φ(I, w) is the PET basic denoising network, w is the network model parameter, ||J − φ(I, w)||₁ is the data-fidelity term (the L1 image reconstruction loss), and the parameter β balances the data-fidelity term and the regularization term. ρ(φ(I, w)) is the regularization term, here the contrast regularization CR, which can be expressed as:
ρ(φ(I, w)) = Σ_{i=1}^{n} ω_i · D(G_i(φ(I, w)), G_i(J)) / D(G_i(φ(I, w)), G_i(I)),
where G_i, i = 1, 2, …, n, denotes the i-th hidden-layer feature extracted from the pre-trained VGG19 (only the 1st, 3rd, 5th, 9th and 13th convolutional layers of VGG19 are used as the feature-extraction part), ω_i are the weight parameters, each set to 1, and D(x, y) denotes the L1 distance between x and y.
(3) And (5) a training stage.
Kaiming initialization is adopted to initialize the parameters of the denoising model, and the parameters of each layer are set. The low-dose PET image data in the training-set samples are input into the denoising network model, the constructed denoising network is trained, and the output of each layer is computed through forward propagation to obtain the final output of the network model, the anchor sample. VGG19 extracts the features of the positive sample, the negative sample and the anchor sample respectively; following the principle of minimizing the loss function, the partial derivatives of the contrast learning loss function are computed, the learnable parameters of the network are updated through the Adam algorithm, and forward propagation and back-propagation are repeated, continuously updating the parameters until the loss value is small enough and the model converges; the whole network converges after training for 300 epochs, giving the final output of the network. In the verification stage, the trained model is validated every 5 training epochs, monitoring the effectiveness of model training so that the training-stage parameters can be adjusted in time.
(4) And (5) a testing stage.
Inputting the low-dose PET image to be denoised into a trained network, and directly obtaining a high-quality denoised image.
Experiments were then performed on Monte Carlo simulation data to verify the effectiveness of the present invention. The whole algorithm of this embodiment was tested on a PC with a Core i9-10900K CPU and a GeForce RTX 3080 graphics card (10 GB of video memory). The neural network was built on the PyTorch 1.10.0 platform, which is based on the Python language and can be used in combination with many program-development environments.
The simulated tracer is ¹⁸F-FDG and the phantom is a 3D brain phantom; the instrument used in the GATE simulation was a 655k brain scanner from Hamamatsu Photonics, Japan. The simulated scan time was 40 min with 18 time frames, and the simulation data were randomly divided into a training set (1200 PET images), a verification set (300 PET images) and a test set (500 PET images).
FIG. 2 shows the model structure, which comprises the FA Block modules, the up-sampling and down-sampling modules, and the adaptive Mixup operation module.
FIG. 3 compares the low-dose ¹⁸F-FDG reconstructed images of the different modules of the denoising network in different frames: from left to right are the reconstructed image of the Base module, of the Base model plus the adaptive Mixup operation (Base + Mixup), of the Base module plus contrast regularization (Base + CR), and the ground-truth image reconstructed by the OSEM algorithm; from top to bottom are the reconstructed images of the 2nd, 8th and 14th frames. As can be seen from the figure, the reconstructed image of the invention has rich detail and low noise and is closest to the ground-truth image; each added module further improves the reconstruction. The method can therefore obtain high-quality PET reconstructed images, even at low dose.
FIG. 4 compares the quality indexes (SSIM and PSNR) of the different modules of the denoising network for low-dose ¹⁸F-FDG in different frames, from which the quantitative advantages of the invention are clearly visible.
FIG. 5 compares the low-dose ¹⁸F-FDG reconstructed images of different methods in different frames: from left to right are the reconstructed image of the FBP-Net algorithm, of the DeepPET algorithm, of the U-Net algorithm, and the ground-truth image reconstructed by the OSEM algorithm; from top to bottom are the reconstructed images of the 2nd, 8th and 14th frames. As can be seen from the figure, the reconstructed image of the invention has rich detail and low noise and is closest to the ground-truth image; the reconstructions of the U-Net and DeepPET algorithms show spurious structures at the edges, while the FBP-Net reconstruction is over-smoothed, loses some details, and struggles with low-dose reconstruction. The method can therefore obtain high-quality PET reconstructed images, even at low dose.
FIG. 6 compares the quality indexes (SSIM and PSNR) of different methods for low-dose ¹⁸F-FDG in different frames, from which the quantitative advantages of the invention are clearly visible.
The foregoing description of the embodiments is provided to enable one of ordinary skill in the art to make and use the invention, and it is to be understood that other modifications of the embodiments, and the generic principles defined herein may be applied to other embodiments without the use of inventive faculty, as will be readily apparent to those skilled in the art. Therefore, the present invention is not limited to the above embodiments, and those skilled in the art should make improvements and modifications to the present invention based on the disclosure of the present invention within the protection scope of the present invention.
Claims (5)
1. A low-dose PET image denoising method based on contrast learning comprises the following steps:
1) Acquiring simulation-generated Sinogram projection data with standard dose, and performing down-sampling on the Sinogram projection data with the standard dose by utilizing Poisson distribution to obtain low-dose Sinogram projection data;
2) Reconstructing the standard-dose Sinogram projection data through an OSEM algorithm to obtain the corresponding standard-dose PET image data; reconstructing the low-dose Sinogram projection data through an FBP algorithm to obtain the corresponding low-dose PET image data;
3) Acquiring a large number of samples according to the step 2), wherein each sample comprises low-dose PET image data and standard-dose PET image data, and dividing all samples into a training set, a verification set and a test set;
4) In order to balance the denoising performance and the model parameters, a lightweight PET basic denoising network based on a quasi-autoencoder is built; inputting low-dose PET image data into a denoising network, and outputting a denoised PET image by a model; in order to better restore the image, a new contrast learning loss function ContrastLoss is designed based on a mode of constructing positive and negative samples through contrast learning, a standard dose PET image and a low dose PET image are respectively used as the positive and negative samples, and a denoised PET image is used as an anchor sample;
5) In the training stage, the low-dose PET image data of the training set are input into the PET basic denoising network of step 4) and trained with the contrast learning loss function ContrastLoss of step 4); forward propagation and backward derivation are performed repeatedly under the principle of loss-function minimization, and the parameters are updated continuously until the loss value is sufficiently small and the model converges; during training, the low-dose PET image data of the verification set are input into the model for verification, monitoring the effectiveness of model training so that the training-stage parameters can be adjusted in time;
6) In the testing stage, the low-dose PET image data of the test set are input into the trained PET basic denoising network, which directly outputs a high-quality PET image.
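The FBP reconstruction used for the low-dose data in step 2) can be sketched in pure numpy. This is a minimal illustration with a plain ramp filter and linear interpolation; the function name and the point-source example are illustrative, not from the patent, and real implementations add apodization windows and finer sampling:

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal filtered back-projection (FBP) sketch.

    sinogram: array of shape (n_angles, n_det), one row per projection angle.
    Returns an n_det x n_det reconstruction.
    """
    n_angles, n_det = sinogram.shape
    # Ramp filter applied to each projection in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Back-project each filtered projection over the image grid
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + mid  # detector coordinate of each pixel
        recon += np.interp(t.ravel(), np.arange(n_det), proj,
                           left=0.0, right=0.0).reshape(n_det, n_det)
    return recon * np.pi / (2 * n_angles)

# A point source at the centre projects to the same detector bin at every angle
angles = np.arange(0, 180, 1.0)
sino = np.zeros((len(angles), 64))
sino[:, 32] = 1.0
image = fbp_reconstruct(sino, angles)
```

As expected for FBP, the reconstruction of this sinogram concentrates its energy back at the centre pixel.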
2. The contrast learning-based low-dose PET image denoising method of claim 1, wherein the down-sampling in step 1) is implemented as follows: for the standard-dose Sinogram projection data, a random number matrix of the same size as the Sinogram matrix is first generated from a Poisson distribution using a Python built-in library function; the mean of the random number matrix can be set to different values through different normalization coefficients. A matrix dot-product operation then reduces the mean of the Sinogram matrix to 1/n of its original value, yielding the low-dose Sinogram projection data, where n is the down-sampling ratio.
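The down-sampling of claim 2 can be sketched with numpy. This sketch samples each bin directly from a Poisson distribution whose mean is 1/n of the standard-dose bin, which is one common reading of the random-matrix dot-product described above; the function name is illustrative:

```python
import numpy as np

def downsample_sinogram(sinogram, n, seed=0):
    """Simulate a low-dose sinogram whose mean is 1/n of the standard dose.

    Each bin of the standard-dose sinogram is treated as the mean of a
    Poisson distribution; scaling by 1/n before sampling models the reduced
    photon count of a low-dose acquisition (n is the down-sampling ratio).
    """
    rng = np.random.default_rng(seed)
    scaled_mean = np.asarray(sinogram, dtype=float) / n
    return rng.poisson(scaled_mean).astype(float)

# Example: a flat 'standard dose' sinogram with mean 100, down-sampled 4x
standard = np.full((180, 256), 100.0)
low = downsample_sinogram(standard, n=4)
```

The resulting array has the same shape as the input, with its mean reduced to roughly 100/4 = 25 and Poisson noise characteristic of a low-count acquisition.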
3. The contrast learning-based low-dose PET image denoising method of claim 1, wherein the PET basic denoising network in step 4) has the following structure: 4 FABlocks are used as the basic blocks of the proposed denoising network; a 4x down-sampling operation is first applied so that the FABlocks learn feature representations in a low-resolution space, and a corresponding 4x up-sampling operation followed by a convolution operation then recovers the high-resolution image. Low-level features (such as edges and contours) are usually captured by the shallow layers of a convolutional neural network, but as the depth increases these shallow features gradually degrade; to avoid losing them, the model adds an adaptive Mixup operation between the down-sampling and up-sampling layers, which can be expressed as:
f↑2 = Mix(f↓1, f↑1) = σ(θ1)·f↓1 + (1 - σ(θ1))·f↑1,
f↑ = Mix(f↓2, f↑2) = σ(θ2)·f↓2 + (1 - σ(θ2))·f↑2,
where f↓i and f↑i are the feature maps of the i-th down-sampling layer and up-sampling layer respectively, f↑ is the final output, and σ(θi), i = 1, 2, is a learnable factor, learned through an attention mechanism, that fuses the i-th down-sampling layer with the i-th up-sampling layer.
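The adaptive Mixup fusion above can be illustrated with a small numpy sketch; the theta values are hypothetical stand-ins for the learnable, attention-derived factors:

```python
import numpy as np

def sigmoid(theta):
    return 1.0 / (1.0 + np.exp(-theta))

def mixup(f_down, f_up, theta):
    """Adaptive Mixup: blend a down-sampling feature map with an
    up-sampling feature map using a learnable factor sigma(theta)."""
    s = sigmoid(theta)
    return s * f_down + (1.0 - s) * f_up

# Two fusion stages, mirroring the formulas above (theta values illustrative)
f_down1 = np.ones((8, 8))
f_up1 = np.zeros((8, 8))
f_up2 = mixup(f_down1, f_up1, theta=0.0)     # sigma(0) = 0.5, equal blend
f_down2 = np.full((8, 8), 2.0)
f_final = mixup(f_down2, f_up2, theta=0.0)
```

With theta = 0 both stages blend their inputs equally, so f_up2 is 0.5 everywhere and f_final is 0.5·2 + 0.5·0.5 = 1.25 everywhere; during training, theta would instead be learned so the network chooses how much shallow detail to pass through.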
4. The contrast learning-based low-dose PET image denoising method of claim 1, wherein in the contrast learning loss function of step 4), the standard-dose and low-dose PET image data serve as the positive and negative samples respectively, and the denoised PET image output by the PET basic denoising network serves as the anchor sample; a contrastive regularization (CR) term is added to the L1 loss to form ContrastLoss, and for the latent feature space, common intermediate features are taken from the same fixed pre-trained model VGG19. End-to-end PET image denoising with the contrast learning loss function is expressed as:
min_w ||J - φ(I, w)||_1 + β·ρ(φ(I, w)),
where I is the low-dose PET image, J is the standard-dose PET image, φ(I, w) is the PET basic denoising network with model parameters w, ||J - φ(I, w)||_1 is the data-fidelity term (the image reconstruction L1 loss), the parameter β balances the data-fidelity term against the regularization term, and ρ(φ(I, w)) is the regularization term, expressed with the contrastive regularization CR as:
ρ(φ(I, w)) = Σ_{i=1}^{n} ω_i · D(G_i(J), G_i(φ(I, w))) / D(G_i(I), G_i(φ(I, w))),
where G_i, i = 1, 2, …, n, denotes the i-th hidden-layer feature extracted from the pre-trained VGG19 (specifically, only the 1st, 3rd, 5th, 9th and 13th convolutional layers of VGG19 are used as the feature-extraction part), ω_i are the corresponding weight parameters, and D(x, y) denotes the L1 distance between x and y.
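A minimal sketch of this loss structure, with numpy and hypothetical toy feature extractors standing in for the pre-trained VGG19 layers (the actual method uses VGG19's 1st, 3rd, 5th, 9th and 13th convolutional layers; the beta value and toy weights here are illustrative):

```python
import numpy as np

def l1(x, y):
    """Mean absolute (L1) distance D(x, y)."""
    return np.abs(x - y).mean()

def contrast_loss(anchor, positive, negative, feats, weights, beta=0.1, eps=1e-8):
    """L1 fidelity term plus a contrastive regularization (CR) term.

    anchor:   denoised network output
    positive: standard-dose image; negative: low-dose image
    feats:    list of feature extractors G_i (stand-ins for VGG19 layers)
    """
    fidelity = l1(positive, anchor)
    cr = sum(w * l1(g(positive), g(anchor)) / (l1(g(negative), g(anchor)) + eps)
             for g, w in zip(feats, weights))
    return fidelity + beta * cr

# Toy feature extractors: identity and a two-pixel local mean (hypothetical)
feats = [lambda x: x, lambda x: (x + np.roll(x, 1, axis=0)) / 2.0]
weights = [0.5, 0.5]

rng = np.random.default_rng(0)
J = rng.random((16, 16))                     # standard dose (positive)
I = J + rng.normal(0.0, 0.5, J.shape)        # low dose (negative)
good = J + rng.normal(0.0, 0.01, J.shape)    # anchor close to the positive
bad = J + rng.normal(0.0, 0.4, J.shape)      # anchor still far from the positive
```

An anchor that is pulled toward the positive sample and pushed away from the negative one yields a smaller loss, which is exactly the behaviour CR rewards.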
5. The contrast learning-based low-dose PET image denoising method of claim 1, wherein: the specific process of training the network model in the step 5) is as follows:
5.1 Parameters in the denoising model are initialized with Kaiming initialization, and the parameters of each layer are set;
5.2 The low-dose PET image data of the training-set samples are input into the denoising network model to train the constructed PET basic denoising network; the output of each layer is computed through forward propagation to obtain the final output of the network model, which serves as the anchor sample;
5.3 VGG19 extracts features from the positive, negative and anchor samples respectively; following the principle of loss minimization, the partial derivatives of the contrast learning loss function are computed, the gradients are back-propagated, and the learnable parameters of the network model are updated with the Adam algorithm;
5.4 Steps 5.2-5.3 are repeated until the parameters of the whole network model converge.
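The loop of steps 5.1-5.4 can be caricatured in numpy with a toy linear "denoiser" trained by a hand-rolled Adam update; this is a sketch only, with a plain L1 loss standing in for ContrastLoss and a weight matrix standing in for the FABlock network:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((64, 8))                       # stand-in standard-dose patches
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

# 5.1 Kaiming-style initialization of the toy weight matrix
W = rng.normal(0.0, np.sqrt(2.0 / 8), (8, 8))

# Adam optimizer state (step 5.3 updates parameters with Adam)
m = np.zeros_like(W)
v = np.zeros_like(W)
beta1, beta2, lr, eps = 0.9, 0.999, 1e-2, 1e-8

def forward(W, x):
    """5.2 Forward propagation of the toy linear 'denoiser'."""
    return x @ W

loss_before = np.abs(forward(W, noisy) - clean).mean()
for t in range(1, 501):                           # 5.4 repeat until convergence
    out = forward(W, noisy)
    grad = noisy.T @ np.sign(out - clean) / out.size   # gradient of the L1 loss
    m = beta1 * m + (1 - beta1) * grad            # 5.3 Adam moment estimates
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    W -= lr * m_hat / (np.sqrt(v_hat) + eps)
loss_after = np.abs(forward(W, noisy) - clean).mean()
```

After 500 Adam steps the training loss drops well below its initial value, mirroring the convergence criterion of step 5.4.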
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2022110951686 | 2022-09-05 | ||
CN202211095168 | 2022-09-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115564666A true CN115564666A (en) | 2023-01-03 |
Family
ID=84740486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211107388.6A Pending CN115564666A (en) | 2022-09-05 | 2022-09-12 | Low-dose PET image denoising method based on contrast learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115564666A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111325686B (en) | Low-dose PET three-dimensional reconstruction method based on deep learning | |
CN108492269B (en) | Low-dose CT image denoising method based on gradient regular convolution neural network | |
CN110827216B (en) | Multi-generator generation countermeasure network learning method for image denoising | |
US11120582B2 (en) | Unified dual-domain network for medical image formation, recovery, and analysis | |
CN111627082B (en) | PET image reconstruction method based on filtering back projection algorithm and neural network | |
WO2022121160A1 (en) | Method for enhancing quality and resolution of ct images based on deep learning | |
CN111429379B (en) | Low-dose CT image denoising method and system based on self-supervision learning | |
CN112258415B (en) | Chest X-ray film super-resolution and denoising method based on generation countermeasure network | |
WO2023202265A1 (en) | Image processing method and apparatus for artifact removal, and device, product and medium | |
CN114219719A (en) | CNN medical CT image denoising method based on dual attention and multi-scale features | |
CN114387236A (en) | Low-dose Sinogram denoising and PET image reconstruction method based on convolutional neural network | |
He et al. | Dynamic PET image denoising with deep learning-based joint filtering | |
Chan et al. | An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction | |
Liang et al. | Multi-scale self-attention generative adversarial network for pathology image restoration | |
WO2021226500A1 (en) | Machine learning image reconstruction | |
CN111080736B (en) | Low-dose CT image reconstruction method based on sparse transformation | |
CN116245969A (en) | Low-dose PET image reconstruction method based on deep neural network | |
CN115937113B (en) | Method, equipment and storage medium for identifying multiple types of skin diseases by ultrasonic images | |
CN116563554A (en) | Low-dose CT image denoising method based on hybrid characterization learning | |
CN114463459B (en) | Partial volume correction method, device, equipment and medium for PET image | |
CN116664710A (en) | CT image metal artifact unsupervised correction method based on transducer | |
CN115564666A (en) | Low-dose PET image denoising method based on contrast learning | |
Zhang et al. | Deep residual network based medical image reconstruction | |
CN114926383A (en) | Medical image fusion method based on detail enhancement decomposition model | |
CN114936977A (en) | Image deblurring method based on channel attention and cross-scale feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||