CN112669402B - Rapid dynamic scatter correction method for four-dimensional PET imaging based on deep learning
Abstract
The invention belongs to the technical field of PET scatter correction, and specifically relates to a deep-learning-based rapid dynamic scatter correction method for four-dimensional (4D) PET imaging, comprising the following steps: acquire the raw data generated by 4D PET scans of a plurality of patients; extract the total coincidence events and the random coincidence events in each time frame and convert them into sinograms; for each frame, subtract the random-coincidence sinogram from the total-coincidence sinogram; apply scatter correction to each frame using single scatter simulation or multiple scatter simulation; train a model; then, using the trained model and weights, feed clinically acquired 4D PET data into the model frame by frame. The network model is based on a DenseNet structure, which enhances inter-layer information transfer and feature reuse, and is combined with a residual structure to alleviate problems such as vanishing gradients, thereby achieving high-precision scatter estimation. The invention is used for scatter correction in PET imaging.
Description
Technical Field
The invention belongs to the technical field of PET imaging scatter correction, and specifically relates to a deep-learning-based rapid dynamic scatter correction method for four-dimensional PET imaging.
Background
Positron emission tomography (PET) can detect physiological and biochemical information of biological tissues at the molecular level, such as metabolism and receptor binding, and is widely applied in nuclear medicine imaging for clinical examination, efficacy evaluation, drug development, and related fields.
PET imaging relies on radioisotope tracing and coincidence detection. When acquiring data with coincidence detection, the detector records not only the two back-to-back 511 keV photons produced by a true electron-positron annihilation, but also events affected by Compton scattering: a scattered photon loses energy and deviates from its original direction, so although the two detected photons originate from the same annihilation event, at least one of them has scattered in the medium one or more times. Such an event is called a scatter coincidence. Scatter coincidences cause severe image noise, poor contrast, and inaccurate quantitative analysis, seriously degrading image quality, so they must be corrected in modern PET imaging.
Common PET scatter correction methods include fitting-based methods, convolution/deconvolution, energy-window methods, single scatter simulation (SSS), multiple scatter simulation, and Monte Carlo simulation (MC). Because of the limitations of the fitting, convolution/deconvolution, windowing, and MC methods, single or multiple scatter simulation is commonly used in practice. These methods first require a rough reconstruction of the radioactivity distribution, but in dynamic imaging that distribution varies continuously over time. Conventional dynamic parametric image reconstruction is frame-based (FM): each time frame is reconstructed independently, frame by frame. Four-dimensional reconstruction, in contrast, requires estimating the scatter in both the time and space domains, which conventional SSS cannot do.
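The frame-based (FM) scheme described above can be sketched as follows. This is an illustrative stand-in only: the event tuple layout, frame edges, and tiny sinogram dimensions are hypothetical, not the patent's actual data format.

```python
import numpy as np

def bin_frames(events, frame_edges, n_angles=4, n_bins=4):
    """Bin list-mode events (time, angle index, radial index) into one
    sinogram per time frame; each frame is handled independently, as in
    the frame-based (FM) scheme. Toy dimensions, for illustration only."""
    sinos = np.zeros((len(frame_edges) - 1, n_angles, n_bins))
    for t, a, r in events:
        f = np.searchsorted(frame_edges, t, side="right") - 1
        if 0 <= f < len(frame_edges) - 1:   # drop events outside all frames
            sinos[f, a, r] += 1
    return sinos

# three toy events split across two one-second frames
events = [(0.5, 1, 2), (1.5, 1, 2), (1.7, 0, 3)]
sinos = bin_frames(events, frame_edges=[0.0, 1.0, 2.0])
print(sinos.shape)  # (2, 4, 4)
```

Each frame's sinogram can then be pre-corrected and scatter-corrected independently, as in steps S3 and S4 below.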
Disclosure of Invention
To address the limited applicability and low computational efficiency of existing scatter correction methods, the invention provides a fast, accurate, low-error dynamic scatter correction method for four-dimensional PET imaging based on deep learning.
In order to solve the technical problems, the invention adopts the following technical scheme:
A deep-learning-based fast dynamic scatter correction method for four-dimensional PET imaging comprises the following steps:
S1, acquiring the raw data generated by 4D PET scans of a plurality of patients, and converting the total coincidence events and random coincidence events into sinograms;
S2, extracting the total coincidence events and the random coincidence events in each time frame according to the frame-based method, and converting them into sinograms;
S3, subtracting the random-coincidence sinogram from the total-coincidence sinogram to obtain the pre-corrected sinogram Sino1;
S4, performing scatter correction on each frame of data using single scatter simulation or multiple scatter simulation, to obtain the corrected true-coincidence sinogram Sino2;
S5, training a deep learning model on the Sino1 and Sino2 data generated from all patients according to the model training flow, and saving the trained model and weights once training is complete;
S6, using the trained model and weights, feeding clinically acquired 4D PET data into the model frame by frame to obtain the corrected sinograms.
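The inference side of the pipeline above (S3 followed by S6) can be sketched as follows. This is a minimal illustration under stated assumptions: `model` is a placeholder for the trained network, and the array shapes are toy values, not the patent's implementation.

```python
import numpy as np

def correct_frames(total, random, model):
    """Apply the trained network frame by frame: pre-correct each frame by
    subtracting the random-coincidence sinogram (S3), then let the model
    map Sino1 to a scatter-corrected sinogram (S6)."""
    corrected = []
    for t, r in zip(total, random):
        sino1 = t - r                 # true + scatter events remain
        corrected.append(model(sino1))
    return np.stack(corrected)

# toy run: identity stand-in model (a real model would remove the scatter)
frames = correct_frames(np.ones((2, 3, 3)), np.zeros((2, 3, 3)), lambda s: s)
print(frames.shape)  # (2, 3, 3)
```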
The total coincidence events in S3 comprise true coincidence events, random coincidence events, and scatter coincidence events.
The pre-corrected sinogram Sino1 in S3 contains the true coincidence events and the scatter coincidence events; that is, Sino1 is the sum of the true and scatter coincidences.
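As a quick numerical check of this decomposition (toy numbers, assuming ideal counts with no noise): since total = true + scatter + random, subtracting the random component leaves exactly true + scatter.

```python
import numpy as np

# hypothetical per-bin counts for a 2x2 toy sinogram
true_ = np.full((2, 2), 5.0)
scatter = np.full((2, 2), 2.0)
random_ = np.full((2, 2), 1.0)

total = true_ + scatter + random_   # what the scanner records (S2)
sino1 = total - random_             # pre-corrected sinogram Sino1 (S3)
assert np.allclose(sino1, true_ + scatter)
```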
The training according to the model training flow in S5 comprises the following steps:
S5.1, preparing the pre-corrected sinograms Sino1 and the true-coincidence sinograms Sino2 to be used;
S5.2, dividing the Sino1 and Sino2 image data set into a training set, a validation set, and a test set;
S5.3, extracting 2D image slices at the same positions from Sino1 and Sino2 in the training and test sets to form training data pairs;
S5.4, inputting the generated 2D image data pairs into the deep learning network model;
S5.5, calculating the network loss;
S5.6, judging whether the model's error on the validation set has reached its minimum;
S5.7, continuing to update the network parameters if the minimum has not been reached, and saving the network and its weights once it has;
S5.8, evaluating model performance on the test data.
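The loop S5.4-S5.7 can be sketched with a toy stand-in model. Everything here is illustrative: the "network" is a small linear map fitted by gradient descent on an MSE loss, and the data and `w_true` mapping are synthetic placeholders for the (Sino1, Sino2) pairs, not the DenseNet of S5.4.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-ins for (Sino1, Sino2) pairs, flattened to 4-vectors;
# w_true is a hypothetical "scatter-removal" map, for illustration only
X_train, X_val = rng.normal(size=(64, 4)), rng.normal(size=(16, 4))
w_true = np.array([0.9, -0.5, 0.3, 0.1])
y_train, y_val = X_train @ w_true, X_val @ w_true

w = np.zeros(4)                       # "network" parameters
best_err, best_w = np.inf, w.copy()
for epoch in range(200):
    # S5.5: MSE loss gradient; S5.7: parameter update
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(X_train)
    w -= 0.1 * grad
    val_err = np.mean((X_val @ w - y_val) ** 2)   # S5.6: validation error
    if val_err < best_err:                        # keep the best weights
        best_err, best_w = val_err, w.copy()
```

The weights saved are those minimizing the validation error, matching the save condition in S5.6-S5.7.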
The network loss in S5.5 is calculated with the mean squared error (MSE) loss function.
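The MSE loss named above, written out for two sinogram slices (the inputs here are toy arrays):

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error between predicted and label sinograms (S5.5)."""
    return np.mean((pred - target) ** 2)

print(mse_loss(np.array([1.0, 2.0]), np.array([1.0, 4.0])))  # 2.0
```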
The pre-corrected sinogram Sino1 in S5 serves as the input data, and the true-coincidence sinogram Sino2 serves as the label data.
The deep learning network model in S5.4 adopts a DenseNet structure.
Compared with the prior art, the invention has the following beneficial effects:
The invention uses deep learning to achieve fast and accurate scatter correction in four-dimensional PET imaging. The network model is based on a DenseNet structure, which enhances inter-layer information transfer and feature reuse; combined with a residual structure, it alleviates problems such as vanishing gradients, thereby achieving high-precision scatter estimation.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a model training flow chart of the present invention;
FIG. 3 is a block diagram of a deep learning network model of the present invention;
FIG. 4 is a sinogram of the total coincidence events of the present invention;
FIG. 5 is a sinogram of the random coincidence events of the present invention;
FIG. 6 is a pre-corrected sinogram of the present invention;
FIG. 7 is a sinogram of the true coincidence events of the present invention;
FIG. 8 is a corrected sinogram of the present invention;
FIG. 9 is a graph of the correction results of the multiple scatter simulation method;
FIG. 10 is a graph of the correction results of the algorithm of the present invention;
FIG. 11 compares cross-sectional pixel-value profiles of the present invention and the multiple scatter simulation method.
Detailed Description
The following describes embodiments of the invention clearly and completely with reference to the accompanying drawings. The embodiments described are only some, not all, of the possible embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
A deep-learning-based fast dynamic scatter correction method for four-dimensional PET imaging, as shown in FIG. 1, comprises the following steps:
S1, acquiring the raw data generated by 4D PET scans of a plurality of patients, and converting the total coincidence events and random coincidence events into sinograms;
S2, extracting the total coincidence events and the random coincidence events in each time frame according to the frame-based method, and converting them into sinograms, as shown in FIG. 4 and FIG. 5;
S3, subtracting the random-coincidence sinogram from the total-coincidence sinogram, as shown in FIG. 6, to obtain the pre-corrected sinogram Sino1;
S4, performing scatter correction on each frame of data using single scatter simulation or multiple scatter simulation, as shown in FIG. 7, to obtain the corrected true-coincidence sinogram Sino2;
S5, training a deep learning model on the Sino1 and Sino2 data generated from all patients according to the model training flow, and saving the trained model and weights once training is complete;
S6, using the trained model and weights, feeding clinically acquired 4D PET data into the model frame by frame, as shown in FIG. 8, to obtain the corrected sinograms.
Further, the total coincidence events in S3 comprise true coincidence events, random coincidence events, and scatter coincidence events.
Further, the pre-corrected sinogram Sino1 in S3 contains the true coincidence events and the scatter coincidence events; that is, Sino1 is the sum of the true and scatter coincidences.
Further, as shown in FIG. 2, the training according to the model training flow in S5 comprises the following steps:
S5.1, preparing the pre-corrected sinograms Sino1 and the true-coincidence sinograms Sino2 to be used;
S5.2, dividing the Sino1 and Sino2 image data set into a training set, a validation set, and a test set;
S5.3, extracting 2D image slices at the same positions from Sino1 and Sino2 in the training and test sets to form training data pairs;
S5.4, inputting the generated 2D image data pairs into the deep learning network model;
S5.5, calculating the network loss;
S5.6, judging whether the model's error on the validation set has reached its minimum;
S5.7, continuing to update the network parameters if the minimum has not been reached, and saving the network and its weights once it has;
S5.8, evaluating model performance on the test data.
Further, the network loss in S5.5 is preferably calculated with the mean squared error (MSE) loss function.
Further, the pre-corrected sinogram Sino1 in S5 serves as the input data, and the true-coincidence sinogram Sino2 serves as the label data.
Further, as shown in FIG. 3, the deep learning network model in S5.4 preferably adopts a DenseNet structure, which enhances inter-layer information transfer and feature reuse.
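The two structural ideas named here — dense connectivity (every layer sees all earlier feature maps) and a residual shortcut — can be sketched without a deep-learning framework. The `conv_stub` below is only a channel-mixing stand-in for a real convolution, and all layer counts and shapes are illustrative, not the network of FIG. 3.

```python
import numpy as np

def conv_stub(x, out_ch):
    """Stand-in for a conv layer: random fixed channel mixing plus tanh,
    so the connectivity pattern is visible without a DL framework."""
    rng = np.random.default_rng(x.shape[0])   # deterministic per channel count
    w = rng.normal(size=(out_ch, x.shape[0])) * 0.1
    return np.tanh(np.tensordot(w, x, axes=1))   # (out_ch, H, W)

def dense_block_with_residual(x, n_layers=3, growth=2):
    """DenseNet-style block: each layer receives the concatenation of all
    earlier feature maps (feature reuse); a residual connection adds the
    input back at the end to ease gradient flow."""
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=0)   # dense connectivity
        features.append(conv_stub(inp, growth))
    out = conv_stub(np.concatenate(features, axis=0), x.shape[0])
    return out + x                               # residual shortcut

x = np.ones((4, 8, 8))              # (channels, H, W) toy sinogram slice
y = dense_block_with_residual(x)
print(y.shape)  # (4, 8, 8)
```

Because the shortcut carries the input through unchanged, gradients have a direct path past the block, which is the mechanism cited for alleviating vanishing gradients.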
FIG. 9 and FIG. 10 show the correction results of the multiple scatter simulation method and of the algorithm of the present invention, respectively. As shown in FIG. 11, the average cross-sectional pixel-value error between the present invention and the multiple scatter simulation method is below 1%. Single or multiple scatter simulation requires about 30 seconds, while the method of the present invention requires only about 3 seconds. The deep learning method therefore provides fast and accurate scatter correction in four-dimensional PET imaging: it is roughly 10 times faster than multiple scatter simulation while achieving comparable accuracy, with an error of less than 1%.
The preferred embodiments of the invention have been described in detail, but the invention is not limited to them; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the invention, and all such changes fall within its scope.
Claims (6)
1. A deep-learning-based fast dynamic scatter correction method for four-dimensional PET imaging, characterized by comprising the following steps:
S1, acquiring the raw data generated by 4D PET scans of a plurality of patients, and converting the total coincidence events and random coincidence events into sinograms;
S2, extracting the total coincidence events and the random coincidence events in each time frame according to the frame-based method, and converting them into sinograms;
S3, subtracting the random-coincidence sinogram from the total-coincidence sinogram to obtain the pre-corrected sinogram Sino1;
S4, performing scatter correction on each frame of data using single scatter simulation or multiple scatter simulation, to obtain the corrected true-coincidence sinogram Sino2;
S5, training a deep learning model on the Sino1 and Sino2 data generated from all patients according to the model training flow, and saving the trained model and weights once training is complete;
wherein the training according to the model training flow in S5 comprises the following steps:
S5.1, preparing the pre-corrected sinograms Sino1 and the true-coincidence sinograms Sino2 to be used;
S5.2, dividing the Sino1 and Sino2 image data set into a training set, a validation set, and a test set;
S5.3, extracting 2D image slices at the same positions from Sino1 and Sino2 in the training and test sets to form training data pairs;
S5.4, inputting the generated 2D image data pairs into the deep learning network model;
S5.5, calculating the network loss;
S5.6, judging whether the model's error on the validation set has reached its minimum;
S5.7, continuing to update the network parameters if the minimum has not been reached, and saving the network and its weights once it has;
S5.8, evaluating model performance on the test data;
S6, using the trained model and weights, feeding clinically acquired 4D PET data into the model frame by frame to obtain the corrected sinograms.
2. The method of claim 1, wherein the total coincidence events in S3 comprise true coincidence events, random coincidence events, and scatter coincidence events.
3. The method of claim 1, wherein the pre-corrected sinogram Sino1 in S3 contains the true coincidence events and the scatter coincidence events; that is, Sino1 is the sum of the true and scatter coincidences.
4. The method of claim 1, wherein the network loss in S5.5 is calculated with the mean squared error loss function.
5. The method of claim 1, wherein the pre-corrected sinogram Sino1 in S5 serves as the input data and the true-coincidence sinogram Sino2 serves as the label data.
6. The method of claim 1, wherein the deep learning network model in S5.4 adopts a DenseNet structure.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011542650.0A | 2020-12-22 | 2020-12-22 | Rapid dynamic scatter correction method for four-dimensional PET imaging based on deep learning |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112669402A | 2021-04-16 |
| CN112669402B | 2023-09-15 |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109717891A | 2018-12-29 | 2019-05-07 | Zhejiang Mingfeng Intelligent Medical Technology Co., Ltd. | PET scatter correction method based on deep learning |
| CN110197516A | 2019-05-29 | 2019-09-03 | Zhejiang Mingfeng Intelligent Medical Technology Co., Ltd. | TOF-PET scatter correction method based on deep learning |
| CN112017258A | 2020-09-16 | 2020-12-01 | Shanghai United Imaging Healthcare Co., Ltd. | PET image reconstruction method, apparatus, computer device, and storage medium |

Family Cites Families (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11302003B2 | 2017-10-26 | 2022-04-12 | Wisconsin Alumni Research Foundation | Deep learning based data-driven approach for attenuation correction of PET data |
| CN109697741B | 2018-12-28 | 2023-06-16 | Shanghai United Imaging Intelligence Co., Ltd. | PET image reconstruction method, device, equipment and medium |
| US11010938B2 | 2019-04-03 | 2021-05-18 | UIH America, Inc. | Systems and methods for positron emission tomography image reconstruction |
Non-Patent Citations (2)

- Zhanli Hu et al., "An improved PET image reconstruction method based on super-resolution," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 946, pp. 320-329.
- 刘锐, 何先波, "Research progress in deep-learning-based lung medical image analysis," Journal of North Sichuan Medical College, no. 2, pp. 160-164.
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |