CN115830167A - PET image scattering correction method and PET system - Google Patents

Info

Publication number: CN115830167A
Application number: CN202211552380.0A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 崔洁, 李楠
Current Assignee: Jiangsu Sinogram Medical Technology Co., Ltd.
Classification (Landscapes): Nuclear Medicine (AREA)
Abstract

The invention relates to a scatter correction method for PET images and a PET system. The method comprises: acquiring, for detection data within a specified axial field of view, a first PET image and a first attenuation coefficient image corresponding to it; inputting the patient's basic information, the first PET image and the first attenuation coefficient image into a deep learning network, and acquiring the second PET image and second attenuation coefficient image output by the network, where the second PET image is a simulated output image of the region outside the axial field of view; and acquiring scatter correction information for the detection data based on the first and second PET images, the first and second attenuation coefficient images, and pre-established single-scatter-simulation (SSS) calculation information, the scatter correction information being used for reconstructing the PET image. The scatter correction information obtained by the method is more comprehensive and accurate; applied in PET reconstruction, it yields artifact-free PET images of better quality.

Description

PET image scattering correction method and PET system
Technical Field
The invention relates to the technical field of medical imaging, and in particular to a PET image scatter correction method and a PET system.
Background
Currently, during acquisition with a Positron Emission Tomography (PET) system, photons may Compton-scatter in human tissue before reaching the detector, changing their direction of flight. Due to the limited energy resolution of the detector, these scatter events are incorrectly recorded as true coincidence events, corrupting the positional information of the nuclide and producing scatter artifacts that severely degrade image quality. In three-dimensional data acquisition in particular, scatter coincidences can reach 30%-60% of the total count, which makes scatter correction one of the key links in PET reconstruction.
Several factors in commonly used PET acquisition modes affect the accuracy of the estimated scatter distribution and thereby seriously degrade image quality:
First, the single scatter simulation (SSS) method is widely used for scatter correction in PET reconstruction. It models the scatter distribution by calculating the probability that a coincident gamma photon undergoes a single scatter event before being detected. SSS methods, however, typically estimate only scatter events occurring within the axial scan field of view (FOV) and cannot accurately estimate scatter events originating outside it. Most PET scanners in common use have a limited axial FOV, so whole-body scanning requires multiple bed positions, and each bed can only be scatter corrected using that bed's own data. The limitation is that, because the radioactive sources are continuously distributed through the human body, scatter originating outside a bed's FOV enters the FOV and is collected by the detector during that bed's acquisition. When SSS correction is performed, the scatter distribution outside the FOV cannot be accurately estimated for lack of information beyond the axial FOV, which distorts the estimated distribution of scatter events in the single-bed scan. The tail-fitting method currently in use can roughly compensate for out-of-FOV scatter, but the compensation is a crude approximation; when the injected dose is low, the scan time short, or the data noisy (as when scanning obese patients), the accuracy of the scatter correction suffers severely.
Secondly, in multi-modality acquisition, a linear attenuation coefficient distribution matching the PET data sometimes cannot be obtained accurately, producing artifacts in the PET image. The images of the different modalities may be relatively displaced. Taking a PET/CT system as an example: a CT scan is usually completed in a very short time, so the resulting image is essentially a snapshot of one moment, whereas PET scanning is slow, typically several minutes per bed position, so acquisition cannot be completed within a single breath-hold. During a long PET scan the patient's body may move (for example, the arms or head), causing the PET and CT images to be mismatched and producing obvious scatter artifacts. Artifacts in the attenuation image likewise propagate into errors in the attenuation coefficients applied to the PET image. For example, CT images of patients with metal in their bodies (such as cardiac pacemakers or metal braces) show pronounced bright metal artifacts that make the surrounding tissue hard to distinguish, so the linear attenuation coefficient distribution contains obvious artifacts that severely affect the scatter correction of the PET image. Finally, the scan range of PET is typically larger than that of other modalities (such as CT or MRI). For patients of relatively large body size, the other modality may fail to cover a sufficiently large imaging range, truncating the linear attenuation coefficient distribution; applying such incomplete attenuation information also produces artifacts in the PET reconstruction. In short, when PET is imaged in conjunction with other modalities, the other modality's images limit the achievable accuracy of PET scatter correction.
Disclosure of Invention
Technical problem to be solved
In view of the above disadvantages and shortcomings of the prior art, the present invention provides a PET image scatter correction method and a PET system capable of performing accurate scatter correction for each scanning bed position.
(II) technical scheme
In order to achieve the above purpose, the invention mainly adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for scatter correction of a PET image, including:
S10, for detection data within a specified axial field of view (0, D), acquiring a first PET image of the detection data and a first attenuation coefficient image corresponding to the first PET image, wherein the first PET image is an image reconstructed from the detection data within the axial field of view (0, D) without scatter correction;
S20, based on a pre-trained deep learning network, inputting the patient's basic information, the first PET image and the first attenuation coefficient image into the network, and acquiring the second PET image output by the network and the second attenuation coefficient image corresponding to it; the second PET image is a simulated output image of the region (D, D+F) outside the axial field of view, where D denotes the size of the axial field of view and F the size of the axial extension;
S30, acquiring scatter correction information for the detection data based on the first PET image, the second PET image, the first attenuation coefficient image, the second attenuation coefficient image and pre-established single-scatter-simulation (SSS) calculation information, the scatter correction information being used for reconstructing the PET image;
wherein the pre-trained deep learning network is obtained by training a constructed deep learning network on PET images within the axial field of view together with matched associated images and associated information, and outputs the PET image outside the axial field of view and its corresponding attenuation coefficient image.
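The three-step flow S10-S30 can be sketched as follows; the stand-ins for the trained network G and the SSS estimator, the array shapes, and the toy numbers are all hypothetical, not taken from the patent:

```python
import numpy as np

def scatter_correction_pipeline(x1, mu1, patient_info, network, sss):
    """S10-S30 sketch: extend the axial FOV with a learned network, then
    evaluate SSS on the spliced volume.  `network` and `sss` are stand-ins
    for the trained deep learning network G and the SSS estimator."""
    # S20: estimate activity (x2) and attenuation (mu2) outside the FOV (D, D+F)
    x2, mu2 = network(x1, mu1, patient_info)
    # S30: splice (0, D) and (D, D+F) along the axial (first) axis ...
    x3 = np.concatenate([x1, x2], axis=0)
    mu3 = np.concatenate([mu1, mu2], axis=0)
    # ... and compute scatter correction information on the extended volume
    return sss(x3, mu3)

# toy stand-ins so the sketch runs end to end
x1 = np.ones((4, 2, 2))                               # 4 axial slices inside (0, D)
mu1 = 0.096 * np.ones((4, 2, 2))                      # water-like attenuation, 1/cm
toy_net = lambda x, mu, info: (0.5 * x[:2], mu[:2])   # 2 slices outside the FOV
toy_sss = lambda x, mu: 0.3 * float(x.sum())          # constant scatter fraction
s_hat = scatter_correction_pipeline(x1, mu1, (1.70, 70.0, 370.0, 60.0),
                                    toy_net, toy_sss)
```

The key design point the sketch illustrates is that SSS runs on the spliced (0, D+F) volume rather than on the single-bed (0, D) volume alone.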
Optionally, before S10 the method further includes:
S00, acquiring training samples for training the deep learning network based on reconstructed PET images and matched associated images and associated information;
wherein each training sample comprises: a reconstructed (or simulated) first PET image, the attenuation coefficient image corresponding to it, the height, weight and injected radioactivity of the user to whom the PET image belongs, and the time elapsed since tracer injection; for verification, a true linear attenuation image obtained by PET/CT or PET/MR scanning is used;
S01, training the deep learning network on the training samples to obtain the trained deep learning network;
wherein the network parameters θ of the trained deep learning network minimize the loss function L, so that the network maps the non-scatter-corrected PET image and attenuation coefficient image within the axial field of view to the non-scatter-corrected PET image and attenuation coefficient image of the extended region outside the axial field of view.
Optionally, S01 comprises: the network parameters θ of the deep learning network G and the output of G are expressed as follows:

θ* = argmin_θ L( G(x1, μ1, h, w, a, t_a; θ), (x_True, μ_True) )

(x2, μ2) = G(x1, μ1, h, w, a, t_a; θ)

wherein x1 is the first PET image, μ1 the first attenuation coefficient image, x2 the second PET image, μ2 the second attenuation coefficient image, h the user's height, w the user's weight, a the injected radioactivity and t_a the interval time since tracer injection; the second PET image x2 and the second attenuation coefficient image μ2 are the output of G.
During verification, the output of G is compared with a true linear attenuation image μ_True obtained by PET/CT or PET/MR scanning and with an image x_True reconstructed based on μ_True.
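The training objective above can be sketched as follows. The patent does not fix the form of the loss L, so the mean-squared-error form below is a hypothetical choice for illustration:

```python
import numpy as np

def loss_L(g_output, x_true, mu_true):
    """Hypothetical loss L for training G: mean squared error of the network
    output (x2, mu2) against the true activity image x_True and the true
    linear attenuation image mu_True from a PET/CT or PET/MR scan."""
    x2, mu2 = g_output
    return float(np.mean((x2 - x_true) ** 2) + np.mean((mu2 - mu_true) ** 2))

# a perfect prediction drives the loss to zero; an offset prediction does not
x_true = np.ones((2, 2))
mu_true = np.full((2, 2), 0.096)
loss_perfect = loss_L((x_true, mu_true), x_true, mu_true)
loss_off = loss_L((x_true + 1.0, mu_true), x_true, mu_true)
```

Minimizing such an L over θ is what the argmin expression above denotes; in practice the minimization would be run by a stochastic gradient optimizer over many samples.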
Optionally, the deep learning network is one of: a CNN, a U-Net, or a GAN.
Optionally, S10 includes:
obtaining the first PET image x1 within the axial field of view by maximizing the log-likelihood function with the MLEM algorithm while keeping the linear attenuation coefficient distribution μ constant;
and obtaining the first attenuation coefficient image μ1 within the axial field of view by keeping the first PET image x1 constant and maximizing, over the unknown linear attenuation coefficient distribution μ, the log-likelihood function calculated from the PET data.
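The alternation between the two maximizations above can be sketched as a skeleton; the two update functions below are toy stand-ins (a simple contraction), not the actual MLEM and attenuation updates:

```python
def alternate_x_mu(x0, mu0, update_x, update_mu, n_outer=3):
    """MLAA-style alternation sketch for S10: update the activity x by MLEM
    with mu held constant, then update mu from the same log-likelihood with
    x held constant.  Both update functions are hypothetical stand-ins."""
    x, mu = x0, mu0
    for _ in range(n_outer):
        x = update_x(x, mu)    # x1: maximize log-likelihood over x, mu fixed
        mu = update_mu(x, mu)  # mu1: maximize log-likelihood over mu, x fixed
    return x, mu

# toy contraction standing in for the real updates: x converges toward 2.0
x_est, mu_est = alternate_x_mu(1.0, 0.096,
                               lambda x, mu: 0.5 * (x + 2.0),
                               lambda x, mu: mu)
```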
Optionally, S30 includes:
acquiring a third PET image x3_j and a third attenuation coefficient image μ3_k over the extended axial field of view by splicing along the axial direction:

x3_j = x1_j for positions within (0, D), x3_j = x2_j for positions within (D, D+F);
μ3_k = μ1_k for positions within (0, D), μ3_k = μ2_k for positions within (D, D+F);

based on the third PET image x3_j and the third attenuation coefficient image μ3_k, correcting the pre-established SSS calculation information to obtain corrected scatter correction information ŝ, and taking ŝ as the scatter correction information of the detection data:

ŝ^{AB} = ∫_{V_S} dV_S [ σ_AS σ_BS / (4π R_AS² R_BS²) ] · (μ3/σ_c) · (dσ_c/dΩ) · ( I_A + I_B )

I_A = ε_AS ε'_BS · exp( −∫_{S1} μ3 ds − ∫_{S2} μ3' ds ) · ∫_{S1} x3 ds

I_B = ε_BS ε'_AS · exp( −∫_{S2} μ3 ds − ∫_{S1} μ3' ds ) · ∫_{S2} x3 ds

wherein I_A indicates that a positron annihilates at some point on path S1, emitting a pair of gamma photons: one moves along S1 without scattering and is detected by detector A, with photon energy 511 keV, detection efficiency ε_AS and linear attenuation coefficient image μ3; the other scatters at point S, moves along path S2 and is detected by detector B, with photon energy below 511 keV, detection efficiency ε'_BS and linear attenuation coefficient image μ3'. I_B indicates that a positron annihilates at some point on path S2, emitting a pair of gamma photons: one moves along S2 without scattering and is detected by detector B, with photon energy 511 keV, detection efficiency ε_BS and linear attenuation coefficient image μ3; the other scatters at point S, moves along path S1 and is detected by detector A, with photon energy below 511 keV, detection efficiency ε'_AS and linear attenuation coefficient image μ3'. R_AS and R_BS denote the distances from the scattering point to detector A and detector B respectively, and S denotes the scattering point position; ds denotes integration over distance s; V_S denotes the total scattering volume; σ_AS denotes the geometric cross-section of detector A along the gamma ray, and σ_BS the geometric cross-section of detector B along the ray; σ_c denotes the Compton scattering cross-section; dσ_c/dΩ denotes the differential cross-section of Compton scattering, obtainable from the Klein-Nishina equation; and Ω denotes the scattering solid angle.
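In a numerical implementation the volume integral over scatter points S is discretized into a sum, and the Klein-Nishina equation supplies dσ_c/dΩ. The sketch below shows both pieces; the way the per-point geometric and attenuation factors are folded into `weights`, `I_A` and `I_B` is a simplifying assumption for illustration:

```python
import numpy as np

R_E = 2.8179403262e-15  # classical electron radius in metres

def klein_nishina(theta, alpha=1.0):
    """Klein-Nishina differential cross-section dsigma_c/dOmega for a photon
    scattered through angle theta; alpha = E/(m_e c^2), which is 1 for the
    511 keV annihilation photons considered here."""
    p = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))  # scattered/incident energy
    return 0.5 * R_E ** 2 * p ** 2 * (p + 1.0 / p - np.sin(theta) ** 2)

def sss_lor(weights, I_A, I_B):
    """Discretized single-scatter estimate for one LOR (A, B): the integral
    over scatter points S becomes a sum.  `weights` is assumed to fold in
    sigma_AS*sigma_BS / (4*pi*R_AS^2*R_BS^2) * (mu3/sigma_c)
    * dsigma_c/dOmega * dV_S per scatter point."""
    return float(np.sum(weights * (I_A + I_B)))

# forward scattering (theta = 0) reduces to the Thomson value r_e^2
dcs_forward = klein_nishina(0.0)
s_ab = sss_lor(np.array([1.0, 2.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0]))
```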
Optionally, the training samples further comprise multi-bed training samples, each comprising: a reconstructed PET image from multi-bed detection data, the attenuation coefficient image of that PET image, the height, weight and injected radioactivity of the user to whom the PET image belongs, and the time elapsed since tracer injection; for verification, a true linear attenuation image obtained by PET/CT or PET/MR scanning is used.
In use, the deep learning network yields scatter correction information ŝ for each bed.
For multi-bed data, the data of the beds already scanned are spliced together and input into the deep learning network as beds accumulate, so that with each additional bed the input to the network becomes more complete and its information richer.
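The accumulation of bed data described above can be sketched as follows (the axial splicing axis and toy volumes are assumptions for illustration):

```python
import numpy as np

def accumulate_beds(bed_volumes):
    """For multi-bed data: after each bed is scanned, splice all beds
    acquired so far along the axial (first) axis, giving the network a
    progressively more complete input volume."""
    return [np.concatenate(bed_volumes[:i + 1], axis=0)
            for i in range(len(bed_volumes))]

beds = [np.full((2, 2, 2), float(b + 1)) for b in range(3)]  # 3 toy beds
network_inputs = accumulate_beds(beds)   # input grows: 2, 4, 6 axial slices
```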
In a second aspect, an embodiment of the present invention further provides a PET image reconstruction method, including:
P01, for detection data within a specified axial field of view (0, D), acquiring scatter correction information ŝ of the detection data by the PET image scatter correction method of any one of claims 1 to 6;
P02, performing PET reconstruction according to ŝ with a pre-established log-likelihood function to obtain an accurately scatter-corrected PET image.
In a fourth aspect, embodiments of the present invention also provide a PET system, including a memory and a processor; the memory stores computer program instructions, and the processor executes them to carry out the method of any embodiment of any of the above aspects.
(III) advantageous effects
The method provided by the invention overcomes the prior-art defect that, when imaging together with other modalities, the other modality's images limit the accuracy of PET scatter correction.
In the invention, the scatter outside the axial field of view is accurately estimated by means of the PET image and attenuation image within the axial field of view, yielding a scatter correction for the region of interest that accounts not only for scatter events inside the axial field of view but also, relatively accurately, for scatter events outside it, greatly improving the accuracy of the scatter correction.
Finally, the PET scan does not depend on images from other modalities, so higher image quality is obtained while the patient's radiation dose is greatly reduced.
Specifically, obtaining the scatter outside the axial field of view may proceed as follows: an accurate attenuation coefficient image within the axial field of view is calculated from the reconstructed PET image of the current bed; this, together with patient information such as height, weight and injected activity, serves as the deep learning network's input; and the radioactivity distribution and linear attenuation coefficient distribution outside the current axial field of view are estimated by deep learning.
Drawings
Fig. 1 is a schematic flowchart of a method for scatter correction of a PET image according to an embodiment of the present invention;
FIG. 2 is a diagram showing the comparison of the results of PET images obtained by various algorithms.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to the present embodiments of the invention, which are illustrated in the accompanying drawings.
Positron Emission Tomography (PET) is a high-end nuclear medicine diagnostic imaging modality. In practice, a radionuclide (e.g. ¹⁸F or ¹¹C) is used to label a metabolic substance, the labeled nuclide is injected into the human body, and the PET system then performs functional metabolic imaging of the patient to reflect metabolic activity and thereby support diagnosis. Currently, commercial PET is usually integrated with another imaging modality, such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), to image the patient's anatomy, so that PET nuclide-distribution imaging can be accurately localized and lesion localization improved. The fused functional and anatomical images combine the advantages of both modalities, give a clear overall picture of the whole body, serve the goals of early lesion detection and disease diagnosis, and are particularly advantageous in guiding the diagnosis and treatment of tumors, heart disease and brain disease.
In existing PET image reconstruction, an SSS method is generally used to estimate the scatter distribution; however, this method cannot accurately estimate scatter events outside the axial field of view, which degrades image quality and hence the physician's diagnosis.
To solve the problem of inaccurate scatter estimation in conventional PET image reconstruction, embodiments of the invention first extract attenuation information directly from the PET acquisition data, without relying on other modality imaging; this effectively corrects artifacts in the linear attenuation coefficient distribution and widens the range of applications of PET imaging. The applicant's prior applications No. 201910218840.8, entitled "Method for acquiring correction information for performing attenuation correction on a PET activity distribution image", and No. 201910234260.8, entitled "Correction information acquisition method for performing attenuation correction on a PET activity distribution image", iteratively extract linear attenuation coefficient images from the time-of-flight (TOF) information acquired by PET; with these prior applications, the PET image and the attenuation image within the axial field of view can be acquired. The PET images acquired by both methods may be images that have not been scatter corrected.
Then, the scatter outside the axial field of view is estimated accurately. For example, the PET image of the current bed obtained in the above prior applications is used to calculate an accurate attenuation coefficient image within the axial field of view; this is input to the deep neural network together with the patient's height, weight, injected activity and other information, and the radioactivity distribution and linear attenuation coefficient distribution outside the current axial field of view are estimated by deep learning.
Finally, the scatter distribution estimated by the SSS method is corrected, so that the scatter correction of the region of interest accounts not only for scatter events within the axial field of view but also, relatively accurately, for scatter events outside it, greatly improving the accuracy of the scatter correction.
Example one
As shown in fig. 1, an embodiment of the present invention provides a PET image scatter correction method. The method may be executed by a control device/electronic device for PET image reconstruction; the control device may be integrated into the acquisition equipment of a PET system or be a separate computer processing device. The method includes the following steps:
S10, for detection data within the specified axial field of view (0, D), acquiring a first PET image of the detection data and a first attenuation coefficient image corresponding to the first PET image, wherein the first PET image is an image reconstructed from the detection data within the axial field of view (0, D) without scatter correction.
The first PET image in this step can be reconstructed in a conventional manner and, as described above, can be reconstructed by the methods of the two prior applications mentioned earlier.
S20, based on the pre-trained deep learning network, inputting the patient's basic information, the first PET image and the first attenuation coefficient image into the network, and acquiring the second PET image output by the network and the second attenuation coefficient image corresponding to it; the second PET image is a simulated output image of the region (D, D+F) outside the axial field of view, where D denotes the size of the axial field of view and F the size of the axial extension.
In this embodiment, the deep learning network is one of: a CNN, a U-Net, or a GAN.
S30, acquiring scattering correction information of the detection data based on the first PET image, the second PET image, the first attenuation coefficient image, the second attenuation coefficient image and pre-established calculation information of the scattering correction SSS, wherein the scattering correction information is used for reconstructing the PET image;
the pre-trained deep learning network is obtained by training the constructed deep learning network based on the PET image in the axial visual field range and the matched associated image and associated information, and the network outputs the PET image outside the axial visual field range and the attenuation coefficient image corresponding to the PET image.
The target in the embodiment firstly reconstructs the radioactivity distribution and the linear attenuation coefficient distribution in the visual field from the PET acquisition data in the axial visual field range (0, D), then restores the radioactivity distribution and the linear attenuation coefficient distribution in the axial visual field (D, D + F) by using a deep learning network, and finally splices the radioactivity distribution and the linear attenuation coefficient distribution in the axial visual field range (0, D + F) to perform SSS scattering correction, so that the scattering in the axial visual field (0, D) can be accurately estimated, D represents the size of the axial visual field range in the process, and F represents the size of the axial expansion range.
It is understood that the value of D is determined by the axial field of view of the particular machine, while F may be chosen empirically or calculated by a related algorithm; for example, D may be 218.4 mm and an empirical value of F may be 9.3 mm. The values of D and F are not limited here and may be adjusted to actual needs.
In a specific application, before the step S10, the method shown in fig. 1 may further include the following step S00 not shown in the figure:
S00, acquiring training samples for training the deep learning network based on reconstructed PET images and matched associated images and associated information;
wherein each training sample comprises: a reconstructed (or simulated) first PET image, the attenuation coefficient image corresponding to it, the height, weight and injected radioactivity of the user to whom the PET image belongs, and the time elapsed since tracer injection; for verification, a true linear attenuation image obtained by PET/CT or PET/MR scanning is used.
S01, training the deep learning network based on the training samples to obtain the trained deep learning network;
The network parameters θ of the trained deep learning network minimize the loss function L, so that the network maps the non-scatter-corrected PET image and attenuation coefficient image within the axial field of view to the non-scatter-corrected PET image and attenuation coefficient image of the extended region outside the axial field of view.
The network parameters θ of the deep learning network G and the output of G are expressed as follows:

θ* = argmin_θ L( G(x1, μ1, h, w, a, t_a; θ), (x_True, μ_True) )

(x2, μ2) = G(x1, μ1, h, w, a, t_a; θ)

wherein x1 is the first PET image, μ1 the first attenuation coefficient image, x2 the second PET image, μ2 the second attenuation coefficient image, h the user's height, w the user's weight, a the injected radioactivity and t_a the interval time since tracer injection; the second PET image x2 and the second attenuation coefficient image μ2 are the output of G.
During verification, the output of G is compared with a true linear attenuation image μ_True obtained by PET/CT or PET/MR scanning and with an image x_True reconstructed based on μ_True.
The first PET image x1 within the axial field of view is obtained by maximizing the log-likelihood function with the MLEM algorithm while keeping the linear attenuation coefficient distribution μ constant;
the first attenuation coefficient image μ1 within the axial field of view is then calculated from the PET data by keeping the first PET image x1 constant and maximizing the log-likelihood function over the unknown linear attenuation coefficient distribution μ.
In this embodiment, the accurate linear attenuation coefficient distribution within the axial field of view is obtained using an existing attenuation correction method, the radioactivity distribution and linear attenuation coefficient distribution outside the axial field of view are obtained by deep learning, and finally the scatter events inside and outside the axial field of view are estimated in a unified way, making the scatter correction more accurate, improving image quality and avoiding the influence of mismatched images from other modalities on the scatter distribution.
Example two
For a better understanding of the method of the first embodiment, the PET image scatter correction method is described in detail below in combination with the formulas and the derivation process.
Step 1: The PET acquisition process can be modeled by the following equation:

ȳ_it = exp( −Σ_k l_ik μ_k ) · Σ_j A_ijt x_j + r_i + s_i   (1)

In formula (1), y = [y_11, y_21, …, y_it, …, y_NT]′ denotes the detection data and ȳ its mean value; N denotes the size of the detection-data sinogram, T the size of the discrete time-of-flight (TOF) space, i the variable index of a sinogram line of response (LOR), and t the variable index of the discrete TOF space. The prime superscript denotes matrix transposition. x = [x_1, x_2, …, x_j, …, x_M]′ denotes the unknown radioactivity distribution image, M the size of the radioactivity image space, and j its variable index, representing a point source at a spatial position. μ = [μ_1, μ_2, …, μ_k, …, μ_K]′ denotes the linear attenuation coefficient image, K the size of the linear attenuation coefficient image space, and k its variable index, representing a point source at a spatial position. A = [A_ijt] is the system matrix, expressing in mathematical form the probability that a point source at spatial position j is detected on LOR i in TOF bin t in the PET system, reflecting the physical characteristics of the system; l = [l_ik] is the intersection-length matrix, giving the track length of LOR i as it passes through spatial position k. r = [r_1, r_2, …, r_N]′ denotes the mean of the random noise and s = [s_1, s_2, …, s_N]′ the mean of the scatter noise.
Step 2:the current clinically common scatter estimation method SSS (equation (2)) uses PET reconstructed images and attenuation coefficient images to calculate the scatter distribution based on analytical methods. As can be known from the formula (2), the SSS needs to perform integration operation on the PET reconstructed image and the attenuation coefficient image, the robustness is high, and therefore the detail requirements on the PET reconstructed image and the attenuation coefficient image are reduced.
The SSS formula is as follows:
Figure BDA0003981854760000131
Figure BDA0003981854760000132
Figure BDA0003981854760000133
I A indicating that the positron is arbitrary on S1One point annihilation emits a pair of gamma photons, one of which moves along path S1 without being scattered and is detected by detector A at a photon energy of 511keV and a detection efficiency of ε AS The linear attenuation coefficient image is mu, another photon moves along a path S2 after scattering at the point S and is detected by a detector B, the photon energy is less than 511keV, and the detection efficiency is epsilon' BS The linear attenuation coefficient image is μ'.
I_B indicates that an electron–positron pair annihilates at an arbitrary point on S2, emitting a pair of gamma photons. One photon travels along path S2 without being scattered and is detected by detector B with photon energy 511 keV and detection efficiency ε_BS; the linear attenuation coefficient image is μ. The other photon scatters at point S, travels along path S1 and is detected by detector A with photon energy below 511 keV and detection efficiency ε'_AS; the linear attenuation coefficient image is μ'.
Vs denotes the total scattering volume, σ_AS denotes the geometric cross-section of detector A along the γ ray, and σ_BS denotes the geometric cross-section of detector B along the γ ray.
σ_c denotes the Compton scattering cross-section, and dσ_c/dΩ denotes the differential Compton scattering cross-section, which can be obtained from the Klein–Nishina formula; Ω denotes the scattering solid angle.
R_AS and R_BS denote the distances from the scattering point to detector A and detector B, respectively; S denotes the scattering point position.
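The Klein–Nishina relation invoked above is standard physics; as a reference point, here is a minimal sketch of the differential Compton cross-section dσ_c/dΩ and the scattered-photon energy for a 511 keV annihilation photon. Function names and the choice of cm²/sr units are our own, not the patent's:

```python
import numpy as np

# Classical electron radius in cm (CODATA value, rounded).
R_E_CM = 2.8179403262e-13

def klein_nishina(theta, e0_kev=511.0):
    """Differential Compton cross-section dsigma_c/dOmega (cm^2/sr) at
    scattering angle theta for an incident photon of energy e0_kev."""
    mec2 = 511.0                                   # electron rest energy, keV
    k = e0_kev / mec2                              # reduced incident energy
    p = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))    # energy ratio E'/E0
    return 0.5 * R_E_CM**2 * p**2 * (p + 1.0 / p - np.sin(theta)**2)

def scattered_energy(theta, e0_kev=511.0):
    """Energy (keV) of the scattered photon; below 511 keV for theta > 0,
    matching the text's remark on the reduced-energy detected photon."""
    mec2 = 511.0
    return e0_kev / (1.0 + (e0_kev / mec2) * (1.0 - np.cos(theta)))
```

For 511 keV photons the backscattered energy is 511/3 ≈ 170 keV, which is why scatter windows in PET extend well below the photopeak.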
Step 3: Estimating the scatter distribution with SSS equation (2) requires known radioactivity and attenuation coefficient distribution images. Within the axial field of view, both can be obtained directly from the PET detection data by alternating iterative solution.
Keeping the linear attenuation coefficient distribution μ constant, the first PET image x1 within the axial field of view can be reconstructed by maximizing the log-likelihood function, for example with the standard MLEM algorithm:
$$x1_j^{(n+1)} = \frac{x1_j^{(n)}}{\displaystyle\sum_{i,t} A_{ijt}\, e^{-\sum_k l_{ik}\mu_k}} \sum_{i,t} \frac{A_{ijt}\, e^{-\sum_k l_{ik}\mu_k}\; y_{it}}{\displaystyle\sum_{j'} A_{ij't}\, e^{-\sum_k l_{ik}\mu_k}\, x1^{(n)}_{j'} + r_i + s_i} \qquad (3)$$
n denotes the iteration number. The initial value of x1 in equation (3) may be a constant, an image obtained by another reconstruction method, or an uncorrected PET image; in this embodiment it is set to a constant distribution over the imaging field of view, with the constant chosen as 1.
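As a concrete illustration, a minimal NumPy sketch of one MLEM update of equation (3), with the TOF index folded into the LOR index and hypothetical variable names — a generic MLEM step consistent with the text, not the patent's exact implementation:

```python
import numpy as np

def mlem_update(x, A, atten, y, r, s):
    """One MLEM iteration of equation (3): y is the detection-data sinogram,
    A the geometric system matrix (N x M), atten the per-LOR attenuation
    factors exp(-sum_k l_ik * mu_k), and r/s the mean random and scatter
    counts. The TOF index t is folded into the LOR index i for brevity."""
    Aw = A * atten[:, None]                      # attenuated system matrix
    sens = Aw.sum(axis=0)                        # per-voxel sensitivity
    proj = Aw @ x + r + s                        # expected counts per LOR
    ratio = y / np.maximum(proj, 1e-12)          # measured / expected
    return x / np.maximum(sens, 1e-12) * (Aw.T @ ratio)
```

Starting from a constant image of ones (as in this embodiment) and repeating `mlem_update` yields x1; the scatter mean s estimated later in the text enters through the same denominator term.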
And 4, step 4:calculating the linear attenuation coefficient distribution mu 1 in the axial view range by using a formula (4), keeping the first PET image x1 in the axial view as a constant, and directly calculating by using PET data to obtain the first linear attenuation coefficient image mu 1 in the axial view according to the log-likelihood function of the unknown linear attenuation coefficient distribution mu:
[Equation (4): iterative maximum-likelihood update of the linear attenuation coefficient image μ1 with x1 held fixed.]
n denotes the iteration number. The initial value of μ1 in equation (4) may be a constant or a linear attenuation coefficient distribution obtained from another modality; in this embodiment it is set to 0 over the imaging field of view.
After step 4 yields the radioactivity and linear attenuation coefficient distributions within the axial field of view, the distributions outside the field of view are no longer completely unknown: in a living body both distributions are continuous, and the activity and tissue distributions across the human body are highly similar. The in-field distributions can therefore be extrapolated appropriately, giving an accurate estimate of the radioactivity and linear attenuation coefficient distributions just outside the limited field of view.
On the one hand, the attenuation coefficient image distribution enters only through integral operations, so the requirement on image detail is modest, which makes extrapolation by deep learning feasible. On the other hand, because of the scattering angle, only scatter within a limited distance outside the field of view affects scatter correction inside the field of view, which further reduces the estimation difficulty. A patient's height and weight influence the distribution of radioactivity and attenuation coefficients, and the injected activity and the injection interval influence the in-vivo radioactivity distribution; this information is therefore used as prior knowledge and fed into the deep network together with the images. Actually acquired or simulated PET images and attenuation coefficient images with an extended axial field of view serve as outputs; the deep learning network model is trained and its parameters optimized, yielding the axially extended second PET image and second attenuation coefficient image.
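One plausible way to combine the image inputs with the scalar priors (height, weight, activity, injection interval) is to broadcast each scalar into a constant channel alongside the two image channels. The patent does not specify the encoding, so the layout below is an assumption for illustration:

```python
import numpy as np

def build_network_input(x1, mu1, height, weight, activity, t_inject):
    """Stack the uncorrected PET image x1, the attenuation image mu1 and
    the four scalar priors into one multi-channel volume, broadcasting each
    scalar to a constant channel -- a common way to feed non-image priors
    to a CNN. Channel order and (absent) normalisation are illustrative."""
    scalars = np.array([height, weight, activity, t_inject], dtype=np.float64)
    const_maps = np.broadcast_to(
        scalars[:, None, None, None], (4,) + x1.shape
    ).copy()                                   # broadcast_to returns a view
    return np.concatenate([x1[None], mu1[None], const_maps], axis=0)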
And 5:the deep learning network G implements a mapping of the first PET image x1, the first attenuation coefficient image μ 1 to the second PET image x2, the second attenuation coefficient image μ 2,
$$(x2,\ \mu 2) = G(x1,\ \mu 1,\ h,\ w,\ a,\ t_a;\ \theta) \qquad (5)$$
That is, the first PET image x1, the first attenuation coefficient image μ1, and the patient's height h, weight w, injected radioactivity a, and interval t_a since tracer injection are taken as inputs; the second PET image x2 and the second attenuation coefficient image μ2 are the outputs.
During training, the output of G is compared with the true linear attenuation image μ_True obtained by PET/CT or PET/MR scanning and the corresponding reconstructed image x_True, and the network parameters θ are optimized to minimize the loss function L. The trained network can then map the PET image and attenuation coefficient image without scatter correction inside the FOV to the PET image and attenuation coefficient image of the extended axial FOV, i.e.:
$$\hat{\theta} = \arg\min_{\theta}\ L\big(G(x1,\ \mu 1,\ h,\ w,\ a,\ t_a;\ \theta),\ (x_{\mathrm{True}},\ \mu_{\mathrm{True}})\big)$$
the training data set may be from simulation or from actual acquisition. The training data set needs to be preprocessed, and the attenuation coefficient image outside the field of view and the unattenuated PET image are completely matched through screening, so that truncation or motion artifacts do not exist.
Without loss of generality, the training network G may be a CNN network, a Unet network, a GAN network, or another network; all of these are intended to fall within the scope of patent protection.
Step 6:in order to ensure the accuracy of scattering correction to the maximum extent, a PET image and an attenuation coefficient image obtained by using a formula 3 and a formula 4 are utilized for an area in an axial FOV; for the outside FOV area, a PET image and an attenuation coefficient image are obtained by deep learning, and thereby a third PET image and a third attenuation coefficient image of an extended axial field of view are obtained as in equation (6):
$$x3_j = \begin{cases} x1_j, & j \in (0, D] \\ x2_j, & j \in (D, D+F] \end{cases} \qquad \mu 3_k = \begin{cases} \mu 1_k, & k \in (0, D] \\ \mu 2_k, & k \in (D, D+F] \end{cases} \qquad (6)$$
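Equation (6) amounts to a simple axial concatenation when the second images cover only the extension range (D, D+F]; a sketch under that assumption, with hypothetical names:

```python
import numpy as np

def extend_axial_fov(x1, mu1, x2, mu2, d_slices):
    """Equation (6): inside the axial FOV (the first d_slices slices) keep
    the reconstructed images x1/mu1; beyond it append the network-
    extrapolated images x2/mu2 that cover the extension range (D, D+F]."""
    x3 = np.concatenate([x1[:d_slices], x2], axis=0)
    mu3 = np.concatenate([mu1[:d_slices], mu2], axis=0)
    return x3, mu3
```

The combined volumes then feed the modified SSS calculation of step 7.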
and 7:through steps 4-6, SSS equation (2) can be transformed as:
$$S^{AB} = \int_{V_S} dV_S \left[ \frac{\sigma_{AS}\,\sigma_{BS}}{4\pi R_{AS}^{2} R_{BS}^{2}} \right] \frac{\mu 3}{\sigma_c}\,\frac{d\sigma_c}{d\Omega}\,\left( I^{A} + I^{B} \right) \qquad (7)$$

$$I^{A} = \varepsilon_{AS}\,\varepsilon'_{BS}\; e^{-\left( \int_{S}^{A} \mu 3\, ds + \int_{S}^{B} \mu 3'\, ds \right)} \int_{S}^{A} x3(s)\, ds$$

$$I^{B} = \varepsilon_{BS}\,\varepsilon'_{AS}\; e^{-\left( \int_{S}^{B} \mu 3\, ds + \int_{S}^{A} \mu 3'\, ds \right)} \int_{S}^{B} x3(s)\, ds$$
The third PET image and the third attenuation coefficient image in equation (7) account for the distributions both inside and outside the field of view, so the scatter estimate $\hat{s}$ is more accurate. R_AS and R_BS denote the distances from the scattering point to detector A and detector B, respectively, and S denotes the scattering point position.
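The structure of the I_A-type terms in equation (7) — an emission line integral weighted by attenuation line integrals along both photon paths — can be sketched for a single scatter point as follows. The geometric and Compton factors outside the brackets are omitted, images are passed as callables to keep the sketch short, and all names are illustrative, not the patent's implementation:

```python
import numpy as np

def line_integral(image, p0, p1, n_samples=64):
    """Approximate the line integral of `image` (a callable f(x, y, z)
    here) from point p0 to point p1 by uniform sampling."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p0[None] + ts[:, None] * (p1 - p0)[None]
    length = np.linalg.norm(p1 - p0)
    return np.mean([image(*p) for p in pts]) * length

def single_scatter_term(x_img, mu_img, mu_p_img, s, a, b,
                        eps_as=1.0, eps_bs_p=1.0):
    """I_A-type term of equation (7) for one scatter point s: the emission
    integral along S->A, attenuated by mu along S->A and by mu' along
    S->B, times the detection efficiencies. A toy sketch of one summand,
    not the full SSS integration over the scattering volume."""
    att = np.exp(-(line_integral(mu_img, s, a) + line_integral(mu_p_img, s, b)))
    return eps_as * eps_bs_p * att * line_integral(x_img, s, a)
```

The full estimate sums such terms (and the mirrored I_B terms) over a grid of scatter points with the 1/(4πR²) geometry and Klein–Nishina weights applied.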
It should be noted that, for an actual multi-bed clinical scan, the scan data can be fully exploited by iterating steps 3–4. For the m-th bed: first generate the stitched first PET image and first attenuation coefficient image corresponding to the m beds acquired so far; then generate the field-of-view-extended second PET image and second attenuation coefficient image by the method above; then generate the third PET image and third attenuation coefficient image used to calculate the scatter distribution; and finally estimate the accurate scatter distribution, including the scatter outside the field of view, iterating as beds are added. As the number of beds grows, the stitched image becomes more complete, the information within the current bed's axial field of view becomes richer, and the estimate of the scatter distribution in the axial field of view becomes more accurate.
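The bed-stitching rule is not spelled out in the text; a minimal sketch that concatenates per-bed axial volumes and averages the shared slices of adjacent beds (the blending rule is our assumption):

```python
import numpy as np

def stitch_beds(bed_images, overlap):
    """Splice per-bed axial volumes into one volume, averaging the
    `overlap` shared slices between neighbouring beds -- a simple
    stand-in for the multi-bed stitching described in the text."""
    out = bed_images[0].astype(np.float64).copy()
    for img in bed_images[1:]:
        img = img.astype(np.float64)
        out[-overlap:] = 0.5 * (out[-overlap:] + img[:overlap])  # blend shared slices
        out = np.concatenate([out, img[overlap:]], axis=0)
    return out
```

After each new bed, the stitched volume would be re-fed through steps 3–7 to refresh the scatter estimate for the current bed.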
In other words, during the training of G, the training samples may further include multi-bed training samples, each comprising: the reconstructed PET image of multi-bed detection data, the attenuation coefficient image of that PET image, the height, weight and radioactivity of the user to whom the PET image belongs, and the interval since tracer injection, verified against the true linear attenuation image obtained by PET/CT or PET/MR scanning;
thus, in use, the deep learning network outputs scatter correction information $\hat{s}_m$ for each bed m.
For multi-bed data, the scanned bed data are stitched and fed into the deep learning network as beds accumulate; the network input therefore becomes more complete and informative, and the trained G becomes more accurate, as the number of beds increases.
The method jointly applies an existing attenuation correction algorithm and a deep learning method to estimate the object's activity and linear attenuation coefficient distributions outside the axial field of view, and then estimates the scatter events inside and outside the axial field of view in a unified way. This enables accurate scatter correction for the scanning bed and improves image quality, while freeing PET scatter correction from dependence on images of other modalities and greatly reducing the patient's radiation dose.
Step 8: For the detection data in the specified axial field of view (0, D], the scatter correction information $\hat{s}$ is obtained with the scatter correction method of steps 1 to 7; PET reconstruction is then carried out according to $\hat{s}$ and a pre-established log-likelihood function, yielding an accurately scatter-corrected PET image.
Experimental verification shows that the PET image of this embodiment is more accurate and free of artifacts. FIG. 2(a) is a PET reconstructed image whose scatter distribution was calculated with a conventional algorithm; because the scatter distribution outside the axial field of view is not fully considered, significant uptake artifacts appear around the bladder. FIG. 2(b) was obtained by calculating the scatter distribution with the method of this embodiment: under the same reconstruction method and parameters, the scatter distribution is more accurate, there are no obvious artifacts, and image quality is better.
The method overcomes the inability of conventional scatter calculation to account for the scatter distribution outside the axial field of view, and removes the dependence on images of other modalities: an accurate scatter distribution is obtained even when no other-modality image exists or such images contain obvious artifacts. It yields higher-quality images while greatly reducing the patient's radiation dose, and has broad applicability. Compared with conventional scatter correction algorithms, it achieves higher correction accuracy and helps improve image quality.
In addition, an embodiment of the present invention further provides a PET system, which includes: a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions stored in the memory, specifically, executes the above-described PET image reconstruction method or PET image scatter correction method. It is understood that the PET system may be an operator station communicatively connected to the detectors.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third and the like are for convenience only and do not denote any order. These words are to be understood as part of the name of the component.
Furthermore, it should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (9)

1. A method of scatter correction of a PET image, comprising:
s10, aiming at the detection data in the specified axial visual field range (0, D), acquiring a first PET image of the detection data and a first attenuation coefficient image corresponding to the first PET image, wherein the first PET image is an image which is not subjected to scatter correction and is reconstructed by the detection data in the axial visual field range (0, D);
s20, based on a pre-trained deep learning network, inputting basic information of a patient, a first PET image and a first attenuation coefficient image into the deep learning network, and acquiring a second PET image output by the deep learning network and a second attenuation coefficient image corresponding to the second PET image; the second PET image is a simulation output image outside the axial visual field range (D, D + F), wherein D represents the size of the axial visual field range, and F represents the size of the axial expansion range;
s30, acquiring scattering correction information of the detection data based on the first PET image, the second PET image, the first attenuation coefficient image, the second attenuation coefficient image and pre-established calculation information of the scattering correction SSS, wherein the scattering correction information is used for reconstructing the PET image;
the pre-trained deep learning network is obtained by training the constructed deep learning network based on the PET image in the axial visual field range and the matched associated image and associated information, and the network outputs the PET image outside the axial visual field range and the attenuation coefficient image corresponding to the PET image.
2. The method of claim 1, wherein prior to S10, the method further comprises:
s00, acquiring a training sample for training the deep learning network based on the reconstructed PET image and the matched associated image and associated information;
wherein each training sample comprises: a reconstructed first PET image or simulated first PET image, the attenuation coefficient image corresponding to the first PET image, the height, weight and radioactivity of the user to whom the PET image belongs, and the interval since tracer injection, verified against the true linear attenuation image obtained by PET/CT or PET/MR scanning;
s01, training the deep learning network based on the training samples to obtain a trained deep learning network;
and the network parameters θ of the trained deep learning network minimize the loss function L, so that the PET image and attenuation coefficient image without scatter correction within the axial field of view are mapped to the PET image and attenuation coefficient image of the extended axial field of view.
3. The method of claim 2, wherein S01 comprises: the network parameters θ of the deep learning network G and the output of G are expressed as follows:
$$(x2,\ \mu 2) = G(x1,\ \mu 1,\ h,\ w,\ a,\ t_a;\ \theta)$$

$$\hat{\theta} = \arg\min_{\theta}\ L\big(G(x1,\ \mu 1,\ h,\ w,\ a,\ t_a;\ \theta),\ (x_{\mathrm{True}},\ \mu_{\mathrm{True}})\big)$$
wherein x1 is the first PET image, μ1 the first attenuation coefficient image, x2 the second PET image, μ2 the second attenuation coefficient image, h the user's height, w the user's weight, a the radioactivity, and t_a the interval since tracer injection;
the second PET image x2 and the second attenuation coefficient image μ2 are the output of G;
during the verification process, the output of G is compared with the true linear attenuation image μ_True obtained by PET/CT or PET/MR scanning and with the image x_True reconstructed based on μ_True.
4. The method of claim 1, wherein the deep learning network is one of: a CNN network, a Unet network, a GAN network.
5. The method of claim 3, wherein the S10 comprises:
keeping the linear attenuation coefficient distribution μ constant, obtaining the first PET image x1 within the axial field of view by maximizing the log-likelihood function with the MLEM algorithm;
keeping the first PET image x1 within the axial field of view constant, calculating the first attenuation coefficient image μ1 within the axial field of view from the PET data according to the log-likelihood function of the unknown linear attenuation coefficient distribution μ.
6. The method according to claim 5, wherein the S30 comprises:
acquiring a third PET image x3_j and a third attenuation coefficient image μ3_k of the extended axial field of view:
$$x3_j = \begin{cases} x1_j, & j \in (0, D] \\ x2_j, & j \in (D, D+F] \end{cases} \qquad \mu 3_k = \begin{cases} \mu 1_k, & k \in (0, D] \\ \mu 2_k, & k \in (D, D+F] \end{cases}$$
based on the third PET image x3_j and the third attenuation coefficient image μ3_k, correcting the pre-established calculation information of the scatter correction SSS to obtain corrected scatter correction information $\hat{s}$, and taking $\hat{s}$ as the scatter correction information of the detection data;

$$S^{AB} = \int_{V_S} dV_S \left[ \frac{\sigma_{AS}\,\sigma_{BS}}{4\pi R_{AS}^{2} R_{BS}^{2}} \right] \frac{\mu 3}{\sigma_c}\,\frac{d\sigma_c}{d\Omega}\,\left( I^{A} + I^{B} \right)$$

$$I^{A} = \varepsilon_{AS}\,\varepsilon'_{BS}\; e^{-\left( \int_{S}^{A} \mu 3\, ds + \int_{S}^{B} \mu 3'\, ds \right)} \int_{S}^{A} x3(s)\, ds$$

$$I^{B} = \varepsilon_{BS}\,\varepsilon'_{AS}\; e^{-\left( \int_{S}^{B} \mu 3\, ds + \int_{S}^{A} \mu 3'\, ds \right)} \int_{S}^{B} x3(s)\, ds$$
wherein I_A indicates that a positron annihilates at an arbitrary point on S1, emitting a pair of gamma photons: one photon, unscattered, travels along path S1 and is detected by detector A with photon energy 511 keV and detection efficiency ε_AS, the linear attenuation coefficient image being μ3; the other photon scatters at point S, travels along path S2 and is detected by detector B with photon energy below 511 keV and detection efficiency ε'_BS, the linear attenuation coefficient image being μ3'. I_B indicates that an electron–positron pair annihilates at an arbitrary point on S2: one photon, unscattered, travels along path S2 and is detected by detector B with photon energy 511 keV and detection efficiency ε_BS, the linear attenuation coefficient image being μ3; the other photon scatters at point S, travels along path S1 and is detected by detector A with photon energy below 511 keV and detection efficiency ε'_AS, the linear attenuation coefficient image being μ3'. R_AS and R_BS denote the distances from the scattering point to detector A and detector B, respectively, and S denotes the scattering point position;
where ds denotes the integral over the distance s;
Vs denotes the total scattering volume, σ_AS denotes the geometric cross-section of detector A along the γ ray, and σ_BS denotes the geometric cross-section of detector B along the γ ray.
σ_c denotes the Compton scattering cross-section, and dσ_c/dΩ denotes the differential Compton scattering cross-section, which can be obtained from the Klein–Nishina formula; Ω denotes the scattering solid angle.
7. The method of claim 2, wherein the training samples further comprise:
a multi-bed training sample, the multi-bed training sample comprising: the reconstructed PET image of multi-bed detection data, the attenuation coefficient image of the PET image, the height, weight and radioactivity of the user to whom the PET image belongs, and the interval since tracer injection, verified against the true linear attenuation image obtained by PET/CT or PET/MR scanning;
in use, the deep learning network outputs scatter correction information $\hat{s}_m$ for each bed m.
8. A method of reconstructing a PET image, comprising:
p01, based on the specified axial field of view (0]Acquiring scatter correction information of the detection data by the scatter correction method of the PET image according to any one of claims 1 to 6
Figure FDA0003981854750000043
P02 according to
Figure FDA0003981854750000044
And carrying out PET reconstruction by using a pre-established log-likelihood function to obtain a PET image with accurate scattering correction.
9. A PET system, comprising: a memory and a processor; the memory has stored therein computer program instructions, and the processor executes the computer program instructions stored in the memory, in particular to perform the method of any of the preceding claims 1 to 8.
CN202211552380.0A 2022-12-05 2022-12-05 PET image scattering correction method and PET system Pending CN115830167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211552380.0A CN115830167A (en) 2022-12-05 2022-12-05 PET image scattering correction method and PET system


Publications (1)

Publication Number Publication Date
CN115830167A true CN115830167A (en) 2023-03-21

Family

ID=85544115


Country Status (1)

Country Link
CN (1) CN115830167A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination