CN114387364A - Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction - Google Patents


Info

Publication number: CN114387364A
Application number: CN202111630372.9A
Authority: CN (China)
Legal status: Pending
Language: Chinese (zh)
Inventor: Li Nan (李楠)
Assignee (current and original): Jiangsu Sinogram Medical Technology Co ltd
Application filed by Jiangsu Sinogram Medical Technology Co ltd
Priority to CN202111630372.9A
Publication of CN114387364A
Prior art keywords: PET, image, attenuation coefficient, linear attenuation, deep learning

Images
Classifications

    • G06T 11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5264 Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise due to motion
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10104 Positron emission tomography [PET]


Abstract

The invention relates to a linear attenuation coefficient acquisition method and a reconstruction method for PET image reconstruction, wherein the coefficient acquisition method comprises the following steps: S10, constructing a log-likelihood function L of the PET image without attenuation correction based on the acquired PET detection data; S20, adjusting the log-likelihood function L and carrying out iterative optimization according to pre-defined filtering information and a prior penalty function, to obtain a PET image without attenuation correction; S30, mapping the PET image without attenuation correction into a linear attenuation coefficient image μ0 corresponding to the detection object according to a pre-constructed mapping relation; S40, based on the position parameter variation of the corresponding scanning bed during acquisition of the PET detection data and the linear attenuation coefficient image μ0 corresponding to the detection target, obtaining the linear attenuation coefficient μ1 for PET image reconstruction. The linear attenuation coefficient obtained by the method of the invention improves the operation speed and the stability of results in PET reconstruction, while avoiding motion and truncation artifacts in the PET image.

Description

Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction
Technical Field
The invention relates to the field of medical imaging, in particular to a linear attenuation coefficient acquisition method used for image reconstruction in a positron emission computed tomography system, a PET image reconstruction method based on the linear attenuation coefficient and a PET detection system.
Background
Positron Emission Tomography (PET) is a high-end nuclear medicine image diagnostic technology. In practical operation, a radionuclide (such as ¹⁸F or ¹¹C) is used to label a metabolic substance, the labeled compound is injected into the human body, and the PET system then performs functional metabolic imaging of the patient to reflect the state of metabolic activity, thereby achieving the purpose of diagnosis. During PET acquisition, photons are attenuated in the body before reaching the detectors, so coincidence events originating near the surface of the object are detected more efficiently than those from its interior. If this attenuation factor is not corrected, the reconstructed image exhibits attenuation artifacts: the edge of the object appears too bright and the internal tissues appear too dark.
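As a minimal numerical illustration of the attenuation effect described above (not part of the patent), the following sketch computes the survival probability of a coincidence pair along a line of response; the value 0.096 cm⁻¹ is the approximate linear attenuation coefficient of water at 511 keV:

```python
import numpy as np

# For a coincidence pair emitted at depth d inside a uniform object traversed
# by a line of response of total chord length L, the combined survival
# probability depends only on L, not on d:
#   exp(-mu*d) * exp(-mu*(L-d)) = exp(-mu*L)
mu_water_511kev = 0.096  # approx. linear attenuation coefficient of water at 511 keV, cm^-1

def coincidence_attenuation(mu, path_length_cm):
    """Probability that both annihilation photons survive along one LOR."""
    return float(np.exp(-mu * path_length_cm))

# A 30 cm body cross-section attenuates coincidences far more than a 5 cm edge
# path, which is why uncorrected images look too bright at the edge and too
# dark in the interior.
deep = coincidence_attenuation(mu_water_511kev, 30.0)
edge = coincidence_attenuation(mu_water_511kev, 5.0)
```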
To eliminate attenuation artifacts and to accurately quantify the radiopharmaceutical distribution in a patient, other modalities (e.g., CT, MRI, etc.) are often coupled to obtain an image of the patient's anatomy. On one hand, the nuclide distribution condition can be accurately positioned, and the accuracy of focus positioning is improved; on the other hand, the tissue density distribution of the patient can be correspondingly obtained, the attenuation property (linear attenuation coefficient) of the tissue to the ray is calculated, then the attenuation property is applied to image reconstruction, attenuation correction is carried out on PET data, and finally a functional image of the actual radioactivity distribution of the tissue is obtained. The two images are finally fused in the same machine, and the advantages of functional imaging and anatomical imaging are compatible, so that the aims of early focus discovery and disease diagnosis are fulfilled, and the diagnosis and treatment guidance of tumors, heart diseases and brain diseases are more advantageous.
However, in multi-modality acquisition applications, attenuation information matching the PET data is sometimes not accurately obtained, so that artifacts are generated on the PET image:
first, in PET multi-modality imaging, there may be relative deviations in the image positions of different modalities. Taking a PET/CT system as an example, a CT scan can usually be completed in a very short time, and the obtained image is a snapshot at almost a certain moment. However, PET scanning is slow and typically takes several minutes per position, making it impossible to complete the data acquisition while the patient is holding his breath. Under the influence of cardiac pulsation and respiratory motion, there is a degree of mismatch in the position and phase of the PET and CT images for the same lesion. On one hand, the PET acquisition superposes data acquired by moving focuses at different positions, and the corresponding imaging reflects an average effect of the focus positions, which inevitably brings about the reduction of resolution, leads to the fuzzy focus imaging and the accuracy reduction of quantitative analysis SUV (standardized uptake value) values, and has a difference in form with CT imaging. On the other hand, the CT image and the PET image are registered and fused to generate a deviation (particularly near the diaphragm with the largest motion amplitude), and the instantaneous CT image is used for attenuation correction of the average PET image, so that a position deviation inevitably occurs, local artifacts are generated on the PET image, and the accurate diagnosis of the tumor in the chest and abdomen and the formulation of a treatment plan are possibly influenced. In addition, in long-time PET scanning, the patient may move (for example, the arm and head may move during the long-time scanning), which may also cause the PET and CT images to be mismatched and generate attenuation artifacts.
Secondly, ¹⁸F-FDG, commonly used in clinical PET acquisition, is a non-specific imaging agent that reflects glucose metabolism. Compared with normal cells, malignant tumor cells grow actively and proliferate abnormally, so their energy demand is large and their DNA synthesis and amino acid utilization are greatly increased. Thus, by detecting glucose accumulation, PET can assess the activity of tumors at a metabolic level. However, benign lesions such as acute inflammation and granuloma also show obvious uptake due to increased metabolism, resulting in false positives. To better distinguish benign from malignant lesions, delayed imaging is generally used clinically: through multi-time-point acquisition and multi-time-point PET imaging, tumors take up FDG to a higher degree than inflammation as the time interval grows, so benign and malignant lesions can be better distinguished. In practical application, the whole scanning time is divided into several segments, a static PET image is reconstructed for each, and finally static PET images at multiple time points are obtained and compared. Generally, delayed-imaging multi-time-point PET acquisition must be matched with multi-time-point CT acquisition to avoid attenuation correction errors and attenuation artifacts caused by the patient leaving the bed or moving midway through the scan, which unavoidably increases the patient's X-ray radiation dose.
Thirdly, obvious artifacts exist in the attenuation image during the scanning process, which can cause obvious errors in the attenuation correction of the PET image. For example, CT images of patients with metal substances in their bodies (such as cardiac pacemakers or metal braces) have obvious highlight metal artifacts, which make the surrounding tissues difficult to accurately distinguish, thereby causing obvious artifacts in the attenuation images and seriously affecting the attenuation correction of the PET images.
Furthermore, the scan range of PET is typically larger than the scan range of other modalities (such as CT or MRI). Other modality imaging is likely to fail to provide a sufficiently large imaging range when scanning a relatively heavy patient, which can result in attenuation images being truncated. The application of such incomplete attenuation information in PET reconstruction also produces attenuation artifacts.
Finally, when PET is combined with certain other modalities, satisfactory attenuation-corrected images sometimes cannot be obtained, as in PET/MR imaging. Compared to CT, MR imaging mainly relies on magnetic spin, not on tissue density distribution, and therefore does not directly provide accurate information about tissue attenuation properties. At present, algorithms for attenuation correction by means of MR imaging are complex in application and low in precision, and attenuation artifacts are easily generated. In addition, MR cannot image the couch and the MR coils, which also affects subsequent attenuation correction.
Meanwhile, the application conditions of other modalities restrict the application of PET imaging, for example, patients with dentures or cardiac pacemakers cannot be examined by MR, and the application of PET/MR is influenced. In addition, extremely high radioactivity protection requirements are required for CT imaging, and strict nuclear magnetic resonance shielding is required for MR imaging, so that the scanning protection requirements for multi-modal imaging are high, and the method is not easy to popularize.
Disclosure of Invention
Technical problem to be solved
In view of the above drawbacks and deficiencies of the prior art, the present invention provides a linear attenuation coefficient acquisition method and a reconstruction method for PET image reconstruction.
(II) technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
in a first aspect, an embodiment of the present invention provides a linear attenuation coefficient acquisition method for PET image reconstruction, including:
s10, constructing a log-likelihood function L of the PET image without attenuation correction based on the acquired PET detection data;
s20, adjusting and carrying out iterative optimization on the log-likelihood function L according to the pre-defined filtering information and the prior penalty function to obtain a PET image without attenuation correction;
s30, mapping the PET image without attenuation correction into a linear attenuation coefficient image mu corresponding to the detection object according to the pre-constructed mapping relation0
S40, based on the position parameter variation of the corresponding scanning bed when the PET detection data is acquired and the linear attenuation coefficient image mu corresponding to the detection target0Obtaining linear attenuation coefficient mu for PET image reconstruction1
Optionally, the S10 includes:
according to the reconstruction of the following formula I, estimating a log-likelihood function L of the PET image without attenuation correction;
The formula I is as follows:

$$L=\sum_{i=1}^{N}\sum_{t=1}^{T}\left(y_{it}\ln\bar{y}_{it}-\bar{y}_{it}\right),\qquad \bar{y}_{it}=\sum_{j=1}^{M}a_{ijt}x_{j}+r_{it}$$

wherein r = [r_{1t}, r_{2t}, …, r_{NT}]^T denotes the mean of the random noise, y = [y_{1t}, y_{2t}, …, y_{NT}]^T represents the acquired PET detection data, M is the size of the PET image space, A = [a_{ijt}] is the system matrix, x = [x_1, x_2, …, x_j, …, x_M]^T represents the unknown PET image, and t is the time of flight.
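A small numerical sketch of formula one under the definitions above (A, x, y, r follow the patent's names; the tiny sizes and simulated data are illustrative assumptions only):

```python
import numpy as np

# Poisson log-likelihood of TOF PET data y (LORs x TOF bins) given image x,
# system matrix A and mean randoms r:  L(x) = sum_{i,t} [y_it*log(ybar_it) - ybar_it]
rng = np.random.default_rng(0)
N, T, M = 6, 3, 4                                   # LORs, TOF bins, voxels (toy sizes)
A = rng.random((N, T, M))                           # system matrix a_ijt
x_true = rng.random(M)                              # radioactivity image
r = 0.1 * np.ones((N, T))                           # mean of randoms/scatter
y = rng.poisson(np.einsum('itj,j->it', A, x_true) + r)  # simulated detection data

def log_likelihood(x, A, y, r):
    """L(x) with expected data ybar = A x + r (constant terms dropped)."""
    ybar = np.einsum('itj,j->it', A, x) + r
    return float(np.sum(y * np.log(ybar) - ybar))

L_true = log_likelihood(x_true, A, y, r)            # likelihood at the generating image
L_zero = log_likelihood(np.full(M, 1e-3), A, y, r)  # likelihood at a near-empty image
```

As expected, the image that generated the data scores a higher likelihood than a near-empty image.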
Optionally, the S20 includes:
adjusting and carrying out iterative optimization according to the following formula II to obtain a PET image without attenuation correction;
Formula two:

$$\hat{x}=F\left(\arg\max_{x\geq 0}\left[L(x)-\beta R(x)\right]\right)$$

wherein μ = [μ_1, μ_2, …, μ_K]^T represents the unknown linear attenuation coefficient distribution, β is a weighting factor, F represents the post-filtering function, and R(x) is the a-priori scalar penalty function.
Optionally, the S30 includes:
the pre-constructed mapping relation is a pre-trained deep learning network G, and the trained deep learning network G is used to realize the mapping from the PET image without attenuation correction to the linear attenuation coefficient image μ0;
specifically, a first training data set for training the deep learning network G is obtained, the first training data set comprising: simulating simulated training data and/or actually acquired training data;
preprocessing the first training data set, inputting it into the deep learning network G, obtaining the output result, and optimizing the network parameters θ of the deep learning network G to minimize the loss function L′, thereby obtaining the trained deep learning network G; the network parameters of the trained deep learning network G are

$$\hat{\theta}=\arg\min_{\theta}L'\left(G_{\theta}(x_{\mathrm{NAC}}),\,\mu_{\mathrm{label}}\right)$$

where x_NAC denotes a PET image without attenuation correction in the training set and μ_label its reference linear attenuation coefficient image.
Obtaining the linear attenuation coefficient image μ0 corresponding to the detection target based on the trained deep learning network G:

$$\mu_{0}=G_{\hat{\theta}}\left(x_{\mathrm{NAC}}\right)$$
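The training of the mapping network G can be sketched with a tiny numpy stand-in; the patent leaves the architecture open (CNN, U-Net, GAN), so the one-hidden-layer network, squared-error loss, and synthetic patch data below are purely illustrative assumptions:

```python
import numpy as np

# G_theta: maps a flattened NAC image patch to an attenuation-map patch.
rng = np.random.default_rng(2)
D, H, n = 16, 8, 200                     # patch size, hidden units, training samples
X = rng.random((n, D))                   # NAC image patches (training input)
W_lin = rng.random((D, D)) * 0.1         # synthetic ground-truth linear mapping
Y = X @ W_lin                            # reference mu patches (training labels)

W1 = rng.normal(0, 0.1, (D, H))          # network parameters theta = (W1, W2)
W2 = rng.normal(0, 0.1, (H, D))

def G(X):
    """Forward pass: one tanh hidden layer."""
    return np.tanh(X @ W1) @ W2

lr = 0.05
loss0 = float(np.mean((G(X) - Y) ** 2))  # loss L' before training
for _ in range(300):                     # plain gradient descent on L'
    Hid = np.tanh(X @ W1)
    E = (Hid @ W2 - Y) / n               # scaled output error
    gW2 = Hid.T @ E
    gW1 = X.T @ ((E @ W2.T) * (1 - Hid ** 2))
    W1 -= lr * gW1
    W2 -= lr * gW2
loss1 = float(np.mean((G(X) - Y) ** 2))  # loss L' after training
```

The trained G then plays the role of the mapping μ0 = G_θ̂(x_NAC).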
Optionally, the S30 includes:
the pre-constructed mapping relation is a pre-trained deep learning network G′, and the trained deep learning network G′ is used for mapping the PET image without attenuation correction to a CT image;
specifically, a second training data set for training the deep learning network G' is obtained, the second training data set comprising: simulating simulated training data and/or actually acquired training data;
preprocessing the second training data set, inputting it into the deep learning network G′, obtaining the output result, and optimizing the network parameters θ of the deep learning network G′ to minimize the loss function L′, thereby obtaining the trained deep learning network G′; the network parameters of the trained deep learning network G′ are

$$\hat{\theta}'=\arg\min_{\theta}L'\left(G'_{\theta}(x_{\mathrm{NAC}}),\,I_{\mathrm{CT}}\right)$$

where I_CT denotes the reference CT image of a training pair.
acquiring a CT image corresponding to the detection target based on the trained deep learning network G′;
converting the CT image into the linear attenuation coefficient image μ0 corresponding to the detection target.
Optionally, the deep learning network G and the deep learning network G′ may each be a CNN, a U-Net, a GAN, or another network; the specific structure of the deep learning network is not limited in this embodiment and is selected according to actual needs.
Optionally, the S40 includes:
correcting the linear attenuation coefficient image μ0 according to the following formula three, to obtain the linear attenuation coefficient image μ1 used for PET image reconstruction;
The formula III is as follows: μ1 = μ0 + μ_bed(Δx, Δy, Δz);
wherein μ_bed(Δx, Δy, Δz) is the linear attenuation coefficient image of the scanning bed on which the detection object lies, and Δx, Δy, Δz denote the movement values in the three directions.
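Formula three can be sketched as follows; the bed template, the volume sizes, and the integer-voxel shift via np.roll are illustrative assumptions standing in for the real bed attenuation map and its sub-voxel resampling:

```python
import numpy as np

# Body attenuation map mu0 (as produced by the network): a uniform block of
# water-like attenuation inside an otherwise empty (z, y, x) volume.
mu0 = np.zeros((4, 8, 8))
mu0[:, 2:6, 2:6] = 0.096                 # body region, cm^-1

# Pre-measured attenuation template of the scanning bed: a thin slab.
mu_bed_template = np.zeros((4, 8, 8))
mu_bed_template[:, 7, :] = 0.02          # bed slab, cm^-1

def shift_bed(template, dz, dy, dx):
    """Shift the bed template by whole voxels (stand-in for resampling)."""
    return np.roll(template, shift=(dz, dy, dx), axis=(0, 1, 2))

def combine(mu0, bed_template, displacement):
    """mu1 = mu0 + mu_bed(dx, dy, dz), per formula three."""
    dz, dy, dx = displacement
    return mu0 + shift_bed(bed_template, dz, dy, dx)

mu1 = combine(mu0, mu_bed_template, (0, -1, 0))  # couch recorded one voxel higher
```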
In a second aspect, an embodiment of the present invention further provides a method for reconstructing a PET image, including:
obtaining a linear attenuation coefficient image for PET image reconstruction by using the linear attenuation coefficient obtaining method of any one of the first aspect;
taking the linear attenuation coefficient image value as a known initial value of a linear attenuation coefficient;
based on a target function of a pre-established PET image, linear attenuation coefficient distribution of a known initial value and PET radioactivity distribution x, adopting an alternative solving strategy to obtain a final reconstructed PET image;
wherein, the linear attenuation coefficient distribution mu and the PET radioactivity distribution x are respectively two variables in the objective function of the PET image;
the alternate solution strategy is: and when the first variable is a known value, acquiring an estimated value of the second variable, taking the acquired estimated value of the second variable as the known value, acquiring the estimated value of the first variable, and solving for n times alternately, wherein n is a natural number greater than 1.
Optionally, obtaining a final reconstructed PET image by using an alternating solution strategy based on a pre-established objective function of the PET image, linear attenuation coefficient distribution of a known initial value, and PET radioactivity distribution x, including:
when the linear attenuation coefficient mu is a known value, acquiring an estimated value of the PET radioactivity distribution x according to the following formula five;
$$x_{j}^{(n+1)}=\frac{x_{j}^{(n)}}{\sum_{i,t}a_{ijt}\,e^{-\langle l_{i},\mu\rangle}}\sum_{i,t}a_{ijt}\,e^{-\langle l_{i},\mu\rangle}\frac{y_{it}}{\bar{y}_{it}},\qquad \bar{y}_{it}=e^{-\langle l_{i},\mu\rangle}\sum_{k}a_{ikt}x_{k}^{(n)}+r_{it}$$
when the PET radioactivity distribution x is a known value, acquiring an estimated value of linear attenuation coefficient distribution mu according to the following formula six;
$$\mu_{k}^{(n+1)}=\mu_{k}^{(n)}+\frac{\sum_{i,t}l_{ik}\left(\bar{y}_{it}-y_{it}\right)}{\sum_{i,t}l_{ik}\left(\sum_{k'}l_{ik'}\right)\bar{y}_{it}}$$
wherein x = [x_1, x_2, …, x_M]^T represents the unknown PET image, i.e. the PET radioactivity distribution; μ = [μ_1, μ_2, …, μ_K]^T represents the linear attenuation coefficient distribution; A = [a_{ijt}] is the system matrix; r = [r_{1t}, r_{2t}, …, r_{NT}]^T represents the mean of the random and scatter noise; T denotes the dimension of the time of flight (TOF); l = [l_{ik}] is the linear attenuation coefficient matrix; j indexes a spatial position point source in the PET detection system, and i indexes a line of response (LOR).
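The coupled forward model behind formulas five and six, in which an attenuation factor exp(−⟨l_i, μ⟩) built from the intersection-length matrix l multiplies the unattenuated projection of x, can be sketched as follows (names follow the patent; the sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, M, K = 5, 2, 6, 6
A = rng.random((N, T, M))              # system matrix a_ijt
l = rng.random((N, K))                 # path length of LOR i through voxel k
x = rng.random(M)                      # radioactivity distribution
mu = 0.05 * np.ones(K)                 # linear attenuation coefficient distribution
r = 0.02 * np.ones((N, T))             # mean randoms/scatter

def expected_data(x, mu):
    """ybar_it = exp(-<l_i, mu>) * (A x)_it + r_it."""
    att = np.exp(-(l @ mu))            # survival factor per LOR, independent of t
    return att[:, None] * np.einsum('itj,j->it', A, x) + r

ybar = expected_data(x, mu)            # attenuated expectation used in both updates
ybar_noatt = expected_data(x, np.zeros(K))
```

The alternating updates then match this model against the measured y: the x-update redistributes activity given the attenuation factors, and the μ-update adjusts the factors given the activity.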
In a third aspect, a PET detection system includes: a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions stored in the memory, in particular, executes the method for reconstructing a PET image according to the second aspect.
(III) advantageous effects
In the invention, the attenuation correction information in the PET image reconstruction process is acquired by adopting the PET detection data, so that when the PET multi-mode images are not matched due to respiration or heartbeat and patient movement, the attenuation correction can still be carried out on the PET images, the image quality is improved, and more accurate images are provided for the analysis and application of doctors.
In addition, because the initial value of the linear attenuation coefficient is acquired from the PET detection data and the PET image is then further reconstructed iteratively, accurate attenuation correction can be carried out even for patients whose attenuation images contain artifacts (for example, PET/CT patients with a cardiac pacemaker or metal braces in the body, whose CT images show obvious metal artifacts), thereby eliminating the influence of the metal artifacts. The processing also effectively avoids the problem of attenuation image truncation, which makes it convenient for doctors to scan heavier patients.
It can be understood that the initial value of the attenuation correction iterative algorithm in the PET image reconstruction process is obtained from the PET image which is not subjected to attenuation correction through the deep learning network, and the quantification and tissue distribution are more accurate than before, so that the stability and the iteration speed of the attenuation correction algorithm are greatly improved.
When the linear attenuation coefficient image is obtained, the deep learning network mapping is carried out in the image domain, so the processing speed is high and the additional time is negligible relative to the reconstruction process, which ensures the feasibility of the algorithm. The linear attenuation coefficient image generated by the deep learning network is fine-tuned by the attenuation coefficient iterative algorithm on the basis of the acquired data, which relieves the generalization problem of PET acquisition data, simplifies the task of the deep learning network, and improves its stability. Moreover, the PET acquisition does not depend on other modalities, so the method can be applied to single PET scanning, reduces the scanning environment requirements, and expands the application occasions.
Drawings
Fig. 1 and fig. 2A are schematic flow charts of a linear attenuation coefficient obtaining method for PET image reconstruction according to an embodiment of the present invention;
FIG. 2B is a schematic diagram of a deep learning network training process;
FIG. 3 is a block diagram of a PET image detection system;
fig. 4 is a schematic diagram showing a comparison between a PET image obtained by the method for reconstructing a PET image according to the present invention and a PET image obtained by a conventional reconstruction algorithm.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
In the prior art, in order to effectively correct attenuation artifacts and widen the application field of PET imaging, two attenuation correction methods have been provided, which iteratively extract a linear attenuation coefficient distribution image from the time-of-flight (TOF) information acquired by PET for attenuation correction of the PET reconstruction. This guarantees strict matching between the PET image and the attenuation image, effectively eliminates motion artifacts, improves image quality, and provides more accurate images for physicians' analysis and application. However, in practical applications, since an accurate initial linear attenuation coefficient distribution is difficult to obtain, the estimation algorithm usually requires many iterations: if a uniform linear attenuation image over the full imaging field of view is used as the iteration initial value, the inaccurate guess lengthens the convergence process and increases the computational load; if images of other modalities are used as the prior iteration initial values, multiple iterations are still required to eliminate artifacts, owing to possible mismatches between the multi-modal images.
Therefore, in practical applications, to address the long iterative convergence time of the attenuation correction algorithm, a higher level of computing resources (such as a high-performance GPU) is usually required, which increases cost. In addition, if there is a large difference between the iteration initial value and the true linear attenuation coefficient, the iterative algorithm may converge to a local optimum, and a globally optimal result cannot be obtained. In particular, to avoid local optima, the prior-art iterative algorithms need many restrictions and protections, and many tuning parameters must be set, which reduces the stability and robustness of the algorithm.
To improve the operation speed and the result stability of the linear attenuation coefficient iterative algorithm, the present invention uses a deep learning network to map the PET image without attenuation correction into a linear attenuation coefficient image that serves as the known initial value of the subsequent algorithm, so as to optimize the convergence path of the algorithm and converge to the globally optimal solution as quickly as possible. Compared with the original initial values (a uniform linear attenuation coefficient image over the full imaging field of view, or a linear attenuation coefficient image converted from other modality images), the linear attenuation coefficient image obtained in this way carries more accurate attenuation information of the scanned object and serves as a good approximation of the true attenuation image, so fast convergence of the iterative process can be ensured and the stability and quantitative accuracy of the algorithm improved. In addition, the attenuation information extracted by the deep learning model is derived from the PET images themselves, so there is no mismatch between multi-modal images, and motion and truncation artifacts are avoided.
It should be noted that in the following description the terms "linear attenuation coefficient image" and "linear attenuation coefficient" refer to the same quantity and are used interchangeably.
Example one
As shown in fig. 1, the present embodiment provides a linear attenuation coefficient acquisition method for PET image reconstruction, and the method of the present embodiment may be implemented on any electronic device, preferably in a computing device associated with a PET detector, and the method of the present embodiment may include the following steps:
s10, constructing a log-likelihood function L of the PET image without attenuation correction based on the acquired PET detection data.
For example, the log-likelihood function L of a PET image without attenuation correction is as follows;
The formula I is as follows:

$$L=\sum_{i=1}^{N}\sum_{t=1}^{T}\left(y_{it}\ln\bar{y}_{it}-\bar{y}_{it}\right),\qquad \bar{y}_{it}=\sum_{j=1}^{M}a_{ijt}x_{j}+r_{it}$$

wherein r = [r_{1t}, r_{2t}, …, r_{NT}]^T denotes the mean of the random noise, y = [y_{1t}, y_{2t}, …, y_{NT}]^T represents the acquired PET detection data, M is the size of the PET image space (the spatial size being constant), A = [a_{ijt}] is the system matrix, x = [x_1, x_2, …, x_j, …, x_M]^T represents the unknown PET image, and t is the time of flight.
S20, adjusting the log-likelihood function L and performing iterative optimization according to pre-defined filtering information and a prior penalty function, to obtain the PET image without attenuation correction.
In this embodiment, iterative optimization can be performed according to the following formula two to obtain a PET image without attenuation correction;
The formula II is as follows:

$$\hat{x}=F\left(\arg\max_{x\geq 0}\left[L(x)-\beta R(x)\right]\right)$$

wherein μ = [μ_1, μ_2, …, μ_K]^T represents the unknown linear attenuation coefficient distribution, β is a weighting factor, F represents the post-filtering function, and R(x) is the a-priori scalar penalty function.
S30, mapping the PET image without attenuation correction into the linear attenuation coefficient image μ0 corresponding to the detection object according to the pre-constructed mapping relation.
It can be understood that the pre-constructed mapping relation is a pre-trained deep learning network G, and the trained deep learning network G is used to realize the mapping from the PET image without attenuation correction to the linear attenuation coefficient image μ0;
Specifically, a first training data set for training the deep learning network G is obtained, the first training data set comprising: simulated training data obtained by simulation and/or actually acquired training data.
The first training data set is preprocessed and input into the deep learning network G to obtain an output result, and the network parameter θ of the deep learning network G is optimized to minimize a loss function L', so as to obtain the trained deep learning network G, wherein the optimized network parameter is

\hat{θ} = \arg\min_{θ} L'\big( G_{θ}(\hat{x}),\, μ_{CT} \big)
As shown in fig. 2B.
The linear attenuation coefficient image μ_0 corresponding to the detection target is then obtained from the trained deep learning network G:

μ_0 = G_{\hat{θ}}(\hat{x})
In other embodiments, the pre-constructed mapping relation is a pre-trained deep learning network G' for mapping the PET image without attenuation correction to a CT image. Correspondingly, after training, a CT image corresponding to the detection target is acquired based on the trained deep learning network G', and the CT image is then converted into the linear attenuation coefficient image μ_0 corresponding to the detection target.
The deep learning network G and the deep learning network G' in this embodiment may be a CNN network, a Unet network, a GAN network, or another network. The deep learning network G maps the PET image into a linear attenuation coefficient image; the deep learning network G' maps the PET image into a CT image, which is then converted into a linear attenuation coefficient image according to the one-to-one correspondence between CT values and linear attenuation coefficients.
S40, obtaining the linear attenuation coefficient μ_1 for PET image reconstruction based on the position parameter variation of the corresponding scanning bed when the PET detection data is acquired and the linear attenuation coefficient image μ_0 corresponding to the detection target.

For example, step S40 can be implemented by correcting the linear attenuation coefficient image μ_0 according to the following formula three to obtain the linear attenuation coefficient μ_1 for PET image reconstruction.

The formula III is as follows: μ_1 = μ_0 + μ_bed(Δx, Δy, Δz);

wherein μ_bed(Δx, Δy, Δz) is the linear attenuation coefficient image of the scanning bed on which the detection object lies, and Δx, Δy, Δz are the movement values in the three directions.
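A minimal numerical sketch of formula three, assuming the bed's attenuation map is defined on the same voxel grid as μ_0 and that the couch translation has already been converted to whole voxels (both assumptions; a real implementation would interpolate physical offsets):

```python
import numpy as np

def add_bed_attenuation(mu0, mu_bed, dx, dy, dz):
    """Formula-three sketch: shift the scanning bed's attenuation map by the
    measured couch translation (given here in voxels, an assumption) and add
    it to the patient attenuation image mu0."""
    # np.roll wraps around the volume edges; adequate for a sketch because
    # the bed map is zero near the borders in practice.
    shifted = np.roll(mu_bed, shift=(dz, dy, dx), axis=(0, 1, 2))
    return mu0 + shifted

# Toy 4x4x4 volumes: a uniform "patient" map and a one-voxel "bed".
mu0 = np.full((4, 4, 4), 0.0975)   # soft-tissue value quoted in the text
mu_bed = np.zeros((4, 4, 4))
mu_bed[0, 0, 0] = 0.05
mu1 = add_bed_attenuation(mu0, mu_bed, dx=1, dy=0, dz=0)
```

Here the bed voxel lands one step along x, so μ_1 equals μ_0 everywhere except at the shifted bed position.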
In this embodiment, when the linear attenuation coefficient image is obtained, the deep learning network mapping is performed in the image domain; the processing is fast, the additional time is negligible relative to the reconstruction process, and the feasibility of the algorithm is ensured. The linear attenuation coefficient image generated by the deep learning network is finely adjusted by the attenuation coefficient iterative algorithm on the basis of the acquired data, which alleviates the generalization problem of PET acquisition data, simplifies the task of the deep learning network, and improves its stability. Moreover, the acquisition of PET detection data does not depend on other modalities, so the method can be applied to a standalone PET scan, reducing the scanning environment requirements and expanding the application occasions.
In particular, since the attenuation correction information in the PET image reconstruction process is acquired from the PET detection data themselves, attenuation correction can still be carried out when the PET multi-modal images are mismatched due to respiration, heartbeat or patient movement, improving image quality and providing more accurate images for physicians' analysis and applications.
Example two
This embodiment provides a method that maps a PET image without attenuation correction to a linear attenuation coefficient image using a deep learning network, and then uses the obtained linear attenuation coefficient image as the initial value of a linear attenuation coefficient iterative algorithm to obtain a reconstructed PET image, thereby ensuring fast and stable convergence of the linear attenuation coefficient estimation algorithm. The method can be completed in the computing equipment of the PET detection system, effectively improves the calculation speed and time, and increases reliability and stability. With reference to fig. 2A to 4, the specific steps are as follows:
The following steps 201 to 203 are existing modeling processes; they are listed here because the subsequent steps require their formulas and notation.
201. The acquisition process of PET detection data can be modeled as:

\bar{y}_{it} = e^{-\sum_{k=1}^{K} l_{ik} μ_k} \sum_{j=1}^{M} A_{ijt} x_j + r_{it}    (1)
In formula (1), y = [y_{1t}, y_{2t}, …, y_{NT}]^T represents the detected data, i.e. the PET detection data, N is the size of the sinogram of the detection data, and T is the dimension of the time of flight (TOF).

x = [x_1, x_2, …, x_j, …, x_M]^T represents the unknown PET image, and M is the size of the PET image space. μ = [μ_1, μ_2, …, μ_K]^T represents the unknown linear attenuation coefficient distribution; the attenuation coefficients are independent of the time of flight.

A = [A_{ijt}] is the system matrix; it expresses in mathematical form the probability that a point source at spatial position j is detected by line of response (LOR) i with time of flight t, and reflects the physical characteristics of the system. l = [l_{ik}] is the linear attenuation coefficient matrix, representing the intersection length of LOR i as it passes through spatial position k. r = [r_{1t}, r_{2t}, …, r_{NT}]^T denotes the mean values of random and scattering noise.
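A toy numerical sketch of the forward model of formula (1), with made-up dimensions and random values purely for illustration (real scanners have millions of LORs):

```python
import numpy as np

# Hypothetical tiny dimensions for illustration only.
N, T, M, K = 4, 2, 3, 3
rng = np.random.default_rng(0)
A = rng.random((N, T, M))        # system matrix A_ijt
l = rng.random((N, K))           # intersection lengths l_ik per LOR
x = rng.random(M)                # activity distribution (unknown in practice)
mu = np.full(K, 0.0975)          # soft-tissue attenuation, cm^-1
r = np.full((N, T), 0.1)         # mean randoms/scatter per (LOR, TOF bin)

atten = np.exp(-(l @ mu))             # attenuation factor e^{-[l mu]_i}, one per LOR
ybar = atten[:, None] * (A @ x) + r   # expected counts  ȳ_it  of formula (1)
```

Note that the attenuation factor depends only on the LOR index i, not on the TOF bin t, matching the statement that μ is independent of the time of flight.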
202. The PET detection data obey a Poisson distribution, with the unknowns being the PET radioactivity distribution x and the linear attenuation coefficient distribution μ. The log-likelihood function of the detection data is expressed as:

L(x, μ, y) = \sum_{i=1}^{N} \sum_{t=1}^{T} \left[ y_{it} \ln \bar{y}_{it} - \bar{y}_{it} \right]    (2)
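The Poisson log-likelihood of step 202 (with the constant ln(y!) term dropped) can be evaluated as, for example:

```python
import numpy as np

def log_likelihood(y, ybar):
    """Poisson log-likelihood of formula (2), constant terms omitted.
    y: measured counts, ybar: expected counts, both shaped (N, T)."""
    return float(np.sum(y * np.log(ybar) - ybar))

# Toy check: the likelihood is largest when ybar matches y exactly.
y = np.array([[3.0, 1.0], [2.0, 4.0]])
best = log_likelihood(y, y)
worse = log_likelihood(y, y + 1.0)
```

For each term, y·ln(ȳ) − ȳ is maximized at ȳ = y, which is why any mismatch between expected and measured counts lowers the value.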
203. Substituting formula (1) into formula (2) and ignoring terms not related to the unknowns, the log-likelihood function can be written as:

L(x, μ, y) = \sum_{i=1}^{N} \sum_{t=1}^{T} \left[ y_{it} \ln\Big( e^{-\sum_{k} l_{ik} μ_k} \sum_{j} A_{ijt} x_j + r_{it} \Big) - \Big( e^{-\sum_{k} l_{ik} μ_k} \sum_{j} A_{ijt} x_j + r_{it} \Big) \right]    (3)
204. PET image reconstruction is performed based on the PET detection data without considering attenuation correction, i.e. assuming that no rays are attenuated, so the linear attenuation coefficients in the full imaging field of view are set to zero: μ_j = 0, j = 1, …, K. The log-likelihood function (3) then becomes:

L(x, μ=0, y) = \sum_{i=1}^{N} \sum_{t=1}^{T} \left[ y_{it} \ln\Big( \sum_{j} A_{ijt} x_j + r_{it} \Big) - \Big( \sum_{j} A_{ijt} x_j + r_{it} \Big) \right]    (4)
Since the gamma rays are assumed to undergo no attenuation during detection, and therefore no scattering, r = [r_{1t}, r_{2t}, …, r_{NT}]^T of formula (1) here represents only the average value of random noise.
The PET image without attenuation correction can be reconstructed by a post-filtered penalized maximum likelihood method:

\hat{x} = F\left\{ \arg\max_{x} \left[ L(x, μ=0, y) - β R(x) \right] \right\}    (5)

In formula (5), F denotes a post-filtering function, L(x, μ=0, y) denotes the log-likelihood function without attenuation correction, R(x) is a predefined scalar penalty function, and β is a predefined weight factor that balances the importance of the log-likelihood function and the penalty function. If β is chosen to be 0, formula (5) reduces to the traditional Maximum Likelihood Expectation Maximization (MLEM) algorithm or its accelerated version, Ordered Subset Expectation Maximization (OSEM).
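As one illustration of the post-filtering operator F in formula (5), a small normalized Gaussian kernel applied to a 1-D profile (a real system would filter in 3-D; the kernel width is an arbitrary assumption here):

```python
import numpy as np

def post_filter(x, sigma=1.0):
    """Sketch of F: normalized Gaussian smoothing of a 1-D image."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()               # normalization preserves total counts
    return np.convolve(x, kernel, mode="same")

profile = np.zeros(11)
profile[5] = 1.0                 # a single hot voxel
smoothed = post_filter(profile)  # spread out; total activity preserved
```

Because the kernel is normalized, interior structures keep their total activity while noise is suppressed, which is the role F plays after the penalized reconstruction.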
The aforementioned PET image without attenuation correction, \hat{x}, has significant attenuation artifacts, which make the radioactivity distribution quantitatively inaccurate: for example, the patient's margins appear too bright, the patient's internal tissues appear too dark, and pulmonary uptake appears too high.
Although the radioactive uptake values of the different tissues in the PET image \hat{x} are inaccurate, the structural information of the different tissues is still preserved: for example, although the patient's edge is too bright, the edge range can still be determined; and although lung uptake has incorrect contrast, the lungs can still be delineated on the image.
Therefore, in the present embodiment, a linear attenuation coefficient image that accurately reflects the tissue distribution inside the patient can be restored from the PET image without attenuation correction. The linear attenuation coefficient of human tissue depends on its density and, as experiments confirm, does not deviate much from typical values: approximately 0.0975 cm⁻¹ in soft-tissue regions, 0.0864 cm⁻¹ in fat regions, and 0.0224 cm⁻¹ in lung regions. The quantitative accuracy of a linear attenuation image restored from the non-attenuation-corrected PET image is therefore relatively easy to guarantee.
205. In order to fully extract the features of the PET image, the present embodiment uses the deep learning network G to learn the mapping between the PET image without attenuation correction and the linear attenuation coefficient image.
The deep learning network G is a pre-trained network. During training, taking PET/CT as an example, the non-attenuation-corrected PET image \hat{x} is taken as input and the linear attenuation coefficient image μ_0 as output; the output is compared with the real linear attenuation image μ_CT obtained from the CT scan, and the network parameter θ is optimized to minimize the loss function L', so that the PET image without attenuation correction can be mapped into an accurate linear attenuation coefficient image, namely:

\hat{θ} = \arg\min_{θ} L'\big( G_{θ}(\hat{x}),\, μ_{CT} \big)    (6)

wherein

μ_0 = G_{\hat{θ}}(\hat{x})
the training data set is obtained by optimization in the training process, and can be from simulation or actual acquisition. The training data set needs to be preprocessed, and the linear attenuation coefficient image and the unattenuated PET image are completely matched through screening, so that no truncation or motion artifact exists. The preprocessing is to ensure that the linear attenuation coefficient image is completely matched with the unattenuated PET original image by screening, and no truncation or motion artifact exists.
Without loss of generality, the deep learning network G may be a CNN network, a Unet network, a GAN network, or another network; this embodiment does not limit the structure of the deep learning network G.
206. Since the PET image without attenuation correction is purely functional imaging, the scanning bed does not appear in the PET image; the attenuation information of the scanning bed therefore needs to be added to the linear attenuation image output by the deep learning network G. The linear attenuation coefficient image with the scanning bed's attenuation information added is μ_1 = μ_0 + μ_bed(Δx, Δy, Δz).
In this embodiment, since the shape of the bed is known, the bed can only undergo rigid motion, and the PET detection system is mechanically well adjusted before scanning, only translations of the bed in three directions need to be considered, and the linear attenuation coefficient distribution of the bed can be expressed as μ_bed(Δx, Δy, Δz), where Δx, Δy, Δz are the movement values in the three directions and μ_bed(0, 0, 0) is the initial position of the bed. Since the bed cannot move horizontally during scanning, the horizontal movement Δx can be obtained by a mechanical measurement or calibration process and set to a constant during scanning; the vertical movement Δy can be determined by reading the elevation height of the bed; and the axial movement Δz can be determined by reading the axial position of the scanning positioning light. In other embodiments, the relative position of the patient and the bed can be learned from the PET image by a neural network, without depending on external signals.
Although the PET image is functional imaging and cannot image the scanning bed itself, the patient lies on the scanning bed, so the activity distribution on the lower side of the body is flat, and the head rest on the scanning bed determines the starting scan position of the head. Therefore, even though the PET image does not display the scanning bed, its radioactivity distribution contains the position information of the scanning bed, which can be learned by image shape recognition: specifically, a network takes the PET image (with or without attenuation correction) as input, outputs the position of the scanning bed, and is constructed by learning.
It should be noted that step 205 maps the PET image without attenuation correction, \hat{x}, to the linear attenuation coefficient image μ_0 by means of the deep learning network G, and the scanning bed compensation in step 206 is likewise performed on the linear attenuation coefficient map, yielding μ_1.
In another possible implementation, since the linear attenuation coefficient and the CT value conform to a bilinear transformation relationship, which satisfies the one-to-one correspondence requirement, and the dynamic range of CT values is larger, steps 205 and 206 may instead adopt a deep learning network G' to map the non-attenuation-corrected PET image \hat{x} into a CT image, add the scanning bed compensation information to the mapped CT image, and then convert the CT image with the scanning bed compensation into the linear attenuation coefficient image μ_1.
Of course, if the deep learning network G' maps the non-attenuation-corrected PET image \hat{x} into a CT image, then training and bed compensation are performed on CT images during the training process, and the bed-compensated CT image is finally converted into a linear attenuation coefficient image. The deep learning network G' is not needed for the conversion of the CT image into linear attenuation coefficients.
In specific operation, because the dynamic range of CT values is large, the gradient calculation when optimizing the deep learning network has higher precision, and the optimization result is more accurate.
In addition, for better understanding, the selection and training of the deep learning network G is described below. PET image quality varies greatly across different sites, different devices, and different scanning parameters, so in practical applications it is difficult to ensure that the current PET image (the PET image without attenuation correction) has the same quality as the training PET images; this greatly affects the applicability of the learned network — the problem caused by the generalization of PET detection data. To solve this generalization problem, image data covering many different conditions would normally have to be provided for deep learning network training; such a data requirement is generally unrealistic, brings great difficulty to the construction of the deep learning network, and imposes heavy training time and memory requirements. This embodiment therefore does not apply μ_1 directly in the reconstruction; instead, μ_1 is used as the initial value of the linear attenuation coefficient distribution iterative algorithm, so that the acquired data can be used to fine-tune the trained result μ_1. This ensures that the linear attenuation coefficient result is consistent with the actually acquired data and, in effect, lowers the generalization requirement on the training data. On the other hand, compared with the initial value set by the original algorithm, μ_1 greatly improves the operation speed and quantitative accuracy of the original algorithm, and a linear attenuation image better matched to the actual acquisition can be obtained quickly. In short, to solve the problem of mismatched data generalization, the result is fine-tuned with the actually acquired data.
207. Since the log-likelihood function in the formula (3) is a very complex function for the unknowns x and μ, it is difficult to obtain an analytic solution, and therefore an iterative algorithm is required to gradually approximate the optimal solution. For unknown PET radioactivity distribution x, keeping linear attenuation coefficient distribution mu as a constant, and maximizing a log-likelihood function, namely an MLEM algorithm universal for PET image reconstruction:
x_j^{(n+1)} = \frac{x_j^{(n)}}{\sum_{i=1}^{N}\sum_{t=1}^{T} A_{ijt}\, e^{-\sum_{k} l_{ik} μ_k}} \sum_{i=1}^{N}\sum_{t=1}^{T} A_{ijt}\, e^{-\sum_{k} l_{ik} μ_k}\, \frac{y_{it}}{e^{-\sum_{k} l_{ik} μ_k} \sum_{m=1}^{M} A_{imt} x_m^{(n)} + r_{it}}    (7)
in formula (7), n represents the current iteration number.
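A toy sketch of one MLEM update in the spirit of formula (7), with the TOF axis flattened into the rows of the system matrix (dimensions and data are made up for illustration):

```python
import numpy as np

def mlem_step(x, A2, atten, y, r):
    """One MLEM iteration (formula (7) sketch).
    A2: (rows, M) system matrix with LOR/TOF bins flattened into rows;
    atten: (rows,) attenuation factors e^{-[l mu]}; y, r: (rows,)."""
    Aeff = atten[:, None] * A2          # attenuated system matrix
    ybar = Aeff @ x + r                 # expected counts
    back = Aeff.T @ (y / ybar)          # backprojected measured/expected ratio
    sens = Aeff.sum(axis=0)             # sensitivity image (normalization)
    return x * back / sens

rng = np.random.default_rng(1)
A2 = rng.random((8, 3))
atten = np.exp(-rng.random(8))
r = np.full(8, 0.05)
x_true = np.array([1.0, 2.0, 0.5])
y = (atten[:, None] * A2) @ x_true + r   # noise-free "measured" data
x_next = mlem_step(x_true, A2, atten, y, r)
```

With noise-free data the true image is a fixed point of the update: the measured/expected ratio is 1 everywhere, so the backprojection equals the sensitivity image and x is unchanged.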
208. Keeping the PET radioactivity distribution x constant, the log-likelihood function is maximized with respect to the unknown attenuation coefficient distribution μ, and a new linear attenuation coefficient distribution μ is computed directly from the PET detection data; the corresponding update formula is:

[formula (8): the linear attenuation coefficient update formula, rendered as an image in the original document]
in the specific implementation process, the linear attenuation coefficient distribution μ is firstly kept as a constant, the objective function is maximized for the PET radioactivity distribution x, and the step 207, namely the traditional MLEM iterative reconstruction algorithm, is adopted; the objective function is then maximized for the unknown linear attenuation coefficient distribution μ, step 208, keeping the PET radioactivity distribution x constant. The initial value of the linear attenuation coefficient distribution applied to the PET image reconstruction in the first iteration is the linear attenuation coefficient map μ obtained in step 2061The value of (c). And the operation is performed alternately, attenuation correction is continuously corrected to approximate to the real attenuation condition, and finally the estimated values of x and mu meeting the requirement of the maximum objective function are obtained.
Taking a PET/CT system as an example, fig. 3 defines the multi-modal detection system coordinate system; fig. 4(a) is a PET image without attenuation correction; fig. 4(b) is the attenuation-corrected PET image obtained after alternating iteration of steps 207 and 208; and fig. 4(c) is the linear attenuation coefficient distribution obtained with the deep learning network and used as the initial value of the linear attenuation coefficient iteration algorithm.
Compared with the traditional method for attenuation correction through other modal images, the PET image reconstructed by the embodiment has better quality, solves the problem of generalization of PET acquisition data, reduces the requirement of scanning environment, and expands the application occasions.
EXAMPLE III
The embodiment of the invention provides a method for reconstructing a PET image, which can be implemented on any electronic device and comprises the following steps:
301. The linear attenuation coefficient image μ_1 for PET image reconstruction is obtained by the linear attenuation coefficient acquisition method described in the first embodiment.

302. The linear attenuation coefficient image μ_1 is taken as the known initial value of the linear attenuation coefficient distribution μ.
303. based on a target function of a pre-established PET image, a linear attenuation coefficient of a known initial value and PET radioactivity distribution x, adopting an alternative solving strategy to obtain a final reconstructed PET image;
wherein, the linear attenuation coefficient distribution mu and the PET radioactivity distribution x are respectively two variables in the objective function of the PET image;
the alternate solution strategy is: and when the first variable is a known value, acquiring an estimated value of the second variable, taking the acquired estimated value of the second variable as the known value, acquiring the estimated value of the first variable, and solving for n times alternately, wherein n is a natural number greater than 1.
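The alternating strategy of steps 301 to 303 can be sketched structurally as follows. The two update rules are passed in as callables, since formula six is rendered only as an image in the original; the toy updates below are placeholders chosen to converge, not the patent's actual formulas:

```python
def alternate_solve(x0, mu0, update_x, update_mu, n=30):
    """Alternating solution strategy: fix one variable, update the other."""
    x, mu = x0, mu0
    for _ in range(n):
        x = update_x(x, mu)    # formula five: activity update with mu fixed
        mu = update_mu(x, mu)  # formula six: attenuation update with x fixed
    return x, mu

# Placeholder fixed-point updates converging to (2, 3), purely to
# exercise the alternation loop.
x_hat, mu_hat = alternate_solve(
    0.0, 0.0,
    update_x=lambda x, mu: 0.5 * (x + 2.0),
    update_mu=lambda x, mu: 0.5 * (mu + 3.0),
)
```

Each pass holds one variable fixed while the other moves toward its conditional optimum, which is exactly the n-fold alternation the claim describes.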
For example, when the linear attenuation coefficient μ is a known value, obtaining an estimated value of the PET radioactivity distribution x according to the following formula five;
x_j^{(n+1)} = \frac{x_j^{(n)}}{\sum_{i=1}^{N}\sum_{t=1}^{T} A_{ijt}\, e^{-\sum_{k} l_{ik} μ_k}} \sum_{i=1}^{N}\sum_{t=1}^{T} A_{ijt}\, e^{-\sum_{k} l_{ik} μ_k}\, \frac{y_{it}}{e^{-\sum_{k} l_{ik} μ_k} \sum_{m=1}^{M} A_{imt} x_m^{(n)} + r_{it}}
when the PET radioactivity distribution x is a known value, acquiring an estimated value of linear attenuation coefficient distribution mu according to the following formula six;
[formula six: the linear attenuation coefficient update formula, rendered as an image in the original document]
wherein x = [x_1, x_2, …, x_M]^T represents the unknown PET image, i.e. the PET radioactivity distribution; μ = [μ_1, μ_2, …, μ_K]^T represents the linear attenuation coefficient distribution; A = [A_{ijt}] is the system matrix; r = [r_{1t}, r_{2t}, …, r_{NT}]^T represents the mean values of random and scattering noise; T denotes the dimension of the time of flight (TOF); and l = [l_{ik}] is the linear attenuation coefficient matrix, where j indexes a spatial position point source in the PET detection system and i indexes a line of response (LOR).
In addition, the embodiment of the present invention further provides a PET detection system, which includes: a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions stored in the memory, and specifically executes the method for reconstructing a PET image according to any of the embodiments described above.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third and the like are for convenience only and do not denote any order. These words are to be understood as part of the name of the component.
Furthermore, it should be noted that in the description of the present specification, the description of the term "one embodiment", "some embodiments", "examples", "specific examples" or "some examples", etc., means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (10)

1. A linear attenuation coefficient acquisition method for PET image reconstruction, comprising:
s10, constructing a log-likelihood function L of the PET image without attenuation correction based on the acquired PET detection data;
s20, adjusting and carrying out iterative optimization on the log-likelihood function L according to the pre-defined filtering information and the prior penalty function to obtain a PET image without attenuation correction;
s30, mapping the PET image without attenuation correction into a linear attenuation coefficient image mu corresponding to the detection object according to the pre-constructed mapping relation0
S40, based on the position parameter variation of the corresponding scanning bed when the PET detection data is acquired and the linear attenuation coefficient image mu corresponding to the detection target0Obtaining linear attenuation coefficient mu for PET image reconstruction1
2. The method according to claim 1, wherein the S10 includes:
estimating the log-likelihood function L of the PET image without attenuation correction according to the following formula I;

The formula I is as follows:

L(x, μ=0, y) = \sum_{i=1}^{N} \sum_{t=1}^{T} \left[ y_{it} \ln\Big( \sum_{j=1}^{M} A_{ijt} x_j + r_{it} \Big) - \Big( \sum_{j=1}^{M} A_{ijt} x_j + r_{it} \Big) \right]

wherein r = [r_{1t}, r_{2t}, …, r_{NT}]^T denotes the average value of random noise, y = [y_{1t}, y_{2t}, …, y_{NT}]^T represents the acquired PET detection data, M is the size of the PET image space, A = [A_{ijt}] is the system matrix, x = [x_1, x_2, …, x_j, …, x_M]^T represents the unknown PET image, and t is the time of flight.
3. The method according to claim 1 or 2, wherein the S20 includes:
adjusting and performing iterative optimization according to the following formula II to obtain the PET image without attenuation correction;

The formula II is as follows:

\hat{x} = F\left\{ \arg\max_{x} \left[ L(x, μ=0, y) - β R(x) \right] \right\}

wherein μ = [μ_1, μ_2, …, μ_K]^T represents the unknown linear attenuation coefficient distribution, β is a weighting factor, F denotes a post-filtering function, and R(x) is an a priori scalar penalty function.
4. The method according to claim 1 or 2, wherein the S30 includes:
the pre-constructed mapping relation is a pre-trained deep learning network G, which realizes the mapping from the PET image without attenuation correction to the linear attenuation coefficient image μ_0;
specifically, a first training data set for training the deep learning network G is obtained, the first training data set comprising: simulated training data obtained by simulation and/or actually acquired training data;
preprocessing the first training data set, inputting it into the deep learning network G, obtaining an output result, and optimizing the network parameter θ of the deep learning network G to minimize a loss function L', so as to obtain the trained deep learning network G, wherein the optimized network parameter is

\hat{θ} = \arg\min_{θ} L'\big( G_{θ}(\hat{x}),\, μ_{CT} \big)
obtaining the linear attenuation coefficient image μ_0 corresponding to the detection target based on the trained deep learning network G:

μ_0 = G_{\hat{θ}}(\hat{x})
5. The method according to claim 1 or 2, wherein the S30 includes:
the pre-constructed mapping relation is a pre-trained deep learning network G ', and the trained deep learning network G' is used for mapping the PET image and the CT image without attenuation correction;
specifically, a second training data set for training the deep learning network G' is obtained, the second training data set comprising: simulated training data obtained by simulation and/or actually acquired training data;
preprocessing the second training data set, inputting it into the deep learning network G', obtaining an output result, and optimizing the network parameter θ of the deep learning network G' to minimize a loss function L', so as to obtain the trained deep learning network G', wherein the optimized network parameter is

\hat{θ} = \arg\min_{θ} L'\big( G'_{θ}(\hat{x}),\, I_{CT} \big)

I_{CT} being the real CT image used as the training target;
Acquiring a CT image corresponding to a detection target based on the trained deep learning network G';
converting the CT image into a linear attenuation coefficient image μ_0 corresponding to the detection target.
6. The method according to claim 4 or 5,
the deep learning network G and the deep learning network G' are both CNN networks, Unet networks or GAN networks.
7. The method according to claim 1 or 2, wherein the S40 includes:
correcting the linear attenuation coefficient image μ_0 according to the following formula three to obtain the linear attenuation coefficient image μ_1 for PET image reconstruction;

The formula III is as follows: μ_1 = μ_0 + μ_bed(Δx, Δy, Δz);

wherein μ_bed(Δx, Δy, Δz) is the linear attenuation coefficient image of the scanning bed on which the detection object lies, and Δx, Δy, Δz are the movement values in the three directions.
8. A method of reconstructing a PET image, comprising:
obtaining a linear attenuation coefficient image for PET image reconstruction by using the linear attenuation coefficient obtaining method according to any one of claims 1 to 7;
taking the linear attenuation coefficient image value as a known initial value of a linear attenuation coefficient;
based on a target function of a pre-established PET image, linear attenuation coefficient distribution of a known initial value and PET radioactivity distribution x, adopting an alternative solving strategy to obtain a final reconstructed PET image;
wherein, the linear attenuation coefficient distribution mu and the PET radioactivity distribution x are respectively two variables in the objective function of the PET image;
the alternate solution strategy is: and when the first variable is a known value, acquiring an estimated value of the second variable, taking the acquired estimated value of the second variable as the known value, acquiring the estimated value of the first variable, and solving for n times alternately, wherein n is a natural number greater than 1.
9. The reconstruction method according to claim 8, wherein the final reconstructed PET image is obtained by adopting an alternating solution strategy based on the pre-established objective function of the PET image, the linear attenuation coefficient distribution of the known initial value and the PET radioactivity distribution x, and comprises the following steps:
when the linear attenuation coefficient distribution μ is a known value, acquiring an estimated value of the PET radioactivity distribution x according to the following formula five;

x_j^{(n+1)} = \frac{x_j^{(n)}}{\sum_{i=1}^{N}\sum_{t=1}^{T} A_{ijt}\, e^{-\sum_{k} l_{ik} μ_k}} \sum_{i=1}^{N}\sum_{t=1}^{T} A_{ijt}\, e^{-\sum_{k} l_{ik} μ_k}\, \frac{y_{it}}{e^{-\sum_{k} l_{ik} μ_k} \sum_{m=1}^{M} A_{imt} x_m^{(n)} + r_{it}}

when the PET radioactivity distribution x is a known value, acquiring an estimated value of the linear attenuation coefficient distribution μ according to the following formula six;

[formula six: the linear attenuation coefficient update formula, rendered as an image in the original document]

wherein x = [x_1, x_2, …, x_M]^T represents the unknown PET image, i.e. the PET radioactivity distribution; μ = [μ_1, μ_2, …, μ_K]^T represents the linear attenuation coefficient distribution; A = [A_{ijt}] is the system matrix; r = [r_{1t}, r_{2t}, …, r_{NT}]^T represents the mean values of random and scattering noise; T denotes the dimension of the time of flight (TOF); and l = [l_{ik}] is the linear attenuation coefficient matrix, where j indexes a spatial position point source in the PET detection system and i indexes a line of response (LOR).
10. A PET detection system, comprising: a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions stored in the memory, in particular, the method for reconstructing a PET image according to any one of claims 8 or 9.
CN202111630372.9A 2021-12-28 2021-12-28 Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction Pending CN114387364A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111630372.9A CN114387364A (en) 2021-12-28 2021-12-28 Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111630372.9A CN114387364A (en) 2021-12-28 2021-12-28 Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction

Publications (1)

Publication Number Publication Date
CN114387364A true CN114387364A (en) 2022-04-22

Family

ID=81199295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111630372.9A Pending CN114387364A (en) 2021-12-28 2021-12-28 Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction

Country Status (1)

Country Link
CN (1) CN114387364A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115429299A (en) * 2022-09-26 2022-12-06 明峰医疗系统股份有限公司 Method, system, equipment and storage medium for scattering correction based on positioning sheet
CN117611750A (en) * 2023-12-05 2024-02-27 北京思博慧医科技有限公司 Method and device for constructing three-dimensional imaging model, electronic equipment and storage medium
CN117671463A (en) * 2023-12-07 2024-03-08 上海万怡医学科技股份有限公司 Multi-mode medical data quality calibration method

Similar Documents

Publication Publication Date Title
CN106491151B (en) PET image acquisition method and system
US11189374B2 (en) Method and system for calculating SUV normalization coefficient in a SPECT quantitative tomographic image
RU2524302C2 (en) Extension on basis of model of vision field in radionuclide visualisation
US7737406B2 (en) Compensating for truncated CT images for use as attenuation maps in emission tomography
EP2399238B1 (en) Functional imaging
CN104252714B (en) The reconstruction of time-variable data
CN109961419B (en) Correction information acquisition method for attenuation correction of PET activity distribution image
CN114387364A (en) Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction
CN109978966B (en) Correction information acquisition method for attenuation correction of PET activity distribution image
US20110148928A1 (en) System and method to correct motion in gated-pet images using non-rigid registration
Qi et al. Extraction of tumor motion trajectories using PICCS‐4DCBCT: a validation study
US20100284598A1 (en) Image registration alignment metric
CN107348969B (en) PET data processing method and system and PET imaging equipment
JP7359851B2 (en) Artificial Intelligence (AI)-based standard uptake value (SUV) correction and variation assessment for positron emission tomography (PET)
CN111544023B (en) Method and system for real-time positioning of region of interest based on PET data
CN110458779B (en) Method for acquiring correction information for attenuation correction of PET images of respiration or heart
CN112529977B (en) PET image reconstruction method and system
US11495346B2 (en) External device-enabled imaging support
Pourmoghaddas et al. Respiratory phase alignment improves blood‐flow quantification in Rb82 PET myocardial perfusion imaging
CN115439572A (en) Attenuation correction coefficient image acquisition method and PET image reconstruction method
CN110428384B (en) Method for acquiring correction information for attenuation correction of PET images of respiration or heart
US10417793B2 (en) System and method for data-consistency preparation and image reconstruction
WO2022036633A1 (en) Systems and methods for image registration
CN111951346B (en) 4D-CBCT reconstruction method combining motion estimation and space-time tensor enhancement representation
EP2711738A1 (en) A method and a device to generate virtual X-ray computed tomographic image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination