CN115439572A - Attenuation correction coefficient image acquisition method and PET image reconstruction method - Google Patents


Info

Publication number: CN115439572A
Application number: CN202211291273.7A
Authority: CN (China)
Prior art keywords: image, attenuation correction, deep learning, correction coefficient, learning network
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventor: Li Nan (李楠)
Current and original assignee: Sinounion Healthcare Inc
Application filed by Sinounion Healthcare Inc; priority to CN202211291273.7A

Classifications

    • G06T 11/003 — Reconstruction from projections, e.g. tomography
    • G06T 11/005 — Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06T 11/008 — Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G06T 5/73 — Deblurring; Sharpening
    • G06T 2207/10081 — Computed x-ray tomography [CT]
    • G06T 2207/10104 — Positron emission tomography [PET]
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to an attenuation correction coefficient image acquisition method and a PET image reconstruction method. The acquisition method comprises: for detection data used in medical image reconstruction, acquiring a first image without attenuation correction and a second image with approximate attenuation correction of the detection data; inputting the first image and/or the second image into a trained deep learning network and acquiring a first attenuation correction coefficient output by the network; and modifying the first attenuation correction coefficient. The modified attenuation correction coefficient is used as an initial value in the medical image reconstruction and/or as an elastic transformation coefficient that adjusts the linear attenuation correction coefficient at each iteration of the reconstruction, so as to shorten the convergence path of the reconstruction iteration. The method ensures rapid convergence of the iterative process and improves the stability, quantitative accuracy and precision of the reconstruction algorithm.

Description

Attenuation correction coefficient image acquisition method and PET image reconstruction method
Technical Field
The invention relates to the technical field of medical imaging, in particular to an attenuation correction coefficient image acquisition method, a PET image reconstruction method and a PET system.
Background
Positron Emission Tomography (PET) is one of the most advanced clinical examination imaging techniques in the field of nuclear medicine. During PET acquisition, photons are attenuated in the body before reaching the detectors, so coincidence events originating near the surface of the object are detected more efficiently than those originating in its interior. If this attenuation effect is not corrected, the reconstructed image exhibits attenuation artifacts: the edge of the object appears too bright and the internal tissues appear too dark. A PET system is therefore usually integrated with another modality (such as CT or MRI) to acquire anatomical images of the patient. On the one hand, this localizes the nuclide distribution accurately and improves lesion positioning; on the other hand, the resulting tissue density distribution of the patient can be used for attenuation correction in PET image reconstruction, finally yielding the accurate distribution of the radiopharmaceutical in the patient's body. Fusing PET functional imaging with anatomical imaging of another modality in the same machine combines the advantages of functional and anatomical imaging, serves early lesion detection and disease diagnosis, and benefits diagnosis and treatment guidance for tumors, heart disease and brain disease.
However, in multi-modality acquisition applications, attenuation information matching the PET data is sometimes not accurately obtained, resulting in attenuation correction errors such that additional artifacts are generated on the PET image.
Therefore, an attenuation information acquisition method that does not depend on multi-modality data is required. Existing methods of this kind estimate the attenuation distribution jointly from the PET data itself, but in practice an empirical initial value must be set and a long iteration performed: many iterations are usually needed to approach the ideal value, which makes the iterative convergence time excessively long and usually requires high-end computing resources (such as a high-performance GPU), increasing cost. In addition, the iterative algorithm cannot guarantee convergence to the globally optimal result and may converge to a local optimum. To avoid this, many constraints and protections, together with tuning parameters, must be added to the iterative algorithm, which reduces its stability and robustness.
Disclosure of Invention
Technical problem to be solved
In view of the above drawbacks and deficiencies of the prior art, the present invention provides an attenuation correction coefficient image acquisition method and a PET image reconstruction method.
(II) technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
in a first aspect, an embodiment of the present invention provides a method for acquiring an attenuation correction coefficient image, including:
S10, for detection data used in medical image reconstruction, acquiring a first image without attenuation correction and a second image with approximate attenuation correction of the detection data, wherein the second image is a reconstructed image obtained by attenuation-correcting the first image based on empirical values of the linear attenuation correction coefficient of a specified region;
S20, inputting the first image and/or the second image into a pre-trained deep learning network, and acquiring the first linear attenuation correction coefficient output by the deep learning network;
S30, acquiring a modified attenuation correction coefficient for the medical image reconstruction based on a predetermined second linear attenuation coefficient of the scanning bed and the first linear attenuation correction coefficient;
wherein the modified attenuation correction coefficient is used as an initial value in the medical image reconstruction and/or as an elastic transformation coefficient that adjusts the linear attenuation correction coefficient at each iteration of the reconstruction, so as to shorten the convergence path of the reconstruction iteration;
and the pre-trained deep learning network is obtained by training a deep learning network constructed on the basis of reconstructed medical images and matched associated images.
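The data flow of steps S10–S30 can be sketched end to end as follows. Everything here — the function names, the threshold, the dummy network, the couch coefficient — is an illustrative assumption, not the patent's actual implementation:

```python
import numpy as np

# Approximate linear attenuation coefficient of water at 511 keV, 1/cm.
WATER_MU_511KEV = 0.096

def recon_no_ac(detection_data):
    """S10a: non-attenuation-corrected reconstruction (dummy stand-in)."""
    return np.abs(detection_data)

def approx_ac(first_image):
    """S10b: approximate attenuation correction of the first image, filling a
    thresholded body region with an empirical (water) coefficient."""
    body = first_image > 0.1 * first_image.max()
    mu_approx = np.where(body, WATER_MU_511KEV, 0.0)
    return first_image * np.exp(mu_approx), mu_approx  # crude correction

def deep_learning_net(first_image, second_image):
    """S20: pre-trained deep learning network (dummy stand-in that merely
    rescales its inputs into a mu-like range)."""
    return 0.01 * 0.5 * (first_image + second_image)

def modified_mu(mu_first, mu_bed):
    """S30: combine the first coefficient with the known couch coefficient."""
    return mu_first + mu_bed

detection_data = np.random.default_rng(0).random((8, 8))
first = recon_no_ac(detection_data)
second, _ = approx_ac(first)
mu_first = deep_learning_net(first, second)
mu_prime = modified_mu(mu_first, mu_bed=np.full_like(mu_first, 0.02))
```

The resulting `mu_prime` is what the method then feeds into the reconstruction, as its initial value and/or as the driver of the per-iteration elastic adjustment.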
Optionally, the deep learning network is one of: a CNN network, a U-Net network, a GAN network;
the medical image is a PET image or a CT image.
Optionally, when the medical image is a PET image, before S10, the method further includes:
s00, acquiring a training sample for training the deep learning network based on the reconstructed medical image and the matched associated image;
wherein each training sample comprises: a reconstructed PET image (or a simulated PET image), the approximate linear attenuation correction coefficient corresponding to that PET image, and an other-modality image corresponding to that PET image; the other-modality image is used to obtain the true linear attenuation correction coefficient, which in turn is used to verify whether the trained deep learning network has converged;
s01, training the deep learning network based on the training samples to obtain the trained deep learning network;
and the network parameter θ of the trained deep learning network is the one that minimizes the value of the loss function Φ.
Optionally, training the deep learning network based on the training samples comprises:
inputting the approximate linear attenuation correction coefficient of each training sample into the deep learning network to obtain an output, and comparing the output with the true linear attenuation correction coefficient obtained from the other-modality image of that sample by means of the loss function Φ;
and/or,
inputting the reconstructed non-attenuation-corrected PET image of each training sample into the deep learning network to obtain an output, and comparing the output with the true linear attenuation correction coefficient obtained from the other-modality image of that sample by means of the loss function Φ;
and/or,
summing the reconstructed non-attenuation-corrected PET image and the approximate linear attenuation correction coefficient of each training sample, inputting the summed image into the deep learning network to obtain an output, and comparing the output with the true linear attenuation correction coefficient obtained from the other-modality image of that sample by means of the loss function Φ;
wherein the loss function Φ is one or more of the L1 norm, the L2 norm and the KL divergence, and measures, during training, the similarity between each network output and the true linear attenuation correction coefficient it corresponds to. "Measure" here can be understood as follows: the value computed by the loss function represents the degree of similarity between the linear attenuation coefficient output by the network and the true linear attenuation coefficient. For example, the L1 norm is the sum of absolute differences between vector elements, while the L2 norm is the square root of the sum of squared differences; their convergence behaviour differs in actual training, so the loss function with the best effect must be found by experiment.
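The three candidate loss functions Φ named above can be sketched in plain numpy as follows; taking the KL divergence between the two μ maps after normalizing them to distributions is an illustrative choice, not prescribed by the text:

```python
import numpy as np

def l1_loss(mu_dl, mu_true):
    # L1 norm: sum of absolute element-wise differences
    return float(np.sum(np.abs(mu_dl - mu_true)))

def l2_loss(mu_dl, mu_true):
    # L2 norm: square root of the sum of squared element-wise differences
    return float(np.sqrt(np.sum((mu_dl - mu_true) ** 2)))

def kl_divergence(mu_dl, mu_true, eps=1e-12):
    # KL divergence between the maps normalized to probability distributions;
    # eps guards against division by zero and log of zero
    p = mu_true / (mu_true.sum() + eps) + eps
    q = mu_dl / (mu_dl.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)))

mu_true = np.array([0.096, 0.096, 0.0, 0.02])   # toy "true" coefficients
mu_close = mu_true + 0.001                      # a good network output
mu_far = mu_true + 0.05                         # a poor network output
```

A closer output yields a smaller value under each Φ, which is exactly the "similarity" the loss is meant to measure.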
Optionally, the other modality image is a CT image and the approximate linear attenuation correction coefficient is a linear attenuation correction coefficient image generated based on known linear attenuation correction coefficients of the specified region on the non-attenuation corrected PET image.
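A minimal sketch of generating such an approximate coefficient image, assuming the specified region is the body outline thresholded from the non-attenuation-corrected PET image and filled with the approximate linear attenuation coefficient of water at 511 keV (about 0.096 cm⁻¹); the threshold fraction is an arbitrary illustrative choice:

```python
import numpy as np

WATER_MU_511KEV = 0.096  # approximate value for water at 511 keV, 1/cm

def approximate_mu_map(pet_nac, threshold_frac=0.1):
    # Specified region: voxels with appreciable tracer signal in the
    # non-attenuation-corrected image; fill them with the known coefficient.
    body_mask = pet_nac > threshold_frac * pet_nac.max()
    return np.where(body_mask, WATER_MU_511KEV, 0.0)

pet_nac = np.zeros((16, 16))
pet_nac[4:12, 4:12] = 1.0          # toy "body" region with tracer uptake
mu_approx = approximate_mu_map(pet_nac)
```

Forward-projecting this map and correcting the first image with it would give the "second image" of S10 in this toy setting.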
In a second aspect, an embodiment of the present invention further provides a PET image reconstruction method, which includes:
P01, acquiring a modified attenuation correction coefficient μ_prior′ for the detection data to be PET-reconstructed, using the method of the first aspect;
P02, based on μ_prior′, alternately iterating the attenuation correction coefficient μ and the PET radioactivity distribution x with an alternating iteration strategy to obtain an estimate of x that maximizes the objective function, and taking this estimate as the reconstructed image of the detection data;
wherein the alternating iteration strategy comprises: first taking μ_prior′ as the initial value and solving for x by maximizing the objective function; then, with the solved x held constant, solving for μ by maximizing the objective function; adjusting the solved μ using μ_prior′; then, with the adjusted μ held constant, maximizing the objective function to solve for the next x; and operating alternately in this way.
Optionally, the objective function is a log-likelihood function L(x, μ, y), and P02 comprises:
taking μ_prior′ as the initial value and maximizing the constructed log-likelihood function L(x, μ, y) to obtain the radioactivity distribution x:
x^(n+1) = argmax_x L(x, μ^(n), y)
where n is the iteration number and the initial value of x is a preset value;
taking the solved x as a constant and maximizing the constructed log-likelihood function L(x, μ, y) to obtain the linear attenuation coefficient distribution:
μ^(n+0.5) = argmax_μ L(x^(n+1), μ, y)
adjusting the solved μ with μ_prior′ to obtain the adjusted coefficient, namely:
μ^(n+1) = d^(n+0.5) ∘ μ^(n+0.5)
where d^(n+0.5) is the displacement field variation vector and ∘ is the elastic transformation operator (written as an open circle because the displacement field variation vector acts on the image as an elastic transformation rather than a matrix multiplication);
taking the adjusted μ^(n+1) as a constant and maximizing the constructed log-likelihood function L(x, μ, y) to obtain the next radioactivity distribution x;
and operating alternately in this order to obtain an estimate of x that maximizes the constructed log-likelihood function.
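The alternating strategy of P02 can be outlined as a loop. The two maximization steps and the elastic adjustment are replaced here by crude stand-ins — a real implementation would maximize the Poisson log-likelihood L(x, μ, y) and estimate a displacement field d^(n+0.5) — so this only illustrates the control flow:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_prior = np.full(32, 0.096)                  # mu_prior' from the network
y = rng.poisson(50.0, size=32).astype(float)   # toy measured data

def update_x(mu, y):
    """Stand-in for x = argmax_x L(x, mu, y)."""
    return y * np.exp(mu)   # crude attenuation-compensated estimate

def update_mu(x, y):
    """Stand-in for mu = argmax_mu L(x, mu, y)."""
    return np.log(np.maximum(x, 1e-9) / np.maximum(y, 1e-9))

def elastic_adjust(mu, mu_prior, weight=0.5):
    """Placeholder for mu <- d o mu: pull the estimate toward the prior."""
    return (1.0 - weight) * mu + weight * mu_prior

mu = mu_prior.copy()
for _ in range(5):          # alternate: x-step, mu-step, adjustment
    x = update_x(mu, y)
    mu = update_mu(x, y)
    mu = elastic_adjust(mu, mu_prior)
```

In this toy setting the prior keeps the μ estimate anchored, which is the role the elastic adjustment plays in the convergence path described above.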
In a third aspect, an embodiment of the present invention further provides a training method for a deep learning network, where the trained deep learning network is used to accelerate iterative convergence of an attenuation correction coefficient, and the training method includes:
acquiring a training sample for training a deep learning network based on the reconstructed medical image and the matched associated image;
wherein each training sample comprises: a reconstructed PET image (or a simulated PET image), the approximate linear attenuation correction coefficient corresponding to that PET image, and an other-modality image corresponding to that PET image; the other-modality image is used to obtain the true linear attenuation correction coefficient, which in turn is used to verify whether the trained deep learning network has converged;
training the deep learning network based on the training samples to obtain a trained deep learning network;
and the network parameter θ of the trained deep learning network is the one that minimizes the value of the loss function Φ.
Optionally, training the deep learning network based on the training samples comprises:
inputting the approximate linear attenuation correction coefficient of each training sample into the deep learning network to obtain an output, and comparing the output with the true linear attenuation correction coefficient μ_CT obtained from the other-modality image of that sample by means of the loss function Φ;
and/or,
inputting the reconstructed non-attenuation-corrected PET image of each training sample into the deep learning network to obtain an output, and comparing the output with the true linear attenuation correction coefficient μ_CT obtained from the other-modality image of that sample by means of the loss function Φ;
and/or,
summing the reconstructed non-attenuation-corrected PET image and the approximate linear attenuation correction coefficient of each training sample, inputting the summed image into the deep learning network to obtain an output, and comparing the output with the true linear attenuation correction coefficient μ_CT obtained from the other-modality image of that sample by means of the loss function Φ;
wherein the loss function Φ is one or more of the L1 norm, the L2 norm and the KL divergence, and measures, during training, the similarity between each network output and the true linear attenuation correction coefficient it corresponds to.
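The "minimize Φ over the network parameter θ" structure of this training step can be illustrated with a deliberately tiny stand-in network — a single scale parameter fitted by gradient descent under a squared-L2 loss on synthetic data; a real implementation would of course train a CNN/U-Net/GAN:

```python
import numpy as np

rng = np.random.default_rng(0)
x_nac = rng.random((4, 64))       # toy non-attenuation-corrected PET images
mu_ct = 0.096 * x_nac             # toy "true" coefficients derived from CT

theta = 0.0                       # the network parameter theta
lr = 0.05
losses = []
for _ in range(200):
    mu_dl = theta * x_nac                        # network output mu_DL
    residual = mu_dl - mu_ct
    losses.append(float(np.sum(residual ** 2)))  # Phi: squared-L2 loss
    grad = 2.0 * float(np.sum(residual * x_nac)) # dPhi/dtheta
    theta -= lr * grad / x_nac.size              # scaled gradient step
```

The loss decreases monotonically and θ converges to the value that minimizes Φ, mirroring the statement about the trained network parameter above.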
Optionally, the other-modality image is a CT image, and the approximate linear attenuation correction coefficient is a linear attenuation correction coefficient image generated from known linear attenuation coefficients of the specified region on the non-attenuation-corrected PET image;
denoting the non-attenuation-corrected image by x_NAC and the approximately attenuation-corrected image by x_AAC, the network mapping can be written symbolically as μ_DL = f(x_NAC and/or x_AAC; θ), where θ is the network parameter of the deep learning network, μ_DL is the output of the deep learning network during training, and an intermediate parameter links the input images to the network.
In a fourth aspect, embodiments of the present invention further provide a PET system, including: a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions stored in the memory, and specifically executes the method of any embodiment of any one of the above aspects.
(III) Advantageous effects
In the attenuation correction coefficient image acquisition method of the embodiments of the invention, a trained deep learning network processes the non-attenuation-corrected image and the empirical-value attenuation-corrected image of the detection data to obtain a linear attenuation correction coefficient image, which improves the computation speed and result stability of the iterative algorithm during the iterative calculation of the radioactivity.
The linear attenuation correction coefficient image obtained from the trained deep learning network adjusts the iterative convergence of the linear attenuation coefficient in real time, optimizing the convergence path and converging to a globally optimal solution as quickly as possible. Compared with prior-art algorithms, this algorithm additionally exploits prior knowledge, guarantees fast convergence of the iterative process, and improves the stability, quantitative accuracy and precision of the algorithm.
In addition, the attenuation information extracted by deep learning is derived from the PET image itself, so there is no mismatch between multi-modality images, and motion and truncation artifacts are avoided during the convergence adjustment.
Drawings
Fig. 1 is a schematic flowchart of a method for obtaining an attenuation correction coefficient image according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of applying the result of FIG. 1 to PET image reconstruction;
FIG. 3 is a schematic diagram of a multi-modal detection system coordinate system defined during training of a deep learning network;
FIG. 4 is a diagram showing the comparison of the results of PET images obtained by various algorithms.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
In prior-art multi-modality acquisition applications, attenuation information matched with the PET data sometimes cannot be obtained accurately, so the attenuation correction is wrong and extra artifacts are generated on the PET image. This is explained in detail below.
First, in PET multi-modality imaging, images of different modalities may be misaligned. Taking a PET/CT system as an example, a CT scan is usually completed in a very short time, and the resulting image is essentially a snapshot of a single moment. PET scanning, however, is slow: each bed position typically takes several minutes, and the patient cannot complete the data acquisition while holding their breath. Subject to cardiac activity and respiratory motion, the position and phase of a lesion are therefore mismatched to some degree between the PET and CT images. On the one hand, the PET image is a superposition of data acquired while a moving lesion occupies different positions, so the imaging reflects an average of the lesion position; this inevitably reduces resolution, blurs the lesion, lowers the accuracy of the standardized uptake value (SUV) used in quantitative analysis, and creates a morphological difference from the CT image. On the other hand, registration and fusion of the CT and PET images deviate (particularly near the diaphragm, where the motion amplitude is largest), and using an instantaneous CT image to attenuation-correct a long-duration PET image inevitably introduces position errors, producing local artifacts on the PET image and affecting accurate diagnosis of thoraco-abdominal tumors and the formulation of treatment plans. In addition, during a long PET scan the patient's body may move (for example, the arms and head may move), which also mismatches the PET and CT images and produces attenuation artifacts.
Secondly, the imaging agent commonly used in clinical PET acquisition, ¹⁸F-FDG, is a non-specific agent that reflects glucose metabolism. Compared with normal cells, malignant tumor cells grow actively, proliferate abnormally, demand much energy, and greatly increase their glucose uptake. By detecting glucose aggregation, PET images can therefore assess tumor activity at the metabolic level. However, benign lesions such as those in the acute inflammatory phase, or granulomas, also show marked uptake due to increased metabolism, producing false positives. Delayed imaging — multi-time-point PET acquisition and imaging — is often used clinically to better distinguish benign from malignant lesions: over time, the FDG uptake of a tumor becomes higher than that of inflammation, allowing true and false positive lesions to be identified more reliably. Generally, however, delayed-imaging multi-time-point PET acquisition must be matched with multi-time-point CT acquisition, to avoid attenuation correction errors and attenuation artifacts caused by the patient leaving the bed or changing position during the scan; this inevitably increases the patient's X-ray radiation dose.
Again, in some cases there may be significant artifacts in the attenuation image, resulting in errors in the attenuation correction of the PET image. For example, CT images of patients with metal substances in their bodies (such as cardiac pacemakers or metal braces) have highlighted metal artifacts, which make the surrounding tissues difficult to accurately distinguish, severely affect attenuation correction, and can result in false positive artifacts in PET images.
Furthermore, the scan range of PET is typically larger than the scan range of other modalities (such as CT or MRI). When scanning a patient with a large body weight, other modality imaging is likely to fail to provide a sufficiently large imaging range, resulting in truncation of the attenuation image. The application of such incomplete attenuation information in attenuation correction of PET images also produces attenuation artifacts.
Moreover, when PET is combined with certain other modalities, a satisfactory attenuation-correction image sometimes cannot be obtained, as in PET/MR imaging. Unlike CT, MR mainly images magnetic spins rather than the tissue density distribution, and therefore does not directly provide accurate information about tissue attenuation properties. Current algorithms for attenuation correction from MR images are complex to apply and of low precision, and easily produce attenuation artifacts. In addition, MR cannot image the scanning couch and the MR coils, which also affects subsequent attenuation correction.
Finally, the application conditions of other modalities restrict the application of PET imaging: for example, patients with dentures or cardiac pacemakers cannot be examined by MR, which limits PET/MR. In addition, CT imaging requires extremely strict radiation protection, and MR imaging requires strict magnetic-resonance shielding, so the protection requirements of multi-modality imaging are high and the technique is not easy to popularize.
To correct attenuation artifacts accurately and widen the application range of PET imaging, the key is whether attenuation information can be extracted directly from the PET acquisition data without depending on other-modality imaging. Existing patents describe correction information acquisition methods for attenuation-correcting a PET activity distribution image that extract the linear attenuation coefficient distribution image directly from the time-of-flight (TOF) information acquired by PET and use it for the attenuation correction of PET reconstruction; this fully guarantees strict matching between the PET radioactivity image and the attenuation image, effectively eliminates attenuation artifacts, and improves image quality. In practical applications, however, such an algorithm usually needs many iterations to approach the ideal value, which makes the iterative convergence time excessively long and usually requires high-end computing resources (such as a high-performance GPU), increasing cost. In addition, the iterative algorithm cannot guarantee convergence to the globally optimal result and may converge to a local optimum. To avoid this, many constraints and protections, together with tuning parameters, must be added to the iterative algorithm, which reduces its stability and robustness.
Therefore, in order to improve the operation speed and the result stability of the linear attenuation coefficient iterative algorithm, the invention provides a method for learning the mapping from the PET image to the linear attenuation coefficient image by using a deep learning network so as to further obtain an attenuation correction coefficient image.
In order to better understand the scheme of the embodiment of the invention, part of the words are explained:
a PET reconstructed image is the PET radioactivity distribution (image) x;
the terms attenuation correction coefficient, linear attenuation correction coefficient, linear attenuation coefficient image and linear attenuation correction coefficient image all refer to the same quantity; different embodiments merely use different wordings.
Example one
As shown in fig. 1, an embodiment of the present invention provides an attenuation correction coefficient image acquisition method. The execution subject of the method of this embodiment may be a control device/electronic device for PET image reconstruction; the control device may be integrated in the acquisition equipment of a PET system or be a separate computer processing device. The attenuation correction coefficient image acquisition method includes the following steps:
S10, for detection data used in medical image reconstruction, acquiring a first image without attenuation correction and a second image with approximate attenuation correction of the detection data, wherein the second image is a reconstructed image obtained by attenuation-correcting the first image based on empirical values of the linear attenuation correction coefficient of a specified region;
S20, inputting the first image and/or the second image into a pre-trained deep learning network, and acquiring the first attenuation correction coefficient output by the deep learning network;
S30, acquiring a modified attenuation correction coefficient for the medical image reconstruction based on a predetermined second linear attenuation coefficient of the scanning bed and the first attenuation correction coefficient;
wherein the modified attenuation correction coefficient is used as an initial value in the medical image reconstruction and/or as an elastic transformation coefficient that adjusts the linear attenuation correction coefficient at each iteration of the reconstruction, so as to shorten the convergence path of the reconstruction iteration;
and the pre-trained deep learning network is obtained by training a deep learning network constructed on the basis of reconstructed medical images and matched associated images.
The second linear attenuation coefficient of the scanning bed in this embodiment is a known value or a value obtained in a known manner in advance.
The deep learning network in this embodiment may be one of the following: CNN networks, UNet networks, GAN networks. Generally, the medical image is a PET image or a CT image.
In this embodiment, the linear attenuation correction coefficient image obtained from the trained deep learning network is used to adjust the iterative convergence process of the linear attenuation coefficient in real time, optimizing the convergence path so that the iteration converges to the globally optimal solution as quickly as possible. Compared with prior-art algorithms, this algorithm additionally exploits prior knowledge: it guarantees fast convergence of the iterative process while improving the stability, quantification and accuracy of the algorithm.
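A minimal sketch of steps S10 to S30 may help fix the data flow. Everything here is hypothetical scaffolding, not the patent's implementation: the "network" is a stand-in callable, and the bed map and image values are arbitrary.

```python
import numpy as np

def acquire_attenuation_prior(first_image, second_image, attenuation_net, mu_bed):
    """Sketch of S20/S30: map the non-attenuation-corrected (first) and
    approximately attenuation-corrected (second) images to a first
    attenuation coefficient image, then add the known scanning-bed
    attenuation (the second coefficient) to obtain the modified one."""
    mu_dl = attenuation_net(first_image, second_image)  # S20: network output
    return mu_dl + mu_bed                               # S30: modified coefficient

# Toy stand-ins: a "network" that averages its inputs, and a flat bed map.
net = lambda a, b: 0.5 * (a + b)
first = np.full((4, 4), 0.08)   # first image (values arbitrary)
second = np.full((4, 4), 0.10)  # second image
bed = np.full((4, 4), 0.01)     # known bed attenuation
mu_prior = acquire_attenuation_prior(first, second, net, bed)
```

The modified coefficient `mu_prior` is what the later embodiments use as the initial value and elastic-transformation reference during reconstruction.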
In a specific implementation process, when the medical image is a PET image, the process of training the deep learning network may include steps S00 and S01 (not shown in the figures).
S00, acquiring a training sample for training the deep learning network based on the reconstructed medical image and the matched associated image;
wherein each training sample comprises: a reconstructed PET image/simulated PET image, the approximate linear attenuation correction coefficient corresponding to that PET image, and another-modality image corresponding to that PET image; the other-modality image is used to obtain the true linear attenuation correction coefficient, which in turn is used to verify whether the trained deep learning network has converged;
s01, training the deep learning network based on the training samples to obtain the trained deep learning network, where the network parameter θ of the trained deep learning network minimizes the value of the loss function Φ.
For example, the approximate linear attenuation correction coefficient of each training sample may be input into the deep learning network to obtain an output, and the output is compared with the true linear attenuation correction coefficient obtained from other modal images in the training sample by means of the loss function Φ;
and/or inputting the reconstructed non-attenuated PET image of each training sample into a deep learning network to obtain output, and comparing the output with real linear attenuation correction coefficients obtained by other modal images in the training sample by means of a loss function phi;
and/or summing the reconstructed non-attenuated PET image and the approximate linear attenuation correction coefficient of each training sample, inputting the summed image into a deep learning network to obtain output, and comparing the output with the true linear attenuation correction coefficient obtained by other modal images in the training sample by means of a loss function phi;
the loss function Φ is one of the L1 norm, the L2 norm and the KL divergence, or a weighted sum of several loss functions, and is used to measure, during training, the similarity between each output of the deep learning network and the true linear attenuation correction coefficient it corresponds to. "Measure" here can be understood as follows: the value computed by the loss function represents the degree of similarity between the linear attenuation coefficient output by the network and the true linear attenuation coefficient. For example, the L1 norm is the sum of the absolute differences of the vector elements, while the L2 norm is the square root of the sum of squared differences; their convergence behaviour may differ in actual training, so one tries to find the loss function that works best.
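The loss options above can be sketched as follows; this is a plain numpy illustration of the formulas, with the weighting in `combined_loss` chosen arbitrarily, not a training-framework implementation:

```python
import numpy as np

def l1_loss(a, b):
    """Sum of absolute element differences."""
    return float(np.abs(a - b).sum())

def l2_loss(a, b):
    """Square root of the sum of squared element differences."""
    return float(np.sqrt(((a - b) ** 2).sum()))

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between images normalized to unit sum."""
    p = p / p.sum()
    q = q / q.sum()
    return float((p * np.log((p + eps) / (q + eps))).sum())

def combined_loss(mu_dl, mu_ct, w=(0.5, 0.5)):
    """One possible weighted sum of several losses, as the text allows."""
    return w[0] * l1_loss(mu_dl, mu_ct) + w[1] * l2_loss(mu_dl, mu_ct)
```

Here `mu_dl` would be the network output and `mu_ct` the true coefficient image from the other modality.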
In this embodiment, the other-modality image is a CT image, and the approximate linear attenuation correction coefficient is a linear attenuation correction coefficient image generated from the known linear attenuation coefficients of specified regions on the attenuation-uncorrected PET image. For example, the specified regions may include a soft tissue region, a fat region and a lung region; the linear attenuation coefficient of the soft tissue region is assigned 0.0975 cm⁻¹, the fat region 0.0864 cm⁻¹, and the lung region 0.0224 cm⁻¹.
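Given a region label map (obtained by whatever segmentation the implementation uses, which the text does not specify), the empirical assignment above is a simple lookup; the label encoding here is our own convention:

```python
import numpy as np

# Empirical linear attenuation coefficients from the text, in cm^-1.
MU_SOFT_TISSUE, MU_FAT, MU_LUNG = 0.0975, 0.0864, 0.0224

def approximate_mu(labels):
    """Build the approximate attenuation coefficient image from a region
    label map: 0 = background/air, 1 = soft tissue, 2 = fat, 3 = lung."""
    lut = np.array([0.0, MU_SOFT_TISSUE, MU_FAT, MU_LUNG])
    return lut[labels]

labels = np.array([[0, 1],
                   [2, 3]])
mu_approx = approximate_mu(labels)
```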
For example, μ_DL = G_θ(x̂⁰, x̂^μ̃), where x̂⁰ is the non-attenuation-corrected image, x̂^μ̃ is the approximately attenuation-corrected image, G_θ denotes the deep learning network with network parameter θ, μ_DL is the output of the deep learning network during training, and the loss function Φ is used to measure the similarity of μ_DL and μ_CT.
In this embodiment, the attenuation information extracted by deep learning is derived from the PET images themselves, so there is no mismatch between multi-modality images, and motion and truncation artifacts can be effectively avoided during convergence adjustment.
Example two
In addition, an embodiment of the present invention further provides a PET image reconstruction method. The execution subject of the method of this embodiment may be any control device/electronic device; the control device may be integrated in the acquisition device of a PET system or in a separate computer processing device. The PET image reconstruction method includes the following steps:
p01, acquiring a modified attenuation correction coefficient μ_prior′ for the detection data to be PET-reconstructed, using the method of claim 1 above;

p02, based on μ_prior′, alternately iterating the attenuation correction coefficient μ and the PET radioactivity distribution x with an alternating iteration strategy, to obtain an estimate of x that maximizes the objective function; this estimate serves as the reconstructed image of the detection data;

wherein the alternating iteration strategy comprises: first taking μ_prior′ as the initial value and solving x by maximizing the objective function; then, with the solved x held constant, solving μ by maximizing the objective function; then adjusting the solved μ using μ_prior′; then, with the adjusted μ held constant, maximizing the objective function to solve the next x; and so on alternately.
For example, the objective function is the log-likelihood function L(x, μ, y) (corresponding to equation (3) in the third embodiment):

L(x, μ, y) = Σ_i Σ_t { y_it ln( e^(−[lμ]_i) [Ax]_it + r_it ) − e^(−[lμ]_i) [Ax]_it }
the step P02 may include:

p021, taking μ_prior′ as the initial value, maximizing the constructed log-likelihood function L(x, μ, y) to obtain the radioactivity distribution x:

x^(n+1) = argmax_x L(x, μ^(n), y), with μ^(0) = μ_prior′

where n denotes the iteration number and the initial value of x is a set value;

p022, taking the iteratively solved x as a constant, maximizing the constructed log-likelihood function L(x, μ, y) to obtain the linear attenuation coefficient distribution:

μ^(n+0.5) = argmax_μ L(x^(n+1), μ, y)

p023, adjusting the solved μ using μ_prior′ to obtain the adjusted μ^(n+1), namely:

μ^(n+1) = μ_prior′ ∘ d^(n+0.5)

where d^(n+0.5) is the displacement field change vector. The elastic transformation operator is written as an open circle ∘ because the displacement field change vector acts on the image as an elastic transformation, not as a matrix multiplication; this is a common operator symbol in the industry.

p024, taking the adjusted μ^(n+1) as a constant, maximizing the constructed log-likelihood function L(x, μ, y) to obtain the next radioactivity distribution x:

x^(n+2) = argmax_x L(x, μ^(n+1), y)

and operating alternately in turn to obtain an estimate of x that maximizes the constructed log-likelihood function; this estimate of x is the reconstructed PET image.
In this embodiment, the non-attenuation-corrected image and the empirical-value attenuation-corrected image of the detection data are processed by the trained deep learning network to obtain a linear attenuation correction coefficient image, which improves the operation speed and the stability of the result in the iterative calculation of the radioactivity.
It should be noted that the deep learning network is used to learn the mapping from PET images to linear attenuation coefficient images, and the iterative convergence process of the linear attenuation coefficient is adjusted in real time through elastic transformation, which optimizes the convergence path and makes the iteration converge to the globally optimal solution as quickly as possible.
Example three
To better understand the methods described in the first and second embodiments, the two methods are illustrated below with concrete formulas. That is, this embodiment provides a method that first obtains a linear attenuation coefficient image by mapping from PET images through a deep learning network, and then uses the obtained linear attenuation coefficient image to adjust the convergence path of the linear attenuation coefficient iterative algorithm, ensuring that the linear attenuation coefficient image and the radioactivity distribution estimate converge quickly and stably. The following presentation combines the training process and the inference process. The specific steps are as follows:
the following steps 01 to 04 are conventional steps, and are not improved in this embodiment.
Step 01: The PET acquisition process can be modeled by the following equation:

ȳ_it = e^(−[lμ]_i) [Ax]_it + r_it    (1)

In formula (1), y = [y_1t, y_2t, …, y_it, …, y_NT]′ denotes the detected data, i.e. the detection data, and ȳ denotes the mean of the detection data. N denotes the size of the sinogram of the detection data, T denotes the size of the time-of-flight (TOF) discrete space, i denotes the variable index of the detection-data sinogram response line LOR (line of response), and t denotes the variable index of the TOF discrete space. The prime superscript denotes matrix transposition. x = [x_1, x_2, …, x_j, …, x_M]′ denotes the unknown radioactivity distribution image, M denotes the size of the radioactivity distribution image space, and j denotes the variable index of the radioactivity distribution image space, i.e. a point source at the corresponding spatial position. μ = [μ_1, μ_2, …, μ_k, …, μ_K]′ denotes the unknown linear attenuation coefficient image, K denotes the size of the linear attenuation coefficient image space, and k denotes the variable index of the linear attenuation coefficient image space. A = [A_ijt] is the system matrix, expressing in mathematical form the probability that a point source at spatial position j is detected in the PET system on response line LOR i with time of flight t; it reflects the physical characteristics of the system. l = [l_ik] is the linear attenuation length matrix, representing the track length of LOR i as it crosses the voxel at spatial position k. r = [r_1t, r_2t, …, r_it, …, r_NT]′ denotes the mean of random noise and scatter noise.
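Formula (1) can be evaluated directly for a toy geometry. The shapes below are our own illustrative choice (the system matrix flattened over (i, t)); the arithmetic is exactly equation (1):

```python
import numpy as np

def mean_data(A, x, l, mu, r):
    """Equation (1): ybar_it = exp(-[l mu]_i) * [A x]_it + r_it.
    Illustrative shapes: A is (N*T, M), x is (M,), l is (N, M),
    mu is (M,), r is (N, T)."""
    N = l.shape[0]
    att = np.exp(-(l @ mu))        # per-LOR attenuation factor, shape (N,)
    proj = (A @ x).reshape(N, -1)  # unattenuated TOF projections, shape (N, T)
    return att[:, None] * proj + r

# Toy system: 2 LORs, 1 TOF bin, 2 voxels; identity geometry.
A = np.eye(2)
l = np.eye(2)
x = np.array([2.0, 3.0])
mu = np.array([np.log(2.0), 0.0])  # first voxel halves its LOR
r = np.zeros((2, 1))
ybar = mean_data(A, x, l, mu, r)
```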
Step 02: The PET detection data obey a Poisson distribution with unknowns being the PET radioactivity distribution x and the linear attenuation coefficient distribution μ. The log-likelihood function of the detection data is expressed as:

L(x, μ, y) = Σ_i Σ_t [ y_it ln ȳ_it − ȳ_it − ln(y_it!) ]    (2)
Step 03: Substituting equation (1) into equation (2) and ignoring terms unrelated to the unknowns, the log-likelihood function can be written as:

L(x, μ, y) = Σ_i Σ_t { y_it ln( e^(−[lμ]_i) [Ax]_it + r_it ) − e^(−[lμ]_i) [Ax]_it }    (3)
the above equation (3) is a log-likelihood function and is also an objective function as described below.
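The Poisson log-likelihood can be sketched numerically. For simplicity the helper below works on the mean data ȳ directly and keeps the full y·ln(ȳ) − ȳ form, which differs from equation (3) only by additive constants and so has the same maximizer:

```python
import numpy as np

def log_likelihood(y, ybar):
    """Poisson log-likelihood up to constants: sum of y*ln(ybar) - ybar.
    The constant ln(y!) term is dropped, as in the text."""
    return float((y * np.log(ybar) - ybar).sum())
```

As expected for a Poisson model, the function is maximized when the predicted mean ȳ equals the measured data y.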
Step 04: PET image reconstruction is performed without attenuation correction, i.e. assuming that no rays are attenuated, so the linear attenuation coefficient over the full imaging field of view is zero: μ_k = 0, k = 1, …, K. The log-likelihood function (3) then becomes:

L(x, μ=0, y) = Σ_i Σ_t { y_it ln( [Ax]_it + r_it ) − [Ax]_it }    (4)
In this case the gamma rays are assumed to undergo no attenuation and hence no scattering during detection, and r = [r_1t, r_2t, …, r_NT]′ in formula (1) represents the mean of random noise only.
The unknown image can then be reconstructed without attenuation correction by the maximum-likelihood method, yielding the non-attenuation-corrected image x̂⁰:

x̂⁰ = F(x̃⁰), x̃⁰ = argmax_x { L(x, μ=0, y) − βR(x) }    (5)

where F denotes a post-filter function, which is a known term; L(x, μ=0, y) denotes the log-likelihood function without attenuation correction; R(x) is a scalar penalty function representing prior knowledge of x; β is a weighting factor balancing the importance of the log-likelihood function and the penalty function; and x̃⁰ is the intermediate reconstructed image, i.e. the unfiltered radioactivity distribution.
If β is chosen to be 0, the solution of equation (5) reduces to the existing maximum-likelihood expectation-maximization algorithm MLEM (Maximum Likelihood Expectation Maximization) or its accelerated variant, the ordered-subset expectation-maximization algorithm OSEM (Ordered Subset Expectation Maximization).
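For concreteness, here is the classic MLEM update that the β = 0 case reduces to, sketched with dense numpy arrays; this is the textbook form, not code from the patent:

```python
import numpy as np

def mlem_update(x, A, y, r):
    """One MLEM iteration for the model y ~ Poisson(Ax + r):
    x <- x * A'( y / (Ax + r) ) / (A' 1)."""
    ybar = A @ x + r
    sens = A.T @ np.ones_like(y)      # sensitivity image A' 1
    return x * (A.T @ (y / ybar)) / sens
```

A useful sanity check of the update is its fixed-point property: if the current estimate already explains the data exactly (y = Ax + r), the multiplicative correction factor is 1 everywhere and x is unchanged.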
Note that the non-attenuation-corrected image x̂⁰ has significant attenuation artifacts: the patient edge appears too bright, the internal tissue appears too dark, lung uptake appears too high, and so on. Although the radioactive uptake values of the different tissues in x̂⁰ are not accurate, the structural information of the different tissues is still retained. For example, although the patient edge is too bright, the edge extent can still be determined; although the lung uptake contrast is incorrect, the lung can still be delineated from the image. A linear attenuation image reflecting the tissue distribution inside the patient can therefore, in principle, be derived from the non-attenuation-corrected PET image.
Step 05: The non-attenuation-corrected image x̂⁰ still retains the structural information of the different tissues, so an approximate linear attenuation coefficient image μ̃ can be obtained from the known linear attenuation coefficients of specified regions. For example, for the different tissue regions, empirical values are assigned to the linear attenuation coefficients of the regions to obtain the approximate linear attenuation coefficient image guess μ̃: e.g. the soft tissue region is assigned 0.0975 cm⁻¹, the fat region 0.0864 cm⁻¹, and the lung region 0.0224 cm⁻¹. Alternatively, the whole-body contour may be delineated on the non-attenuation-corrected image x̂⁰ as one complete region and uniformly assigned the linear attenuation coefficient of soft tissue, generating the approximate linear attenuation coefficient image μ̃. This embodiment does not limit the specific method; any method that acquires μ̃ by assigning linear attenuation coefficients to specified regions may be used.
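The uniform soft-tissue variant of Step 05 can be sketched in one line. The threshold-based delineation is our assumption; the text does not fix how the body contour is obtained:

```python
import numpy as np

def soft_tissue_guess(pet_nac, threshold):
    """Delineate the body on the non-attenuation-corrected image by a simple
    intensity threshold (the threshold value is a hypothetical choice) and
    fill the region uniformly with the soft-tissue coefficient 0.0975 cm^-1."""
    return np.where(pet_nac > threshold, 0.0975, 0.0)
```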
Step 06: PET image reconstruction is performed using the approximate linear attenuation coefficient μ̃ for attenuation correction; the log-likelihood function (3) becomes:

L(x, μ̃, y) = Σ_i Σ_t { y_it ln( e^(−[lμ̃]_i) [Ax]_it + r_it ) − e^(−[lμ̃]_i) [Ax]_it }    (6)

The unknown image, attenuation-corrected with the approximate linear attenuation coefficient, can be reconstructed by the maximum-likelihood method, yielding the approximate attenuation-corrected image x̂^μ̃:

x̂^μ̃ = F(x̃^μ̃), x̃^μ̃ = argmax_x { L(x, μ̃, y) − βR(x) }    (7)

where F denotes a post-filter function, which is a known term; x̃^μ̃ is the intermediate reconstructed image, i.e. the unfiltered radioactivity distribution; L(x, μ̃, y) is the log-likelihood function of the approximate attenuation correction; R(x) is a scalar penalty function representing prior knowledge of x; and β is a weighting factor balancing the importance of the log-likelihood function and the penalty function. If β is chosen to be 0, the solution of equation (7) reduces to conventional MLEM or OSEM.
Since the attenuation-corrected image obtained in step 06 is an approximate result, the approximate attenuation-corrected image x̂^μ̃ still contains attenuation artifacts. Used together with the non-attenuation-corrected image x̂⁰, however, the two images corroborate each other and yield more information. For example, compared with the non-attenuation-corrected image x̂⁰, the lung contours in the approximate attenuation-corrected image x̂^μ̃ are unclear, but the liver is segmented more accurately.
Step 07: In order to fully extract the features of the PET images, the mapping from PET image to linear attenuation coefficient image is implemented with a pre-constructed deep learning network G.
For example, when training the deep learning network G, taking PET/CT as an example, the PET images in the training samples may comprise the non-attenuation-corrected PET image x̂⁰ and the approximate attenuation-corrected PET image x̂^μ̃. The two PET images are used as a two-channel input, the linear attenuation coefficient image μ_DL generated by the PET image mapping is taken as the output and compared with the true linear attenuation image μ_CT obtained by CT scanning, and the network parameter θ is optimized during training to minimize the loss function Φ, so that the network can finally translate the PET images into an accurate linear attenuation coefficient image, namely:

θ* = argmin_θ Φ( G_θ(x̂⁰, x̂^μ̃), μ_CT )    (8)
the training samples in the training data may be from simulation or from actual acquisition. All PET images of the training data need to be preprocessed, and linear attenuation coefficient images in each sample are matched with the PET images through preprocessing screening, so that truncation or motion artifacts do not exist. That is to say, the data input into G during training is a PET reconstructed image, the data is output as a linear attenuation coefficient image mapped by the PET image, the learning target is a linear attenuation coefficient image obtained from a CT image actually acquired, and network training is to optimize network parameters and ensure that network output is similar to an actual result.
It is understood that the deep learning network G may be a CNN network, a UNet network, a GAN network, or another network. In general, the network input may also be only the non-attenuation-corrected PET image x̂⁰, or only the approximate attenuation-corrected image x̂^μ̃, or the sum of x̂⁰ and x̂^μ̃. The loss function Φ used to measure the similarity of μ_DL and μ_CT may be the L1 norm, the L2 norm, the KL divergence, etc., or a weighted sum of several loss functions.
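The input options just listed can be assembled with a small helper; the function name, `mode` flag and channel layout are our own conventions for illustration:

```python
import numpy as np

def make_network_input(x_nac, x_aac, mode="two_channel"):
    """Assemble the deep learning network input from the two PET images,
    covering the options in the text: two-channel stack, either image
    alone, or their sum."""
    if mode == "two_channel":
        return np.stack([x_nac, x_aac], axis=0)   # shape (2, H, W)
    if mode == "nac":
        return x_nac[np.newaxis]                  # non-AC image only
    if mode == "aac":
        return x_aac[np.newaxis]                  # approximate-AC image only
    if mode == "sum":
        return (x_nac + x_aac)[np.newaxis]        # sum of the two images
    raise ValueError(f"unknown mode: {mode}")
```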
Step 08: Because PET performs only functional imaging and the scanning bed is not visible in the image, the attenuation information of the bed needs to be added to the linear attenuation image output by the network.
Since the shape of the bed is known, the bed can only move rigidly, and the system is mechanically well adjusted before scanning, only translations in three directions need to be considered; the linear attenuation coefficient distribution of the bed can be expressed as μ_bed(Δx, Δy, Δz), where Δx, Δy, Δz denote the movement values in the three directions and μ_bed(0, 0, 0) is the initial position of the bed.

The bed does not move horizontally during scanning, so the horizontal movement Δx can be obtained by mechanical measurement or a calibration procedure and is set as a constant during the scan; the vertical movement Δy can be determined by reading the bed height value; the axial movement Δz can be determined by reading the axial position of the bed. Without relying on external signals, μ_bed can also be obtained by using a deep learning network to learn the relative position of the patient and the bed in the PET image.
Thus the linear attenuation coefficient image with the bed attenuation added is μ_prior = μ_DL + μ_bed(Δx, Δy, Δz), which can be used as prior knowledge in the subsequent iteration of the linear attenuation coefficient.
The deep learning network is trained to map the non-attenuation-corrected PET image x̂⁰ and the approximate attenuation-corrected image x̂^μ̃ to a linear attenuation coefficient map, and the bed compensation in step 08 is also a linear attenuation coefficient map. Since the linear attenuation coefficient and the CT value follow a bilinear transformation relation, satisfying the requirement of one-to-one correspondence, and the dynamic range of CT values is larger, steps 9 and 10 can equally train on and compensate CT values, with the bed-compensated CT image finally converted uniformly into linear attenuation coefficients.
Step 09: The trained deep learning network maps the PET detection data to be processed and outputs μ_prior.
Owing to different sites, different devices and different scanning parameters, PET image quality varies greatly, so in practical application it is difficult to guarantee that the current PET image has the same quality as the training PET images, which greatly affects the applicability of a deep learning network.
Solving the generalization problem of PET data would require providing image data for all kinds of situations for network training; such a data requirement is usually unrealistic, makes the construction of a deep learning network very difficult, and imposes heavy training-time and memory demands. Affected by the applicability of the network, μ_prior may not fully match the acquired data. Meanwhile, although the shape of the linear attenuation coefficient image iteratively solved from the actually acquired data is consistent with the real linear attenuation coefficient image, it is limited by the accuracy (e.g. spatial resolution, temporal resolution) and noise of the acquired data, and its image quality is clearly inferior to μ_prior.

Therefore this embodiment may not apply μ_prior directly in PET reconstruction. Instead, the linear attenuation coefficient is solved by data iteration, μ_prior is then elastically transformed toward the linear attenuation coefficient generated by the iteration, and the result is used in the next iteration. On the one hand, the linear attenuation coefficient generated by elastically transforming μ_prior has an accurate distribution shape and conforms to the actually acquired data, which in effect reduces the generalization requirement on the training data; on the other hand, the linear attenuation coefficient image generated by the elastic transformation has better quality, and adjusting the iterative process of the linear attenuation coefficient through this elastic transformation is more tolerant of noise, can greatly improve the operation speed and quantitative accuracy of the original algorithm, and ensures that the linear attenuation coefficient converges quickly to the globally optimal linear attenuation image.
Step 10: Since the log-likelihood function in formula (3) is a very complex function of the unknowns x and μ, an analytic solution is hard to obtain, and an iterative algorithm is required to approach the optimal solution step by step. For the unknown PET radioactivity distribution x, the linear attenuation coefficient distribution μ is kept constant and the log-likelihood function is maximized to obtain the iteratively updated radioactivity distribution:

x^(n+1) = argmax_x L(x, μ^(n), y)    (9)

where n denotes the current iteration number. The iterative initial value of the PET radioactivity distribution x may be chosen as a normal distribution over the whole space.
Step 11: Keeping the PET radioactivity distribution x constant, the log-likelihood function is maximized with respect to the unknown attenuation coefficient distribution μ to obtain the iteratively updated linear attenuation coefficient distribution:

μ^(n+0.5) = argmax_μ L(x^(n+1), μ, y)    (10)

Here the superscript 0.5 indicates that the linear attenuation coefficient currently obtained from the PET data alone cannot be used directly for attenuation correction in the next iteration's reconstruction of the radioactivity distribution x; it must first be readjusted by elastic transformation using the linear attenuation coefficient prior result μ_prior. The iterative initial value of the attenuation coefficient distribution μ is μ_prior.
Step 12: The three-dimensional elastic deformation of the image is modeled with an optical flow model; the deformation parameter is the per-voxel displacement field change vector d = (d^u, d^v, d^w), where the superscripts u, v, w denote the three directions of motion in space. A three-dimensional image registration algorithm based on the demons algorithm is adopted, whose purpose is to elastically deform the linear attenuation coefficient prior μ_prior toward the linear attenuation coefficient iteration result μ^(n+0.5), ensuring the similarity of the two, and to solve for the corresponding displacement field change vector d^(n+0.5), namely:

d^(n+0.5) = argmin_d ‖ μ_prior ∘ d − μ^(n+0.5) ‖₂²    (11)

where μ_prior ∘ d^(n+0.5) denotes elastically transforming the linear attenuation coefficient prior result μ_prior according to the displacement field change vector d^(n+0.5). The updated linear attenuation coefficient image μ^(n+1) is:

μ^(n+1) = μ_prior ∘ d^(n+0.5)    (12)

The updated linear attenuation coefficient image μ^(n+1) is used in the reconstruction of the PET radioactivity distribution x in the next iteration.
Step 13: In the implementation of the invention, the linear attenuation coefficient distribution μ is kept constant and the objective function (formula (9)) is maximized to solve the PET radioactivity distribution x; then x is kept constant and the objective function (formula (10)) is maximized to solve the linear attenuation coefficient distribution μ; finally, the linear attenuation coefficient prior result is elastically transformed according to the linear attenuation coefficient iteration result (formulas (11) and (12)) to adjust the linear attenuation coefficient iteration process. These operations are performed alternately, continuously refining the attenuation correction to approach the real attenuation condition, until the estimates of x and μ required to maximize the objective function are obtained.
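Steps 10 to 13 can be condensed into a control-flow skeleton. The three solver callbacks below are caller-supplied stand-ins (the patent does not prescribe concrete solvers); only the order of operations follows the text:

```python
import numpy as np

def mlaa_with_prior(y, mu_prior, x0, n_iter, argmax_x, argmax_mu, register):
    """Skeleton of steps 10-13: alternate the x-update (eq. (9)) and the
    mu-update (eq. (10)); after each mu-update, elastically warp mu_prior
    toward the raw iterate (eqs. (11)-(12)) and use the warped prior in
    the next x-update."""
    x, mu = x0, mu_prior
    for _ in range(n_iter):
        x = argmax_x(y, mu)              # eq. (9): activity update
        mu_half = argmax_mu(y, x)        # eq. (10): raw attenuation iterate
        d = register(mu_prior, mu_half)  # eq. (11): displacement field
        mu = d(mu_prior)                 # eq. (12): elastically warped prior
    return x, mu
```

With trivial callbacks (an identity registration and toy "solvers") the loop is easy to trace, which is all this sketch is meant to show.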
As shown in fig. 3, fig. 3 shows the predefined multi-modality detection system coordinate system used when presenting the above objective function (e.g. formula (3)). Fig. 4(a) is a PET image without attenuation correction, fig. 4(b) is a PET image with approximate attenuation correction, fig. 4(c) is a PET image attenuation-corrected using the second or third embodiment, and fig. 4(d) is the linear attenuation coefficient distribution obtained with the learning network, used as prior knowledge to adjust the iterative process of the linear attenuation coefficient.
Compared with the traditional approach of performing attenuation correction with images from other modalities, in the scheme of this embodiment the attenuation correction information used during reconstruction comes from the PET data themselves. When the PET multi-modality images are mismatched due to respiration, heartbeat or patient movement, the images can still be attenuation-corrected and image artifacts eliminated; if the attenuation images obtained from other modalities contain artifacts (for example, obvious metal artifacts in the CT images of PET/CT patients with cardiac pacemakers or metal braces in their bodies), accurate attenuation correction can still be carried out; applying the algorithm avoids the problem of attenuation image truncation, making it convenient for doctors to scan heavy patients; because the attenuation correction iteration result is adjusted during iteration by the linear attenuation coefficient prior image generated by the deep learning network, the quantification and tissue distribution are more accurate than those of the original iterative algorithm, greatly improving the stability and iteration speed of the attenuation correction algorithm; the deep learning network mapping is performed in the image domain, the processing speed is high, and the additional time is negligible relative to the reconstruction process, ensuring the feasibility of the algorithm; applying the linear attenuation coefficient generated by the deep learning network in the attenuation coefficient iterative algorithm solves the generalization problem of PET acquired data, simplifies the difficulty of the learning network, and improves the stability of the network; PET acquisition does not depend on other modalities, so the method can be applied to standalone PET scanning, reducing scanning environment requirements and expanding application scenarios.
In addition, an embodiment of the present invention further provides a PET system, which includes: a memory and a processor; the memory stores computer program instructions, and the processor executes the computer program instructions stored in the memory, specifically executes the PET image reconstruction method, the linear attenuation correction coefficient image acquisition method, the deep learning network training method, and the like.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third, etc. are used for convenience only and do not denote any order. These words are to be understood as part of the name of the component.
Furthermore, it should be noted that in the description of the present specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, such terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they are not contradictory.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the claims should be construed to include preferred embodiments and all changes and modifications that fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention should also include such modifications and variations.

Claims (11)

1. A method for acquiring an attenuation correction coefficient image is characterized by comprising the following steps:
s10, aiming at detection data used for medical image reconstruction, acquiring a first image without attenuation correction and a second image approximate to attenuation correction of the detection data, wherein the second image is a reconstructed image obtained by performing attenuation correction on the first image based on a linear attenuation correction coefficient empirical value of a specified region;
s20, inputting the first image and/or the second image into a pre-trained deep learning network, and acquiring a first linear attenuation correction coefficient output by the deep learning network;
s30, acquiring a modified attenuation correction coefficient for medical image reconstruction based on a predetermined linear attenuation coefficient II of the scanning bed and the linear attenuation correction coefficient I;
the corrected attenuation correction coefficient is used as an initial value in the medical image reconstruction and/or as an elastic transformation coefficient for adjusting a linear attenuation correction coefficient of each iteration in the medical image reconstruction iteration process, and is used for accelerating a convergence path in the medical image reconstruction iteration process;
the pre-trained deep learning network is obtained by training a deep learning network constructed based on the reconstructed medical image and the matched associated image.
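The pipeline of steps S10-S30 can be sketched as follows. This is a minimal illustration, not the patented implementation: the empirical coefficient value, the thresholding rule, and the `network` callable are all assumptions standing in for the trained deep learning network and the empirical value of the specified region.

```python
import numpy as np

# Illustrative empirical value (roughly water/soft tissue at 511 keV); an assumption.
MU_WATER_511KEV = 0.096  # cm^-1

def approximate_mu_map(non_ac_image, body_fraction=0.05):
    """Sketch of preparing the 'second image' input of S10: segment the body
    region from the non-attenuation-corrected image and assign it the
    empirical linear attenuation correction coefficient."""
    body_mask = non_ac_image > body_fraction * non_ac_image.max()
    return np.where(body_mask, MU_WATER_511KEV, 0.0)

def acquire_corrected_mu(non_ac_image, network, bed_mu_map):
    """S10-S30 sketch: the deep learning network maps the image(s) to a first
    linear attenuation correction coefficient (S20); the predetermined bed
    mu-map (second coefficient) is combined with it to give the modified
    attenuation correction coefficient (S30)."""
    mu_approx = approximate_mu_map(non_ac_image)
    mu_first = network(non_ac_image, mu_approx)  # S20: network mapping
    return mu_first + bed_mu_map                 # S30: add bed attenuation
```

With an identity stub in place of the network, the output is simply the approximate body mu-map plus the bed mu-map, which illustrates how the two coefficients are combined.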
2. The method of claim 1, wherein the deep learning network is one of: a CNN network, a U-Net network, or a GAN network;
and the medical image is a PET image or a CT image.
3. The method according to claim 1, wherein, when the medical image is a PET image, the method further comprises, before S10:
S00, acquiring training samples for training the deep learning network based on reconstructed medical images and matched associated images;
wherein each training sample comprises: a reconstructed PET image or a simulated PET image, an approximate linear attenuation correction coefficient corresponding to the PET image, and an other-modality image corresponding to the PET image; the other-modality image is used to obtain a true linear attenuation correction coefficient, which in turn is used to verify whether the trained deep learning network has converged;
S01, training the deep learning network based on the training samples to obtain the trained deep learning network;
wherein the network parameter θ of the trained deep learning network minimizes the value of the loss function Φ used to train the deep learning network.
4. The method of claim 3, wherein training the deep learning network based on the training samples comprises:
inputting the approximate linear attenuation correction coefficient of each training sample into the deep learning network to obtain an output, and comparing the output, by means of the loss function Φ, with the true linear attenuation correction coefficient obtained from the other-modality image in the training sample;
and/or,
inputting the reconstructed non-attenuation-corrected PET image of each training sample into the deep learning network to obtain an output, and comparing the output, by means of the loss function Φ, with the true linear attenuation correction coefficient obtained from the other-modality image in the training sample;
and/or,
summing the reconstructed non-attenuation-corrected PET image of each training sample with the approximate linear attenuation correction coefficient, inputting the summed image into the deep learning network to obtain an output, and comparing the output, by means of the loss function Φ, with the true linear attenuation correction coefficient obtained from the other-modality image in the training sample;
wherein the loss function Φ is one or more of the L1 norm, the L2 norm, and the KL divergence, and is used to measure, during training, the similarity between each output of the deep learning network and the true linear attenuation correction coefficient to which that output corresponds.
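A loss function Φ built from the three terms named in claim 4 can be sketched as below. The weighting scheme and the normalization used for the KL term are illustrative assumptions; the claim only requires that one or more of the three terms be used.

```python
import numpy as np

def loss_phi(mu_out, mu_true, w_l1=1.0, w_l2=0.0, w_kl=0.0, eps=1e-8):
    """Loss Phi (sketch): weighted combination of the L1 norm, the L2 norm,
    and the KL divergence between the network output mu_out and the true
    linear attenuation correction coefficient mu_true."""
    l1 = np.abs(mu_out - mu_true).sum()
    l2 = np.square(mu_out - mu_true).sum()
    # For KL, treat the (non-negative) coefficient images as distributions.
    p = mu_true / (mu_true.sum() + eps)
    q = mu_out / (mu_out.sum() + eps)
    kl = np.sum(p * np.log((p + eps) / (q + eps)))
    return w_l1 * l1 + w_l2 * l2 + w_kl * kl
```

The loss is zero when the output matches the true coefficients and grows with their discrepancy, which is the similarity measure the training loop minimizes over θ.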
5. The method according to claim 3 or 4, wherein the other-modality image is a CT image, and the approximate linear attenuation correction coefficient is a linear attenuation correction coefficient image generated based on a known linear attenuation correction coefficient of a specified region on the non-attenuation-corrected PET image.
6. A PET image reconstruction method, comprising:
p01, acquiring a modified attenuation correction factor μ for the detection data to be PET reconstructed using the method of claim 1 above prior ';
P02 based on prior ' performing alternate iteration on the attenuation correction coefficient mu and the PET radioactivity distribution x by adopting an alternate iteration strategy to obtain an estimated value of x meeting the requirement of a maximized objective function, and using the estimated value as a reconstructed image of the detection data;
wherein the alternate iteration strategy comprises: firstly, mu prior ' As an initial value, solving x by maximizing an objective function, solving mu by maximizing the objective function by taking the solved x as a constant, and utilizing mu prior ' adjusting the solved mu, taking the adjusted mu as a constant, and maximizing an objective function to solve the next x, and alternately operating.
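The alternating strategy of P02 can be sketched as a generic loop. The `update_x`, `update_mu`, and `adjust` callables are placeholders standing in for the actual objective-maximization and elastic-adjustment steps, which the patent does not spell out in code.

```python
import numpy as np

def alternate_iteration(mu_prior, update_x, update_mu, adjust, n_iter=10):
    """Alternating iteration sketch: mu_prior' seeds the loop; x and mu are
    solved in turn by maximizing the objective, and each solved mu is pulled
    toward the prior before the next x-update."""
    mu = mu_prior.copy()
    x = None
    for _ in range(n_iter):
        x = update_x(mu, x)        # maximize L(x, mu, y) over x, mu fixed
        mu = update_mu(x, mu)      # maximize L(x, mu, y) over mu, x fixed
        mu = adjust(mu, mu_prior)  # adjust the solved mu using mu_prior'
    return x, mu
```

Seeding with μ_prior′ and re-anchoring each solved μ to it is what the description credits with stabilizing and accelerating convergence of the attenuation-correction iteration.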
7. The method of claim 6, wherein the objective function is a log-likelihood function L(x, μ, y), and wherein P02 comprises:
taking μ_prior′ as the initial value, maximizing the constructed log-likelihood function L(x, μ, y) to solve for the radioactivity distribution x (update equation shown only as an image in the source), where n denotes the iteration number and the initial value of x is a preset value;
taking the iteratively solved x as a constant, maximizing the constructed log-likelihood function L(x, μ, y) to solve for the linear attenuation coefficient distribution (update equation shown only as an image in the source);
adjusting the solved μ using μ_prior′ to obtain the adjusted μ (adjustment equation shown only as an image in the source), where d^(n+0.5) is a displacement-field variation vector;
taking the adjusted μ as a constant, maximizing the constructed log-likelihood function L(x, μ, y) to solve for the radioactivity distribution x (update equation shown only as an image in the source);
and operating alternately in sequence to obtain an estimate of x that maximizes the constructed log-likelihood function.
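The claim's update equations are reproduced only as images in the source, so they are not restated here. As background, a log-likelihood L(x, μ, y) of the kind named in claim 7 is conventionally the Poisson log-likelihood of attenuated PET data; the sketch below is that standard model, not the patent's exact formulation, and the system matrix `P` and attenuation-path matrix `L` are assumptions.

```python
import numpy as np

def expected_counts(P, L, x, mu):
    """ybar_i = exp(-(L mu)_i) * (P x)_i: forward projection of the activity
    x, attenuated along each line of response (P: system matrix, L: matrix of
    intersection lengths used to line-integrate mu)."""
    return np.exp(-(L @ mu)) * (P @ x)

def log_likelihood(y, P, L, x, mu, eps=1e-12):
    """Poisson log-likelihood L(x, mu, y) of measured counts y, up to a
    constant term that does not depend on x or mu."""
    ybar = expected_counts(P, L, x, mu)
    return float(np.sum(y * np.log(ybar + eps) - ybar))
```

The alternating scheme of claim 7 maximizes this quantity over x with μ fixed, then over μ with x fixed, interleaved with the prior-based adjustment.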
8. A training method of a deep learning network, wherein the trained deep learning network is used for accelerating iterative convergence of attenuation correction coefficients, the training method comprising:
acquiring training samples for training the deep learning network based on reconstructed medical images and matched associated images;
wherein each training sample comprises: a reconstructed PET image or a simulated PET image, an approximate linear attenuation correction coefficient corresponding to the PET image, and an other-modality image corresponding to the PET image; the other-modality image is used to obtain a true linear attenuation correction coefficient, which in turn is used to verify whether the trained deep learning network has converged;
training the deep learning network based on the training samples to obtain the trained deep learning network;
wherein the network parameter θ of the trained deep learning network minimizes the value of the loss function Φ used to train the deep learning network.
9. The method of claim 8, wherein training the deep learning network based on the training samples comprises:
inputting the approximate linear attenuation correction coefficient of each training sample into the deep learning network to obtain an output, and comparing the output, by means of the loss function Φ, with the true linear attenuation correction coefficient μ_CT obtained from the other-modality image in the training sample;
and/or,
inputting the reconstructed non-attenuation-corrected PET image of each training sample into the deep learning network to obtain an output, and comparing the output, by means of the loss function Φ, with the true linear attenuation correction coefficient μ_CT obtained from the other-modality image in the training sample;
and/or,
summing the reconstructed non-attenuation-corrected PET image of each training sample with the approximate linear attenuation correction coefficient, inputting the summed image into the deep learning network to obtain an output, and comparing the output, by means of the loss function Φ, with the true linear attenuation correction coefficient μ_CT obtained from the other-modality image in the training sample;
wherein the loss function Φ is one or more of the L1 norm, the L2 norm, and the KL divergence, and is used to measure, during training, the similarity between each output of the deep learning network and the true linear attenuation correction coefficient to which that output corresponds.
10. The method according to claim 8 or 9, wherein the other-modality image is a CT image, and the approximate linear attenuation correction coefficient is a linear attenuation correction coefficient image generated based on a known linear attenuation coefficient of a specified region on the non-attenuation-corrected PET image;
(the defining equations appear only as images in the source and are not reproduced here)
wherein the symbols in those equations denote, respectively, the image without attenuation correction, the approximately attenuation-corrected image, and an intermediate parameter; θ is the network parameter of the deep learning network, and μ_DL is the output of the deep learning network during training.
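The training objective common to claims 8-10 is θ* = argmin_θ Φ(μ_DL(θ), μ_CT). A toy sketch under stated assumptions: a one-parameter scaling "network" μ_DL = θ · μ_in stands in for the CNN/U-Net/GAN, and Φ is taken as the L2 norm, minimized by plain gradient descent.

```python
import numpy as np

def train_theta(mu_in, mu_ct, lr=0.1, n_steps=200):
    """theta* = argmin_theta Phi(mu_DL(theta), mu_CT) (sketch).
    mu_DL(theta) = theta * mu_in is a stand-in for the deep learning network;
    Phi is the L2 norm, so the gradient is 2 * sum((mu_DL - mu_CT) * mu_in)."""
    theta = 0.0
    for _ in range(n_steps):
        mu_dl = theta * mu_in                           # network output
        grad = 2.0 * np.sum((mu_dl - mu_ct) * mu_in)    # d Phi / d theta
        theta -= lr * grad / mu_in.size                 # gradient-descent step
    return theta
```

For this toy model the minimizer has the closed form θ* = ⟨μ_in, μ_CT⟩ / ⟨μ_in, μ_in⟩; the loop converges to it, mirroring how the real network parameter θ is driven to minimize Φ against the CT-derived true coefficients.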
11. A PET system, comprising: a memory and a processor; the memory has stored therein computer program instructions, and the processor executes the computer program instructions stored in the memory, in particular to perform the method of any of the preceding claims 1 to 10.
CN202211291273.7A 2022-10-17 2022-10-17 Attenuation correction coefficient image acquisition method and PET image reconstruction method Pending CN115439572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211291273.7A CN115439572A (en) 2022-10-17 2022-10-17 Attenuation correction coefficient image acquisition method and PET image reconstruction method


Publications (1)

Publication Number Publication Date
CN115439572A true CN115439572A (en) 2022-12-06

Family

ID=84252571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211291273.7A Pending CN115439572A (en) 2022-10-17 2022-10-17 Attenuation correction coefficient image acquisition method and PET image reconstruction method

Country Status (1)

Country Link
CN (1) CN115439572A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116502701A (en) * 2023-06-29 2023-07-28 合肥锐世数字科技有限公司 Attenuation correction method and device, training method and device, imaging method and system
CN116502701B (en) * 2023-06-29 2023-10-20 合肥锐世数字科技有限公司 Attenuation correction method and device, training method and device, imaging method and system

Similar Documents

Publication Publication Date Title
CN106456098B (en) The generation method and system of decay pattern
CN106491151B (en) PET image acquisition method and system
US8774481B2 (en) Atlas-assisted synthetic computed tomography using deformable image registration
EP2399238B1 (en) Functional imaging
US12073492B2 (en) Method and system for generating attenuation map from SPECT emission data
CN109961419B (en) Correction information acquisition method for attenuation correction of PET activity distribution image
CN109978966B (en) Correction information acquisition method for attenuation correction of PET activity distribution image
US8588488B2 (en) Group-wise image registration based on motion model
US20100284598A1 (en) Image registration alignment metric
CN107133549A (en) ECT motion gates signal acquiring method and ECT image rebuilding methods
CN114387364A (en) Linear attenuation coefficient acquisition method and reconstruction method for PET image reconstruction
CN111127521A (en) System and method for generating and tracking the shape of an object
CN110458779B (en) Method for acquiring correction information for attenuation correction of PET images of respiration or heart
CN115439572A (en) Attenuation correction coefficient image acquisition method and PET image reconstruction method
CN112529977B (en) PET image reconstruction method and system
CN110428384B (en) Method for acquiring correction information for attenuation correction of PET images of respiration or heart
CN115908610A (en) Method for obtaining attenuation correction coefficient image based on single-mode PET image
US10417793B2 (en) System and method for data-consistency preparation and image reconstruction
WO2022036633A1 (en) Systems and methods for image registration
CN116172599A (en) PET radioactivity distribution acquisition method and PET system
US12100075B2 (en) Image reconstruction by modeling image formation as one or more neural networks
JP7459243B2 (en) Image reconstruction by modeling image formation as one or more neural networks
CN115830167A (en) PET image scattering correction method and PET system
CN117788625A (en) Scattering correction method and system for PET image
CN117788624A (en) Scattering correction method and system for PET image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination