CN114463459A - Partial volume correction method, device, equipment and medium for PET image - Google Patents
- Publication number
- CN114463459A (application CN202210078683.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- mri
- training
- pet
- partial volume
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Nuclear Medicine (AREA)
Abstract
The invention discloses a partial volume correction method, apparatus, device and medium for PET (positron emission tomography) images. A network model is trained on a training PET image and a training MRI image of the same target object to obtain a partial volume correction model; a first PET image and a first MRI image of the same target object are then input into the partial volume correction model to obtain a partial-volume-corrected target PET image. Segmentation of the training MRI image is avoided, the information of the training MRI image is fully retained by the MRI reconstruction sub-network, the final target PET image output by the partial volume correction model depends less heavily on the first MRI image, and the quality of the target PET image is improved.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a partial volume correction method, apparatus, device and medium for PET images.
Background
Positron Emission Tomography (PET) is an important imaging tool for molecular-level clinical diagnosis and research. Because the spatial resolution of its detectors is limited, the partial volume effect is more pronounced than in MRI/CT imaging. The partial volume effect blurs images and distorts lesions, degrading image quality and affecting clinical diagnosis and quantitative evaluation.
To address this problem, partial volume correction at the region-of-interest level has been proposed as a post-reconstruction step for PET. This approach attempts to restore the true radioactivity of each region under the assumption that activity is uniform within a region. The regions of interest are usually obtained by segmenting an anatomical image registered to the PET image, so the method demands high registration and segmentation accuracy between the PET and anatomical images; registration and segmentation errors degrade image quality, and the uniform-activity assumption adds further limitations and complexity.
Disclosure of Invention
In view of the above, in order to solve at least one of the above technical problems, an object of the present invention is to provide a method, an apparatus, a device and a medium for partial volume correction of PET images.
The embodiment of the invention adopts the technical scheme that:
a method of partial volume correction of a PET image, comprising:
acquiring training data; the training data comprises a training PET image and a training MRI image of the same target object;
inputting the training PET image and the training MRI image into a network model for training to obtain a partial volume correction model; the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and in the training process, the MRI reconstruction sub-network performs first feature extraction processing on the training MRI image to obtain MRI information; the PET reconstruction sub-network performs second feature extraction processing on the training PET image and performs superposition decoding processing according to the MRI information and a second feature extraction processing result;
and inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a target PET image after partial volume correction.
Further, the acquiring training data includes:
acquiring an original PET image of a target object through a PET device, and acquiring an original MRI image of the target object through an MRI device;
and carrying out first registration according to the original PET image and the original MRI image to obtain the training PET image and the training MRI image.
Further, the inputting the training PET image and the training MRI image into a network model for training to obtain a partial volume correction model includes:
performing first decoding processing on the MRI information through the MRI reconstruction sub-network to obtain a reconstructed MRI image;
performing second feature extraction processing on the training PET image, and performing superposition decoding processing according to the MRI information and a second feature extraction processing result to obtain a corrected PET image;
calculating a first loss value from the reconstructed MRI image, the training MRI image, and a first loss function of the MRI reconstruction sub-network;
calculating a second loss value from the corrected PET image, the training PET image, and a second loss function of the PET reconstruction sub-network;
and training the network model according to the first loss value and the second loss value to obtain a partial volume correction model.
Further, the performing a first feature extraction process on the training MRI image to obtain MRI information includes:
performing first convolution processing on the training MRI image to obtain MRI information; the first convolution processing comprises a plurality of times of first convolution sub-processing, and the MRI information comprises all first convolution sub-processing results.
Further, the performing a second feature extraction process on the training PET image, and performing a superposition decoding process according to the MRI information and a second feature extraction process result to obtain a corrected PET image includes:
performing second convolution processing on the training PET image;
performing convolution transformation on the MRI information, and performing superposition decoding: and carrying out second decoding processing on the second convolution processing result and superposing the convolution transformation result in the second decoding processing process to obtain a corrected PET image.
Further, the training the network model according to the first loss value and the second loss value to obtain a partial volume correction model includes:
calculating a product of the first loss value and a weight coefficient;
determining an optimization parameter according to the sum of the product and the second loss value;
and adjusting network parameters of the PET reconstruction sub-network and the MRI reconstruction sub-network according to the optimization parameters until the optimization parameters are less than or equal to an optimization threshold value, and obtaining a partial volume correction model.
Further, the inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a partial volume corrected target PET image includes:
performing second registration according to the first PET image and the first MRI image to obtain a second PET image and a second MRI image;
and inputting the second PET image and the second MRI image into the partial volume correction model to obtain the target PET image after partial volume correction.
The embodiment of the invention also provides a partial volume correction device for PET images, which comprises:
the acquisition module is used for acquiring training data; the training data comprises a training PET image and a training MRI image of the same target object;
the training module is used for inputting the training PET images and the training MRI images into a network model for training to obtain a partial volume correction model; the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and in the training process, the MRI reconstruction sub-network performs first feature extraction processing on the training MRI image to obtain MRI information; the PET reconstruction sub-network performs second feature extraction processing on the training PET image and performs superposition decoding processing according to the MRI information and a second feature extraction processing result;
and the correction module is used for inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a target PET image after partial volume correction.
An embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method.
Embodiments of the present invention also provide a computer-readable storage medium, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method.
The invention has the beneficial effects that: inputting a training PET image and a training MRI image which have the same target object into a network model for training to obtain a partial volume correction model; the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and in the training process, the MRI reconstruction sub-network performs first feature extraction processing on the training MRI image to obtain MRI information; the PET reconstruction sub-network carries out second feature extraction processing on the training PET image, carries out superposition decoding processing according to the MRI information and a second feature extraction processing result, and inputs a first PET image and a first MRI image with the same target object into the partial volume correction model to obtain a target PET image after partial volume correction; the segmentation of the training MRI image is avoided, the information of the training MRI image is fully reserved through the MRI reconstruction sub-network, the excessive dependence on the first MRI image is reduced for the final target PET image output by the partial volume correction model, and the quality of the target PET image is improved.
Drawings
FIG. 1 is a flow chart illustrating the steps of the partial volume correction method for PET images according to the present invention;
FIG. 2 is a schematic diagram of a network model according to an embodiment of the present invention;
FIG. 3(a) is a graph comparing experimental results with and without introduction of an MRI image, and FIG. 3(b) is a graph quantifying results with and without introduction of an MRI image;
FIG. 4(a) is a comparison graph of experimental results with and without a deep learning network when an MRI image is introduced, and FIG. 4(b) is a graph of quantification results with and without a deep learning network when an MRI image is introduced.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1, an embodiment of the present invention provides a method for correcting a partial volume of a PET image, including steps S100-S300:
and S100, acquiring training data.
In the embodiment of the present invention, the training data includes a training PET image and a training MRI image of the same target object, that is, both images are obtained by imaging the same target object, for example, but not limited to, a certain body part of the target object. PET refers to Positron Emission Tomography, and MRI refers to Magnetic Resonance Imaging, a medical imaging modality based on the magnetic resonance phenomenon that provides high-quality anatomical images for clinical use. Regarding partial volume correction: the value of each pixel in a PET image represents the average activity of the corresponding unit of tissue, and the phenomenon in which this value cannot faithfully reflect the true activity of that tissue is called the partial volume effect. The partial volume effect blurs images and distorts lesions, degrading image quality and affecting clinical diagnosis and quantitative evaluation, so a correction method is needed to reduce its influence.
Optionally, step S100 comprises steps S110-S120:
s110, acquiring an original PET image of the target object through a PET device, and acquiring an original MRI image of the target object through an MRI device.
And S120, performing first registration according to the original PET image and the original MRI image to obtain a training PET image and a training MRI image.
Optionally, an original PET image of the target object is acquired by a PET device and an original MRI image of the target object by an MRI device; the system resolution of the detector in the PET imaging device may be acquired at the same time. The original PET image and the original MRI image are then subjected to the first registration to obtain the training PET image corresponding to the original PET image and the training MRI image corresponding to the original MRI image. It should be noted that the first registration includes, but is not limited to, rigid registration, and that the training PET image after the first registration may simply be the original PET image.
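As an illustration of this registration step, below is a minimal sketch using SimpleITK rigid (Euler) registration with mutual information; the file names, metric, and optimizer settings are assumptions for illustration and are not specified by the patent:

```python
import SimpleITK as sitk

# Load the original PET and MRI images of the same target object (paths are placeholders).
pet = sitk.ReadImage("pet_raw.nii.gz", sitk.sitkFloat32)
mri = sitk.ReadImage("mri_raw.nii.gz", sitk.sitkFloat32)

# Rigid (Euler) registration of the MRI (moving) onto the PET (fixed) grid.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(
    pet, mri, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(pet, mri)

# The training MRI image: the original MRI resampled into the PET space.
train_mri = sitk.Resample(mri, pet, transform, sitk.sitkLinear, 0.0, mri.GetPixelID())
train_pet = pet   # the training PET image may simply be the original PET image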
And S200, inputting the training PET image and the training MRI image into a network model for training to obtain a partial volume correction model.
In the embodiment of the invention, the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and a training MRI image is input into the MRI reconstruction sub-network in the training process to be subjected to first feature extraction processing to obtain MRI information and first decoding processing to obtain a reconstructed MRI image; and inputting the training PET image into a PET reconstruction sub-network for second feature extraction processing, and performing superposition decoding processing according to the MRI information and a second feature extraction processing result to obtain a corrected PET image. The MRI reconstruction sub-network has a corresponding network parameter (denoted as a first network parameter) and a first loss function, and the PET reconstruction sub-network has a corresponding network parameter (denoted as a second network parameter) and a second loss function.
Optionally, the first network parameter and the second network parameter include, but are not limited to, parameters related to data processing (or preprocessing), parameters related to the training process, and network-related parameters. For example, data processing (or preprocessing) parameters include, but are not limited to, data enrichment parameters (enriching the data), feature normalization and scaling parameters, and batch normalization (BN) parameters; training-related parameters include, but are not limited to, training momentum, learning rate, decay function, weight initialization, and regularization methods; network-related parameters include, but are not limited to, the choice of classifier, the number of neurons, the number of filters, and the number of network layers.
As shown in fig. 2, optionally, the MRI reconstruction sub-network 100 has a first encoder 101 with a plurality of first convolution layers 102 connected in series. In the training process of step S200, performing the first feature extraction processing on the training MRI image 103 to obtain the MRI information includes performing the first convolution processing on the training MRI image. It should be noted that the first encoder 101 performs the first convolution processing; each first convolution layer 102 performs one first convolution sub-processing, each subsequent layer further processing the result of the previous layer, and the MRI information includes every first convolution sub-processing result.
Optionally, step S200 includes steps S210-S250, where the execution order of S210 and S220 is not limited, and the execution order of S230 and S240 is not limited:
s210, carrying out first decoding processing on the MRI information through an MRI reconstruction sub-network to obtain a reconstructed MRI image.
As shown in fig. 2, optionally, a first convolution sub-processing result corresponding to the first convolution layer 102 of the last layer is subjected to a first decoding process by a first decoder 104 to obtain a reconstructed MRI image 105.
It should be noted that the MRI reconstruction sub-network must keep the reconstructed MRI image as consistent as possible with the training MRI image, so that the encoding process retains the MRI information to the greatest extent. The training MRI image contains a great deal of information and knowledge about the imaging target, which is learned in an encoding manner: the first encoder of the MRI reconstruction sub-network extracts a depth representation of the training MRI image that reflects its intrinsic characteristics, and the first decoder reconstructs the MRI image from this depth representation.
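To make the structure concrete, below is a minimal PyTorch sketch of such an MRI reconstruction sub-network. The four-layer encoder depth follows fig. 2, while the channel widths, kernel sizes, and strides are illustrative assumptions not fixed by the patent:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.ReLU(inplace=True))

class MRIReconSubNetwork(nn.Module):
    """First encoder (four cascaded first convolution layers) whose per-layer
    outputs form the MRI information, plus a first decoder that reconstructs
    the MRI image from the deepest features."""
    def __init__(self):
        super().__init__()
        chs, strides = (16, 32, 64, 128), (1, 2, 2, 2)
        ins = (1,) + chs[:-1]
        self.encoder = nn.ModuleList(
            [conv_block(i, o, s) for i, o, s in zip(ins, chs, strides)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1))

    def forward(self, mri):
        mri_info, x = [], mri
        for layer in self.encoder:
            x = layer(x)                  # one first convolution sub-processing
            mri_info.append(x)            # keep every sub-processing result
        return self.decoder(x), mri_info  # reconstructed MRI image, MRI information
```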
And S220, performing second feature extraction processing on the training PET image, and performing superposition decoding processing according to the MRI information and the second feature extraction processing result to obtain a corrected PET image.
Optionally, step S220 includes steps S2201-S2202:
and S2201, performing second convolution processing on the training PET image.
S2202, convolution conversion is carried out on the MRI information, and superposition decoding processing is carried out: and performing second decoding processing on the second convolution processing result and superposing the convolution transformation result in the second decoding processing process to obtain a corrected PET image.
As shown in fig. 2, in the embodiment of the present invention, a convolution transformation (not shown) may be performed once for each result of the first convolution sub-processing; the PET reconstruction sub-network 200 has a second encoder 201 and a second decoder 202, the second encoder 201 performs a second convolution process on the training PET image, and may include a plurality of second convolution layers; the second decoder 202 may be provided with a plurality of decoding layers 203 connected in series, each decoding layer 203 is used for performing the second decoding process, and the decoding layer 203 of the subsequent layer performs the further second decoding process on the result of the second decoding process of the previous layer.
As shown in fig. 2, specifically, taking four first convolution layers 102 and three decoding layers 203 as an example, the superposition decoding process superposes, layer by layer during the second decoding process, the convolution transformation results corresponding to the first convolution sub-processing results. The second encoder 201 performs the second convolution processing on the training PET image 204; the first decoding layer 203 applies the second decoding processing to the second convolution result and the convolution transformation result corresponding to the third first convolution layer 102; the second decoding layer 203 then applies it to that output and the convolution transformation result corresponding to the second first convolution layer 102; and the third decoding layer 203 applies it to that output and the convolution transformation result corresponding to the first first convolution layer 102, producing the superposition decoding result, i.e., the corrected PET image 205.
It will be appreciated that the PET reconstruction sub-network incorporates the acquired multi-level, multi-scale MRI information cross-modally into its reconstruction. Specifically, the MRI information produced by the first encoder is superposed layer by layer onto the decoding process of the second decoder; the superposition can be implemented by a convolution transformation, so that the PET reconstruction sub-network exploits multi-level, multi-scale MRI information.
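A matching sketch of the PET reconstruction sub-network, reusing conv_block from the sketch above, is given below. The use of 1×1 convolutions for the convolution transformation and of concatenation for the superposition are assumptions of this sketch, since the patent does not fix these details:

```python
class DecodingLayer(nn.Module):
    """One decoding layer 203: upsample, superpose one level of transformed
    MRI features, then convolve (concatenation is an assumed fusion choice)."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        self.fuse = conv_block(out_ch + skip_ch, out_ch, stride=1)

    def forward(self, x, mri_feat):
        x = self.up(x)
        return self.fuse(torch.cat([x, mri_feat], dim=1))

class PETReconSubNetwork(nn.Module):
    """Second encoder plus three decoding layers that superpose the
    convolution-transformed MRI information level by level."""
    def __init__(self):
        super().__init__()
        chs, strides = (16, 32, 64, 128), (1, 2, 2, 2)
        ins = (1,) + chs[:-1]
        self.encoder = nn.Sequential(
            *[conv_block(i, o, s) for i, o, s in zip(ins, chs, strides)])
        # Convolution transformation of the 3rd, 2nd and 1st MRI levels.
        self.transform = nn.ModuleList([nn.Conv2d(c, c, 1) for c in (64, 32, 16)])
        self.decode = nn.ModuleList([
            DecodingLayer(128, 64, 64),   # fuses 3rd-level MRI features
            DecodingLayer(64, 32, 32),    # fuses 2nd-level MRI features
            DecodingLayer(32, 16, 16)])   # fuses 1st-level MRI features
        self.head = nn.Conv2d(16, 1, kernel_size=3, padding=1)

    def forward(self, pet, mri_info):
        x = self.encoder(pet)             # second convolution processing
        for t, dec, feat in zip(self.transform, self.decode, mri_info[-2::-1]):
            x = dec(x, t(feat))           # superposition decoding, layer by layer
        return self.head(x)               # corrected PET image
```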
S230, calculating a first loss value according to the reconstructed MRI image, the training MRI image and the first loss function of the MRI reconstruction sub-network.
Specifically, the first loss function of the MRI reconstruction sub-network is

$$L_{\mathrm{MRI}} = X_1^{T} D X_1, \qquad X_1 = f(z;\theta) - z$$

wherein $f(z;\theta)$ is the output of the MRI reconstruction sub-network (namely the reconstructed MRI image), $z$ is the training MRI image, $\theta$ is the weight parameter of the MRI reconstruction sub-network, $X_1^{T}$ is the transpose of $X_1$, and $D$ is a Gaussian weighting matrix. The first loss value can be calculated by substituting each quantity into the first loss function.
S240, calculating a second loss value according to the corrected PET image, the training PET image and a second loss function of the PET reconstruction sub-network.
The second loss function of the PET reconstruction sub-network is

$$L_{\mathrm{PET}} = X_2^{T} D X_2, \qquad X_2 = g - h \otimes y$$

wherein $y$ is the output of the PET reconstruction sub-network (i.e., the corrected PET image), $h$ is the system matrix, $\otimes$ denotes convolution, $g$ is the training PET image, $X_2^{T}$ is the transpose of $X_2$, and $D$ is a Gaussian weighting matrix. The second loss value can be calculated by substituting each quantity into the second loss function.
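The two loss functions can be sketched as follows; realizing the Gaussian weighting matrix D as a Gaussian smoothing convolution and the system matrix h as a point-spread-function (PSF) convolution are assumptions of this sketch:

```python
import torch.nn.functional as F

def weighted_quadratic(residual, gauss_kernel):
    """X^T D X: here D weights the residual via a Gaussian convolution."""
    weighted = F.conv2d(residual, gauss_kernel, padding=gauss_kernel.shape[-1] // 2)
    return (residual * weighted).sum()

def mri_loss(recon_mri, train_mri, gauss_kernel):
    # X1 = f(z; theta) - z
    return weighted_quadratic(recon_mri - train_mri, gauss_kernel)

def pet_loss(corrected_pet, train_pet, system_psf, gauss_kernel):
    # X2 = g - h (x) y, with the system matrix h applied as a PSF convolution
    reblurred = F.conv2d(corrected_pet, system_psf, padding=system_psf.shape[-1] // 2)
    return weighted_quadratic(train_pet - reblurred, gauss_kernel)
```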
And S250, training the network model according to the first loss value and the second loss value to obtain a partial volume correction model.
Optionally, step S250 includes steps S2501-S2503:
s2501, calculating the product of the first loss value and the weight coefficient.
S2502, determining an optimization parameter according to the sum of the product and the second loss value.
In the embodiment of the invention, the optimization parameter is calculated as

$$\theta^{*} = L_{\mathrm{PET}} + \beta\, L_{\mathrm{MRI}}$$

wherein $\theta^{*}$ is the optimization parameter (the combined training objective) and $\beta$ is a weight coefficient that can be set as required.
S2503, network parameters of the PET reconstruction sub-network and the MRI reconstruction sub-network are adjusted according to the optimization parameters until the optimization parameters are smaller than or equal to the optimization threshold value, and a partial volume correction model is obtained.
Optionally, the optimization threshold can be set as required. During training, the network parameters of the PET reconstruction sub-network and of the MRI reconstruction sub-network (including but not limited to the system matrix and the weight parameters) are continuously adjusted until the optimization parameter is less than or equal to the optimization threshold; the network model determined by the most recently adjusted network parameters is then taken as the partial volume correction model. Optionally, the training process may be carried out with the ADMM algorithm to obtain the final optimized parameters.
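A sketch of one training step on the combined objective follows, reusing the sub-networks and losses sketched above; plain Adam is used here instead of the ADMM scheme mentioned above, and the beta value and stopping threshold are placeholders:

```python
import torch

mri_net, pet_net = MRIReconSubNetwork(), PETReconSubNetwork()
optimizer = torch.optim.Adam(
    list(mri_net.parameters()) + list(pet_net.parameters()), lr=1e-4)

def train_step(train_pet, train_mri, system_psf, gauss_kernel, beta=0.1):
    """One step on theta* = L_PET + beta * L_MRI (steps S2501-S2503)."""
    optimizer.zero_grad()
    recon_mri, mri_info = mri_net(train_mri)   # first decoding processing
    corrected = pet_net(train_pet, mri_info)   # superposition decoding
    objective = pet_loss(corrected, train_pet, system_psf, gauss_kernel) \
              + beta * mri_loss(recon_mri, train_mri, gauss_kernel)
    objective.backward()
    optimizer.step()
    return objective.item()

# Train until the optimization parameter reaches the optimization threshold:
# while train_step(pet_batch, mri_batch, psf, gk) > optimization_threshold:
#     pass
```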
S300, inputting the first PET image and the first MRI image with the same target object into a partial volume correction model to obtain a partial volume corrected target PET image.
Optionally, step S300 includes steps S310-S320:
and S310, carrying out second registration according to the first PET image and the first MRI image to obtain a second PET image and a second MRI image.
And S320, inputting the second PET image and the second MRI image into a partial volume correction model to obtain a target PET image after partial volume correction.
Optionally, the same target object is imaged by the PET device and the MRI device to obtain a first PET image and a first MRI image, which are then subjected to second registration (for example, in the same manner as the first registration) to obtain a second PET image and a second MRI image. The second PET image and the second MRI image are input into the partial volume correction model to obtain the partial-volume-corrected target PET image, that is, the result of performing partial volume correction on the second PET image. It should be noted that, in some embodiments, the first PET image and the first MRI image may already be registered in advance, in which case they are input into the partial volume correction model directly.
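At inference time, applying the trained model to a registered PET/MRI pair reduces to one forward pass through both sub-networks; a sketch reusing the networks from the training sketch (input tensors shaped [1, 1, H, W] for illustration):

```python
@torch.no_grad()
def correct_partial_volume(first_pet, first_mri):
    """Partial volume correction of a registered PET/MRI pair."""
    mri_net.eval()
    pet_net.eval()
    _, mri_info = mri_net(first_mri)     # only the MRI information is used here
    return pet_net(first_pet, mri_info)  # the partial-volume-corrected target PET

# target_pet = correct_partial_volume(second_pet, second_mri)
```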
According to the embodiment of the invention, the deep-level feature information of MRI and PET is applied to the partial volume correction of PET through a deep-learning network model. No segmentation of the training MRI image is needed, the information of the training MRI image is fully retained by the MRI reconstruction sub-network, and applying this information within the network model's partial volume correction avoids excessive dependence of the generated corrected PET image on the training MRI image. Meanwhile, pre-training the deep network model in advance greatly improves the algorithm speed, and a weight prior is introduced to constrain the network weights, improving the efficiency of network optimization.
As shown in fig. 3(a), PET simulation data of a real human body are used. In fig. 3(a), A is the ideal PET image of the real target object, of size 256 × 181; B is the MRI image, segmented into 6 different brain regions. For the simulated PET data, after attenuation correction, homogenization correction, and reduction of the photon count in the projection domain, the uncorrected PET image was reconstructed with 240 MLEM iterations, as shown in C of fig. 3(a). D in fig. 3(a) is the correction result of the NLM method without using the MRI image, and E in fig. 3(a) is the PET image obtained by the partial volume correction method of the embodiment of the present invention (the MNPVC method for short). For NLM, the search window was set to 21 × 21 and the image-block size to 7 × 7. The two methods compared are thus the non-local means algorithm without MRI information (the NLM method) and the method of the invention (the MNPVC method), with image results compared at the same noise level. As can be seen from fig. 3(a), the target PET image obtained by the MNPVC method is visually closer to the true phantom image and gives a better result.
As shown by the quantification results in fig. 3(b), compared with the NLM method, the MNPVC method of the present invention has lower noise and bias; and compared with a method that does not use MRI anatomical information (NO-PVC), the MNPVC method better suppresses PET image noise, better realizes PET partial volume correction, and improves PET image quality.
As shown in fig. 4(a), the data are derived from real human PET data. In fig. 4(a), A is the ideal PET image of the real target, of size 256 × 181; B is the introduced MRI image, segmented into 6 different brain regions; C is the MRI segmentation map used by the MRI-based PET partial volume correction methods. For the simulated PET data, after attenuation correction, homogenization correction, and reduction of the photon count in the projection domain, the uncorrected PET image was reconstructed after 240 MLEM iterations, as shown by D in fig. 4(a). In fig. 4(a), E is the PET image obtained by applying the RBV algorithm to D; F is the PET image obtained by applying the sBowsher algorithm to D; and G is the PET image obtained by applying the MNPVC method of the present invention to D. It should be noted that the RBV algorithm extends the GTM algorithm to the pixel level and is a typical partial volume correction algorithm using MRI information, while the sBowsher algorithm (the symmetric Bowsher algorithm) is a classical MRI-based PET partial volume correction algorithm that requires no MRI segmentation. These two methods were chosen for comparison, and the PET image obtained by the MNPVC method of the invention shows the better result.
As shown in fig. 4(b), the MNPVC method of the present invention outperforms both the RBV method, which requires a segmented MRI image, and the sBowsher algorithm, which does not, in both visual effect and quantitative metrics. Table 1 compares the runtime of four different partial volume correction algorithms; overall, the MNPVC method of the present invention takes the least time and, considering image quality as well, achieves the best PET partial volume correction.
TABLE 1
Method | Time (minutes)
---|---
RBV | 3.35
NLM | 14.48
sBowsher | 15.12
MNPVC | 2.63
The embodiment of the invention also provides a partial volume correction device for PET images, which comprises:
the acquisition module is used for acquiring training data; the training data comprises a training PET image and a training MRI image of the same object;
the training module is used for inputting the training PET images and the training MRI images into the network model for training to obtain a partial volume correction model; the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and in the training process, the MRI reconstruction sub-network performs first feature extraction processing on a training MRI image to obtain MRI information; the PET reconstruction sub-network performs second feature extraction processing on the training PET image and performs superposition decoding processing according to the MRI information and a second feature extraction processing result;
and the correction module is used for inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a partial volume corrected target PET image.
The contents of the above method embodiments are all applicable to this apparatus embodiment; the functions specifically implemented and the beneficial effects achieved by this apparatus embodiment are the same as those of the above method embodiments.
An embodiment of the present invention further provides an electronic device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for correcting partial volume of a PET image according to the foregoing embodiment. The electronic equipment of the embodiment of the invention comprises but is not limited to any intelligent terminal such as a mobile phone, a tablet computer, a vehicle-mounted computer and the like.
The contents of the above method embodiments are all applicable to this electronic device embodiment; the functions specifically implemented and the beneficial effects achieved by this embodiment are the same as those of the above method embodiments.
Embodiments of the present invention also provide a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for partial volume correction of PET images of the foregoing embodiments.
Embodiments of the present invention also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the partial volume correction method of the PET image of the foregoing embodiment.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the preceding and following objects. "At least one of the following" or similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; "a and b"; "a and c"; "b and c"; or "a and b and c", where a, b, and c may each be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. A method for partial volume correction of a PET image, comprising:
acquiring training data; the training data comprises a training PET image and a training MRI image of the same target object;
inputting the training PET image and the training MRI image into a network model for training to obtain a partial volume correction model; the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and in the training process, the MRI reconstruction sub-network performs first feature extraction processing on the training MRI image to obtain MRI information; the PET reconstruction sub-network performs second feature extraction processing on the training PET image and performs superposition decoding processing according to the MRI information and a second feature extraction processing result;
and inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a target PET image after partial volume correction.
2. The partial volume correction method for PET images according to claim 1, characterized in that: the acquiring training data comprises:
acquiring an original PET image of a target object through a PET device, and acquiring an original MRI image of the target object through an MRI device;
and carrying out first registration according to the original PET image and the original MRI image to obtain the training PET image and the training MRI image.
3. The partial volume correction method for PET images according to any one of claims 1 to 2, characterized in that: inputting the training PET image and the training MRI image into a network model for training to obtain a partial volume correction model, wherein the method comprises the following steps:
performing first decoding processing on the MRI information through the MRI reconstruction sub-network to obtain a reconstructed MRI image;
performing second feature extraction processing on the training PET image, and performing superposition decoding processing according to the MRI information and a second feature extraction processing result to obtain a corrected PET image;
calculating a first loss value from the reconstructed MRI image, the training MRI image, and a first loss function of the MRI reconstruction sub-network;
calculating a second loss value from the corrected PET image, the training PET image, and a second loss function of the PET reconstruction sub-network;
and training the network model according to the first loss value and the second loss value to obtain a partial volume correction model.
4. The partial volume correction method for PET images according to claim 3, characterized in that: the performing a first feature extraction process on the training MRI image to obtain MRI information includes:
performing first convolution processing on the training MRI image to obtain MRI information; the first convolution processing comprises a plurality of times of first convolution sub-processing, and the MRI information comprises all first convolution sub-processing results.
5. The partial volume correction method for PET images according to claim 3, characterized in that: the second feature extraction processing is performed on the training PET image, and superposition decoding processing is performed according to the MRI information and a second feature extraction processing result to obtain a corrected PET image, including:
performing second convolution processing on the training PET image;
performing convolution transformation on the MRI information, and performing superposition decoding: and carrying out second decoding processing on the second convolution processing result and superposing the convolution transformation result in the second decoding processing process to obtain a corrected PET image.
6. The partial volume correction method for PET images according to claim 3, characterized in that: the training the network model according to the first loss value and the second loss value to obtain a partial volume correction model, including:
calculating a product of the first loss value and a weight coefficient;
determining an optimization parameter according to the sum of the product and the second loss value;
and adjusting network parameters of the PET reconstruction sub-network and the MRI reconstruction sub-network according to the optimization parameters until the optimization parameters are less than or equal to an optimization threshold value, and obtaining a partial volume correction model.
7. The partial volume correction method for PET images according to claim 1, characterized in that: the inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a partial volume corrected target PET image includes:
performing second registration according to the first PET image and the first MRI image to obtain a second PET image and a second MRI image;
and inputting the second PET image and the second MRI image into the partial volume correction model to obtain the target PET image after partial volume correction.
8. A partial volume correction apparatus for PET images, comprising:
the acquisition module is used for acquiring training data; the training data comprises a training PET image and a training MRI image of the same target object;
the training module is used for inputting the training PET images and the training MRI images into a network model for training to obtain a partial volume correction model; the network model comprises a PET reconstruction sub-network and an MRI reconstruction sub-network, and in the training process, the MRI reconstruction sub-network performs first feature extraction processing on the training MRI image to obtain MRI information; the PET reconstruction sub-network performs second feature extraction processing on the training PET image and performs superposition decoding processing according to the MRI information and a second feature extraction processing result;
and the correction module is used for inputting the first PET image and the first MRI image with the same target object into the partial volume correction model to obtain a target PET image after partial volume correction.
9. An electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the method according to any one of claims 1-7.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210078683.7A CN114463459B (en) | 2022-01-24 | 2022-01-24 | Partial volume correction method, device, equipment and medium for PET image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210078683.7A CN114463459B (en) | 2022-01-24 | 2022-01-24 | Partial volume correction method, device, equipment and medium for PET image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114463459A true CN114463459A (en) | 2022-05-10 |
CN114463459B CN114463459B (en) | 2022-09-27 |
Family
ID=81410996
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210078683.7A Active CN114463459B (en) | 2022-01-24 | 2022-01-24 | Partial volume correction method, device, equipment and medium for PET image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463459B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090110256A1 (en) * | 2007-10-30 | 2009-04-30 | General Electric Company | System and method for image-based attenuation correction of pet/spect images |
US20090219289A1 (en) * | 2008-02-28 | 2009-09-03 | International Business Machines Corporation | Fast three-dimensional visualization of object volumes without image reconstruction by direct display of acquired sensor data |
CN103942763A (en) * | 2014-05-03 | 2014-07-23 | 南方医科大学 | Voxel level PET (positron emission tomography) image partial volume correction method based on MR (magnetic resonance) information guide |
CN111161182A (en) * | 2019-12-27 | 2020-05-15 | 南方医科大学 | MR structure information constrained non-local mean guided PET image partial volume correction method |
CN112216371A (en) * | 2020-11-20 | 2021-01-12 | 中国科学院大学 | Multi-path multi-scale parallel coding and decoding network image segmentation method, system and medium |
CN112700380A (en) * | 2020-12-22 | 2021-04-23 | 颜建华 | PET image volume correction method based on MR gradient information and deep learning |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116843679A (en) * | 2023-08-28 | 2023-10-03 | 南方医科大学 | PET image partial volume correction method based on depth image prior frame |
CN116843679B (en) * | 2023-08-28 | 2023-12-26 | 南方医科大学 | PET image partial volume correction method based on depth image prior frame |
Also Published As
Publication number | Publication date |
---|---|
CN114463459B (en) | 2022-09-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||