CN108805947B - PET data processing method and device and PET imaging system - Google Patents

PET data processing method and device and PET imaging system

Info

Publication number
CN108805947B
CN108805947B CN201810494961.0A
Authority
CN
China
Prior art keywords
pet
image
weight
determining
tissue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810494961.0A
Other languages
Chinese (zh)
Other versions
CN108805947A (en)
Inventor
朱闻韬
冯涛
李弘棣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201810494961.0A priority Critical patent/CN108805947B/en
Publication of CN108805947A publication Critical patent/CN108805947A/en
Application granted granted Critical
Publication of CN108805947B publication Critical patent/CN108805947B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]

Abstract

One aspect of the present disclosure relates to a PET data processing method including acquiring PET data of a body part of a subject; reconstructing the PET data to acquire a PET image of the body part and acquiring an attenuation image corresponding to the PET image during reconstruction of the PET data; carrying out segmentation processing on the attenuation image to obtain a segmentation result; determining a specific tissue from the segmentation result; calculating the weight of the specific tissue; and determining a SUL value based on the calculated weight of the specific tissue and the PET image. The disclosure also relates to a corresponding device.

Description

PET data processing method and device and PET imaging system
Technical Field
The present disclosure relates generally to the calculation of the Standard Uptake Value (SUV), and more particularly to the calculation of a SUL (SUV normalized to lean body mass) value based on PET data information.
Background
SUV is the most widely used semi-quantitative indicator in PET imaging. It is the ratio of the radioactivity concentration derived from the PET image to the injected radioactivity averaged over the body weight. The SUV is generally calculated as follows:
SUV(t) = C(t) / ( Dose / Weight )    (1)
wherein SUV(t) represents the SUV value at time t; C(t) is the corrected PET image pixel intensity value; Dose represents the injected dose; and Weight represents the body weight.
The formula for this index uses the total body weight as input, without regard to the fat fraction. However, under equivalent physiological conditions, PET quantification with this SUV formula yields an overall higher SUV for obese patients than for patients of normal weight, so the general formula is not well suited to obese patients. Solving the problem that SUV quantification is not accurate enough for obese patients is important for keeping the population quantification of people in the same physiological state consistent in PET imaging.
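By way of a minimal Python sketch of formula (1) only (the function and variable names, units, and the numeric example are illustrative assumptions, not part of the original text), the SUV can be computed voxel-wise from a corrected activity image:

```python
import numpy as np

def suv_map(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Voxel-wise SUV per formula (1): SUV = C / (Dose / Weight).

    activity_bq_per_ml : corrected PET image C(t), in Bq/mL
    injected_dose_bq   : injected dose, in Bq
    body_weight_g      : total body weight, in g (1 g of tissue taken as ~1 mL)
    """
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# example: voxels of 8 and 12 kBq/mL, 370 MBq injected, 70 kg patient
suv = suv_map(np.array([8000.0, 12000.0]), 370e6, 70e3)  # ~[1.51, 2.27]
```

The weight-dependent denominator is exactly what the embodiments below replace with a lean ("muscle") weight.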
Disclosure of Invention
One aspect of the present disclosure relates to a PET data processing method, including: acquiring PET data of a body part of a subject; reconstructing the PET data to acquire a PET image of the body part and acquiring an attenuation image corresponding to the PET image during reconstruction of the PET data; carrying out segmentation processing on the attenuation image to obtain a segmentation result; determining a specific tissue from the segmentation result; calculating the weight of the specific tissue; and determining a SUL value based on the calculated weight of the specific tissue and the PET image.
According to another exemplary but non-limiting embodiment, the method further comprises, before determining the weight of the specific tissue from the result of the segmentation, correcting the result of the segmentation in at least one of the following ways: acquiring an Atlas template of the attenuation image, and classifying pixels in the segmentation result by using the Atlas template of the attenuation image; or denoising the pixels in the segmentation result.
According to yet another exemplary but non-limiting embodiment, segmenting the attenuation image to obtain segmentation results comprises: the pixel values in the attenuation image are counted to determine the boundaries of different organs.
According to yet another exemplary but non-limiting embodiment, the subject body part further comprises fat, the segmentation result comprises a boundary of a fat region and a boundary of a non-fat region, and the specific tissue corresponds to the fat region.
According to a further exemplary embodiment, the SUL value is a SUV value corresponding to a target tissue, and determining the SUL value based on the calculated weight of the specific tissue and the PET image comprises: determining a mass of the subject's body part; determining a weight of a target tissue from the mass of the subject's body part and the weight of the specific tissue; an SUL value is determined based on the PET image, an injected dose of contrast agent, and a weight of the target tissue.
Another aspect of the present disclosure relates to a PET data processing apparatus, characterized by comprising: means for acquiring PET data of a subject's body part; means for reconstructing the PET data to acquire a PET image of the body part and acquiring an attenuation image corresponding to the PET image during reconstruction of the PET data; means for performing segmentation processing on the attenuation image to obtain a segmentation result; means for determining a specific tissue from the segmentation result; means for calculating the weight of the specific tissue; and means for determining a SUL value based on the calculated weight of the specific tissue and the PET image.
According to a further exemplary but nonlimiting embodiment, the means for segmenting the attenuation image to obtain segmentation results comprises means for counting pixel values in the attenuation image to determine boundaries of different organs.
According to still another exemplary but non-limiting embodiment, the SUL value is a SUV value corresponding to a target tissue, and the means for determining the SUL value based on the calculated weight of the specific tissue and the PET image comprises: means for determining a mass of the subject's body part; means for determining a weight of a target tissue based on the mass of the subject's body part and the weight of the specific tissue; means for determining a SUL value based on the PET image, an injected dose of contrast agent, and a weight of the target tissue.
Yet another aspect of the present disclosure relates to a PET imaging system comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: acquiring PET data of a body part of a subject; reconstructing the PET data to obtain a PET image of the body part and obtaining an attenuation image corresponding to the PET image during reconstruction of the PET data; carrying out segmentation processing on the attenuation image to obtain a segmentation result; determining a specific tissue according to the segmentation result; calculating the weight of the specific tissue; and determining a SUL value based on the calculated weight of the specific tissue and the PET image.
Yet another aspect of the disclosure relates to a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed, implement any of the methods described above.
Drawings
Fig. 1 shows a method according to an exemplary embodiment of the present invention.
Fig. 2 shows a device according to an exemplary embodiment of the present invention.
FIG. 3 shows a PET imaging system according to an exemplary embodiment of the invention.
Detailed Description
The use of the SUV calculation formula of the prior art, such as formula (1), inevitably leads to a large deviation in population consistency for obese patients in an equivalent physiological state. This is because the contrast agent generally accumulates in non-adipose tissue. Thus, if the same dose is injected, the activity C(t) of the same non-adipose tissue (e.g., the liver) is similar for two patients of the same non-adipose body weight (i.e., total body weight minus adipose weight). However, the obese patient is heavier overall, which ultimately results in a higher SUV value in the liver of the obese patient.
Some scholars therefore consider it more appropriate to replace the body weight with the "muscle weight" or the "body surface area". Since a large amount of fat does not substantially take up PET contrast agent, it is not reasonable to include it in the SUV calculation formula. It is well documented that SUV values calculated using the "muscle weight" have a smaller variance in population tests and are therefore more valuable for population quantification studies.
In order to calculate the "muscle weight", some common methods have been proposed in the academic literature. In PET/CT scanning, one possible solution is to use the pixel values of the CT image to distinguish muscle, fat, and bone, calculate the total weight of fat accordingly, and subtract it from the total body weight. However, when part of the CT image is missing in a PET/CT scan, the muscle weight calculated by this method deviates from the actual value. Another possible solution, in PET/MR, is to perform water-fat separation by scanning a Dixon sequence (a technique proposed by Dixon that, on the basis of a conventional spin-echo sequence, mainly exploits the chemical shift effect and adjusts the echo time so that the included angle between the magnetization vectors of water and fat is, in theory, obtained accurately; an in-phase (0 degree) image and an opposed-phase (pi) image of water and fat are acquired respectively, and water and fat images are then calculated from the two magnetic resonance (MR) images), then to calculate the proportion of fat in each part of the body and estimate the body fat content accordingly. However, a fat estimation method based on Dixon sequence images cannot always acquire an accurate fat fraction map, so there is a problem with the estimation accuracy of the fat weight.
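As a hedged illustration of the two-point Dixon idea mentioned above (a simplified magnitude-based sketch under assumptions; the array names are illustrative, and phase errors that a practical implementation must handle are ignored):

```python
import numpy as np

def two_point_dixon(in_phase, opposed_phase, eps=1e-9):
    """Simplified two-point Dixon water-fat separation.

    in_phase, opposed_phase : magnitude MR images acquired at echo times where
    the water and fat signals are in phase / opposed in phase.
    Returns the water image, the fat image, and a fat fraction in [0, 1].
    """
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    fat = np.clip(fat, 0.0, None)             # magnitude model: fat signal cannot be negative
    fat_fraction = fat / (water + fat + eps)
    return water, fat, fat_fraction
```

Summing the fat fraction over the body volume, scaled by voxel volume and fat density, gives the fat content that such a method estimates.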
The invention aims to solve the problem of fat weight estimation when the CT image is partially missing in a PET/CT scan, or when the Dixon sequence images in a PET/MR scan cannot provide an accurate fat fraction map. Of course, the invention is not limited thereto; it can also be used as an alternative or supplementary fat estimation means in the case of PET/CT scans in which CT images can be used to calculate fat weights and/or in the case of PET/MR in which fat proportions can be acquired by means of Dixon sequences. In particular, one of the basic ideas of the present invention may comprise reconstructing an attenuation image of the PET imaging using, for example, information carried by the TOF-PET data itself, and estimating the fat distribution/proportion from the reconstructed attenuation image. As will be appreciated by those of ordinary skill in the art, although embodiments of the present invention are described below with PET as an example, aspects of the present invention are equally applicable to SPECT, PET/MR, PET/CT, or the like.
According to one exemplary but non-limiting embodiment, a method of the present invention includes acquiring PET data, and reconstructing a PET image using the PET data and acquiring an attenuation image during reconstruction of the PET image. Acquiring PET data may include, for example, acquiring PET data of a patient by a device, obtaining PET data from a database, or receiving PET data from other sources, etc.
According to an optional aspect, the PET image and the attenuation image may be iteratively reconstructed using a joint estimation algorithm prior to the threshold-based segmentation from the attenuation image to obtain a more accurate PET image and attenuation image, and thus a more accurate segmentation.
According to an exemplary but non-limiting embodiment, iterative reconstruction of the PET image and the attenuation image using a joint estimation algorithm may include a specific joint estimation procedure as follows.
First, the PET image is updated using the ordered-subsets expectation maximization (OSEM) method with the attenuation sinogram held fixed.
f_j^(n,m+1) = [ f_j^(n,m) / Σ_{i∈S_m} Σ_t H_ijt ε_i(t) a_i^(n,m) ] · Σ_{i∈S_m} Σ_t H_ijt ε_i(t) a_i^(n,m) · y_i(t) / [ ε_i(t) a_i^(n,m) Σ_k H_ikt f_k^(n,m) + s_i(t) + r_i(t) ]    (2)
wherein f_j^(n,m+1) represents the PET activity (radiation) image obtained after the mth subset iteration within the nth iteration of the reconstruction; S_m represents the mth data subset in the data space; H_ijt and H_ikt represent the system matrix, where i is the serial number of the line of response, k and j denote the kth or jth voxel of the radiation image, and t denotes the index of the time-of-flight bin; ε_i(t) represents the normalization coefficient of the list-mode data on the ith line of response for the tth time bin; y_i(t) represents the number of coincidence events measured on the ith line of response in the tth time bin; s_i(t) and r_i(t) respectively represent the number of scattered coincidence events and random coincidence events on the ith line of response for the tth time bin; and a_i^(n,m) represents the attenuation sinogram value of the ith element of the attenuation sinogram used in the mth subset iteration of the nth iteration.
It should be noted that the initial value of the attenuation sinogram can be determined from prior information: attenuation values of various voxels are stored in a prior database, and the system assigns an initial attenuation value to the corresponding voxel according to the initial activity value of the PET activity image. Alternatively, an image of the anatomical structure of the same body part of the subject may be acquired; for example, a corresponding CT image or MR image is selected and an attenuation value or attenuation coefficient is assigned to the corresponding voxel by reference to the CT values of the CT image; or, as another example, the boundaries and classes of tissues and organs are determined by reference to the MR image and the corresponding attenuation coefficients are assigned to those tissues and organs, so as to obtain the corresponding attenuation values. In an exemplary manner, the attenuation sinogram is obtained from the attenuation image through a line-integral model:
a_i^(n,m) = exp( − Σ_j l_ij μ_j^(n,m) )    (3)
wherein l_ij is the system matrix of the line-integral model mapping the attenuation image to the attenuation sinogram, and μ_j^(n,m) represents the attenuation coefficient of voxel j before the sub-iteration over the mth subset of the nth iteration.
Next, the contribution of the updated PET image in the data domain is calculated:
ψ_i^(n,m+1) = Σ_t Σ_k H_ikt f_k^(n,m+1)
wherein ψ_i^(n,m+1) represents the expected value of the ith element of the non-TOF sinogram of the PET radiation image obtained after the mth sub-iteration of the nth iteration.
Next, each voxel of the attenuation image is updated:
μ_j^(n,m+1) = μ_j^(n,m) + [ Σ_i l_ij a_i^(n,m) ψ_i^(n,m+1) ( 1 − y_i / ( a_i^(n,m) ψ_i^(n,m+1) + s_i + r_i ) ) ] / [ Σ_i l_ij a_i^(n,m) ψ_i^(n,m+1) Σ_k l_ik ]    (4)
wherein μ_j^(n,m+1) represents the attenuation coefficient (rather than the attenuation image) of voxel j updated after the sub-iteration over the mth subset of the nth iteration; l_ij is the line-integral system matrix mapping the attenuation image to the attenuation sinogram, representing the length of the ith line of response across voxel j; y_i represents the number of annihilation photon pairs collected on the ith line of response; s_i and r_i respectively represent the number of scattered coincidence events and random coincidence events on the ith line of response; and ψ_i^(n,m+1) represents the expected value of the ith element of the non-TOF sinogram of the PET radiation image obtained after the (m+1)th sub-iteration of the nth iteration.
In the iterative reconstruction process, within each sub-iteration the PET attenuation image is first held unchanged and the PET radiation image is updated using formula (2); the PET radiation image is then held unchanged and the attenuation image is updated using formula (4). After all ordered subsets have been traversed within one iteration, the next iteration is performed. The iterations are repeated until a preset stopping condition is met, at which point the iteration stops and the PET radiation image and the attenuation image are obtained. Otherwise, the values obtained in the current iteration are taken as initial values and the iterative process continues.
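The alternating structure of this joint estimation can be illustrated with a deliberately simplified Python sketch (an assumption-laden toy, not the patented implementation: it is non-TOF, uses dense matrices, has no subsets and no normalization, and only mirrors the spirit of formulas (2)-(4)):

```python
import numpy as np

def joint_estimation(y, H, L, n_iters=20, scatter=0.0, randoms=0.0):
    """Toy alternating activity/attenuation estimation (non-TOF, no subsets).

    y : measured counts per line of response, shape (n_lor,)
    H : geometric system matrix, shape (n_lor, n_vox)
    L : intersection-length matrix for the attenuation line integrals, shape (n_lor, n_vox)
    """
    n_lor, n_vox = H.shape
    b = scatter + randoms                      # additive background (scatter + randoms)
    f = np.ones(n_vox)                         # activity image
    mu = np.full(n_vox, 0.0096)                # attenuation image, initialized to soft tissue (1/mm)
    for _ in range(n_iters):
        a = np.exp(-L @ mu)                    # attenuation sinogram, cf. formula (3)
        # MLEM-style activity update with the attenuation held fixed, cf. formula (2)
        expected = a * (H @ f) + b
        f = f * ((H * a[:, None]).T @ (y / np.maximum(expected, 1e-12))) \
              / np.maximum((H * a[:, None]).sum(axis=0), 1e-12)
        # MLTR-style attenuation update with the activity held fixed, cf. formula (4)
        psi = H @ f
        expected = a * psi + b
        num = L.T @ (a * psi * (1.0 - y / np.maximum(expected, 1e-12)))
        den = L.T @ (a * psi * L.sum(axis=1))
        mu = np.maximum(mu + num / np.maximum(den, 1e-12), 0.0)
    return f, mu
```

Note that, unlike the TOF-based updates above, a non-TOF toy of this kind suffers from the well-known activity/attenuation scale ambiguity; the sketch only illustrates how the two updates alternate within each iteration.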
Although the above description of the embodiment in which the PET image and the attenuation image are reconstructed using a joint estimation algorithm employs the ordered-subsets expectation maximization (OSEM) method, other algorithms for reconstructing the PET image and/or the attenuation image are also available, such as the MLAA algorithm. And, as previously mentioned, this joint reconstruction is an optional step and may be omitted in some embodiments.
According to an exemplary, non-limiting embodiment, an aspect of the invention includes threshold-based segmentation of the attenuation image. In an attenuation image, a voxel value may represent the degree to which a gamma ray is attenuated as it penetrates a unit length. Attenuation coefficients differ considerably between tissues: the attenuation coefficient of bone is generally above 0.012/mm, that of soft tissue and muscle is about 0.0096/mm, and that of fat is lower, about 0.008/mm. By thresholding the attenuation image on the attenuation coefficient, the volume and proportion of bone, soft tissue, and fat, and/or their topology, can be roughly obtained.
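A minimal sketch of such attenuation-coefficient thresholding (the cut-off values are illustrative assumptions placed between the typical coefficients quoted above, and the label convention is invented for the example):

```python
import numpy as np

def classify_by_attenuation(mu_image, fat_max=0.0088, bone_min=0.012):
    """Label each voxel of an attenuation image (in 1/mm) as fat, soft tissue, or bone.

    Fat attenuates at roughly 0.008/mm, soft tissue/muscle at roughly 0.0096/mm and
    bone above roughly 0.012/mm, so one threshold is placed between fat and soft
    tissue and one at the bone value.
    Returns an integer label map: 0 = air/background, 1 = fat, 2 = soft tissue, 3 = bone.
    """
    labels = np.zeros(mu_image.shape, dtype=np.uint8)
    body = mu_image > 0.004                                           # crude body mask excluding air
    labels[body & (mu_image <= fat_max)] = 1                          # fat
    labels[body & (mu_image > fat_max) & (mu_image < bone_min)] = 2   # soft tissue / muscle
    labels[mu_image >= bone_min] = 3                                  # bone
    return labels
```

The fat volume then follows directly from the number of voxels labelled 1 and the known voxel size.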
There are many ways of image segmentation, and in one illustrative, non-limiting example of the invention, a histogram-based segmentation method may be employed. However, one of ordinary skill in the art will appreciate that the present invention is not so limited and may be adapted to other segmentation methods. For example, the present invention may also include using other prior art or future developed pixel value statistics methods other than histograms.
The image may contain n different tissues or organs. The histogram method assumes that the same organ or tissue has the same or similar pixel values in the attenuation map, while different organs and/or tissues have different pixel values. By way of example and not limitation, according to the histogram method, the histogram function h(v) may be taken as the number of pixels having the value v in the attenuation map. The n different tissues or organs in the attenuation map then correspond to n maxima of h(v). By identifying the maxima of h(v), the most likely pixel values of the different organs in the attenuation map can be determined; by determining the minima of h(v), the most likely demarcation points and segmentation regions of the different organs in the attenuation map can be determined.
According to an exemplary but non-limiting embodiment, the definition of the histogram can be expressed as:
h(v) = ∫_All δ( g(x, y, z) − v ) dx dy dz    (5)
wherein h(v) represents the number of pixels with value v in the attenuation map; x represents the abscissa of a pixel point, y its ordinate, and z its vertical coordinate; the value of the pixel point with coordinates (x, y, z) in the attenuation map is denoted g(x, y, z); and δ(·) is a binarization function with δ = 1 when its argument lies in [−v_0, v_0] and δ = 0 otherwise, where v_0 is an adjustable parameter.
According to an exemplary but non-limiting embodiment, the maxima and/or minima of h(v) can be derived by differentiating h(v). For example, h(v) may have three maxima h(v_a), h(v_b) and h(v_c) and two minima h(v_d) and h(v_e), where, for example, v_a < v_d < v_b < v_e < v_c.
From the maxima and minima of h(v), the distribution of attenuation-map pixel values corresponding to an organ, v ∈ [v_1, v_2], can be obtained, and hence the region R corresponding to that organ: R = {(x, y, z) | g(x, y, z) ∈ [v_1, v_2]}, i.e., R(x, y, z) = 1 when g(x, y, z) ∈ [v_1, v_2] and 0 otherwise. The region where R(x, y, z) = 1 therefore represents the segmentation result for the organ.
For example, continuing the previous example, assume that organ A corresponds to the maximum h(v_a), organ B to the maximum h(v_b), and organ C to the maximum h(v_c); the demarcation point between organ A and organ B corresponds to the minimum h(v_d), and the demarcation point between organ B and organ C corresponds to the minimum h(v_e). Accordingly, to segment organ B (e.g., fat), the attenuation-map pixel value range [v_B1, v_B2] for organ B may be selected. This range can be selected according to various algorithms. For example, according to one non-limiting example, the range [v_B1, v_B2] can be selected as the mean ± n times the standard deviation. According to another non-limiting example, the range can be selected as v_b ± v_0, where v_0 is an adjustable parameter.
From the selected range [v_B1, v_B2], the region R_b corresponding to organ B is obtained as R_b = {(x, y, z) | g(x, y, z) ∈ [v_B1, v_B2]}.
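A compact sketch of this histogram-based selection (a simplified illustration under assumptions; the bin count, smoothing window, body-mask threshold and v0 value are invented parameters, and peak detection in practice would be more robust):

```python
import numpy as np

def segment_organ_by_histogram(mu_image, bins=256, v0=0.0008):
    """Pick the fat peak of the attenuation histogram and threshold around it.

    Builds a discrete version of h(v) from formula (5), finds its local maxima,
    takes the lowest-valued in-body peak as fat (fat attenuates least), and
    returns the binary region R_b for the range v_b +/- v0 together with v_b.
    """
    body = mu_image > 0.004                                   # exclude air
    counts, edges = np.histogram(mu_image[body], bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    h = np.convolve(counts, np.ones(5) / 5.0, mode="same")    # lightly smoothed histogram
    peaks = [i for i in range(1, len(h) - 1) if h[i] >= h[i - 1] and h[i] > h[i + 1]]
    v_b = centers[min(peaks)]                                 # lowest-attenuation peak ~ fat
    region = body & (mu_image >= v_b - v0) & (mu_image <= v_b + v0)
    return region, v_b
```

The fixed window v_b ± v0 corresponds to the second range-selection rule above; using the neighbouring minima of h(v) as the limits instead gives the demarcation-point variant.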
According to an exemplary but non-limiting embodiment, after threshold-based segmentation from the attenuation image, the segmentation results may optionally be modified to obtain, for example, fat regions.
Because of the noise introduced by the iterative algorithm that produces the attenuation image, the fat topology acquired in any of the previous embodiments may not be completely accurate: in some cases isolated non-fat voxels are misclassified as fat, and in others fat voxels are misclassified as non-fat tissue. A priori information can be used to correct the fat topology obtained in the segmentation step. Optional sources of a priori information include, but are not limited to:
I) The fat fraction obtained after segmentation is further corrected using, for example, an Atlas template of attenuation images.
For example, according to an exemplary but non-limiting embodiment, the Atlas database may contain a plurality of dictionary elements (D1, D2, ... Dn) covering different heights, weights, sexes, and diseases. Each dictionary element Di may contain, for example, two paired images: a PET attenuation image IMG(i) and a corresponding pixel classification (label) image. In practical application, for the attenuation image obtained by iteration for the currently scanned patient, a dictionary element i can be selected by matching information such as height and weight, and the attenuation image IMG(i) of that dictionary element is then registered to the current patient image by image registration. Finally, the pixel classification image of the dictionary element is deformed with the same registration parameters to obtain an initial image classification of the currently scanned patient. There are many methods for image registration, such as the commonly used optical flow field method.
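One way to read this dictionary-selection and label-warping step is sketched below (a hedged illustration: the dataclass fields and the similarity measure are assumptions, and `register` is a placeholder for any deformable registration routine, e.g. an optical-flow method, assumed to return a transform object with an `apply` method):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DictionaryElement:
    height_cm: float
    weight_kg: float
    sex: str
    attenuation_img: np.ndarray   # IMG(i)
    label_img: np.ndarray         # pixel classification image of element i

def select_element(atlas, height_cm, weight_kg, sex):
    """Pick the dictionary element whose demographics best match the patient."""
    candidates = [d for d in atlas if d.sex == sex] or atlas
    return min(candidates,
               key=lambda d: abs(d.height_cm - height_cm) / 10.0
                             + abs(d.weight_kg - weight_kg) / 5.0)

def initial_classification(atlas, patient_mu, height_cm, weight_kg, sex, register):
    """Register IMG(i) to the patient and warp its label image with the same transform."""
    elem = select_element(atlas, height_cm, weight_kg, sex)
    transform = register(moving=elem.attenuation_img, fixed=patient_mu)
    return transform.apply(elem.label_img)     # warped labels = prior classification for correction
```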
II) Image processing is performed on the fat region obtained in the segmentation step to remove classification errors caused by noise.
Image processing algorithms help to reduce classification errors for each voxel of the image. In this application, one possible image processing method is to compare the current voxel with all voxels within a certain surrounding radius. If the difference is large, the value of the voxel is likely to be abnormal owing to noise, and filtering is then performed, for example with a median filter, so as to avoid such classification errors as far as possible.
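A short sketch of this clean-up step using a standard median filter (SciPy is assumed available; the neighbourhood radius is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_labels(fat_mask, radius=1):
    """Remove isolated misclassified voxels from a binary fat mask.

    A median filter over a (2*radius+1)^3 neighbourhood keeps a voxel's class
    only if the majority of its neighbours agree with it, which suppresses the
    isolated voxels misclassified by noise.
    """
    size = 2 * radius + 1
    return median_filter(fat_mask.astype(np.uint8), size=size).astype(bool)
```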
Although the above description of correcting the segmentation results to obtain the fat regions employs Atlas templates, image processing algorithms, or any combination thereof, other algorithms for correcting the segmentation results are also available. And, as previously mentioned, this correction is an optional step and may be omitted in some embodiments.
According to an exemplary but non-limiting embodiment, after threshold-based segmentation from the attenuation image, and optionally after modification of the segmentation results, a "muscle weight" lean _ weight may be calculated based on the obtained fat region.
For example, according to one exemplary but non-limiting embodiment, lean_weight = weight − fat_weight, i.e., "muscle weight" = body weight − fat weight, so the muscle weight can be obtained by calculating the weight of fat. According to another exemplary but non-limiting embodiment, after the step of correcting the segmentation result described above, the fat volume may be obtained from the corrected fat topology, and the fat weight is obtained by multiplying the fat volume by the fat density. It is noted that the density of body fat is typically about 0.9 g/cc.
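The bookkeeping of this paragraph can be written as a minimal sketch (the voxel volume and fat mask are assumed inputs; the density is the 0.9 g/cc value quoted above):

```python
FAT_DENSITY_G_PER_CC = 0.9   # typical body fat density quoted above

def lean_weight_from_mask(fat_mask, voxel_volume_cc, body_weight_g):
    """lean_weight = body weight - fat weight, with fat weight = fat volume x fat density."""
    fat_volume_cc = fat_mask.sum() * voxel_volume_cc
    fat_weight_g = fat_volume_cc * FAT_DENSITY_G_PER_CC
    return body_weight_g - fat_weight_g
```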
After the muscle weight has been obtained, the SUL value (SUV-lean, the standard uptake value normalized to lean body mass) can be calculated. The calculation of the SUL value is very straightforward: for example, the SUV may simply be multiplied by a factor, as follows:
SUL(t) = SUV(t) · ( lean_weight / Weight ) = C(t) / ( Dose / lean_weight )    (6)
wherein C(t) is the corrected (attenuation-corrected and/or scatter-corrected) pixel intensity value in the PET image; Dose represents the injected dose of contrast agent; Weight represents the total body weight; and lean_weight represents the weight of the target (non-fat) region of the subject.
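A final sketch tying formula (6) to the previous steps (illustrative only; the function signature is an assumption):

```python
def sul_map(activity_bq_per_ml, injected_dose_bq, lean_weight_g):
    """SUL per formula (6): SUL = C / (Dose / lean_weight)."""
    return activity_bq_per_ml / (injected_dose_bq / lean_weight_g)

# equivalently, an existing SUV map can simply be rescaled:
# sul = suv * (lean_weight_g / body_weight_g)
```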
In the case where part of the CT image is missing in a PET/CT scan, or where the fat fraction map cannot be accurately obtained from the Dixon sequence images in a PET/MR scan, the fat weight can be estimated, as an alternative, by the various fat estimation means of the foregoing embodiments of the present invention; and in the case where CT images can be used to calculate the fat weight in a PET/CT scan and/or where the fat proportion can be obtained via Dixon sequences in PET/MR, the same means can be used as a supplement. This addresses important problems such as inaccurate SUV quantification for obese patients and the consistency of population quantification for people in the same physiological state. As will be appreciated by those of ordinary skill in the art, while embodiments of the present invention are described with PET as an example, aspects of the present invention are equally applicable to SPECT and the like.
Fig. 1 illustrates a method 100 for calculating a SUL value based on PET data information according to an exemplary embodiment of the present invention; program instructions involved in the method may be stored in a computer-readable storage medium that stores instructions executable by a PET system or other devices. The method 100 may include, for example, acquiring PET data (step 1020). The method 100 also includes reconstructing a PET image using the acquired PET data and acquiring an attenuation image during the PET image reconstruction (step 1040). Optionally, the method 100 may include iteratively reconstructing the PET image and the attenuation image using a joint estimation algorithm (step 1060). As will be appreciated by one of ordinary skill in the art, the PET data, PET images, and/or attenuation images are not limited to being obtained only in the manner described above. For example, the PET data, PET images, and/or attenuation images may be obtained from a database or received from other sources.
The method 100 may also include performing threshold-based segmentation, for example, from the attenuation image (step 1080). Optionally, for example, the method may further include correcting the segmentation results (step 1100).
The method 100 may further include calculating a "muscle weight" lean _ weight (step 1120), for example, based on the results of the segmentation; and calculating the SUL value after acquiring the muscle weight (step 1140).
Fig. 2 illustrates a SUL value calculation apparatus 200 based on PET data information according to an exemplary embodiment of the present invention. The apparatus 200 may include, for example, a processor 2010, a memory 2020, and an input/output interface 2030. The memory 2020 may store software 2025 as well as data and the like (not shown). The apparatus 200 may also include, for example, means for acquiring PET data (block 2040); means for reconstructing a PET image using the acquired PET data and acquiring an attenuation image during PET image reconstruction (block 2060); (optional) means for iteratively reconstructing the PET image and the attenuation image using a joint estimation algorithm (block 2080); means for performing threshold-based segmentation from the attenuation image (block 2100); (optional) means for modifying the segmentation results (block 2120); means for calculating a "muscle weight" lean_weight based on the results of the segmentation (block 2140); and means for calculating a SUL value after obtaining the muscle weight (block 2160).
Although the devices described above (e.g., 2040-2160, etc.) are shown coupled to other components by bus 2200 in FIG. 2, the invention is not limited thereto. For example, according to an exemplary embodiment, the above described apparatus and its functions may be implemented by computer executable instructions stored in the memory 2020 and executable by the processor 2010. According to another exemplary embodiment, the above described apparatus and its functions may be implemented by hardware circuits designed to perform the above described method steps. According to a further exemplary embodiment, the apparatus for acquiring PET data may comprise or be integrated in the input/output interface. According to another exemplary embodiment, the means for acquiring PET data may comprise, for example, means for acquiring PET data of a patient by the apparatus and/or means for obtaining PET data from a database (not shown) or receiving PET data from other sources (not shown), or the like.
Fig. 3 schematically illustrates a PET imaging system 300 according to an embodiment of the present invention. The PET imaging system 300 is centered on a control unit 310 and includes a gantry 320, a signal processing unit 330, a coincidence counting unit 340, a storage unit 350, a processor 360, a display unit 370, and an operation unit 380. A plurality of detector rings are arranged on the gantry 320 along the central axis of its circumference; each detector ring has a plurality of detectors arranged on the circumference around the central axis, and the subject P can be imaged within the Field of View (FOV) surrounded by the plurality of detectors. The imaging process is as follows: a radioisotope-labeled pharmaceutical agent is injected into the subject P prior to the PET scan; the detectors detect pair-annihilation gamma rays emitted from inside the subject P and generate pulse-like electric signals according to the amount of light of the pair-annihilation gamma rays; the pulse-like electric signals are supplied to the signal processing unit 330, which generates Single Event Data from them; in practice, the signal processing unit 330 detects an annihilation gamma ray electrically by detecting that the intensity of the electric signal exceeds a threshold. The single event data are supplied to the coincidence counting unit 340, which performs coincidence counting processing on the single event data of a plurality of single events. Specifically, the coincidence counting unit 340 repeatedly identifies, from the repeatedly supplied single event data, event data of two single events that fall within a predetermined time range, the time range being set to, for example, about 6 ns to 18 ns. Such paired single events are presumed to result from a pair of annihilation gamma rays generated at the same annihilation site, and a paired single event is broadly referred to as a coincidence event. The line connecting the pair of detectors that detected the pair of annihilation gamma rays is called a Line of Response (LOR). In this way, the coincidence counting unit 340 counts, for each LOR, the event data (hereinafter, coincidence event data) of the coincidence events constituting that LOR, and stores the coincidence event data in the storage unit 350. The processor 360 reconstructs, from the coincidence event data of the plurality of coincidence events, image data representing the spatial distribution of the concentration of the radioisotope within the subject.
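The coincidence-window pairing performed by the coincidence counting unit 340 can be illustrated with a small sketch (a simplified software illustration under assumptions; real scanners implement this in hardware with per-crystal timing, and the 10 ns window is merely one value inside the 6 ns to 18 ns range mentioned above):

```python
def pair_coincidences(singles, window_ns=10.0):
    """Pair single events whose time stamps fall within the coincidence window.

    singles : list of (time_ns, detector_id) tuples, assumed sorted by time.
    Returns a list of ((det_a, det_b), dt_ns) entries, one per coincidence; each
    detector pair defines the end points of a line of response (LOR).
    """
    coincidences = []
    i = 0
    while i < len(singles) - 1:
        t0, d0 = singles[i]
        t1, d1 = singles[i + 1]
        if t1 - t0 <= window_ns and d0 != d1:
            coincidences.append(((d0, d1), t1 - t0))   # dt_ns is what TOF reconstruction uses
            i += 2                                      # both singles are consumed by the pair
        else:
            i += 1                                      # unpaired single, move on
    return coincidences
```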
Further, the processor is also capable of: acquiring PET data of a body part of a subject; reconstructing the PET data to obtain a PET image of the body part, and obtaining an attenuation image corresponding to the PET image during the reconstruction of the PET data; segmenting the attenuation image to obtain a segmentation result; determining the weight of the specific tissue according to the segmentation result; and determining a SUL value based on the calculated weight of the specific tissue and the PET image. For details, reference is made to the description of fig. 1 or fig. 2, which is not repeated here.
Those of ordinary skill in the art appreciate that the benefits of the disclosure are not realized in full in any single embodiment. Various combinations, modifications, and alternatives will be apparent to one of ordinary skill in the art in light of this disclosure.
Furthermore, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, the phrase "X employs A or B" is intended to mean any of the natural inclusive permutations; that is, the phrase is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. The terms "connected" and "coupled" can mean the same thing, namely that two devices are electrically connected. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Various aspects or features will be presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc. and/or may not include all of the devices, components, modules etc. discussed in connection with the figures. Combinations of these approaches may also be used.
The various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Further, at least one processor may comprise one or more modules operable to perform one or more of the steps and/or actions described above. For example, the embodiments described above in connection with the various methods may be implemented by a processor and a memory coupled to the processor, wherein the processor may be configured to perform any of the steps of any of the methods described above, or any combination thereof.
Further, the steps and/or actions of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. For example, the embodiments described above in connection with the various methods may be implemented by a computer readable medium having stored computer program code which when executed by a processor/computer performs any of the steps of any of the methods described above, or any combination thereof.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.

Claims (8)

1. A PET data processing method, comprising:
acquiring PET data of a body part of a subject;
reconstructing the PET data to acquire a PET image of a body part including a specific tissue and a target tissue, and acquiring an attenuation image corresponding to the PET image during reconstruction of the PET data;
carrying out segmentation processing on the attenuation image to obtain a segmentation result;
determining a specific tissue from the segmentation result;
calculating the weight of the specific tissue; and
determining a SUL value based on the calculated weight of the specific tissue and the PET image;
the segmentation result comprises the boundary of an adipose area and the boundary of a non-adipose area, and the specific tissue corresponds to the adipose area;
wherein the SUL value is an SUV value corresponding to a target tissue, and determining the SUL value based on the calculated weight of the specific tissue and the PET image comprises:
determining a mass of the subject's body part;
determining a weight of a target tissue from the mass of the subject's body part and the weight of the specific tissue;
an SUL value is determined based on the PET image, an injected dose of contrast agent, and a weight of the target tissue.
2. The method of claim 1, further comprising, prior to determining the weight of the particular tissue from the segmentation result, modifying the segmentation result in at least one of:
acquiring an Atlas template of the attenuation image, and classifying pixels in the segmentation result by using the Atlas template of the attenuation image;
or denoising the pixels in the segmentation result.
3. The method of claim 1, wherein performing a segmentation process on the attenuation image to obtain a segmentation result comprises: the pixel values in the attenuation image are counted to determine the boundaries of different organs.
4. A PET data processing device, characterized by comprising:
means for acquiring PET data of a subject's body part;
means for reconstructing the PET data to acquire a PET image of a body part including a specific tissue and a target tissue, and acquiring an attenuation image corresponding to the PET image during reconstruction of the PET data;
means for performing segmentation processing on the attenuation image to obtain a segmentation result;
means for determining a specific tissue from the segmentation result;
means for calculating a weight of the particular tissue; and
means for determining a SUL value based on the calculated weight of the particular tissue and the PET image;
the segmentation result comprises the boundary of an adipose area and the boundary of a non-adipose area, and the specific tissue corresponds to the adipose area;
wherein the SUL value is an SUV value corresponding to a target tissue, and determining the SUL value based on the calculated weight of the specific tissue and the PET image comprises:
determining a mass of the subject's body part;
determining a weight of a target tissue from the mass of the subject's body part and the weight of the specific tissue;
an SUL value is determined based on the PET image, an injected dose of contrast agent, and a weight of the target tissue.
5. The apparatus of claim 4, wherein the means for performing a segmentation process on the attenuation image to obtain a segmentation result comprises: means for counting pixel values in the attenuation image to determine demarcations of different organs.
6. The apparatus of claim 4, wherein the means for determining a SUL value based on the calculated weight of the particular tissue and the PET image comprises:
means for determining a mass of the subject's body part;
means for determining a weight of a target tissue based on the mass of the subject's body part and the weight of the specific tissue;
means for determining a SUL value based on the PET image, an injected dose of contrast agent, and a weight of the target tissue.
7. A PET imaging system comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
acquiring PET data of a body part of a subject;
reconstructing the PET data to acquire a PET image of a body part including a specific tissue and a target tissue, and acquiring an attenuation image corresponding to the PET image during reconstruction of the PET data;
carrying out segmentation processing on the attenuation image to obtain a segmentation result;
determining a specific tissue according to the segmentation result;
calculating the weight of the specific tissue; and
determining a SUL value based on the calculated weight of the specific tissue and the PET image;
the segmentation result comprises the boundary of an adipose area and the boundary of a non-adipose area, and the specific tissue corresponds to the adipose area;
wherein the SUL value is an SUV value corresponding to a target tissue, and determining the SUL value based on the calculated weight of the specific tissue and the PET image comprises:
determining a mass of the subject's body part;
determining a weight of a target tissue from the mass of the subject's body part and the weight of the specific tissue;
an SUL value is determined based on the PET image, an injected dose of contrast agent, and a weight of the target tissue.
8. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed, implement any of the methods of claims 1-3.
CN201810494961.0A 2018-05-22 2018-05-22 PET data processing method and device and PET imaging system Active CN108805947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810494961.0A CN108805947B (en) 2018-05-22 2018-05-22 PET data processing method and device and PET imaging system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810494961.0A CN108805947B (en) 2018-05-22 2018-05-22 PET data processing method and device and PET imaging system

Publications (2)

Publication Number Publication Date
CN108805947A CN108805947A (en) 2018-11-13
CN108805947B true CN108805947B (en) 2022-05-27

Family

ID=64091353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810494961.0A Active CN108805947B (en) 2018-05-22 2018-05-22 PET data processing method and device and PET imaging system

Country Status (1)

Country Link
CN (1) CN108805947B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862772B (en) * 2021-01-29 2023-08-08 上海联影医疗科技股份有限公司 Image quality evaluation method, PET-MR system, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102124361A (en) * 2008-08-15 2011-07-13 皇家飞利浦电子股份有限公司 Attenuation correction for PET or SPECT nuclear imaging systems using magnetic resonance spectroscopic image data
CN104463840A (en) * 2014-09-29 2015-03-25 北京理工大学 Fever to-be-checked computer aided diagnosis method based on PET/CT images
CN107115119A (en) * 2017-04-25 2017-09-01 上海联影医疗科技有限公司 The acquisition methods of PET image attenuation coefficient, the method and system of correction for attenuation
CN107123095A (en) * 2017-04-01 2017-09-01 上海联影医疗科技有限公司 A kind of PET image reconstruction method, imaging system
WO2018006419A1 (en) * 2016-07-08 2018-01-11 Shanghai United Imaging Healthcare Co., Ltd. System and method for generating attenuation map
CN107610198A (en) * 2017-09-20 2018-01-19 赛诺联合医疗科技(北京)有限公司 PET image attenuation correction method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8620053B2 (en) * 2009-11-04 2013-12-31 Siemens Medical Solutions Usa, Inc. Completion of truncated attenuation maps using maximum likelihood estimation of attenuation and activity (MLAA)
US20110275699A1 (en) * 2010-03-16 2011-11-10 University Of California Treatment For Obesity And Diabetes
US9135695B2 (en) * 2012-04-04 2015-09-15 Siemens Aktiengesellschaft Method for creating attenuation correction maps for PET image reconstruction
US9398855B2 (en) * 2013-05-30 2016-07-26 Siemens Aktiengesellschaft System and method for magnetic resonance imaging based respiratory motion correction for PET/MRI
US9747701B2 (en) * 2015-08-20 2017-08-29 General Electric Company Systems and methods for emission tomography quantitation
US10078889B2 (en) * 2015-08-25 2018-09-18 Shanghai United Imaging Healthcare Co., Ltd. System and method for image calibration
US10210634B2 (en) * 2016-07-20 2019-02-19 Shanghai United Imaging Healthcare Co., Ltd. System and method for segmenting medical image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102124361A (en) * 2008-08-15 2011-07-13 皇家飞利浦电子股份有限公司 Attenuation correction for PET or SPECT nuclear imaging systems using magnetic resonance spectroscopic image data
CN104463840A (en) * 2014-09-29 2015-03-25 北京理工大学 Fever to-be-checked computer aided diagnosis method based on PET/CT images
WO2018006419A1 (en) * 2016-07-08 2018-01-11 Shanghai United Imaging Healthcare Co., Ltd. System and method for generating attenuation map
CN107123095A (en) * 2017-04-01 2017-09-01 上海联影医疗科技有限公司 A kind of PET image reconstruction method, imaging system
CN107115119A (en) * 2017-04-25 2017-09-01 上海联影医疗科技有限公司 The acquisition methods of PET image attenuation coefficient, the method and system of correction for attenuation
CN107610198A (en) * 2017-09-20 2018-01-19 赛诺联合医疗科技(北京)有限公司 PET image attenuation correction method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Clinical value of 18F-FDG PET/CT brain imaging in differentiating Parkinson's disease from multiple system atrophy; 尚琨 et al.; 《临床和实验医学杂志》 (Journal of Clinical and Experimental Medicine); 2017-11-10 (No. 21); 19-22 *
An assessment of the impact of incorporating timeof-flight information into clinical PET/CT imaging;Lois C 等;《Journal of Nuclear Medicine》;20101231;237-245 *
Establishment of a high-resolution fast local reconstruction algorithm for PET imaging; 朱闻韬 et al.; 《中国医学装备》 (China Medical Equipment); 2015-10-15 (No. 10); 38-42 *
Application of transfer fuzzy clustering to fast attenuation correction in medical PET/MRI; 孙寿伟; 钱鹏江; 胡凌志; 苏冠豪; Raymond F. Muzic et al.; 《计算机工程与科学》 (Computer Engineering & Science); 2016-04-15 (No. 04); 163-172 *

Also Published As

Publication number Publication date
CN108805947A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
US10810768B2 (en) System and method for segmenting medical image
JP5530446B2 (en) Method and apparatus for generating attenuation map in PET-MR
Salomon et al. Simultaneous reconstruction of activity and attenuation for PET/MR
US8150112B2 (en) Regional reconstruction of spatially distributed functions
Bezrukov et al. MR-based PET attenuation correction for PET/MR imaging
US9474495B2 (en) System and method for joint estimation of attenuation and activity information
US8958620B2 (en) Region of interest definition in cardiac imaging
US8600135B2 (en) System and method for automatically generating sample points from a series of medical images and identifying a significant region
EP2684066B1 (en) Mr segmentation using nuclear emission data in hybrid nuclear imaging/mr
CN106999135B (en) Radiation emission imaging system and method
CN107292889B (en) Tumor segmentation method, system and readable medium
WO2020214911A1 (en) Method and system for generating attenuation map from spect emission data based upon deep learning
WO2014176154A1 (en) System and method for image intensity bias estimation and tissue segmentation
Parker et al. Graph-based Mumford-Shah segmentation of dynamic PET with application to input function estimation
Yoder Basic PET data analysis techniques
US20210304457A1 (en) Using neural networks to estimate motion vectors for motion corrected pet image reconstruction
CN108805947B (en) PET data processing method and device and PET imaging system
US11481934B2 (en) System, method, and computer-accessible medium for generating magnetic resonance imaging-based anatomically guided positron emission tomography reconstruction images with a convolutional neural network
CN112529977B (en) PET image reconstruction method and system
Blaffert et al. Comparison of threshold-based and watershed-based segmentation for the truncation compensation of PET/MR images
US11704795B2 (en) Quality-driven image processing
Frohwein et al. Correction of MRI‐induced geometric distortions in whole‐body small animal PET‐MRI
Vermandel et al. Combining MIP images and fuzzy set principles for vessels segmentation: application to TOF MRA and CE-MRA
Bidaut et al. 3D image reconstruction in medicine and beyond
JP2009502430A (en) System and method for automatically segmenting blood vessels in a chest magnetic resonance sequence

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Applicant after: Shanghai Lianying Medical Technology Co., Ltd

Address before: 201807 Shanghai city Jiading District Industrial Zone Jiading Road No. 2258

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant