WO2015193756A2 - Magnetic resonance (MR)-based attenuation correction and monitor alignment calibration - Google Patents

Magnetic resonance (MR)-based attenuation correction and monitor alignment calibration

Info

Publication number
WO2015193756A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
pet
images
values
registration
Application number
PCT/IB2015/054067
Other languages
French (fr)
Other versions
WO2015193756A3 (en)
Inventor
Yang-Ming Zhu
Steven Michael Cochoff
Original Assignee
Koninklijke Philips N.V.
Application filed by Koninklijke Philips N.V.
Publication of WO2015193756A2
Publication of WO2015193756A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/40: Image enhancement or restoration by the use of histogram techniques
    • G06T5/77
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10084: Hybrid tomography; Concurrent acquisition with multiple different tomographic modalities
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing

Definitions

  • with reference to FIGURE 1, an imaging system 10 includes an MR scanner 12.
  • the MR scanner 12 generates raw MR scan data and includes a housing 14 defining an MR imaging volume 16 for receiving a target volume of a subject to be imaged.
  • a subject support 18 can be employed to support the subject and to position the target volume near the isocenter of the MR imaging volume 16.
  • a main magnet 20 of the MR scanner 12 creates a strong, static B0 magnetic field extending through the MR imaging volume 16.
  • the strength of the static B0 magnetic field is commonly one of 0.23 Tesla, 0.5 Tesla, 1.5 Tesla, 3 Tesla, 7 Tesla, and so on in the MR imaging volume 16, but other strengths are contemplated.
  • a gradient controller 22 of the MR scanner 12 is controlled to superimpose magnetic field gradients, such as x, y and z gradients, on the static B0 magnetic field in the MR imaging volume 16 using a plurality of magnetic field gradient coils 24 of the MR scanner 12.
  • the magnetic field gradients spatially encode magnetic spins within the MR imaging volume 16.
  • the plurality of magnetic field gradient coils 24 includes three separate magnetic field gradient coils spatially encoding in three orthogonal spatial directions.
  • one or more transmitters 26 of the MR scanner 12 are controlled to transmit B1 resonance excitation and manipulation radio frequency (RF) pulses into the MR imaging volume 16 with one or more transmit coil arrays 28.
  • the B1 pulses are typically of short duration and, when taken together with the magnetic field gradients, achieve a selected manipulation of magnetic resonance.
  • the B1 pulses excite the hydrogen dipoles to resonance and the magnetic field gradients encode spatial information in the frequency and phase of the resonance signal.
  • resonance can be excited in other dipoles, such as phosphorous, which tend to concentrate in known tissues, such as bones.
  • An MR scan controller 30 controls the gradient controller 22 and/or the transmitters 26 according to imaging sequences to produce spatially encoded MR signals within the MR imaging volume 16.
  • An imaging sequence defines a sequence of B1 pulses and/or magnetic field gradients. Further, the imaging sequences can be received from a device or system that is remote or local to the MR scan controller 30, such as a sequence memory.
  • One or more RF receivers 32 receive the spatially encoded magnetic resonance signals from the MR imaging volume 16 and demodulate the received spatially encoded magnetic resonance signals to MR data sets.
  • the MR data sets include, for example, k-space data trajectories.
  • the receivers 32 use one or more receive coil arrays 28. As illustrated, the receivers 32 share a whole body coil 28 with the transmitters 26 by way of a switch 34 that selectively connects the receivers 32 and the transmitters 26 to the coil 28 depending upon whether the transmitters 26 or the receivers 32 are being used.
  • the receivers 32 typically store the MR data sets in an MR buffer memory 36.
  • An MR reconstruction processor 38 reconstructs the MR data sets into MR images or maps of the MR imaging volume 16. This includes, for each MR signal captured by the MR data sets, spatially decoding the spatial encoding by the magnetic field gradients to ascertain a property of the MR signal from each spatial region, such as a pixel or voxel.
  • the intensity or magnitude of the MR signal is commonly ascertained, but other properties related to phase, relaxation time, magnetization transfer, and the like can also be ascertained.
  • the MR images or maps are typically stored in an MR image memory 40.
  • An MR main controller 42 coordinates the generation of one or more MR diagnostic images of the target volume using one or more MR scans of the target volume.
  • the MR main controller 42 provides scan parameters to the MR scan controller 30.
  • the MR main controller 42 can carry out the foregoing functionality by software, hardware or both.
  • where the MR main controller 42 employs software, the MR main controller 42 includes at least one processor executing the software.
  • the software is suitably stored on a program memory.
  • the MR main controller 42 can be managed by a user using a graphical user interface presented to the user by way of a display device 44 and a user input device 46. The user can, for example, initiate imaging, display images, manipulate images, etc.
  • while the MR reconstruction processor 38 and the MR scan controller 30 were illustrated as external to the MR main controller 42, it is to be appreciated that one or more of these components can be integrated with the MR main controller 42 as software, hardware or a combination of both.
  • the MR reconstruction processor 38 can be integrated with the MR main controller 42 as a software module executing on the at least one processor of the MR main controller 42.
  • while the MR buffer memory 36 and the MR image memory 40 were illustrated as external to the MR main controller 42, it is to be appreciated that one or more of these components can be integrated with the MR main controller 42.
  • a challenge with MR is that the field of view (FOV) is limited in MR imaging.
  • Bo-homogeneity reducing at the edges of the FOV, and a non-linearity of the magnetic field gradients in the outer areas of the FOV, are responsible for the restriction of the FOV.
  • an approach to truncation compensation is hereafter described.
  • the imaging system 10 further includes a PET scanner 48 for truncation compensation.
  • the PET scanner 48 generates PET data and includes a stationary gantry 50 housing a plurality of gamma detectors 52 arranged around a bore of the scanner.
  • the bore defines a PET imaging volume 54 for receiving a target volume of a subject to be imaged.
  • the detectors 52 are typically arranged in one or more stationary rings which extend the length of the PET imaging volume 54. However, rotatable heads are also contemplated.
  • the detectors 52 detect gamma photons from the PET imaging volume 54 and generate the PET data.
  • each of the detectors 52 includes one or more scintillators 56 arranged in a grid.
  • the scintillators 56 scintillate and generate visible light pulses in response to energy depositions by gamma photons.
  • a gamma photon 58 deposits energy in a scintillator 60, thereby resulting in a visible light pulse 62.
  • the magnitude of a visible light pulse is proportional to the magnitude of the corresponding energy deposition.
  • scintillators 56 include sodium iodide doped with thallium (NaI(Tl)), cerium-doped lutetium yttrium orthosilicate (LYSO) and cerium-doped lutetium oxyorthosilicate (LSO).
  • the detectors 52 each include a sensor 64 detecting the visible light pulses in the scintillators 56.
  • the sensor 64 includes a plurality of light sensitive elements 66.
  • the light sensitive elements 66 are arranged in a grid of like size as the grid of scintillators and optically coupled to corresponding scintillators 56.
  • the light sensitive elements 66 can be coupled to the scintillators 56 in a one-to-one arrangement, a one- to-many arrangement, a many-to-one arrangement, or any other arrangement.
  • the light sensitive elements 66 are silicon photomultipliers (SiPMs), but photomultiplier tubes (PMTs) are also contemplated.
  • the light sensitive elements 66 are SiPMs
  • the light sensitive elements 66 there is typically a one-to-one correspondence between the scintillators 56 and the light sensitive elements 66, as illustrated, but other correspondences are contemplated.
  • Each of the SiPMs includes a photodiode array (e.g., Geiger-mode avalanche photodiode arrays), each photodiode corresponding to a cell of the photodiode array.
  • the SiPMs are configured to operate in a Geiger mode to produce a series of unit pulses, thereby operating in a digital mode.
  • the SiPMs can be configured to operate in an analog mode.
  • the light sensitive elements 66 are PMTs
  • there is often a many-to-one correspondence between the scintillators 56 and the light sensitive elements 66 but other correspondences are contemplated.
  • a target volume of the subject is injected with a radiopharmaceutical or radionuclide.
  • the radiopharmaceutical or radionuclide causes gamma photons to be emitted from the target volume.
  • the target volume is then positioned in the PET imaging volume 54 using a subject support 18 corresponding to the PET scanner 48.
  • the PET scanner 48 is controlled by a PET scan controller 68 to perform a scan of the target volume and event data is acquired.
  • the acquired event data describes the time, location and energy of each scintillation event detected by the detectors and is suitably stored in a PET buffer memory 70.
  • the location of a scintillation event corresponds to a pixel of the PET scanner 48.
  • a pixel is the smallest area to which a scintillation event can be localized.
  • the light sensitive elements 66 are SiPMs and there is a one-to-one coupling between scintillators 56 and the light sensitive elements 66.
  • the smallest area to which a scintillation event can be localized is typically a scintillator/SiPM pair, whereby a pixel typically corresponds to a scintillator/SiPM pair.
  • the light sensitive elements 66 are PMTs or SiPMs and there is a many-to-one coupling between the scintillators 56 and the light sensitive elements 66.
  • Anger logic is typically used to localize scintillation events to individual scintillators, whereby a pixel typically corresponds to a scintillator, but not a light sensitive element.
  • an event verification processor 72 filters the buffered event data.
  • the filtering includes comparing energy (cell counts in the digital mode) of each scintillation event to an energy window, which defines the acceptable energy range for scintillation events. Those scintillation events falling outside the energy window are filtered out.
  • the energy window is centered on the known energy of the gamma photons to be received from the PET imaging volume 54 (e.g., 511 kiloelectron volts (keV)) and determined using the full width half max (FWHM) of an energy spectrum generated from a calibration phantom.
  • the event verification processor 72 further generates lines of response (LORs) from the filtered event data.
  • a LOR is defined by a pair of gamma photons striking the detectors 52 within a specified time difference of each other (i.e., a coincident event).
  • the specified time difference is small enough to ensure the gammas are from the same annihilation event.
  • a gamma photon can yield multiple scintillation events.
  • the scintillation events of the event data are combined based on gamma photon. For example, the energy of scintillation events belonging to a common gamma photon can be summed and the location with which the gamma photon struck the detectors 52 can be approximated.
  • the event verification processor 72 filters and determines LORs from the updated event data.
  • Data describing the coincident events, as or once determined by the event verification processor 72, is stored within a list mode memory 74 as a list, where each list item corresponds to a coincident event.
  • the data for each of the list items describes the corresponding LOR by the spatial data (e.g., by the X and Z locations) for the two pixels to which the pair of gamma photons of the LOR are localized. Further, the data for each of the list items can optionally describe the energy of the two gamma photons of the corresponding coincident event, and/or either the time stamps of the two gamma photons or the difference between the time stamps of the two gamma photons.
  • a PET reconstruction processor 76 reconstructs the list mode data into a final, reconstructed image of the target volume.
  • the reconstructed image is typically stored in a PET image memory 78.
  • any suitable reconstruction algorithm can be employed.
  • an iterative-based reconstruction algorithm can be employed.
  • the reconstruction can be performed with or without AC.
  • an attenuation map from an attenuation map memory 80 is employed.
  • the AC is MR-based AC (i.e., MRAC) and the attenuation map is generated using the MR scanner 12.
  • the reconstruction can be performed with or without time of flight (ToF).
  • a PET main controller 42 coordinates the generation of one or more PET diagnostic images of the target volume using one or more PET scans of the target volume.
  • the PET main controller 42 provides scan parameters to the PET scan controller 68.
  • the PET main controller 42 can carry out the foregoing functionality by software, hardware or both.
  • where the PET main controller 42 employs software, the PET main controller 42 includes at least one processor executing the software.
  • the software is suitably stored on a program memory.
  • the PET main controller 42 can be managed by a user using a graphical user interface presented to the user by way of a display device 44 and a user input device 46. The user can, for example, initiate imaging, display images, manipulate images, etc.
  • while the PET reconstruction processor 76, the event verification processor 72, and the PET scan controller 68 were illustrated as external to the PET main controller 42, it is to be appreciated that one or more of these components can be integrated with the PET main controller 42 as software, hardware or a combination of both.
  • the PET reconstruction processor 76 and the event verification processor 72 can be integrated with the PET main controller 42 as a software module executing on the at least one processor of the PET main controller 42.
  • while the PET buffer memory 70, the list mode memory 74 and the PET image memory 78 were illustrated as external to the PET main controller 42, it is to be appreciated that one or more of these components can be integrated with the PET main controller 42.
  • the PET reconstruction processor 76 can perform image reconstruction with MRAC and an attenuation map generated using the MR scanner 12.
  • a challenge posed by traditional systems employing MRAC is that the MR image used to derive the attenuation map is truncated.
  • a truncation compensation processor 82 generates 84 a joint PET and MR histogram, typically with spatial and/or contextual data, and estimates 86 the truncated MR image values from the joint histogram.
  • the complete MR image is then passed to an attenuation map generation processor 88 that generates an attenuation map used by the PET reconstruction processor 76 to generate an MRAC PET image.
  • the attenuation map is typically stored in the attenuation map memory 80.
  • the truncation compensation processor 82 receives a non-AC PET image of a target volume, typically from the PET image memory 78. Further, an MR image of the target volume is received, typically from the MR image memory 40.
  • the MR and PET main controllers 42 can be used to coordinate the generation of the images. Further, the values of the MR and PET images are typically normalized and clamped.
  • the MR image is registered to the PET image using, for example, a registration processor 90 or by system calibration (discussed hereafter). Further, the MR image is resampled so there is a one-to-one correspondence between the pixels (i.e., voxels) of the MR and PET images.
  • a pixel in the MR and PET images is the smallest area to which a value (e.g., a PET or MR value) can be localized.
  • the joint histogram is generated 84 by combining the MR image with the PET image.
  • the joint histogram is one of a global joint histogram and a set of localized joint histograms.
  • the global joint histogram makes use of the entire overlapping image volume of the MR and PET images, whereas the localized joint histograms collectively make use of the entire overlapping image volume, but individually make use of non-overlapping subsets of the entire overlapping image volume.
  • the global joint histogram is typically two dimensional (2D) and includes a dimension corresponding to PET values and a dimension corresponding to MR values.
  • a count is added to a corresponding location of the global joint histogram for each pair of MR and PET values corresponding to the same spatial location in the MR and PET images. That is to say, each pair of MR and PET values corresponding to the same spatial location in the MR and PET images is used to lookup a corresponding location in the global joint histogram. A count is then added to this location.
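  • as an illustration of this counting scheme, the following sketch builds the two-dimensional global joint histogram (hypothetical Python, not from the patent; it assumes the MR and PET images are already registered, resampled to a one-to-one pixel correspondence, and normalized to [0, 1)):

    import numpy as np

    def global_joint_histogram(mr, pet, n_bins=256):
        """2D joint histogram of co-located MR and PET values.

        mr, pet: same-shape arrays with values normalized to [0, 1).
        hist[p, m] counts the spatial locations whose PET value falls
        in bin p and whose MR value falls in bin m.
        """
        pet_bins = np.minimum((pet * n_bins).astype(int), n_bins - 1)
        mr_bins = np.minimum((mr * n_bins).astype(int), n_bins - 1)
        hist = np.zeros((n_bins, n_bins), dtype=np.int64)
        # one count per spatial location, at the (PET, MR) bin pair
        np.add.at(hist, (pet_bins.ravel(), mr_bins.ravel()), 1)
        return hist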
  • the global joint histogram includes one or more additional dimensions (i.e., more than the two dimensions discussed above) to convey context data.
  • the context data, for example, describes neighboring PET values.
  • a pixel or voxel can have up to 26 neighbors. Additional dimensions can be added for one or more of these neighbors.
  • the global joint histogram can include a dimension for the neighbor to the immediate left. The global joint histogram is generated as above except that the additional dimensions are taken into account. More specifically, the features of the three or more dimensions are extracted at each spatial location of overlapping image volume and used to lookup a corresponding location in the global joint histogram. A count is then added to this location.
  • the features at a spatial location include the PET and MR values at the spatial location, and the PET values of the additional contextual dimensions.
  • a dimension for each of the 26 neighbors is possible.
  • generating the global joint histogram with so many dimensions can be highly computationally demanding and can require a considerable amount of memory.
  • six contextual dimensions are employed. These dimensions can correspond to, for example, neighbors to the immediate left, right, front, back, top, and bottom.
  • only three dimensions can be employed. These dimensions can correspond to, for example, the average of the immediate top and bottom neighbors, the average of the immediate left and right neighbors, and the average of the immediate front and back neighbors.
  • the dimensions are defined using the difference between the neighboring PET values and the center PET value. This reduces memory because the dynamic range of the difference is less than the dynamic range of PET values.
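  • as a sketch of the three-contextual-dimension variant just described (hypothetical Python; the (Z, Y, X) axis order and the zeroing of border voxels are assumptions made here), each contextual feature is the average of a pair of opposing immediate neighbors, stored as a difference from the center PET value to keep the dynamic range small:

    import numpy as np

    def context_features(pet):
        """Three contextual values per interior voxel: the averages of
        the (top, bottom), (left, right) and (front, back) neighbor
        pairs, each expressed as a difference from the center value.

        pet: 3D array ordered (Z, Y, X); border voxels are left at 0.
        Returns an array of shape pet.shape + (3,).
        """
        ctx = np.zeros(pet.shape + (3,), dtype=float)
        c = pet[1:-1, 1:-1, 1:-1]
        ctx[1:-1, 1:-1, 1:-1, 0] = (pet[:-2, 1:-1, 1:-1] + pet[2:, 1:-1, 1:-1]) / 2 - c
        ctx[1:-1, 1:-1, 1:-1, 1] = (pet[1:-1, :-2, 1:-1] + pet[1:-1, 2:, 1:-1]) / 2 - c
        ctx[1:-1, 1:-1, 1:-1, 2] = (pet[1:-1, 1:-1, :-2] + pet[1:-1, 1:-1, 2:]) / 2 - c
        return ctx

  • the feature vector at a voxel would then be (PET value, MR value, three contextual values), binned into a five-dimensional joint histogram in the same manner as the two-dimensional case.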
  • the joint histogram can be a set of localized joint histograms.
  • the localized joint histograms collectively make use of the entire overlapping image volume, but individually make use of non-overlapping subsets of the entire overlapping image volume.
  • the non-overlapping subsets are defined by spatial dimensions (e.g., Z direction).
  • Each of the localized joint histograms is generated in the same manner as the global joint histogram except that the localized joint histogram is generated from a subset of the overlapping image volume being defined by one or more spatial dimensions.
  • the localized joint histograms take into account context data, as described above.
  • the overlapping image volume is divided into subsets along the Z direction (typically along the axes of the bores of the scanners 12, 48 and typically along the length of the subject).
  • one subset might include all spatial locations of the imaging volume with Z values between one and ten.
  • a localized joint histogram is then generated, optionally taking into account context data, for each of the subsets.
  • the localized joint histograms are generated as above except that each localized joint histogram is associated with a Z value, or a range of Z values, and limited to MR and PET values located at the Z value or in the neighborhood of the Z value.
  • the joint histogram includes a set of localized joint histograms varying spatially along the Z direction.
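  • a minimal sketch of this partitioning (hypothetical Python, reusing global_joint_histogram from the earlier sketch; the slab thickness and the assumption that Z is the first array axis are choices made here):

    def localized_joint_histograms(mr, pet, n_bins=256, z_slab=10):
        """One joint histogram per non-overlapping slab of z_slab slices
        along the Z axis (axis 0); collectively the slabs cover the
        entire overlapping image volume."""
        # global_joint_histogram is defined in the earlier sketch
        return [
            global_joint_histogram(mr[z:z + z_slab], pet[z:z + z_slab], n_bins)
            for z in range(0, mr.shape[0], z_slab)
        ]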
  • the joint histogram can be generated from PET and MR images for a population of subjects.
  • the population can be a population to which the subject belongs. For example, if the subject suffers from prostate cancer, the joint histogram can be generated from PET and MR images for patients with prostate cancer.
  • the population can be a population to which the subject does not belong but in which truncation is common.
  • the truncation occurs where MR values are considered background and PET values are considered foreground.
  • the determination can be performed manually by a clinician segmenting the MR image to identify the truncated portions of the MR image. Alternatively, the determination can be performed automatically.
  • the automatic determination can be performed using heuristics that indicate where truncation would happen (e.g., on the periphery of the images).
  • the automatic determination can be performed by estimating a threshold for background MR values and a threshold for foreground PET values. These thresholds can then be applied to the MR and PET images to identify where the MR image is truncated.
  • the thresholds are estimated using marginal histogram analysis of the joint histogram. For example, assuming the number of background voxels decreases exponentially, the exponential fitting of the first few points on marginal histograms gives estimates of the thresholds.
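  • one possible reading of that exponential fit, as a sketch (hypothetical Python; the number of fitted points and the one-count cutoff are choices made here, not taken from the patent):

    import numpy as np

    def threshold_from_marginal(marginal, n_fit=8):
        """Estimate a background threshold from a marginal histogram,
        assuming the number of background voxels decays exponentially
        with bin index.

        Fits log(counts) ~ a + b * bin over the first n_fit bins and
        returns the bin index at which the fitted exponential drops
        below one count, i.e. where background counts become negligible.
        """
        bins = np.arange(n_fit)
        counts = np.maximum(marginal[:n_fit], 1)  # guard against log(0)
        b, a = np.polyfit(bins, np.log(counts), 1)  # slope b, intercept a
        if b >= 0:
            return 0  # no exponential decay detected
        return int(np.ceil(-a / b))  # exp(a + b*t) == 1 at t == -a/b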
  • the values of the pixels or voxels in the truncated portion are estimated 86. Estimation depends upon the number of dimensions in the joint histogram. For each pixel or voxel in the truncated portion, the corresponding values for all the dimensions, except the dimension for MR values, are determined. The distribution of MR values in the joint histogram is then looked up using these values. For example, supposing the joint histogram has only two dimensions, the PET value for each pixel or voxel in the truncated portion is used to look up a distribution of MR values in the joint histogram.
  • an MR value is selected. For example, the average of foreground MR values in the distribution can be used. Foreground MR values can be identified using a threshold determined as described above. As another example, the MR value with the largest probability (i.e., the most histogram counts) can be used. As another example, the MR value is randomly selected from the distribution with weighting based on the histogram counts.
  • the distribution is first filtered.
  • the MR value can then be selected as discussed above from the filtered distribution.
  • the distribution can be first fit with a curve, such as a Gaussian curve. The MR value can then be selected as discussed above from the curve.
  • MR values for the truncated portions are then estimated.
  • MR values are estimated 86 for the entire MR image.
  • the final MR value for each pixel or voxel is then the maximum of the original MR value and the estimated MR value. This approach works on the assumption that if a pixel is truncated, the estimated MR value will be larger than the original MR value.
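  • the following sketch ties the two-dimensional case together (hypothetical Python): look up the distribution of MR values for a PET value, select an estimate by one of the rules above, and keep the per-voxel maximum of the original and estimated values:

    import numpy as np

    def estimate_mr(pet_value, hist, n_bins=256, mode="peak"):
        """Estimate an MR value from a 2D joint histogram given only a
        PET value (the partial feature vector of the 2D case)."""
        p = min(int(pet_value * n_bins), n_bins - 1)
        dist = hist[p].astype(float)  # MR distribution for this PET bin
        if dist.sum() == 0:
            return 0.0  # PET value never observed; no estimate available
        if mode == "peak":    # MR value with the most histogram counts
            m = int(np.argmax(dist))
        elif mode == "mean":  # average MR value of the distribution
            m = np.sum(np.arange(n_bins) * dist) / dist.sum()
        else:                 # random draw weighted by histogram counts
            m = int(np.random.choice(n_bins, p=dist / dist.sum()))
        return (m + 0.5) / n_bins

    def compensate(mr, pet, hist, n_bins=256):
        """Estimate MR values everywhere and keep, per voxel, the
        maximum of the original and estimated values."""
        est = np.vectorize(lambda v: estimate_mr(v, hist, n_bins))(pet)
        return np.maximum(mr, est)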
  • with reference to FIGURES 3A and 3B, MR images before and after truncation compensation are illustrated.
  • the MR images were generated for a 67 year old subject.
  • FIGURE 3A illustrates the original MR images at three different locations
  • FIGURE 3B illustrates these three MR images after truncation compensation.
  • the missing MR portion is completed. Due to the low quality of the PET images, some extraneous MR is also added.
  • while the truncation compensation processor 82 and the attenuation map generation processor 88 were illustrated as external to the PET and MR main controllers 42, it is to be appreciated that one or more of these components can be integrated with the main controllers 42 as software, hardware or a combination of both.
  • while the attenuation map memory 80 was illustrated as external to the main controllers 42, it is to be appreciated that the attenuation map memory 80 can be integrated with the main controllers 42.
  • the PET and MR scanners 12, 48 are combined into a hybrid scanner, as illustrated.
  • the PET and MR scanners 12, 48 share a main controller 42.
  • the PET scanner 48 can be mounted on tracks 92 to facilitate patient access.
  • the tracks 92 extend in parallel to a longitudinal axis of a subject support 18 shared by both of the scanners 12, 48, thus enabling the scanners 12, 48 to form a closed system.
  • a first motor and drive can provide movement of the PET scanner 48 in and out of the closed position, and a second motor and drive can also provide longitudinal movement and vertical adjustment of the subject support 18 in the imaging volumes 16, 54.
  • the MR and PET scanners 12, 48 can be mounted in a single, shared closed system with a common imaging volume.
  • a hybrid imaging system typically acquires a pair of images, each corresponding to a different imaging modality, and registers the images of the pair.
  • a challenge is that the registration relies on an alignment calibration that is periodically performed. If a hybrid scanner is not in a well calibrated state, the images will not be properly registered, which compromises image interpretation and can subsequently impact clinical diagnosis.
  • alignment calibration is performed according to a fixed schedule, such as every 6 months, or in response to specific events, such as system upgrades. However, system alignment can drift out of the calibrated state before the next scheduled calibration. Further, alignment calibrations can be performed unnecessarily without regard to the actual calibration state.
  • the present application further describes an approach for monitoring the trend of the calibration state of a hybrid scanner, and for detecting and reporting a potential misalignment. All image pairs acquired on the hybrid scanner are registered, registration results are monitored to detect trends, the alignment calibration state is statistically inferred from the detected trends, and the inferred calibration state is periodically reported to the user. The user can then perform alignment calibration on an as-needed basis. Patient motion can also be statistically inferred from the detected trends and the user can be alerted to evaluate an image pair for potential misalignment caused by patient motion.
  • a registration processor 90 applies a rigid registration algorithm to register the images to one another.
  • Any automated registration algorithm can be employed.
  • a simplified, fast algorithm can be employed.
  • a full-featured, automated algorithm is not necessary. If the hybrid scanner 12, 48 is perfectly calibrated, the registration algorithm perfectly registers the PET and MR images.
  • the registration parameters typically include three translation parameters and three rotation parameters. If there is no subject motion during the data acquisition, the registration parameters will be zeros for all three translation and all three rotation parameters in the digital imaging and communications in medicine (DICOM) space (i.e., patient space).
  • the registration algorithm can be augmented to only return the three translation parameters. If there is systematic drift from perfect registration, these three parameters would be sufficient to describe the drift.
  • the resulting registration parameters for the pair of PET and MR images are then stored in a registration parameter memory 94, along with the previous registration parameters for other pairs of PET and MR images generated since the last alignment calibration.
  • after each alignment calibration, the registration parameters of the registration parameter memory 94 are cleared.
  • An alignment calibration analysis processor 96 analyzes the registration parameters stored in the registration parameter memory 94 and detects trends in the calibration state of the hybrid scanner 12, 48. Further, an alignment calibration report processor 98 takes into account the results of the analysis, as well as user preferences, to provide a summary, typically a daily summary, of the calibration state of the hybrid scanner 12, 48. The report processor 98 can further prompt the user regarding corrective actions. There are several approaches by which the analysis and report processors 96, 98 can cooperate to monitor the calibration state of the hybrid scanner 12, 48.
  • according to a first approach, the user specifies a threshold for each registration parameter (e.g., for each of the six translation and rotation registration parameters).
  • the thresholds typically discriminate between registration parameters indicating proper alignment calibration and registration parameters indicating improper alignment calibration.
  • the thresholds are image resolution dependent.
  • each registration parameter can have separate thresholds for low and high resolution images. This is advantageous because the translation thresholds can, for example, be large (e.g., in millimeters (mm)) for low resolution images and small for high resolution images.
  • the analysis processor 96 retrieves the registration parameters from the registration parameter memory 94 and compares the absolute values of the retrieved registration parameters to the corresponding thresholds. Further, the analysis processor 96 can check for conditions triggering alignment calibration.
  • the trigger conditions can be user defined or predefined by, for example, the manufacturer of the analysis processor or of the hybrid scanner 12, 48.
  • One example of a trigger condition is "X% of the immediate past Y image pairs have errors in the registration parameters larger than the thresholds", where X and Y are user-specified numbers. To check this trigger condition, the registration parameters for the last Y image pairs are retrieved from the registration parameter memory and compared to the corresponding thresholds.
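  • a sketch of checking such a trigger condition (hypothetical Python; the list-of-dicts layout of the registration parameter memory is an assumption made here):

    def trigger_met(history, thresholds, x_pct, y):
        """Check the condition 'X% of the immediate past Y image pairs
        have a registration parameter whose absolute value exceeds the
        corresponding threshold'.

        history: list of per-pair registration parameter dicts, newest
        last. thresholds: absolute threshold per parameter name.
        """
        recent = history[-y:]
        bad = sum(
            1 for params in recent
            if any(abs(params[k]) > thresholds[k] for k in thresholds)
        )
        return bool(recent) and 100.0 * bad / len(recent) >= x_pct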
  • a summary report is generated by the report processor 98 and sent to the user or visually displayed on the system 10.
  • the summary report can include what percent of the image pairs in the registration parameter memory 94 have a registration parameter exceeding the corresponding threshold.
  • the summary can also include whether a new alignment calibration is recommended based on user preferences. For example, alignment calibration can be recommended if a trigger condition is met. If alignment calibration is not recommended and an alignment calibration is scheduled according to a fixed schedule, the user can make an informed decision as to whether to proceed with the alignment calibration.
  • the summary can identify which image pairs indicate the hybrid scanner 12, 48 is out of alignment calibration.
  • the image pairs indicating that the hybrid scanner 12, 48 is out of alignment calibration can be identified to the user so that the user can investigate the reason the image pairs indicate the hybrid scanner 12, 48 is out of alignment. If it is determined that motion of the subject, or cardiac or respiratory motion, caused an image pair to indicate misalignment, that image pair can be excluded from decision making. Moreover, if there was motion in a pair of images, the user can elect to register the images of the pair to correct for the motion.
  • the analysis processor 96 can identify the outliers among the past image pairs and exclude the identified outliers in the analysis. Any outlier detection approach can be employed.
  • the outlier detection suitably excludes the image pairs having subject motion or in which the registration algorithm failed. The outlier detection would not mask calibration and registration related errors. Further, the list of excluded outliers is added to the summary sent to the user so the user is informed and can take appropriate action.
  • according to a second approach, the first approach is extended.
  • the analysis processor 96 can account for the residual calibration error, as well as the intrinsic error of the registration algorithm itself. More specifically, the user is given the option to learn the residual calibration and registration errors from the past W number of image pairs immediately following the last calibration. W can be a user preference. Immediately following the last calibration, it is assumed that the hybrid scanner 12, 48 has not yet drifted out of calibration. When comparing the registration parameters to the thresholds, the residual calibration and registration errors are subtracted from the registration parameters. The remaining details of the first approach are the same.
  • the residual calibration and registration errors can be learned by calculating the mean μᵢ and the standard deviation σᵢ of each registration parameter i of the W image pairs, optionally with outlier removal as described above. In calculating the means μᵢ and the standard deviations σᵢ, it is assumed that each registration parameter follows a normal distribution N(μᵢ, σᵢ²). The mean of a parameter is considered the residual calibration and registration error.
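  • a minimal sketch of learning these residuals (hypothetical Python; the list-of-dicts layout is an assumption made here):

    import numpy as np

    def learn_residuals(history, w):
        """Mean and standard deviation of each registration parameter
        over the W image pairs immediately following the last
        calibration, modeling each parameter as N(mu_i, sigma_i**2).
        The means are the residual calibration and registration errors.

        history: list of per-pair parameter dicts, oldest first.
        """
        window = history[:w]
        mu = {k: float(np.mean([p[k] for p in window])) for k in window[0]}
        sigma = {k: float(np.std([p[k] for p in window], ddof=1)) for k in window[0]}
        return mu, sigma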
  • the thresholds discussed above in connection with the first approach were absolute thresholds.
  • the user can also probabilistically specify the thresholds. For a given absolute threshold, the probability of a registration parameter falling outside the threshold can be calculated from the distribution N(μᵢ, σᵢ²) even if the system is still in calibration. For example, for a threshold equal to 2σᵢ, the probability is about 5%.
  • the absolute threshold can also be calculated.
  • the user can specify absolute thresholds and/or probabilistic thresholds since the distribution allows translation between the two types of thresholds.
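  • a sketch of translating between the two kinds of thresholds under the normal model (hypothetical Python using SciPy; for a threshold of 2σ the two-sided tail probability evaluates to roughly 5%, matching the example above):

    from scipy.stats import norm

    def absolute_to_probabilistic(threshold, sigma):
        """Probability that a parameter drawn from N(mu, sigma**2)
        falls farther than `threshold` from its mean while the system
        is still in calibration."""
        return 2.0 * norm.sf(threshold / sigma)

    def probabilistic_to_absolute(prob, sigma):
        """Absolute threshold whose two-sided tail probability is
        `prob`."""
        return sigma * norm.isf(prob / 2.0)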
  • the analysis processor 96 calculates the mean μᵢ and the standard deviation σᵢ of each of the registration parameters from the past W number of image pairs immediately following the last calibration. The calculation is performed as described above assuming a normal distribution N(μᵢ, σᵢ²).
  • the analysis processor 96 calculates a Gaussian probability pᵢ = p(xᵢ | μᵢ, σᵢ²) for each registration parameter i from the mean μᵢ, the standard deviation σᵢ, and the registration parameter values xᵢ of the immediate past Y number of image pairs.
  • the probability pᵢ for a registration parameter i describes the probability of the registration parameter values xᵢ for the past Y image pairs being anomalies. If the hybrid scanner 12, 48 is still in calibration, xᵢ will be around μᵢ and pᵢ will be large. If the hybrid scanner 12, 48 is out of calibration, xᵢ will be far from μᵢ and pᵢ will be small.
  • the user does not specify an absolute or probabilistic threshold for each registration parameter. Instead, the user specifies a probability threshold ε to detect an anomaly (i.e., a bad registration due to the hybrid scanner 12, 48 being out of calibration or due to other reasons).
  • the probability threshold ε reflects the user's preference as to when an alignment calibration is warranted. If pᵢ < ε, the data point is considered an anomaly.
  • the analysis processor 96 compares the smallest pᵢ among all of the registration parameters i against ε. Alternatively, all of the probabilities pᵢ can be multiplied together and the product compared against ε. Depending on which approach is employed, the value of ε needs to be adjusted appropriately.
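  • a sketch of this anomaly test (hypothetical Python; the patent does not pin down how pᵢ is computed, so the two-sided tail probability under N(μᵢ, σᵢ²) is used here as one concrete choice):

    from scipy.stats import norm

    def is_anomalous(params, mu, sigma, eps, combine="min"):
        """Score one registration result against the learned
        in-calibration model and flag it when the score falls below
        eps (the user's probability threshold)."""
        p = {k: 2.0 * norm.sf(abs(params[k] - mu[k]) / sigma[k])
             for k in params}
        if combine == "min":
            score = min(p.values())
        else:
            # product of all p_i; eps must be adjusted accordingly
            score = 1.0
            for v in p.values():
                score *= v
        return score < eps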
  • trigger conditions can be employed as described above. For example, such a trigger condition might be "X% of the immediate past Y studies are anomalies". Outliers can also be excluded, as described above.
  • a summary report is generated by the report processor 98 and sent to the user or visually displayed to the user.
  • the summary report can include what percent of the image pairs in the registration parameter memory 94 have an anomalous registration parameter value.
  • the summary can also include whether a new alignment calibration is recommended based on user preferences. For example, alignment calibration can be recommended if a trigger condition is met. Even more, the summary can identify which image pairs indicate the hybrid scanner 12, 48 is out of alignment calibration.
  • the analysis processor 96 performs outlier detection on the image pairs of the registration parameter memory 94 to exclude image pairs with subject motion from the data set.
  • if the hybrid scanner 12, 48 is in calibration, the mean of a registration parameter will be due to residual calibration and registration error. If the hybrid scanner 12, 48 has drifted out of calibration, the mean of the registration parameters will be due to the drift out of the calibrated state, in addition to residual calibration and registration error.
  • the analysis processor 96 calculates the mean μᵢ and the standard deviation σᵢ of each registration parameter i from the past W number of image pairs immediately following the last calibration. Further, the analysis processor 96 calculates the mean μᵢ′ and the standard deviation σᵢ′ of each of the registration parameters i from the immediate past Y number of image pairs. The calculations are performed as described above assuming a normal distribution. The standard deviations are assumed to be the same for the same parameters, since the randomness is due to the implementation of optimization used in the registration. Thus, determining whether the hybrid scanner 12, 48 has drifted out of the calibration state becomes a statistical testing problem between means.
  • the statistical testing problem can be framed as a t-test of the difference between means.
  • the number of degrees of freedom is w + y - 2, where w and y are the numbers of the past W and Y image pairs, respectively.
  • the null hypothesis is that two means are equal (i.e., there is no drift from the calibrated state).
  • the p-value for each registration parameter is calculated from the corresponding t-value and compared to a user specified p-value threshold (e.g., 0.05 or 0.01). If the p-value is less than the p-value threshold, the null hypothesis is rejected.
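  • a sketch of this test (hypothetical Python using SciPy; ttest_ind with equal variances assumed performs the pooled t-test with w + y - 2 degrees of freedom):

    from scipy.stats import ttest_ind

    def drifted_parameters(post_cal, recent, p_threshold=0.05):
        """Per-parameter two-sample t-test between the W image pairs
        immediately following calibration (post_cal) and the most
        recent Y image pairs (recent). Returns the parameters whose
        null hypothesis of equal means (no drift) is rejected, with
        their p-values."""
        rejected = {}
        for k in post_cal[0]:
            a = [p[k] for p in post_cal]
            b = [p[k] for p in recent]
            stat, p_value = ttest_ind(a, b, equal_var=True)
            if p_value < p_threshold:
                rejected[k] = float(p_value)
        return rejected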
  • a summary report is further generated by the report processor 98 and sent to the user.
  • the summary report can include the list of outliers (i.e., the image pairs suspected of motion during acquisition). This can allow the user to check the registrations.
  • the summary report can further include the p-value for each registration parameter. Even more, the summary report indicates whether the hybrid scanner 12, 48 is suspected of drifting from calibration and whether an alignment calibration is recommended. Alignment calibration is recommended where the null hypothesis is rejected.
  • the foregoing approaches modeled each registration parameter i as a normal distribution N(μᵢ, σᵢ²).
  • the registration parameters are alternatively modeled as a multivariate normal distribution N(μ, Σ), where μ is a mean vector of the registration parameters and Σ is the covariance matrix.
  • the normal distribution is replaced with the multivariate normal distribution.
  • the foregoing approaches to monitoring the hybrid scanner 12, 48 pertained to determining when the hybrid scanner 12, 48 drifted out of alignment calibration.
  • these approaches can be extended to determining subject motion.
  • the reconstruction processors 38, 76 can cooperate with the analysis processor 96 to warn the user of potential patient motion between the PET images and the MR images that can impact the attenuation correction for the PET images.
  • the main controllers 42 or other device displaying the PET and MR images can cooperate with the analysis processor 96 to warn the user of a potential misalignment between the MR and PET images. In this way, reading and diagnosis can be improved.
  • while the registration processor 90, the analysis processor 96 and the report processor 98 were illustrated as external to the PET and MR main controllers 42, it is to be appreciated that one or more of these components can be integrated with the main controllers 42 as software, hardware or a combination of both.
  • while the registration parameter memory 94 was illustrated as external to the main controllers 42, it is to be appreciated that the registration parameter memory 94 can be integrated with the main controllers 42.
  • a method 150 for truncation compensation of an image is provided.
  • the method 150 is typically performed by the truncation compensation processor 82.
  • the method 150 includes receiving or generating 152 a joint histogram describing frequencies of feature vectors in a pair of images.
  • the images of the pair can include the image to be compensated for truncation.
  • the images are registered to each other and resampled as necessary so there is a one-to-one correspondence between pixels (or voxels) in the images.
  • the images are generated with different imaging modalities and of a common target volume.
  • One image of the pair is generated using the same imaging modality used to generate the image to be compensated for truncation.
  • the two imaging modalities are MR and PET.
  • each of the feature vectors includes the same variables and is of the same size. Further, each of the feature vectors represents features at a spatial location by at least values of images of the pair at the spatial location.
  • the feature vectors each include values for spatial and/or contextual data.
  • the feature vectors can each include a spatial value, such as a Z-direction value, for the corresponding spatial location.
  • the feature vectors can each include a contextual value for the corresponding spatial location.
  • the contextual value can correspond to a value of a neighbor of the corresponding spatial location in the image of the pair generated using the modality other than the modality of the image for which truncation compensation is desired.
  • the image values and the spatial and/or contextual data at each spatial location are mapped 154 (or computed) to the feature space to generate a feature vector.
  • the spatial location is common to both images.
  • the generated feature vector is then used to lookup 156 a location in the joint histogram and a count is added 158 to the location.
  • the joint histogram includes N dimensions, where N is the number of dimensions of the feature vectors.
  • the joint histogram is used to estimate 160 values of the truncated region of the image to be compensated for truncation.
  • the joint histogram is used to estimate 160 values of the entire image.
  • the maximum of an estimated value and an existing value is used as the value of each spatial location in the non-truncated region of the image.
  • the estimation 160 includes looking up 162 distributions of values in the joint histogram with partial feature vectors. Estimates are selected 164 from the distributions.
  • the PET voxel value and its spatial and context data are mapped 166 to the feature space to generate a partial feature vector.
  • the feature vector is partial because there is no MR value at the spatial location.
  • the partial feature vector is used to look up 162 a distribution of MR values in the joint histogram, and a value is selected 164 from the distribution as an estimate. For example, the average value of the distribution can be selected. As another example, the value with the largest histogram count of the distribution can be selected.
  • the method 150 is applied to MRAC.
  • MR data of a target volume is reconstructed into an MR image, which is then mapped to an MRAC map image.
  • PET data of the target volume can be reconstructed into a PET image without attenuation correction, and the joint histogram can be generated from the PET and MRAC map images. That way, the truncated MRAC map values can be estimated.
  • the joint histogram is received from an external source or generated from a different pair of PET and MRAC map images. Once the joint histogram is generated or received, values of the MRAC map image for the truncated region are estimated from the joint histogram to compensate the MRAC map image for truncation.
  • the PET data is then reconstructed into a PET image with attenuation correction using the truncation compensated MRAC map image.
  • a method 200 for monitoring a trend in an alignment calibration state of a hybrid imaging scanner 12, 48 is provided.
  • the method 200 is suitably performed through cooperation between the alignment calibration analysis processor 96 and the alignment calibration report processor 98.
  • Sets of registration parameters are received 202, typically from a registration parameter memory 94. Each set corresponds to a pair of images captured together on the hybrid imaging scanner 12, 48. Further, each set describes the registration results of the pair.
  • the received sets are analyzed 204 to identify trends and to statistically infer the alignment calibration state of the hybrid imaging scanner 12, 48. Based on the analysis, a report is sent 206 or visually displayed to a user indicating the inferred alignment calibration state of the hybrid imaging scanner 12, 48. Image pairs with subject motion can also be identified from the analyzing 204.
  • a memory includes any device or system storing data, such as a random access memory (RAM) or a read-only memory (ROM).
  • a processor includes any device or system processing input data to produce output data, such as a microprocessor, a microcontroller, a graphic processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and the like;
  • a controller includes any device or system controlling another device or system, and typically includes at least one processor;
  • a user input device includes any device, such as a mouse or keyboard, allowing a user of the user input device to provide input to another device or system;
  • a display device includes any device for displaying data, such as a liquid crystal display (LCD) or a light emitting diode (LED) display.

Abstract

A system (10) and method (150) compensate a medical image for truncation. A histogram module (84) is configured to receive or generate a joint histogram describing frequencies of feature vectors in a pair of images. The images of the pair are generated with different imaging modalities. The feature vectors each represent features at a spatial location by at least values of the images of the pair at the spatial location. An estimation module (86) is configured to estimate values for a truncated region of the image. The estimation includes looking up distributions of values in the joint histogram with partial feature vectors. A system (10) and method (200) further monitor the trend of an alignment calibration state of a hybrid imaging scanner. The calibration state is statistically inferred from registration parameters resulting from registration of image pairs generated with the scanner.

Description

MAGNETIC RESONANCE (MR)-BASED ATTENUATION CORRECTION AND MONITOR ALIGNMENT CALIBRATION
The present application relates generally to medical imaging. It finds particular application in conjunction with magnetic resonance (MR)-based attenuation correction (AC) (i.e., MRAC) during positron emission tomography (PET) reconstruction, and will be described with particular reference thereto. However, it is to be understood that it also finds application in other usage scenarios, and is not necessarily limited to the aforementioned application.
MRAC is AC during PET reconstruction that uses an attenuation map derived from an MR image. A challenge with MRAC is that the MR image is oftentimes truncated during MR acquisition. To deal with this, various approaches to truncation compensation have been developed. One approach is based on simultaneous transmission and emission estimation. However, this approach suffers from computational complexity.
Another approach is to reconstruct the non-AC (NAC) PET image first. The boundary of the patient is then segmented in the NAC PET image and used as a mask for the MR image. Any missing portions of the MR image within the mask are then filled in as soft tissue. However, the missing portions are arbitrarily filled with soft tissue. Further, the approach requires a high quality NAC PET image to sufficiently segment the NAC PET image, since the quality of the segmentation determines the quality of truncation compensation. In some instances, time-of-flight (ToF) is employed to improve the quality of the NAC PET image. However, using ToF increases the reconstruction time.
MRAC is often employed together with a hybrid PET and MR imaging system. A hybrid imaging system typically acquires a pair of images, each corresponding to a different imaging modality, and registers the images of the pair. A challenge with hybrid imaging systems is that registration relies on an alignment calibration that is periodically performed. If a hybrid imaging system is not in a well calibrated state, the images will not be properly registered, which compromises image interpretation and can subsequently impact clinical diagnosis. To address the foregoing challenge, a hybrid imaging system is typically calibrated according to a fixed schedule, such as every 6 months, or in response to specific events, such as system upgrades. However, system alignment can drift out of the calibrated state before the next scheduled calibration. Further, alignment calibrations can be performed unnecessarily based on a fixed schedule without regard to the actual calibration state. This wastes time and can reduce the availability of hybrid imaging systems for clinical use.
The present application provides a new and improved system and method which overcome these problems and others.
In accordance with one aspect, a system for truncation compensation of an image is provided. The system includes a histogram module configured to receive or generate a joint histogram describing frequencies of feature vectors in a pair of images. The images of the pair are generated with different imaging modalities. The feature vectors represent features at a spatial location by at least values of the images of the pair at the spatial location. The system further includes an estimation module configured to estimate values for a truncated region of the image. The estimation includes looking up distributions of values in the joint histogram with partial feature vectors at a truncated spatial location.
In accordance with another aspect, a method for truncation compensation of an image is provided. The method includes receiving or generating a joint histogram describing frequencies of feature vectors in a pair of images. The images of the pair are generated with different imaging modalities. The feature vectors represent features at a spatial location by at least values of the images of the pair at the spatial location. The method further includes estimating values for a truncated region of the image. The estimating includes looking up distributions of values in the joint histogram with partial feature vectors at a truncated spatial location.
In accordance with another aspect, a system for monitoring a calibration state of a hybrid imaging scanner is provided. The system includes at least one processor programmed to receive or generate sets of registration parameters resulting from registration of image pairs generated with the hybrid imaging scanner, analyze the sets to statistically infer a calibration state of the hybrid imaging scanner, and report the inferred calibration state of the hybrid imaging scanner to a user.
One advantage resides in reduced computational complexity.
Another advantage resides in reduced segmentation.
Another advantage resides in the intelligent filling of missing magnetic resonance (MR) data.
Another advantage resides in improved image reading and diagnosis.
Another advantage resides in improved efficiency in maintaining alignment calibration.
Another advantage resides in reduced registration errors.
Still further advantages of the present invention will be appreciated by those of ordinary skill in the art upon reading and understanding the following detailed description.
The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIGURE 1 illustrates a hybrid magnetic resonance (MR) and positron emission tomography (PET) imaging system employing truncation compensation according to the present application.
FIGURE 2 illustrates a PET detector of the imaging system.
FIGURE 3A illustrates MR images before truncation compensation. FIGURE 3B illustrates MR images after truncation compensation.
FIGURE 4 illustrates a method for truncation compensation according to the present application.
FIGURE 5 illustrates a method for monitoring a trend in an alignment calibration state of a hybrid imaging scanner according to the present application.
The present application describes an approach for truncation compensation of a magnetic resonance (MR) image used during MR-based attenuation correction (AC) of a positron emission tomography (PET) image (i.e., MRAC) reconstruction. According to the approach, a joint PET and MR histogram with spatial and contextual data is generated and used to estimate the truncated portions of the MR image. For example, at a particular spatial location, the PET value, the MR value, and the PET values surrounding the given location can all be treated as contextual data. The spatial and contextual data can be analyzed and combined into the joint histogram to describe the statistical properties of the data. Thereafter, a PET value associated with a spatial location of the truncated MR data can be applied to the joint histogram to predict the most likely MR value.
With reference to FIGURE 1, an imaging system 10 includes an MR scanner 12. The MR scanner 12 generates raw MR scan data and includes a housing 14 defining an MR imaging volume 16 for receiving a target volume of a subject to be imaged. A subject support 18 can be employed to support the subject and to position the target volume near the isocenter of the MR imaging volume 16.
A main magnet 20 of the MR scanner 12 creates a strong, static B₀ magnetic field extending through the MR imaging volume 16. The strength of the static B₀ magnetic field in the MR imaging volume 16 is commonly 0.23 Tesla, 0.5 Tesla, 1.5 Tesla, 3 Tesla, 7 Tesla, and so on, but other strengths are contemplated.
A gradient controller 22 of the MR scanner 12 is controlled to superimpose magnetic field gradients, such as x, y and z gradients, on the static B₀ magnetic field in the MR imaging volume 16 using a plurality of magnetic field gradient coils 24 of the MR scanner 12. The magnetic field gradients spatially encode magnetic spins within the MR imaging volume 16. Typically, the plurality of magnetic field gradient coils 24 includes three separate magnetic field gradient coils spatially encoding in three orthogonal spatial directions.
Further, one or more transmitters 26 of the MR scanner 12 are controlled to transmit B₁ resonance excitation and manipulation radio frequency (RF) pulses into the MR imaging volume 16 with one or more transmit coil arrays 28. The B₁ pulses are typically of short duration and, when taken together with the magnetic field gradients, achieve a selected manipulation of MR. For example, the B₁ pulses excite the hydrogen dipoles to resonance and the magnetic field gradients encode spatial information in the frequency and phase of the resonance signal. By adjusting the RF frequencies, resonance can be excited in other dipoles, such as phosphorous, which tend to concentrate in known tissues, such as bones. An MR scan controller 30 controls the gradient controller 22 and/or the transmitters 26 according to imaging sequences to produce spatially encoded MR signals within the MR imaging volume 16. An imaging sequence defines a sequence of B₁ pulses and/or magnetic field gradients. Further, the imaging sequences can be received from a device or system remote from or local to the MR scan controller 30, such as a sequence memory.
One or more RF receivers 32, such as a transceiver, receive the spatially encoded magnetic resonance signals from the MR imaging volume 16 and demodulate the received spatially encoded magnetic resonance signals to MR data sets. The MR data sets include, for example, k-space data trajectories. To receive the spatially encoded magnetic resonance signals, the receivers 32 use one or more receive coil arrays 28. As illustrated, the receivers 32 share a whole body coil 28 with the transmitters 26 by way of a switch 34 that selectively connects the receivers 32 and the transmitters 26 to the coil 28, depending upon whether the transmitters 26 or the receivers 32 are being used. The receivers 32 typically store the MR data sets in an MR buffer memory 36.
An MR reconstruction processor 38 reconstructs the MR data sets into MR images or maps of the MR imaging volume 16. This includes, for each MR signal captured by the MR data sets, spatially decoding the spatial encoding by the magnetic field gradients to ascertain a property of the MR signal from each spatial region, such as a pixel or voxel. The intensity or magnitude of the MR signal is commonly ascertained, but other properties related to phase, relaxation time, magnetization transfer, and the like can also be ascertained. The MR images or maps are typically stored in an MR image memory 40.
An MR main controller 42 coordinates the generation of one or more MR diagnostic images of the target volume using one or more MR scans of the target volume. For example, the MR main controller 42 provides scan parameters to the MR scan controller 30. The MR main controller 42 can carry out the foregoing functionality by software, hardware or both. Where the MR main controller 42 employs software, the MR main controller 42 includes at least one processor executing the software. The software is suitably stored on a program memory. Further, the MR main controller 42 can be managed by a user using a graphical user interface presented to the user by way of a display device 44 and a user input device 46. The user can, for example, initiate imaging, display images, manipulate images, etc. Notwithstanding that the MR reconstruction processor 38 and the MR scan controller 30 were illustrated as external to the MR main controller 42, it is to be appreciated that one or more of these components can be integrated with the MR main controller 42 as software, hardware or a combination of both. For example, the MR reconstruction processor 38 can be integrated with the MR main controller 42 as a software module executing on the at least one processor of the MR main controller 42. Further, notwithstanding that the MR buffer memory 36 and the MR image memory 40 were illustrated as external to the MR main controller 42, it is to be appreciated that one or more of these components can be integrated with the MR main controller 42.
A challenge with MR is that the field of view (FOV) is limited in MR imaging (e.g., to 50 to 55 centimeters (cm) in the trans-axial direction). In particular, reduced B₀ homogeneity at the edges of the FOV and non-linearity of the magnetic field gradients in the outer areas of the FOV are responsible for the restriction of the FOV. This often leads to truncated anatomical structures (e.g., truncated arms and shoulders) in the outer areas of the FOV. The problem is exacerbated in the examination of larger and overweight subjects. Hence, an approach to truncation compensation is hereafter described.
With continued reference to FIGURE 1, the imaging system 10 further includes a PET scanner 48 for truncation compensation. The PET scanner 48 generates PET data and includes a stationary gantry 50 housing a plurality of gamma detectors 52 arranged around a bore of the scanner. The bore defines a PET imaging volume 54 for receiving a target volume of a subject to be imaged. The detectors 52 are typically arranged in one or more stationary rings extending along the length of the PET imaging volume 54. However, rotatable heads are also contemplated. The detectors 52 detect gamma photons from the PET imaging volume 54 and generate the PET data.
With reference to FIGURE 2, each of the detectors 52 includes one or more scintillators 56 arranged in a grid. The scintillators 56 scintillate and generate visible light pulses in response to energy depositions by gamma photons. As illustrated, a gamma photon 58 deposits energy in a scintillator 60, thereby resulting in a visible light pulse 62. The magnitude of a visible light pulse is proportional to the magnitude of the corresponding energy deposition. Examples of scintillators 56 include sodium iodide doped with thallium (NaI(Tl)), cerium-doped lutetium yttrium orthosilicate (LYSO) and cerium-doped lutetium oxyorthosilicate (LSO). In addition to the scintillators 56, the detectors 52 each include a sensor 64 detecting the visible light pulses in the scintillators 56. The sensor 64 includes a plurality of light sensitive elements 66. The light sensitive elements 66 are arranged in a grid of like size as the grid of scintillators and optically coupled to corresponding scintillators 56. The light sensitive elements 66 can be coupled to the scintillators 56 in a one-to-one arrangement, a one-to-many arrangement, a many-to-one arrangement, or any other arrangement. Typically, as illustrated, the light sensitive elements 66 are silicon photomultipliers (SiPMs), but photomultiplier tubes (PMTs) are also contemplated.
Where the light sensitive elements 66 are SiPMs, there is typically a one-to-one correspondence between the scintillators 56 and the light sensitive elements 66, as illustrated, but other correspondences are contemplated. Each of the SiPMs includes a photodiode array (e.g., a Geiger-mode avalanche photodiode array), each photodiode corresponding to a cell of the photodiode array. Suitably, the SiPMs are configured to operate in a Geiger mode, producing a series of unit pulses, so as to operate in a digital mode. Alternatively, the SiPMs can be configured to operate in an analog mode. Where the light sensitive elements 66 are PMTs, there is often a many-to-one correspondence between the scintillators 56 and the light sensitive elements 66, but other correspondences are contemplated.
Referring back to FIGURE 1, during a scan of a subject using the PET scanner 48, a target volume of the subject is injected with a radiopharmaceutical or radionuclide. The radiopharmaceutical or radionuclide causes gamma photons to be emitted from the target volume. The target volume is then positioned in the PET imaging volume 54 using a subject support 18 corresponding to the PET scanner 48. Once the target volume is positioned within the PET imaging volume, the PET scanner 48 is controlled by a PET scan controller 68 to perform a scan of the target volume and event data is acquired. The acquired event data describes the time, location and energy of each scintillation event detected by the detectors and is suitably stored in a PET buffer memory 70.
The location of a scintillation event corresponds to a pixel of the PET scanner 48. A pixel is the smallest area to which a scintillation event can be localized. For example, suppose the light sensitive elements 66 are SiPMs and there is a one-to-one coupling between scintillators 56 and the light sensitive elements 66. In such instances, the smallest area to which a scintillation event can be localized is typically a scintillator/SiPM pair, whereby a pixel typically corresponds to a scintillator/SiPM pair. As another example, suppose the light sensitive elements 66 are PMTs or SiPMs and there is a many-to-one coupling between the scintillators 56 and the light sensitive elements 66. In such instances, Anger logic is typically used to localize scintillation events to individual scintillators, whereby a pixel typically corresponds to a scintillator, but not a light sensitive element.
Subsequent to acquisition, or concurrently therewith, an event verification processor 72 filters the buffered event data. The filtering includes comparing the energy (cell counts in the digital mode) of each scintillation event to an energy window, which defines the acceptable energy range for scintillation events. Those scintillation events falling outside the energy window are filtered out. Typically, the energy window is centered on the known energy of the gamma photons to be received from the PET imaging volume 54 (e.g., 511 kiloelectron volts (keV)) and determined using the full width at half maximum (FWHM) of an energy spectrum generated from a calibration phantom. The event verification processor 72 further generates lines of response (LORs) from the filtered event data. A LOR is defined by a pair of gamma photons striking the detectors 52 within a specified time difference of each other (i.e., a coincident event). The specified time difference is small enough to ensure the gammas are from the same annihilation event. Hence, assuming that there is a one-to-one correspondence between scintillation events and gamma photons striking the detectors 52, a LOR can be defined by a pair of scintillation events.
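By way of a hedged illustration of the event verification just described, the following sketch performs energy windowing followed by a naive coincidence search over time-sorted events. The function name, the window half-width, and the coincidence window are illustrative assumptions, not values from the disclosure, and the pairing of consecutive events is a strong simplification of a practical coincidence processor.

```python
def verify_events(events, e_center_kev=511.0, e_halfwidth_kev=75.0, tau_ns=4.0):
    """Filter scintillation events by an energy window, then pair events
    occurring within tau_ns of one another into lines of response (LORs).

    events: iterable of (time_ns, pixel, energy_kev) tuples sorted by time.
    Assumes a one-to-one correspondence between scintillation events and
    gamma photons, as in the simplified discussion above.
    """
    kept = [e for e in events if abs(e[2] - e_center_kev) <= e_halfwidth_kev]
    lors = []
    for first, second in zip(kept, kept[1:]):
        # A coincident event: two gammas striking the detectors within the
        # specified time difference define a LOR between two pixels.
        if second[0] - first[0] <= tau_ns:
            lors.append((first[1], second[1]))
    return lors
```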
The foregoing filtering of event data and determining of LORs assumed that there was a one-to-one correspondence between scintillation events and gamma photons striking the detectors 52. However, those skilled in the art will appreciate that in practice, a gamma photon can yield multiple scintillation events. In some instances, before the event data is passed to the event verification processor 72, the scintillation events of the event data are combined based on gamma photon. For example, the energy of scintillation events belonging to a common gamma photon can be summed and the location with which the gamma photon struck the detectors 52 can be approximated. The event verification processor 72 then filters and determines LORs from the updated event data.
Data describing the coincident events, as or once determined by the event verification processor 72, is stored within a list mode memory 74 as a list, where each list item corresponds to a coincident event. The data for each of the list items describes the corresponding LOR by the spatial data (e.g., by the X and Z locations) for the two pixels to which the pair of gamma photons of the LOR are localized. Further, the data for each of the list items can optionally describe the energy of the two gamma photons of the corresponding coincident event, and/or either the time stamps of the two gamma photons or the difference between the time stamps of the two gamma photons.
A PET reconstruction processor 76 reconstructs the list mode data into a final, reconstructed image of the target volume. The reconstructed image is typically stored in a PET image memory 78. To generate the reconstructed image, any suitable reconstruction algorithm can be employed. For example, an iterative-based reconstruction algorithm can be employed. The reconstruction can be performed with or without AC. As to the former, an attenuation map from an attenuation map memory 80 is employed. In some instances, the AC is MR-based AC (i.e., MRAC) and the attenuation map is generated using the MR scanner 12. Further, the reconstruction can be performed with or without time of flight (ToF).
A PET main controller 42 coordinates the generation of one or more PET diagnostic images of the target volume using one or more PET scans of the target volume. For example, the PET main controller 42 provides scan parameters to the PET scan controller 68. The PET main controller 42 can carry out the foregoing functionality by software, hardware or both. Where the PET main controller 42 employs software, the PET main controller 42 includes at least one processor executing the software. The software is suitably stored on a program memory. Further, the PET main controller 42 can be managed by a user using a graphical user interface presented to the user by way of a display device 44 and a user input device 46. The user can, for example, initiate imaging, display images, manipulate images, etc.
Notwithstanding that the PET reconstruction processor 76, the event verification processor 72, and the PET scan controller 68 were illustrated as external to the PET main controller 42, it is to be appreciated that one or more of these components can be integrated with the PET main controller 42 as software, hardware or a combination of both. For example, the PET reconstruction processor 76 and the event verification processor 72 can be integrated with the PET main controller 42 as a software module executing on the at least one processor of the PET main controller 42. Further, notwithstanding that the PET buffer memory 70, the list mode memory 74 and the PET image memory 78 were illustrated as external to the PET main controller 42, it is to be appreciated that one or more of these components can be integrated with the PET main controller 42.
As discussed above, the PET reconstruction processor 76 can perform image reconstruction with MRAC and an attenuation map generated using the MR scanner 12. A challenge posed by traditional systems employing MRAC is that the MR image used to derive the attenuation map is truncated. To address this challenge, a truncation compensation processor 82 generates 84 a joint PET and MR histogram, typically with spatial and/or contextual data, and estimates 86 the truncated MR image values from the joint histogram. The complete MR image is then passed to an attenuation map generation processor 88 that generates an attenuation map used by the PET reconstruction processor 76 to generate an MRAC PET image. The attenuation map is typically stored in the attenuation map memory 80.
More specifically, the truncation compensation processor 82 receives a non-AC PET image of a target volume, typically from the PET image memory 78. Further, an MR image of the target volume is received, typically from the MR image memory 40. The MR and PET main controllers 42 can be used to coordinate the generation of the images. Further, the values of the MR and PET images are typically normalized and clamped. The MR image is registered to the PET image using, for example, a registration processor 90 or by system calibration (discussed hereafter). Further, the MR image is resampled so there is a one-to-one correspondence between the pixels (i.e., voxels) of the MR and PET images. A pixel in the MR and PET images is the smallest area to which a value (e.g., a PET or MR value) can be localized.
Once the MR image is registered and resampled, the joint histogram is generated 84 by combining the MR image with the PET image. The joint histogram is one of a global joint histogram and a set of localized joint histograms. The global joint histogram makes use of the entire overlapping image volume of the MR and PET images, whereas the localized joint histograms collectively make use of the entire overlapping image volume, but individually make use of non-overlapping subsets of the entire overlapping image volume.
The global joint histogram is typically two dimensional (2D) and includes a dimension corresponding to PET values and a dimension corresponding to MR values. A count is added to a corresponding location of the global joint histogram for each pair of MR and PET values corresponding to the same spatial location in the MR and PET images. That is to say, each pair of MR and PET values corresponding to the same spatial location in the MR and PET images is used to look up a corresponding location in the global joint histogram. A count is then added to this location.
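A minimal sketch of such a two-dimensional global joint histogram follows, assuming registered, resampled NumPy arrays whose values are already normalized and clamped to [0, 1]; the function name and bin count are illustrative choices, not part of the embodiment.

```python
import numpy as np

def global_joint_histogram(mr_img, pet_img, n_bins=64):
    """Add a count for each pair of co-located (MR, PET) values.

    mr_img and pet_img: arrays of equal shape; each pair of values at a
    common spatial location indexes one cell of the 2D histogram.
    """
    hist, mr_edges, pet_edges = np.histogram2d(
        mr_img.ravel(), pet_img.ravel(),
        bins=n_bins, range=[[0.0, 1.0], [0.0, 1.0]])
    return hist, mr_edges, pet_edges
```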
In some instances, the global joint histogram includes one or more additional dimensions (i.e., more than the two dimensions discussed above) to convey context data. The context data, for example, describes neighboring PET values. In a three dimensional (3D) space, a pixel or voxel can have up to 26 neighbors. Additional dimensions can be added for one or more of these neighbors. For example, in some instances, the global joint histogram can include a dimension for the neighbor to the immediate left. The global joint histogram is generated as above except that the additional dimensions are taken into account. More specifically, the features of the three or more dimensions are extracted at each spatial location of the overlapping image volume and used to look up a corresponding location in the global joint histogram. A count is then added to this location. The features at a spatial location include the PET and MR values at the spatial location, and the PET values of the additional contextual dimensions.
As noted above, a dimension for each of the 26 neighbors is possible. However, generating the global joint histogram with so many dimensions can be computationally demanding and require a considerable amount of memory. To reduce the computational burden in one instance, only six contextual dimensions are employed. These dimensions can correspond to, for example, the neighbors to the immediate left, right, front, back, top, and bottom. Alternatively, only three contextual dimensions can be employed. These dimensions can correspond to, for example, the average of the immediate top and bottom neighbors, the average of the immediate left and right neighbors, and the average of the immediate front and back neighbors. To reduce the memory demand, instead of defining the dimensions using the neighboring PET values, the dimensions are defined using the differences between the neighboring PET values and the center PET value. This reduces memory because the dynamic range of the differences is less than the dynamic range of the PET values.
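A sketch of this reduced, three-dimensional context is given below, assuming NumPy arrays; the function name and the nearest-neighbor padding used at the image borders are illustrative assumptions.

```python
import numpy as np

def context_features(pet_img):
    """Per voxel, return the differences between the averages of the
    top/bottom, left/right, and front/back PET neighbors and the center
    PET value; the differences have a smaller dynamic range than raw
    PET values, which reduces the histogram's memory demand.
    """
    p = np.pad(pet_img, 1, mode='edge')   # replicate values at the borders
    c = pet_img
    avg_tb = 0.5 * (p[:-2, 1:-1, 1:-1] + p[2:, 1:-1, 1:-1])
    avg_lr = 0.5 * (p[1:-1, :-2, 1:-1] + p[1:-1, 2:, 1:-1])
    avg_fb = 0.5 * (p[1:-1, 1:-1, :-2] + p[1:-1, 1:-1, 2:])
    return np.stack([avg_tb - c, avg_lr - c, avg_fb - c], axis=-1)
```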
As an alternative to the global joint histogram, the joint histogram can be a set of localized joint histograms. The localized joint histograms collectively make use of the entire overlapping image volume, but individually make use of non-overlapping subsets of the entire overlapping image volume. The non-overlapping subsets are defined by spatial dimensions (e.g., the Z direction). Each of the localized joint histograms is generated in the same manner as the global joint histogram except that it is generated from a subset of the overlapping image volume defined by one or more spatial dimensions. In some instances, the localized joint histograms take into account context data, as described above.
To illustrate, suppose the overlapping image volume is divided into subsets along the Z direction (typically along the axes of the bores of the scanners 12, 48 and typically along the length of the subject). For example, one subset might include all spatial locations of the imaging volume with Z values between one and ten. A localized joint histogram is then generated, optionally taking into account context data, for each of the subsets. The localized joint histograms are generated as above except that each localized joint histogram is associated with a Z value, or a range of Z values, and limited to MR and PET values located at the Z value or in the neighborhood of the Z value. In this way, the joint histogram includes a set of localized joint histograms varying spatially along the Z direction.
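Continuing the sketch, a set of localized joint histograms along the Z direction might look as follows, where axis 0 of the arrays is assumed to run along Z, values are assumed normalized to [0, 1], and the slab thickness is an illustrative user choice.

```python
import numpy as np

def localized_joint_histograms(mr_img, pet_img, slab=10, n_bins=64):
    """One 2D joint histogram per non-overlapping slab of Z values."""
    histograms = {}
    for z0 in range(0, mr_img.shape[0], slab):
        z1 = min(z0 + slab, mr_img.shape[0])
        hist, _, _ = np.histogram2d(
            mr_img[z0:z1].ravel(), pet_img[z0:z1].ravel(),
            bins=n_bins, range=[[0.0, 1.0], [0.0, 1.0]])
        histograms[(z0, z1)] = hist   # keyed by the slab's Z range
    return histograms
```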
As an alternative to generating the joint histogram from PET and MR images specific to the subject, the joint histogram can be generated from PET and MR images for a population of subjects. The population can be a population to which the subject belongs. For example, if the subject suffers from prostate cancer, the joint histogram can be generated from PET and MR images for patients with prostate cancer. Alternatively, the population can be a population to which the subject does not belong but in which truncation is common.
Once the joint histogram is generated, a determination is made as to where the MR image is truncated. The truncation occurs where MR values are considered background and PET values are considered foreground. The determination can be performed manually by a clinician segmenting the MR image to identify the truncated portions of the MR image. Alternatively, the determination can be performed automatically.
The automatic determination can be performed using heuristics that indicate where truncation would happen (e.g., on the periphery of the images). Alternatively, the automatic determination can be performed by estimating a threshold for background MR values and a threshold for foreground PET values. These thresholds can then be applied to the MR and PET images to identify where the MR image is truncated. In some instances, the thresholds are estimated using marginal histogram analysis of the joint histogram. For example, assuming the number of background voxels decreases exponentially, exponential fitting of the first few points on the marginal histograms gives estimates of the thresholds.

Using the determined portions of the MR image that are truncated, the values of the pixels or voxels in the truncated portion are estimated 86. Estimation depends upon the number of dimensions in the joint histogram. For each pixel or voxel in the truncated portion, the corresponding values for all the dimensions, except the dimension for MR values, are determined. The distribution of MR values in the joint histogram is then looked up using these values. For example, supposing the joint histogram has only two dimensions, the PET value for each pixel or voxel in the truncated portion is used to look up a distribution of MR values in the joint histogram.
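As an illustration of the marginal-histogram threshold estimation described above, the following sketch fits an exponential decay to the first few bins of a marginal histogram; the number of fitted bins and the cutoff criterion (the fitted count dropping below one) are assumptions, not part of the disclosure.

```python
import numpy as np

def background_threshold_bin(marginal, n_fit=5):
    """Fit log(counts) over the first n_fit bins with a line (i.e., an
    exponential decay of background voxels) and return the bin index at
    which the fitted count falls below one."""
    bins = np.arange(n_fit)
    counts = np.maximum(marginal[:n_fit].astype(float), 1.0)  # avoid log(0)
    slope, intercept = np.polyfit(bins, np.log(counts), 1)
    if slope >= 0:
        return n_fit   # no decay detected; fall back to the fitted span
    return int(np.ceil(-intercept / slope))
```

With the thresholds in hand, the truncated region can be taken, per the text above, as the voxels where the MR value is background while the PET value is foreground.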
From the MR distribution for a pixel or voxel, an MR value is selected. For example, the average of foreground MR values in the distribution can be used. Foreground MR values can be identified using a threshold determined as described above. As another example, the MR value with the largest probability (i.e., the most histogram counts) can be used. As another example, the MR value is randomly selected from the distribution with weighting based on the histogram counts.
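The three selection rules just mentioned could be sketched as follows; the rule names and the foreground threshold parameter are illustrative, and a non-empty distribution with at least one foreground bin is assumed.

```python
import numpy as np

def select_mr_value(distribution, bin_centers, rule='mean_foreground',
                    foreground_threshold=0.0, rng=None):
    """Pick an MR estimate from a distribution looked up in the joint
    histogram: the average of foreground values, the most frequent value,
    or a random draw weighted by the histogram counts."""
    if rule == 'mean_foreground':
        fg = bin_centers > foreground_threshold
        return np.average(bin_centers[fg], weights=distribution[fg])
    if rule == 'mode':
        return bin_centers[np.argmax(distribution)]
    rng = rng or np.random.default_rng()
    return rng.choice(bin_centers, p=distribution / distribution.sum())
```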
In some instances, the distribution is first filtered. The MR value can then be selected as discussed above from the filtered distribution. As another example, the distribution can be first fit with a curve, such as a Gaussian curve. The MR value can then be selected as discussed above from the curve.
The foregoing relies on determination of the truncated portions of the MR image. MR values for the truncated portions are then estimated. As an alternative, instead of determining where the MR image is truncated, MR values are estimated 86 for the entire MR image. The final MR value for each pixel or voxel is then the maximum of the original MR value and the estimated MR value. This approach works on the assumption that, if a pixel is truncated, the estimated MR value will be larger than the original MR value.
With reference to FIGURES 3A and 3B, MR images before and after truncation compensation are illustrated. The MR images were generated for a 67 year old subject. FIGURE 3A illustrates the original MR images at three different locations, and FIGURE 3B illustrates these three MR images after truncation compensation. As can be seen, the missing MR portion is completed. Due to the low quality of the PET images, some extraneous MR is also added.

Notwithstanding that the truncation compensation processor 82 and the attenuation map generation processor 88 were illustrated as external to the PET and MR main controllers 42, it is to be appreciated that one or more of these components can be integrated with the main controllers 42 as software, hardware or a combination of both. Moreover, notwithstanding that the attenuation map memory 80 was illustrated as external to the main controllers 42, it is to be appreciated that the attenuation map memory 80 can be integrated with the main controllers 42.
Further, while the approach to truncation compensation was discussed in connection with PET and MR, it is to be understood that the same approach can be employed to compensate for the truncation of computed tomography (CT) images. In this instance, CT values are employed in place of MR values when generating the joint histogram and when estimating values for the truncated portions.
Although not necessary, in some instances, the PET and MR scanners 12, 48 are combined into a hybrid scanner, as illustrated. In such instances, the PET and MR scanners 12, 48 share a main controller 42. Further, the PET scanner 48 can be mounted on tracks 92 to facilitate patient access. The tracks 92 extend in parallel to a longitudinal axis of a subject support 18 shared by both of the scanners 12, 48, thus enabling the scanners 12, 48 to form a closed system. A first motor and drive can provide movement of the PET scanner 48 in and out of the closed position, and a second motor and drive can also provide longitudinal movement and vertical adjustment of the subject support 18 in the imaging volumes 16, 54. Alternatively, the MR and PET scanners 12, 48 can be mounted in a single, shared closed system with a common imaging volume.
A hybrid imaging system typically acquires a pair of images, each corresponding to a different imaging modality, and registers the images of the pair. A challenge is that the registration relies on an alignment calibration that is periodically performed. If a hybrid scanner is not in a well calibrated state, the images will not be properly registered, which compromises image interpretation and can subsequently impact clinical diagnosis. Currently, alignment calibration is performed according to a fixed schedule, such as every 6 months, or in response to specific events, such as system upgrades. However, system alignment can drift out of the calibrated state before the next scheduled calibration. Further, alignment calibrations can be performed unnecessarily without regard to the actual calibration state. The present application further describes an approach for monitoring the trend of the calibration state of a hybrid scanner, and for detecting and reporting a potential misalignment. All image pairs acquired on the hybrid scanner are registered, registration results are monitored to detect trends, the alignment calibration state is statistically inferred from the detected trends, and the inferred calibration state is periodically reported to the user. The user can then perform alignment calibration on an as-needed basis. Patient motion can also be statistically inferred from the detected trends and the user can be alerted to evaluate an image pair for potential misalignment caused by patient motion.
With reference to FIGURE 1, for each pair of PET and MR images generated by the hybrid scanner 12, 48, a registration processor 90 applies a rigid registration algorithm to register the images to one another. Any automated registration algorithm can be employed. For example, a simplified, fast algorithm can be employed; a full-featured, automated algorithm is not necessary. If the hybrid scanner 12, 48 is perfectly calibrated, the registration algorithm perfectly registers the PET and MR images.
By running the registration algorithm, registration parameters are generated. The registration parameters typically include three translation parameters and three rotation parameters. If there is no subject motion during the data acquisition, the registration parameters will be zero for all three translation and all three rotation parameters in the digital imaging and communications in medicine (DICOM) space (i.e., patient space). In some instances, the registration algorithm can be configured to return only the three translation parameters. If there is systematic drift from perfect registration, these three parameters would be sufficient to describe the drift.
The resulting registration parameters for the pair of PET and MR images are then stored in a registration parameter memory 94, along with the previous registration parameters for other pairs of PET and MR images generated since the last alignment calibration. When alignment calibration is performed, the registration parameters of the registration parameter memory 94 are cleared.
There are multiple sources of error that can cause the registration parameters to deviate from zero. One source of error is residual error in alignment calibration, which is fixed. Another source of error is system drift out of the calibration. Although this error is unknown at a given time point, it is not considered random. Other sources of error are patient motion and the registration algorithm. The error with the registration algorithm can be attributed to the specific implementation and the dataset.
An alignment calibration analysis processor 96 analyzes the registration parameters stored in the registration parameter memory 94 and detects trends in the calibration state of the hybrid scanner 12, 48. Further, an alignment calibration report processor 98 takes into account the results of the analysis, as well as user preferences, to provide a summary, typically a daily summary, of the calibration state of the hybrid scanner 12, 48. The report processor 98 can further prompt the user regarding corrective actions. There are several approaches by which the analysis and report processors 96, 98 can cooperate to monitor the calibration state of the hybrid scanner 12, 48.
According to a first approach to monitoring the hybrid scanner 12, 48, the user specifies absolute thresholds for each registration parameter (e.g., the six translation and rotation registration parameters). The thresholds typically discriminate between registration parameters indicating proper alignment calibration and registration parameters indicating improper alignment calibration. In some instances, the thresholds are image resolution dependent. For example, in some instances, each registration parameter includes thresholds for low and high resolution images. This is advantageous because the translation thresholds can, for example, be large (e.g., in millimeters (mm)) for low resolution images and small for high resolution images.
At regular intervals, such as daily, the analysis processor 96 retrieves the registration parameters from the registration parameter memory 94 and compares the absolute values of the retrieved registration parameters to the corresponding thresholds. Further, the analysis processor 96 can check for conditions triggering alignment calibration. The trigger conditions can be user defined or predefined by, for example, the manufacturer of the analysis processor or of the hybrid scanner 12, 48. One example of a trigger condition is "X% of the immediate past Y image pairs has error in the registration parameters larger than the thresholds", where X and Y are user-specified numbers. To check this trigger condition, the registration parameters for the last Y image pairs are retrieved from the registration parameter memory and compared to the corresponding thresholds.
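A hedged sketch of such a trigger-condition check follows, assuming the registration parameters are kept as one row of six values per image pair in acquisition order and that the X and Y defaults are illustrative user preferences.

```python
import numpy as np

def trigger_condition_met(params, thresholds, x_percent=20.0, y_pairs=10):
    """Return True if at least X% of the immediate past Y image pairs
    have a registration parameter whose absolute value exceeds the
    corresponding threshold.

    params: (n_pairs, 6) array in acquisition order; thresholds: length-6.
    """
    recent = np.asarray(params)[-y_pairs:]
    exceeds = (np.abs(recent) > np.asarray(thresholds)).any(axis=1)
    return 100.0 * exceeds.mean() >= x_percent
```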
After performing the check for trigger conditions, a summary report is generated by the report processor 98 and sent to the user or visually displayed on the system 10. The summary report can include what percent of the image pairs in the registration parameter memory 94 has a registration parameter exceeding the corresponding threshold. The summary can also include whether a new alignment calibration is recommended based on user preferences. For example, alignment calibration can be recommended if a trigger condition is met. If alignment calibration is not recommended and an alignment calibration is scheduled according to a fixed schedule, the user can make an informed decision as to whether to proceed with the alignment calibration.
Even more, the summary can identify which image pairs indicate the hybrid scanner 12, 48 is out of alignment calibration. The image pairs indicating that the hybrid scanner 12, 48 is out of alignment calibration can be identified to the user so that the user can investigate the reason the image pairs indicate the hybrid scanner 12, 48 is out of alignment. If it is determined that motion of the subject, or cardiac or respiratory motion, caused an image pair to indicate misalignment, that image pair can be excluded from decision making. Moreover, if there was motion in a pair of images, the user can elect to register the images of the pair to correct for the motion.
In some instances, the analysis processor 96 can identify the outliers among the past image pairs and exclude the identified outliers from the analysis. Any outlier detection approach can be employed. The outlier detection suitably excludes the image pairs having subject motion or in which the registration algorithm failed. The outlier detection does not impact calibration- and registration-related errors. Further, the list of excluded outliers is added to the summary sent to the user so the user is informed and can take appropriate action.
According to a second approach to monitoring the hybrid scanner 12, 48, the first approach is extended. In contrast with the first approach, the analysis processor 96 can account for the residual calibration error, as well as the intrinsic error of the registration algorithm itself. More specifically, the user is given the option to learn the residual calibration and registration errors from the past W number of image pairs immediately following the last calibration. W can be a user preference. Immediately following the last calibration, it is assumed that the hybrid scanner 12, 48 has not yet drifted out of calibration. When comparing the registration parameters to the thresholds, the residual calibration and registration errors are subtracted from the registration parameters. The remaining details of the first approach are the same.
The residual calibration and registration errors can be learned by calculating the mean μᵢ and the standard deviation σᵢ of each registration parameter i over the W image pairs, optionally with outlier removal as described above. In calculating the means μᵢ and the standard deviations σᵢ, it is assumed that each registration parameter follows a normal distribution N(μᵢ, σᵢ²). The mean of a parameter is considered the residual calibration and registration error.
The thresholds discussed above in connection with the first approach were absolute thresholds. According to this approach, the user can also specify the thresholds probabilistically. For a given absolute threshold, the probability of a registration parameter falling outside the threshold can be calculated from the distribution N(μᵢ, σᵢ²) even if the system is still in calibration. For example, for a threshold equal to 2σ, the probability is approximately 5%. Similarly, for a given probabilistic threshold, the absolute threshold can also be calculated. Hence, the user can specify absolute thresholds and/or probabilistic thresholds, since the distribution allows translation between the two types of thresholds.
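The translation between the two kinds of thresholds could be sketched with SciPy as below; treating the threshold as a two-sided deviation from the mean is an assumption consistent with the 2σ-to-5% example above.

```python
from scipy.stats import norm

def absolute_to_probabilistic(threshold, sigma):
    """Two-sided probability that a calibrated parameter deviates from
    its mean by more than the absolute threshold; threshold = 2 * sigma
    gives approximately 5%."""
    return 2.0 * norm.sf(threshold / sigma)

def probabilistic_to_absolute(prob, sigma):
    """Absolute threshold whose two-sided exceedance probability is prob."""
    return sigma * norm.isf(prob / 2.0)
```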
According to a third approach to monitoring the hybrid scanner 12, 48, the analysis processor 96 calculates the mean μᵢ and the standard deviation σᵢ of each of the registration parameters from the past W number of image pairs immediately following the last calibration. The calculation is performed as described above, assuming a normal distribution N(μᵢ, σᵢ²) and optionally with outlier removal as described above. Further, the analysis processor 96 calculates a Gaussian probability density pᵢ(xᵢ; μᵢ, σᵢ²) for each registration parameter i from the mean μᵢ, the standard deviation σᵢ, and the registration parameter values xᵢ of the immediate past Y number of image pairs. The probability density for a registration parameter i describes the probability of the registration parameter values for the past Y image pairs being anomalies. If the hybrid scanner 12, 48 is still in calibration, xᵢ will be close to μᵢ and pᵢ will be large. If the hybrid scanner 12, 48 is out of calibration, xᵢ will be far from μᵢ and pᵢ will be small.
In contrast with the foregoing approaches, the user does not specify an absolute or probabilistic threshold for each registration parameter. Instead, the user specifies a probability threshold ε to detect an anomaly (i.e., a bad registration due to the hybrid scanner 12, 48 being out of calibration or due to other reasons). The probability threshold ε reflects the user's preference as to when an alignment calibration is warranted. If pᵢ < ε, the data point is considered an anomaly. To determine whether a registration parameter is an anomaly or not, the analysis processor 96 compares the smallest pᵢ among all of the pᵢ values of the registration parameters against ε. Alternatively, all of the pᵢ can be multiplied together and the product compared to ε. Depending on which approach is employed, the value of ε needs to be adjusted appropriately. To guard against patient motion, trigger conditions can be employed as described above; for example, such a trigger condition might be "X% of the immediate past Y studies are anomalies", or outliers can be excluded.
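A minimal sketch of this anomaly test follows, assuming length-6 arrays of per-parameter means, standard deviations, and current values, and using the smallest per-parameter Gaussian density; the variant that multiplies the densities would instead compare the product to a suitably adjusted ε.

```python
import numpy as np
from scipy.stats import norm

def is_anomaly(x, mu, sigma, epsilon):
    """Flag a registration as anomalous when the smallest per-parameter
    Gaussian density p_i(x_i; mu_i, sigma_i^2) falls below epsilon."""
    p = norm.pdf(np.asarray(x), loc=mu, scale=sigma)
    return p.min() < epsilon
```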
After determining anomalies and applying trigger conditions, a summary report is generated by the report processor 98 and sent to the user or visually displayed to the user. The summary report can include what percent of the image pairs in the registration parameter memory 94 has an anomalous registration parameter value. The summary can also include whether a new alignment calibration is recommended based on user preferences. For example, alignment calibration can be recommended if a trigger condition is met. Even more, the summary can identify which image pairs indicate the hybrid scanner 12, 48 is out of alignment calibration.
According to a fourth approach to monitoring the hybrid scanner 12, 48, the analysis processor 96 performs outlier detection on the image pairs of the registration parameter memory 94 to exclude image pairs with subject motion from the data set. Hence, right after the calibration, the mean of a registration parameter will be due to residual calibration and registration error. If the hybrid scanner 12, 48 has drifted out of calibration, the mean of the registration parameters will be due to drift out of the calibration, in addition to residual calibration and registration error.
After outlier detection, the analysis processor 96 calculates the mean μᵢ and the standard deviation σᵢ of each registration parameter i from the past W number of image pairs immediately following the last calibration. Further, the analysis processor 96 calculates the mean μ′ᵢ and the standard deviation σ′ᵢ of each of the registration parameters i from the immediate past Y number of image pairs. The calculations are performed as described above assuming a normal distribution. The standard deviations are assumed to be the same for the same parameters, since the randomness is due to the implementation of the optimization used in the registration. Thus, determining whether the hybrid scanner 12, 48 has drifted out of the calibrated state becomes a statistical testing problem between means.
The statistical testing problem can be framed as a t-test of the difference between means, with the t-value for each registration parameter i given by

tᵢ = (μᵢ − μ′ᵢ) / (sᵢ √(1/w + 1/y)),

where sᵢ is the pooled standard deviation of registration parameter i.
The degree of freedom is w+y−2, where w and y are the numbers of the past W and Y image pairs, respectively. Further, the null hypothesis is that the two means are equal (i.e., there is no drift from the calibrated state). After solving the statistical testing problem to get the t-value tᵢ for each registration parameter i, the p-value for each registration parameter is calculated from the corresponding t-value and compared to a user-specified p-value threshold (e.g., 0.05 or 0.01). If the p-value is less than the p-value threshold, the null hypothesis is rejected.
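This per-parameter test could be sketched with SciPy's pooled-variance two-sample t-test, which uses w + y − 2 degrees of freedom; the array layout is the same illustrative one used in the sketches above.

```python
from scipy.stats import ttest_ind

def drift_p_values(calibrated_params, recent_params):
    """Per-parameter p-values for the null hypothesis of equal means
    (i.e., no drift from the calibrated state).

    calibrated_params: (w, 6) array of the W pairs right after the last
    calibration; recent_params: (y, 6) array of the immediate past Y
    pairs. Reject 'no drift' for parameters whose p-value is below the
    user-specified threshold (e.g., 0.05 or 0.01)."""
    t_vals, p_vals = ttest_ind(calibrated_params, recent_params,
                               axis=0, equal_var=True)
    return p_vals
```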
A summary report is further generated by the report processor 98 and sent to the user. The summary report can include the list of outliers (i.e., the image pairs suspected of motion during acquisition). This can allow the user to check the registrations. The summary report can further include the p-value for each registration parameter. Even more, the summary report indicates whether the hybrid scanner 12, 48 is suspected of drifting from calibration and whether an alignment calibration is recommended. Alignment calibration is recommended where the null hypothesis is rejected.
The foregoing approaches to monitoring the hybrid scanner 12, 48 modeled each registration parameter i as a normal distribution N(μᵢ, σᵢ²). In some instances, the registration parameters are alternatively modeled as a multivariate normal distribution N(μ, Σ), where μ is a mean vector of the registration parameters and Σ is the covariance matrix. In such instances, the normal distribution is replaced with the multivariate normal distribution.
Further, the foregoing approaches to monitoring the hybrid scanner 12, 48 pertained to determining when the hybrid scanner 12, 48 drifted out of alignment calibration. In some instances, these approaches can be extended to determining subject motion. For example, the reconstruction processors 38, 76 can cooperate with the analysis processor 96 to warn the user of potential patient motion between the PET images and the MR images that can impact the attenuation correction for the PET images. As another example, the main controllers 42 or other device displaying the PET and MR images can cooperate with the analysis processor 96 to warn the user of a potential misalignment between the MR and PET images. In this way, reading and diagnosis can be improved.
Even more, the foregoing approaches to monitoring the hybrid scanner 12, 48 were described in connection with a hybrid PET and MR scanner. It is to be understood that the foregoing approaches find application in connection with other hybrid scanners, such as a hybrid PET and CT scanner, a hybrid single-photon emission computed tomography (SPECT) and CT scanner, and a hybrid SPECT and MR scanner. Further, the foregoing approaches to monitoring the hybrid scanner 12, 48 find application in connection with hybrid scanners with more than two imaging modalities.
Moreover, notwithstanding that the registration processor 90, the analysis processor 96 and the report processor 98 were illustrated as external to the PET and MR main controllers 42, it is to be appreciated that one or more of these components can be integrated with the main controllers 42 as software, hardware or a combination of both. Moreover, notwithstanding that the registration parameter memory 94 was illustrated as external to the main controllers 42, it is to be appreciated that the registration parameter memory 94 can be integrated with the main controllers 42.
With reference to FIGURE 4, a method 150 for truncation compensation of an image is provided. The method 150 is typically performed by the truncation compensation processor 82. The method 150 includes receiving or generating 152 a joint histogram describing frequencies of feature vectors in a pair of images. The images of the pair can include the image to be compensated for truncation. The images are registered to each other and resampled as necessary so there is a one-to-one correspondence between pixels (or voxels) in the images. Further, the images are generated with different imaging modalities and of a common target volume. One image of the pair is generated using the same imaging modality used to generate the image to be compensated for truncation. Typically, the two imaging modalities are MR and PET.
The feature vectors are in the same feature space, whereby each of the feature vectors includes the same variables and size. Further, each of the feature vectors represents features at a spatial location by at least values of images of the pair at the spatial location. In some instances, the feature vectors each include values for spatial and/or contextual data. For example, the feature vectors can each include a spatial value, such as a Z-direction value, for the corresponding spatial location. As another example, the feature vectors can each include a contextual value for the corresponding spatial location. The contextual value can correspond to a value of a neighbor of the corresponding spatial location in the image of the pair generated using the modality other than the modality of the image for which truncation compensation is desired.
To generate the joint histogram, the features at each spatial location are mapped 154 (or computed) into the feature space to generate a feature vector. The spatial location is common to both images. The generated feature vector is then used to look up 156 a location in the joint histogram, and a count is added 158 to the location. The joint histogram includes N dimensions, where N is the number of dimensions of the feature vectors.
Regardless of whether the joint histogram is generated or received, the joint histogram is used to estimate 160 values of the truncated region of the image to be compensated for truncation. Alternatively, the joint histogram is used to estimate 160 values of the entire image. In such instances, the maximum of an estimated value and an existing value is used as the value of each spatial location in the non-truncated region of the image. The estimation 160 includes looking up 162 distributions of values in the joint histogram with partial feature vectors. Estimates are selected 164 from the distributions.
Assuming truncation compensation is performed on only the truncated region of the image, for each truncated spatial location of the truncated region, the PET voxel value and its spatial and context data are mapped 166 to the feature space to generate a partial feature vector. The feature vector is partial because there is no MR value at the spatial location. The partial feature vector is used to look up 162 a distribution of MR values in the joint histogram, and a value is selected 164 from the distribution as an estimate. For example, the average value of the distribution can be selected. As another example, the value with the largest histogram count of the distribution can be selected.
In some instances, the method 150 is applied to MRAC. In such instances, MR data of a target volume is reconstructed into an MR image, which is then mapped to an MRAC map image. PET data of the target volume can be reconstructed into a PET image without attenuation correction, and the joint histogram can be generated from the PET and MRAC map images. That way, the truncated MRAC map values can be estimated. Alternatively, the joint histogram is received from an external source or generated from a different pair of PET and MRAC map images. Once the joint histogram is generated or received, values of the MRAC map image for the truncated region are estimated from the joint histogram to compensate the MRAC map image for truncation. The PET data is then reconstructed into a PET image with attenuation correction using the truncation compensated MRAC map image.
With reference to FIGURE 5, a method 200 for monitoring a trend in an alignment calibration state of a hybrid imaging scanner 12, 48 is provided. The method 200 is suitably performed through cooperation between the alignment calibration analysis processor 96 and the alignment calibration report processor 98. Sets of registration parameters are received 202, typically from a registration parameter memory 94. Each set corresponds to a pair of images captured together on the hybrid imaging scanner 12, 48. Further, each set describes the registration results of the pair. The received sets are analyzed 204 to identify trends and to statistically infer the alignment calibration state of the hybrid imaging scanner 12, 48. Based on the analysis, a report is sent 206 or visually displayed to a user indicating the inferred alignment calibration state of the hybrid imaging scanner 12, 48. Image pairs with subject motion can also be identified from the analyzing 204.
As used herein, a memory includes any device or system storing data, such as a random access memory (RAM) or a read-only memory (ROM). Further, as used herein, a processor includes any device or system processing input data to produce output data, such as a microprocessor, a microcontroller, a graphic processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and the like; a controller includes any device or system controlling another device or system, and typically includes at least one processor; a user input device includes any device, such as a mouse or keyboard, allowing a user of the user input device to provide input to another device or system; and a display device includes any device for displaying data, such as a liquid crystal display (LCD) or a light emitting diode (LED) display.
The invention has been described with reference to the preferred embodiments.
Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims

CLAIMS:
1. A system (10) for truncation compensation of an image, said system (10) comprising:
a histogram module (84) configured to receive or generate a joint histogram describing frequencies of feature vectors in a pair of images, the images of the pair generated with different imaging modalities, the feature vectors representing features at a spatial location by at least values of the images of the pair at the spatial location; and
an estimation module (86) configured to estimate values for a truncated region of the image, the estimation including looking up distributions of values in the joint histogram with partial feature vectors representing a truncated spatial location.
2. The system (10) according to claim 1, wherein the histogram module (84) includes one or more processors (82) being programmed to generate the joint histogram by: for each spatial location in an overlapping region of the pair of images:
mapping voxel values and spatial and context data to generate a feature vector;
looking up a histogram location in the joint histogram corresponding to the feature vector; and
adding a count to the histogram location.
3. The system (10) according to either one of claims 1 and 2, wherein the estimation module (86) includes one or more processors (82) being programmed to estimate the values by: for each truncated spatial location:
mapping voxel values and spatial and context data at the spatial location to generate a partial feature vector;
looking up a distribution of values in the joint histogram with the partial feature vector; and
selecting a value of the distribution as an estimate.
4. The system (10) according to any one of claims 1-3, wherein the joint histogram includes all dimensions of a feature space of the feature vectors related to voxel values and corresponding spatial and context data.
5. The system (10) according to any one of claims 1-4, wherein the feature vectors include a spatial value, such as a Z-direction value, for the corresponding spatial location.
6. The system (10) according to any one of claims 1-5, wherein the feature vectors include a contextual value at the spatial location, such as a voxel value of the other image of the pair at the spatial location or voxel values at neighboring voxels.
7. The system (10) according to any one of claims 1-6, further including one or more processors (38, 76, 82, 96, 98) programmed to:
reconstruct positron emission tomography (PET) data of a target volume into a PET image without attenuation correction;
reconstruct magnetic resonance (MR) data of the target volume into an MR image;
generate the joint histogram from the PET and MR images, the images of the pair being the MR and PET images;
estimate values of the MR image for the truncated region from the joint histogram to compensate the MR image for truncation; and
reconstruct the PET data into a PET image with attenuation correction using the truncation compensated MR image.
8. The system (10) according to claim 7, wherein the MR image is a magnetic resonance (MR)-based attenuation correction (AC) (MRAC) map image.
9. The system (10) according to any one of claims 1-8, further including one or more processors (38, 76, 82, 96, 98) programmed to:
receive or generate sets of registration parameters resulting from registration of image pairs generated with a hybrid imaging scanner (12, 48);
analyze the sets to statistically infer a calibration state of the hybrid imaging scanner (12, 48); and
report the inferred calibration state of the hybrid imaging scanner (12, 48) to a user.
10. The system (10) according to claim 9, wherein the calibration state is inferred by at least one of:
comparing the registration parameters to thresholds;
subtracting residual calibration and registration error from the registration parameters before comparing the registration parameters to the thresholds;
comparing a probability threshold to probabilities of the registration parameters being anomalies; and
performing a t-test of the difference between means on a set of calibrated registration parameters and a set of registration parameters with an unknown calibration state.
11. The system (10) according to any one of claims 1-10, further including one or more processors (38, 76, 82, 96, 98) programmed to:
receive or generate sets of registration parameters resulting from registration of image pairs generated with a hybrid imaging scanner (12, 48);
analyze the sets to identify subject motion; and
report the identified subject motion to a user.
12. A method (150) for truncation compensation of an image, said method (150) comprising:
receiving or generating (152) a joint histogram describing frequencies of feature vectors in a pair of images, the images of the pair generated with different imaging modalities, the feature vectors representing features at a spatial location by at least values of the images of the pair at the spatial location; and
estimating (160) values for a truncated region of the image, the estimating (160) including looking up (162) distributions of values in the joint histogram with partial feature vectors representing a truncated spatial location.
13. The method (150) according to claim 12, further including:
generating (152) the joint histogram by:
for each spatial location:
mapping (154) voxel values and spatial and context data to generate a feature vector;
looking up (156) a histogram location in the joint histogram corresponding to the feature vector; and
adding (158) a count to the histogram location.
14. The method (150) according to either one of claims 12 and 13, wherein the estimating (160) includes:
for each truncated spatial location of the truncated region:
mapping (166) voxel values and spatial and context data at the spatial location to generate a partial feature vector;
looking up (162) a distribution in the joint histogram with the partial feature vector; and
selecting (164) a value of the distribution as an estimate.
15. The method (150) according to any one of claims 12-14, wherein the feature vectors include at least one of:
voxel values;
a spatial value, such as a Z-direction value, for the corresponding spatial location; and
a contextual value for the corresponding spatial location.
16. The method (150) according to any one of claims 12-15, further including:
reconstructing positron emission tomography (PET) data of a target volume into a PET image without attenuation correction;
reconstructing magnetic resonance (MR) data of the target volume into an MR image;
generating the joint histogram from the PET and MR images, the images of the pair being the MR and PET images;
estimating values of the MR image for the truncated region from the joint histogram to compensate the MR image for truncation; and
reconstructing the PET data into a PET image with attenuation correction using the truncation compensated MR image.
17. The method (150) according to any one of claims 12-16, further including:
receiving sets of registration parameters resulting from registration of image pairs generated with a hybrid imaging scanner, the pair of images being one of the image pairs;
analyzing the sets to statistically infer a calibration state of the hybrid imaging scanner; and
reporting, to a user, the inferred calibration state of the hybrid imaging scanner and a recommendation to calibrate the hybrid imaging scanner based on the inferred calibration state.
18. A system (10) for monitoring a calibration state of a hybrid imaging scanner (12, 48), comprising one or more processors (38, 76, 82, 96, 98) programmed to:
receive or generate sets of registration parameters resulting from registration of image pairs generated with the hybrid imaging scanner (12, 48);
analyze the sets to statistically infer a calibration state of the hybrid imaging scanner (12, 48); and
report the inferred calibration state of the hybrid imaging scanner (12, 48) to a user.
19. The system (10) according to claim 18, wherein the one or more processors infer the calibration state by at least one of:
comparing the registration parameters to thresholds;
subtracting residual calibration and registration error from the registration parameters before comparing the registration parameters to the thresholds;
comparing a probability threshold to probabilities of the registration parameters being anomalies; and
performing a t-test of the difference between means on a set of calibrated registration parameters and a set of registration parameters with an unknown calibration state.
20. The system (10) according to either one of claims 18 and 19, wherein the one or more processors (38, 76, 82, 96, 98) are further programmed to:
analyze the received or generated sets of registration parameters to identify subject motion; and
report the identified subject motion to the user.
PCT/IB2015/054067 2014-06-17 2015-05-29 Magnetic resonance (mr)-based attenuation correction and monitor alignment calibration WO2015193756A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462013029P 2014-06-17 2014-06-17
US62/013,029 2014-06-17

Publications (2)

Publication Number Publication Date
WO2015193756A2 true WO2015193756A2 (en) 2015-12-23
WO2015193756A3 WO2015193756A3 (en) 2016-03-24

Family

ID=53719798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/054067 WO2015193756A2 (en) 2014-06-17 2015-05-29 Magnetic resonance (mr)-based attenuation correction and monitor alignment calibration

Country Status (1)

Country Link
WO (1) WO2015193756A2 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7471851B2 (en) * 2005-11-10 2008-12-30 Honeywell International Inc. Method and apparatus for propagating high resolution detail between multimodal data sets
JP5462865B2 (en) * 2008-05-15 2014-04-02 コーニンクレッカ フィリップス エヌ ヴェ Use of non-attenuated corrected PET emission images to compensate for imperfect anatomical images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Also Published As

Publication number Publication date
WO2015193756A3 (en) 2016-03-24

Similar Documents

Publication Publication Date Title
US8600136B2 (en) Method for generation of attenuation map in PET-MR
US9474495B2 (en) System and method for joint estimation of attenuation and activity information
Salomon et al. Simultaneous reconstruction of activity and attenuation for PET/MR
EP2887874B1 (en) Mr receive coil localization and mr-based attenuation correction
EP2684066B1 (en) Mr segmentation using nuclear emission data in hybrid nuclear imaging/mr
CN106255994B (en) For being filtered in the reconstruction of positron emission tomography (PET) list mode iterative approximation
US9841515B2 (en) Dead pixel identification in positron emission tomography (PET)
US8923592B2 (en) Methods and systems for performing attenuation correction
US20100074501A1 (en) Co-Registering Attenuation Data and Emission Data in Combined Magnetic Resonance/Positron Emission Tomography (MR/PET) Imaging Apparatus
US20170319154A1 (en) Outside-fov activity estimation using surview and prior patient data in positron emission tomography
EP3292426B1 (en) Solving outside-field of view scatter correction problem in positron emission tomography via digital experimentation
EP3210187B1 (en) Classified truncation compensation
US20160066874A1 (en) Attenuation correction of positron emission tomography data using magnetic resonance images depicting bone density variations
US10101475B2 (en) Dead pixel compensation in positron emission tomography (PET)
WO2015193756A2 (en) Magnetic resonance (mr)-based attenuation correction and monitor alignment calibration
US10064589B2 (en) Method, apparatus, and article for pet attenuation correction utilizing MRI
Mollet et al. Experimental evaluation of simultaneous emission and transmission imaging using TOF information
Ladefoged et al. PET/MRI attenuation correction
Watson et al. A sparse transmission method for PET attenuation correction in the head

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15741599

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15741599

Country of ref document: EP

Kind code of ref document: A2