WO2012109641A2 - Systems, methods and computer readable storage mediums storing instructions for 3d registration of medical images - Google Patents


Info

Publication number
WO2012109641A2
Authority
WO
WIPO (PCT)
Prior art keywords
image, images, landmarks, internal, similarities
Application number
PCT/US2012/024821
Other languages
French (fr)
Other versions
WO2012109641A3 (en)
Inventor
Baowei Fei
Xiaofeng Yang
Original Assignee
Emory University
Application filed by Emory University filed Critical Emory University
Publication of WO2012109641A2 publication Critical patent/WO2012109641A2/en
Publication of WO2012109641A3 publication Critical patent/WO2012109641A3/en

Classifications

    • A61B 6/5247: combining image data of a patient from an ionising-radiation diagnostic technique and a non-ionising-radiation diagnostic technique, e.g. X-ray and ultrasound
    • A61B 5/0035: imaging apparatus adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A61B 5/4381: prostate evaluation or disorder diagnosis
    • A61B 8/4416: constructional features of the ultrasonic diagnostic device related to combined acquisition of different diagnostic modalities, e.g. combination of ultrasound and X-ray acquisitions
    • G06T 7/33: determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10081: computed x-ray tomography [CT]
    • G06T 2207/10088: magnetic resonance imaging [MRI]
    • G06T 2207/10104: positron emission tomography [PET]
    • G06T 2207/10132: ultrasound image
    • G06T 2207/30004: biomedical image processing

Definitions

  • Prostate cancer is the second leading cause of cancer death in men in the United States.
  • Prostate specific antigen (PSA) measured via a blood test and digital rectal examination (DRE) are used to screen for prostate cancer, followed by a transrectal ultrasound (TRUS) guided biopsy to confirm.
  • TRUS-guided biopsy is the clinical standard for definitive diagnosis of prostate cancer. While 2D TRUS-guided biopsy is routinely performed, 2D TRUS images do not provide the 3D location of the biopsy sample. Consequently, the physician must mentally estimate the 3D location of the biopsy needle based on limited 2D information, leading to suboptimal biopsy targeting.
  • Multimodality registration is one of the most interesting topics, because it paves the way to construct a comprehensive understanding of anatomic or pathologic structure by integrating information gained from different imaging modalities.
  • Non-rigid registration is the building block for a variety of medical image analysis tasks, such as multi-modality information fusion, atlas-based image segmentation and computational anatomy.
  • registration methods can be generally classified into two main categories: voxel-wise/intensity-based methods and landmark/feature-based methods.
  • in voxel-wise methods, the attributes used for characterizing voxels are often not optimal, and equally utilizing all imaging data may undermine the performance of the optimization process. See, e.g., Ou, Y. and Davatzikos, C., Inf. Process Med Imaging, 2009, 21:50-62.
  • Feature-based methods are often used due to their speed. See, e.g., B. Zitova and J. Flusser, Image Vis. Comput., 2003, 21(11):977-1000.
  • Some hybrid methods integrate geometric features and intensity-based local similarity measures for computing correspondences. See, e.g., P. Cachier et al., In MICCAI, 2001, 734-742; and D. Shen and C. Davatzikos, IEEE Trans Med Imaging, 2002.
  • Feature registration requires manually identified landmarks that can vary between different operators. Moreover, registration with a limited number of landmarks may have difficulty recovering non-rigid deformation, which is distinct at each location.
  • Feature-based non-rigid image registration methods are often subject to the impact of image noise, feature outliers, and deformations.
  • the presence of noise makes it difficult for the extracted feature points to be exactly matched.
  • the outliers are those feature points detected in one image without correspondences in the other. These outliers need to be rejected during the matching process.
  • Non-rigid registrations often involve irregular deformations, which may lead to additional non-exactly matches and outliers.
  • a method is needed for non-rigid registration that is capable of determining exactly corresponding points, rejecting outliers, and determining an accurate transformation that describes the irregular deformation.
  • TPS-RPM: Thin-Plate Spline Robust Point Matching
  • Yang et al. developed a hybrid deformable matching algorithm using automatically extracted feature points and local salient region features to register images. See Yang et al., CVPR, 2006, 1825-1832. In this paper, the correspondences are optimized using the Euclidean distance-based geometric features and the intensity-based local salient region features.
  • Zhan et al. used boundaries and internal landmarks to register prostate histological and MR images. See Zhan et al., Acad. Radiol., 2007, 14(11): 1367-1381.
  • a phantom implanted with 48 seeds was imaged with TRUS and CT. TRUS images were filtered, compounded, and registered to the reconstructed implants by using an intensity-based metric. See Fallavollita et al., Med Phys., 2010, 37(6):2749-2760.
  • Hu et al. proposed a "model-to-image" registration approach.
  • a deformable model of the gland surface, derived from a magnetic resonance (MR) image, was registered automatically to a TRUS volume by maximizing the likelihood of a particular model shape given a voxel-intensity-based feature; this feature is an estimate of the surface normal vectors at the boundary of the gland.
  • MR: magnetic resonance
  • the disclosure relates to systems, methods, and computer-readable mediums storing instructions for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality.
  • the disclosure may relate to a method for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images.
  • the method may include processing the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and generating at least one registered image.
  • the first image may include at least one of a computed tomography (CT) image or a magnetic resonance (MR) image
  • the second image may include an ultrasound (US) image
  • the organ may be the prostate.
  • the surface landmarks may include prostate boundaries
  • the internal landmarks may include salient internal anatomical regions.
  • the similarities between the surface landmarks and internal landmarks may be determined based on geometric features, and the similarities between the volumes may be determined based on overlapping volume matching.
  • the method may further include determining surface and internal landmarks for each image, wherein the determining of at least one of the landmarks includes segmenting each image.
  • the method may further include optimizing the processing to register the images.
  • the optimizing may include minimizing the distances of corresponding points.
  • the method may include outputting the registered images.
  • the registered images may be outputted to a display.
  • the registered images may be outputted to a biopsy system to be displayed with a biopsy probe.
  • the processing may include integrating similarities; and applying smooth constraints.
  • the disclosure may relate to a computer-readable storage medium storing instructions for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images.
  • the instructions may include processing the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and generating at least one registered image.
  • the first image may include at least one of a computed tomography (CT) image or a magnetic resonance (MR) image
  • the second image may include an ultrasound (US) image.
  • CT: computed tomography
  • MR: magnetic resonance
  • US: ultrasound
  • the organ may be a prostate.
  • the surface landmarks may include prostate boundaries, and the internal landmarks may include salient internal anatomical regions.
  • the similarities between the surface landmarks and internal landmarks may be determined based on geometric features, and the similarities between the volume may be determined based on overlapping volume matching.
  • the medium may further include instructions for determining surface and internal landmarks for each image, wherein the determining of at least one of the landmarks includes segmenting each image; and determining the anatomical region for each image.
  • the processing may include integrating similarities; and applying smooth constraints.
  • the medium may further include optimizing the processing to register the images.
  • the disclosure may relate to a system configured to process at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images.
  • the system may include an image processor.
  • the image processor may be configured to process the first and second images to register the images.
  • the process may include determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images.
  • the processor may be configured to generate at least one registered image.
  • the first image may include at least one of a computed tomography (CT) image or a magnetic resonance (MR) image
  • the second image may include an ultrasound (US) image
  • the organ may be a prostate.
  • the surface landmarks may include prostate boundaries
  • the internal landmarks may include salient internal anatomical regions. The similarities between the surface landmarks and internal landmarks may be determined based on geometric features, and the similarities between the volumes may be determined based on overlapping volume matching.
  • the processor may be configured to optimize the process to register the images.
  • Figure 1 shows a method according to embodiments for processing images to generate a registered image
  • Figure 2 illustrates an example of a schematic diagram of the registration method
  • Figure 3 shows exemplary steps according to embodiments for processing images
  • Figure 4 illustrates an example of registered images
  • Figure 5 shows an example of a system according to embodiments for processing images
  • the TPS-RPM algorithm allows outliers to exist only in the alignment point set; it is unable to handle outliers in the reference point set.
  • when outliers exist in both point sets, many of the points may have no correspondence.
  • the disclosure relates to a hybrid approach to registration that may simultaneously optimize the similarities of at least images (from different imaging modalities) from point-based registration and volume overlap matching terms.
  • the registration may be obtained by minimizing the distances of corresponding points at the surface and within the prostate, and by maximizing the overlap ratio of bladder neck of both images.
  • the hybrid approach may not only capture deformations at the prostate surface and internal landmarks but also the deformation at the bladder neck regions. B-splines may be used for generating a smooth non-rigid spatial transformation.
  • the 3D non-rigid registration methods may be used to combine PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy.
  • Combined PET/CT can offer metabolic, functional, and anatomic information.
  • TRUS: transrectal ultrasound
  • US: ultrasound
  • the disclosure is not limited to the prostate, TRUS, and TRUS images, and may be applied to ultrasound-guided biopsies and/or ultrasound images of other anatomical landmarks, including, but not limited to, breast(s), lung(s), lymph node(s), kidney, cervix, and liver.
  • the methods of the disclosure are not limited to the steps described herein. The steps may be individually modified or omitted, and additional steps may be added. In some embodiments, all of the steps of the method may be performed automatically. In other embodiments, some steps of the method may be performed manually.
  • the methods of the disclosure are not limited to the order of steps shown in the figures. The steps may occur simultaneously, sequentially, or a combination thereof.
  • applying may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • Embodiments of the methods described herein may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods may be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement embodiments of the disclosure.
  • Figure 1 illustrates a method 100 according to embodiments to process at least one ultrasound image and at least one image from another modality, for example, pretreatment or preoperative medical images, to register the images.
  • the processing method may include a step of receiving images.
  • the images may be of an organ of a patient.
  • the organ may include but is not limited to prostate, breast, liver, and cervix.
  • the images may include image(s) from different medical imaging devices.
  • the images may include image data.
  • Each image may include at least one image of an organ from a modality.
  • the image may be preprocessed.
  • the receiving step may include receiving 110 a (first) image from a (first) modality.
  • the images may be pretreatment or preoperative images.
  • the images may be PET/CT or MR image(s) of the prostate imaged by a system capable of PET/CT and/or MR imaging.
  • the receiving step may include receiving 112 a (second) image(s) of the organ from a (second) different modality.
  • the modality may include but is not limited to ultrasound (US) images.
  • the images may be ultrasound images of the prostate imaged with a TRUS.

Determining Step
  • the method may include a step of determining landmarks and/or features of each of the images.
  • the determining step may automatically determine at least two types of landmarks.
  • the landmarks may include surface landmarks and internal landmarks.
  • the determining step may also determine additional features of the images.
  • the additional features may include anatomical features or regions.
  • a step of determining 120 may include steps of determining at least one surface landmark 121 and at least one internal landmark 122 for the first image 110, and may include steps of determining at least one surface landmark 126, and at least one internal landmark 127 for the second image 112.
  • the method may further include determining volume 123 for the first image 110 and determining the volume 128 for the second image 112.
  • additional landmarks and/or features of each image may be determined.
  • the surface landmark(s) may include at least a portion of a boundary of the organ imaged.
  • the surface landmark may include at least one point along the organ boundary.
  • the surface landmark may include at least one point along the prostate capsule.
  • the internal landmarks may include salient internal anatomical regions.
  • the internal landmarks may include gland tissues containing fluid, calcification, and anatomical features or regions, for example, urethra and bladder neck.
  • the landmarks and volumes may be specific to the organ imaged.
  • the volume may include at least one volume of the organ.
  • the volume may include a volume of the organ, e.g., the prostate.
  • the volume may include the volume of the entire organ or gland (prostate).
  • the volume may optionally or additionally include volume of anatomical regions of the organ.
  • the volume may include the volume of the bladder neck region.
  • the landmarks and/or features may be determined by any known method.
  • the landmarks and/or features may be determined by segmenting each image.
  • determining the surface landmark may include segmenting the organ boundaries, for example, the prostate capsule, from each image.
  • the determining may further include generating a triangular mesh for each prostate surface using a marching cubes algorithm, with the vertices of the surface selected as the surface landmarks. See, e.g., Lorensen et al., SIGGRAPH Comput. Graph., 1987, 21(4):163-169.
  • the surface landmark, and other landmarks and features may be determined according to other methods.
  • the surface landmark, and other landmarks and features may be automatically determined.
  • the step 120 of determining the landmarks and features for each image may occur simultaneously. In other embodiments, the determining may occur sequentially.
  • the method may further include a step 130 of processing the landmarks and features to register the images.
  • the processing may occur after each landmark and volume is generated and/or determined.
  • the steps of determining landmarks and processing may occur simultaneously.
  • the processing may occur after some or all landmarks or features (e.g., volume) are generated or determined.
  • the processing step 300 may include a step (310) of determining similarities between each landmark and feature determined for each CT and US image.
  • the similarity between surface and internal landmarks may be defined by geometric features and the similarity between the volumes of region(s) may be defined as volume overlap matching.
  • the processing step may be based on at least three registration criteria.
  • the first and second registration criterion may relate to finding similarities between the surface and internal landmarks.
  • the third registration criterion may relate to maximization of the volume overlap between the volumes (i.e., minimization of a volume-mismatch energy).
  • the overlap matching may be specific to a region. For example, it may include matching the volume overlap of the bladder neck volume and/or gland volume.
  • Figure 2 shows a schematic diagram of the registration method according to embodiments.
  • X and Y may represent surface landmarks of the prostate from the segmented CT and US images, respectively
  • U and V may represent internal landmarks (e.g., urethra and calcification) within the prostate on the CT and US images, respectively.
  • B_CT and B_US may represent the volume of a region of the gland (e.g., in the example, the volume of the entire gland or bladder neck region) on the CT and US images, respectively.
  • the determined surface landmarks and internal landmarks may be assumed to be
  • a binary correspondence matrix P may be defined with dimension (I + 1) × (J + 1), and Q may be defined with dimension (K + 1) × (L + 1)
  • the I × J inner submatrix may define the correspondences of X and Y, and the K × L inner submatrix may define the correspondences of U and V.
  • p_ij and q_kl may have real values between 0 and 1, which denote the fuzzy correspondences between landmarks. See, e.g., Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.
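The correspondence structure described above can be sketched in code. The following is a minimal illustration, not the patent's implementation: the extra row and column absorb outliers, and alternating row/column normalization (a softassign/Sinkhorn step, an assumed concrete choice) keeps the inner entries between 0 and 1.

```python
import numpy as np

def softassign(dist, temperature, n_iter=30):
    """Fuzzy correspondence matrix from an (I, J) landmark distance matrix.

    The returned (I+1, J+1) matrix has an extra row and column that absorb
    outliers; alternating row/column normalization keeps the inner entries
    between 0 and 1, as described for p_ij and q_kl above.
    """
    I, J = dist.shape
    P = np.ones((I + 1, J + 1))
    P[:I, :J] = np.exp(-dist / temperature)  # closer landmarks get larger weights
    for _ in range(n_iter):  # Sinkhorn-style alternating normalization
        P[:I, :] /= P[:I, :].sum(axis=1, keepdims=True)  # normalize rows
        P[:, :J] /= P[:, :J].sum(axis=0, keepdims=True)  # normalize columns
    return P
```

A landmark with no good match accumulates its weight in the outlier row or column rather than being forced into a spurious correspondence.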
  • the surface landmarks may be determined first for each image.
  • the points on the organ boundaries, such as prostate boundaries, may be selected as the first type of landmarks to be used to process the images to register the images.
  • the step of processing may include a step of applying a similarity function to the surface landmarks.
  • the similarity between surface landmarks in respective CT and US images may be defined as a Euclidean distance between their vectors; a smaller distance indicates higher similarity between them.
  • because each surface landmark is actually a vertex of the surface, its spatial relations with vertices in the neighborhood may be used to describe the geometric properties around the surface landmark.
  • an affine invariant attribute vector may be used to characterize the geometric anatomy around each surface landmark. See, e.g., Lorensen et al., SIGGRAPH Comput. Graph., 1987, 21(4):163-169. Assuming x_i is a surface landmark under study, its geometric attribute may be defined as the volume of the tetrahedron formed by x_i and its neighboring vertices.
  • the volume of the tetrahedron formed by the immediate neighbors reflects local shape information
  • the volumes of the tetrahedrons formed by the second or higher level neighbors may represent more global geometric properties around x_i.
  • the similarity between two surface landmarks x_i and y_j, respectively, in CT and US images may be defined by a Euclidean distance between their normalized attribute vectors.
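As a rough sketch of this geometric attribute, the tetrahedron-volume construction and the Euclidean similarity between normalized attribute vectors can be written as follows; the helper names and the three-vertex-ring simplification are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def tetra_volume(a, b, c, d):
    """Unsigned volume of the tetrahedron spanned by vertices a, b, c, d."""
    return abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0

def attribute_vector(vertex, neighbor_rings):
    """Attribute vector for a surface landmark: one tetrahedron volume per
    neighborhood ring (the first ring captures local shape, higher rings
    capture more global geometry). Each ring is a (3, 3) array holding
    three neighboring vertices."""
    v = np.array([tetra_volume(vertex, *ring) for ring in neighbor_rings])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v  # normalized, so distances are comparable

def surface_similarity(attr_x, attr_y):
    """Euclidean distance between normalized attribute vectors;
    a smaller distance means higher similarity."""
    return float(np.linalg.norm(attr_x - attr_y))
```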
  • P may take values from the interval [0, 1] in hybrid energy function (2), for example, using a softassign technique. See, for example, Belongie et al., Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2002, 24(4):509-522.
  • the continuous property of P may acknowledge ambiguous matches between X and Y.
  • the first term may be the geometric feature-based energy term defined by Euclidean distance. The similarity of two surface landmarks is measured by the Euclidean distance between their geometric feature vectors; a smaller distance indicates higher similarity between them.
  • the second term may be an entropy term that comes from the deterministic annealing technique, which is used to directly control the fuzziness of P.
  • T may be called the temperature parameter.
  • the third term may be used to direct the correspondence matrix P to converge to binary values. At a higher temperature, the correspondences may be forced to be fuzzier and become a factor in "convexifying" the objective function. As T is gradually reduced to zero, the fuzzy correspondences become binary. See, e.g., Chui et al., Computer Vision and Image Understanding, 2002, 89(2-3):114-141. The third term may also be used to balance the outlier rejection.
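The effect of the temperature on the fuzziness of a correspondence row can be illustrated with a tiny sketch (not the patent's code): a high temperature yields nearly uniform, fuzzy weights, while a low temperature drives the row toward a binary assignment.

```python
import numpy as np

def correspondence_row(dist_row, T):
    """Fuzzy correspondence weights for one landmark at temperature T."""
    w = np.exp(-dist_row / T)
    return w / w.sum()  # weights sum to 1 across candidate matches
```

For distances `[0.1, 1.0, 2.0]`, a temperature of 100 gives a nearly flat row, while a temperature of 0.01 concentrates almost all weight on the closest candidate.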
  • the method may include determining the similarities between a second type of landmarks, the internal landmarks of the CT and US images. This may occur after determining the similarities between the surface landmarks.
  • Q may take values from interval [0, 1] in hybrid energy function (3), by using a softassign technique.
  • the continuous property of Q may acknowledge ambiguous matches between U and V.
  • the first term may be the geometric feature-based energy term defined by Euclidean distance.
  • the similarity between two internal landmarks may be measured by the Euclidean distance between their geometric feature vectors; a smaller distance indicates higher similarity between them.
  • the second term may be an entropy term that comes from the deterministic annealing technique, which is used to directly control the fuzziness of Q. T may be called the temperature parameter.
  • the third term may be used to balance the outlier rejection.
  • the goal of the registration may be to determine optimal correspondence matrices P and Q and an optimal spatial transform f that matches the two pairs of point sets, X and Y, and U and V, as closely as possible.
  • “Close” may mean not only in the Euclidean sense, but also in the sense of salient region feature similarity.
  • the hybrid energy function may be minimized as follows,
  • E_LS(f) = αE_SS(f | {x}, {y}) + βE_IS(f | {u}, {v})
  • E_SS(f | {x}, {y}) and E_IS(f | {u}, {v}) may be the similarity metrics on the surface landmarks and internal landmarks, respectively, in the CT and US images. α and β may be balancing parameters.
  • the similarities between the volumes may be determined by maximizing the bladder neck volume overlap between the volumes of each modality image (i.e., minimizing a volume-mismatch energy). In some embodiments, the similarities between the volumes of other regions and/or the entire organ or gland may also or alternatively be determined.
  • the method may include a third registration criterion: maximization of the volume overlap of at least one region, e.g., the entire gland region or the bladder neck region.
  • B_CT may be the radiologist-defined organ or gland region (e.g., bladder neck ground truth region or entire gland region) in the CT image, and B_US may be the actual organ or gland region (e.g., bladder neck region or entire gland region) in the US image.
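The patent does not spell out the exact overlap measure; one common choice consistent with an "overlap ratio" is the Dice coefficient, used here in a hypothetical volume-matching energy so that maximizing overlap corresponds to minimizing the energy:

```python
import numpy as np

def volume_match_energy(mask_ct, mask_us):
    """Volume-matching energy between two binary region masks (e.g., the
    bladder neck region defined on the CT image and the corresponding
    region on the US image). The overlap ratio here is the Dice
    coefficient, so maximizing overlap minimizes this energy."""
    intersection = np.logical_and(mask_ct, mask_us).sum()
    dice = 2.0 * intersection / (mask_ct.sum() + mask_us.sum())
    return 1.0 - dice
```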
  • the processing step may include integrating (step 320) the similarities between respective landmarks and features, and adding smoothness constraints (step 330) on the estimated transformation between segmented CT and US images. See, e.g., Zhan et al., Acad. Radiol., 2007, 14(11):1367-1381; Yang et al., CVPR, 2006, 1825-1832; and Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.
  • the transformation between CT and US images may be represented by a general function, which can be modeled by various basis functions.
  • a transformation basis function may include but is not limited to multiquadratic, thin-plate spline, radial basis, or B-spline. See, e.g., Jekeli, Computers & Mathematics with Applications, 1994, 28(7):43-46; Stammberger et al., Magn Reson. Med., 2000, 44(4):592-601; Bookstein, F.L., Pattern Analysis and Machine Intelligence, IEEE Transactions on, 1989, 11(6):567-585; and Arad, N.
  • B-splines may be the transformation basis.
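A minimal sketch of a cubic B-spline free-form deformation, reduced to one dimension for clarity (the patent's transformation is 3D; the function names and grid layout are illustrative assumptions), showing the four local basis weights and the local support of each control point:

```python
import numpy as np

def bspline_weights(u):
    """The four cubic B-spline basis weights for a local coordinate u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def ffd_1d(x, control_disp, spacing):
    """Map a coordinate x through a 1D cubic B-spline free-form deformation.
    control_disp[k] holds the displacement of the control point at k*spacing;
    only the four control points spanning x influence it (local support)."""
    i = int(np.floor(x / spacing))            # index of the spanning cell
    u = x / spacing - i                       # local coordinate inside the cell
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(control_disp) - 1)
    return x + float(bspline_weights(u) @ control_disp[idx])
```

Because the basis weights sum to 1, a uniform control-point displacement produces a pure translation, and zero displacements leave every point fixed; this smoothness is why B-splines suit the non-rigid transformation here.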
  • E(f) = E_LS(f) + γE_VM(f | {B_CT, B_US}) + λE_S(f) (8)
  • E_LS(f) may be the similarity metric on both surface landmarks and internal landmarks in the CT and US images.
  • E_VM(f | {B_CT, B_US}) may be the similarity metric between the volumes (e.g., in this example, the bladder neck), B_CT in the CT image and B_US in the US image.
  • E_S(f) may be the regularization of the transformation, which is described by the bending energy of f. γ and λ may be balancing parameters.
  • the steps of processing the images for registration may be accomplished by applying an overall similarity function (see equation (8)) to the segmented images.
  • α, β, γ, and λ may be the weights for each energy term.
  • E ss may be the similarity for surface landmarks
  • E IS may be the similarity for internal landmarks.
  • E_VM may be the energy term for the bladder-neck volume matching, and E_S may be the smoothness constraint term.
  • T may be the temperature parameter, and its respective weighted term may be an entropy term that comes from the deterministic annealing technique. See, e.g., A.L. Yuille and J.J. Kosowsky, Neural Computation, 1994, 6(3):341-356.
  • two further balancing parameters may be the weights for the outlier rejection terms.
  • the matrixes p_ij and q_kl may be the fuzzy correspondence matrixes. See, e.g., Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.
  • the overall similarity function is not limited to the landmarks and features provided in equation (8). In some embodiments, the overall similarity function may be modified so that additional landmarks and/or features may be registered.
  • the similarity functions are not limited to the volume of the bladder neck region. This is only an illustrative example. The method may be modified for volumes of other regions, such as the entire prostate, or other prostate regions.
  • the method may include a step 132 of optimizing the registration of the images.
  • the optimizing may include applying an alternating optimization algorithm.
  • the optimizing step may minimize the overall similarity function by applying an alternating optimization algorithm that successively updates the correspondence matrixes p_ij and q_kl, and the transformation function f. The optimizing step may be repeated until there are no updates of the correspondence matrixes P and Q.
  • the optimizing step may include a (first) step of updating, with the transformation f fixed, the correspondence matrixes between landmarks by minimizing E(f).
  • the updated correspondence matrixes may then be treated as the (fixed) temporary correspondences between landmarks.
  • the optimizing step may include a (second) step of updating the transformation function f with the fixed temporary correspondence matrixes p_ij and q_kl.
  • the two steps may be alternately repeated until there are no updates of the correspondence matrixes P and Q. It is worth noting that the smoothness-related weights in equation (8) may decrease as the iterations progress, which means fewer and fewer smoothness constraints are placed on the transformation between the CT and TRUS images.
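The alternating optimization with deterministic annealing described above might look like the following sketch for a single point set. The softassign-style correspondence update and the translation-only transform are deliberate simplifications (the patent uses a B-spline transformation and two correspondence matrixes), so every name here is illustrative.

```python
import numpy as np

def soft_correspondence(src, dst, T):
    """Fuzzy correspondence matrix: softmax of negative squared distances at temperature T."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / T)
    return P / P.sum(axis=1, keepdims=True)   # each source point's weights sum to one

def register_points(src, dst, T0=10.0, rate=0.9, iters=50):
    """Alternate correspondence and transform updates while annealing the temperature."""
    t = np.zeros(src.shape[1])                     # illustrative transform: a pure translation
    T = T0
    for _ in range(iters):
        P = soft_correspondence(src + t, dst, T)   # step 1: update correspondences, f fixed
        targets = P @ dst                          # soft target for each source point
        t = (targets - src).mean(axis=0)           # step 2: least-squares transform update
        T *= rate                                  # deterministic annealing: cool T
    return t
```

As T decreases, the fuzzy matrix sharpens toward one-to-one correspondences, mirroring the way the smoothness constraints in the patent are relaxed as the iterations progress.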
  • the method may include a step 140 of generating registered images.
  • the images may be generated after the registration is optimized (e.g., there are no updates of the correspondence matrixes P and Q).
  • the generating may include warping the CT image to the US image.
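Warping one image onto the grid of the other, given a dense displacement field, might be sketched as below. For brevity the sketch uses nearest-neighbour sampling, whereas practical implementations typically interpolate trilinearly; the displacement-field layout is an assumption.

```python
import numpy as np

def warp_image(moving, displacement):
    """Resample `moving` (e.g., the CT volume) on the fixed (US) grid at x + u(x).

    `displacement` has shape (3, Z, Y, X): a per-voxel offset field, such as one
    produced by a B-spline transform."""
    grid = np.indices(moving.shape)
    coords = np.rint(grid + displacement).astype(int)
    for ax, size in enumerate(moving.shape):       # clamp samples to the image bounds
        np.clip(coords[ax], 0, size - 1, out=coords[ax])
    return moving[tuple(coords)]
```

A zero displacement field returns the input unchanged, which is a quick way to verify the sampling-grid bookkeeping before plugging in a real transform.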
  • Figure 4 shows an example 400 of registration results for pre- and post-biopsy TRUS images of the same patient.
  • Column 410 shows pre-biopsy images at three directions.
  • Column 420 shows the generated registered post-biopsy images at the three directions.
  • Column 430 shows the corresponding residual between pre- and registered post-biopsy images.
  • Bar 440 shows the intensity range of residual volume.
  • the method may include a step 150 of outputting registered images.
  • the outputting may include but is not limited to displaying the registered image(s), printing the registered image(s), and storing the registered image(s) remotely or locally.
  • the registered image(s) may be transmitted for further processing.
  • the registered image(s) may be transmitted to an ultrasound system to be displayed.
  • a location of a biopsy probe may be displayed on the registered image.
  • the methods according to embodiments can successfully register the anatomical structures inside the prostate by using the detected internal landmarks commonly available in both CT and US images. See, e.g., Taylor, et al., Ultrasound Med Biol., 2004, 30(2):161-168; and Jacobs et al., Med Phys., 1999, 26:1568-1578.
  • the similarity between boundary and internal landmarks may be defined by geometric features, whereas the similarity between the bladder neck regions may be defined by volume overlap matching.
  • the registration framework may incorporate the geometric and image features of landmarks and regions, rather than nonrigidly matching landmarks based only on their spatial relations as described in Chui et al. See Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.
  • Figure 5 shows an example of a system 500 configured to process and register images of an organ, for example, a prostate.
  • the system for carrying out the embodiments of the methods disclosed herein is not limited to the system shown in Figure 5. Other systems may be used.
  • the system 500 may include at least two image acquisition systems (modalities) to acquire image data of a patient.
  • the image acquisition devices may include at least first and second image acquisition systems 510 and 512.
  • the image acquisition devices may be of different modalities.
  • the system may include additional image acquisition systems.
  • One (first) of the image acquisition systems may be a system configured to acquire CT or MR images.
  • the system may include but is not limited to PET/CT, CT, or MR dedicated systems.
  • the other (second) of the image acquisition systems may be an ultrasound system.
  • the ultrasound system may be a part of a biopsy system, and may include an ultrasound probe.
  • the ultrasound system may be configured to acquire transrectal ultrasound (TRUS) images.
  • the image acquisition devices may be communicably connected to a medical image storage device 514 as well as a wired or wireless network.
  • the system 500 may further include a computer system 520 to carry out the classifying of the tissue and generating a classified image.
  • the computer system 520 may further be used to control the operation of the system, or a separate control computer system may be included.
  • the computer system 520 may also be communicably connected to another computer system as well as a wired or wireless network.
  • the computer system 520 may receive or obtain the image data from the image acquisition devices 510 and 512 or from another module provided on the network, for example, a medical image storage device 514.
  • the computer system 520 may include a number of modules that communicate with each other through electrical and/or data connections (not shown). Data connections may be direct wired links or may be fiber optic connections or wireless communications links or the like.
  • the computer system 520 may also be connected to permanent or back-up memory storage, a network, or may communicate with a separate system control through a link (not shown).
  • the modules may include a CPU 522, a memory 524, an image processor 530, an input device 526, a display 528, and a printer interface 529.
  • the CPU 522 may be any known central processing unit, processor, or microprocessor.
  • the CPU 522 may be coupled directly or indirectly to memory elements.
  • the memory 524 may include random access memory (RAM), read only memory (ROM), disk drive, tape drive, etc., or a combination thereof.
  • the memory may also include a frame buffer for storing image data arrays.
  • the present disclosure may be implemented as a routine that is stored in memory 524 and executed by the CPU 522.
  • the computer system 520 may be a general purpose computer system that becomes a specific purpose computer system when executing the routine of the disclosure.
  • the computer system 520 may also include an operating system and micro instruction code.
  • the various processes and functions described herein may either be part of the micro instruction code or part of the application program or routine (or combination thereof) that is executed via the operating system.
  • various other peripheral devices may be connected to the computer platform such as an additional data storage device, a printing device, and I/O devices.
  • the input device 526 may include a mouse, joystick, keyboard, track ball, touch activated screen, light wand, voice control, or any similar or equivalent input device, and may be used for interactive geometry prescription.
  • the input device 526 may control the production and display of images on the display 528 and the printing of the images by the printer interface 529.
  • the display 528 may be any known display screen, and the printer interface 529 may interface with any known printer, either locally or network connected.
  • the image processor 530 may be any known central processing unit, a processor, or a microprocessor. In some embodiments, the image processor 530 may process and register the images to generate registered images. In other embodiments, the image processor 530 may be replaced by image processing functionality on the CPU 522.
  • the image processor 530 may be configured to determine landmarks and/or features, process and register the images (data) from the image acquisition devices 510 and 512 and/or the medical image storage device 514. In some embodiments, the image processor 530 may be configured to implement the methods according to embodiments to generate registered images.
  • the registered images may be stored in the memory 524.
  • another computer system may assume the image registration or other functions of the image processor 530.
  • the image data stored in the memory 524 may be archived in long term storage or may be further processed by the image processor 530 and presented on the display 528.
  • the registered images may be transmitted to an image acquisition system, for example, the ultrasound system 510, to be displayed.
  • the embodiments of the disclosure may be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof.
  • the disclosure may be implemented in software as an application program tangibly embodied on a computer readable program storage device.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the system and methods of the present disclosure may be implemented in the form of a software application running on a computer system, for example, a mainframe, personal computer (PC), handheld computer, server, etc.
  • the software application may be stored on a recording media locally accessible by the computer system and accessible via a hard wired or wireless connection to a network, for example, a local area network, or the Internet.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Gynecology & Obstetrics (AREA)
  • Reproductive Health (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

Systems, methods and computer-readable storage mediums relate to processing images of two different modalities of a patient to generate registered images. The processing may be based on three different registration criteria. The criteria may include determining similarities between surface and internal landmarks defined by geometric features and similarities between anatomical features (e.g., the bladder neck region) defined by volume overlap matching.

Description

SYSTEMS, METHODS AND COMPUTER READABLE STORAGE MEDIUMS STORING INSTRUCTIONS FOR 3D REGISTRATION OF MEDICAL IMAGES
ACKNOWLEDGEMENTS
[0001] This invention was made with government support under Grants R01CA156775, NIH P50CA128301, and ULI RR0258008 awarded by the National Institutes of Health. The government has certain rights in the invention.
CROSS REFERENCE TO RELATED APPLICATION
[0002] This application claims priority to United States Provisional Application Serial
Number 61/441,798 filed February 11, 2011, which is hereby incorporated by this reference in its entirety.
BACKGROUND
[0003] Prostate cancer is the second leading cause of cancer death in men in the United States. Prostate specific antigen (PSA) measured via a blood test and digital rectal examination (DRE) are used to screen for prostate cancer, followed by a transrectal ultrasound (TRUS) guided biopsy to confirm. TRUS-guided biopsy is the clinical standard for definitive diagnosis of prostate cancer. While 2D TRUS-guided biopsy is routinely performed, 2D TRUS images do not provide the 3D location of the biopsy sample. Consequently, the physician must mentally estimate the 3D location of the biopsy needle based on limited 2D information, leading to suboptimal biopsy targeting.
[0004] In the past two decades, medical image registration has become an active research area, with applications in longitudinal studies, population-based disease studies, image information fusion, and image-guided intervention. See, e.g., Thompson et al., Cereb. Cortex., 2001, 11(1):1-16; Fan et al., In Proc. MICCAI, 2005, 1-8; Rouet et al., IEEE Trans Inf. Technol. Biomed, 2000, 4(2):126-136; and Hill et al., Radiology, 1994, 191(2):447-454. Multimodality registration is one of the most interesting topics, because it paves the way to a comprehensive understanding of anatomic or pathologic structure by integrating information gained from different imaging modalities. Non-rigid registration is the building block for a variety of medical image analysis tasks, such as multi-modality information fusion, atlas-based image segmentation, and computational anatomy.
[0005] Existing non-rigid registration methods can generally be classified into two main categories: voxel-wise/intensity-based methods and landmark/feature-based methods. For voxel-wise methods, the attributes used for characterizing voxels are often not optimal, and equally weighting all imaging data may undermine the performance of the optimization process. See, e.g., Ou, Y. and Davatzikos, C., Inf. Process Med Imaging, 2009, 21:50-62. Feature-based methods are often used due to their speed. See, e.g., B. Zitova and J. Flusser, Image Vis. Comput., 2003, 21(11):977-1000. These methods use sparse features extracted from images, such as points, curves, and surface patches, and the registration task is to find their correspondences and compute an optimal transformation. The key for feature-based methods is to find true correspondences between two feature sets. Most of them use Euclidean distance-based geometric features for solving correspondences, for example, the iterative closest point algorithm, the softassign method, shape context-based methods, and kernel correlation-based methods. See, e.g., Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141; A. Rangarajan et al., 1997, In Proc. IPMI, 1230:29-42;
Belongie et al., Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2002, 24(4):509-522; B. Jian and B.C. Vemuri, In Proc. ICCV, 2005, 1246-1251; and Y. Tsin and T. Kanade, In Proc. ECCV, 2004, 558-569. Some of them use intensity-based local similarity measures, such as cross correlation, mutual information, etc., to determine correspondences. See, e.g., Cachier et al., Computer Vision and Image Understanding, 2002, 89(2-3):272-298; and Guimond et al., IEEE Trans Med Imaging, 2001, 20(1):58-69. Some hybrid methods integrate geometric features and intensity-based local similarity measures for computing correspondences. See, e.g., P. Cachier et al., In MICCAI, 2001, 734-742; and D. Shen and C. Davatzikos, IEEE Trans Med Imaging, 2002, 21(11):1421-1439. Feature registration requires manually identified landmarks that could vary between different operators. Moreover, registration with a limited number of landmarks may be unable to recover non-rigid deformation, which is distinct at each location.
[0006] Feature-based non-rigid image registration methods are often subject to the impact of image noise, feature outliers, and deformations. The presence of noise makes it difficult for the extracted feature points to be exactly matched. Outliers are feature points detected in one image without correspondences in the other; these outliers need to be rejected during the matching process. Non-rigid registrations often involve irregular deformations, which may lead to additional inexact matches and outliers. Thus, there is a need for a non-rigid registration method that is capable of determining exactly corresponding points, rejecting outliers, and determining an accurate transformation that describes the irregular deformation.
[0007] The TPS-RPM (Thin-Plate Splines Robust Point Matching) algorithm has been proposed for point matching. See, e.g., Chui, H. and Rangarajan, A., Computer Vision and Image
Understanding, 2003, 89(2-3): 114-141. The basic idea of this algorithm is to use the softassign technique allowing fuzzy and partial matches between two point sets. See, e.g., A. Rangarajan et al., In Proc. IPMI, 1997, 1230:29-42; and A.L. Yuille and J.J. Kosowsky, Neural Computation, 1994, 6(3):341 -356.
[0008] Yang et al. developed a hybrid deformable matching algorithm using automatically extracted feature points and local salient region features to register images. See Yang et al., CVPR, 2006, 1825-1832. In this paper, the correspondences are optimized using the Euclidean distance-based geometric features and the intensity-based local salient region features.
[0009] Zhan et al. used boundaries and internal landmarks to register prostate histological and MR images. See Zhan et al., Acad. Radiol., 2007, 14(11): 1367-1381. In Fallavollita et al., a phantom implanted with 48 seeds was imaged with TRUS and CT. TRUS images were filtered, compounded, and registered to the reconstructed implants by using an intensity-based metric. See Fallavollita et al., Med Phys., 2010, 37(6):2749-2760.
[0010] Hu et al. proposed a "model-to-image" registration approach. A deformable model of the gland surface, derived from a magnetic resonance (MR) image, was registered automatically to a TRUS volume by maximizing the likelihood of a particular model shape given a voxel-intensity-based feature; this feature is an estimate of surface normal vectors at the boundary of the gland. See Hu et al., Med Image Comput. Comput. Assist. Interv., 2009, 12(1):787-794.
SUMMARY
[0011] The disclosure relates to systems, methods, and computer-readable mediums storing instructions for processing at least a first image of an organ from a first imaging
modality and a second image of an organ from a second modality to register the images.
[0012] In some embodiments, the disclosure may relate to a method for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images. In some embodiments, the method may include processing the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and generating at least one registered image.
[0013] In some embodiments, the first image may include at least one of a computer tomography (CT) image or a magnetic resonance (MR) image, and the second image may include an ultrasound (US) image. In some embodiments, the organ may be the prostate. The surface landmarks may include prostate boundaries, and the internal landmarks may include salient internal anatomical regions. In some embodiments, the similarities between the surface landmarks and internal landmarks may be determined based on geometric features, and the similarities between the volumes may be determined based on overlapping volume matching.
[0014] In some embodiments, the method may further include determining surface and internal landmarks for each image, wherein the determining of at least one of the landmarks includes segmenting each image.
[0015] In some embodiments, the method may further include optimizing the
processing of the images. The optimizing may include minimizing the distances of
corresponding points at the surface and within the prostate, and maximizing the overlap ratio of bladder neck between the images.

[0016] In some embodiments, the method may include outputting the registered images. The registered images may be outputted to a display. In some embodiments, the registered images may be outputted to a biopsy system to be displayed with a biopsy probe.
[0017] In some embodiments, the processing may include integrating similarities; and applying smooth constraints.
[0018] In some embodiments, the disclosure may relate to a computer-readable storage medium storing instructions for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images. The instructions may include processing the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and generating at least one registered image.
[0019] In some embodiments, the first image may include at least one of a computer tomography (CT) image or a magnetic resonance (MR) image, and the second image may include an ultrasound (US) image.
[0020] In some embodiments, the organ may be a prostate. The surface landmarks may include prostate boundaries, and the internal landmarks may include salient internal anatomical regions. In some embodiments, the similarities between the surface landmarks and internal landmarks may be determined based on geometric features, and the similarities between the volume may be determined based on overlapping volume matching.
[0021] In some embodiments, the medium may further include instructions for determining surface and internal landmarks for each image, wherein the determining of at least one of the landmarks includes segmenting each image; and determining the anatomical region for each image.
[0022] In some embodiments, the processing may include integrating similarities; and applying smooth constraints. The medium may further include optimizing the processing to register the images.
[0023] In some embodiments, the disclosure may relate to a system configured to process at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images. The system may include an image processor. In some embodiments, the image processor may be configured to process the first and second images to register the images. The process may include determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images. The processor may be configured to generate at least one registered image.
[0024] In some embodiments, the first image may include at least one of a computer tomography (CT) image or a magnetic resonance (MR) image, and the second image may include an ultrasound (US) image. The organ may be a prostate. In some embodiments, the surface landmarks may include prostate boundaries, and the internal landmarks may include salient internal anatomical regions. The similarities between the surface landmarks and internal landmarks may be determined based on geometric features, and the similarities between the volumes may be determined based on overlapping volume matching.
[0025] In some embodiments, the processor may be configured to optimize the process to register the images.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The disclosure can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale,
emphasis being placed upon illustrating the principles of the disclosure.
[0027] Figure 1 shows a method according to embodiments for processing images to generate a registered image;
[0028] Figure 2 illustrates an example of a schematic diagram of the registration
method;
[0029] Figure 3 shows exemplary steps according to embodiments for processing images;
[0030] Figure 4 illustrates an example of registered images; and
[0031] Figure 5 shows an example of a system according to embodiments for
registering images.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0032] In the following description, numerous specific details are set forth, such as examples of specific components, devices, and methods, in order to provide a thorough understanding of embodiments of the disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice embodiments of the disclosure. In other instances, well-known materials or methods have not been described in detail in order to avoid unnecessarily obscuring embodiments of the disclosure. While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
[0033] For feature-based non-rigid image registration, there are generally two problems that need to be resolved: the point correspondence and the transformation. Some approaches solve registration by determining the right correspondence alone, where high-level information about the points, such as lines, shapes, and spatial relationships, is used as point attributes to locate correspondences. See, e.g., Feldmar, J. and Ayache, N., International Journal of Computer Vision, 1996, 18(2):99-119; and Cross, A.D.J. and Hancock, E.R., Pattern Analysis and Machine Intelligence, IEEE Transactions on, 1998, 20(11):1236-1253. These methods are usually restricted to affine or projective transformations and have difficulty handling outliers. Other popular methods solve the correspondence and transformation iteratively. Most of these registration methods fall into a similar framework that iterates between two steps: seek correspondence and solve for the transformation. These methods are distinguished by the types of transformation to be recovered, the way the correspondence is defined, and the approach used to solve for the transformation.
[0034] One registration method, the TPS-RPM algorithm, allows outliers only in the alignment point set and is unable to handle outliers in the reference point set. However, with automatically extracted feature points, many points may have no correspondence.
[0035] According to embodiments, the disclosure relates to a hybrid approach to registration that may simultaneously optimize the similarities of at least two images (from different imaging modalities) using point-based registration and volume overlap matching terms. For example, with respect to the prostate, the registration may be obtained by minimizing the distances of corresponding points at the surface and within the prostate, and by maximizing the overlap ratio of the bladder neck in both images. The hybrid approach may capture not only deformations at the prostate surface and internal landmarks but also the deformation at the bladder neck regions. B-splines may be used for generating a smooth non-rigid spatial transformation.
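The overlap ratio referred to above is commonly computed as a Dice coefficient over binary segmentation masks. The sketch below assumes that convention for illustration; the patent's exact overlap measure is not specified here.

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Dice overlap ratio between two binary masks (e.g., bladder-neck segmentations)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

The ratio is 1.0 for identical masks and 0.0 for disjoint ones, so maximizing it drives the warped bladder-neck region of one modality onto the corresponding region of the other.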
[0036] In some embodiments, the 3D non-rigid registration methods may be used to combine PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Combined PET/CT can offer metabolic, functional, and anatomic information. Thus, by registering PET/CT images with TRUS images, the metabolic images from PET can be used to direct targeted biopsy of the prostate.
[0037] The methods are described with respect to transrectal ultrasound (TRUS)-guided prostate biopsy, the prostate, ultrasound (US) images, and TRUS images. However, it should be understood that the disclosure is not limited to the prostate, TRUS, and TRUS images, and may be applied to ultrasound-guided biopsies and/or ultrasound images of other anatomical landmarks, including, but not limited to, breast(s), lung(s), lymph node(s), kidney, cervix, and liver.
[0038] The methods are described with respect to PET/CT images and MR images. However, it should be understood that the disclosure is not limited to combining PET/CT images and/or MR images with ultrasound images, but may be applied to images from other modalities.
REGISTRATION METHODS
[0039] The methods of the disclosure are not limited to the steps described herein. The steps may be individually modified or omitted, and additional steps may be added. In some embodiments, all of the steps of the method may be performed automatically. In other embodiments, some steps of the method may be performed manually.

[0040] The methods of the disclosure are not limited to the order of steps shown in the figures. The steps may occur simultaneously, sequentially, or a combination thereof.
[0041] Unless stated otherwise as apparent from the following discussion, it will be appreciated that terms such as "analyzing," "decomposing," "receiving," "classifying," "preprocessing," "correcting," "slicing," "separating," "displaying," "storing," "printing," "quantifying," "filtering," "combining," "reconstructing," "segmenting," "generating," "registering," "determining," "obtaining," "processing," "computing," "selecting," "estimating," "detecting," "tracking," "applying," "outputting," "defining," or the like may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Embodiments of the methods described herein may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods may be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement embodiments of the disclosure.
[0042] Figure 1 illustrates a method 100 according to embodiments to process at least one ultrasound image and at least one image from another modality, for example, pretreatment or preoperative medical images, to register the images.
Receiving Step
[0043] In some embodiments, the processing method may include a step of receiving images. The images may be of an organ of a patient. The organ may include but is not limited to prostate, breast, liver, and cervix. The images may include image(s) from different medical imaging devices. The images may include image data. Each image may include at least one image of an organ from a modality. The image may be preprocessed.
[0044] As shown in Figure 1, the receiving step may include receiving 110 a (first) image from a (first) modality. The images may be pretreatment or preoperative images. The images may be CT/PET or MR image(s) of the prostate imaged by a PET/CT and/or MR capable system. In some embodiments, the receiving step may include receiving 112 a (second) image(s) of the organ from a (second) different modality. The second modality may include but is not limited to ultrasound (US). The images may be ultrasound images of the prostate imaged with a TRUS.

Determining Step
[0045] In some embodiments, the method may include a step of determining landmarks and/or features of each of the images. The determining step may automatically determine at least two types of landmarks. The landmarks may include surface landmarks and internal landmarks. The determining step may also determine additional features of the images. The additional features may include anatomical features or regions.
[0046] As shown in Figure 1, a step of determining 120 may include steps of determining at least one surface landmark 121 and at least one internal landmark 122 for the first image 110, and may include steps of determining at least one surface landmark 126, and at least one internal landmark 127 for the second image 112. The method may further include determining volume 123 for the first image 110 and determining the volume 128 for the second image 112. In some embodiments, additional landmarks and/or features of each image may be determined.
[0047] In some embodiments, the surface landmark(s) may include at least a portion of a boundary of the organ imaged. The surface landmark may include at least one point along the organ boundary. For example, with respect to the prostate, the surface landmark may include at least one point along the prostate capsule.
[0048] In some embodiments, the internal landmarks may include salient internal anatomical regions. For example, with respect to the prostate, the internal landmarks may include gland tissues containing fluid, calcification, and anatomical features or regions, for example, urethra and bladder neck.
[0049] In some embodiments, the landmarks and volumes may be specific to the organ imaged.
[0050] In some embodiments, the volume may include at least one volume of the organ. The volume may include a volume of the organ, e.g., the prostate. For example, the volume may include the volume of the entire organ or gland (prostate). In other embodiments, the volume may optionally or additionally include volume of anatomical regions of the organ. For example, the volume may include the volume of the bladder neck region.
[0051] The landmarks and/or features may be determined by any known method. In some embodiments, the landmarks and/or features may be determined by segmenting each image. For example, in some embodiments, determining the surface landmark may include segmenting the organ boundaries, for example, the prostate capsule, from each image. The determining may further include generating a triangular mesh for each prostate surface using a marching cubes algorithm, with the vertices of the surface selected as the surface landmarks. See, e.g., Lorensen et al., SIGGRAPH Comput. Graph, 1987, 21(4):163-169. In other embodiments, the surface landmark, and other landmarks and features, may be determined according to other methods. The surface landmark, and other landmarks and features, may be automatically determined.

[0052] As shown in Figure 1, the step 120 of determining the landmarks and features for each image may occur simultaneously. In other embodiments, the determining may occur sequentially.
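By way of illustration, the boundary-landmark idea can be sketched in a few lines of Python. This is a simplified stand-in, not the patented implementation: instead of a marching-cubes mesh, it picks the boundary voxels of a binary segmentation mask (via morphological erosion) as surface landmarks. The function name and the toy sphere volume are invented for the example.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def surface_landmarks(mask):
    """Return voxel coordinates on the boundary of a binary organ mask.

    The boundary is the set of foreground voxels removed by one erosion;
    in the method described above, the vertices of a marching-cubes mesh
    would play this role instead.
    """
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary)          # (N, 3) array of voxel indices

# Toy example: a solid ball standing in for a segmented prostate capsule.
zz, yy, xx = np.mgrid[:32, :32, :32]
ball = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 10 ** 2

pts = surface_landmarks(ball)
# Every landmark lies close to radius 10 from the centre of the ball.
radii = np.linalg.norm(pts - 16, axis=1)
```

In practice the landmark set would come from the mesh of the segmented capsule; the erosion trick above merely shows that "surface landmarks" are points sampled on the organ boundary.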
Processing and Optimizing Steps
[0053] In some embodiments, the method may further include a step 130 of processing the landmarks and features to register the images. In some embodiments, the processing may occur after each landmark and volume is generated and/or determined. In some embodiments, the steps of determining landmarks and processing may occur simultaneously. In some embodiments, the processing may occur after some or all landmarks or features (e.g., volume) are generated or determined.
[0054] In some embodiments, the processing step 300 may include a step (310) of determining similarities between each landmark and feature determined for each CT and US image. The similarity between surface and internal landmarks may be defined by geometric features and the similarity between the volumes of region(s) may be defined as volume overlap matching. The processing step may be based on at least three registration criteria. The first and second registration criterion may relate to finding similarities between the surface and internal landmarks. The third registration criterion may relate to minimization of volume overlap between the volumes. The minimization may be specific to a region. For example, the minimization may include minimizing the volume overlap of the bladder neck volume and/or gland volume.
[0055] Figure 2 shows a schematic diagram of the registration method according to embodiments. x_CT and y_US may represent surface landmarks of the prostate from the segmented CT and US images, respectively. u_CT and v_US may represent internal landmarks (e.g., urethra and calcification) within the prostate on the CT and US images, respectively. B_CT and B_US may represent the volume of a region of the gland (e.g., in the example, the volume of the entire gland or bladder neck region) on the CT and US images, respectively.
[0056] The determined surface landmarks and internal landmarks may be assumed to be {x_i | i = 1, ..., I} and {u_k | k = 1, ..., K} in the CT image, and {y_j | j = 1, ..., J} and {v_l | l = 1, ..., L} in the US image. The correspondences between the boundary landmarks and the internal landmarks may respectively be described by two fuzzy correspondence matrixes P and Q. A binary correspondence matrix P may be defined with dimension (I + 1) x (J + 1), and Q may be defined with dimension (K + 1) x (L + 1):

[0057] P = {p_ij}, i = 1, ..., I + 1, j = 1, ..., J + 1; Q = {q_kl}, k = 1, ..., K + 1, l = 1, ..., L + 1 (1)

[0058] The matrixes P = {p_ij} and Q = {q_kl} may consist of two parts. The I x J inner submatrix may define the correspondences of X and Y, and the K x L inner submatrix may define the correspondences of U and V. p_ij and q_kl may have real values between 0 and 1, which denote the fuzzy correspondences between landmarks. See, e.g., Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.

[0059] If x_i is mapped to y_j, then p_ij = 1; otherwise p_ij = 0. If u_k is mapped to v_l, then q_kl = 1; otherwise q_kl = 0. The (J + 1)th or (L + 1)th column and the (I + 1)th or (K + 1)th row define the outliers in X and Y or U and V, respectively. If a landmark cannot find its correspondence, it may be regarded as an outlier and the extra entry of this landmark may be set as 1. That is, if x_i or u_k is an outlier, then p_i,J+1 = 1 or q_k,L+1 = 1. Similarly, if y_j or v_l is an outlier, then p_I+1,j = 1 or q_K+1,l = 1. P and Q may satisfy the row and column normalization conditions. P may be subject to ∑_{i=1}^{I+1} p_ij = 1 (j = 1, ..., J) and ∑_{j=1}^{J+1} p_ij = 1 (i = 1, ..., I), and Q may be subject to ∑_{k=1}^{K+1} q_kl = 1 (l = 1, ..., L) and ∑_{l=1}^{L+1} q_kl = 1 (k = 1, ..., K).
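The correspondence-matrix structure, with its extra outlier row and column and the row/column normalization conditions, can be illustrated with a small sketch. The helper name and the toy matches are hypothetical; only the matrix layout follows the description above.

```python
import numpy as np

def binary_correspondence(I, J, matches, outliers_x=(), outliers_y=()):
    """Build the (I+1) x (J+1) binary correspondence matrix described above.

    `matches` lists (i, j) index pairs (0-based); unmatched landmarks are
    flagged in the extra outlier column (for x_i) or outlier row (for y_j).
    """
    P = np.zeros((I + 1, J + 1))
    for i, j in matches:
        P[i, j] = 1.0
    for i in outliers_x:           # x_i with no counterpart in Y
        P[i, J] = 1.0
    for j in outliers_y:           # y_j with no counterpart in X
        P[I, j] = 1.0
    return P

# 3 CT landmarks vs. 3 US landmarks: x0<->y1, x1<->y0, x2 and y2 are outliers.
P = binary_correspondence(3, 3, matches=[(0, 1), (1, 0)],
                          outliers_x=[2], outliers_y=[2])
# Normalization conditions: every real landmark's row/column sums to 1.
row_ok = np.allclose(P[:3, :].sum(axis=1), 1.0)
col_ok = np.allclose(P[:, :3].sum(axis=0), 1.0)
```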
[0060] In some embodiments, the surface landmarks may be determined first for each image. The points on the organ boundaries, such as prostate boundaries, may be selected as the first type of landmarks to be used to process the images to register the images.
[0061] In some embodiments, the step of processing may include a step of applying a similarity function to the surface landmarks. The similarity between surface landmarks in respective CT and US images may be defined as a Euclidean distance between their vectors; a smaller distance indicates a higher similarity between them.
[0062] Because each surface landmark is actually a vertex of the surface, its spatial relations with vertices in the neighborhood may be used to describe the geometric properties around the surface landmark. In particular, an affine invariant attribute vector may be used to characterize the geometric anatomy around each surface landmark. See, e.g., Lorensen et al., SIGGRAPH Comput. Graph, 1987, 21(4):163-169. Assuming x_i is a surface landmark under study, its geometric attribute may be defined as the volume of the tetrahedron formed by x_i and its neighboring vertices. Although the volume of the tetrahedron formed by the immediate neighbors reflects local shape information, the volumes of the tetrahedrons formed by the second or higher level neighbors may represent more global geometric properties around x_i. By using this attribute vector, the similarity between two surface landmarks x_i and y_j, respectively, in CT and US images, may be defined by a Euclidean distance between their normalized attribute vectors.
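A hedged sketch of the tetrahedron-volume attribute described above. The exact construction of neighbor rings on the mesh is not specified here, so the rings are passed in explicitly; the function names are illustrative. Note that normalizing the vector of volumes makes it affine invariant, since an affine map scales all tetrahedron volumes by the same determinant.

```python
import numpy as np

def tetra_volume(p, a, b, c):
    """Volume of the tetrahedron formed by a vertex p and three neighbors."""
    return abs(np.linalg.det(np.stack([a - p, b - p, c - p]))) / 6.0

def attribute_vector(p, neighbor_rings):
    """Attribute vector of surface landmark p: tetrahedron volumes built from
    successive neighbor rings (first ring captures local shape, outer rings
    capture more global geometry), normalized to sum to 1."""
    v = np.array([tetra_volume(p, *ring) for ring in neighbor_rings])
    return v / (v.sum() + 1e-12)

# Unit tetrahedron around the origin has volume 1/6.
p = np.zeros(3)
ring1 = (np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.]))
ring2 = tuple(2.0 * q for q in ring1)     # second-ring neighbors, volume 8/6
attrs = attribute_vector(p, [ring1, ring2])
```

The similarity between two landmarks would then be the Euclidean distance between their attribute vectors, as the text states.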
[0063] E_SS(f) = ∑_{i=1}^{I} ∑_{j=1}^{J} p_ij ||a(y_j) − a(f(x_i))||^2 + δ ∑_{i=1}^{I} ∑_{j=1}^{J} p_ij log(p_ij) − ζ ∑_{i=1}^{I} ∑_{j=1}^{J} p_ij (2)

where a(·) denotes the normalized attribute vector of a surface landmark.
[0064] P may take values from the interval [0, 1] in hybrid energy function (2), for example, by using a softassign technique. See, for example, Belongie et al., Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2002, 24(4):509-522. The continuous property of P may acknowledge ambiguous matches between X and Y. The first term may be the geometric feature-based energy term defined by Euclidean distance; the similarity between two surface landmarks is measured by the Euclidean distance between their geometric feature vectors, with a smaller distance indicating a higher similarity between them. The second term may be an entropy term that comes from the deterministic annealing technique, which is used to directly control the fuzziness of P. See, e.g., Yuille, A.L. and Kosowsky, J.J., Neural Computation, 1994, 6(3):341-356. δ may be the temperature parameter. With a higher δ, the correspondences may be forced to be fuzzier and become a factor in "convexifying" the objective function; as δ is gradually reduced to zero, the entropy term directs the correspondence matrix P toward converging to binary values. See, e.g., Chui et al., Computer Vision and Image Understanding, 2002, 89(2-3):114-141. The third term may be used to balance the outlier rejection.
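The softassign/deterministic-annealing behavior described in this paragraph can be sketched as follows. This is a generic softassign step, not the patent's code: correspondences are obtained by exponentiating negative squared feature distances at temperature δ, then alternately normalizing rows and columns (Sinkhorn iteration); the outlier cost `outlier2` and the iteration counts are illustrative assumptions.

```python
import numpy as np

def softassign(dist2, delta, outlier2=1.0, n_iter=60):
    """Fuzzy correspondences from squared feature distances at temperature
    `delta`. An extra row/column absorbs landmarks whose best match is worse
    than the assumed outlier cost `outlier2`."""
    I, J = dist2.shape
    P = np.full((I + 1, J + 1), np.exp(-outlier2 / delta))
    P[:I, :J] = np.exp(-dist2 / delta)        # smaller distance -> larger p_ij
    for _ in range(n_iter):
        P[:I, :] /= P[:I, :].sum(axis=1, keepdims=True)   # rows of real x_i
        P[:, :J] /= P[:, :J].sum(axis=0, keepdims=True)   # cols of real y_j
    return P

d2 = np.array([[0.0, 4.0],
               [4.0, 0.0]])                  # x0 close to y0, x1 close to y1
hot = softassign(d2, delta=10.0)             # high temperature: fuzzy matches
cold = softassign(d2, delta=0.05)            # low temperature: near-binary
```

At high temperature the correspondences stay fuzzy; as δ shrinks toward zero they harden toward the binary matrix described in paragraph [0059].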
[0065] In some embodiments, the method may include determining the similarities between a second type of landmarks, the internal landmarks of the CT and US images. This may occur after determining the similarities between the surface landmarks.
[0066] This may help to address the imaging deficiencies of CT and US images: CT images can lack good soft tissue contrast, and US images can be very noisy. See, e.g., Smith et al., Int. J. Radiat. Oncol. Biol. Phys., 2007, 67(4):1238-1247. Compared with the surface landmarks, it can be relatively difficult to define the landmarks within the prostate capsule, because the same anatomical structures might have different appearances or shapes in the CT and US images.
[0067] After the internal landmarks are automatically determined, it may be necessary to define the similarity between internal landmarks in CT and US images to determine the correspondences between the images. The registration of prostate CT and US images can be a multimodality image registration problem. To address this problem, geometric features like surfaces and internal landmarks may be used to aid in the registration.
[0068] To capture rich image information around each internal landmark for determining its corresponding landmarks in the other modality image, multiple local patches with different sizes around landmarks may be integrated to measure the similarity between internal landmarks.
[0069] The similarity between internal landmarks may be described as follows:

[0070] E_IS(f) = ∑_{k=1}^{K} ∑_{l=1}^{L} q_kl ||a(v_l) − a(f(u_k))||^2 + δ ∑_{k=1}^{K} ∑_{l=1}^{L} q_kl log(q_kl) − η ∑_{k=1}^{K} ∑_{l=1}^{L} q_kl (3)
[0071] Q may take values from the interval [0, 1] in hybrid energy function (3), by using a softassign technique. The continuous property of Q may acknowledge ambiguous matches between U and V. The first term may be the geometric feature-based energy term defined by Euclidean distance. Similarly, the similarity between two internal landmarks may be measured by the Euclidean distance between their geometric feature vectors; a smaller distance indicates a higher similarity between them. The second term may be an entropy term that comes from the deterministic annealing technique, which is used to directly control the fuzziness of Q. δ may be called the temperature parameter. The third term may be used to balance the outlier rejection.
[0072] For surface and internal landmarks, the goal of the registration may be to determine optimal correspondence matrixes P and Q and an optimal spatial transform f that match the two point sets X and Y, and U and V, as closely as possible. "Close" may mean not only in the Euclidean sense, but also in the sense of salient region feature similarity. For registration, the hybrid energy function may be minimized as follows,
[0073] E_LS(f) = α E_SS(f{x_CT, y_US}) + β E_IS(f{u_CT, v_US}) (4)
[0074] E_SS(f{x_CT, y_US}) and E_IS(f{u_CT, v_US}) may be the similarity metrics on the surface landmarks and the internal landmarks, respectively, in CT and US images. f may be the transformation between the CT and US images, and α and β may be balancing parameters.
[0075] In some embodiments, the similarities between the volumes, for example, the bladder neck volume or gland volume, may be determined by minimizing the bladder neck volume overlap mismatch between the volumes of each modality image. In some embodiments, the similarities between the volumes of other regions and/or the entire organ or gland may also or alternatively be determined.
[0076] Surface and internal landmarks, described in the first and second registration criteria, aim to capture 3D distortions at the prostate boundary and internal structures (e.g., urethra and calcification). However, due to the lack of blob-like structures around the bladder neck volume, very few or even no internal landmarks can be detected within and around the bladder neck volume. Consequently, the distortions around the bladder neck region may be less likely to be captured. To specifically capture distortions at the bladder neck volume, the method may include a third registration criterion: minimization of the overlap mismatch between at least one volume of a region (e.g., entire gland region or bladder neck region) in the two images. The similarity between volumes may be described as follows:
[0077] E_VM(f{B_CT, B_US}) = 1 − |f(B_CT) ∩ B_US| / |f(B_CT) ∪ B_US| (5)
[0078] B_CT may be the radiologist-defined organ or gland region (e.g., bladder neck ground truth region or entire gland region) in the CT image, and B_US may be the actual organ or gland region (e.g., bladder neck region or entire gland region) in the US image.
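Since the original publication does not preserve the exact functional form of the volume-matching energy, the following sketch uses one common choice, 1 minus the Jaccard overlap, to illustrate how a volume-matching energy between the warped CT region and the US region could be evaluated on binary masks (0 means perfect overlap). The function name and toy regions are invented for illustration.

```python
import numpy as np

def volume_overlap_energy(b_ct_warped, b_us):
    """Volume-matching energy: 1 minus the Jaccard overlap of the warped CT
    region and the US region. Minimizing it maximizes the volume overlap."""
    inter = np.logical_and(b_ct_warped, b_us).sum()
    union = np.logical_or(b_ct_warped, b_us).sum()
    return 1.0 - inter / union

a = np.zeros((8, 8, 8), bool); a[2:6, 2:6, 2:6] = True   # "CT" bladder-neck region
b = np.zeros((8, 8, 8), bool); b[3:7, 2:6, 2:6] = True   # shifted "US" region
e_shift = volume_overlap_energy(a, b)    # partial overlap -> positive energy
e_exact = volume_overlap_energy(a, a)    # identical regions -> zero energy
```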
[0079] In some embodiments, after the similarities between the same-type landmarks are determined, the processing step may include integrating (step 320) the similarities between respective landmarks and features, and adding smoothness constraints (step 330) on the estimated transformation between segmented CT and US images. See, e.g., Zhan et al., Acad. Radiol., 2007, 14(11):1367-1381; Yang et al., CVPR, 2006, 1825-1832; and Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.
[0080] In some embodiments, the transformation between CT and US images may be represented by a general function, which can be modeled by various basis functions. Examples of a transformation basis function may include but are not limited to multiquadric, thin-plate spline, radial basis, or B-spline functions. See respectively, e.g., Jekeli, Computers & Mathematics with Applications, 1994, 28(7):43-46; Stammberger et al., Magn Reson. Med., 2000, 44(4):592-601, and Bookstein, F.L., Pattern Analysis and Machine Intelligence, IEEE Transactions on, 1989, 11(6):567-585; Arad, N. and Reisfeld, Computer Graphics Forum, 1995, 14(1):35-46; and Xie, Z. and Farin, G.E., Visualization and Computer Graphics, IEEE Transactions on, 2004, 10(1):85-94, which are incorporated by reference in their entirety. In some embodiments, B-splines may be the transformation basis.
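As a sketch of one of the listed basis choices, the following evaluates a radial-basis transformation f(p) = A p + t + Σ_m w_m φ(|p − c_m|), using the simple 3D thin-plate-like kernel φ(r) = r. The fitting of centers and weights from landmark correspondences is omitted, and all names are illustrative; a B-spline free-form deformation (the patent's preferred basis) would replace the radial kernel with a control-point grid.

```python
import numpy as np

def rbf_transform(points, centers, weights, affine):
    """Evaluate f(p) = A p + t + sum_m w_m * |p - c_m| at each row of `points`.

    points:  (N, 3) query points.
    centers: (M, 3) radial-basis centers (assumed fit to correspondences).
    weights: (M, 3) per-center displacement weights.
    affine:  (A, t) with A a (3, 3) matrix and t a (3,) translation.
    """
    A, t = affine
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)  # (N, M)
    return points @ A.T + t + r @ weights                                 # (N, 3)

pts = np.array([[0., 0., 0.], [1., 0., 0.]])
centers = np.array([[0., 0., 0.]])
weights = np.array([[0., 0., 0.1]])       # the centre pushes along z by 0.1*r
identity = (np.eye(3), np.zeros(3))
out = rbf_transform(pts, centers, weights, identity)
```

Here the point at the center is unmoved, while the point at distance 1 is displaced by 0.1 along z on top of the identity affine part.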
[0081] The transformation may be described as follows. The following hybrid energy function may be minimized for registration,

[0082] E(f) = E_LS(f) + γ E_VM(f{B_CT, B_US}) + λ E_S(f) (6)
[0083] E_LS(f) may be the similarity metric on both surface landmarks and internal landmarks in CT and US images. E_VM(f{B_CT, B_US}) may be the similarity metric between the volume (e.g., in this example the bladder neck) B_CT in the CT image and the bladder neck B_US in the US image. E_S(f) may be the regularization of the transformation, which is described by the bending energy of f. α, β, γ and λ may be balancing parameters.
[0084] By optimizing an overall similarity function that integrates the similarities between landmarks and the smoothness constraints on the estimated transformation between CT and US images, the correspondences between the landmarks and, importantly, the dense transformation between CT and US images may be simultaneously obtained.

[0085] In non-rigid registration, smoothness is necessary to restrict the mappings from being too arbitrary. The local deformation ought to be characterized as a smooth function to discourage arbitrary, unrealistic shape deformation. A smoothness penalty term may be introduced to regularize the local deformation by the second order spatial derivatives, written as
[0086] E_S(f) = ∭ [ (∂²f/∂x²)² + (∂²f/∂y²)² + (∂²f/∂z²)² + 2(∂²f/∂x∂y)² + 2(∂²f/∂x∂z)² + 2(∂²f/∂y∂z)² ] dx dy dz (7)
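A discrete version of this bending-energy penalty, assuming the transformation is represented as a dense displacement field sampled on a voxel grid (an illustrative sketch, not the patent's implementation). Looping over all ordered axis pairs counts each mixed derivative twice, which reproduces the factor 2 on the cross terms.

```python
import numpy as np

def bending_energy(disp, spacing=1.0):
    """Sum of squared second spatial derivatives of a (Z, Y, X, 3)
    displacement field, approximated with finite differences."""
    total = 0.0
    for c in range(disp.shape[-1]):                  # each component of f
        grads = np.gradient(disp[..., c], spacing)   # first derivatives
        for i in range(3):
            for j in range(3):                       # (i, j) and (j, i) both
                d2 = np.gradient(grads[i], spacing, axis=j)
                total += (d2 ** 2).sum()
    return total

zz, yy, xx = np.meshgrid(np.arange(6.0), np.arange(6.0), np.arange(6.0),
                         indexing="ij")
linear = np.stack([0.1 * xx, 0.2 * yy, np.zeros_like(zz)], axis=-1)  # affine
bumpy = np.stack([0.1 * xx ** 2, np.zeros_like(yy), np.zeros_like(zz)], axis=-1)
```

An affine (linear) displacement field has zero bending energy, while a quadratic field is penalized, which is exactly the behavior the regularizer is meant to have.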
[0087] In some embodiments, the steps of processing the images for registration, e.g., determining or defining similarities between each landmark or feature, integrating the similarities and adding smoothness constraints, and optimization, may be accomplished by applying an overall similarity function (see equation (8)) to the segmented images.
[0088] Based on equations (1)-(7), the overall similarity function may be written as:

[0089] E(f) = α ∑_{i=1}^{I} ∑_{j=1}^{J} p_ij ||a(y_j) − a(f(x_i))||^2 + β ∑_{k=1}^{K} ∑_{l=1}^{L} q_kl ||a(v_l) − a(f(u_k))||^2 + [δ ∑_{i=1}^{I} ∑_{j=1}^{J} p_ij log(p_ij) + τ ∑_{k=1}^{K} ∑_{l=1}^{L} q_kl log(q_kl)] − [ζ ∑_{i=1}^{I} ∑_{j=1}^{J} p_ij + η ∑_{k=1}^{K} ∑_{l=1}^{L} q_kl] + γ E_VM(f{B_CT, B_US}) + λ E_S(f) (8)
[0090] α, β, γ, and λ may be the weights for each energy term. E_SS may be the similarity for surface landmarks, and E_IS may be the similarity for internal landmarks. E_VM may be the energy term for the bladder-neck volume matching, and E_S may be the smoothness constraint term. δ and τ may be the temperature parameters, and their respective weighted terms may be entropy terms that come from the deterministic annealing technique. See, e.g., Yuille, A.L. and Kosowsky, J.J., Neural Computation, 1994, 6(3):341-356. ζ and η may be the weights for the outlier rejection terms. p_ij and q_kl may be the fuzzy correspondence matrixes. See, e.g., Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141. f may denote the transformation between CT and TRUS images. See, e.g., Wang, H. and Fei, B., ICBBE 2008, The 2nd International Conference on, 2008, 2353-2356.
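Assembling an overall similarity function of this shape from precomputed pieces can be sketched as below. The parameter names follow the text, but the function signature and the tiny numerical example are invented for illustration; in a real optimizer, the distance matrices, volume term, and smoothness term would be recomputed from the current transform f at each iteration.

```python
import numpy as np

def overall_energy(P, Q, d2_surf, d2_int, e_vm, e_smooth,
                   alpha, beta, gamma, lam, delta, tau, zeta, eta):
    """Weighted sum of: feature-distance terms, entropy (annealing) terms,
    outlier-rejection terms, volume matching, and smoothness.

    d2_surf / d2_int hold squared feature distances between surface and
    internal landmark pairs; P, Q are the inner fuzzy correspondence blocks.
    """
    ent = lambda M: np.sum(M * np.log(M + 1e-300))   # entropy, safe at 0
    return (alpha * np.sum(P * d2_surf) + beta * np.sum(Q * d2_int)
            + delta * ent(P) + tau * ent(Q)
            - zeta * P.sum() - eta * Q.sum()
            + gamma * e_vm + lam * e_smooth)

# Perfect binary correspondences with zero feature distance on the diagonal:
P = np.eye(2)
Q = np.eye(2)
d2 = np.array([[0.0, 1.0], [1.0, 0.0]])
E = overall_energy(P, Q, d2, d2, e_vm=0.4, e_smooth=0.0,
                   alpha=1, beta=1, gamma=1, lam=1,
                   delta=1, tau=1, zeta=0, eta=0)
```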
[0091] The overall similarity function is not limited to the landmarks and features provided in equation (8). In some embodiments, the overall similarity function may be modified so that additional landmarks and/or features may be registered.
[0092] It should be understood that the similarity functions are not limited to the volume of the bladder neck region. This is only an illustrative example. The method may be modified for volumes of other regions, such as the entire prostate, or other prostate regions.

[0093] In some embodiments, the method may include a step 132 of optimizing the registration of the images. The optimizing may include applying an alternating optimization algorithm. In some embodiments, the optimizing step may minimize the overall similarity function by applying an alternating optimization algorithm that successively updates the correspondence matrixes p_ij and q_kl, and the transformation function f. The optimizing step may be repeated until there are no updates of the correspondence matrixes P and Q.
[0094] In some embodiments, the optimizing step may include a (first) step of updating, with the transformation f fixed, the correspondence matrixes between landmarks by minimizing E(f). The updated correspondence matrixes may then be treated as the (fixed) temporary correspondences between landmarks. The optimizing step may include a (second) step of updating the transformation function f with the fixed temporary correspondence matrixes p_ij and q_kl. The two steps may be alternately repeated until there are no updates of the correspondence matrixes P and Q. It is worth noting that δ and τ in equation (8) may decrease with the progress of iterations, which means fewer and fewer smoothness constraints are placed on the transformation between CT and TRUS images.
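The alternating scheme can be sketched with a deliberately simplified transform: a pure translation standing in for the B-spline f, and a plain softmax correspondence update standing in for the full softassign. Everything here is illustrative, not the patent's algorithm; it shows only the alternation (fix f, update P; fix P, update f) plus temperature annealing.

```python
import numpy as np

def register_alternating(X, Y, delta=1.0, anneal=0.9, n_outer=60):
    """Alternate: (1) with the transform fixed, update fuzzy correspondences
    P; (2) with P fixed, re-fit the transform (here just a translation);
    lower the temperature each round (deterministic annealing)."""
    t = np.zeros(X.shape[1])                       # current transform: x -> x + t
    for _ in range(n_outer):
        d2 = ((X[:, None, :] + t - Y[None, :, :]) ** 2).sum(axis=2)
        P = np.exp(-d2 / delta)
        P /= P.sum(axis=1, keepdims=True)          # row-normalized correspondences
        # weighted least-squares translation given the soft correspondences
        t = (P[..., None] * (Y[None, :, :] - X[:, None, :])).sum(axis=(0, 1)) / P.sum()
        delta *= anneal                            # anneal the temperature
    return t, P

X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
t_true = np.array([0.5, -0.2, 0.3])
t_est, P = register_alternating(X, X + t_true)     # recover a known translation
```

As the temperature decreases, P hardens toward a binary matrix and the iteration stops changing, which is the stopping condition described above.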
Generating Step
[0095] In some embodiments, the method may include a step 140 of generating registered images. The images may be generated after the registration is optimized (e.g., there are no updates of the correspondence matrixes P and Q). In some embodiments, the generating may include warping the CT image to the US image.
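Warping the CT image onto the US grid, given the optimized transform expressed as per-voxel source coordinates, might look like the following sketch (trilinear resampling via SciPy; the ramp image and the fixed 2-voxel shift are toy data standing in for the recovered deformation).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, src_coords):
    """Warp the moving (e.g., CT) image onto the fixed (US) grid: for each
    output voxel, sample the moving image at the mapped source coordinates
    with trilinear interpolation. `src_coords` is a (3, Z, Y, X) array."""
    return map_coordinates(moving, src_coords, order=1, mode="nearest")

# Toy example: shift a ramp image by +2 voxels along x.
ct = np.tile(np.arange(8.0), (8, 8, 1))              # intensity = x index
grid = np.array(np.meshgrid(np.arange(8), np.arange(8), np.arange(8),
                            indexing="ij"), dtype=float)
grid[2] -= 2.0                                       # sample from x - 2
warped = warp_image(ct, grid)
```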
[0096] Figure 4 shows an example 400 of registration results for pre- and post-biopsy TRUS images of the same patient. Column 410 shows pre-biopsy images at three directions. Column 420 shows the generated registered post-biopsy images at the three directions. Column 430 shows the corresponding residual between the pre- and registered post-biopsy images. Bar 440 shows the intensity range of the residual volume.
Outputting Step
[0097] In some embodiments, the method may include a step 150 of outputting registered images. In some embodiments, the outputting may include but is not limited to displaying the registered image(s), printing the registered image(s), and storing the registered image(s) remotely or locally. In other embodiments, the registered image(s) may be transmitted for further processing. In some embodiments, the registered image(s) may be transmitted to an ultrasound system to be displayed. In some embodiments, a location of a biopsy probe may be displayed on the registered image.
[0098] Compared with previous methods, which are only guided by the aligned boundaries of anatomic structures, the methods according to embodiments can successfully register the anatomical structures inside the prostate by using the detected internal landmarks commonly available in both CT and US images. See, e.g., Taylor et al., Ultrasound Med Biol., 2004, 30(2):161-168; and Jacobs et al., Med Phys., 1999, 26:1568-1578.
[0099] Considering the different properties of the two types of landmarks, the similarity between boundary and internal landmarks may be defined by geometric features, whereas the similarity between the bladder neck volumes may be defined by volume overlap matching.
[00100] The registration framework according to embodiments may incorporate the geometric and image features of landmarks and regions, rather than nonrigidly matching landmarks based only on their spatial relations as described in Chui et al. See Chui, H. and Rangarajan, A., Computer Vision and Image Understanding, 2002, 89(2-3):114-141.
SYSTEM IMPLEMENTATION
[00101] Figure 5 shows an example of a system 500 configured to process and register images of an organ, for example, a prostate. The system for carrying out the embodiments of the methods disclosed herein is not limited to the system shown in Figure 5. Other systems may be used.
[00102] In some embodiments, the system 500 may include at least two image acquisition systems (modalities) to acquire image data of a patient. The image acquisition devices may include at least first and second image acquisition systems 510 and 512. The image acquisition devices may be of different modalities. In some embodiments, the system may include additional image acquisition systems.
[00103] One (first) of the image acquisition systems may be a system configured to acquire CT or MR images. The system may include but is not limited to PET/CT, CT, or MR dedicated systems.
[00104] The other (second) of the image acquisition systems may be an ultrasound system. In some embodiments, the ultrasound system may be a part of a biopsy system, and may include an ultrasound probe. The ultrasound system may be configured to acquire transrectal ultrasound (TRUS) images.
[00105] The image acquisition devices may be communicably connected to a medical image storage device 514 as well as a wired or wireless network.
[00106] The system 500 may further include a computer system 520 to carry out the processing and registration of the images and the generating of registered images. The computer system 520 may further be used to control the operation of the system, or a separate control computer system may be included.
[00107] The computer system 520 may also be communicably connected to another computer system as well as a wired or wireless network. The computer system 520 may receive or obtain the image data from the image acquisition devices 510 and 512 or from another module provided on the network, for example, a medical image storage device 514.

[00108] The computer system 520 may include a number of modules that communicate with each other through electrical and/or data connections (not shown). Data connections may be direct wired links or may be fiber optic connections or wireless communication links or the like. The computer system 520 may also be connected to permanent or back-up memory storage, a network, or may communicate with a separate system control through a link (not shown). The modules may include a CPU 522, a memory 524, an image processor 530, an input device 526, a display 528, and a printer interface 529.
[00109] The CPU 522 may be any known central processing unit, processor, or microprocessor. The CPU 522 may be coupled directly or indirectly to memory elements. The memory 524 may include random access memory (RAM), read only memory (ROM), disk drive, tape drive, etc., or a combination thereof. The memory may also include a frame buffer for storing image data arrays.
[00110] The present disclosure may be implemented as a routine that is stored in memory 524 and executed by the CPU 522. As such, the computer system 520 may be a general purpose computer system that becomes a specific purpose computer system when executing the routine of the disclosure.
[00111] The computer system 520 may also include an operating system and micro instruction code. The various processes and functions described herein may either be part of the micro instruction code or part of the application program or routine (or combination thereof) that is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device, a printing device, and I/O devices.
[00112] The input device 526 may include a mouse, joystick, keyboard, track ball, touch activated screen, light wand, voice control, or any similar or equivalent input device, and may be used for interactive geometry prescription. The input device 526 may control the production and display of images on the display 528, and the printing of the images by the printer interface 529. The display 528 may be any known display screen, and the printer interface 529 may be connected to any known printer, either locally or network connected.
[00113] The image processor 530 may be any known central processing unit, a processor, or a microprocessor. In some embodiments, the image processor 530 may process and register the images to generate registered images. In other embodiments, the image processor 530 may be replaced by image processing functionality on the CPU 522.
[00114] In some embodiments, the image processor 530 may be configured to determine landmarks and/or features, process and register the images (data) from the image acquisition devices 510 and 512 and/or the medical image storage device 514. In some embodiments, the image processor 530 may be configured to implement the methods according to embodiments to generate registered images.
[00115] In some embodiments, the registered images may be stored in the memory 524. In other embodiments, another computer system may assume the image registration or other functions of the image processor 530. In response to commands received from the input device 526, the image data stored in the memory 524 may be archived in long term storage or may be further processed by the image processor 530 and presented on the display 528. In some embodiments, the registered images may be transmitted to an image acquisition system, for example, the ultrasound system 510, to be displayed.
[00116] It is to be understood that the embodiments of the disclosure be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof. In one embodiment, the disclosure may be implemented in software as an application program tangible embodied on a computer readable program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. The system and methods of the present disclosure may be implemented in the form of a software application running on a computer system, for example, a mainframe, personal computer (PC), handheld computer, server, etc. The software application may be stored on a recording media locally accessible by the computer system and accessible via a hard wired or wireless connection to a network, for example, a local area network, or the Internet.
[00117] It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the systems components (or the process steps) may differ depending upon the manner in which the disclosure is programmed. Given the teachings of the disclosure provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the disclosure.
[00118] All references cited herein are hereby incorporated by reference in their entirety.
[00119] While various embodiments of the disclosure have been described, the description is intended to be exemplary rather than limiting, and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible that are within the scope of the disclosure.

CLAIMS

What is claimed:
1. A method for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images, comprising:
processing the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and
generating at least one registered image.
2. The method according to claim 1, wherein the first image includes at least one of a computer tomography (CT) image or a magnetic resonance (MR) image, and the second image includes an ultrasound (US) image.
3. The method according to claim 2, wherein the organ is the prostate.
4. The method according to claim 3, wherein the surface landmarks include prostate boundaries, the internal landmarks include salient internal anatomical regions.
5. The method according to claim 4, wherein the similarities between the surface landmarks and internal landmarks are determined based on geometric features, and the similarities between the volume are determined based on overlapping volume matching of at least one anatomical region.
6. The method according to claim 4, the method further comprising:
determining the surface and internal landmarks for each image, wherein the determining of at least one of the landmarks includes segmenting each image.
7. The method according to claim 1, further comprising:
optimizing the processing of the images.
8. The method according to claim 1, further comprising:
outputting the registered images.
9. The method according to claim 1, wherein the processing further comprises:
integrating the similarities; and
applying smooth constraints.
10. A computer-readable storage medium storing instructions for processing at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images, the instructions comprising:
processing the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and
generating at least one registered image.
11. The medium according to claim 10,
wherein the first image includes at least one of a computer tomography (CT) image or a magnetic resonance (MR) image, and the second image includes an ultrasound (US) image.
12. The medium according to claim 11, wherein:
the organ is the prostate; and
the surface landmarks include prostate boundaries and the internal landmarks include salient internal anatomical regions.
13. The medium according to claim 12, wherein the similarities between the surface landmarks and internal landmarks are determined based on geometric features, and the similarities between the volumes are determined based on overlapping volume matching.
14. The medium according to claim 12, the medium further comprising instructions for:
determining the surface and internal landmarks for each image, wherein the determining of at least one of the landmarks includes segmenting each image.
15. The medium according to claim 11, wherein:
the processing includes:
integrating the similarities; and
applying smoothness constraints,
the medium further includes instructions for optimizing the processing to register the images.
16. A system configured to process at least a first image of an organ from a first imaging modality and a second image of an organ from a second modality to register the images, comprising:
an image processor, the image processor being configured to:
process the first and second images to register the images, the processing including determining similarities between at least surface landmarks, internal landmarks, and volume provided in each of the first and second images; and
generate at least one registered image.
17. The system according to claim 16, wherein the first image includes at least one of a computed tomography (CT) image or a magnetic resonance (MR) image, and the second image includes an ultrasound (US) image.
18. The system according to claim 16, wherein:
the organ is a prostate;
the surface landmarks include prostate boundaries; and
the internal landmarks include salient internal anatomical regions.
19. The system according to claim 18, wherein the similarities between the surface landmarks and internal landmarks are determined based on geometric features and the similarities between the volumes are determined based on overlapping volume matching.
20. The system according to claim 19, wherein the processor is configured to optimize the process to register the images.
PCT/US2012/024821 2011-02-11 2012-02-13 Systems, methods and computer readable storage mediums storing instructions for 3d registration of medical images WO2012109641A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161441798P 2011-02-11 2011-02-11
US61/441,798 2011-02-11

Publications (2)

Publication Number Publication Date
WO2012109641A2 true WO2012109641A2 (en) 2012-08-16
WO2012109641A3 WO2012109641A3 (en) 2012-10-18

Family

ID=46639239

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/024821 WO2012109641A2 (en) 2011-02-11 2012-02-13 Systems, methods and computer readable storage mediums storing instructions for 3d registration of medical images

Country Status (1)

Country Link
WO (1) WO2012109641A2 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070242901A1 (en) * 2006-04-17 2007-10-18 Xiaolei Huang Robust click-point linking with geometric configuration context: interactive localized registration approach
US20080044105A1 (en) * 2004-07-07 2008-02-21 Jan Boese Method for Determining a Transformation of Coordinates of Different Images of an Object
US20080205719A1 (en) * 2005-06-15 2008-08-28 Koninklijke Philips Electronics, N.V. Method of Model-Based Elastic Image Registration For Comparing a First and a Second Image
US20100254583A1 (en) * 2007-12-18 2010-10-07 Koninklijke Philips Electronics N.V. System for multimodality fusion of imaging data based on statistical models of anatomy


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014212089A1 (en) * 2014-06-24 2015-07-23 Siemens Aktiengesellschaft Method for monitoring the image of a minimally invasive procedure, image processing device and ultrasound image recording device
WO2016039763A1 (en) * 2014-09-12 2016-03-17 Analogic Corporation Image registration fiducials
US20170281135A1 (en) * 2014-09-12 2017-10-05 Analogic Corporation Image Registration Fiducials
CN116211353A (en) * 2023-05-06 2023-06-06 北京大学第三医院(北京大学第三临床医学院) Wearable ultrasonic bladder capacity measurement and multi-mode image morphology evaluation system
CN116211353B (en) * 2023-05-06 2023-07-04 北京大学第三医院(北京大学第三临床医学院) Wearable ultrasonic bladder capacity measurement and multi-mode image morphology evaluation system

Also Published As

Publication number Publication date
WO2012109641A3 (en) 2012-10-18

Similar Documents

Publication Publication Date Title
Hu et al. Weakly-supervised convolutional neural networks for multimodal image registration
US7876938B2 (en) System and method for whole body landmark detection, segmentation and change quantification in digital images
US7738683B2 (en) Abnormality detection in medical images
Vandemeulebroucke et al. Automated segmentation of a motion mask to preserve sliding motion in deformable registration of thoracic CT
Rueckert et al. Model-based and data-driven strategies in medical image computing
Linguraru et al. Automated segmentation and quantification of liver and spleen from CT images using normalized probabilistic atlases and enhancement estimation
Martin et al. Automated segmentation of the prostate in 3D MR images using a probabilistic atlas and a spatially constrained deformable model
Crum et al. Non-rigid image registration: theory and practice
US7653263B2 (en) Method and system for volumetric comparative image analysis and diagnosis
Häme et al. Semi-automatic liver tumor segmentation with hidden Markov measure field model and non-parametric distribution estimation
Kang et al. Heart chambers and whole heart segmentation techniques
CN107886508B (en) Differential subtraction method and medical image processing method and system
Wang et al. A review of deformation models in medical image registration
Zhan et al. Registering histologic and MR images of prostate for image-based cancer detection
El-Baz et al. Automatic analysis of 3D low dose CT images for early diagnosis of lung cancer
Göçeri Fully automated liver segmentation using Sobolev gradient‐based level set evolution
Shen et al. Optimized prostate biopsy via a statistical atlas of cancer spatial distribution
US20070014448A1 (en) Method and system for lateral comparative image analysis and diagnosis
Banerjee et al. Fast and robust 3D ultrasound registration–block and game theoretic matching
WO2007037848A2 (en) Systems and methods for computer aided diagnosis and decision support in whole-body imaging
Abbasi et al. Medical image registration using unsupervised deep neural network: A scoping literature review
EP2901417A1 (en) A system and method for annotating images by propagating information
Jung et al. Deep learning for medical image analysis: Applications to computed tomography and magnetic resonance imaging
Xiang et al. CorteXpert: A model-based method for automatic renal cortex segmentation
Li et al. Joint probabilistic model of shape and intensity for multiple abdominal organ segmentation from volumetric CT images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12745197

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12745197

Country of ref document: EP

Kind code of ref document: A2