WO2011110867A1 - Apparatus and method for registering medical images containing a tubular organ - Google Patents

Apparatus and method for registering medical images containing a tubular organ

Info

Publication number
WO2011110867A1
Authority
WO
WIPO (PCT)
Application number
PCT/GB2011/050488
Other languages
French (fr)
Inventor
Jamie Mcclelland
Holger Roth
Mingxing Hu
David Hawkes
Steve Halligan
Original Assignee
Ucl Business Plc
Application filed by Ucl Business Plc
Publication of WO2011110867A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30028 - Colon; Small intestine

Definitions

  • the present invention relates to an apparatus and method for registering medical images containing a tubular organ such as the colon, thereby establishing a correspondence between the first and second images.
  • Colorectal cancer is one of the main cancer types leading to more than 630,000 deaths each year worldwide [1].
  • Traditional colonoscopy using a video endoscope can have miss rates of up to 27% for adenomas of 5 mm or smaller [2].
  • Such traditional colonoscopy can cause significant discomfort and is not without risk of perforation of the gut.
  • CT colonography is a new technology that combines CT scanning with 3D image analysis and visualisation to produce images of the patient's colon that mimic those obtained during colonoscopy [3], hence the alternative term, "virtual colonoscopy” [4].
  • CT colonography can be used to detect colon cancer and its polyp precursors, and has attracted considerable medical and lay attention because the procedure is safer and more acceptable to patients than alternatives.
  • CT colonography is now becoming established in the USA and Europe (and also in Japan) as a standard screening tool for colorectal cancer - for example, it is practised in about 35% of NHS hospitals, mainly to diagnose symptomatic cancer.
  • Computer-aided-detection (CAD) is expected to elevate acceptance of CT colonography even further.
  • the bowel is cleansed before the procedure by administering a powerful laxative.
  • the bowel is then inflated with carbon dioxide gas prior to each image via a plastic tube inserted into the anus.
  • Remaining faecal material and fluids can be tagged with contrast agent such as a barium salt and removed digitally.
  • faecal remnants or folds of the colonic wall can still mimic the appearance of polyps leading to false positives.
  • some regions of the colon may not be sufficiently inflated causing these regions to collapse so that none of the surface features are visible in these areas.
  • CT images are usually taken both prone and supine so that the colon falls into a different position. This can help to reduce the incidence of false positives, i.e. the erroneous detection of lesions (or other features of interest) that subsequently turn out to be false. For example, turning and re- insufflating the colon changes the colon shape and may dislodge faecal matter attached to the colon wall. Thus viewing a given feature in both CT images (prone and supine) can assist a clinician with diagnosis. Additionally, regions that have collapsed in one of the views will hopefully be re-inflated in the other view, enabling those regions to be examined.
  • Using two CT images (such as prone and supine) in conjunction with one another generally involves establishing some correspondence between the two CT images, so that a feature seen in a first image can be identified as the same feature in a second image.
  • the radiologist establishes this spatial correspondence between the two views by eye.
  • this is a difficult task for even the most experienced radiologist and hence can introduce delays and errors in the diagnostic process.
  • US 2004/0264753 describes a system that provides a computer-based registration of prone and supine CT images of the colon, but does not give any details of how the image registration is actually performed.
  • One known approach is to align extracted centre lines of both views [6] and to use this set of coordinates as an index of location, see also US 2004/0136584.
  • this method provides no information on rotation around the centre line and centre line registration is likely to lead to errors of several centimetres along the colon.
  • Another approach has been to define several anatomical landmarks [7], like the anus, cecum and flexures, in order to align both 3D colon images.
  • flexures are difficult to locate accurately and identification of only a small number of points is insufficient to describe the complex folding and deformation of the colon between prone and supine views.
  • Fukano et al [20] aim to establish correspondence between the colon surfaces by matching haustral folds extracted from prone and supine data (a similar approach is described in a US patent publication).
  • Although Fukano et al demonstrate that haustral folds can be detected robustly, it is very challenging to establish the correct correspondence between views, as their results indicate. For example, they report 65.1% of corresponding large haustral folds and 13.3% of small haustral folds as being matched correctly.
  • A voxel-based method has been proposed by Suh et al. [8]. This method also uses the centre lines to generate an initial deformation field and then treats the registration task as an optical flow process.
  • Suh et al. [24] use their voxel-based method to try to handle cases where the colon has partially collapsed in one of the views by allowing the colon to grow or shrink during the registration.
  • this makes it even more challenging to appropriately constrain the registration to provide robust and accurate results, as evidenced by the limited accuracy reported for their method (average error after registration of 30.1 mm for 4 cases each evaluated using a single polyp).
  • Zeng et al. [21] presented a method based on conformal mapping combined with feature matching in order to establish correspondences between the prone and supine surfaces. They detect four flexures and one teniae coli in order to divide the colon surface into five segments and map each segment to a rectangle. Correspondence between prone and supine surfaces is then established for each rectangular segment individually. Therefore the method relies on being able to accurately determine exactly the same segments on the prone and supine surfaces, which can be very difficult even for fully distended colons, and may not be possible for cases with local colonic collapse. Furthermore, they established correspondence between the mapped segments using only a sparse point set of features extracted from some 'prominent' haustral folds, which are unlikely to provide an accurate alignment of the detailed colonic surface.
  • One embodiment of the invention provides a method for performing a non-rigid registration of a first three-dimensional medical image containing a tubular organ with a second three-dimensional medical image containing the tubular organ.
  • the method comprises: segmenting the first three-dimensional medical image containing the tubular organ and extracting a first surface representing the surface of the tubular organ from the first three-dimensional medical image; segmenting the second three-dimensional medical image containing the tubular organ and extracting a second surface representing the surface of the tubular organ from the second three-dimensional medical image; generating a first mapping that maps the first surface to a first two-dimensional representation of the surface of the tubular organ; generating a second mapping that maps the second surface to a second two-dimensional representation of the surface of the tubular organ; determining a third mapping for transforming between the first and second two-dimensional representations; and performing the registration of the first three-dimensional medical image with the second three-dimensional medical image on the basis of said first, second and third mappings.
  • the first three-dimensional medical image containing the tubular organ and the second three-dimensional medical image containing the tubular organ comprise Computed Tomography (CT) images.
  • the medical images might be obtained using other imaging techniques, such as magnetic resonance imaging (MRI), ultrasound, derived from an optical colonoscopy, etc.
  • the first medical image may be derived from one imaging technique
  • the second medical image may be derived from a different imaging technique.
  • the first three-dimensional medical image containing a tubular organ and the second three-dimensional medical image containing the tubular organ are taken in first and second positions respectively.
  • the first position may comprise the prone position and the second position may comprise the supine position.
  • Other examples of different positions might be left and right side (lateral) images. It will be appreciated that in many cases changing the body position between the first and second images leads to (non-rigid) deformation of the tubular organ, which adds complexity to the image registration to be performed.
  • extracting the first and second surfaces includes reducing the topological complexity of each surface. This helps to facilitate mapping or projecting the extracted surface of the tubular organ onto a two-dimensional (flat) representation.
  • Each of said first and second surfaces may comprise a cylindrical surface to represent the surface of the tubular organ. This can be achieved, for example in the case of images of the colon, by forming a first hole in each surface to represent the anus and a second hole in each surface to represent the cecum.
  • other surface shapes may be used depending on the particular anatomy in question. For example, a branching topology may be appropriate in some cases.
  • each of the first and second mappings comprises a conformal mapping - i.e. a mapping that preserves angle.
  • mapping may be derived using the Ricci flow algorithm or any other appropriate technique.
  • One advantage of using a conformal mapping is that the two-dimensional representation can be considered as depicting an unfolded view of the tubular organ.
  • other forms of mapping may be used, especially if only the ultimate image registration is of interest (rather than the intermediate two-dimensional representation of the tubular organ).
  • the first and second two-dimensional representations of the surface of the tubular organ each has a first dimension corresponding to distance along the tubular organ and a second dimension corresponding to angular position around the tubular organ.
  • this helps to provide a two-dimensional representation that can be readily interpreted as representing an unfolded view of the tubular organ (if so desired).
  • the centre line along a tubular organ is an important parameter for many existing systems that provide visualization, CAD, etc, and hence such a representation supports easier comparison with these existing systems.
  • the correlation or registration between images is generally performed based on some physical property or measure that varies across the surface of the tubular organ.
  • a portion of the first two- dimensional representation having a first value for the property will usually correspond to a portion of the second two-dimensional representation that has a similar value for this property (allowing for deformation, etc), since these two corresponding image locations are considered to represent the same physical position on the surface of the tubular organ, and should therefore have the same physical properties.
  • the property used for the image registration is a measure based on local shape or curvature, while in another embodiment, the property used for the image registration is based on image intensity (from the original three-dimensional medical images).
  • the property or measure used for the registration may be calculated from the two-dimensional representation of the surface of the tubular organ, or from the original three-dimensional medical images (in the vicinity of the surface of the tubular organ).
  • the first/second mapping can then be used to locate the property derived from the original first/second three-dimensional medical images onto the first/second two-dimensional representations for subsequent use in the image registration.
  • the third mapping is performed within the framework of a cylindrical topology.
  • This reflects the tubular nature of the tubular organ, and so can help to provide a better and physically more appropriate mapping.
  • each of the first and second two-dimensional representations is repeated for use in determining the third mapping to reflect a cylindrical topology. This mimics the periodic nature of traversing around the circumference of the tubular organ, thereby reflecting the cylindrical topology.
  • the algorithm used for creating the third mapping may have an intrinsic understanding of the cylindrical topology of the two- dimensional representations, thereby avoiding the use of such image repetitions.
  • the third mapping may automatically wrap each of the first and second two-dimensional representations in a cyclical fashion perpendicular to the central axis of said tubular organ.
  • the third mapping comprises a non-rigid 2-D B-spline registration.
  • the registration may be performed in multiple stages, recovering larger deformation at first, and then smaller deformations later on.
  • One embodiment of the invention further comprises accommodating one or more collapsed segments in said tubular organ in the first and/or second medical images.
  • the one or more collapsed segments divide the tubular organ into multiple non-collapsed segments.
  • this accommodation comprises mapping each non-collapsed segment to an individual image representing a two-dimensional representation of the surface of the tubular organ for that non-collapsed segment, and then forming an aggregate image of the individual images for use as said first or second two- dimensional representation of the surface of the tubular organ (as appropriate).
  • the aggregate image of the individual images may be provided with null values in regions corresponding to the collapsed segments; these said null values can then be ignored when determining said third mapping.
  • forming an aggregate of the individual images for use as said first or second two-dimensional representation of the surface of the tubular organ includes estimating the length of each collapsed segment and each non-collapsed segment. The positions of said individual images within the aggregate image can then be determined in accordance with these estimated lengths.
  • the lengths may be estimated based on location along a centre-line of the tubular organ.
  • Forming an aggregate of the individual images may also include rotating the individual images about the central axis of the tubular organ to provide a consistent angular orientation between the multiple non-collapsed segments, i.e. to ensure azimuthal alignment between the multiple non-collapsed segments.
  • the tubular organ may be split into a number of disconnected segments in one or both medical images. Each segment can be mapped to a two-dimensional cylindrical representation, and then combined into a single continuous two-dimensional cylindrical representation of the whole tubular organ by estimating the length of the missing segments.
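  • By way of illustration only, a minimal sketch of how per-segment two-dimensional images might be assembled into a single aggregate cylindrical image is given below. The function name, pixel scaling and the convention that the first array axis runs along the colon are assumptions, not details taken from the patent:

      import numpy as np
      from scipy.ndimage import zoom

      def build_aggregate_image(segment_images, segment_lengths_mm,
                                gap_lengths_mm, px_per_mm=2.0, n_angular=256):
          # segment_images[i]  : 2D (length x angle) image of the i-th
          #                      non-collapsed segment
          # segment_lengths_mm : estimated length of each non-collapsed segment
          # gap_lengths_mm     : estimated length of the collapsed segment that
          #                      follows each non-collapsed segment (0 for the last)
          columns = []
          for img, seg_len, gap_len in zip(segment_images, segment_lengths_mm,
                                           gap_lengths_mm):
              nx = max(1, int(round(seg_len * px_per_mm)))
              # rescale so each segment occupies a width proportional to its length
              scaled = zoom(img, (nx / img.shape[0], n_angular / img.shape[1]),
                            order=1)
              columns.append(scaled)
              if gap_len > 0:
                  # collapsed segments are filled with NaN ("null") values,
                  # which are ignored when determining the third mapping
                  columns.append(np.full((int(round(gap_len * px_per_mm)),
                                          n_angular), np.nan))
          return np.concatenate(columns, axis=0)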
  • the approach described herein is used as a method of generating the centre line for the tubular organ for the first and second images (with the two centre lines then being automatically registered with one another as part of the overall image registration).
  • the third mapping is regularized using one or more computational, physical, or bio-mechanical constraints. For example, there may be a limit to the amount of possible rotation by the colon, and this can then be used as a constraint for determining the third mapping.
  • One possibility is to use a biomechanical model to estimate the ease or difficulty of any estimated deformation of the tubular organ between the first and second images, and then to use this as part of the procedure for determining the most likely image registration.
  • the third mapping may be based at least in part on anatomical features. For example, if a specific anatomical feature is clearly visible in both images, then this can be considered to act as a constraint on the registration, since the first and second images must coincide properly with one another in respect of this feature.
  • the anatomical feature might be used to initialize the registration with a starting estimate that coarsely aligns the features and therefore serves as a basis for performing a finer registration to obtain the desired result.
  • the non-rigid registration determines a displacement between each point on the surface of the tubular organ in the first three-dimensional medical image and a corresponding point on the surface of the tubular organ in the second three-dimensional medical image.
  • the calculated displacement between the surface of the tubular organ in the first three-dimensional medical image and the surface of the tubular organ in the second three-dimensional medical image can then be used to determine the displacement of image locations neighbouring said surface of the tubular organ.
  • the displacement of an image location neighbouring the surface of the tubular organ may be determined as being the same as the displacement of the point on the surface of the tubular organ which is closest to said image location. This then allows the registration between the first and second three- dimensional medical images to extend beyond the surface of the tubular organ to the neighbouring areas. (It will be appreciated that the registration will generally lose accuracy with increasing distance from the surface of the tubular organ).
  • the tubular organ comprises the colon.
  • the image registration may be performed with respect to images of other tubular organs, such as the small bowel, oesophagus, etc.
  • the approach described herein can be used in a wide range of medical systems, for example to provide a visualization of corresponding regions from the first three-dimensional medical image containing the tubular organ and from the second three-dimensional medical image containing the tubular organ.
  • the visualization may flag to a clinician that one portion of a first image representing a region of medical interest corresponds to a given portion of the second image. This then allows the clinician to study the correct portion of the second image for investigating further the region of medical interest.
  • the approach described herein can assist with a computer-aided-detection system. For example, certain features that are confirmed (following image registration) to be present in both the first and second images may have a higher likelihood of being genuine features of interest than features that are found to be present in only one image.
  • the above described methods may be performed by running one or more computer programs comprising program instructions for implementing such a method.
  • the instructions may be stored on a non-transitory medium (such as an optical disk, solid state memory, disk drive, etc) and loaded into a memory of a computer for execution by a processor of the computer.
  • Another embodiment of the invention provides apparatus for performing a non-rigid registration of a first three-dimensional medical image containing a tubular organ with a second three-dimensional medical image containing the tubular organ.
  • the apparatus is configured to: segment the first three-dimensional medical image containing the tubular organ and extract a first surface representing the surface of the tubular organ from the first three-dimensional medical image; segment the second three-dimensional medical image containing the tubular organ and extract a second surface representing the surface of the tubular organ from the second three-dimensional medical image; generate a first mapping that maps the first surface to a first two-dimensional representation of the surface of the tubular organ, wherein said first two-dimensional representation of the surface of the tubular organ reflects the value of a property at each position on the surface of the tubular organ as derived from the first three-dimensional medical image; generate a second mapping that maps the second surface to a second two-dimensional representation of the surface of the tubular organ, wherein said second two-dimensional representation of the surface of the tubular organ reflects the value of the property at each position on the surface of the tubular organ as derived from the second three-dimensional medical image; determine a third mapping for transforming between the first two-dimensional representation of the surface of the tubular organ and the second two-dimensional representation of the surface of the tubular organ; and perform the registration of the first three-dimensional medical image with the second three-dimensional medical image on the basis of said first, second and third mappings.
  • a registration method for establishing spatial correspondence for the inner colon surface extracted from prone and supine CT colon images.
  • the registration process includes finding a unique indexing system, which reduces the registration task from a 3D- to 2D-problem by using a one-to-one conformal mapping of the entire inner colon surface to a cylindrical representation, where one dimension corresponds to length along the colon and the other dimension corresponds to the angular orientation. Images that correspond to 3D positions can now be generated, including shape indices computed on 3D surfaces. This allows a non-rigid registration of the prone and supine colon surfaces which can handle the large deformations between both positions. Furthermore, this framework could be easily extended to include a statistic or set of statistics derived from the original CT-images.
  • One embodiment of the invention provides an automated method of establishing correspondence between the colon surfaces visualised in prone and supine CT images.
  • a non-rigid registration on a 2D manifold is used in order to establish a full correspondence between all points on the 3D colonic surface in the different images.
  • Such an approach has the potential to save time, provide a more accurate diagnosis and improve computer aided detection (CAD) algorithms.
  • the confirmation by a second scan of the presence of a lesion seen in a first scan, or the rejection of a candidate lesion from a first scan not supported in the second scan is facilitated by establishing a spatial correspondence of points in the colon surface extracted from one CT scan (image) with surface points extracted from the other CT scan (image).
  • This approach can assist conventional radiological interpretation as well as potentially reducing false positive rates in CAD systems.
  • the approach can also facilitate comparison with optical colonoscopy images.
  • One embodiment of the invention provides a method based on a 2D manifold representing the internal colon lumen surface.
  • the colon is a tube and the internal surface can be mapped to a plane with two indices describing any location. Each location corresponds to a 3D point in a CT scan and can act as an index to a rich set of both surface and volume features.
  • a registration algorithm may be used whereby all transforms take place within this surface, but use information extracted from the local 3D shape of the surface and potentially local voxel statistics to provide significant constraints for the non- rigid registration.
  • Figure 1 is a high-level flowchart showing a method for acquiring and processing CT colonography images in accordance with one embodiment of the invention.
  • Figure 2 is a flowchart showing a method for preprocessing CT colonography images in accordance with one embodiment of the invention.
  • Figure 3 is a flowchart showing a method for registering two CT colonography images in accordance with one embodiment of the invention.
  • Figure 4 is a schematic diagram which depicts surface registration using a 2D manifold in accordance with one embodiment of the invention.
  • Figure 5 is a schematic diagram showing the sampling of an unfolded mesh after conformal mapping in accordance with one embodiment of the invention.
  • Figure 6 shows raster images of a colonic surface for prone, supine and deformed supine in accordance with one embodiment of the invention.
  • Figure 7 is a histogram illustrating registration error found by an experiment in accordance with one embodiment of the invention.
  • Figure 8 shows examples of 3D renderings of the colonic surface illustrating local colon collapse leading to disconnected segments.
  • Figure 9 shows raster images of a colonic surface for prone, supine, and deformed supine for a case where the colon in the prone view forms two disconnected segments.
  • Figure 10 is a histogram illustrating registration error found by a second experiment in accordance with one embodiment of the invention.
  • Figure 1 is a high-level flowchart showing a method for acquiring and processing CT colonography images in accordance with one embodiment of the invention.
  • the method begins with the acquisition of suitable CT images (110), which are then subject to preprocessing (120) to obtain a representation of the colon surface.
  • the method continues with image registration of the preprocessed images (130), and concludes with analysis and exploitation of the registered images (140).
  • image acquisition and image preprocessing from Figure 1 are generally already known to the skilled person.
  • the main focus herein concerns the image registration, which may be used to establish correspondence between the same point on the colon surface in each of prone and supine CT images (and hence assist with the visualization of the images).
  • the CT images may be acquired according to standard clinical practice. It will be appreciated that CT produces a three-dimensional data set that will be referred to herein as an image, with each element of the image being referred to as a voxel.
  • two CT images are acquired, one with the patient in the prone position, one with the patient in the supine position.
  • the provision of two images in different positions helps to distinguish between faeces and polyps in the colon, as faecal matter is expected to move relative to the colonic wall between these positions while polyps will not. Acquiring the images in these two positions also makes it less likely that any given portion of the colon will be collapsed (and therefore features in the colon wall rendered indistinguishable) in both sets of images.
  • images may be acquired with the patient in one or more alternative (or additional) positions.
  • the patient may be imaged in the left or right lateral position since this resembles the position used for optical colonoscopy (OC).
  • the images may be acquired with or without faecal tagging and with or without a CT contrast agent.
  • preprocessing 120 includes removing faecal matter using an electronic cleansing algorithm.
  • FIG. 2 is a flowchart showing a method for preprocessing CT colonography images in accordance with one embodiment of the invention.
  • the preprocessing 120 includes segmentation of the colon and extraction of the colon wall as a triangulated mesh. (N.B. This segmentation and extraction might alternatively be viewed as the first portion of the image registration 130).
  • each CT image is first segmented (210) to distinguish the colon lumen from the surrounding tissue.
  • the segmentation may be performed, for example, by using a thresholding approach, since the colon is filled with air and the contrast between air and the colon is high. It will be appreciated that any other segmentation technique may be used as appropriate.
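  • A minimal sketch of such a thresholding approach is shown below; the threshold and minimum component size are illustrative values only, and the function name is hypothetical (the actual segmentation method of the embodiment, e.g. that of [22], may differ):

      import numpy as np
      from scipy import ndimage

      def segment_colon_lumen(ct_hu, air_threshold=-800.0, min_voxels=10000):
          # candidate air voxels: very low attenuation in Hounsfield units
          air = ct_hu < air_threshold
          # 3D connected-component labelling
          labels, n = ndimage.label(air)
          sizes = ndimage.sum(air, labels, index=range(1, n + 1))
          # keep only large components; the component corresponding to the air
          # surrounding the patient would additionally be discarded in practice
          keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
          return np.isin(labels, keep)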
  • the segmentation may be assisted by the use of a priori anatomical knowledge.
  • the plastic tube, which is used for inflating the colon, is generally removed (interactively and/or automatically) from the segmentation so as not to affect the surface construction. If some regions of the colon have collapsed, or if there is residual colonic fluid due to suboptimal bowel preparation, then the segmentation of the colon may result in a number of disconnected segments rather than a single structure.
  • the (inner) surface of the segmented colon is now represented as a triangular mesh, for example by using the Marching Cubes algorithm (220) [10]. It will be appreciated that any other technique for generating a mesh (and any other suitable form of surface representation) may be used as appropriate.
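  • For example, using the Marching Cubes implementation available in scikit-image (a sketch only; smoothing and decimation of the mesh are omitted, and the voxel spacing is assumed to come from the CT header):

      import numpy as np
      from skimage import measure

      def extract_colon_surface(lumen_mask, spacing=(1.0, 1.0, 1.0)):
          # triangulate the air-to-tissue boundary of the binary segmentation
          verts, faces, normals, _ = measure.marching_cubes(
              lumen_mask.astype(np.float32), level=0.5, spacing=spacing)
          return verts, faces, normals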
  • the surface is now edited (interactively and/or automatically) to produce a genus-zero surface (230), which is topologically equivalent to the surface of a sphere. In other words, the resulting surface does not contain any closed loops or similar complexities ("handles").
  • the topological correction may also be performed on the segmented colon before the surface is extracted. In this case, the segmentation is modified (either manually or automatically) so that it is topologically equivalent to a sphere, and it is then possible to extract a genus-zero surface that does not require any further topological correction.
  • the anus and the cecum are now identified (interactively and/or automatically) (240) and a hole is created in the mesh at each of these points to produce a structure that now has the topology of a tube.
  • the CT images may be further preprocessed (interactively and/or automatically) to label the segments of the colon according to their anatomic location.
  • the genus-1 surface may then be processed, for example by smoothing (to describe a continuous surface) and/or by simplifying, for example using quadric decimation [11], to reduce subsequent computational time for the conformal mapping.
  • if the colon segmentation is formed of a number of disconnected segments then a surface is extracted for each individual segment.
  • the start and end point of each segment is identified (manually or automatically), and the surface for each segment is then processed as above so that each segment has the topology of a tube.
  • Figure 3 is a flowchart showing a method for registering two CT colonography images in accordance with one embodiment of the invention.
  • image registration 130 the prone and supine CT images (and any other CT images which have been acquired) are non-rigidly registered to establish correspondence between the images.
  • the registration process includes initially "unfolding" the colon using a conformal mapping (310) [12, 13], such as the Ricci flow algorithm [14], or another method such as that proposed in WO 2010/142624, to provide a one-to-one mapping of the entire colonic surface to a 2D representation, which is topologically a cylinder with the anus at one end and a point in the cecum at the other.
  • This cylindrical representation provides an x and y coordinate for every point on the colon, specifying the relative distance along and around the colon.
  • the conformal mappings produce unfolded images that preserve the topology of the colon while providing a one-to-one mapping between the 3D surface and the 2D image. Note that unfolding the colon in order to produce 2D images of the inside of the colon, for example to enable better examination of the surface of the colon and aid detection of polyps, is described in [15] and also in WO
  • the one-to-one mapping allows 2D cylindrical images to be generated for any property of the colon surface.
  • an image may be generated using one or more of any measure derived from the 3D triangulated surface, and/or one or more of any measure derived from the 3D CT image data at the surface of the colon.
  • measures include principal curvature, curvedness and shape index from the triangulated surface, and statistics of the original CT-intensities in the region of the surface, for example profiles of image intensity reconstructed perpendicular to the colon surface or direct measures of curvature from the local grey values.
  • where the colon is formed of a number of disconnected segments, each one of these is mapped to an individual 2D image, each representing a segment of the cylinder representing the full colon.
  • the length of each collapsed and uncollapsed segment is estimated, and the order of the different segments is established.
  • the length of each well-distended segment can be estimated based on the length of its centerline. Assuming that a collapsed segment is relatively straight, its length can be estimated as the Euclidean distance between the centerlines of the well-distended segments. Other methods for estimating the lengths of the collapsed and uncollapsed segments may also be used.
  • a 2D image for the full colon can be formed by scaling the length of each segment appropriately and combining them into a single 2D image.
  • the uncollapsed segments may need shifting in the y direction (around the circumference of the colon) as the angular orientation for each segment will be arbitrary. This rotation can be done manually or automatically, e.g. by minimising the 3D distance between points with the same 2D y coordinate on either side of a collapsed segment.
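  • A minimal sketch of the automatic variant is given below: the boundary loop of the colon surface on either side of a collapse is sampled at the same angular positions, and the cyclic shift that minimises the mean 3D distance between corresponding samples is selected (function and variable names are illustrative):

      import numpy as np

      def best_angular_shift(loop_a_xyz, loop_b_xyz):
          # loop_a_xyz, loop_b_xyz: (n, 3) arrays of 3D points sampled at the
          # same n angular positions on either side of a collapsed segment
          n = loop_a_xyz.shape[0]
          costs = [np.mean(np.linalg.norm(loop_a_xyz -
                                          np.roll(loop_b_xyz, s, axis=0), axis=1))
                   for s in range(n)]
          # cyclic shift (in angular samples) minimising the mean 3D distance
          return int(np.argmin(costs))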
  • the collapsed regions of the 2D image are assigned a null value and are ignored during the 2D non-rigid registration described below.
  • the 2D images can be generated for any of the properties from the 3D colon surface or original 3D CT image as described above.
  • Anatomical features such as the hepatic and splenic flexures, haustral folds, or teniae coli identified either on the 3D surface or on the colon segmentations or in the 3D CT scans may be mapped onto the 2D cylindrical images. Such features may also be detected directly on the 2D images. If corresponding features are identified on both the prone and supine 2D images (and any others that are used), these can also be used to constrain or initialise the 2D non-rigid registration described below. For example, this initialization could be performed by using a simple linear stretch/compress along the length of the colon, applying an initial warp (e.g. thin-plate-spline) to one of the images, or by initialising the B-spline transformation so that the features are approximately aligned at the start of the non-rigid registration.
  • the 2D cylindrical surface images of the colon allow a 2D non-rigid registration of the prone and supine colon surfaces to be performed (320), for example using an iterative B-Spline registration method. It will be appreciated that any other algorithm for performing a non-rigid 2D registration may be used as appropriate.
  • the standard 2D registration procedure is modified to treat the colon surfaces as cylinders rather than flat 2D planes, i.e. they repeat as you move around the colon, but not as you move along it. The resulting correspondence established both along the length of the colon and around the circumference of the colon is able to compensate for twist and expansion of the colon.
  • the similarity between the two cylindrical images can be determined during the registration using one or a combination of the 3D surface and CT measures mentioned above. Null values in the 2D images, corresponding to collapsed segments of the colon, are ignored during the registration.
  • the registrations can be regularized using one or more computational, physical, or bio-mechanical constraints, such as penalizing the bending energy of the deformation or local changes in volume or colon surface area. This can help the optimization of the registration, as well as ensure that plausible results are generated.
  • the 2D registration establishes a 1-to-1 correspondence between the two (or more) cylindrical representations of the colon. This correspondence can then be mapped back onto the surfaces in 3D space (via the conformal mappings) so that the 3D coordinate of any point on the colon surface from one image can be mapped to the corresponding 3D point on the surface from the other image (330).
  • the displacement is required at locations close to, but not directly on, the colon surface.
  • the displacement of the colon surface can be propagated to the surrounding voxels, e.g. by using the displacement of the surface point closest to the desired voxel.
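  • For example, a nearest-neighbour propagation of the surface displacements could be sketched as follows, using a k-d tree for the closest-point lookup (the function name is hypothetical):

      import numpy as np
      from scipy.spatial import cKDTree

      def propagate_displacement(surface_points, surface_displacements, query_points):
          # surface_points        : (N, 3) 3D points on the registered colon surface
          # surface_displacements : (N, 3) displacement of each surface point
          # query_points          : (M, 3) off-surface locations of interest
          tree = cKDTree(surface_points)
          _, idx = tree.query(query_points)   # index of nearest surface point
          return surface_displacements[idx]   # displacement copied from that point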
  • Standard CAD software may now be used to process the CT images and to identify and/or further investigate polyp candidates.
  • the results of the image registration can be used to assist with this work. For example if a feature is present in the prone image but absent from the corresponding location (as determined by the image registration) of the supine image, this increases the likelihood that the potential polyp in fact comprises faecal matter (that moved between images).
  • the centre line of the colon which is often used as an aid to visualization of the colon, can be determined from the registered 2D cylindrical images.
  • points with the same coordinate along the colon define closed loops around the circumference of the colon surface.
  • the conformal mappings can be used to determine the 3D coordinates of the points on these loops.
  • a point is defined in the middle of each loop, for example using the mean of the 3D coordinates. Therefore, by finding the middle of successive loops along the length of the colon in this manner, a centre line can be defined for each of the 3D colon surfaces.
  • This centre line may be further processed, for example by smoothing the centre line coordinates to produce a trajectory without loops that maintains the association of each point in the surface with a unique point on the centre line.
  • This method of extracting the centre line has the advantages that: (i) every point on the surface is automatically associated with a point on the centre line, and (ii) the prone and supine (and any other) centre lines are automatically registered to each other.
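  • Assuming that the registered cylindrical images have been resampled so that each pixel stores the corresponding 3D surface coordinate, the centre line computation reduces to averaging each loop of constant position along the colon, as sketched below (smoothing of the resulting line is omitted, and the array layout is an assumption):

      import numpy as np

      def centreline_from_cylindrical_mapping(surface_xyz):
          # surface_xyz: (nx, ny, 3) array giving, for each pixel of the 2D
          # cylindrical image, the corresponding 3D coordinate on the colon surface;
          # each column of constant x is a closed loop around the colon, so its
          # mean gives one centre-line point
          return surface_xyz.mean(axis=1)     # (nx, 3) centre-line coordinates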
  • a workstation can display a range of views of the registered images, including a 'virtual colonoscopy view', orthogonal slices through the CT volume and orthogonal slices through the segmented CT volume.
  • the identified centre line can be overlaid on these views if so desired.
  • a linked pointer when held over a point in one view, can be used to indicate the corresponding points in the alternative view (or views).
  • One possibility is that the user indicates a point near to the colon surface in one CT image.
  • the system displays a cursor (such as a small arrow or cross) at the point nearest to the same surface point as computed in the other view.
  • Another possibility is to start with a 3D surface or volume rendering of the inside of the colon in one CT image (view A) from a position and direction defined by the centre line location and direction.
  • the system can then use the conformal mapping and cylindrical registration to identify the nearest centre line coordinate and its direction in the other CT image (view B). With this information a surface or volume rendered view can be generated in the same direction as in view A.
  • One way to visualize spatial correspondence between the two views is to use the cursor to identify a point in the surface in view A. The best estimate of that location in the other view (view B) can then be indicated by another arrow, cross or cursor. If an estimate of registration accuracy or precision is computed from the registration step, the projection of the 95% or 99% confidence limit of the estimated spatial correspondence can be displayed as a line contour or colour change on the rendered view.
  • overlaid on the views of the CT images are markers indicating potential polyps.
  • a clinician can delete a marker that identifies a point that does not correspond to a polyp, or insert a marker at a point that he/she identifies as being a potential polyp.
  • the discrete Ricci flow evolves a metric variable u_i at each vertex v_i according to du_i/dt = (desired Gaussian curvature at v_i) - (current curvature at v_i), where u_i is computed from a circle packing metric [17]. It can be shown that the Ricci flow represents the gradient flow of an energy function which can be minimized using the gradient descent method.
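  • Purely as an illustration of the idea, the sketch below performs one gradient-descent step of a simplified tangential circle-packing Ricci flow on a closed triangle mesh. It is not the algorithm of [14] or [17]; boundary handling, convergence checks and the final planar embedding are omitted, and the function name and step size are arbitrary:

      import numpy as np

      def ricci_flow_step(u, faces, target_K, step=0.1):
          # u        : (n,) log-radii of the circle packing (one per vertex)
          # faces    : (m, 3) vertex indices of the triangle mesh
          # target_K : (n,) desired Gaussian curvatures (zero for a flat map)
          gamma = np.exp(u)
          K = np.full(len(u), 2.0 * np.pi)      # angle deficit at interior vertices
          for i, j, k in faces:
              # edge lengths from the tangential circle packing metric
              lij = gamma[i] + gamma[j]
              ljk = gamma[j] + gamma[k]
              lki = gamma[k] + gamma[i]
              # corner angles from the cosine law
              ai = np.arccos((lij**2 + lki**2 - ljk**2) / (2 * lij * lki))
              aj = np.arccos((lij**2 + ljk**2 - lki**2) / (2 * lij * ljk))
              ak = np.pi - ai - aj
              K[i] -= ai
              K[j] -= aj
              K[k] -= ak
          # gradient-descent update: du/dt = target curvature - current curvature
          return u + step * (target_K - K)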
  • the target curvature is determined. For the purpose of parameterisation, the target curvature is set to zero for all vertices.
  • the inner colonic surface is obtained by extracting triangulated meshes of the inner colonic surfaces using segmentations of the air inside the prone and supine colons computed by the method described in [22]. It was ensured that the segmentations of the large intestine were topologically correct, using manual or automatic editing of the segmentations where appropriate.
  • the segmentation provides the input for a marching cubes algorithm with subsequent smoothing and decimation. This results in a closed and simply connected mesh along the air-to-tissue border in the CT-image.
  • each original genus-zero surface is converted to a genus-one surface [17]. Therefore, the inner colonic surface (which is topologically equal to a sphere) is converted to a torus-like surface. A hole is punched into the cecum and the rectum at user (or machine) identified positions. The remaining faces are copied to a new mesh with an inverse orientation of its faces, so that the normal vectors point towards the inside of the colon. Subsequently the copied mesh is joined at the boundaries of the previously produced holes with the original surface triangulation. The resulting mesh provides the input for the Ricci flow computation that provides the two-dimensional coordinates of each location within the surface.
  • the Ricci flow algorithm converges to a planar surface with local Gaussian curvature tending to zero everywhere by iteratively updating the edge lengths of the triangles.
  • the optimisation is run until the maximum difference between the current and target curvatures over all vertices is close enough to zero to produce a suitable parameterisation that can be embedded into planar space. This is computed in a similar manner to [17], where each planar triangle is computed based on its final edge lengths.
  • the planar mesh is repeated so that a rectangular raster-image will fully sample all points around the colon.
  • This is illustrated in Figure 5, in which the grey (wiggly) bands each represent a repeat of the 2D coordinates of the surface for a further 360 degrees clockwise and anticlockwise.
  • the straight horizontal lines represent the re-sampled complete colon surface in a form suitable for registration, where the top (0°) and bottom (360°) edges of the image correspond to the same point on the colon surface, thus representing the inner colonic surface as a cylinder.
  • the horizontal axis (x) corresponds to position along the colon from cecum to rectum and the vertical axis (y) corresponds to rotation around the circumference of the colon.
  • Each pixel of the raster image has an interpolated value of the corresponding shape index SI from the three-dimensional surface.
  • SI is defined as a function of the two principal curvatures, which are extracted from the three-dimensional surface.
  • the interpolated value is computed based on the three corner values of each triangle in the planar mesh, which correspond to the 3D vertices of the three-dimensional surface.
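  • Assuming the commonly used Koenderink-style definition of the shape index, SI = (2/π)·arctan((κ1 + κ2)/(κ1 - κ2)) with κ1 ≥ κ2 (the patent may use a different sign convention), a minimal sketch for computing SI from arrays of principal curvatures is:

      import numpy as np

      def shape_index(k1, k2):
          # assumed Koenderink-style definition; arctan2 handles the umbilic
          # case (k1 == k2), which maps to +/-1
          kmax = np.maximum(k1, k2)
          kmin = np.minimum(k1, k2)
          return (2.0 / np.pi) * np.arctan2(kmax + kmin, kmax - kmin)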
  • the 2D manifolds are used to generate shape index images. These shape index images are first aligned in the y-direction to account for differences in the 0° position arbitrarily assigned by the planar embedding. This is performed automatically by applying a circular shift in the y-direction to a first image (I1) that minimises the Sum of Squared Differences (SSD) between it and a second image (I2).
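  • A brute-force sketch of this circular alignment is shown below; here the shift is applied to the second image and NaN ("null") pixels are ignored, and the axis convention is an assumption:

      import numpy as np

      def align_angular_origin(img1, img2):
          # img1, img2: (n_x, n_y) cylindrical shape index images
          best_shift, best_ssd = 0, np.inf
          for s in range(img2.shape[1]):
              diff = img1 - np.roll(img2, s, axis=1)   # cyclic shift around colon
              ssd = np.nansum(diff ** 2)               # SSD ignoring null pixels
              if ssd < best_ssd:
                  best_shift, best_ssd = s, ssd
          return np.roll(img2, best_shift, axis=1), best_shift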
  • the B-spline registration may be performed on the flat Euclidean plane (and not necessarily in a cylindrical framework, although the latter option might also be used). Therefore I1 and I2 are repeated in the y-direction, resulting in images with a resolution of n(x) × 2·n(y), so as to simulate the cylindrical images during the registration.
  • a 2D B-spline registration is then performed with the shifted I1 as target and I2 as source, using the implementation provided by [19].
  • the registration is performed in two stages, the first to recover the larger deformations, and the second to recover the finer deformations.
  • the first stage consists of five resolution levels.
  • the second stage consists of three resolution levels and uses the result from the first stage as the starting transformation for the coarsest level. Both the image and B-spline control point grid resolutions are doubled at each level.
  • the final resolution level uses images with 3000 × 300 pixels and control points spaced every 12.5 pixels in both directions. SSD is used as the similarity measure.
  • the gradient of the cost function is smoothed at each iteration using a Gaussian kernel with a standard deviation of 3 control points for the first stage and one control point for the second. No additional constraint term is used for the first stage but bending energy is used for the second. Gaussian smoothing of the 2D images is applied at each resolution level during the first stage of registration but is not used for the second.
  • the reproducibility of manually identifying correspondence points in both images was assessed by redoing the validation following an interval of several days to reduce recall bias.
  • the repeated validation was performed using the same coordinates from the supine datasets.
  • the radiologist was blind to the results of the previous matching exercise. The results suggest a significant difficulty in finding correct correspondences in the prone and supine CT-images.
  • the non-rigid B-spline registrations described above are performed in a cylindrical framework.
  • the hepatic and splenic flexures were used to provide a good initialisation for the non-rigid registration. These flexures can be detected automatically or manually in the 3-D data sets of the first and second images and mapped onto the 2D cylindrical representations S1 and S2 using the respective conformal mappings.
  • One of the 2D cylindrical representations, for example S1, was then linearly stretched and compressed in the x-direction so that the flexures have the same x-location in both S1 and S2. In this implementation, no alignment in the y-direction was performed, as this matching is fully recovered by the cylindrical B-spline registration. Shape index images, I1 and I2, were then generated from S1 and S2, as described above.
  • the alignment between I1 and I2 was established using a cylindrical non-rigid B-spline registration method. For standard B-spline registrations, the control point grid extends outside the image by at least one control point spacing in each direction so that the deformation is defined over the whole image. For the cylindrical registrations, the control point grid does not extend outside the images in the y-direction (around the cylinder). Instead, when an extended control point is required, the corresponding value is taken from the opposite side of the grid.
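  • The sketch below illustrates the "wrapped" control-point lookup for a cubic B-spline free-form deformation evaluated at a single pixel: indices wrap cyclically in the angular (y) direction and are simply clamped at the ends of the colon (x). It is an illustration of the idea only, not the implementation of [19]; the names, data layout and clamping behaviour are assumptions:

      import numpy as np

      def bspline_basis(t):
          # cubic B-spline basis weights B0..B3 for fractional offset t in [0, 1)
          return np.array([(1 - t) ** 3 / 6.0,
                           (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
                           (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
                           t ** 3 / 6.0])

      def cylindrical_ffd_displacement(x, y, grid, spacing):
          # grid   : (gx, gy, 2) control-point displacements
          # spacing: control-point spacing in pixels
          gx, gy, _ = grid.shape
          ix, tx = int(np.floor(x / spacing)), (x / spacing) % 1.0
          iy, ty = int(np.floor(y / spacing)), (y / spacing) % 1.0
          bx, by = bspline_basis(tx), bspline_basis(ty)
          disp = np.zeros(2)
          for a in range(4):
              xi = min(max(ix + a - 1, 0), gx - 1)   # clamp along the colon
              for b in range(4):
                  yi = (iy + b - 1) % gy             # wrap around the colon
                  disp += bx[a] * by[b] * grid[xi, yi]
          return disp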
  • any displacement in the x-direction (along the colon) at each end of the image was prevented by fixing the x-displacement of the first and last three control points to be zero, which ensures that the ends of the images are aligned with each other, while still allowing for twists around the colon.
  • SSD was again used as the similarity measure, and bending energy and volume preserving penalty terms were used to constrain the registration, as described in [19].
  • a coarse-to-fine approach was used in order to capture first the largest deformations and then the smaller differences between both input images. This was achieved with a seven-level multi-resolution approach using I1 as target and I2 as source. Both the image and B-spline control point grid resolutions were doubled with increasing resolution levels. The final resolution level used images with 4096 × 256 (n(x) × n(y)) pixels. The control point spacing was 16 pixels in both directions at each resolution level. The gradient of the cost function was smoothed after each iteration using a Gaussian kernel with a standard deviation of 3.
  • Gaussian smoothing of the 2D images was applied at each resolution level with a standard deviation of two pixels.
  • the objective function weights for both penalty terms were set to 1 × e⁻⁴ (where e is the base of natural logarithms).
  • the above approach can handle datasets where the colon is represented as a number of disconnected segments (rather than as a single connected object). This is helpful, since despite colonic insufflation, short segments of colonic collapse commonly occur during investigations, especially when the patient changes position from supine to prone. Furthermore, residual colonic fluid due to suboptimal bowel preparation may occlude the colonic lumen, resulting in more than one colonic segment for 3D reconstruction.
  • Figure 8 shows an example of a patient's colon with a collapse in the descending colon (DC) in the supine position.
  • the image on the left of Figure 8 represents the prone position, while the image on the right represents the supine position.
  • the rectangular box on the right image marks a portion of the colon that is collapsed in the supine position, but fully distended in the prone position.
  • the segmentation method described in [22] can be used to determine a set of disconnected colon segments.
  • the beginning point and end point of each segment, as well as the correct order of the segments, may be specified manually by the radiologist.
  • the length of each collapsed and uncollapsed segment was determined as discussed above.
  • the angular alignment between each segment was determined as the shift around the y-axis which minimizes the 3D distance between points with the same angular orientation on either side of the collapse.
  • Figure 9 shows an example of the cylindrical images I for such a case (obtained from the patient data for the investigation discussed below).
  • Figure 9 provides cylindrical representations as raster images of the collapsed supine (top), prone (middle) and deformed supine (bottom) endoluminal colon surface.
  • the location of a polyp is marked before registration (top) and after registration (middle and bottom). It can be seen from Figure 9 that despite the missing data in the collapsed section of the descending colon, both supine colon segments are reasonably well registered with the fully distended prone endoluminal colon surface.
  • the experiments used CT colonography data acquired as part of normal day-to-day clinical practice.
  • the CT colonography had been performed in accordance with current recommendations for good clinical practice and any detected polyps subsequently validated via optical colonoscopy.
  • 24 patients were selected whose colon was not under-distended in either the prone or supine position and who had either fluid 'tagging' (the increased radio-density allows 'digital cleansing' of residual fluid) or little remaining fluid. This allowed a continuous segmentation over the full length of the colon using the methods described in [22].
  • the registration error was measured on the basis of clinically validated polyps and haustral folds.
  • experienced radiologists identified polyps in both prone and supine CT colonography scans using 2D multi-planar reformats and endoscopy data for guidance.
  • the endoluminal extent of the polyps was labeled to provide reference coordinates for validation.
  • Polyp labels were checked and corrected if necessary and then matched by eye between the prone and supine view by an experienced colonography radiologist.
  • the cases were selected to present a widespread distribution of polyps throughout the colonic length so that registration accuracy could be investigated over the entire endoluminal surface.
  • any polyps in the 2D cylindrical images I were masked, such that those pixels lying on or close to the polyp were ignored when computing the similarity measure during registration, so that the polyps used for validation did not bias the registration results.
  • a pair of reference points were identified for each manually matched polyp in the prone and supine views.
  • the reference points were defined as the points at the centre of the intersecting surface between the extracted endoluminal colon surfaces S and the segmented polyps. Therefore, these points lie on the surfaces S1 and S2 respectively.
  • the center point c(x,y) was computed as the center of mass of the intersecting pixels in the 2D images /.
  • Each 2D reference point c(x,y) corresponds to a 3D point on the surfaces S which lies inside the volume of the polyp.
  • the registration error in mm was then determined by transforming each reference point from surface S1 using the mapping, and calculating the 3D Euclidean distance to the corresponding reference point on surface S2.
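  • A sketch of this error computation for a single polyp is given below. It assumes that the 2D deformation field and a lookup table from pixels of the second cylindrical image to 3D surface coordinates are already available; names are illustrative, and nearest-pixel lookup replaces proper interpolation for brevity:

      import numpy as np

      def polyp_registration_error(c1_2d, c2_xyz, deformation, surface2_xyz):
          # c1_2d        : (x, y) reference point in the first cylindrical image
          # c2_xyz       : 3D reference point on the second surface
          # deformation  : (n_x, n_y, 2) mapping from pixels of image 1 to image 2
          # surface2_xyz : (n_x, n_y, 3) 3D coordinate of each pixel of image 2
          x, y = int(round(c1_2d[0])), int(round(c1_2d[1]))
          xp, yp = np.round(deformation[x, y]).astype(int)
          mapped_xyz = surface2_xyz[xp, yp % surface2_xyz.shape[1]]
          # 3D Euclidean distance between mapped point and reference point
          return float(np.linalg.norm(mapped_xyz - c2_xyz))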
  • Table 2 shows the results of assessing the registrations using the polyps of the 13 validation sets.
  • the error after the cylindrical parameterization but before the B-spline registration is denoted as Polyp Parameterization Error (PPE), and the error after the B-spline registration is denoted as Polyp Registration Error (PRE).
  • the PPE results show that cylindrical parameterization on its own is not enough to align the datasets - the cylindrical non-rigid B-spline registration is required for a more accurate alignment.
  • the PRE had a mean ( ⁇ std. dev.) of 5.7 ( ⁇ 3.4) mm for 13 validation patients with a single polyp each, and all 13 polyps were well aligned. This result is sufficiently accurate to direct the radiologist to an area of the endoluminal surface, which is close to the suspected lesion in both views, even in the case of local colonic collapse (patients 17 to 21).
  • the hepatic flexure was not used to initialize the registration for patient 12 and patients 18-20, as the distances along the centerline between prone and supine varied more than fvar (here, 5%).
  • the resulting error for 9 polyps in the 8 development cases was 6.6 (± 4.2) mm after non-rigid registration (PRE) and therefore slightly higher than the PRE of the validation set.
  • the polyps for development of the registration method occurred in the ascending colon (AC), transverse colon (TC), descending colon (DC) and sigmoid colon (SC).
  • a radiologist (with experience in over 500 validated colonography studies) then manually identified corresponding folds from the prone and supine views. Any folds where the radiologist could not be certain of correspondence were not used for validation, but this still provided an average of 90 pairs of corresponding folds per patient, with a total of 1175 pairs over all 13 validation cases (patients 9 to 21). The center points of the corresponding folds were then used as corresponding reference points for assessing the registration.
  • the Fold Registration Error (FRE) was assessed in the same way as the PRE, but using the haustral fold centers as reference points. Using this large set of reference points, the FRE was 7.7 (± 7.4) mm for a total of 1175 points distributed over all 13 validation patients. In comparison, using the cylindrical parameterization on its own (before B-spline registration) results in a larger Fold Parameterization Error (FPE).
  • A histogram of the registration error (FRE) is shown in Figure 10.
  • the normalized distributions of FRE for un-collapsed and collapsed cases are colored differently and displayed next to each other for comparison. It can be seen that the majority of points (95%) lie below an error of 22.8 mm, with a maximum error of 44.1 mm. However, the FRE is slightly higher for the 5 collapsed cases, with 9.7 (± 8.7) mm, as opposed to the 8 un-collapsed cases, with an FRE of 6.6 (± 6.3) mm.
  • the haustral folds are almost always aligned with another haustral fold in the other image, but this is not always the correct corresponding fold.
• using the segmented haustral folds, an analysis was performed to see how many of the folds were aligned with the correct corresponding fold, and how many were misaligned by one or more folds. According to this analysis, 82% of all 1175 reference points were assigned to the correct corresponding fold, 15% of reference points were misaligned by just one fold, and 3% were misaligned by between two and three folds.
  • the embodiments described above have focussed on performing a registration between two images. In some cases, there may be a need to determine a registration between three or more images.
  • One option is to take one of these three (or more) images as a reference image, and then determine the registration of every other image with respect to the reference image using the above technique for performing a registration of two images. This would then establish a common spatial correspondence between all the images.
  • the approach described herein supports the use of a 2D representation of the cylindrical geometry of the colon for prone-supine CT registration.
• Measures derived from the 3D surface geometry of the colon and/or measures of the intensity distribution in the original CT volume can be used to guide registration. Examples include the direct calculation of surface geometry from grey value distributions, first and higher order derivatives, derivatives convolved with Gaussian functions covering a range of spatial resolutions, profiles of intensity normal to the colon surface, local texture measures and any other local statistical measure.
• the registration can also be assisted by features extracted from the surface, such as representations of haustral folds, teniae coli, diverticula or polyps.
  • Computational, physical, and/or bio-mechanical constraints may be used to regularize the search for a match between the 2D representations of the prone and supine CT scans. For example, a limit may be applied to the twist per mm and/or the stretch per mm.
  • a conformal mapping may be used to generate a cylindrical representation and coordinate system for the colon.
  • the displacement of the colon surface can then be used to determine the displacement of surrounding locations.
  • the cylindrical registration can also be used to generate centre lines for the prone and supine colon surfaces that automatically have correspondence with every point on the surfaces and are automatically registered to each other.
  • the centre lines, cylindrical registrations, and conformal mappings can then be used for tasks such as visualizing correspondence between the prone and supine colon surfaces and refining the CAD polyp identification.
  • the image registration procedure described herein may be implemented using appropriate software running on suitable apparatus (one or more general purpose computer workstations, specialised medical hardware, etc).
  • the software comprises computer instructions that when implemented by one or more processors in the suitable apparatus cause the apparatus to perform the described image registration procedure.
  • the image registration procedure described herein may also be implemented in whole or in part using special purpose hardware, for example, one or more graphical processing units.

Abstract

One embodiment of the invention provides a method for registering a first three-dimensional medical image of a colon with a second three-dimensional medical image of the colon. The method includes segmenting the first and second three-dimensional images of the colon and extracting a first and second surface representing the surface of the colon from the first and second three-dimensional images respectively. The method further includes generating first and second mappings that map the first and second surfaces to first and second two-dimensional representations of the colon surface respectively. The method further includes determining a third mapping for transforming between the first two-dimensional representation of the colon surface and the second two-dimensional representation of the colon surface. The registration of the first three-dimensional image of the colon with the second three-dimensional image of the colon is performed on the basis of said first, second and third mappings. The method can also be used with other tubular organs apart from the colon.

Description

APPARATUS AND METHOD FOR REGISTERING MEDICAL IMAGES CONTAINING A TUBULAR ORGAN
Field of the Invention
The present invention relates to an apparatus and method for registering medical images containing a tubular organ such as the colon, thereby establishing a correspondence between the first and second images.
Background of the Invention
Colorectal cancer is one of the main cancer types leading to more than 630,000 deaths each year worldwide [1]. Traditional colonoscopy using a video endoscope can have miss rates of up to 27 % for adenomas smaller or equal to 5 mm [2]. Furthermore, such traditional colonoscopy can cause significant discomfort and is not without risk of perforation of the gut. These drawbacks have led to the development of alternative screening methods including those based on Computed Tomography (CT).
CT colonography is a new technology that combines CT scanning with 3D image analysis and visualisation to produce images of the patient's colon that mimic those obtained during colonoscopy [3], hence the alternative term, "virtual colonoscopy" [4]. CT colonography can be used to detect colon cancer and its polyp precursors, and has attracted considerable medical and lay attention because the procedure is safer and more acceptable to patients than alternatives. CT colonography is now becoming established in the USA and Europe (and also in Japan) as a standard screening tool for colorectal cancer - for example, it is practised in about 35% of NHS hospitals, mainly to diagnose symptomatic cancer. Computer-aided-detection (CAD) is expected to elevate acceptance of CT colonography even further.
In order to perform CT colonography, the bowel is cleansed before the procedure by administering a powerful laxative. The bowel is then inflated with carbon dioxide gas prior to each image via a plastic tube inserted into the anus. Remaining faecal material and fluids can be tagged with contrast agent such as a barium salt and removed digitally. However faecal remnants or folds of the colonic wall can still mimic the appearance of polyps leading to false positives. Additionally, some regions of the colon may not be sufficiently inflated causing these regions to collapse so that none of the surface features are visible in these areas.
CT images are usually taken both prone and supine so that the colon falls into a different position. This can help to reduce the incidence of false positives, i.e. the erroneous detection of lesions (or other features of interest) that subsequently turn out to be false. For example, turning and re- insufflating the colon changes the colon shape and may dislodge faecal matter attached to the colon wall. Thus viewing a given feature in both CT images (prone and supine) can assist a clinician with diagnosis. Additionally, regions that have collapsed in one of the views will hopefully be re-inflated in the other view, enabling those regions to be examined.
Using two CT images (such as prone and supine) in conjunction with one another generally involves establishing some correspondence between the two CT images, so that a feature seen in a first image can be identified as the same feature in a second image. In many cases, the radiologist establishes this spatial correspondence between the two views by eye. However, this is a difficult task for even the most experienced radiologist and hence can introduce delays and errors in the diagnostic process.
There have been various attempts to assist the clinician by performing a computer-based registration of prone and supine CT images of the colon. US 2004/0264753 describes a system that provides a computer-based registration of prone and supine CT images of the colon, but does not give any details of how the image registration is actually performed. One known approach is to align extracted centre lines of both views [6] and to use this set of coordinates as an index of location, see also US 2004/0136584. However, this method provides no information on rotation around the centre line and centre line registration is likely to lead to errors of several centimetres along the colon. Another approach has been to define several anatomical landmarks [7], like the anus, cecum and flexures, in order to align both 3D colon images. However, flexures are difficult to locate accurately and identification of only a small number of points is insufficient to describe the complex folding and deformation of the colon between prone and supine views.
Fukano et al. [20] aim to establish correspondence between the colon surfaces by matching haustral folds extracted from prone and supine data (a similar approach is described in US 2006/0215896). Although Fukano et al. demonstrate that haustral folds can be detected robustly, it is very challenging to establish the correct correspondence between views, as their results indicate. For example, they report 65.1% of corresponding large haustral folds and 13.3% of small haustral folds as being matched correctly.
In addition, a voxel-based method has been proposed by Suh et al. [8]. This method also uses the centre lines to generate an initial deformation field and then treats the registration task as an optical flow process. However, given the changes in shape and location of anatomy between prone and supine images, such that there are significant differences in the position and insufflation of the colon between views, it has been found that conventional image intensity based non-rigid registration algorithms are generally not sufficiently robust or accurate for such a task. Suh et al. [24] use their voxel-based method to try to handle cases where the colon has partially collapsed in one of the views by allowing the colon to grow or shrink during the registration. However, this makes it even more challenging to appropriately constrain the registration to provide robust and accurate results, as evidenced by the limited accuracy reported for their method (average error after registration of 30.1 mm for 4 cases each evaluated using a single polyp).
It is also known to use the teniae coli as additional features for producing a deformation field with a rotational component, see Lamy et al. [9] and Huang et al. (US 2007/0270682) [21]. However, teniae coli are difficult to extract automatically over the full length of the colon, and provide little information about the deformation that can occur in the along-colon direction. Hence in US 2007/0270682, manual interaction is required to align polyps in the along-colon direction.
Recently (after the priority date of the present application), Zeng et al. [21] presented a method based on conformal mapping combined with feature matching in order to establish correspondences between the prone and supine surfaces. They detect four flexures and one teniae coli in order to divide the colon surface into five segments and map each segment to a rectangle. Correspondence between prone and supine surfaces is then established for each rectangular segment individually. Therefore the method relies on being able to accurately determine exactly the same segments on the prone and supine surfaces, which can be very difficult even for fully distended colons, and may not be possible for cases with local colonic collapse. Furthermore, they established correspondence between the mapped segments using only a sparse point set of features extracted from some 'prominent' haustral folds, which are unlikely to provide an accurate alignment of the detailed colonic surface.
There are various other factors that may also cause differences between medical images (apart from bodily position), such as different imaging techniques (CT, MRI, etc), different image orientations, different image exposure levels, etc. In addition, there may be deformation or modifications of the colon (or other organ to be imaged) caused by any difference in time between the first and second images, which may be a short or long period - perhaps just minutes, or maybe months or years, depending on the clinical situation. These differences in the organ with time might arise from growth (development), injury, illness, general bodily metabolism (breathing, digestion, circulation, etc), and so on. Image registration may have to contend with a combination of the above factors.
Current computer-based image registration techniques, such as used in virtual colonoscopy, generally suffer from limited accuracy and/or the need for extensive (and time-consuming) human involvement.
Summary of the Invention
One embodiment of the invention provides a method for performing a non-rigid registration of a first three-dimensional medical image containing a tubular organ with a second three-dimensional medical image containing the tubular organ. The method comprises: segmenting the first three- dimensional medical image containing the tubular organ and extracting a first surface representing the surface of the tubular organ from the first three-dimensional medical image; segmenting the second three-dimensional medical image containing the tubular organ and extracting a second surface representing the surface of the tubular organ from the second three-dimensional medical image;
generating a first mapping that maps the first surface to a first two-dimensional representation of the surface of the tubular organ, wherein said first two-dimensional representation of the surface of the tubular organ reflects the value of a property at each position on the surface of the tubular organ as derived from the first three-dimensional medical image; generating a second mapping that maps the second surface to a second two-dimensional representation of the surface of the tubular organ, wherein said second two-dimensional representation of the surface of the tubular organ reflects the value of the property at each position on the surface of the tubular organ as derived from the second three- dimensional medical image; determining a third mapping for transforming between the first two- dimensional representation of the surface of the tubular organ and the second two-dimensional representation of the surface of the tubular organ; and registering the first three-dimensional medical image containing the tubular organ with the second three-dimensional medical image containing the tubular organ on the basis of said first, second and third mappings. Such an approach helps to provide an accurate and efficient image registration with reduced (or no) manual input from a clinician.
In one embodiment, the first three-dimensional medical image containing the tubular organ and the second three-dimensional medical image containing the tubular organ comprise Computed Tomography (CT) images. However, the medical images might be obtained using other imaging techniques, such as magnetic resonance imaging (MRI), ultrasound, or images derived from an optical colonoscopy. In some cases the first medical image may be derived from one imaging technique, and the second medical image may be derived from a different imaging technique.
In one embodiment, the first three-dimensional medical image containing a tubular organ and the second three-dimensional medical image containing the tubular organ are taken in first and second positions respectively. For example, the first position may comprise the prone position and the second position may comprise the supine position. Other examples of different positions might be left and right side (lateral) images. It will be appreciated that in many cases changing the body position between the first and second images leads to (non-rigid) deformation of the tubular organ, which adds complexity to the image registration to be performed.
In one embodiment, extracting the first and second surfaces includes reducing the topological complexity of each surface. This helps to facilitate mapping or projecting the extracted surface of the tubular organ onto a two-dimensional (flat) representation. Each of said first and second surfaces may comprise a cylindrical surface to represent the surface of the tubular organ. This can be achieved, for example in the case of images of the colon, by forming a first hole in each surface to represent the anus and a second hole in each surface to represent the cecum. However, other surface shapes may be used depending on the particular anatomy in question. For example, a branching topology may be appropriate in some cases. In one embodiment, each of the first and second mappings comprises a conformal mapping - i.e. a mapping that preserves angle. Such a mapping may be derived using the Ricci flow algorithm or any other appropriate technique. One advantage of using a conformal mapping is that the two-dimensional representation can be considered as depicting an unfolded view of the tubular organ. However, in other implementations, other forms of mapping may be used, especially if only the ultimate image registration is of interest (rather than the intermediate two-dimensional representation of the tubular organ).
In one embodiment, the first and second two-dimensional representations of the surface of the tubular organ each has a first dimension corresponding to distance along the tubular organ and a second dimension corresponding to angular position around the tubular organ. Again, this helps to provide a two-dimensional representation that can be readily interpreted as representing an unfolded view of the tubular organ (if so desired). Note that the centre line along a tubular organ is an important parameter for many existing systems that provide visualization, CAD, etc, and hence such a representation supports easier comparison with these existing systems.
The correlation or registration between images is generally performed based on some physical property or measure that varies across the surface of the tubular organ. A portion of the first two- dimensional representation having a first value for the property will usually correspond to a portion of the second two-dimensional representation that has a similar value for this property (allowing for deformation, etc), since these two corresponding image locations are considered to represent the same physical position on the surface of the tubular organ, and should therefore have the same physical properties. In one embodiment, the property used for the image registration is a measure based on local shape or curvature, while in another embodiment, the property used for the image registration is based on image intensity (from the original three-dimensional medical images). The property or measure used for the registration may be calculated from the two-dimensional representation of the surface of the tubular organ, or from the original three-dimensional medical images (in the vicinity of the surface of the tubular organ). The first/second mapping can then be used to locate the property derived from the original first/second three-dimensional medical images onto the first/second two-dimensional representations for subsequent use in the image registration.
In one embodiment, the third mapping is performed within the framework of a cylindrical topology. This reflects the tubular nature of the tubular organ, and so can help to provide a better and physically more appropriate mapping. In some embodiments, each of the first and second two-dimensional representations is repeated for use in determining the third mapping to reflect a cylindrical topology. This mimics the periodic nature of traversing around the circumference of the tubular organ, thereby reflecting the cylindrical topology. In other embodiments, the algorithm used for creating the third mapping may have an intrinsic understanding of the cylindrical topology of the two-dimensional representations, thereby avoiding the use of such image repetitions. For example, the third mapping may automatically wrap each of the first and second two-dimensional representations in a cyclical fashion perpendicular to the central axis of said tubular organ.
In one embodiment, the third mapping comprises a non-rigid 2-D B-spline registration. The registration may be performed in multiple stages, recovering larger deformation at first, and then smaller deformations later on.
One embodiment of the invention further comprises accommodating one or more collapsed segments in said tubular organ in the first and/or second medical images. The one or more collapsed segments divide the tubular organ into multiple non-collapsed segments. In one embodiment, this accommodation comprises mapping each non-collapsed segment to an individual image representing a two-dimensional representation of the surface of the tubular organ for that non-collapsed segment, and then forming an aggregate image of the individual images for use as said first or second two- dimensional representation of the surface of the tubular organ (as appropriate). The aggregate image of the individual images may be provided with null values in regions corresponding to the collapsed segments; these said null values can then be ignored when determining said third mapping.
In one embodiment, forming an aggregate of the individual images for use as said first or second two-dimensional representation of the surface of the tubular organ includes estimating the length of each collapsed segment and each non-collapsed segment. The positions of said individual images (representing the non-collapsed segments) within said aggregate image (representing the whole organ) can then be determined based on the estimated lengths. For example, the lengths may be estimated based on location along a centre-line of the tubular organ. Forming an aggregate of the individual images may also include rotating the individual images about the central axis of the tubular organ to provide a consistent angular orientation between the multiple non-collapsed segments, i.e. to ensure azimuthal alignment between the multiple non-collapsed segments.
In one embodiment, the tubular organ may be split into a number of disconnected segments in one or both medical images. Each segment can be mapped to a two-dimensional cylindrical representation, and then combined into a single continuous two-dimensional cylindrical representation of the whole tubular organ by estimating the length of the missing segments.
In one embodiment, the approach described herein is used as a method of generating the centre line for the tubular organ for the first and second images (with the two centre lines then being automatically registered with one another as part of the overall image registration).
In one embodiment, the third mapping is regularized using one or more computational, physical, or bio-mechanical constraints. For example, there may be a limit to the amount of possible rotation by the colon, and this can then be used as a constraint for determining the third mapping. One possibility is to use a biomechanical model to estimate the ease or difficulty of any estimated deformation of the tubular organ between the first and second images, and then to use this as part of the procedure for determining the most likely image registration. In one embodiment, the third mapping may be based at least in part on anatomical features. For example, if a specific anatomical feature is clearly visible in both images, then this can be considered to act as a constraint on the registration, since the first and second images must coincide properly with one another in respect of this feature. Alternatively the anatomical feature might be used to initialize the registration with a starting estimate that coarsely aligns the features and therefore serves as a basis for performing a finer registration to obtain the desired result.
In one embodiment, the non-rigid registration determines a displacement between each point on the surface of the tubular organ in the first three-dimensional medical image and a corresponding point on the surface of the tubular organ in the second three-dimensional medical image. The calculated displacement between the surface of the tubular organ in the first three-dimensional medical image and the surface of the tubular organ in the second three-dimensional medical image can then be used to determine the displacement of image locations neighbouring said surface of the tubular organ. For example, the displacement of an image location neighbouring the surface of the tubular organ may be determined as being the same as the displacement of the point on the surface of the tubular organ which is closest to said image location. This then allows the registration between the first and second three- dimensional medical images to extend beyond the surface of the tubular organ to the neighbouring areas. (It will be appreciated that the registration will generally lose accuracy with increasing distance from the surface of the tubular organ).
In one embodiment, the tubular organ comprises the colon. In other embodiments, the image registration may be performed with respect to images of other tubular organs, such as the small bowel, oesophagus, etc.
The approach described herein can be used in a wide range of medical systems, for example to provide a visualization of corresponding regions from the first three-dimensional medical image containing the tubular organ and from the second three-dimensional medical image containing the tubular organ. In one embodiment, the visualization may flag to a clinician that one portion of a first image representing a region of medical interest corresponds to a given portion of the second image. This then allows the clinician to study the correct portion of the second image for investigating further the region of medical interest. In addition, the approach described herein can assist with a computer- aided- detection system. For example, certain features that are confirmed (following image registration) to be present in both the first and second images may have a higher likelihood of being genuine features of interest than features that are found to be present in only one image.
The above described methods may be performed by running one or more computer programs comprising program instructions for implementing such a method. The instructions may be stored on a non-transitory medium (such as an optical disk, solid state memory, disk drive, etc) and loaded into a memory of a computer for execution by a processor of the computer. Another embodiment of the invention provides apparatus for performing a non-rigid registration of a first three-dimensional medical image containing a tubular organ with a second three-dimensional medical image containing the tubular organ. The apparatus is configured to: segment the first three- dimensional medical image containing the tubular organ and extract a first surface representing the surface of the tubular organ from the first three-dimensional medical image; segment the second three- dimensional medical image containing the tubular organ and extract a second surface representing the surface of the tubular organ from the second three-dimensional medical image; generate a first mapping that maps the first surface to a first two-dimensional representation of the surface of the tubular organ, wherein said first two-dimensional representation of the surface of the tubular organ reflects the value of a property at each position on the surface of the tubular organ as derived from the first three-dimensional medical image; generate a second mapping that maps the second surface to a second two-dimensional representation of the surface of the tubular organ, wherein said second two-dimensional representation of the surface of the tubular organ reflects the value of the property at each position on the surface of the tubular organ as derived from the second three-dimensional medical image; determine a third mapping for transforming between the first two-dimensional representation of the surface of the tubular organ and the second two-dimensional representation of the surface of the tubular organ; and register the first three-dimensional medical image containing the tubular organ with the second three-dimensional medical image containing the tubular organ on the basis of said first, second and third mappings. Such an apparatus will generally benefit from the same features as described above with reference to the method embodiment.
In accordance with one embodiment of the invention, a registration method is provided for establishing spatial correspondence for the inner colon surface extracted from prone and supine CT colon images. The registration process includes finding a unique indexing system, which reduces the registration task from a 3D- to 2D-problem by using a one-to-one conformal mapping of the entire inner colon surface to a cylindrical representation, where one dimension corresponds to length along the colon and the other dimension corresponds to the angular orientation. Images that correspond to 3D positions can now be generated, including shape indices computed on 3D surfaces. This allows a non-rigid registration of the prone and supine colon surfaces which can handle the large deformations between both positions. Furthermore, this framework could be easily extended to include a statistic or set of statistics derived from the original CT-images.
One embodiment of the invention provides an automated method of establishing correspondence between the colon surfaces visualised in prone and supine CT images. In particular, a non-rigid registration on a 2D manifold is used in order to establish a full correspondence between all points on the 3D colonic surface in the different images. Such an approach has the potential to save time, provide a more accurate diagnosis and improve computer aided detection (CAD) algorithms. For example, the confirmation by a second scan of the presence of a lesion seen in a first scan, or the rejection of a candidate lesion from a first scan not supported in the second scan, is facilitated by establishing a spatial correspondence of points in the colon surface extracted from one CT scan (image) with surface points extracted from the other CT scan (image). This approach can assist conventional radiological interpretation as well as potentially reducing false positive rates in CAD systems. The approach can also facilitate comparison with optical colonoscopy images.
One embodiment of the invention provides a method based on a 2D manifold representing the internal colon lumen surface. The colon is a tube and the internal surface can be mapped to a plane with two indices describing any location. Each location corresponds to a 3D point in a CT scan and can act as an index to a rich set of both surface and volume features. A registration algorithm may be used whereby all transforms take place within this surface, but use information extracted from the local 3D shape of the surface and potentially local voxel statistics to provide significant constraints for the non- rigid registration.
Brief Description of the Drawings
Various embodiments of the invention will now be described in detail by way of example only with reference to the following drawings:
Figure 1 is a high-level flowchart showing a method for acquiring and processing CT colonography images in accordance with one embodiment of the invention.
Figure 2 is a flowchart showing a method for preprocessing CT colonography images in accordance with one embodiment of the invention.
Figure 3 is a flowchart showing a method for registering two CT colonography images in accordance with one embodiment of the invention.
Figure 4 is a schematic diagram which depicts surface registration using a 2D manifold in accordance with one embodiment of the invention.
Figure 5 is a schematic diagram showing the sampling of an unfolded mesh after conformal mapping in accordance with one embodiment of the invention.
Figure 6 shows raster images of a colonic surface for prone, supine and deformed supine in accordance with one embodiment of the invention.
Figure 7 is a histogram illustrating registration error found by an experiment in accordance with one embodiment of the invention.
Figure 8 shows examples of 3D renderings of the colonic surface illustrating local colon collapse leading to disconnected segments.
Figure 9 shows raster images of a colonic surface for prone, supine, and deformed supine for a case where the colon in the prone view forms two disconnected segments.
Figure 10 is a histogram illustrating registration error found by a second experiment in accordance with one embodiment of the invention.
Detailed Description
Figure 1 is a high-level flowchart showing a method for acquiring and processing CT colonography images in accordance with one embodiment of the invention. The method begins with the acquisition of suitable CT images (110), which are then subject to preprocessing (120) to obtain a representation of the colon surface. The method continues with image registration of the preprocessed images (130), and concludes with analysis and exploitation of the registered images (140). It will be appreciated that the image acquisition and image preprocessing from Figure 1 are generally already known to the skilled person. The main focus herein concerns the image registration, which may be used to establish correspondence between the same point on the colon surface in each of prone and supine CT images (and hence assist with the visualization of the images).
Considering operation 110 in more detail, the CT images may be acquired according to standard clinical practice. It will be appreciated that CT produces a three-dimensional data set that will be referred to herein as an image, with each element of the image being referred to as a voxel. Usually two CT images are acquired, one with the patient in the prone position, one with the patient in the supine position. The provision of two images in different positions helps to distinguish between faeces and polyps in the colon, as faecal matter is expected to move relative to the colonic wall between these positions while polyps will not. Acquiring the images in these two positions also makes it less likely that any given portion of the colon will be collapsed (and therefore features in the colon wall rendered indistinguishable) in both sets of images.
In some procedures, images may be acquired with the patient in one or more alternative (or additional) positions. For example, the patient may be imaged in the left or right lateral position, since this resembles the position used for optical colonoscopy (OC). The images may be acquired with or without faecal tagging and with or without a CT contrast agent.
Considering operation 120 in more detail, any desired preprocessing of the CT images is now performed. The preprocessing is usually performed automatically by computer for reasons of efficiency, although some manual processing may be utilised if so desired. In one embodiment, where faecal tagging has been used during imaging, preprocessing 120 includes removing faecal matter using an electronic cleansing algorithm.
Figure 2 is a flowchart showing a method for preprocessing CT colonography images in accordance with one embodiment of the invention. The preprocessing 120 includes segmentation of the colon and extraction of the colon wall as a triangulated mesh. (N.B. This segmentation and extraction might alternatively be viewed as the first portion of the image registration 130). In particular, each CT image is first segmented (210) to distinguish the colon lumen from the surrounding tissue. The segmentation may be performed, for example, by using a thresholding approach, since the colon is filled with air and the contrast between air and the colon is high. It will be appreciated that any other segmentation technique may be used as appropriate. In some embodiments, the segmentation may be assisted by the use of a priori anatomical knowledge. The plastic tube, which is used for inflating the colon, is generally removed (interactively and/or automatically) from the segmentation so as to not affect the surface construction. If some regions of the colon have collapsed, or if there is residual colonic fluid due to suboptimal bowel preparation, then the segmentation of the colon may result in a number of disconnected segments rather than a single structure.
The (inner) surface of the segmented colon is now represented as a triangular mesh, for example by using the Marching Cubes algorithm (220) [10]. It will be appreciated that any other technique for generating a mesh (and any other suitable form of surface representation) may be used as appropriate. The surface is now edited (interactively and/or automatically) to produce a genus-zero surface (230), which is topologically equivalent to the surface of a sphere. In other words, the resulting surface does not contain any closed loops or similar complexities ("handles"). The topological correction may also be performed on the segmented colon before the surface is extracted. In this case, the segmentation is modified (either manually or automatically) so that it is topologically equivalent to a sphere, and it is then possible to extract a genus-zero surface that does not require any further topological correction.
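Purely by way of illustration, the following minimal Python sketch shows how such a surface mesh might be extracted from a binary lumen segmentation using the Marching Cubes implementation in scikit-image; the array names, voxel spacing and the toy test volume are assumptions made for the example and are not taken from the described embodiment.

```python
import numpy as np
from skimage import measure

def extract_colon_surface(segmentation: np.ndarray, voxel_spacing=(1.0, 1.0, 1.0)):
    """Extract a triangulated mesh of the colon wall from a binary segmentation.

    `segmentation` is assumed to be a 3D array in which 1 marks the colon lumen (air).
    Returns vertex coordinates (in mm) and triangle indices.
    """
    # Marching Cubes at the 0.5 iso-level of the binary mask gives the
    # air-to-tissue boundary; `spacing` converts voxel indices to mm.
    verts, faces, _normals, _values = measure.marching_cubes(
        segmentation.astype(np.float32), level=0.5, spacing=voxel_spacing)
    return verts, faces

# Hypothetical usage with a toy volume containing a filled sphere:
vol = np.zeros((64, 64, 64), dtype=np.uint8)
zz, yy, xx = np.mgrid[:64, :64, :64]
vol[(zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = 1
verts, faces = extract_colon_surface(vol, voxel_spacing=(0.7, 0.7, 0.7))
print(verts.shape, faces.shape)
```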
The anus and the cecum are now identified (interactively and/or automatically) (240) and a hole is created in the mesh at each of these points to produce a structure that now has the topology of a tube. The CT images may be further preprocessed (interactively and/or automatically) to label the segments of the colon according to their anatomic location. The genus- 1 surface may then be processed, for example by smoothing (to describe a continuous surface) and/or by simplifying, for example using quadric decimation [11], to reduce subsequent computational time for the conformal mapping.
If the colon segmentation is formed of a number of disconnected segments then a surface is extracted for each individual segment. The start and end point of each segment is identified (manually or automatically), and the surface for each segment is then processed as above so that each segment has the topology of a tube.
Figure 3 is a flowchart showing a method for registering two CT colonography images in accordance with one embodiment of the invention. In image registration 130, the prone and supine CT images (and any other CT images which have been acquired) are non-rigidly registered to establish correspondence between the images. The registration process includes initially "unfolding" the colon using a conformal mapping (310) [12, 13], such as the Ricci flow algorithm [14], or another method such as that proposed in WO 2010/142624, to provide a one-to-one mapping of the entire colonic surface to a 2D representation, which is topologically a cylinder with the anus at one end and a point in the cecum at the other. This cylindrical representation provides an x and y coordinate for every point on the colon, specifying the relative distance along and around the colon.
The conformal mappings produce unfolded images that preserve the topology of the colon while providing a one-to-one mapping between the 3D surface and the 2D image. Note that unfolding the colon in order to produce 2D images of the inside of the colon, for example to enable better examination of the surface of the colon and aid detection of polyps, is described in [15] and also in WO 2010/142624.
The one-to-one mapping allows 2D cylindrical images to be generated for any property of the colon surface. In particular, such an image may be generated using one or more of any measure derived from the 3D triangulated surface, and/or one or more of any measure derived from the 3D CT image data at the surface of the colon. Such measures include principal curvature, curvedness and shape index from the triangulated surface, and statistics of the original CT-intensities in the region of the surface, for example profiles of image intensity reconstructed perpendicular to the colon surface or direct measures of curvature from the local grey values.
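As a hedged illustration of this step, the sketch below rasterises a per-vertex scalar (such as a shape index) onto a regular 2D cylindrical image using the 2D coordinates produced by the conformal mapping; the coordinate ranges, array names and the choice of SciPy's griddata interpolator are assumptions of the example, not details of the embodiment.

```python
import numpy as np
from scipy.interpolate import griddata

def rasterise_surface_property(uv, values, nx, ny):
    """Sample a per-vertex scalar onto a regular 2D cylindrical raster.

    uv     : (N, 2) conformal-map coordinates, assumed with u in [0, 1]
             (distance along the colon) and v in [0, 1) (angle around it).
    values : (N,) per-vertex property, e.g. shape index or a CT intensity statistic.
    """
    # Duplicate vertices one period above and below in v so that linear
    # interpolation wraps correctly around the circumference.
    uv_rep = np.vstack([uv, uv + [0.0, 1.0], uv - [0.0, 1.0]])
    val_rep = np.concatenate([values, values, values])

    gu, gv = np.meshgrid(np.linspace(0.0, 1.0, nx),
                         np.linspace(0.0, 1.0, ny, endpoint=False))
    img = griddata(uv_rep, val_rep, (gu, gv), method='linear')
    return img  # shape (ny, nx): rows = angular position, columns = along-colon position
```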
If the colon surface is formed of a number of disconnected segments, each one of these is mapped to an individual 2D image, each representing a segment of the cylinder representing the full colon. In order to form a single 2D image from the disconnected segments, the length of each collapsed and uncollapsed segment is estimated, and the order of the different segments is established. The length of each well- distended segment can be estimated based on the length of its centerline. Assuming that a collapsed segment is relatively straight, its length can be estimated as the Euclidean distance between the centerlines of the well- distended segments. Other methods for estimating the lengths of the collapsed and uncollapsed segments may also be used.
Once the relative length of each segment is known, a 2D image for the full colon can be formed by scaling the length of each segment appropriately and combining them into a single 2D image. The uncollapsed segments may need shifting in the y direction (around the circumference of the colon) as the angular orientation for each segment will be arbitrary. This rotation can be done manually or automatically, e.g. by minimising the 3D distance between points with the same 2D y coordinate on either side of a collapsed segment.
The collapsed regions of the 2D image are assigned a null value and are ignored during the 2D non-rigid registration described below. For the uncollapsed regions, the 2D images can be generated for any of the properties from the 3D colon surface or original 3D CT image as described above.
Anatomical features such as the hepatic and splenic flexures, haustral folds, or teniae coli, identified either on the 3D surface, on the colon segmentations, or in the 3D CT scans, may be mapped onto the 2D cylindrical images. Such features may also be detected directly on the 2D images. If corresponding features are identified on both the prone and supine 2D images (and any others that are used), these can also be used to constrain or initialise the 2D non-rigid registration described below. For example, this initialization could be performed by using a simple linear stretch/compress along the length of the colon, applying an initial warp (e.g. thin-plate-spline) to one of the images, or by initialising the B-spline transformation so that the features are approximately aligned at the start of the non-rigid registration.
The 2D cylindrical surface images of the colon allow a 2D non-rigid registration of the prone and supine colon surfaces to be performed (320), for example using an iterative B-Spline registration method. It will be appreciated that any other algorithm for performing a non-rigid 2D registration may be used as appropriate. The standard 2D registration procedure is modified to treat the colon surfaces as cylinders rather than flat 2D planes, i.e. they repeat as you move around the colon, but not as you move along it. The resulting correspondence established both along the length of the colon and around the circumference of the colon is able to compensate for twist and expansion of the colon.
The similarity between the two cylindrical images can be determined during the registration using one or a combination of the 3D surface and CT measures mentioned above. Null values in the 2D images, corresponding to collapsed segments of the colon, are ignored during the registration. In addition, the registrations can be regularized using one or more computational, physical, or bio- mechanical constraints, such as penalizing the bending energy of the deformation or local changes in volume or colon surface area. This can help the optimization of the registration, as well as ensure that plausible results are generated.
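To make the cylindrical treatment and the handling of collapsed regions concrete, here is a small sketch (assuming NaN is used to mark null pixels) of two of the ingredients: duplicating the image in the angular direction so a planar registration behaves periodically, and a similarity measure that ignores null pixels. It is not the B-spline implementation of reference [19].

```python
import numpy as np

def cyclic_repeat(img):
    """Stack two copies of the cylindrical image in the angular (row) direction,
    so that a planar 2D registration behaves as if the image wrapped around."""
    return np.vstack([img, img])

def masked_ssd(target, warped_source):
    """Sum of squared differences that ignores NaN-valued pixels, e.g. regions
    of the 2D image corresponding to collapsed colon segments."""
    valid = ~(np.isnan(target) | np.isnan(warped_source))
    diff = target[valid] - warped_source[valid]
    return float(np.sum(diff * diff))
```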
The 2D registration establishes a 1-to-1 correspondence between the two (or more) cylindrical representations of the colon. This correspondence can then be mapped back onto the surfaces in 3D space (via the conformal mappings) so that the 3D coordinate of any point on the colon surface from one image can be mapped to the corresponding 3D point on the surface from the other image (330).
For some computer-aided detection (CAD) purposes, the displacement is required at locations close to, but not directly on, the colon surface. In these circumstances the displacement of the colon surface can be propagated to the surrounding voxels, e.g. by using the displacement of the surface point closest to the desired voxel.
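A minimal sketch of this nearest-surface-point propagation is given below, assuming the registration has already produced a per-vertex displacement field; the use of SciPy's cKDTree is an implementation choice made for the example rather than part of the described method.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_displacement(surface_points, surface_displacements, query_points):
    """Assign to each query point (e.g. a CAD candidate near the colon wall)
    the displacement of its nearest point on the registered colon surface.

    surface_points        : (N, 3) surface coordinates in mm.
    surface_displacements : (N, 3) displacement of each surface point to its
                            correspondence in the other image (from the registration).
    query_points          : (M, 3) locations near, but not on, the surface.
    """
    tree = cKDTree(surface_points)
    _, nearest = tree.query(query_points)    # index of the closest surface point
    return surface_displacements[nearest]    # (M, 3) propagated displacements
```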
Standard CAD software may now be used to process the CT images and to identify and/or further investigate polyp candidates. The results of the image registration can be used to assist with this work. For example if a feature is present in the prone image but absent from the corresponding location (as determined by the image registration) of the supine image, this increases the likelihood that the potential polyp in fact comprises faecal matter (that moved between images).
The centre line of the colon, which is often used as an aid to visualization of the colon, can be determined from the registered 2D cylindrical images. Thus in these images, points with the same coordinate along the colon define closed loops around the circumference of the colon surface. The conformal mappings can be used to determine the 3D coordinates of the points on these loops. A point is defined in the middle of each loop, for example using the mean of the 3D coordinates. Therefore, by finding the middle of successive loops along the length of the colon in this manner, a centre line can be defined for each of the 3D colon surfaces. This centre line may be further processed, for example by smoothing the centre line coordinates to produce a trajectory without loops that maintains the association of each point in the surface with a unique point on the centre line. This method of extracting the centre line has the advantages that: (i) every point on the surface is automatically associated with a point on the centre line, and (ii) the prone and supine (and any other) centre lines are automatically registered to each other.
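The following sketch illustrates the loop-averaging idea under the assumption that every surface vertex carries a normalised along-colon coordinate from the cylindrical parameterisation; the binning scheme and number of loops are arbitrary choices made for the example.

```python
import numpy as np

def centre_line_from_cylinder(surface_points_3d, along_coordinate, n_steps=200):
    """Estimate a centre line as the mean 3D position of each circumferential loop.

    surface_points_3d : (N, 3) coordinates of the colon surface.
    along_coordinate  : (N,) normalised position of each point along the colon,
                        taken from the cylindrical parameterisation (assumed in [0, 1]).
    """
    edges = np.linspace(0.0, 1.0, n_steps + 1)
    bins = np.clip(np.digitize(along_coordinate, edges) - 1, 0, n_steps - 1)
    centre = np.full((n_steps, 3), np.nan)
    for b in range(n_steps):
        loop = surface_points_3d[bins == b]
        if len(loop):
            centre[b] = loop.mean(axis=0)   # middle of the closed loop
    return centre
```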
A workstation can display a range of views of the registered images, including a 'virtual colonoscopy view', orthogonal slices through the CT volume and orthogonal slices through the segmented CT volume. The identified centre line can be overlaid on these views if so desired. A linked pointer, when held over a point in one view, can be used to indicate the corresponding points in the alternative view (or views).
This can be achieved in a variety of methods. One possibility is that user indicates a point near to the image surface in one CT image. The system then displays a cursor (such as a small arrow or cross) at the point nearest to the same surface point as computed in the other view. Another possibility is to start with a 3D surface or volume rendering of the inside of the colon in one CT image (view A) from a position and direction defined by the centre line location and direction. The system can then use the conformal mapping and cylindrical registration to identify the nearest centre line coordinate and its direction in the other CT image (view B). With this information a surface or volume rendered view can be generated in the same direction as in view A.
One way to visualize spatial correspondence between the two views is to use the cursor to identify a point in the surface in view A. The best estimate of that location in the other view (view B) can then be indicated by another arrow, cross or cursor. If an estimate of registration accuracy or precision is computed from the registration step, the projection of the 95% or 99% confidence limit of the estimated spatial correspondence can be displayed as a line contour or colour change on the rendered view.
In one embodiment, overlaid on the views of the CT images are markers indicating potential polyps. A clinician can delete a marker that identifies a point that does not correspond to a polyp, or insert a marker at a point that he/she identifies as being a potential polyp.
The approach described herein utilises the fact that a surface S1 in R3 can be represented using a one-to-one mapping φ1 to a planar domain D1 in R2, while likewise S2 (a second surface in R3) can be mapped to D2. The transformation function between the three-dimensional surfaces S1 and S2 can then be derived as

f = φ2^(-1) ∘ f' ∘ φ1 : S1 → S2,

where f' is the registration f' : D1 → D2 between the two flat surfaces. This is illustrated in Figure 4, which depicts the principle of surface registration using a 2D manifold. Note that in Figure 4, the intensity scale indicates the shape index at each coordinate of the surface computed from the 3D inner colonic surfaces Si.
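Expressed as code, the composition f = φ2^(-1) ∘ f' ∘ φ1 might look like the following sketch, where the three mappings are assumed to be available as callables from the earlier conformal-mapping and 2D registration steps.

```python
def compose_registration(phi1, f_2d, phi2_inverse):
    """Return the 3D-to-3D correspondence f = phi2^(-1) o f' o phi1.

    phi1         : maps a 3D point on surface S1 to its 2D coordinates on D1.
    f_2d         : the 2D registration D1 -> D2.
    phi2_inverse : maps 2D coordinates on D2 back to a 3D point on surface S2.
    All three callables are assumed to be provided by the preceding steps.
    """
    def f(point_on_s1):
        return phi2_inverse(f_2d(phi1(point_on_s1)))
    return f
```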
One recently developed technique for discrete surface parameterisation is the Ricci flow method. This method deforms a surface proportional to its local curvature, where the curvature values evolve similarly to a heat diffusion process. This technique was first introduced by Hamilton [16] for Riemannian geometry and can also be used to parameterise surfaces with an arbitrary topology [17]. Qiu et al. [14] have used this technique to unfold the colon.
The Ricci flow is defined as

du_i/dt = K̄_i - K_i,

where K_i is the current curvature at vertex v_i, K̄_i is the desired Gaussian curvature, and u_i is computed from a circle packing metric [17]. It can be shown that the Ricci flow represents the gradient flow of an energy function which can be minimized using the gradient descent method. In applying the algorithm to the colon surface, the target curvature is determined. For the purpose of parameterisation, the target curvature is set to zero for all vertices.
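A heavily simplified sketch of the resulting gradient-descent loop is shown below. The per-vertex Gaussian curvature computation under the circle-packing metric is left as an assumed helper (compute_curvature), since a full discrete Ricci flow implementation as in [17] is substantially more involved; the step size and tolerance are illustrative.

```python
import numpy as np

def ricci_flow_step(u, current_curvature, target_curvature, step=0.05):
    """One explicit gradient step of the discrete Ricci flow,
    du_i/dt = Kbar_i - K_i, applied to the per-vertex conformal factors u_i."""
    return u + step * (target_curvature - current_curvature)

def run_ricci_flow(mesh, compute_curvature, n_vertices, tol=1e-6, max_iter=10000):
    """Illustrative driver loop. `compute_curvature(mesh, u)` is an assumed helper
    returning the current Gaussian curvature at every vertex under the metric
    induced by u (e.g. via a circle-packing metric); it is not provided here."""
    u = np.zeros(n_vertices)
    target = np.zeros(n_vertices)   # zero target curvature for the parameterisation
    for _ in range(max_iter):
        curvature = compute_curvature(mesh, u)
        if np.max(np.abs(curvature - target)) < tol:
            break
        u = ricci_flow_step(u, curvature, target)
    return u
```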
The inner colonic surface is obtained by extracting triangulated meshes of the inner colonic surfaces using segmentations of the air inside the prone and supine colons computed by the method described in [22]. It was ensured that the segmentations of the large intestine were topologically correct, using manual or automatic editing of the segmentations where appropriate. The segmentation provides the input for a marching cubes algorithm with subsequent smoothing and decimation. This results in a closed and simply connected mesh along the air-to-tissue border in the CT-image.
To apply the Ricci flow method to the colon, each original genus-zero surface is converted to a genus-one surface [17]. Therefore, the inner colonic surface (which is topologically equivalent to a sphere) is converted to a torus-like surface. A hole is punched into the cecum and the rectum at user (or machine) identified positions. The remaining faces are copied to a new mesh with an inverse orientation of its faces, so that the normal vectors point towards the inside of the colon. Subsequently the copied mesh is joined at the boundaries of the previously produced holes with the original surface triangulation. The resulting mesh provides the input for the Ricci flow computation, which provides the two-dimensional coordinates of each location within the surface.
The Ricci flow algorithm converges to a planar surface with local Gaussian curvature tending to zero everywhere by iteratively updating the edge lengths of the triangles. The optimisation is run until the maximum difference between the current curvatures K_i and the target curvatures K̄_i is close enough to zero to produce a suitable parameterisation that can be embedded into planar space. This is computed in a similar manner to [17], where each planar triangle is computed based on its final edge length.
As the parameterised mesh is not rectangular, the planar mesh is repeated so that a rectangular raster image will fully sample all points around the colon. This is illustrated in Figure 5, in which the grey (wiggly) bands each represent a repeat of the 2D coordinates of the surface for a further 360 degrees clockwise and anticlockwise. The straight horizontal lines represent the re-sampled complete colon surface in a form suitable for registration, where the top (0°) and bottom (360°) edges of the image correspond to the same point on the colon surface, thus representing the inner colonic surface as a cylinder. The horizontal axis (x) corresponds to position along the colon from cecum to rectum and the vertical axis (y) corresponds to rotation around the circumference of the colon. Each pixel of the raster image I has an interpolated value of the corresponding shape index SI from the three-dimensional surface. SI is defined as

SI = 1/2 - (1/π) · arctan( (κ1 + κ2) / (κ1 - κ2) ),

where κ1 and κ2 are the principal curvatures extracted from the surface. Any other measure, such as local curvature or a voxel grey value statistic derived from the original CT scan, could be associated with a pixel in I. The interpolated value is computed based on the three corner values of each triangle in the planar mesh, which correspond to the 3D vertices of the surface. The resulting prone and supine raster images are shown in Figure 6.
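For illustration, the shape index as reconstructed above can be computed per vertex from the two principal curvatures as follows; the exact sign and normalisation convention used in the embodiment is an assumption of this sketch.

```python
import numpy as np

def shape_index(kappa_a, kappa_b, eps=1e-12):
    """Shape index SI = 1/2 - (1/pi) * arctan((k1 + k2) / (k1 - k2)),
    computed from the two principal curvatures (k1 >= k2), giving values in (0, 1).
    `eps` guards against division by zero at umbilic points where k1 == k2."""
    k1 = np.maximum(kappa_a, kappa_b)
    k2 = np.minimum(kappa_a, kappa_b)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)
```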
In accordance with one embodiment of the invention, for establishing spatial correspondence between prone and supine images, the 2D manifolds are used to generate shape index images. These shape index images are first aligned in the y-direction to account for differences in the 0-degree position arbitrarily assigned by the planar embedding. This is performed automatically by applying a circular shift in the y-direction to a first image (I1) that minimises the Sum of Squared Differences (SSD) between it and a second image (I2). The B-spline registration may be performed on the flat Euclidean plane (and not necessarily in a cylindrical framework, although the latter option might also be used). Therefore I1 and I2 are repeated in the y-direction, resulting in images with a resolution of nx × 2ny, so as to simulate the cylindrical images during the registration.
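A minimal sketch of this circular-shift initialisation is given below, assuming both shape index images are stored as 2D arrays with the angular direction along the rows; an exhaustive search over all row shifts is used here purely for clarity.

```python
import numpy as np

def best_circular_shift(img_fixed, img_moving):
    """Find the circular shift (in the angular/row direction) of `img_moving`
    that minimises the sum of squared differences with `img_fixed`.
    Both images are assumed to be cylindrical rasters of identical shape."""
    ny = img_fixed.shape[0]
    costs = [np.nansum((img_fixed - np.roll(img_moving, s, axis=0)) ** 2)
             for s in range(ny)]
    best = int(np.argmin(costs))
    return best, np.roll(img_moving, best, axis=0)
```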
A 2D B-spline registration is then performed with the shifted I1 as target and I2 as source, using the implementation provided by [19]. The registration is performed in two stages, the first to recover the larger deformations, and the second to recover the finer deformations. The first stage consists of five resolution levels. The second stage consists of three resolution levels and uses the result from the first stage as the starting transformation for the coarsest level. Both the image and B-spline control point grid resolutions are doubled at each level. The final resolution level uses images with 3000 × 300 (nx × 2ny) pixels and control points spaced every 12.5 pixels in both directions. SSD is used as the similarity measure. The gradient of the cost function is smoothed at each iteration using a Gaussian kernel with a standard deviation of 3 control points for the first stage and one control point for the second. No additional constraint term is used for the first stage, but bending energy is used for the second. Gaussian smoothing of the 2D images is applied at each resolution level during the first stage of registration but is not used for the second.
The central half of the B-spline registration result, from 90 to 270 degrees on the y-axis, covers the whole inner colon surface and should have a similar displacement at y = 90 degrees and y = 270 degrees due to the duplication of image data in the y-direction prior to registration. To force the result to be fully cylindrical, i.e. so that the transformation is continuous from y = 90 degrees to y = 270 degrees, the displacements of the control points at y = 90 degrees and y = 270 degrees are averaged together, and the displacements of the control points before y = 90 degrees and after y = 270 degrees are replaced with the corresponding control point displacements from the central section. This results in a continuous transformation around the entire inner colon surface and allows the mapping between D1 and D2 to be determined. From this mapping, the full 3D mapping f, as shown in Figure 4, can be readily derived.
In one investigation, ethical permission and informed consent were obtained to utilise anonymised CT colonography datasets. Colonic cleansing and insufflations were undertaken in accordance with current recommendations [5] for all subjects used in the study. A radiologist with experience of over 500 validated colonography studies matched pairs of reference points from the original prone and supine CT slices of six patients. Using separate multiplanar reformats, a combination of polyps, normal anatomical structures and diverticula were identified from multiple colonic segments, resulting in an average of ten pairs of coordinates per patient. The error of establishing the correspondence points in both images was determined by redoing the validation following an interval of several days to reduce recall bias. The repeated validation was performed using the same coordinates from the supine datasets. The radiologist was blind to the results of the previous matching exercise. The results suggest a significant difficulty in finding correct correspondences in the prone and supine CT images.
A further analysis was performed that involved the removal of outliers based on the maximum likelihood estimate using the median, according to σ = 1.4828(1 + 5/(n−3))·med|E|, where the in-liers are defined by |E| ≤ γ and γ = 1.96σ. This gives a threshold γ of 14.6 mm in order to obtain reliable landmarks to validate the registration, and reduces the human observer error from (8.2 ± 12.5) mm to (3.8 ± 2.9) mm.
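For example, a direct transcription of this thresholding step (names illustrative) might be:

```python
import numpy as np

def landmark_inliers(errors_mm):
    """Apply the robust threshold sigma = 1.4828*(1 + 5/(n-3))*median(|E|)
    and gamma = 1.96*sigma; returns gamma and a boolean mask of in-liers."""
    e = np.abs(np.asarray(errors_mm, dtype=float))
    n = e.size
    sigma = 1.4828 * (1.0 + 5.0 / (n - 3)) * np.median(e)
    gamma = 1.96 * sigma
    return gamma, e <= gamma
```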
In order to determine the registration error, the closest surface points were found on S1 to the average of the in-lier landmark coordinates in the prone image. The closest points were then found on S2 to the corresponding landmarks in the supine image. These points on S2 were transformed using the 3D mapping f, and the distance calculated to the corresponding points on S1. The validation results are summarized in Table 1 below, and show the mean (μ) and standard deviation (σ) for each case and all together: (1) just using the direct mapping between D1 and D2 after initial alignment in the y-direction, and (2) after establishing the spatial correspondence using the B-spline registration. Patients 1 to 3 had polyps, where the registration errors were 11, 5.7 and 0.9 mm respectively. The histogram of the registration error (see Figure 7) shows that the majority of errors (91%) lie below 20 mm. This level of accuracy is helpful for a screening process.
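A sketch of this error measurement is shown below, assuming the surfaces are available as vertex arrays and that the 3D mapping f is a callable that transforms points on S2 into the prone frame; the direction and interface are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_registration_error(landmarks_prone, landmarks_supine,
                               verts_s1, verts_s2, mapping_f):
    """Find the closest S1 vertices to the prone landmarks and the closest S2
    vertices to the supine landmarks, push the S2 points through the 3D
    mapping f, and report the Euclidean distances (mm) to the S1 points."""
    p1 = verts_s1[cKDTree(verts_s1).query(landmarks_prone)[1]]
    p2 = verts_s2[cKDTree(verts_s2).query(landmarks_supine)[1]]
    return np.linalg.norm(mapping_f(p2) - p1, axis=1)
```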
Patients 4 and 6 have clearly higher errors, which correspond to the data points larger than 20 mm in Figure 7. This may be because, for some of the points, it is not clear whether the corresponding landmark is wrongly assigned to a neighboring fold or whether the registration fails. (The statistics exclude one point which lay in the rectum and had a large error due to the deformations in the parameterization in that area, but in any event the rectum is of little clinical relevance when screening for colorectal lesions.) Note that one limitation of the above study is the lack of an accurate reference standard, in that the reported accuracy of the results is almost certainly limited by observer error in picking corresponding landmarks in the prone and supine CT images.
Patient   #points    μ1       σ1       μ2       σ2
1         10 (10)    23.19    9.53     6.84     4.61
2         11 (12)    12.01    7.18     6.76     3.63
3         10 (14)    20.89    12.63    6.15     3.38
4         10 (11)    23.11    12.60    8.42     11.61
5         8 (9)      10.59    4.67     4.36     2.54
6         7 (9)      24.81    16.72    17.37    17.35
All       56 (65)    19.12    11.91    7.55     8.87
Table 1. Registration error in mm using the extracted surface points nearest to the landmarks, before (1) and after (2) the 2D B-spline registration.
In some embodiments, the non-rigid B-spline registrations described above are performed in a cylindrical framework. In one example implementation of such an embodiment, the hepatic and splenic flexures were used to provide a good initialisation for the non-rigid registration. These flexures can be detected automatically or manually in the 3D data sets of the first and second images and mapped onto the 2D cylindrical representations S1 and S2 using the conformal mappings Φ1 and Φ2 respectively. One of the 2D cylindrical representations, for example S1, was then linearly stretched and compressed in the x-direction so that the flexures have the same x-location in both S1 and S2. In this implementation, no alignment in the y-direction was performed, as this matching is fully recovered by the cylindrical B-spline registration. Shape index images, I1 and I2, were then generated from S1 and S2, as described above.
In this implementation, the alignment between I1 and I2 was established using a cylindrical non-rigid B-spline registration method. For standard B-spline registrations, the control point grid extends outside the image by at least one control point spacing in each direction so that the deformation is defined over the whole image. For the cylindrical registrations, the control point grid does not extend outside the images in the y-direction (around the cylinder). Instead, when an extended control point is required, the corresponding value is taken from the opposite side of the grid. In addition, any displacement in the x-direction (along the colon) at each end of the image was prevented by fixing the x-displacement of the first and last three control points to be zero, which ensures that the ends of the images are aligned with each other, while still allowing for twists around the colon.
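The wrap-around behaviour can be illustrated with a small sketch; the array shapes and function names are assumptions for illustration and do not reproduce the actual implementation in [19].

```python
import numpy as np

def get_control_point(grid, ix, iy):
    """Fetch a control point from a (n_cx, n_cy, 2) displacement grid.
    Along x (colon length) indices are clipped as for a standard grid;
    along y (around the colon) out-of-range indices wrap to the opposite
    side, which is what makes the deformation cylindrical."""
    ix = int(np.clip(ix, 0, grid.shape[0] - 1))
    return grid[ix, iy % grid.shape[1]]

def pin_image_ends(grid):
    """Zero the x-displacement of the first and last three control point
    columns so the image ends stay aligned, while twist (y-displacement)
    around the colon remains free."""
    grid[:3, :, 0] = 0.0
    grid[-3:, :, 0] = 0.0
    return grid
```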
In this implementation, SSD was again used as the similarity measure, and bending energy and volume preserving penalty terms were used to constrain the registration, as described in [19]. A coarse-to-fine approach was used in order to capture first the largest deformations and then the smaller differences between both input images. This was achieved with a seven-level multi-resolution approach using I1 as target and I2 as source. Both the image and B-spline control point grid resolutions were doubled with increasing resolution levels. The final resolution level used images with 4096 × 256 (nx × ny) pixels. The control point spacing was 16 pixels in both directions at each resolution level. The gradient of the cost function was smoothed after each iteration using a Gaussian kernel with a standard deviation of 3. Gaussian smoothing of the 2D images was applied at each resolution level with a standard deviation of two pixels. The objective function weight for both penalty terms was set to 1 × e−4 (where e is the base of natural logarithms). These parameters were found to recover the majority of the deformation between two images for the data used for tuning, while preventing unrealistic deformations from occurring.
The use of a cylindrical B-spline registration results in a continuous transformation around the entire endoluminal colon surface and leads to a mapping between S1 and S2. From this mapping, the full 3D mapping (corresponding to f, as shown in Figure 4) can be readily determined as discussed above.
The above approach can handle datasets where the colon is represented as a number of disconnected segments (rather than as a single connected object). This is helpful, since despite colonic insufflation, short segments of colonic collapse commonly occur during investigations, especially when the patient changes position from supine to prone. Furthermore, residual colonic fluid due to suboptimal bowel preparation may occlude the colonic lumen, resulting in more than one colonic segment for 3D reconstruction.
Figure 8 shows an example of a patient's colon with a collapse in the descending colon (DC) in the supine position. The image on the left of Figure 8 represents the prone position, while the image on the right represents the supine position. The rectangular box on the right image marks a portion of the colon that is collapsed in the supine position, but fully distended in the prone position.
If the colon is locally severely under-distended, the segmentation method described in [22] can be used to determine a set of disconnected colon segments. The beginning point and end point of each segment, as well as the correct order of the segments, may be specified manually by the radiologist
(most 3D imaging systems for colonoscopies allow a radiologist to manually choose the order in which the centerline links disconnected colonic segments). In one implementation, the length of each collapsed and uncollapsed segment was determined as discussed above. The angular alignment between each segment was determined as the shift around the y-axis which minimizes the 3D distance between points with the same angular orientation on either side of the collapse.
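A minimal sketch of this angular alignment is shown below; it assumes the points on either side of the collapse are sampled at the same n_y angular positions, which is an assumption made purely for illustration.

```python
import numpy as np

def angular_shift(ring_a, ring_b):
    """ring_a, ring_b: (n_y, 3) arrays of 3D surface points, one ring on each
    side of a collapsed section, indexed by angular position. Returns the
    circular shift of ring_b around the y-axis that minimises the summed 3D
    distance to ring_a at matching angular orientations."""
    costs = [np.sum(np.linalg.norm(np.roll(ring_b, s, axis=0) - ring_a, axis=1))
             for s in range(ring_a.shape[0])]
    return int(np.argmin(costs))
```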
Figure 9 shows an example of the cylindrical images I of such a case (obtained from the patient data for the investigation discussed below). In particular, Figure 9 provides cylindrical representations as raster images of the collapsed supine (top), prone (middle) and deformed supine (bottom) endoluminal colon surface. To the left of the diagram, the location of a polyp is marked before registration (top) and after registration (middle and bottom). It can be seen from Figure 9 that despite the missing data in the collapsed section of the descending colon, both supine colon segments are reasonably well registered with the fully distended prone endoluminal colon surface.
In another investigation, ethical permission was obtained to utilize anonymized CT
colonography data acquired as part of normal day-to-day clinical practice. The CT colonography had been performed in accordance with current recommendations for good clinical practice, and any detected polyps were subsequently validated via optical colonoscopy. For establishing spatial correspondence across complete endoluminal surfaces, 24 patients were selected whose colon was not under-distended in either the prone or supine position and who had either fluid 'tagging' (the increased radio-density allows 'digital cleansing' of residual fluid) or little remaining fluid. This allowed a continuous segmentation over the full length of the colon using the methods described in [22].
The datasets were randomly allocated into development and validation sets (using random permutation), with 12 cases each. During the development, it was discovered that it can be difficult to identify corresponding features by eye in the cylindrical image representations for some cases. Closer examination revealed that this was due either to large differences in distension of the colon in the prone and supine views or to insufficient fluid tagging. Large differences in distension can lead to considerable local dissimilarity of surface features, such as folds, which may occur over part or all of the colon. Furthermore, differences in the colon surface can occur due to insufficient fluid tagging for accurate digital cleansing, which can lead to artifacts in the segmentation.
In view of the above factors, 4 development datasets with marked differences in local distension were excluded from the study, leaving 8 remaining development cases (patients 1 to 8). The development set was used to tune the registration algorithm parameters. In addition, 4 cases of the validation set which showed large differences in the cylindrical images were also excluded, resulting in a total of 8 data sets with fully connected colon segmentations in both views for validation (patients 9 to 16). A further 5 cases were selected for validation of the method on cases with local colonic collapse (patients 17 to 21). For these cases, the distension and surface features of the 3D endoluminal surfaces S were judged by eye to be sufficiently similar in the well-distended segments before execution of the registration algorithm. This resulted in a total of 13 cases used for validation: 8 fully connected sets and 5 with local colonic collapse.
In order to assess the spatial accuracy of the proposed registration method, the registration error was measured on the basis of clinically validated polyps and haustral folds. Regarding the former, experienced radiologists identified polyps in both prone and supine CT colonography scans using 2D multi-planar reformats and endoscopy data for guidance. The endoluminal extent of the polyps was labeled to provide reference coordinates for validation. Polyp labels were checked and corrected if necessary and then matched by eye between the prone and supine view by an experienced colonography radiologist. The cases were selected to present a widespread distribution of polyps throughout the colonic length so that registration accuracy could be investigated over the entire endoluminal surface. However, any polyps in the 2D cylindrical images I were masked, such that those pixels lying on or close to the polyp were ignored when computing the similarity measure during registration, so that the polyps used for validation did not bias the registration results.
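A sketch of such a masked similarity measure is given below; it is illustrative only, with the boolean mask and function name being assumptions rather than the actual implementation.

```python
import numpy as np

def masked_ssd(i1, i2, valid_mask):
    """SSD similarity computed only over valid pixels; pixels on or near a
    validation polyp are excluded so they cannot bias the registration."""
    diff = (i1 - i2)[valid_mask]
    return float(np.sum(diff * diff))
```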
In order to determine registration error, a pair of reference points was identified for each manually matched polyp in the prone and supine views. The reference points were defined as the points at the centre of the intersecting surface between the extracted endoluminal colon surfaces S and the segmented polyps. Therefore, these points lie on the surfaces S1 and S2 respectively. The center point c(x,y) was computed as the center of mass of the intersecting pixels in the 2D images I. Each 2D reference point c(x,y) corresponds to a 3D point on the surfaces S which lies inside the volume of the polyp. The registration error in mm was then determined by transforming each reference point from surface S1 using the mapping f, and calculating the 3D Euclidean distance to the corresponding reference point on surface S2.
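The reference point computation can be sketched as follows; the boolean polyp mask over the flattened image is a hypothetical input.

```python
import numpy as np

def polyp_reference_point(polyp_mask):
    """Centre of mass c(x, y) of the pixels where the segmented polyp
    intersects the flattened colon surface image I."""
    ys, xs = np.nonzero(polyp_mask)
    return float(xs.mean()), float(ys.mean())
```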
Table 2 shows the results of assessing the registrations using the polyps of the 13 validation sets. The error after the cylindrical parameterization but before the B-spline registration is denoted as the Polyp Parameterization Error (PPE), and the error after the B-spline registration is denoted as the Polyp Registration Error (PRE). Before calculating the PPE, the images were translated in the y-direction (around the colon) to minimize the SSD between the images, as the 0 degrees position is arbitrarily assigned by the cylindrical parameterization.
The PPE results show that cylindrical parameterization on its own is not enough to align the datasets - the cylindrical non-rigid B-spline registration is required for a more accurate alignment. The PRE had a mean (± std. dev.) of 5.7 (± 3.4) mm for the 13 validation patients with a single polyp each, and all 13 polyps were well aligned. This result is sufficiently accurate to direct the radiologist to an area of the endoluminal surface which is close to the suspected lesion in both views, even in the case of local colonic collapse (patients 17 to 21). The hepatic flexure was not used to initialize the registration for patient 12 and patients 18 to 20, as the distances along the centerline between prone and supine varied by more than fvar (here, 5%). However, the cylindrical registration was still able to align features well.
The resulting error for 9 polyps in the 8 development cases was 6.6 (± 4.2) mm after non-rigid registration (PRE), and therefore slightly higher than the PRE of the validation set. The polyps used for development of the registration method occurred in the ascending colon (AC), transverse colon (TC), descending colon (DC) and sigmoid colon (SC).
Patient    Polyp location    Collapsed location in prone    Collapsed location in supine    PPE (mm)    PRE (mm)
9          AC                none                           none                            32.4        3.0
10         Cecum             none                           none                            13.7        6.0
11         Cecum             none                           none                            30.2        3.1
12         Cecum             none                           none                            41.9        2.4
13         DC                none                           none                            15.7        6.8
14         AC                none                           none                            11.8        4.6
15         DC                none                           none                            23.9        3.6
16         AC                none                           none                            18.5        11.1
17         Cecum             none                           1 x DC                          24.8        9.4
18         AC                none                           1 x SC                          62.6        3.9
19         Rectum            1 x DC                         1 x DC                          55.9        6.0
20         Cecum             3 x (DC, SC)                   none                            13.3        12.4
21         AC                1 x DC                         1 x DC                          39.0        1.5
Mean                                                                                        29.5        5.7
Std. dev.                                                                                   16.4        3.4
Table 2. Registration error in mm for 13 polyps in the 13 patients used for validation of the registration method. These included 8 fully connected cases (patients 9 to 16) and 5 cases with local colonic collapse (patients 17 to 21). The Polyp Parameterization Error (PPE) gives the error in aligning the polyps after cylindrical parameterization but before registration, while the Polyp Registration Error (PRE) gives the error after cylindrical registration.
Although polyps can provide definite points of correspondence on the colon surface and give a good estimate of the registration performance, their number is limited to only one polyp per case in our validation set. In order to assess the registration quality over the entire endoluminal colon surface, corresponding haustral folds were chosen from the prone and supine datasets. Reference point coordinates were defined to lie centrally on the fold in both views. The haustral fold centers were automatically calculated by first segmenting each fold on the colon surfaces S using a graph cut method [23] based on the principal curvatures κ1 and κ2. Then, the center of each fold was computed as the vertex which has the lowest maximum distance to any vertex on the border of the segmented fold. Using the cylindrical representations to establish regions of likely correspondence, and virtual colonoscopic views for assurance, a radiologist (with experience in over 500 validated colonography studies) then manually identified corresponding folds from the prone and supine views. Any folds where the radiologist could not be certain of correspondence were not used for validation, but this still provided an average of 90 pairs of corresponding folds per patient, with a total of 1175 pairs over all 13 validation cases (patients 9 to 21). The center points of the corresponding folds were then used as corresponding reference points for assessing the registration.
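A sketch of the fold centre selection is given below; Euclidean distances are used for simplicity, and the actual criterion may instead be evaluated with distances on the mesh.

```python
import numpy as np

def fold_centre(fold_vertices, border_vertices):
    """Return the fold vertex whose maximum distance to any border vertex of
    the segmented fold is smallest, i.e. the most central vertex.

    fold_vertices: (m, 3) coordinates inside one segmented fold;
    border_vertices: (k, 3) coordinates on its border."""
    d = np.linalg.norm(fold_vertices[:, None, :] - border_vertices[None, :, :],
                       axis=2)
    return fold_vertices[int(np.argmin(d.max(axis=1)))]
```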
The Fold Registration Error (FRE) was assessed in the same way as the PRE, but using the haustral fold centers as reference points. Using this large set of reference points, the FRE was 7.7 (± 7.4) mm for a total of 1175 points distributed over all 13 validation patients. In comparison, just using the cylindrical parameterization on its own (before B-spline registration) results in a Fold Parameterization Error (FPE) of 23.4 (± 12.3) mm. A histogram of the registration error (FRE) is shown in Figure 10. Here, the normalized distributions of FRE for un-collapsed and collapsed cases are colored differently and displayed next to each other for comparison. It can be seen that the majority of points (95%) lie below an error of 22.8 mm, with a maximum error of 44.1 mm. However, the FRE is slightly higher for the 5 collapsed cases, at 9.7 (± 8.7) mm, as opposed to an FRE of 6.6 (± 6.3) mm for the 8 un-collapsed cases.
Using the approach described herein, the haustral folds are almost always aligned with another haustral fold in the other image, but this is not always the correct corresponding fold. Using the segmented haustral folds, an analysis was performed to see how many of the folds were aligned with the correct corresponding fold, and how many were misaligned by one or more folds. According to this analysis, 82% of all 1175 reference points were assigned to the correct corresponding fold, 15% of reference points were misaligned by just one fold, and 3% were misaligned by between two and three folds. (Of course, this assumes that the radiologist correctly labeled corresponding haustral folds in the first place, whereas it may well be that at least some of the apparently misregistered data is due to such observer error.) In any event, the rate of identification of corresponding haustral folds is generally high.
In line with the FRE results, 88% of haustral folds in the 8 un-collapsed cases were assigned to the correct corresponding fold, whereas 71% of haustral folds were correctly matched in the 5 cases with local colonic collapse.
The embodiments described above have focussed on performing a registration between two images. In some cases, there may be a need to determine a registration between three or more images. One option is to take one of these three (or more) images as a reference image, and then determine the registration of every other image with respect to the reference image using the above technique for performing a registration of two images. This would then establish a common spatial correspondence between all the images. It may also be possible to use a technique that takes three or more two-dimensional representations (such as produced by the conformal mappings) and then generates a single overall registration that applies to all of these two-dimensional representations (and hence their source three-dimensional images).

The approach described herein supports the use of a 2D representation of the cylindrical geometry of the colon for prone-supine CT registration. Measures derived from the 3D surface geometry of the colon and/or measures of the intensity distribution in the original CT volume can be used to guide registration. Examples include the direct calculation of surface geometry from grey value distributions, first and higher order derivatives, derivatives convolved with Gaussian functions covering a range of spatial resolutions, profiles of intensity normal to the colon surface, local texture measures and any other local statistical measure. The registration can also be assisted by features extracted from the surface, such as representations of haustral folds, teniae coli, diverticula or polyps. One curvature-based example is sketched below.
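For instance, the shape index of Koenderink and van Doorn is one widely used curvature-based measure; the sketch below gives the standard definition, which may differ in detail from the shape index images used in the embodiments above.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures (sorted so that hi >= lo);
    values run from -1 (cup) through 0 (saddle) to +1 (cap), so ridge-like
    haustral folds map to a characteristic band of values."""
    hi, lo = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(hi + lo, hi - lo)
```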
Computational, physical, and/or bio-mechanical constraints may be used to regularize the search for a match between the 2D representations of the prone and supine CT scans. For example, a limit may be applied to the twist per mm and/or the stretch per mm.
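One possible, purely illustrative, form of such a constraint is a penalty on the per-mm rate of change of the angular and longitudinal displacements along the colon; the field layout and limits below are assumptions.

```python
import numpy as np

def twist_stretch_penalty(disp, dx_mm, max_twist, max_stretch):
    """disp: (n_x, n_y, 2) displacement field with channel 0 along the colon
    (stretch) and channel 1 around it (twist); dx_mm is the pixel spacing
    along the colon. Any rate exceeding the per-mm limits is penalised
    quadratically."""
    rate = np.abs(np.diff(disp, axis=0)) / dx_mm
    excess_stretch = np.clip(rate[..., 0] - max_stretch, 0.0, None)
    excess_twist = np.clip(rate[..., 1] - max_twist, 0.0, None)
    return float(np.sum(excess_stretch ** 2 + excess_twist ** 2))
```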
A conformal mapping may be used to generate a cylindrical representation and coordinate system for the colon. The displacement of the colon surface can then be used to determine the displacement of surrounding locations. The cylindrical registration can also be used to generate centre lines for the prone and supine colon surfaces that automatically have correspondence with every point on the surfaces and are automatically registered to each other. The centre lines, cylindrical registrations, and conformal mappings can then be used for tasks such as visualizing correspondence between the prone and supine colon surfaces and refining the CAD polyp identification.
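A simple sketch of the nearest-surface-point rule for propagating displacements to neighbouring locations is given below; the data layout is assumed for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def propagate_displacement(surface_points, surface_disp, query_points):
    """surface_points: (n, 3) colon surface coordinates; surface_disp: (n, 3)
    their displacements from the registration; query_points: (m, 3) locations
    near the surface. Each query point receives the displacement of its
    closest surface point."""
    idx = cKDTree(surface_points).query(query_points)[1]
    return surface_disp[idx]
```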
The image registration procedure described herein may be implemented using appropriate software running on suitable apparatus (one or more general purpose computer workstations, specialised medical hardware, etc.). The software comprises computer instructions that, when executed by one or more processors in the suitable apparatus, cause the apparatus to perform the described image registration procedure. The image registration procedure described herein may also be implemented in whole or in part using special purpose hardware, for example, one or more graphics processing units.
In summary, the above embodiments and applications are provided by way of example only.
The skilled person will be aware of many potential modifications and applications that remain within the scope of the present invention as defined by the appended claims and their equivalents.
References
1. WHO. Cancer. http://www.who.int/mediacentre/factsheets/fs297/en/, 2006. Retrieved on 2010-02-26.
2. Rex DK, Cutler CS, Lemmel GT, Rahmani EY, Clark DW, Helper DJ, Lehman GA, Mark DG. "Colonoscopic miss rates of adenomas determined by back-to-back colonoscopies". Gastroenterology, 112(1):24-28, January 1997.
3. J. Marino, F. Qiu, A. Kaufman. "Virtually Assisted Optical Colonoscopy". SPIE 2008.
4. J. Marino, F. Qiu, A. Kaufman. "Co-Registration of Virtual and Optical Colonoscopy Views". MICCAI 2008 workshop on Computational and Visualisation Challenges in the New Era of Virtual Colonoscopy.
5. Taylor SA, Laghi A, Lefere P, Halligan S, Stoker J. "European Society of Gastrointestinal and Abdominal Radiology (ESGAR): consensus statement on CT colonography". Eur Radiol, 17:575-579, 2007.
6. B. Acar, S. Napel, D. S. Paik, P. Li and J. Yee. "Registration of supine and prone CT colonography data: method and evaluation". Radiology, pages 221-332, 2001.
7. J. Nappi, A. Okamura, H. Frimmel, A. Dachman and H. Yoshida. "Region-based supine-prone correspondence for reduction of false-positive CAD polyp candidates in CT colonography". Academic Radiology, 12:695-707, 2005.
8. J. W. Suh and C. L. Wyatt. "Deformable registration of supine and prone colons for computed tomographic colonography". Journal of Computer Assisted Tomography, 33(6):902-911, 2009.
9. Julien Lamy and Ronald M. Summers. "Intra-patient colon surface registration based on teniae coli". Proc. SPIE 6514, 65140C, 2007.
10. William E. Lorensen and Harvey E. Cline. "Marching Cubes: a high resolution 3D surface construction algorithm". Computer Graphics, 21(4), July 1987.
11. M. Garland and P. S. Heckbert. "Surface Simplification using Quadric Error Metrics". Conference Proceedings of SIGGRAPH 1997, pp. 209-216.
12. S. Haker, S. Angenent, A. Tannenbaum and R. Kikinis. "Nondistorting flattening maps and the 3-D visualization of colon CT images". IEEE Transactions on Medical Imaging, 19(7):665-670, 2000.
13. W. Hong, X. Gu, F. Qiu, M. Jin and A. Kaufman. "Conformal virtual colon flattening". Proceedings of the ACM Solid and Physical Modeling Symposium 2006, 2006.
14. Feng Qiu, Zhe Fan, Xiaotian Yin, Arie Kaufman and Xianfeng David Gu. "Colon Flattening with Discrete Ricci Flow". MICCAI workshop 2008.
15. K. Johnson, C. Johnson, J. Fletcher, R. MacCarty and R. Summers. "CT colonography using 360-degree virtual dissection: a feasibility study". AJR Am J Roentgenol, 186:9095, 2006.
16. R. S. Hamilton. "Three Manifolds with Positive Ricci Curvature". J. Differential Geometry, vol. 17, pp. 255-306, 1982.
17. M. Jin, J. Kim, F. Luo and X. Gu. "Discrete Surface Ricci Flow". IEEE Transactions on Visualization and Computer Graphics, 14(5):1030-1043, Sept.-Oct. 2008.
18. Bennett Chow and Feng Luo. "Combinatorial Ricci flows on surfaces". J. Differential Geom., 63(1):97-129, 2003.
19. Marc Modat, Gerard R Ridgway, Zeike A Taylor, Manja Lehmann, Josephine Barnes, Nick C Fox, David J Hawkes and Sebastien Ourselin. "Fast free-form deformation using graphics processing units". Comput. Methods Programs Biomed., 98(3):278-284, 2010. (Source code available at: http://sourceforge.net/projects/niftyreg/)
20. E. Fukano, M. Oda, T. Kitasaka, Y. Suenaga, T. Takayama, H. Takabatake, M. Mori, H. Natori, S. Nawano and K. Mori. "Haustral fold registration in CT colonography and its application to registration of virtual stretched view of the colon". SPIE Medical Imaging 2010: Computer-Aided Diagnosis, 7624(1):762420.
21. W. Zeng, J. Marino, K. Chaitanya Gurijala, X. Gu and A. Kaufman. "Supine and Prone Colon Registration Using Quasi-Conformal Mapping". IEEE Transactions on Visualization and Computer Graphics, 16(6):1348-1357, Nov.-Dec. 2010.
22. G. Slabaugh, X. Yang, X. Ye, R. Boyes and G. Beddoe. "A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions". Algorithms, 3(1):21-43, 2010.
23. Y. Boykov, O. Veksler and R. Zabih. "Fast approximate energy minimization via graph cuts". IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222-1239, 2002.
24. Jung W. Suh and Christopher L. Wyatt. "Registration of prone and supine colons in the presence of topological changes". SPIE Medical Imaging 2008: Physiology, Function, and Structure from Medical Images, 6916(1):69160C, 2008.

Claims
1. A method for performing a non-rigid registration of a first three-dimensional medical image containing a tubular organ with a second three-dimensional medical image containing the tubular organ, comprising:
segmenting the first three-dimensional medical image containing the tubular organ and extracting a first surface representing the surface of the tubular organ from the first three-dimensional medical image;
segmenting the second three-dimensional medical image containing the tubular organ and extracting a second surface representing the surface of the tubular organ from the second three- dimensional medical image;
generating a first mapping that maps the first surface to a first two-dimensional representation of the surface of the tubular organ, wherein said first two-dimensional representation of the surface of the tubular organ reflects the value of a property at each position on the surface of the tubular organ as derived from the first three-dimensional medical image;
generating a second mapping that maps the second surface to a second two-dimensional representation of the surface of the tubular organ, wherein said second two-dimensional representation of the surface of the tubular organ reflects the value of the property at each position on the surface of the tubular organ as derived from the second three-dimensional medical image;
determining a third mapping for transforming between the first two-dimensional representation of the surface of the tubular organ and the second two-dimensional representation of the surface of the tubular organ; and
registering the first three-dimensional medical image containing the tubular organ with the second three-dimensional medical image containing the tubular organ on the basis of said first, second and third mappings.
2. The method of claim 1, wherein the first three-dimensional medical image containing the tubular organ and the second three-dimensional medical image containing the tubular organ comprise Computed Tomography images.
3. The method of claim 1 or 2, wherein the first three-dimensional medical image containing a tubular organ and the second three-dimensional medical image containing the tubular organ are taken in first and second body positions respectively.
4. The method of claim 3, wherein said first and second body positions comprise prone and supine respectively.
5. The method of any preceding claim, wherein extracting the first and second surfaces includes reducing the topological complexity of each surface.
6. The method of any preceding claim, wherein each of said first and second surfaces comprises a cylindrical surface to represent the surface of the tubular organ.
7. The method of any preceding claim, wherein each of said first and second mappings comprises a conformal mapping.
8. The method of claim 7, wherein said conformal mapping is derived using the Ricci flow algorithm.
9. The method of any preceding claim, wherein said first and second two-dimensional representations of the surface of the tubular organ each have a first dimension corresponding to distance along the tubular organ and a second dimension corresponding to angular position around the tubular organ.
10. The method of any preceding claim, wherein said property derived from said first and second three-dimensional images is determined directly from said first and second three-dimensional medical images.
11. The method of claim 10, wherein said property is derived from image intensity profiles.
12. The method of any preceding claim, wherein said property derived from said first and second three-dimensional medical images is determined from the first and second surfaces respectively.
13. The method of claim 12, wherein said property comprises a measure based on local shape or curvature.
14. The method of any preceding claim, wherein the third mapping is performed within the framework of a cylindrical topology.
15. The method of claim 14, wherein each of the first and second two-dimensional representations is repeated for use in determining the third mapping to reflect said cylindrical topology.
16. The method of claim 14, wherein the third mapping applies said cylindrical topology automatically to each of the first and second two-dimensional representations by cyclically wrapping each of the first and second two-dimensional representations in a direction perpendicular to the central axis of said tubular organ.
17. The method of any preceding claim, wherein said third mapping comprises a non-rigid 2-D B- spline registration.
18. The method of any preceding claim, wherein the third mapping is based at least in part on one or more anatomical features.
19. The method of claim 18, wherein the one or more anatomical features are used as landmarks to initialise the third mapping with a coarse alignment that serves as a basis for then performing a finer registration.
20. The method of any preceding claim, wherein the third mapping is regularized using one or more computational, physical, or bio-mechanical constraints.
21. The method of any preceding claim, further comprising accommodating one or more collapsed segments in said tubular organ in the first and/or second medical images, wherein said one or more collapsed segments divide the tubular organ into multiple non-collapsed segments.
22. The method of claim 21, wherein accommodating one or more collapsed segments in said tubular organ comprises mapping each non-collapsed segment to an individual image representing a two-dimensional representation of the surface of the tubular organ for that non-collapsed segment, and then forming an aggregate image of the individual images for use as said first or second two- dimensional representation of the surface of the tubular organ.
23. The method of claim 22, wherein said aggregate image of the individual images includes null values in regions corresponding to the collapsed segments, and wherein said null values are ignored when determining said third mapping.
24. The method of claim 22 or 23, wherein forming an aggregate of the individual images for use as said first or second two-dimensional representation of the surface of the tubular organ includes estimating the length of each collapsed segment and each non-collapsed segment, and determining the positioning of said individual images within said aggregate image based on the estimated lengths.
25. The method of any of claims 22 to 24, wherein said tubular organ has a central axis, and forming an aggregate of the individual images includes rotating the individual images about the axis of the tubular organ to provide a consistent angular orientation between the multiple non-collapsed segments.
26. The method of any preceding claim, further comprising generating a first centre line from the first mapping and a second centre line from the second mapping.
27. The method of any preceding claim, wherein said registering determines a displacement between each point on the surface of the tubular organ in the first three-dimensional medical image and a corresponding point on the surface of the tubular organ in the second three-dimensional medical image, and wherein said displacement between the surface of the tubular organ in the first three- dimensional medical image and the surface of the tubular organ in the second three-dimensional medical image is used to determine the displacement of image locations neighbouring said surface of the tubular organ.
28. The method of claim 27, wherein the displacement of an image location neighbouring the surface of the tubular organ is determined as being the same as the displacement of the point on the surface of the tubular organ which is closest to said image location.
29. The method of any preceding claim, wherein the tubular organ comprises the colon.
30. The method of claim 29, wherein extracting the first and second surface includes forming a first hole in each surface to represent the anus and a second hole in each surface to represent the cecum.
31. A computer program comprising program instructions for implementing the method of any preceding claim when the program instructions are executed by a computer.
32. A computer program product comprising the computer program of claim 31 as encoded on a machine-readable storage medium.
33. The use of the method of any of claims 1 to 30 in a computer aided detection system.
34. The use of the method of any of claims 1 to 30 for providing a visualization of corresponding regions from the first three-dimensional medical image containing the tubular organ and from the second three-dimensional medical image containing the tubular organ.
35. Apparatus adapted to implement the method of any preceding claim.
36. Apparatus for performing a non-rigid registration of a first three-dimensional medical image containing a tubular organ with a second three-dimensional medical image containing the tubular organ, said apparatus being configured to:
segment the first three-dimensional medical image containing the tubular organ and extract a first surface representing the surface of the tubular organ from the first three-dimensional medical image;
segment the second three-dimensional medical image containing the tubular organ and extract a second surface representing the surface of the tubular organ from the second three-dimensional medical image;
generate a first mapping that maps the first surface to a first two-dimensional representation of the surface of the tubular organ, wherein said first two-dimensional representation of the surface of the tubular organ reflects the value of a property at each position on the surface of the tubular organ as derived from the first three-dimensional medical image;
generate a second mapping that maps the second surface to a second two-dimensional representation of the surface of the tubular organ, wherein said second two-dimensional representation of the surface of the tubular organ reflects the value of the property at each position on the surface of the tubular organ as derived from the second three-dimensional medical image;
determine a third mapping for transforming between the first two-dimensional representation of the surface of the tubular organ and the second two-dimensional representation of the surface of the tubular organ; and
register the first three-dimensional medical image containing the tubular organ with the second three-dimensional medical image containing the tubular organ on the basis of said first, second and third mappings.
37. The apparatus of claim 36, wherein the first three-dimensional medical image containing the tubular organ and the second three-dimensional medical image containing the tubular organ comprise Computed Tomography images.
38. The apparatus of claim 36 or 37, wherein the first three-dimensional medical image containing a tubular organ and the second three-dimensional medical image containing the tubular organ are taken in first and second body positions respectively.
39. The apparatus of claim 38, wherein said first and second body positions comprise prone and supine respectively.
40. The apparatus of any of claims 36 to 39, wherein extracting the first and second surfaces includes reducing the topological complexity of each surface.
41. The apparatus of any of claims 36 to 40, wherein each of said first and second surfaces comprises a cylindrical surface to represent the surface of the tubular organ.
42. The apparatus of any of claims 36 to 41, wherein each of said first and second mappings comprises a conformal mapping.
43. The apparatus of claim 42, wherein said conformal mapping is derived using the Ricci flow algorithm.
44. The apparatus of any of claims 36 to 43, wherein said first and second two-dimensional representations of the surface of the tubular organ each have a first dimension corresponding to distance along the tubular organ and a second dimension corresponding to angular position around the tubular organ.
45. The apparatus of any of claims 36 to 44, wherein said property derived from said first and second three-dimensional images is determined directly from said first and second three-dimensional medical images.
46. The apparatus of claim 45, wherein said property is derived from image intensity profiles.
47. The apparatus of any of claims 36 to 46, wherein said property derived from said first and second three-dimensional medical images is determined from the first and second surfaces respectively.
48. The apparatus of claim 47, wherein said property comprises a measure based on local shape or curvature.
49. The apparatus of any of claims 36 to 48, wherein the third mapping is performed within the framework of a cylindrical topology.
50. The apparatus of claim 49, wherein each of the first and second two-dimensional representations is repeated for use in determining the third mapping to reflect said cylindrical topology.
51. The apparatus of claim 49, wherein the third mapping applies said cylindrical topology automatically to each of the first and second two-dimensional representations by cyclically wrapping each of the first and second two-dimensional representations in a direction perpendicular to the central axis of said tubular organ.
52. The apparatus of any of claims 36 to 51, wherein said third mapping comprises a non-rigid 2-D B-spline registration.
53. The apparatus of any of claims 36 to 52, wherein the third mapping is based at least in part on one or more anatomical features.
54. The apparatus of claim 53, wherein the one or more anatomical features are used as landmarks to initialise the third mapping with a coarse alignment that serves as a basis for then performing a finer registration.
55. The apparatus of any of claims 36 to 54, wherein the third mapping is regularized using one or more computational, physical, or bio-mechanical constraints.
56. The apparatus of any of claims 36 to 55, wherein the apparatus is further configured to accommodate one or more collapsed segments in said tubular organ in the first and/or second medical images, wherein said one or more collapsed segments divide the tubular organ into multiple non- collapsed segments.
57. The apparatus of claim 56, wherein accommodating one or more collapsed segments in said tubular organ comprises mapping each non-collapsed segment to an individual image representing a two-dimensional representation of the surface of the tubular organ for that non-collapsed segment, and then forming an aggregate image of the individual images for use as said first or second two- dimensional representation of the surface of the tubular organ.
58. The apparatus of claim 57, wherein said aggregate image of the individual images includes null values in regions corresponding to the collapsed segments, and wherein said null values are ignored when determining said third mapping.
59. The apparatus of claim 57 or 58, wherein forming an aggregate of the individual images for use as said first or second two-dimensional representation of the surface of the tubular organ includes estimating the length of each collapsed segment and each non-collapsed segment, and determining the positioning of said individual images within said aggregate image based on the estimated lengths.
60. The apparatus of any of claims 57 to 59, wherein said tubular organ has a central axis, and forming an aggregate of the individual images includes rotating the individual images about the axis of the tubular organ to provide a consistent angular orientation between the multiple non-collapsed segments.
61. The apparatus of any of claims 36 to 60, wherein the apparatus is further configured to generate a first centre line from the first mapping and a second centre line from the second mapping.
62. The apparatus of any of claims 36 to 61, wherein said registering determines a displacement between each point on the surface of the tubular organ in the first three-dimensional medical image and a corresponding point on the surface of the tubular organ in the second three-dimensional medical image, and wherein said displacement between the surface of the tubular organ in the first three- dimensional medical image and the surface of the tubular organ in the second three-dimensional medical image is used to determine the displacement of image locations neighbouring said surface of the tubular organ.
63. The apparatus of claim 62, wherein the displacement of an image location neighbouring the surface of the tubular organ is determined as being the same as the displacement of the point on the surface of the tubular organ which is closest to said image location.
64. The apparatus of any of claims 36 to 63, wherein the tubular organ comprises the colon.
65. The apparatus of claim 64, wherein extracting the first and second surface includes forming a first hole in each surface to represent the anus and a second hole in each surface to represent the cecum.
66. A method for registering first and second three-dimensional medical images substantially as described herein with reference to the accompanying drawings.
67. A computer program for registering first and second three-dimensional medical images substantially as described herein with reference to the accompanying drawings.
68. A computer program for registering first and second three-dimensional medical images substantially as described herein with reference to the accompanying drawings.
PCT/GB2011/050488 2010-03-11 2011-03-11 Apparatus and method for registering medical images containing a tubular organ WO2011110867A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1004084.8A GB201004084D0 (en) 2010-03-11 2010-03-11 Apparatus and method for registering medical images containing a tubular organ
GB1004084.8 2010-03-11

Publications (1)

Publication Number Publication Date
WO2011110867A1 true WO2011110867A1 (en) 2011-09-15

Family

ID=42261440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2011/050488 WO2011110867A1 (en) 2010-03-11 2011-03-11 Apparatus and method for registering medical images containing a tubular organ

Country Status (2)

Country Link
GB (1) GB201004084D0 (en)
WO (1) WO2011110867A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140362080A1 (en) * 2011-10-21 2014-12-11 The Research Foundation Of State University Of New York System and Method for Context Preserving Maps Of Tubular Structures
CN111462492A (en) * 2020-04-10 2020-07-28 中南大学 Key road section detection method based on Rich flow
US11288846B2 (en) 2016-09-29 2022-03-29 Koninklijke Philips N.V. CBCT to MR registration via occluded shape reconstruction and robust point matching

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040136584A1 (en) 2002-09-27 2004-07-15 Burak Acar Method for matching and registering medical image data
US20040264753A1 (en) 2003-06-24 2004-12-30 Renaud Capolunghi Methods and apparatus to facilitate review of CT colonography exams
US20060215896A1 (en) 2003-10-31 2006-09-28 General Electric Company Method and apparatus for virtual subtraction of stool from registration and shape based analysis of prone and supine scans of the colon
US20070270682A1 (en) 2006-05-17 2007-11-22 The Gov't Of The U.S., As Represented By The Secretary Of Health & Human Services, N.I.H. Teniae coli guided navigation and registration for virtual colonoscopy
WO2010142624A1 (en) 2009-06-09 2010-12-16 Ibbt Vzw Method for mapping tubular surfaces to a cylinder

Non-Patent Citations (24)

* Cited by examiner, † Cited by third party
Title
B. ACAR; S. NAPEL; D. S. PAIK; P. LI; J. YEE: "Registration of supine and prone ct colonography data: Method and evaluation", RADIOLOGY, 2001, pages 221 - 332
BENNETT CHOW; FENG LUO: "Combinatorial Ricci flows on surfaces", J. DIFFERENTIAL GEOM., vol. 63, no. 1, 2003, pages 97 - 129
E. FUKANO; M. ODA; T. KITASAKA; Y. SUENAGA; T. TAKAYAMA; H. TAKABATAKE; M. MORI; H. NATORI; S. NAWANO; K. MORI: "Haustral fold registration in CT colonography and its application to registration of virtual stretched view of the colon", SPIE MEDICAL IMAGING 2010: COMPUTER-AIDED DIAGNOSIS, vol. 7624, no. 1, pages 762420
FENG QIU; ZHE FAN; XIAOTIAN YIN; ARIE KAUFMAN; XIANFENG DAVID GU: "Colon Flattening with Discrete Ricci Flow", MICCAI WORKSHOP, 2008
G. SLABAUGH; X. YANG; X. YE; R. BOYES; G. BEDDOE: "A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions", ALGORITHMS, vol. 3, no. 1, 2010, pages 21 - 43, XP055005695, DOI: doi:10.3390/a3010021
HAKER, S; ANGENENT, S; TANNENBAUM, A; KIKINIS, R: "Nondistorting flattening maps and the 3-D visualization of colon CT images", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 19, no. 7, 2000, pages 665 - 670, XP011035999
HONG, W; GU, X; QIU, F; JIN, M; KAUFMAN, A: "Conformal virtual colon flattening", PROCEEDINGS OF THE ACM SOLID AND PHYSICAL MODELING SYMPOSIUM 2006, 2006
J. MARINO; F. QIU; A. KAUFMAN: "Co-Registration of Virtual and Optical Colonoscopy Views", MICCAI 2008 WORKSHOP ON COMPUTATIONAL AND VISUALISATION CHALLENGES IN THE NEW ERA OF VIRTUAL COLONOSCOPY
J. MARINO; F. QIU; A. KAUFMAN: "Virtually Assisted Optical Colonoscopy", SPIE, 2008
J. NÄPPI; A. OKAMURA; H. FRIMMEL; A. DACHMAN; H. YOSHIDA: "Region-based supine-prone correspondence for reduction of false positive cad poly candidates in ct colonography", ACADAMIC RADIOLOGY, vol. 12, 2005, pages 695 - 707
JIN, M.; KIM, J.; LUO, F.; GU, X.: "Discrete Surface Ricci Flow", VISUALIZATION AND COMPUTER GRAPHICS, IEEE TRANSACTIONS, vol. 14, no. 5, September 2008 (2008-09-01), pages 1030 - 1043, XP011344511, DOI: doi:10.1109/TVCG.2008.57
JOHNSON K.; JOHNSON C.; FLETCHER J.; MACCARTY R.; SUMMERS R.: "CT colonography using 360- degree virtual dissection: a feasibility study", AJR AM J ROENTGENOL, vol. 186, 2006, pages 9095
JULIEN LAMY; RONALD M. SUMMERS: "Intra-patient colon surface registration based on teniae coli", PROC. SPIE, vol. 6514, 2007, pages 65140C
JUNG W. SUH; CHRISTOPHER L. WYATT: "Registration of prone and supine colons in the presence of topological changes", SPIE MEDICAL IMAGING 2008: PHYSIOLOGY, FUNCTION, AND STRUCTURE FROM MEDICAL IMAGES, vol. 6916, no. 1, 2008, pages 69160C
M. GARLAND; P. S. HECKBERT: "Surface Simplification using Quadric Error Metrics", CONFERENCE PROCEEDINGS OFSIGGRAPH, 1997, pages 209 - 216, XP000765818
MARC MODAT; GERARD R RIDGWAY; ZEIKE A TAYLOR; MANJA LEHMANN; JOSEPHINE BARNES; NICK C FOX; DAVID J HAWKES; SEBASTIEN OURSELIN: "Fastfree-form deformation using graphics processing units", COMPUT. METHODS PROGRAMS BIOMED., vol. 98, no. 3, 2010, pages 278 - 284, Retrieved from the Internet <URL:sourceforge.net/projects/niftyreg>
R S HAMILTON: "Three Manifolds with Positive Ricci Curvature", J. DIFFERENTIAL GEOMETRY, vol. 17, 1982, pages 255 - 306
REX DK; CUTLER CS; LEMMEL GT; RAHMANI EY; CLARK DW; HELPER DJ; LEHMAN GA; MARK DG: "Colonoscopic miss rates of adenomas determined by back-to-back colonoscopies", GASTROENTEROLOGY, vol. 112, no. 1, January 1997 (1997-01-01), pages 24 - 8, XP005178555, DOI: doi:10.1016/S0016-5085(97)70214-2
RONALD M SUMMERS ET AL: "Normalized Distance Along the Colon Centerline: A Method for Correlating Polyp Location on CT Colonography and Optical Colonoscopy", AMERICAN JOURNAL OF ROENTGENOLOGY, AMERICAN ROENTGEN RAY SOCIETY, US, vol. 193, no. 5, 1 November 2009 (2009-11-01), pages 1296 - 1304, XP007918555, ISSN: 0361-803X, DOI: DOI:10.2214/AJR.09.2611 *
SUH, J.W.; WYATT, C.L.: "Deformable registration of supine and prone colons for computed tomographic colonography (2009)", JOURNAL OF COMPUTER ASSISTED TOMOGRAPHY, vol. 33, no. 6, pages 902 - 911
TAYLOR SA; LAGHI A; LEFERE P; HALLIGAN S; STOKER J: "European society of gastrointestinal and abdominal radiology (ESGAR): consensus statement on CT colonography", EUR RADIOL, vol. 17, 2007, pages 575 - 579, XP019473270
W. ZENG; J. MARINO; K. CHAITANYA GURIJALA; X. GU; A. KAUFMAN: "Supine and Prone Colon Registration Using Quasi-Conformal Mapping. Visualization and Computer Graphics", IEEE TRANSACTIONS ON, vol. 16, no. 6, November 2010 (2010-11-01), pages 1348 - 1357
WILLIAM E. LORENSEN; HARVEY E. CLINE: "Marching Cubes: A high resolution 3D surface construction algorithm.", COMPUTER GRAPHICS, vol. 21, no. 4, July 1987 (1987-07-01), XP001377038
Y. BOYKOV; O. VEKSLER; R. ZABIH: "Fast approximate energy minimization via graph custs. Pattern Analysis and Machine Intelligence", IEEE TRANSACTIONS ON, vol. 23, no. 11, 2002, pages 1222 - 1239

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140362080A1 (en) * 2011-10-21 2014-12-11 The Research Foundation Of State University Of New York System and Method for Context Preserving Maps Of Tubular Structures
US9792729B2 (en) * 2011-10-21 2017-10-17 The Research Foundation For The State University Of New York System and method for context preserving maps of tubular structures
US11288846B2 (en) 2016-09-29 2022-03-29 Koninklijke Philips N.V. CBCT to MR registration via occluded shape reconstruction and robust point matching
CN111462492A (en) * 2020-04-10 2020-07-28 中南大学 Key road section detection method based on Rich flow

Also Published As

Publication number Publication date
GB201004084D0 (en) 2010-04-28

Similar Documents

Publication Publication Date Title
Ferrante et al. Slice-to-volume medical image registration: A survey
Zeng et al. Supine and prone colon registration using quasi-conformal mapping
US10198872B2 (en) 3D reconstruction and registration of endoscopic data
Betke et al. Landmark detection in the chest and registration of lung surfaces with an application to nodule registration
Li et al. Image registration based on autocorrelation of local structure
Kreiser et al. A survey of flattening‐based medical visualization techniques
Hong et al. 3D reconstruction of virtual colon structures from colonoscopy images
Roth et al. Registration of the endoluminal surfaces of the colon derived from prone and supine CT colonography
EP2189942A2 (en) Method and system for registering a medical image
CN103402434B (en) Medical diagnostic imaging apparatus, medical image display apparatus and medical image-processing apparatus
US20060050991A1 (en) System and method for segmenting a structure of interest using an interpolation of a separating surface in an area of attachment to a structure having similar properties
EP2244633A2 (en) Medical image reporting system and method
Kretschmer et al. ADR-anatomy-driven reformation
Nadeem et al. Corresponding supine and prone colon visualization using eigenfunction analysis and fold modeling
Xiong et al. Tracking the motion trajectories of junction structures in 4D CT images of the lung
Lu et al. An improved method of automatic colon segmentation for virtual colon unfolding
WO2011110867A1 (en) Apparatus and method for registering medical images containing a tubular organ
Silva et al. Fast volumetric registration method for tumor follow‐up in pulmonary CT exams
Zeng et al. Volumetric colon wall unfolding using harmonic differentials
Astaraki et al. Autopaint: A self-inpainting method for unsupervised anomaly detection
Wildeman et al. 2D/3D registration of micro-CT data to multi-view photographs based on a 3D distance map
Falta et al. Lung250M-4B: a combined 3D dataset for CT-and point cloud-based intra-patient lung registration
Cao et al. Tracking regional tissue volume and function change in lung using image registration
Ma et al. Supine to prone colon registration and visualization based on optimal mass transport
Nakao et al. Analysis of heterogeneity of pneumothorax-associated deformation using model-based registration

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11712301

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11712301

Country of ref document: EP

Kind code of ref document: A1