WO2007144620A2 - Automatic quantification of changes in images - Google Patents

Automatic quantification of changes in images

Info

Publication number
WO2007144620A2
WO2007144620A2 (PCT/GB2007/002206)
Authority
WO
WIPO (PCT)
Prior art keywords
image
bone
images
region
interest
Prior art date
Application number
PCT/GB2007/002206
Other languages
French (fr)
Other versions
WO2007144620A3 (en)
Inventor
Derek L. G. Hill
Kelvin K. Leung
Nadeem Saeed
Original Assignee
Ucl Business Plc
Glaxo Group Limited
Priority date
Filing date
Publication date
Application filed by Ucl Business Plc and Glaxo Group Limited
Publication of WO2007144620A2
Publication of WO2007144620A3


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20128Atlas-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • the present invention relates to image processing and particularly to the automatic quantification of changes in images, such as changes in multiple bones in MR or CT images of joints or changes in MR images of the brain.
  • Image based biomarkers have great potential for speeding up the drug discovery and development pipeline because imaging is non-invasive, can be used repeatedly, and can be used to localise changes in the body due to disease and therapy. Frequently, imaging changes involve changes in the size, shape and brightness of image features. It is frequently desirable to track multiple image features simultaneously, though the features of interest are known in advance. In order to extract maximum value from images, it is necessary to quantify both size/shape changes and brightness changes from all features of interest. Furthermore, the locations of features of interest in the images can vary substantially between time points, and the relative position of features of interest can also change. This normally confounds techniques to track these changes across time points.
  • the skull ensures that the positioning changes between time points are limited to a rigid body transformation, and techniques for determining rigid body transformations between brain scans are well established. In most other parts of the body, however, more complicated transformations are required to correct for repositioning.
  • MRI allows visualisation of relevant changes to the joint including cartilage thinning in OA, and bone erosion, bone oedema and synovial inflammation in RA.
  • CT allows visualization of bone destruction due to erosions in RA, and joint and bone damage in OA.
  • manual segmentations of images are often carried out to measure bone volumes.
  • An observer may take several hours to accurately segment a bone from an image.
  • manual segmentation may not accurately reflect local changes due to the bone lesion.
  • the intra-observer variability of manual segmentations is of the order of a few percent (around 2%). This is not sufficient to reliably measure changes in bone lesions that are about 1% of the volume of the bone.
  • the amount of change in the bone volume may be comparable to the geometric distortion caused by field inhomogeneity, gradient miscalibration, and other errors relating to the stability of the MR scanner. Accordingly, the technique of manual segmentation is limited in its use and accuracy.
  • Segmentation propagation is a technique that makes use of the deformation field calculated from the registration of two images to propagate a region of interest from one image to another one. Segmentation propagation has been used to quantify small cerebral ventricular volume changes in treated growth hormone patients (M. Holden et al. "Quantification of small cerebral ventricular volume changes in treated growth hormone patients using nonrigid registration," IEEE Trans. Med Imaging, vol. 21, no. 10, pp. 1292-1301, Oct. 2002).
  • Figure 1 shows the principle of segmentation propagation.
  • I_A represents the atlas (source) image with a segmented structure, R_A, defined by a connected set of boundary points at voxel locations, shown as dots and lines.
  • I_1 and I_2 represent the baseline and follow-up images of a subject.
  • Non-rigid registration of I_A to I_1 and I_2 produces the transformations T_A1 and T_A2. Image I_A is transformed by T_A1 and T_A2 into the space of I_1 and I_2, which results in propagated structures R_1 and R_2.
  • the transformed set of boundary points does not, in general, coincide with the voxel locations of I_1 and I_2.
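  • The propagation step can be sketched in code. The following is a minimal sketch, assuming a pull-back deformation field sampled at every target voxel; the array names and the use of nearest-neighbour lookup are illustrative assumptions, not the patent's implementation (which propagates boundary points that generally land between voxel centres):

```python
import numpy as np

def propagate_segmentation(label_atlas, deformation):
    # label_atlas: (Z, Y, X) binary array defined in atlas space.
    # deformation: (3, Z, Y, X) pull-back field giving, for each target
    # voxel, the atlas-space coordinate it maps from.
    # Nearest-neighbour lookup keeps the propagated labels binary.
    coords = np.rint(deformation).astype(int)
    for d, size in enumerate(label_atlas.shape):
        coords[d] = np.clip(coords[d], 0, size - 1)
    return label_atlas[coords[0], coords[1], coords[2]]

# Identity deformation: the propagated segmentation equals the original.
label = np.zeros((8, 8, 8), dtype=np.uint8)
label[2:5, 2:5, 2:5] = 1
identity = np.array(np.meshgrid(np.arange(8), np.arange(8), np.arange(8),
                                indexing="ij"), dtype=np.float32)
propagated = propagate_segmentation(label, identity)
print(np.array_equal(propagated, label))  # → True
```

With a real non-rigid transformation T_A1, the deformation array would hold the atlas-space coordinates produced by evaluating T_A1 at every voxel of the target image.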
  • Quantification of disease progression in RA is an example of making use of image biomarkers from longitudinal MRI data.
  • Other diseases in which a similar approach is desirable include Alzheimer's disease (AD), multiple sclerosis (MS), and osteoarthritis (OA).
  • volumetric MR images can be acquired longitudinally in order to assess disease progression or response to treatment.
  • Existing techniques for analysing longitudinal images often compare each image in the temporal sequence to the baseline, and hence extract image features to study progression of the disease over time. This pair-wise analysis of the images does not make full use of the time-domain information available. Longitudinal volumetric CT, just beginning to be widely available clinically, also has the potential to be applied in a similar way.
  • the present invention provides highly automated methods for the analysis of multiple structures of interest.
  • a method for automatically delineating bones from images is based on the segmentation propagation technique discussed above.
  • the method is particularly applicable to the complex bony anatomy of the hand, wrist and foot, but is also applicable to other joints such as the knee, rotator cuff and hip.
  • the method can be used to enable the volume of bones to be measured, and also the relative position and orientation of bones to be assessed.
  • the present invention provides a method for delineating a bone in an image comprising: constructing an atlas from a single subject; approximately delineating the bone from the image using rigid registration; accurately delineating the bone from the image using inter-subject non-rigid registration followed by segmentation propagation.
  • the step of constructing an atlas may comprise manually segmenting the bone of interest from a reference image.
  • the step of approximately delineating the bone may comprise identifying a region of interest around the bone of interest in the atlas, and registering voxels in the region of interest of the image to the reference image.
  • the step of approximately delineating the bone may comprise: rigidly registering the region of interest in the atlas to the image, then rigidly registering the region of interest in the image to follow-up images in a temporal series.
  • the step of approximately delineating the bone may comprise using the correlation coefficient CC as a similarity measure where, for a set of n data points (x_i, y_i),

    CC = Σ_{i=1..n} (x_i − x̄)(y_i − ȳ) / √( Σ_{i=1..n} (x_i − x̄)² · Σ_{i=1..n} (y_i − ȳ)² )

    where x̄ and ȳ denote the means of the x_i and y_i respectively.
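  • As a quick numerical illustration of the correlation coefficient used as a similarity measure (a sketch; the function name is ours, not the patent's):

```python
import numpy as np

def correlation_coefficient(x, y):
    # CC = sum((x_i - x_bar)(y_i - y_bar))
    #      / sqrt(sum((x_i - x_bar)^2) * sum((y_i - y_bar)^2))
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return float((dx * dy).sum() / np.sqrt((dx * dx).sum() * (dy * dy).sum()))

# Intensities related by a positive linear map are perfectly correlated.
a = np.array([1.0, 2.0, 3.0, 4.0])
print(round(correlation_coefficient(a, 2 * a + 5), 6))  # → 1.0
```

In the registration, x and y would be the intensities of corresponding voxels in the two images within the region of interest.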
  • the step of accurately delineating the bone may comprise a four stage registration process comprising: an affine registration with 12 degrees of freedom; a cubic B-spline non-rigid registration with a control point spacing corresponding to 20 pixels in a high resolution plane; a cubic B-spline non-rigid registration with a control point spacing corresponding to 10 pixels in a high resolution plane; and a cubic B-spline non-rigid registration with a control point spacing corresponding to 5 pixels in a high resolution plane.
  • Each registration stage may use the result of the previous stage as the starting estimate.
  • the final deformation field may be used to propagate the manual segmentation in the reference image to obtain a boundary of the bone of interest in the image.
  • CC and normalised mutual information may be used as similarity measures in the affine and non-rigid registrations respectively. Simulated annealing and steepest gradient descent may be used to optimise the similarity measures in the affine and non-rigid registration respectively.
  • the present invention also provides a method of identifying a lesion in a follow-up image taken at a later time than a baseline image comprising: delineating a bone in both the baseline and the follow-up image using the method recited above; transforming the delineated images using the results of the rigid registration; generating a difference image by subtracting the baseline image from the transformed follow-up image; applying Otsu's thresholding to the difference image to identify a high-intensity lesion in the image.
  • the method may further comprise the step of analysing the difference image to give the volume of the bone lesion.
  • the number of voxels in the region determined to be bone lesion may be used to ascertain the volume of the bone lesion.
  • the method may be used to identify the region of a bone lesion for more than one image, each image taken at a different time point for a single subject.
  • Each difference image may be thresholded by the mean plus the standard deviation of the intensity histogram.
  • the result may be thresholded again using Otsu's algorithm to obtain a thresholded difference image of the bone for each time point.
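  • The two-stage thresholding described above (mean plus standard deviation, then Otsu's algorithm on what remains) can be sketched as follows; the synthetic data and the small self-contained Otsu implementation are illustrative assumptions, not values or code from the study:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # Otsu's method: choose the threshold maximising the between-class
    # variance of the two resulting intensity classes.
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                # probability of class 0 (below t)
    w1 = 1.0 - w0                    # probability of class 1 (above t)
    m = np.cumsum(p * centers)       # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (m[-1] * w0 - m) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

def threshold_difference_image(diff):
    # Stage 1: keep voxels above mean + one standard deviation.
    first = diff[diff > diff.mean() + diff.std()]
    # Stage 2: apply Otsu's algorithm to the surviving intensities.
    return diff > otsu_threshold(first)

# Synthetic difference image: Gaussian noise plus a small bright "lesion".
rng = np.random.default_rng(0)
diff = rng.normal(0.0, 1.0, size=(16, 16, 16))
diff[4:6, 4:6, 4:6] = 10.0
mask = threshold_difference_image(diff)
print(mask.sum())  # the bright block survives both thresholds
```

The second threshold separates the genuine high-intensity lesion voxels from the upper tail of the noise distribution that survives the first threshold.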
  • the thresholded difference images at all of the time points may be summed together to obtain a summed thresholded difference image of the bone.
  • the summed image may be filtered by morphological "opening" and morphological "dilation" operations.
  • the result may be used as a mask to segment the summed thresholded difference image of the talus bone to obtain a region of bone lesion.
  • a further morphological "dilation" may be performed on the region of bone lesion to obtain a final region of bone lesion for a single subject.
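  • A minimal sketch of the opening-then-dilation filtering on a toy summed image. The isotropic 3 x 3 x 3 kernels used here are a simplification: the patent uses anisotropic opening kernels (figure 4) to respect the anisotropic voxel size.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_dilation

# Toy summed thresholded difference image: one coherent candidate region
# and one isolated voxel (e.g. a misregistration or thresholding error).
summed = np.zeros((12, 12, 12), dtype=bool)
summed[3:7, 3:7, 3:7] = True
summed[10, 10, 10] = True

# Opening removes isolated lesions; dilation recovers the eroded shape.
opened = binary_opening(summed, structure=np.ones((3, 3, 3)))
mask = binary_dilation(opened, structure=np.ones((3, 3, 3)))

print(bool(opened[10, 10, 10]))  # → False (isolated voxel removed)
print(bool(mask[3, 3, 3]))       # → True (candidate region retained)
```

The resulting mask can then be applied to the summed thresholded difference image to segment the region of bone lesion, as described above.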
  • the final regions of bone lesions found for a number of subjects may be used to obtain the regions of bone lesion at more than one time point for more than one subject.
  • the region of bone lesion in each subject may be transformed into a reference coordinate space of an atlas. This may be performed by using the results of an inter-subject non-rigid registration.
  • the region of bone lesion for all time points of all subjects may then be summed and labelled. Each region of bone lesion may then be transformed back to the co-ordinate space of the thresholded difference image of the bone.
  • a method for quantifying size, shape and brightness changes in images has wide applicability and can be used, for example, on images of joints, and images of the brain.
  • the method uses spatio-temporal segmentation.
  • the method is an integrated spatio-temporal analysis technique that incorporates information from all of a temporal series of images (more than 2) to directly extract 4D (position and intensity) image features in order to quantify temporal changes in bone diseases.
  • the invention comprises a method for identifying a particular feature in a region of interest in a series of images taken at different times, comprising: providing more than two difference images of the region of interest, each image taken at a different time point; mapping the difference images to a 5-dimensional feature space; segmenting the feature space using a mean shift algorithm to identify the particular feature in the region of interest at each of the plurality of time points; and summing all of the particular features from all of the images at the plurality of time points.
  • the 5-dimensional feature space may comprise three spatial dimensions, one intensity dimension and one time dimension.
  • the step of providing the difference images may comprise: constructing an atlas (from one or more subjects) for use in delineating regions of interest from the serial images, and defining a common co-ordinate system for analysis; determining the regions of interest using non-rigid registration and segmentation propagation to automatically delineate the regions of interest from the serial images using a pre-segmented atlas image from a reference subject; generating difference images by subtracting the baseline image from the registered follow-up images for later analysis.
  • the region of interest may be a bone.
  • the particular feature may be a bone lesion.
  • the region of interest may be the brain, and the particular feature may be a brain lesion.
  • the 5-dimensional feature space may be built by mapping the position, intensity and time values of each voxel to a point in the feature space. Where the images acquired have multiple intensities (e.g. proton density and T2 weighted), these can all be used in the feature space.
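  • Building the feature space can be sketched as follows (function and parameter names are illustrative assumptions; in practice each dimension would also be scaled by its kernel bandwidth, and multi-spectral images would add further intensity columns):

```python
import numpy as np

def build_feature_space(volumes, spacing=(1.0, 1.0, 1.0)):
    # Map every voxel of a temporal series of 3-D difference images to a
    # point (x, y, z, intensity, time) in a 5-D feature space.
    points = []
    for t, vol in enumerate(volumes):
        zz, yy, xx = np.indices(vol.shape)
        points.append(np.stack([xx.ravel() * spacing[0],
                                yy.ravel() * spacing[1],
                                zz.ravel() * spacing[2],
                                vol.ravel().astype(float),
                                np.full(vol.size, float(t))], axis=1))
    return np.concatenate(points, axis=0)

# Two 2x2x2 difference images at time points 0 and 1 → 16 feature points.
vols = [np.zeros((2, 2, 2)), np.ones((2, 2, 2))]
fs = build_feature_space(vols)
print(fs.shape)  # → (16, 5)
```

Each row of the returned array is one feature point; the mean shift segmentation then operates on these rows directly.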
  • the step of segmenting the feature space may comprise studying the underlying probability density function of the points using a mean shift algorithm.
  • the step of segmenting the feature space results in 4-dimensional segmentations of the particular feature that extend across multiple time points.
  • the temporal aspect improves sensitivity to real changes, and reduces the likelihood of false positive and negative lesions.
  • the method may be repeated for a plurality of subjects in order to identify the particular feature in a region of interest in a plurality of images taken at different times for the plurality of subjects.
  • the method may then further comprise transforming the results for each subject to the image space of an atlas to identify candidate bone lesion regions.
  • the method may further comprise summing all of the particular features of all of the subjects, and carrying out a connected component analysis.
  • the step of transforming the results for each subject to the image space of an atlas may comprise performing inter-subject non-rigid registration to enable the mapping of bone lesions of different subjects to a common reference co-ordinate system defined by an atlas bone. This allows the comparison of localised bone lesion volume changes in anatomically corresponding locations between different subjects or groups.
  • the number of voxels in each candidate bone lesion region may be counted in order to calculate the volume of the candidate bone lesion regions.
  • the advantages of this method include that localised temporal changes are delineated as 4D segmentations so that changes identified in one follow-up image are directly related to changes identified in another follow-up image, and that results are more consistent and more robust to errors (due to registration) or image artefacts present in one follow-up image.
  • Figure 1 shows the principles of segmentation propagation
  • Figure 2 shows a process according to an embodiment of the invention to obtain the regions of bone lesion found in the talus bone at all time points of one subject.
  • Figure 3 shows mathematical morphological operators applied after each aggregation stage in a method according to an embodiment of the invention.
  • Figure 4 shows two-dimensional cases of the kernels used in figure 3.
  • Figure 5 shows a process according to an embodiment of the invention to obtain the regions of bone lesion found in the talus bone at all time points of all subjects.
  • Figure 6 shows results of a rigid registration of serial images from the talus bone of a subject.
  • Figure 7 shows graphs of volume of automatic and manual segmentations of talus bones against time.
  • Figure 8 shows intermediate results of the first stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system.
  • Figure 9 shows intermediate results of the second stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system.
  • Figure 10 shows the surface rendering of candidate bone lesion regions overlaid with the semi-transparent talus bone in the atlas.
  • Figure 11 shows a graph of average bone lesion volume of male and female subjects in a particular region against time.
  • Figure 12 shows a process according to an embodiment of the invention to obtain the bone lesions of the talus from the 5 time points of one subject using spatio-temporal segmentation.
  • Figure 13 shows a process according to an embodiment of the invention to obtain the candidate bone lesion regions from the bone lesions of all the subjects.
  • Figure 14 shows a comparison of the bone lesion segmented by OT and ST at different time points of the talus bone of subject 9.
  • Figure 15 shows a graph of average bone lesion volume of male and female subjects in a region obtained using the method of the present invention against time.
  • Figure 16 shows the axial, coronal and sagittal views of an MR image of an ankle of a rat in which the bone lesions that occur at the same location in more than two subjects are highlighted.
  • Figure 17 shows a graph of the results of the size of a region of bone lesion of all subjects against time.
  • Figure 18 shows a graph of the results of the size of a region of bone lesion of male and female subjects against time.
  • Figure 19 shows the difference images of all the time points of the brain of an MS patient (left) and the highlighted MS lesions segmented by spatio-temporal segmentation (right).
  • a typical bone lesion appears as a high-intensity region in T1-weighted images.
  • a “candidate bone lesion region” is defined as a region in the bone where bone lesions are likely to occur.
  • the data set consisted of a group of 12 Lewis rats (subjects 1 - 12; 6 male, 6 female).
  • the RA-inducing agent was proteoglycan polysaccharide (PG-PS) from Streptococcus pyogenes.
  • MR scans of the right ankle were taken at day -12, day -4, day +3, day +10, day +14 and day +21.
  • MR scans of the left ankle were taken at day -12 and day -4 to provide a control.
  • An atlas is a reference image with labelled structures.
  • the baseline image of subject 1 was randomly chosen to be the reference image.
  • An expert manually segmented the talus bone in the reference image, which took about 3 hours.
  • the reference image plus the manually segmented bone formed the atlas.
  • the resulting deformation field was applied to the manually segmented talus bone to obtain the boundary of the bone in that image. All automatic segmentations were derived from these two manual segmentations.
  • the joint is composed of multiple bones, each of which is rigid, but which move with respect to one another in a non-rigid fashion.
  • a region of interest (ROI) around the talus bone was identified in the atlas, and only voxels in this region were used in the registration. This improved the registration accuracy, as within the ROI, the rigid body assumption was valid.
  • the talus bones in all the time points were roughly delineated by using the results of a two-stage registration: (a) the rigid registration of the ROI in the atlas to each baseline image and then (b) the rigid registration of the ROI in the baseline image to the follow-up images in the temporal series.
  • Correlation coefficient (CC) was used as the similarity measure; for a set of n data points (x_i, y_i), CC was computed as the ratio of the covariance of the x_i and y_i to the product of their standard deviations.
  • the first stage was an affine registration with 12 degrees of freedom to compensate for the global motion and gross differences between source and target images.
  • the second, third and fourth stages were cubic B-spline non-rigid registrations with control point spacings of 1.172 mm, 0.586 mm and 0.293 mm respectively to compensate for local deformation. These control point spacings corresponded to 20 pixels (1.172 mm), 10 pixels (0.586 mm), and 5 pixels (0.293 mm) in the high resolution plane.
  • the larger control point spacing allowed the modelling of global non-rigid deformation and the smaller control point spacing allowed the modelling of highly local deformation.
  • Each registration stage used the result of the previous one as the starting estimate, with the region of interest for each stage obtained by dilating the segmentation calculated in the previous stage.
  • the final deformation field was used to propagate the manual segmentation in the reference image to obtain a boundary of the corresponding bone in the target image.
  • CC was chosen for the affine registration for its larger capture range, and normalised mutual information for the non-rigid registration for its handling of intensity differences due to inter-subject variability.
  • Simulated annealing and steepest gradient descent were used to optimise the similarity measures in the affine and non-rigid registration respectively.
  • the running time of segmentation propagation for a bone was 2 - 4 hours.
  • the bone lesions were mapped to the reference co-ordinate system of the atlas using the results of the inter-subject non-rigid registration from the accurate delineation of the talus bone.
  • the method is described in the context of the talus bone, but can equally be applied to other bones, or to multiple bones simultaneously.
  • Figure 2 shows the process to obtain the regions of bone lesion found in the talus bone at all time points of one subject.
  • the difference image of the talus bone was thresholded by the mean plus the standard deviation of the intensity histogram.
  • the result was thresholded again by using Otsu's algorithm to obtain the thresholded difference image of the talus bone, which mainly consisted of voxels of bone lesion.
  • the thresholded difference images of the talus bone at all the time points of a subject were summed together to obtain the summed thresholded difference image of the talus bone.
  • a two-stage process was used to aggregate the bone lesions in the reference co-ordinate system: (a) the bone lesions from the same time points of different subjects were summed to generate candidate bone lesion regions for each time point; (b) those candidate bone lesion regions were summed to generate candidate bone lesion regions for all the time points.
  • Each stage is described in more detail below.
  • mathematical morphology was applied to the aggregated bone lesions, as shown in figure 3. Binary opening operators were used to remove isolated bone lesions or errors due to misregistration or the thresholding algorithm.
  • the kernels shown in figure 4 were constructed to avoid removing excessive voxels in the y-direction due to the anisotropic voxel size.
  • a binary dilation operator of kernel size 3 x 3 x 3 was then used to recover the shapes of candidate bone lesion regions in the input bone lesions.
  • the results were a time series of five binary images containing candidate bone lesion regions of all the subjects.
  • Figure 5 shows the process to obtain the regions of bone lesion found in the talus bone at all time points of all subjects.
  • the regions of bone lesion in all the subjects had to be in the same co-ordinate space. This was achieved by transforming the regions of bone lesion to a reference or "atlas" co-ordinate space common to all bones.
  • Each region of bone lesion was then transformed back to the co-ordinate space of the thresholded difference image of the talus bone (i.e. the baseline image of each subject).
  • the candidate bone lesion regions in the image space of the atlas were transformed to the image space of the baseline image of each subject by applying the non-rigid deformation fields from the automatic delineation. Misregistration or interpolation artefacts could result in a high signal in the difference image at the bone boundary.
  • the automatic segmentations were eroded by a kernel of size 3 x 3 x 3 so that voxels around the edge of the automatic segmentation were ignored. The eroded segmentations were used as masks to apply to the transformed candidate bone lesion regions.
  • An example of the results of rigid registrations is shown in figure 6.
  • the top row shows the baseline image (at day -4), the middle row shows the registered follow-up image (at day +21), the bottom row shows the difference image generated by subtracting the baseline image from the transformed follow-up image.
  • Each row shows the transaxial, coronal and sagittal views (from left to right) of the image.
  • the talus bone is within the white rectangle.
  • the black arrows indicate the location of a bone lesion at day +21.
  • Figure 8 shows intermediate results of the first stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system. Images on the top row show the bone lesions obtained after summing the thresholded bone lesions at day +21. Images on the bottom row show the bone lesions obtained after applying the mathematical morphology to the aggregated bone lesions on the top row. The transaxial, coronal and sagittal views of the same image are shown from left to right. The brighter the bone lesion voxel, the more subjects in which a bone lesion was found in that voxel.
  • Figure 9 shows intermediate results of the second stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system.
  • Images on the top row show the bone lesions obtained after summing the results of the first stage of the aggregation.
  • Images on the bottom row show the bone lesions obtained after applying the mathematical morphology to the aggregated bone lesions on the top row.
  • the transaxial, coronal and sagittal views of the same image are shown from left to right. The brighter the bone lesion voxel, the more subjects in which a bone lesion was found in that voxel.
  • the average bone lesion volume in male and female subjects in candidate bone lesion region 24 is shown in figure 11.
  • the error bars in the graphs indicate the standard deviation of the average bone lesion volume at each time point. Increase in bone lesion volume in region 24 over time can be observed.
  • the invention automatically delineates multiple bones from serial MR images of joints, and quantifies their lesion load. It can be applied to a single bone in a joint, or multiple bones simultaneously.
  • the user interaction required for this technique may be less than 2 minutes for an image, compared to about 8 hours per image for manual segmentation.
  • the computational time is no more than 8 hours per image, and requires no supervision.
  • the invention saves hours of human labour in the analysis of serial MR images of joints in RA studies.
  • a 5D feature space is built by mapping the (x, y, z, intensity, time) value of each voxel to a point in the feature space.
  • Where the images acquired have multiple MRI intensities (e.g. proton density and T2-weighted), these can all be used in the feature space.
  • the feature space is then segmented by studying the density of the points using the mean shift technique (D. Comaniciu et al., "Mean Shift: A Robust Approach Toward Feature Space Analysis", IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603-619, 2002) and a Gaussian kernel, which has not previously been applied to this sort of data.
  • the results are 4D segmentations of bone lesions that extend across multiple time points (see figure 12). The temporal aspect improves sensitivity to real changes, and reduces the likelihood of false positive and false negative lesions.
  • Mean shift is a robust non-parametric feature space analysis technique that has been applied to the segmentation of grey level and colour images, the spatio-temporal segmentation of video sequences, and the delineation of structures in brain MRI.
  • Mean shift has advantages over other clustering techniques by not requiring a prior knowledge of the number of clusters, and by not making any assumption about the shape of the clusters in the feature space.
  • Density estimation-based non-parametric clustering techniques regard a feature space as the empirical probability density function (p.d.f.) of the represented parameters. Dense regions (i.e. regions with densely packed points) in the feature space correspond to local maxima of the p.d.f., which are the modes of the unknown density.
  • Mean shift locates these local maxima or modes so that clusters associated with them can be delineated.
  • Other reasons for using mean shift are that (1) the method is simple to use because the result only depends on the window width of a given kernel in each dimension and there is the potential for an autonomous image segmentation algorithm by combining a window width selection technique with prior knowledge about the images; (2) the feature space used for the 3D grey level segmentation can be easily extended to include the dimension in time by giving each voxel a time value depending on which follow-up image it belongs to. Furthermore, the feature space can also be extended to include data from multi-spectral images, which has been used to classify tissues in the MCP joints of the hands in RA.
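  • A minimal Gaussian-kernel mean shift, in the spirit of Comaniciu et al., can be sketched as follows; this naive O(n²) implementation and the simple mode-merging rule are illustrative assumptions, not the optimised algorithm:

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50, tol=1e-5):
    # Gaussian-kernel mean shift: iteratively move each point towards the
    # weighted mean of all points, with weights from a Gaussian kernel.
    # Points that converge to the same density mode share a cluster.
    points = np.asarray(points, dtype=float)
    shifted = points.copy()
    for _ in range(n_iter):
        moved = 0.0
        for i in range(len(shifted)):
            d2 = ((points - shifted[i]) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            new_p = (w[:, None] * points).sum(axis=0) / w.sum()
            moved = max(moved, np.abs(new_p - shifted[i]).max())
            shifted[i] = new_p
        if moved < tol:
            break
    # Group points whose modes lie within one bandwidth of each other.
    labels = np.full(len(points), -1, dtype=int)
    modes = []
    for i, p in enumerate(shifted):
        for k, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth:
                labels[i] = k
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels

# Two well-separated blobs in a toy 2-D feature space → two clusters.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                 rng.normal(5.0, 0.1, (20, 2))])
labels = mean_shift(pts, bandwidth=1.0)
print(len(set(labels.tolist())))  # → 2
```

Note that the number of clusters is never specified in advance: it emerges from the bandwidth and the density of the points, which is the property exploited for the 5D (x, y, z, intensity, time) feature space.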
  • inter-subject non-rigid registration enables the mapping of bone lesions of different subjects to a common reference coordinate system defined by an atlas bone. This allows the comparison of localised bone lesion volume changes in anatomically corresponding locations between different subjects or groups (see figure 13).
  • the segmented bone lesions can be thresholded to produce binary images in which a voxel value of 1 indicates bone lesion.
  • bone lesions that are found in only one time point can be removed by thresholding (using the value 2) to ensure that the final candidate bone regions consist of bone lesions that appear in two or more time points.
  • candidate bone lesion regions, which were defined in the co-ordinate system of the atlas within the talus bones, are given by the connected components in the image.
  • the candidate bone lesion regions in the image space of the atlas can be transformed to the image space of the base-line image of each subject by applying the non-rigid deformation fields obtained from the determination of ROI. Since the 4D segmentations of bone lesions might not completely overlap with the candidate bone lesion regions, the 4D segmentations can be assigned to the regions in the following way: (1) If a bone lesion voxel is inside a region, it is assigned to that region; (2) If a bone lesion voxel is outside any region, it can be assigned to the region that contains the highest number of voxels from the same 4D segmentation as that voxel. Otherwise, the voxel is discarded.
  • ST is used to refer to the results and the method of the present invention
  • OT is used to refer to previous results.
  • the lesion segmentations of subject 9 from OT and ST were more similar at the time point day +21 than at the time point day -4, as shown in Fig. 14.
  • the images are shown in coronal views with segmentations highlighted. In each sub-figure, the segmentation on the left was obtained by OT, and the segmentation on the right was obtained by ST.
  • the mean similarity indices of the lesion segmentations of all the subjects generated by OT and ST at the time points day +21 and day -4 were 0.756 (±0.061) and 0.498 (±0.217) respectively. Since the volume of bone lesion at day +21 was larger than that at day -4 according to the disease model, the results showed that the performance of OT was close to the performance of ST when the volume of bone lesion was large.
  • Figure 16 shows the result of applying the method to locate common sites of bone lesion in the talus bone of an animal model.
  • Figure 16 shows the axial, coronal and sagittal views of an MR image of an ankle of a rat. The bone lesions that occur at the same location in more than 2 subjects are highlighted.
  • Graphs of size of bone lesion against time for a region of bone lesion in the talus bone are shown in figures 17 and 18.
  • Figure 17 shows a graph of size of a region of bone lesion of all the subjects against time.
  • Figure 18 shows a graph of the size of a region of bone lesion of male and female subjects against time. Local changes, instead of global changes, in the bone were measured. Furthermore, changes of smaller than 1% of the volume of the bone were detected. This sensitivity to small lesions has not previously been demonstrated from in vivo imaging, and preliminary validation indicates good agreement with histology.
  • the invention can also be applied to study kinematics of bone movement in 4D (3D + time) MR images of joints. Multiple bones in a joint are segmented automatically in each time frame. Animation of the bone movement in 3D can be generated from 3D surface rendering of multiple bones in all the time frames.
  • the intensity analysis can be carried out using multi-spectral data, as well as images with just a single intensity per voxel.
  • the invention can also be used to automatically line up images in standard planes for use in visual scoring (e.g. OMERACT RAMRIS), rather than relying on radiographic technique to achieve this.
  • the invention can also be applied to other parts of the body that can be studied using longitudinal MRI, and which exhibit subtle changes in structure volumes and intensities, such as the brain in Multiple Sclerosis (MS).
  • the left side of figure 19 shows the different images of all the time points of the brain for a MS patient and the right side shows the MS lesions segmented by the spatio-temporal segmentation.
  • Previous methods for quantifying longitudinal changes in MS include (a) the measurement of brain atrophy; (b) the assessment of MS lesions by calculating their total volume or counting their total number, which involve manual or semi-automatic segmentations. By using spatio-temporal segmentation, MS lesions can be segmented automatically.
  • non-rigid registration of the follow-up images and the baseline image generates deformation fields, which can be used to measure the brain atrophy by calculating the Jacobians of the deformation fields.
  • the atrophy rates of the brain shown in figure 19 are found to be 0.997, 0.990, 0.991, 0.987 and 0.985 from the first time point to the last time point. This demonstrates the potential of the spatio-temporal segmentation and non-rigid image registration in quantifying longitudinal changes in MS brains.
  • the current invention can also be applied to images of other modalities, such as CT images.
  • the invention can also be applied to the analysis of single time point or longitudinal Dynamic Contrast Enhanced (DCE) MRI images of joints. Multiple joints, and corresponding regions of synovium, are segmented automatically from the DCE-MRI images and used to quantify contrast uptake parameters for those joints either within one dynamic sequence, or between dynamic sequences acquired at separate imaging investigations.
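The brain atrophy measurement from the Jacobians of the deformation fields, described in the points above, can be sketched as follows. This is an illustrative NumPy computation rather than the patent's own code; the component-first displacement-field layout and voxel-unit spacing are assumptions.

```python
import numpy as np

def jacobian_determinant(displacement, spacing=(1.0, 1.0, 1.0)):
    """Per-voxel Jacobian determinant of a 3D deformation field.

    displacement: array of shape (3, Z, Y, X) holding the displacement
    u(x) in voxel units; the deformation is phi(x) = x + u(x).
    Values < 1 indicate local shrinkage (atrophy), > 1 local expansion.
    """
    ndim = 3
    # Gradient of each displacement component along each spatial axis
    grads = [np.gradient(displacement[k], *spacing) for k in range(ndim)]
    jac = np.empty(displacement.shape[1:] + (ndim, ndim))
    for k in range(ndim):
        for a in range(ndim):
            # J_ka = d(phi_k)/d(x_a) = delta_ka + d(u_k)/d(x_a)
            jac[..., k, a] = grads[k][a] + (1.0 if k == a else 0.0)
    return np.linalg.det(jac)

# Uniform 1% contraction along one axis: u_z = -0.01 * z, so det J = 0.99.
zz = np.indices((5, 5, 5), dtype=float)[0]
disp = np.stack([-0.01 * zz, np.zeros_like(zz), np.zeros_like(zz)])
detJ = jacobian_determinant(disp)
```

A mean determinant below 1 corresponds to shrinkage, which is how values such as the 0.997 to 0.985 atrophy rates quoted above would be interpreted.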

Abstract

The present invention relates to image processing and particularly to the automatic quantification of changes in images, such as changes in MR or CT images of joints or changes in MR images of the brain. Highly automated methods for the analysis of multiple structures of interest are provided. Methods for automatically delineating bones from images and methods for quantifying size, shape and brightness changes in images are also provided. The invention has wide applicability and can be used, for example, on images of joints, and images of the brain.

Description

AUTOMATIC QUANTIFICATION OF CHANGES IN IMAGES
FIELD OF THE INVENTION
The present invention relates to image processing and particularly to the automatic quantification of changes in images, such as changes in multiple bones in MR or CT images of joints or changes in MR images of the brain.
BACKGROUND TO THE INVENTION
Image based biomarkers have great potential for speeding up the drug discovery and development pipeline because imaging is non-invasive, can be used repeatedly, and can be used to localise changes in the body due to disease and therapy. Frequently, imaging changes involve changes in the size, shape and brightness of image features. It is frequently desirable to track multiple image features simultaneously, though the features of interest are known in advance. In order to extract maximum value from images, it is necessary to quantify both size/shape changes and brightness changes from all features of interest. Furthermore, the locations of features of interest in the images can vary substantially between time points, and the relative position of features of interest can also change. This normally confounds techniques to track these changes across time points. In the brain, the organ in which image-based biomarkers are most mature, the skull ensures that the positioning changes between time points are limited to a rigid body transformation, and techniques for determining rigid body transformations between brain scans are well established. In most other parts of the body, however, more complicated transformations are required to correct for repositioning.
The monitoring of image changes over time in joints provides important information on the progression of important diseases such as rheumatoid arthritis (RA) and osteoarthritis (OA), and on the effect of drug treatment. For example, MRI allows visualisation of relevant changes to the joint including cartilage thinning in OA, and bone erosion, bone oedema and synovial inflammation in RA. CT allows visualization of bone destruction due to erosions in RA, and joint and bone damage in OA.
In order to quantify bone lesions, manual segmentations of images are often carried out to measure bone volumes. However, this is a tedious and time consuming task. An observer may take several hours to accurately segment a bone from an image. In addition, since the volume of a bone is a global measure and there may be bone growth or bone remodelling in some part of the bone between images, manual segmentation may not accurately reflect local changes due to the bone lesion. Furthermore, the intra-observer variability of manual segmentations is of the order of a few percent (around 2%). This is not sufficient to reliably measure changes in bone lesions that are about 1% of the volume of the bone. Furthermore, the amount of change in the bone volume may be comparable to the geometric distortion caused by field inhomogeneity, gradient miscalibration, and other errors relating to the stability of the MR scanner. Accordingly, the technique of manual segmentation is limited in its use and accuracy.
The Outcome Measures in Rheumatoid Arthritis Clinical Trials (OMERACT) rheumatoid arthritis magnetic resonance image scoring system (RAMRIS) has been suggested for the evaluation of inflammatory and destructive changes in RA hands and wrists (F. McQueen et al. "OMERACT Rheumatoid Arthritis Magnetic Resonance Imaging Studies. Summary of OMERACT 6 MR Imaging Module," J Rheumatol., vol. 30, no. 6, pp. 1387-1392, June 2003). With this system a set of MR protocols and manual scoring methods are used to assess bone erosions, synovitis and bone oedema. This approach relies heavily on manual visual assessment and expert knowledge about anatomy of the joints.
More sophisticated image analysis has been used in the analysis of joint images. Segmentation propagation is a technique that makes use of the deformation field calculated from the registration of two images to propagate a region of interest from one image to another one. Segmentation propagation has been used to quantify small cerebral ventricular volume changes in treated growth hormone patients (M. Holden et al. "Quantification of small cerebral ventricular volume changes in treated growth hormone patients using nonrigid registration," IEEE Trans. Med. Imaging, vol. 21, no. 10, pp. 1292-1301, Oct. 2002). Figure 1 shows the principle of segmentation propagation. IA represents the atlas (source) image with a segmented structure, RA, defined by a connected set of boundary points at voxel locations, shown as dots and lines. I1 and I2 represent the baseline and follow-up images of a subject. Non-rigid registration of IA to I1 and I2 produces the transformations TA1 and TA2. Image IA is transformed by TA1 and TA2 into the space of I1 and I2, which results in propagated structures R1 and R2. Because a transformation generally results in translations of boundary points by a non-integer number of voxels, the transformed set of boundary points does not, in general, coincide with the voxel locations of I1 and I2.
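The propagation step illustrated in Figure 1 can be sketched minimally as follows. The sketch assumes the deformation field is stored as per-voxel source (atlas) coordinates, and uses nearest-neighbour interpolation so the propagated labels stay integer-valued; the function and variable names are illustrative, and a 2D example is used for brevity.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_segmentation(atlas_labels, deformation):
    """Propagate an atlas segmentation into a target image space.

    atlas_labels: integer label image in the atlas (source) space.
    deformation: array of shape (ndim, *target_shape) giving, for each
    target voxel, the corresponding (non-integer) source coordinates,
    i.e. the transformation mapping target space back to atlas space.
    order=0 (nearest neighbour) keeps the labels integer-valued.
    """
    return map_coordinates(atlas_labels, deformation, order=0)

# Illustrative 2D example: a pure translation by one voxel in x.
labels = np.zeros((4, 4), dtype=np.int32)
labels[1:3, 1:3] = 1                      # a 2x2 "bone" region in the atlas
yy, xx = np.mgrid[0:4, 0:4].astype(float)
field = np.stack([yy, xx - 1.0])          # target voxel (y, x) samples atlas (y, x-1)
propagated = propagate_segmentation(labels, field)
```

Here the propagated region appears shifted by one voxel in the target space, mirroring how R1 and R2 are obtained from RA via TA1 and TA2.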
Quantification of disease progression in RA is an example of making use of image biomarkers from longitudinal MRI data. Other diseases in which a similar approach is desirable include Alzheimer's disease (AD), multiple sclerosis (MS), and osteoarthritis (OA). In all cases, volumetric MR images can be acquired longitudinally in order to assess disease progression or response to treatment. Existing techniques for analysing longitudinal images often compare each image in the temporal sequence to the baseline, and hence extract image features to study progression of the disease over time. This pair-wise analysis of the images does not make full use of the time-domain information available. Longitudinal volumetric CT, just beginning to be widely available clinically, also has the potential to be applied in a similar way.
Therefore, there is a need for an improved technique for delineating bones from MR or CT images. There is also a need for an improved technique for quantifying changes in images due to therapy and diseases, for example RA, OA, AD and MS.
SUMMARY OF THE INVENTION
The present invention provides highly automated methods for the analysis of multiple structures of interest.
In a first embodiment of the present invention, there is provided a method for automatically delineating bones from images. The method is based on the segmentation propagation technique discussed above.
The method is particularly applicable to the complex bony anatomy of the hand, wrist and foot, but is also applicable to other joints such as the knee, rotator cuff and hip.
The method can be used to enable the volume of bones to be measured, and also the relative position and orientation of bones to be assessed. The present invention provides a method for delineating a bone in an image comprising: constructing an atlas from a single subject; approximately delineating the bone from the image using rigid registration; accurately delineating the bone from the image using inter-subject non-rigid registration followed by segmentation propagation.
The step of constructing an atlas may comprise manually segmenting the bone of interest from a reference image.
The step of approximately delineating the bone may comprise identifying a region of interest around the bone of interest in the atlas, and registering voxels in the region of interest of the image to the reference image.
The step of approximately delineating the bone may comprise: rigidly registering the region of interest in the atlas to the image, then rigidly registering the region of interest in the image to follow-up images in a temporal series.
The step of approximately delineating the bone may comprise using the correlation coefficient CC as a similarity measure where, for a set of n data points (xi, yi),

CC = Σi (xi − x̄)(yi − ȳ) / √( Σi (xi − x̄)² · Σi (yi − ȳ)² ).

Alternatively, other similarity measures, such as the normalised mutual information, can also be used. Simulated annealing may be used to optimise the similarity measure.
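For illustration, the correlation coefficient defined above can be computed directly; this is a minimal NumPy sketch, not the patent's implementation.

```python
import numpy as np

def correlation_coefficient(x, y):
    """Correlation coefficient CC between two intensity vectors.

    x, y: 1D sequences of voxel intensities sampled at corresponding
    positions in the two images (e.g. inside the region of interest).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()   # deviations from the mean intensity
    dy = y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))
```

CC is +1 for perfectly linearly related intensities and -1 for inverted ones, which is why it suits same-modality rigid and affine registration.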
The step of accurately delineating the bone may comprise a four stage registration process comprising: an affine registration with 12 degrees of freedom; a cubic B-spline non-rigid registration with a control point spacing corresponding to 20 pixels in a high resolution plane; a cubic B-spline non-rigid registration with a control point spacing corresponding to 10 pixels in a high resolution plane; and a cubic B-spline non-rigid registration with a control point spacing corresponding to 5 pixels in a high resolution plane. Each registration stage may use the result of the previous stage as the starting estimate. The final deformation field may be used to propagate the manual segmentation in the reference image to obtain a boundary of the bone of interest in the image. CC and normalised mutual information may be used as similarity measures in the affine and non-rigid registrations respectively. Simulated annealing and steepest gradient descent may be used to optimise the similarity measures in the affine and non-rigid registration respectively.
The present invention also provides a method of identifying a lesion in a follow-up image taken at a later time than a baseline image comprising: delineating a bone in both the baseline and the follow-up image using the method recited above; transforming the delineated images using the results of the rigid registration; generating a difference image by subtracting the baseline image from the transformed follow-up image; applying Otsu's thresholding to the difference image to identify a high-intensity lesion in the image.
The method may further comprise the step of analysing the difference image to give the volume of the bone lesion. The number of voxels in the region determined to be bone lesion may be used to ascertain the volume of the bone lesion.
The method may be used to identify the region of a bone lesion for more than one image, each image taken at a different time point for a single subject. Each difference image may be thresholded by the mean plus the standard deviation of the intensity histogram. The result may be thresholded again using Otsu's algorithm to obtain a thresholded difference image of the bone for each time point. The thresholded difference images at all of the time points may be summed together to obtain a summed thresholded difference image of the bone. The summed image may be filtered by morphological "opening" and morphological "dilation" operations. The result may be used as a mask to segment the summed thresholded difference image of the talus bone to obtain a region of bone lesion. A further morphological "dilation" may be performed on the region of bone lesion to obtain a final region of bone lesion for a single subject. The final regions of bone lesions found for a number of subjects may be used to obtain the regions of bone lesion at more than one time point for more than one subject. The region of bone lesion in each subject may be transformed into a reference coordinate space of an atlas. This may be performed by using the results of an inter-subject non-rigid registration. The region of bone lesion for all time points of all subjects may then be summed and labelled. Each region of bone lesion may then be transformed back to the co-ordinate space of the thresholded difference image of the bone.
In another embodiment of the present invention, there is provided a method for quantifying size, shape and brightness changes in images. The invention has wide applicability and can be used, for example, on images of joints, and images of the brain. The method uses spatio-temporal segmentation. The method is an integrated spatio- temporal analysis technique that incorporates information from all of a temporal series of images (more than 2) to directly extract 4D (position and intensity) image features in order to quantify temporal changes in bone diseases.
Although many spatio-temporal analysis techniques have been proposed to analyse fMRI data, they are not directly applicable to the problem addressed by the present invention because (1) they focus on studying features that change in intensity, whereas the present invention identifies features that change in both size and intensity; (2) there is a limitation on the number of time points in longitudinal structural imaging studies such as in RA, which generally have fewer time points than fMRI studies; (3) there is no reference pattern in RA against which the intensity variation can be matched.
The invention comprises a method for identifying a particular feature in a region of interest in a series of images taken at different times, comprising: providing more than two difference images of the region of interest, each image taken at a different time point; mapping the difference images to a 5-dimensional feature space; segmenting the feature space using a mean shift algorithm to identify the particular feature in the region of interest at each of the plurality of time points; and summing all of the particular features from all of the images at the plurality of time points. The 5-dimensional feature space may comprise three spatial dimensions, one intensity dimension and one time dimension.
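The mapping to the 5-dimensional feature space can be sketched as follows; this is an illustrative NumPy version, and the names are not from the patent.

```python
import numpy as np

def build_feature_space(difference_images):
    """Map a temporal series of 3D difference images to a 5D feature space.

    difference_images: sequence of 3D arrays (one per follow-up time point).
    Returns an (N, 5) array of points (x, y, z, intensity, t), one per voxel.
    """
    points = []
    for t, img in enumerate(difference_images):
        img = np.asarray(img, dtype=float)
        zz, yy, xx = np.indices(img.shape)
        points.append(np.column_stack([
            xx.ravel(), yy.ravel(), zz.ravel(),   # three spatial dimensions
            img.ravel(),                          # one intensity dimension
            np.full(img.size, float(t)),          # one time dimension
        ]))
    return np.vstack(points)

# Three 2x2x2 difference images -> 24 feature-space points.
series = [np.zeros((2, 2, 2)), np.ones((2, 2, 2)), 2 * np.ones((2, 2, 2))]
features = build_feature_space(series)
```

In practice each dimension would also be scaled by its kernel window width before clustering, since the spatial, intensity and time axes have different units.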
The method detailed above may be used to provide the difference images of the region of interest. In a preferred embodiment, the step of providing the difference images may comprise: constructing an atlas (from one or more subjects) for use in delineating regions of interest from the serial images, and defining a common co-ordinate system for analysis; determining the regions of interest using non-rigid registration and segmentation propagation to automatically delineate the regions of interest from the serial images using a pre-segmented atlas image from a reference subject; generating difference images by subtracting the baseline image from the registered follow-up images for later analysis.
The region of interest may be a bone. The particular feature may be a bone lesion. Alternatively, the region of interest may be the brain, and the particular feature may be a brain lesion.
The 5-dimensional feature space may be built by mapping the position, intensity and time values of each voxel to a point in the feature space. Where the images acquired have multiple intensities (e.g. proton density and T2 weighted), these can all be used in the feature space.
The step of segmenting the feature space may comprise studying the underlying probability density function of the points using a mean shift algorithm. The step of segmenting the feature space results in 4-dimensional segmentations of the particular feature that extend across multiple time points. The temporal aspect improves sensitivity to real changes, and reduces the likelihood of false positive and negative lesions.
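A minimal mean shift iteration with a Gaussian kernel might look as follows. This is an illustrative mode-seeking sketch only (naive O(N²) form, with no speed-ups and no cluster pruning); the bandwidth plays the role of the per-dimension window width discussed above.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Shift each feature-space point to a local density mode.

    points: (N, D) array; bandwidth: Gaussian kernel width (scalar, or a
    length-D array giving one window width per feature dimension).
    Returns the converged positions; points sharing a mode form a cluster.
    """
    points = np.asarray(points, dtype=float)
    shifted = points.copy()
    h = np.asarray(bandwidth, dtype=float)
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            # Gaussian weights of all original points relative to p
            d2 = np.sum(((points - p) / h) ** 2, axis=1)
            w = np.exp(-0.5 * d2)
            # Move p to the weighted mean of its neighbourhood
            shifted[i] = (points * w[:, None]).sum(axis=0) / w.sum()
    return shifted

# Two well-separated 1D clusters collapse onto two modes.
pts = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
modes = mean_shift(pts, bandwidth=1.0)
```

Points that converge to the same mode are grouped into one cluster, which is how the 4D lesion segmentations extending across time points are obtained.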
The method may be repeated for a plurality of subjects in order to identify the particular feature in a region of interest in a plurality of images taken at different times for the plurality of subjects. The method may then further comprise transforming the results for each subject to the image space of an atlas to identify candidate bone lesion regions. The method may further comprise summing all of the particular features of all of the subjects, and carrying out a connected component analysis. The step of transforming the results for each subject to the image space of an atlas may comprise performing inter-subject non-rigid registration to enable the mapping of bone lesions of different subjects to a common reference co-ordinate system defined by an atlas bone. This allows the comparison of localised bone lesion volume changes in anatomically corresponding locations between different subjects or groups. The number of voxels in each candidate bone lesion region may be counted in order to calculate the volume of the candidate bone lesion regions.
The advantages of this method include that localised temporal changes are delineated as 4D segmentations so that changes identified in one follow-up image are directly related to changes identified in another follow-up image, and that results are more consistent and more robust to errors (due to registration) or image artefacts present in one follow-up image.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention will now be described with reference to the following figures in which:
Figure 1 shows the principles of segmentation propagation.
Figure 2 shows a process according to an embodiment of the invention to obtain the regions of bone lesion found in the talus bone at all time points of one subject.
Figure 3 shows mathematical morphological operators applied after each aggregation stage in a method according to an embodiment of the invention.
Figure 4 shows two-dimensional cases of the kernels used in figure 3. (a) is the upper half kernel in which all the voxels at y = -1 are zero; (b) is the lower half kernel in which all the voxels at y = +1 are zero.
Figure 5 shows a process according to an embodiment of the invention to obtain the regions of bone lesion found in the talus bone at all time points of all subjects.
Figure 6 shows results of a rigid registration of serial images from the talus bone of a subject.
Figure 7 shows graphs of volume of automatic and manual segmentations of talus bones against time.
Figure 8 shows intermediate results of the first stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system.
Figure 9 shows intermediate results of the second stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system.
Figure 10 shows the surface rendering of candidate bone lesion regions overlaid with the semi-transparent talus bone in the atlas.
Figure 11 shows a graph of average bone lesion volume of male and female subjects in a particular region against time.
Figure 12 shows a process according to an embodiment of the invention to obtain the bone lesions of the talus from the 5 time points of one subject using spatio-temporal segmentation.
Figure 13 shows a process according to an embodiment of the invention to obtain the candidate bone lesion regions from the bone lesions of all the subjects.
Figure 14 shows a comparison of the bone lesion segmented by OT and ST at different time points of the talus bone of subject 9.
Figure 15 shows a graph of average bone lesion volume of male and female subjects in a region obtained using the method of the present invention against time.
Figure 16 shows the axial, coronal and sagittal views of an MR image of an ankle of a rat in which the bone lesions that occur at the same location in more than two subjects are highlighted.
Figure 17 shows a graph of the results of the size of a region of bone lesion of all subjects against time.
Figure 18 shows a graph of the results of the size of a region of bone lesion of male and female subjects against time.
Figure 19 shows the difference images of all the time points of the brain of an MS patient (left) and the highlighted MS lesions segmented by spatio-temporal segmentation (right).
DETAILED DESCRIPTION OF THE INVENTION
Delineation of bones using segmentation propagation and quantification of changes in bone in serial MR images of joints
Definitions
Any abnormal change in the bone is referred to as a "bone lesion". A typical bone lesion appears as a high-intensity region in T1-weighted images. A "candidate bone lesion region" is defined as a region in the bone where bone lesions are likely to occur.
Assumptions and observations
1) In the present example it is assumed that the bone lesions caused by the disease arise in similar locations in all subjects. This is supported by: (a) juxtaarticular cartilage and bone erosion at the edge of the articular cartilage and adjacent subchondral bone is a characteristic pattern of human RA; (b) pannus formation and erosion of the margins of the articular cartilage arise in a histological assessment of the animal model; (c) the experimental conditions are carefully kept to be very similar for all the subjects.
2) In the present example it is assumed that the progression of bone lesions in all subjects are similar due to similar experimental conditions.
Image data and disease model
The data set consisted of a group of 12 Lewis rats (subjects 1-12; 6 male, 6 female). An RA-inducing agent (proteoglycan polysaccharide (PG-PS) from Streptococcus pyogenes) was injected into the right rear ankle of the rats at day -14. Reactivation at day 0 to produce joint inflammation was carried out by injecting PG-PS intravenously into the tail vein (Esser et al., "Reactivation of streptococcal cell wall-induced arthritis by homologous and heterologous cell wall polymers," Arthritis Rheum., vol. 28, no. 12, page 1402, 1985). MR scans of the right ankle were taken at day -12, day -4, day +3, day +10, day +14 and day +21. MR scans of the left ankle were taken at day -12 and day -4 to provide a control. The T1-weighted images were acquired on a 7T 20cm bore (Bruker Biospec™) system and a Birdcage coil using a 3D gradient echo sequence with the following parameters: TE = 3 ms, TR = 14 ms, flip angle = 30°, FOV = 15 x 40 x 15 mm3. Only data from subjects 1-11 were analysed because subject 12 (female) died after the first MR scan.
Atlas construction
An atlas is a reference image with labelled structures. The baseline image of subject 1 was randomly chosen to be the reference image. An expert manually segmented the talus bone in the reference image, which took about 3 hours. The reference image plus the manually segmented bone formed the atlas. After registering the baseline image of subject 1 to an unseen image, the resulting deformation field was applied to the manually segmented talus bone to obtain the boundary of the bone in that image. All automatic segmentations were derived from these two manual segmentations.
Approximate delineation of the talus bone
The joint is composed of multiple bones, each of which is rigid, but which move with respect to one another in a non-rigid fashion. For the work described here, only the talus bone was considered, but the method could also be applied to other bones, or multiple bones. Instead of registering the whole image at each time point to the baseline (target), a region of interest (ROI) around the talus bone was identified in the atlas, and only voxels in this region were used in the registration. This improved the registration accuracy, as within the ROI, the rigid body assumption was valid.
The talus bones in all the time points were roughly delineated by using the results of a two-stage registration: (a) the rigid registration of the ROI in the atlas to each baseline image and then (b) the rigid registration of the ROI in the baseline image to the follow-up images in the temporal series.
Correlation coefficient (CC) was used as the similarity measure; for a set of n data points (xi, yi),

CC = Σi (xi − x̄)(yi − ȳ) / √( Σi (xi − x̄)² · Σi (yi − ȳ)² ).

Simulated annealing was used to optimise the similarity measure and the running time for a rigid registration was less than 10 minutes.
Accurate delineation of the talus bone
Inter-subject non-rigid registration (D. Rueckert et al. "Nonrigid registration using free-form deformations: application to breast MR images" Medical Imaging, IEEE Transactions on, vol. 18, no. 8, p. 712, 1999) was used to automatically delineate the talus bones from all the time points. This was a 4-stage registration process, which used the result of the two-stage registration from the approximate delineation of the talus bone as a starting estimate. The source image was the ROI in the atlas, and the target images were all time points of the 11 subjects. Each time point was treated independently.
The first stage was an affine registration with 12 degrees of freedom to compensate for the global motion and gross differences between source and target images. The second, third and fourth stages were cubic B-spline non-rigid registrations with control point spacings of 1.172 mm, 0.586 mm and 0.293 mm respectively to compensate for local deformation. These control point spacings corresponded to 20 pixels (1.172 mm), 10 pixels (0.586 mm), and 5 pixels (0.293 mm) in the high resolution plane. The larger control point spacing allowed the modelling of global non-rigid deformation and the smaller control point spacing allowed the modelling of highly local deformation. Each registration stage used the result of the previous one as the starting estimate, i.e. by dilating the segmentation calculated in the previous stage. The final deformation field was used to propagate the manual segmentation in the reference image to obtain a boundary of the corresponding bone in the target image. CC (for its larger capture range) and normalised mutual information (for handling intensity differences due to inter-subject variability) were used as the similarity measures in the affine and non-rigid registration respectively. Simulated annealing and steepest gradient descent were used to optimise the similarity measures in the affine and non-rigid registration respectively. The running time of segmentation propagation for a bone was 2-4 hours.
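The four-stage schedule just described can be summarised as a small configuration table. The helper below merely reproduces the stated relation between control point spacing in pixels and in millimetres; the in-plane pixel size of 0.0586 mm is inferred from the quoted spacings (1.172 mm / 20 pixels), and the names are illustrative rather than from the patent.

```python
# Four-stage registration schedule from the description above.
PIXEL_MM = 0.0586  # in-plane pixel size inferred from 1.172 mm / 20 pixels

STAGES = [
    {"model": "affine", "dof": 12},            # global motion, gross differences
    {"model": "bspline", "spacing_px": 20},    # global non-rigid deformation
    {"model": "bspline", "spacing_px": 10},
    {"model": "bspline", "spacing_px": 5},     # highly local deformation
]

def control_point_spacing_mm(stage):
    """Control point spacing in mm for a B-spline stage, or None for affine."""
    px = stage.get("spacing_px")
    return None if px is None else round(px * PIXEL_MM, 3)
```

Each stage would initialise from the previous stage's result, so such a table is naturally consumed by a loop over a registration engine.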
Identification of bone lesions in a reference coordinate system
Follow-up images were transformed using the results of the rigid registration of the baseline image and the follow-up images from the approximate delineation of the bone. The interpolation was performed using a windowed sinc function (Hanning window of size (diameter) = 7 voxels). Difference images were then generated by subtracting the baseline image from the transformed follow-up images. Otsu's thresholding (N. Otsu "A threshold selection method from gray-level histograms" IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, p. 62, 1979) was applied to all the difference images to identify the high-intensity bone lesions in the talus bones. Finally, the bone lesions were mapped to the reference co-ordinate system of the atlas using the results of the inter-subject non-rigid registration from the accurate delineation of the talus bone. The method is described in the context of the talus bone, but can equally be applied to other bones, or to multiple bones simultaneously.
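Otsu's thresholding, as applied above to the difference images, can be sketched from scratch in NumPy; this is an illustration of the algorithm rather than the patent's code, and the toy "difference image" is invented.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Otsu's threshold: maximises the between-class variance of the histogram."""
    img = np.asarray(image, dtype=float).ravel()
    hist, edges = np.histogram(img, bins=n_bins)
    p = hist.astype(float) / hist.sum()           # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                             # class-0 weight per split
    w1 = 1.0 - w0
    m = np.cumsum(p * centers)                    # cumulative class-0 mass
    m_total = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m / w0                              # class means either side of split
        mu1 = (m_total - m) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
    between = np.nan_to_num(between)
    return centers[np.argmax(between)]

# Bimodal "difference image": background near 0, high-intensity lesion near 10.
diff = np.array([0.0, 0.2, 0.1, 0.0, 9.8, 10.0, 10.2])
t = otsu_threshold(diff)
lesion_mask = diff > t
```

The threshold falls between the two modes, so the mask isolates the high-intensity lesion voxels.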
Figure 2 shows the process to obtain the regions of bone lesion found in the talus bone at all time points of one subject. The difference image of the talus bone was thresholded by the mean plus the standard deviation of the intensity histogram. The result was thresholded again by using Otsu's algorithm to obtain the thresholded difference image of the talus bone, which mainly consisted of voxels of bone lesion. The thresholded difference images of the talus bone at all the time points of a subject were summed together to obtain the summed thresholded difference image of the talus bone.
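For a single subject, the per-time-point thresholding and temporal summation just described can be sketched as follows (the second, Otsu-based pass is omitted for brevity; the function names are hypothetical):

```python
import numpy as np

def mean_std_threshold(diff):
    # First pass: keep voxels above mean + 1 standard deviation of the histogram.
    return diff > (diff.mean() + diff.std())

def summed_lesion_map(diff_images):
    # diff_images: one 3-D difference image per follow-up time point.
    binary = [mean_std_threshold(d).astype(np.uint8) for d in diff_images]
    # Voxel value = number of time points at which the voxel was flagged.
    return np.sum(binary, axis=0)
```

The summed map feeds directly into the morphological filtering of the next section.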
Identification of candidate bone lesion regions in a reference coordinate system

As can be seen from figure 2, the summed image was filtered by morphological "opening" and "dilation" operations using a kernel of size (3, 3, 3). The result was used as a mask to segment the summed thresholded difference image of the talus bone to obtain regions of bone lesion. Since there might be "holes" in these regions of bone lesion, the final regions of bone lesion were obtained by performing dilation on the image once.
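This masking step can be sketched with SciPy's binary morphology operators (the kernel size follows the text; the helper function itself is our own illustration):

```python
import numpy as np
from scipy import ndimage

def lesion_regions(summed, kernel=None):
    """Open then dilate the summed map to form a mask, apply the mask to the
    summed image, and dilate once more to close holes, as described."""
    if kernel is None:
        kernel = np.ones((3, 3, 3), bool)
    binary = summed > 0
    mask = ndimage.binary_opening(binary, structure=kernel)   # drop isolated voxels
    mask = ndimage.binary_dilation(mask, structure=kernel)
    regions = np.where(mask, summed, 0)                       # masked summed image
    return ndimage.binary_dilation(regions > 0, structure=kernel)  # fill holes
```

Opening removes speckle (isolated thresholding errors) while the final dilation restores lesion extent and closes interior holes.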
Based on the assumption that the progression of bone lesions in all the subjects is similar due to similar experimental conditions, a two-stage process was used to aggregate the bone lesions in the reference co-ordinate system: (a) the bone lesions from the same time points of different subjects were summed to generate candidate bone lesion regions for each time point; (b) those candidate bone lesion regions were summed to generate candidate bone lesion regions for all the time points. Each stage is described in more detail below. In the first stage, after summing the bone lesions from the same time points, mathematical morphology was applied to the aggregated bone lesions, as shown in figure 3. Binary opening operators were used to remove isolated bone lesions and errors due to misregistration or the thresholding algorithm. The kernels shown in figure 4 were constructed to avoid removing excessive voxels in the y-direction due to the anisotropic voxel size. A binary dilation operator of kernel size 3 x 3 x 3 was then used to recover the shapes of candidate bone lesion regions in the input bone lesions. The results were a time series of five binary images containing candidate bone lesion regions of all the subjects.
In the second stage, the time series of five binary images were summed. Bone lesions that were found at only one time point were removed by thresholding (using the value 2) to ensure that the final candidate bone lesion regions consisted of bone lesions that appeared at two or more time points. Isolated candidate bone lesion regions were removed by the same mathematical morphology used in the first stage. Finally, the candidate bone lesion regions in the talus bones, defined in the image space of the atlas, were given by the connected components in the resulting image.
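The persistence test and connected-component labelling of this second stage can be sketched as follows (function and parameter names are hypothetical):

```python
import numpy as np
from scipy import ndimage

def candidate_regions(time_point_maps, min_time_points=2):
    """Sum binary lesion maps over time points, keep voxels flagged at
    min_time_points or more (the "threshold at 2" of the text), then
    label the connected components as candidate regions."""
    summed = np.sum([m.astype(np.uint8) for m in time_point_maps], axis=0)
    persistent = summed >= min_time_points
    labels, n_regions = ndimage.label(persistent)
    return labels, n_regions
```

Each nonzero label in the output identifies one candidate bone lesion region in the atlas space.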
Figure 5 shows the process to obtain the regions of bone lesion found in the talus bone at all time points of all subjects. In order to sum up all the regions of bone lesion found in the talus bone at all time points of all subjects, the regions of bone lesion in all the subjects had to be in the same co-ordinate space. This was achieved by transforming the regions of bone lesion to a reference or "atlas" co-ordinate space common to all bones.
Each region of bone lesion found in the talus bone at all the time points in all the subjects was then labelled.
Calculation of bone lesion volume
Each identified region of bone lesion was then transformed back to the co-ordinate space of the thresholded difference image of the talus bone (i.e. the baseline image of each subject). The candidate bone lesion regions in the image space of the atlas were transformed to the image space of the baseline image of each subject by applying the non-rigid deformation fields from the automatic delineation. Misregistration or interpolation artefacts could result in a high signal in the difference image at the bone boundary. To reduce these edge effects on the result, the automatic segmentations were eroded by a kernel of size 3 x 3 x 3 so that voxels around the edge of the automatic segmentation were ignored. The eroded segmentations were used as masks to apply to the transformed candidate bone lesion regions. Then, the number of voxels in each masked candidate bone lesion region in the Otsu's thresholded difference images was counted to give the bone lesion volume. The average bone lesion volumes in male and female subjects were then calculated in all the candidate bone lesion regions.
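The edge-suppressed volume count could be implemented along these lines (a sketch only; the voxel-volume parameter and names are our own):

```python
import numpy as np
from scipy import ndimage

def lesion_volume(region_mask, bone_seg, thresholded_diff, voxel_volume=1.0):
    # Erode the automatic bone segmentation with a 3x3x3 kernel so that
    # boundary voxels (prone to misregistration artefacts) are ignored.
    eroded = ndimage.binary_erosion(bone_seg, structure=np.ones((3, 3, 3), bool))
    masked_region = region_mask & eroded
    # Count thresholded difference voxels inside the masked candidate region.
    return np.count_nonzero(masked_region & thresholded_diff) * voxel_volume
```

Multiplying by the physical voxel volume converts the count into a volume in the scanner's units.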
RESULTS

Generation of difference images
An example of the results of rigid registrations is shown in figure 6. The top row shows the baseline image (at day -4), the middle row shows the registered follow-up image (at day +21), the bottom row shows the difference image generated by subtracting the baseline image from the transformed follow-up image. Each row shows the transaxial, coronal and sagittal views (from left to right) of the image. The talus bone is within the white rectangle. The black arrows indicate the location of a bone lesion at day +21.
Automatic and manual delineation of talus bone
The volume of automatic and manual segmentations of talus bones is plotted against time in figure 7.
Comparison with manual segmentation
The mean similarity index and the mean percentage difference between the volume of automatic and manual segmentations of the talus bone of subjects 1, 2 and 3 at all the time points are summarised in Table I.
Estimates of intra-observer and inter-observer variability of subject 2 and subject 3 assessed by the similarity index are summarised in Table II.

TABLE I
TABLE II
Identification of bone lesion regions
Some intermediate results are shown in figures 8 and 9. Figure 8 shows intermediate results of the first stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system. Images on the top row show the bone lesions obtained after summing the thresholded bone lesions at day +21. Images on the bottom row show the bone lesions obtained after applying the mathematical morphology to the aggregated bone lesions on the top row. The transaxial, coronal and sagittal views of the same image are shown from left to right. The brighter the bone lesion voxel, the more subjects in which a bone lesion was found in that voxel. Figure 9 shows intermediate results of the second stage of the aggregation in the identification of candidate bone lesion regions in a reference coordinate system. Images on the top row show the bone lesions obtained after summing the results of the first stage of the aggregation. Images on the bottom row show the bone lesions obtained after applying the mathematical morphology to the aggregated bone lesions on the top row. The transaxial, coronal and sagittal views of the same image are shown from left to right. The brighter the bone lesion voxel, the more subjects in which a bone lesion was found in that voxel.
Surface renderings of the candidate bone lesion regions are shown in figure 10. In total, eight candidate bone lesion regions (region 93, region 41, region 37, region 24, region 121, region 105, region 7 and region 1) were found.

Calculation of bone lesion volume
The average bone lesion volume in male and female subjects in candidate bone lesion region 24 is shown in figure 11. The error bars in the graphs indicate the standard deviation of the average bone lesion volume at each time point. An increase in bone lesion volume in region 24 over time can be observed.
Benefits
The invention automatically delineates multiple bones from serial MR images of joints, and quantifies their lesion load. It can be applied to a single bone in a joint, or multiple bones simultaneously.
Apart from the initial manual segmentation for creating the atlas, the user interaction required for this technique may be less than 2 minutes per image, compared to about 8 hours per image for manual segmentation. The computation takes no more than 8 hours per image and requires no supervision. The invention saves hours of human labour in the analysis of serial MR images of joints in RA studies.
Spatio-temporal segmentation of bone lesions
An embodiment of the integrated spatio-temporal analysis of the present invention will now be described. The steps of constructing an atlas, determining the regions of interest and generating difference images in this method are the same as described earlier with regard to the delineation of bones using segmentation propagation.
Spatio-temporal segmentation

For each subject, a 5D feature space is built by mapping the (x, y, z, intensity, time) value of each voxel to a point in the feature space. Where the images acquired have multiple MRI intensities (e.g. proton density and T2 weighted), these can all be used in the feature space. The feature space is then segmented by studying the density of the points using the mean shift technique (D. Comaniciu et al., "Mean Shift: A Robust Approach Toward Feature Space Analysis", IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603-619, 2002) and a Gaussian kernel, which has not previously been applied to this sort of data. The results are 4D segmentations of bone lesions that extend across multiple time points (see figure 12). The temporal aspect improves sensitivity to real changes, and reduces the likelihood of false positive and false negative lesions.
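Mean shift with a Gaussian kernel moves each feature-space point towards the weighted mean of its neighbours until it settles on a density mode; points sharing a mode form one segment. The following is a deliberately small NumPy sketch of the idea (fixed iteration count, single isotropic bandwidth), not the implementation used for the 5-D (x, y, z, intensity, time) space of the method:

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Shift each point towards the Gaussian-weighted mean of all points
    until it (approximately) reaches a mode of the density estimate."""
    points = np.asarray(points, dtype=float)
    modes = points.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            d2 = np.sum((points - modes[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            modes[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return modes
```

Points whose modes coincide (within a small tolerance) would then be grouped into one 4D lesion segment; in practice one bandwidth per dimension is chosen, as the text notes.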
Mean shift is a robust non-parametric feature space analysis technique that has been applied to the segmentation of grey level and colour images, the spatio-temporal segmentation of video sequences, and the delineation of structures in brain MRI. Mean shift has advantages over other clustering techniques in that it does not require prior knowledge of the number of clusters and makes no assumption about the shape of the clusters in the feature space. Density estimation-based non-parametric clustering techniques regard a feature space as the empirical probability density function (p.d.f.) of the represented parameters. Dense regions (i.e. regions with dense points) in the feature space correspond to local maxima of the p.d.f., which are the modes of the unknown density. Mean shift locates these local maxima or modes so that clusters associated with them can be delineated. Other reasons for using mean shift are that (1) the method is simple to use because the result only depends on the window width of a given kernel in each dimension, and there is the potential for an autonomous image segmentation algorithm by combining a window width selection technique with prior knowledge about the images; (2) the feature space used for the 3D grey level segmentation can be easily extended to include the dimension in time by giving each voxel a time value depending on which follow-up image it belongs to. Furthermore, the feature space can also be extended to include data from multi-spectral images, which has been used to classify tissues in the MCP joints of the hands in RA.
Identification of candidate bone lesion regions

Next, inter-subject non-rigid registration enables the mapping of bone lesions of different subjects to a common reference coordinate system defined by an atlas bone. This allows the comparison of localised bone lesion volume changes in anatomically corresponding locations between different subjects or groups (see figure 13). The segmented bone lesions can be thresholded to binary images, in which a voxel value of 1 indicates bone lesion.
After summing the binary images, bone lesions that are found at only one time point can be removed by thresholding (using the value 2) to ensure that the final candidate bone lesion regions consist of bone lesions that appear at two or more time points. The candidate bone lesion regions in the talus bones, defined in the co-ordinate system of the atlas, are given by the connected components in the image.
Calculation of bone lesion volume
The candidate bone lesion regions in the image space of the atlas can be transformed to the image space of the baseline image of each subject by applying the non-rigid deformation fields obtained from the determination of the ROI. Since the 4D segmentations of bone lesions might not completely overlap with the candidate bone lesion regions, the 4D segmentations can be assigned to the regions in the following way: (1) if a bone lesion voxel is inside a region, it is assigned to that region; (2) if a bone lesion voxel is outside every region, it is assigned to the region that contains the highest number of voxels from the same 4D segmentation as that voxel; otherwise, the voxel is discarded.
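The two assignment rules can be sketched as follows (label arrays are integer-valued with 0 meaning background; the function name is hypothetical):

```python
import numpy as np

def assign_voxels(lesion_labels, region_labels):
    """Rule 1: a lesion voxel inside a candidate region keeps that region.
    Rule 2: a lesion voxel outside all regions is given the region holding
    the most voxels of the same 4D lesion; with no overlap it is discarded."""
    out = np.zeros_like(region_labels)
    inside = (lesion_labels > 0) & (region_labels > 0)
    out[inside] = region_labels[inside]
    for lesion_id in np.unique(lesion_labels[lesion_labels > 0]):
        lesion = lesion_labels == lesion_id
        overlap = region_labels[lesion & (region_labels > 0)]
        if overlap.size:
            best = np.bincount(overlap).argmax()  # majority-vote region
            out[lesion & (region_labels == 0)] = best
    return out
```

Voxels of a lesion with no overlap at all remain 0 in the output, i.e. discarded as the text specifies.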
RESULTS
"ST" is used to refer to the results and the method of the present invention, and "OT" is used to refer to previous results. The lesion segmentations of subject 9 from OT and ST were more similar at the time point day +21 than at the time point day -4, as shown in figure 14. The images are shown in coronal views with segmentations highlighted. In each sub-figure, the segmentation on the left was obtained by OT, and the segmentation on the right was obtained by ST. Furthermore, the mean similarity indices of the lesion segmentations of all the subjects generated by OT and ST at the time points day +21 and day -4 were 0.756 (±0.061) and 0.498 (±0.217) respectively. Since the volume of bone lesion at day +21 was larger than that at day -4 according to the disease model, the results showed that the performance of OT was close to the performance of ST when the volume of bone lesion was large.
Seven candidate bone lesion regions were found. Their locations were similar to those found using OT. A new candidate bone lesion region was also found, and the number of lesions belonging to each candidate bone lesion region was different. The average bone lesion volume of region 3 is shown in figure 15. Error bars denote the standard deviations of the average bone lesion volume. To examine the influence of time and gender on the bone lesion volume in this longitudinal study, a repeated measures analysis of variance was used. The bone lesion volumes in all the candidate regions at all the time points were analysed in an analysis of variance using SPSS (SPSS Inc., Chicago, Illinois, USA) with time (day -4 vs. day +3 vs. day +10 vs. day +14 vs. day +21) as a within-subject factor and gender (male vs. female) as a between-subject factor. The ANOVA yielded a significant time effect in regions 1, 3, 4, 6 and 7, and a significant gender between-subject effect in region 3 only, at P < 0.05.
Benefits
The invention automatically quantifies the shape, size and location of bone lesions in serial MR images of joints. Figure 16 shows the result of applying it to locate common sites of bone lesion in the talus bone of an animal model: the axial, coronal and sagittal views of an MR image of an ankle of a rat, with the bone lesions that occur at the same location in more than 2 subjects highlighted. Graphs of size of bone lesion against time for a region of bone lesion in the talus bone are shown in figures 17 and 18. Figure 17 shows a graph of the size of a region of bone lesion of all the subjects against time. Figure 18 shows a graph of a region of bone lesion of male and female subjects against time. Local changes, instead of global changes, in the bone were measured. Furthermore, changes of smaller than 1% of the volume of the bone were detected. This sensitivity to small lesions has not previously been demonstrated from in-vivo imaging, and preliminary validation indicates good agreement with histology.
Apart from the preparation of an atlas, no expert knowledge about the anatomy of the bone is required to apply the invention to the problem.
Alternative Examples of the Invention
The same approach can be applied to osteoarthritis to detect local or widespread cartilage thinning and to other diseases involving intensity change within joints.
The invention can also be applied to study kinematics of bone movement in 4D (3D + time) MR images of joints. Multiple bones in a joint are segmented automatically in each time frame. Animation of the bone movement in 3D can be generated from 3D surface rendering of multiple bones in all the time frames.
The intensity analysis can be carried out using multi-spectral data, as well as images with just a single intensity per voxel.
The invention can also be used to automatically line up images in standard planes for use in visual scoring (e.g. OMERACT RAMIS), rather than relying on radiographic technique to achieve this.
The invention can also be applied to other parts of the body that can be studied using longitudinal MRI, and which exhibit subtle changes in structure volumes and intensities, such as in Multiple Sclerosis (MS). The left side of figure 19 shows the difference images at all the time points of the brain for an MS patient and the right side shows the MS lesions segmented by the spatio-temporal segmentation. Previous methods for quantifying longitudinal changes in MS include (a) the measurement of brain atrophy; (b) the assessment of MS lesions by calculating their total volume or counting their total number, which involve manual or semi-automatic segmentations. By using spatio-temporal segmentation, MS lesions can be segmented automatically. In addition, non-rigid registration of the follow-up images and the baseline image generates deformation fields, which can be used to measure the brain atrophy by calculating the Jacobians of the deformation fields. The atrophy rates of the brain shown in figure 19 are found to be 0.997, 0.990, 0.991, 0.987 and 0.985 from the first time point to the last time point. This demonstrates the potential of the spatio-temporal segmentation and non-rigid image registration in quantifying longitudinal changes in MS brains.
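As an illustration of the atrophy measurement, the Jacobian determinant of a deformation x → x + u(x) can be computed by finite differences; values below 1 indicate local contraction. This is a generic sketch with unit voxel spacing assumed, not the registration package's own code:

```python
import numpy as np

def jacobian_determinant(disp):
    """disp: displacement field of shape (3, nx, ny, nz) giving u(x).
    Returns det(I + du/dx) at every voxel."""
    grads = [np.gradient(disp[i]) for i in range(3)]  # grads[i][j] = du_i/dx_j
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)
```

Averaging the determinant over a brain mask gives an atrophy rate of the kind quoted above (e.g. a value of 0.99 corresponds to roughly 1% local volume loss).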
The current invention can also be applied to images of other modalities, such as CT images.
The invention can also be applied to the analysis of single time point or longitudinal Dynamic Contrast Enhanced (DCE) MRI images of joints. Multiple joints, and corresponding regions of synovium, are segmented automatically from the DCE-MRI images and used to quantify contrast uptake parameters for those joints either within one dynamic sequence, or between dynamic sequences acquired at separate imaging investigations.
It will of course be understood that the present invention has been described by way of example, and that modifications of detail can be made within the scope of the invention as defined by the following claims.

Claims

1. A method for identifying a particular feature in a region of interest in a series of images taken of a single subject at different times, comprising: providing more than two difference images of the region of interest, each image taken at a different time point; mapping the difference images to a 5-dimensional feature space; segmenting the feature space using a mean shift algorithm to identify the particular feature in the region of interest at each of the plurality of time points; and summing all of the particular features from all of the images at the plurality of time points.
2. The method of claim 1 wherein the 5-dimensional feature space comprises three spatial dimensions, one intensity dimension and one time dimension.
3. The method of claim 1 or claim 2 wherein the step of providing each of the more than two difference images of the region of interest comprises: constructing an atlas from one or more subjects; approximately delineating the region of interest from a follow-up image using rigid registration; accurately delineating the region of interest from the follow-up image using inter-subject non-rigid registration followed by segmentation propagation; subtracting a baseline image from the registered follow-up image to generate a difference image.
4. The method of any of claims 1 to 3 wherein the region of interest is a bone and the particular feature is a bone lesion.
5. The method of any of claims 1 to 3 wherein the region of interest is the brain and the particular feature is a brain lesion.
6. The method of any of claims 1 to 5 wherein the 5-dimensional feature space is built by mapping the position, intensity and time values of each voxel to a point in the feature space.
7. The method of any of claims 1 to 6 wherein the step of segmenting the feature space comprises studying the underlying probability density function of the points using a mean shift algorithm.
8. The method of any of claims 1 to 7 wherein the step of segmenting the feature space results in 4-dimensional segmentations of the particular feature that extend across multiple time points.
9. A method for identifying a particular feature in a region of interest in a plurality of series of images taken at different times, each series of images taken of a different subject, comprising: performing the method of any of claims 1 to 8 for each series of images.
10. The method of claim 9 further comprising: transforming the results for each subject to the image space of an atlas to identify candidate particular features, summing all of the particular features of all of the subjects, and carrying out a connected component analysis.
11. The method of claim 10 wherein the step of transforming the results for each subject to the image space of an atlas comprises performing inter-subject non-rigid registration to enable the mapping of particular features of different subjects to a common reference co-ordinate system defined by an atlas particular feature.
12. The method of claim 11, further comprising counting the number of voxels in each candidate particular feature in order to calculate the volume of the candidate particular feature.
13. A method for delineating a bone in an image comprising: constructing an atlas from a single subject; approximately delineating the bone from the image using rigid registration; accurately delineating the bone from the image using inter-subject non-rigid registration followed by segmentation propagation.
14. The method of claim 13 wherein the step of constructing an atlas comprises manually segmenting the bone of interest from a reference image.
15. The method of claim 13 or claim 14 wherein the step of approximately delineating the bone comprises identifying a region of interest around the bone of interest in the atlas, and registering voxels in the region of interest of the image to the reference image.
16. The method of any of claims 13 to 15 wherein the step of approximately delineating the bone comprises: rigidly registering the region of interest in the atlas to the image; then rigidly registering the region of interest in the image to follow-up images in a temporal series.
17. The method of claim 16 wherein a correlation coefficient CC is used as a similarity measure where, for a set of n data points (xᵢ, yᵢ),

CC = Σᵢ(xᵢ - x̄)(yᵢ - ȳ) / √(Σᵢ(xᵢ - x̄)² Σᵢ(yᵢ - ȳ)²)

in which x̄ and ȳ denote the means of the xᵢ and yᵢ values respectively.
18. The method of any of claims 13 to 17 wherein the step of accurately delineating the bone comprises a four stage registration process comprising: an affine registration with 12 degrees of freedom; a cubic B-spline non-rigid registration with a control point spacing corresponding to 20 pixels in a high resolution plane; a cubic B-spline non-rigid registration with a control point spacing corresponding to 10 pixels in a high resolution plane; and a cubic B-spline non-rigid registration with a control point spacing corresponding to 5 pixels in a high resolution plane.
19. The method of claim 18 wherein a resultant deformation field is used to propagate the bone of interest from the atlas to obtain a boundary of the bone of interest in the image.
20. A method of identifying a lesion in a follow-up image taken at a later time than a baseline image comprising: delineating a bone in both the baseline and the follow-up image using the method according to any of claims 14 to 19; transforming the delineated images using the results of the rigid registration; generating a difference image by subtracting the baseline image from the transformed follow-up image; applying Otsu's thresholding to the difference image to identify a high-intensity lesion in the image.
21. The method of claim 20, further comprising: determining the number of voxels in the region determined to be bone lesion; and ascertaining the volume of the bone lesion from the number of voxels in the region.
22. A method of identifying a bone lesion in first and second images, wherein the subject of the second image is the same as the subject of the first image and the second image is taken at a later time point than the first image, comprising: delineating a bone in the first image using the method according to any of claims 14 to 19; delineating a bone in the second image using the method according to any of claims 14 to 19; transforming the first and second images using the results of the rigid registration; generating first and second difference images by subtracting the reference image from the first and second transformed images; thresholding each of the first and second difference images by the mean plus the standard deviation of an intensity histogram of each difference image; thresholding each of the first and second thresholded images using Otsu's algorithm; and summing together the first and second thresholded difference images to obtain a summed thresholded difference image of the bone.
23. The method of claim 22, further comprising: filtering the summed image by morphological "opening" and morphological "dilation" operations; and using the filtered summed image as a mask to segment the summed thresholded difference image in order to identify a region of bone lesion.
24. The method of claim 23, further comprising: applying a further morphological "dilation" on the identified region of bone lesion to obtain a final region of bone lesion for a single subject.
25. A method of identifying candidate bone lesion regions in a reference coordinate system, comprising: identifying a bone lesion in first and second images using the method according to any of claims 22 to 24; identifying a bone lesion in third and fourth images using the method according to any of claims 22 to 24, wherein the subject of the third and fourth images is different from the subject of the first and second images; transforming the region of bone lesion for each subject into the reference co-ordinate space of the atlas using an inter-subject non-rigid registration; summing and labelling the transformed regions of bone lesion for all four images; transforming the region of bone lesion in each image back to the co-ordinate space of the thresholded difference image of the bone.
26. The method of any preceding claim wherein each image is an MR image.
27. The method of any of claims 1 to 25 wherein each image is a CT image.
28. The method of any of claims 1 to 26 wherein each image is a single frame from a dynamic contrast enhanced MRI sequence.
29. The method of any of claims 1 to 26 wherein each image is a processed dynamic contrast enhanced MRI sequence from a particular imaging session.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0611774.1 2006-06-14
GBGB0611774.1A GB0611774D0 (en) 2006-06-14 2006-06-14 Automatic quantification of changes in images

Publications (2)

Publication Number Publication Date
WO2007144620A2 true WO2007144620A2 (en) 2007-12-21
WO2007144620A3 WO2007144620A3 (en) 2008-06-26

Family

ID=36775630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2007/002206 WO2007144620A2 (en) 2006-06-14 2007-06-14 Automatic quantification of changes in images

Country Status (2)

Country Link
GB (1) GB0611774D0 (en)
WO (1) WO2007144620A2 (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060018548A1 (en) * 2004-02-13 2006-01-26 Weijie Chen Method, system, and computer software product for automated identification of temporal patterns with high initial enhancement in dynamic magnetic resonance breast imaging


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
CHEN ET AL: "A Fuzzy C-Means (FCM)-Based Approach for Computerized Segmentation of Breast Lesions in Dynamic Contrast-Enhanced MR Images" ACADEMIC RADIOLOGY, RESTON, VA, US, vol. 13, no. 1, January 2006 (2006-01), pages 63-72, XP005234317 ISSN: 1076-6332 *
COMANICIU D ET AL: "Mean shift: A robust approach toward feature space analysis" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 24, no. 5, May 2002 (2002-05), pages 603-619, XP002323348 ISSN: 0162-8828 *
DEMENTHON D: "Spatio-temporal segmentation of video by hierarchical mean shift analysis" PROCEEDINGS OF THE STATISTICAL METHODS IN VIDEO PROCESSING WORKSHOP, 2002, pages 115-120, XP002323406 *
GREENSPAN H ET AL: "PROBABILISTIC SPACE-TIME VIDEO MODELING VIA PIECEWISE GMM" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 26, no. 3, March 2004 (2004-03), pages 384-396, XP001190681 ISSN: 0162-8828 *
LISA JONASSON ET AL: "Representing Diffusion MRI in 5D for Segmentation of White Matter Tracts with a Level Set Method" INFORMATION PROCESSING IN MEDICAL IMAGING LECTURE NOTES IN COMPUTER SCIENCE;;LNCS, SPRINGER-VERLAG, BE, vol. 3565, 2005, pages 311-320, XP019011597 ISBN: 3-540-26545-7 *
MAYER A ET AL: "Segmentation of brain MRI by adaptive mean shift" BIOMEDICAL IMAGING: MACRO TO NANO, 2006. 3RD IEEE INTERNATIONAL SYMPOSIUM ON APRIL 6, 2006, PISCATAWAY, NJ, USA,IEEE, 6 April 2006 (2006-04-06), pages 319-322, XP010912631 ISBN: 0-7803-9576-X *
REY D ET AL: "A spatio-temporal model-based statistical approach to detect evolving multiple sclerosis lesions" MATHEMATICAL METHODS IN BIOMEDICAL IMAGE ANALYSIS, 2001. MMBIA 2001. IEEE WORKSHOP ON KAUAI, HI, USA 9-10 DEC. 2001, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 9 December 2001 (2001-12-09), pages 105-112, XP010584865 ISBN: 0-7695-1336-0 *
WEI FENG ET AL: "Non-rigid objects detection and segmentation in video sequence using 3d mean shift analysis" MACHINE LEARNING AND CYBERNETICS, 2003 INTERNATIONAL CONFERENCE ON NOV. 2-5, 2003, PISCATAWAY, NJ, USA,IEEE, vol. 5, 2 November 2003 (2003-11-02), pages 3134-3139, XP010682279 ISBN: 0-7803-7865-2 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226633B (en) * 2008-01-30 2011-04-20 Harbin Engineering University Method for segmentation of crop canopy image based on average dispersion
US9875549B2 (en) 2010-11-18 2018-01-23 Bae Systems Plc Change detection in video data
CN102063707A (en) * 2011-01-05 2011-05-18 Xidian University Mean shift based grey relation infrared imaging target segmentation method
KR101878182B1 (en) * 2011-12-02 2018-07-16 LG Display Co., Ltd. Device for detecting scene change and method for detecting scene change
WO2014155299A1 (en) * 2013-03-28 2014-10-02 Koninklijke Philips N.V. Interactive follow-up visualization
US9558558B2 (en) 2013-03-28 2017-01-31 Koninklijke Philips N.V. Interactive follow-up visualization
EP3089107A4 (en) * 2013-12-27 2017-08-16 Samsung Electronics Co., Ltd. Apparatus and method for determining lesion similarity of medical image
US10296810B2 (en) 2013-12-27 2019-05-21 Samsung Electronics Co., Ltd. Apparatus and method for determining lesion similarity of medical image
CN104050666B (en) * 2014-06-10 2017-07-11 电子科技大学 Brain MR image method for registering based on segmentation
CN110580728A (en) * 2019-09-16 2019-12-17 Central South University CT-MR modality migration method based on structural feature self-enhancement
CN110580728B (en) * 2019-09-16 2022-11-25 Central South University CT-MR modality migration method based on structural feature self-enhancement

Also Published As

Publication number Publication date
GB0611774D0 (en) 2006-07-26
WO2007144620A3 (en) 2008-06-26

Similar Documents

Publication Publication Date Title
Haque et al. Deep learning approaches to biomedical image segmentation
Queirós et al. Fast automatic myocardial segmentation in 4D cine CMR datasets
Petitjean et al. A review of segmentation methods in short axis cardiac MR images
Lynch et al. Automatic segmentation of the left ventricle cavity and myocardium in MRI data
Khalifa et al. Dynamic contrast-enhanced MRI-based early detection of acute renal transplant rejection
US20070081712A1 (en) System and method for whole body landmark detection, segmentation and change quantification in digital images
WO2007144620A2 (en) Automatic quantification of changes in images
Xiaohua et al. Simultaneous segmentation and registration of contrast-enhanced breast MRI
Zhu et al. Automatic delineation of the myocardial wall from CT images via shape segmentation and variational region growing
Pedoia et al. Glial brain tumor detection by using symmetry analysis
Göçeri et al. A comparative performance evaluation of various approaches for liver segmentation from SPIR images
Guo et al. A novel myocardium segmentation approach based on neutrosophic active contour model
Włodarczyk et al. Fast automated segmentation of wrist bones in magnetic resonance images
Yi et al. A review of segmentation method for MR image
Liu et al. A model-based, semi-global segmentation approach for automatic 3-D point landmark localization in neuroimages
Suri et al. Medical image segmentation based on deformable models and its applications
Chen et al. Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-mr images
Shen Image registration by hierarchical matching of local spatial intensity histograms
Fallahi et al. Uterine fibroid segmentation on multiplan MRI using FCM, MPFCM and morphological operations
Yan et al. Automatic liver segmentation and hepatic fat fraction assessment in MRI
Jia et al. Active contour model with shape constraints for bone fracture detection
Angelini et al. Segmentation and quantitative evaluation of brain MRI data with a multiphase 3D implicit deformable model
Assley et al. A comparative study on medical image segmentation methods
Colliot et al. Segmentation of focal cortical dysplasia lesions using a feature-based level set
Morais et al. Fully automatic left ventricular myocardial strain estimation in 2D short-axis tagged magnetic resonance imaging

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 07733212
Country of ref document: EP
Kind code of ref document: A2

NENP Non-entry into the national phase
Ref country code: DE

NENP Non-entry into the national phase
Ref country code: RU

122 Ep: PCT application non-entry in European phase
Ref document number: 07733212
Country of ref document: EP
Kind code of ref document: A2