WO2016060611A1 - Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors - Google Patents


Info

Publication number
WO2016060611A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
region
organ
images
image
Prior art date
Application number
PCT/SG2014/000481
Other languages
French (fr)
Inventor
Zujun Hou
Yue Wang
Original Assignee
Agency For Science, Technology And Research
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency For Science, Technology And Research filed Critical Agency For Science, Technology And Research
Priority to US15/519,145 priority Critical patent/US10176573B2/en
Priority to PCT/SG2014/000481 priority patent/WO2016060611A1/en
Priority to SG11201703074SA priority patent/SG11201703074SA/en
Priority to CN201480083962.6A priority patent/CN107004268A/en
Priority to EP14904094.1A priority patent/EP3207522A4/en
Publication of WO2016060611A1 publication Critical patent/WO2016060611A1/en

Classifications

    • G06V 10/40 — Extraction of image or video features
    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/12 — Edge-based segmentation
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G01R 33/5608 — Data processing and visualization specially adapted for MR (e.g. segmentation, edge contour detection, noise filtering)
    • G06T 2207/10096 — Dynamic contrast-enhanced magnetic resonance imaging [DCE-MRI]
    • G06T 2207/20101 — Interactive definition of point of interest, landmark or seed
    • G06T 2207/30028 — Colon; small intestine
    • G06T 2207/30096 — Tumor; lesion

Definitions

  • the present invention relates to processing image sequences arising from dynamic contrast-enhanced (DCE) imaging for tumor disease diagnosis.
  • DCE: dynamic contrast-enhanced
  • in particular, the invention relates to pre-DCE-modeling steps for segmenting and registering the organ tissue of interest.
  • MRI: magnetic resonance imaging
  • MRI techniques are widely used to image soft tissue within human (or animal) bodies and there is much work in developing techniques to perform the analysis in a way which characterizes the tissue being imaged, for instance as normal or diseased.
  • conventional MRI only provides information about the tissue morphology and does not provide information about tissue physiology.
  • DCE imaging using computed tomography (CT) or magnetic resonance imaging (MRI) is a functional imaging technique that can be used for in vivo assessment of tumor microcirculation. In recent years, DCE imaging has attracted increasing research interest as a potential biomarker for antiangiogenic drug treatment.
  • analysis of DCE images requires outlining regions-of-interest, as well as image registration to correct for any body movement during imaging, before tracer kinetic analysis.
  • image registration should be performed with respect to the tissue of interest instead of the whole image domain, which implies that the tissue of interest must be segmented first. This would typically require the user to manually outline the region-of-interest on multiple (usually about 50 or more) DCE images which is both time-consuming and tedious.
  • a method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region-of-interest includes deriving at least a contour of an exterior of the organ's region-of-interest from one or more of a plurality of images; generating a spline function in response to the derived contour of the exterior of the organ's region-of-interest from the one or more of the plurality of images; registering the plurality of images wherein the organ's region-of-interest has been segmented; deriving a tracer curve for the organ's region-of-interest in the registered images, the tracer curve indicating a change in concentration of a contrast agent flowing through the organ's region-of-interest over a time period; and kinetic modeling by fitting a kinetic model to the tracer curve to generate one or more maps of tissue physiological parameters associated with the kinetic model.
  • a method for registering an organ's region-of-interest includes deriving mutual information in response to each of a plurality of dynamic contrast enhanced (DCE) images; and aligning segments in the organ's region-of-interest in response to the mutual information.
  • Figure 1 shows a flowchart of a method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region-of-interest in accordance with an embodiment.
  • Figures 2A - 2D illustrate images of an organ's region-of-interest during preliminary segmentation.
  • Figures 3A - 3C illustrate images of an organ's region-of-interest during a spline function generation.
  • Figures 4A - 4D illustrate images of an organ's region-of-interest during segmentation in accordance with the spline function.
  • Figure 5 shows a flowchart of a method for segmenting each of a plurality of DCE images.
  • Figure 6 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest.
  • Figure 7 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest over a time period.
  • Figure 8 shows a tracer concentration-time curve.
  • Figure 9A illustrates a tissue concentration-time curve before outlier data points in the tissue concentration-time curve are removed.
  • Figure 9B illustrates a tissue concentration-time curve after outlier data points in the tissue concentration-time curve are removed.
  • Figures 10A-10D illustrate how propagated shape information can help to resolve the problem of under-segmentation due to adjoining tissues.
  • Figure 11A illustrates images during registration in accordance with the method of mutual information over a time period.
  • Figure 11 B illustrates images during registration in accordance with the method of gradient correlation over a time period.
  • Figure 12A shows images obtained by segmentation by a conventional method.
  • Figure 12B shows images obtained by a method according to an embodiment of the invention.
  • Figure 13A shows maps of tissue physiological parameters associated with the kinetic model in accordance with an embodiment of the invention.
  • Figure 13B shows maps of tissue physiological parameters associated with the kinetic model without registering the images.
  • Figure 14 shows an exemplary computing device in accordance with an embodiment of the invention.
  • Such apparatus may be specially constructed for the required purposes, or may comprise a general purpose computer or other device selectively activated or reconfigured by a computer program stored in the computer.
  • the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus.
  • Various general purpose machines may be used with programs in accordance with the teachings herein.
  • the construction of more specialized apparatus to perform the required method steps may be appropriate.
  • the structure of a conventional general purpose computer will appear from the description below.
  • the present specification also implicitly discloses a computer program, in that it would be apparent to the person skilled in the art that the individual steps of the method described herein may be put into effect by computer code.
  • the computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein.
  • the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the spirit or scope of the invention.
  • the computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general purpose computer.
  • the computer readable medium may also include a hard-wired medium such as exemplified in the Internet system, or wireless medium. The computer program when loaded and executed on such a general-purpose computer effectively results in an apparatus that implements the steps of the preferred method.
  • the method (designated generally as reference numeral 100) comprises the following steps:
  • Step 102 Input DCE images.
  • At least one or more DCE images are input into an apparatus for processing and kinetic modeling of an organ's region-of-interest.
  • DCE images can be typically acquired at multiple time points and multiple slice locations of a tumor.
  • Step 104 Seed point selection.
  • the next step is to select a seed point.
  • a user only needs to indicate the tissue of interest by making a selection on the tissue. This is also known as deriving at least a contour of an exterior of the organ's region-of-interest.
  • Step 106 Seed point.
  • the location selected by the user in step 104 will serve as a seed point denoted by f_seed(x_0), which represents the image intensity at location x_0.
  • An example seed point can be seen in Figure 2A.
  • Step 108 Image segmentation. Following step 106, the next step is to segment the DCE image.
  • the DCE image may be segmented into foreground (consisting of tissues) and background (mostly composed of air). In an embodiment, this can be accomplished by using a method such as Otsu's method.
  • Otsu's method automatically performs clustering-based image thresholding, reducing a graylevel image to a binary image.
  • the algorithm assumes that the image contains two classes of pixels following a bi-modal histogram (foreground pixels and background pixels), and calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal.
  • a spline function is generated in response to the derived contour of the exterior of the organ's region-of-interest.
  • Step 110 Image registration.
  • the next step is to register the DCE images.
  • the DCE images are registered to focus on the tissue of interest and to crop the segmented region-of-interest.
  • Step 112 DCE modeling.
  • the next step is to derive a tracer curve for each pixel in the organ's region-of-interest in the registered images, the tracer curve indicating a change in concentration of a contrast agent flowing through the organ's region-of-interest over a time period.
  • the tracer curve can be fitted to a kinetic model to derive values of the associated physiological parameters.
  • Step 114 Output parametric maps.
  • the next step is to fit a kinetic model to the tracer curve to generate one or more maps of tissue physiological parameters associated with the kinetic model.
  • the method 100 includes segmentation, registration and kinetic modeling of an organ's region-of-interest.
  • the segmentation and registration are important post-processing steps that can improve subsequent analysis of DCE images by kinetic modeling.
  • FIGs 2A - 2D illustrate images of the organ's region-of-interest during a preliminary segmentation.
  • a user only needs to indicate the tissue of interest by making a selection on the tissue.
  • the location selected by the user in step 104 will serve as the seed point denoted by f_seed^curr(x_0), which represents the image intensity at location x_0, shown as 202.
  • the user does not have to manually outline a tissue or lesion to indicate a region-of-interest.
  • Figure 2B shows an image 240 of the organ's region-of-interest after being segmented into foreground which mainly includes the tissues of the organ's region-of-interest.
  • the methods that are conventionally used to segment the image do not take into account the spatial relationship between pixels.
  • the resulting segmentation map 240, shown in Figure 2B, could be subject to intensity outliers, leading to the presence of holes within the segmented region-of-interest.
  • the tissue of interest would likely be in contact with another tissue and the boundary of the tissue of interest may not be well delineated.
  • Figure 2C shows an image 260 of the organ's region-of-interest after morphological processing.
  • morphological image processing is a collection of non-linear operations which probe an image with a small shape or template called a structuring element.
  • the outputs of morphological image processing rely on the relative ordering of pixel values.
  • the structure element is positioned at all possible locations in the image and compared with the corresponding neighborhood of pixels.
  • the position where the structuring element has been placed is called the reference pixel, the choice of which is arbitrary.
  • A is an image, such as a binary image
  • S is a structuring element; S_(i,j) denotes the operation of placing S with its reference pixel at the pixel (i,j) of A.
  • FIG. 2C illustrates the image 260 after performing morphological image processing (for example, hole-filling and object-opening) using a disk-shaped structuring element with a radius of 6 voxels.
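The hole-filling and object-opening steps with a disk-shaped structuring element can be sketched with SciPy's `ndimage` module; the radius default of 6 follows the example above, while the function names `disk` and `clean_mask` are illustrative:

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Disk-shaped structuring element of the given radius (in voxels)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def clean_mask(mask, radius=6):
    """Hole-filling followed by object-opening, as in Figure 2C."""
    filled = ndimage.binary_fill_holes(mask)           # remove interior holes
    return ndimage.binary_opening(filled, structure=disk(radius))
```

Opening (erosion followed by dilation) removes objects smaller than the structuring element while preserving larger tissue regions, which matches the intent of suppressing intensity outliers in the preliminary segmentation.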
  • the tissue containing the seed tissue point is segmented, as shown in Figure 2D as image 280.
  • the seed tissue point is segmented by applying an 8-neighbourhood connected component analysis. More detail on the neighbourhood connected component analysis will be provided below.
  • the image 280 shows the seed tissue point after segmentation.
  • a spline function is generated.
  • a B-spline function is generated.
  • a B-spline is a piecewise polynomial function of degree k defined on a range u_0 ≤ u ≤ u_m.
  • the i-th B-spline basis function of degree k, B_i,k(u), can be derived recursively by the Cox-de Boor recursion formula:

    B_i,0(u) = 1 if u_i ≤ u < u_i+1, and 0 otherwise;
    B_i,k(u) = ((u - u_i) / (u_i+k - u_i)) B_i,k-1(u) + ((u_i+k+1 - u) / (u_i+k+1 - u_i+1)) B_i+1,k-1(u).

  • Basis function B_i,k(u) is non-zero on [u_i, u_i+k+1). On any knot span [u_i, u_i+1), at most k + 1 basis functions of degree k are non-zero. Given n + 1 control points P_0, P_1, ..., P_n and m + 1 knots u_0, u_1, ..., u_m, the B-spline curve of degree k defined by these control points and knots is

    C(u) = Σ_{i=0..n} P_i B_i,k(u).
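The Cox-de Boor recursion and the resulting B-spline curve can be sketched directly; this is a straightforward scalar implementation (for 2-D contour control points, the same code applies per coordinate), not the patent's implementation:

```python
def bspline_basis(i, k, u, knots):
    """i-th B-spline basis function of degree k via Cox-de Boor recursion."""
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * bspline_basis(i, k - 1, u, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - u) / d2 * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def bspline_curve(u, control_points, knots, k):
    """C(u) = sum_i P_i * B_{i,k}(u) over n + 1 control points."""
    return sum(p * bspline_basis(i, k, u, knots)
               for i, p in enumerate(control_points))
```

With a clamped knot vector, the basis functions sum to one on the valid parameter range, so the curve stays within the convex hull of the control points sampled from the tissue contour.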
  • Figures 3A - 3C illustrate images of an organ's region-of-interest during a spline function generation.
  • Figure 3A shows an image 320 obtained from the preliminary segmentation.
  • image 320 is identical to image 280.
  • Figure 3B shows image 340 having sampled control points 302 from the derived contour of the organ's region-of-interest.
  • Figure 3C shows a segmented image 360 having a seed point 304 that is constructed based on a B-spline function.
  • Tissues in consecutive neighboring frames and slices could share similarities in shape information, which can be used for further refinement of the segmented regions.
  • C_curr may be used to denote the set of control points in the current frame or slice, and C_last the set of control points in the previous frame or slice.
  • Various approaches can be implemented to impose shape similarity constraints. For example, the positions of control points in C_curr can be varied so that a predefined objective function of C_last and C_curr becomes optimal.
  • FIGS 4A - 4D illustrate images of the organ's region-of-interest during segmentation in accordance with the spline function.
  • Figure 4A shows an image 420 obtained from a preliminary segmentation.
  • image 420 is identical to image 280.
  • Figure 4B shows the image 440 having control points 404 from the previous frames, which are obtained in accordance with the spline function, around a selection point 402.
  • Figure 4C shows an image 460 of the organ's region-of-interest with refined control points 406 in the current frame.
  • Figure 4D shows the image 480 of the organ's region-of-interest after segmentation in accordance with the spline function.
  • the image 480 is identical to image 280.
  • FIG. 5 shows a flowchart 500 of a method for segmenting each of a plurality of DCE images.
  • At least one or more DCE images are input into an apparatus for processing and kinetic modeling of an organ's region-of-interest in step 502.
  • the one or more DCE images are segmented into foreground (consisting of tissues) and background (mostly composed of air) in step 504.
  • the segmented images are then morphologically processed in step 506.
  • a connected component analysis is carried out in step 508.
  • For a connected component analysis, in order for two pixels (say p, q) to be connected, their values must both be 1 for a binary image and the pixels should be neighbors. In image processing, there are usually two types of neighborhoods. For a pixel p with coordinates (i, j), the sets of pixels are given by:

    N_4(p) = {(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)}    (5)
    N_8(p) = N_4(p) ∪ {(i + 1, j + 1), (i + 1, j - 1), (i - 1, j + 1), (i - 1, j - 1)}    (6)
  • Two pixels are considered to be 8-neighborhood connected if both pixels have value 1 and are within the 8-neighborhood of one another.
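An 8-neighbourhood connected component analysis that keeps only the component containing the seed point can be sketched with `scipy.ndimage.label`, where a 3x3 all-ones structuring element encodes 8-connectivity (the function name `seed_component` is illustrative):

```python
import numpy as np
from scipy import ndimage

def seed_component(mask, seed):
    """Keep only the 8-connected component of `mask` containing `seed`."""
    eight = np.ones((3, 3), dtype=bool)        # 8-neighbourhood structure
    labels, _ = ndimage.label(mask, structure=eight)
    return labels == labels[seed]
```

Diagonal neighbours belong to the same component under 8-connectivity but not under 4-connectivity, which is why the structuring element includes the corner entries.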
  • Tissue mask generation: the results of the connected component analysis in step 508 and the seed point selection in step 509 are used to generate a tissue mask in step 510.
  • a determination step 512 is carried out to find out whether the current frame is the first frame. In the event that it is the first frame, control points are generated in step 514. The control points are obtained in step 516. In the event that it is not the first frame, control points are evolved in step 518. In step 518, if pixel p at location (i, j) of a previous frame is a control point, then the pixel at location (i, j) of the current frame will be labelled as a control point.
  • This process of evolving control point information from the previous frame to the current frame is called control points evolution.
  • the control points obtained in steps 514 and 518 are used to refine segmentation in step 520.
  • the control points and seed point are updated in step 521. This information is helpful for segmenting the tissue of interest in the next frame or slice. These processes repeat until the last frame or slice and the output results are generated in step 522.
  • Figure 6 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest.
  • Figure 6 shows an image 220 having a location x (or 202) which has been selected from the organ's region-of-interest.
  • Figure 6 also shows an image 280 which is obtained from the preliminary segmentation.
  • Figure 6(c) shows how the selected location x is cropped in image 280, registered and then magnified for better resolution, as shown in image 600.
  • registration of DCE images focuses on the organ's region-of-interest by cropping and magnifying the selected area.
  • the image registration process is to find a transform T that will map one image f onto another image g so that a predefined criterion that measures the similarity of the two images is optimized.
  • Both rigid and non-rigid transformations can be applied for registration. While rigid transformation allows for only translational and rotational displacements between two images, non-rigid methods allow for deformable changes in tissue shape and size, with the deletion of existing voxels (or information) and the generation of new voxels through some form of interpolation or approximation.
  • rigid methods tend to preserve the original information in the aligned region and are usually more robust and efficient than non-rigid methods.
  • the registration process is used on colorectal tumors.
  • deformation of a tumor is usually not drastic and only involves minor variations in the sizes of the cropped images.
  • a rigid registration method based on mutual information (MI), which minimizes variation in the cropped images, may be used.
  • possible missing or additional data due to variations in cropped image sizes can be treated as outliers in the tissue concentration-time curves, which can be detected before kinetic analysis by model-fitting. More details on the MI method are provided below, followed by its application to DCE images.
  • the entropy of an image f can be defined as H(f) = -Σ_i p_i log p_i, (7) which measures the dispersion in the image histogram.
  • the mutual information of two images f and g can then be defined as MI(f, g) = H(f) + H(g) - H(f, g), (8) where H(f, g) denotes the joint entropy computed from the joint histogram of f and g.
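The entropy and mutual information measures above can be estimated from image histograms; a minimal sketch, where the bin count of 32 is an illustrative choice rather than a value from the source:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum p log p over non-zero bins."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(f, g, bins=32):
    """MI(f, g) = H(f) + H(g) - H(f, g), estimated from histograms."""
    joint, _, _ = np.histogram2d(np.ravel(f), np.ravel(g), bins=bins)
    pfg = joint / joint.sum()          # joint probabilities
    pf = pfg.sum(axis=1)               # marginal of f
    pg = pfg.sum(axis=0)               # marginal of g
    return entropy(pf) + entropy(pg) - entropy(np.ravel(pfg))
```

MI is maximal when the two images are perfectly aligned (their intensities are maximally predictive of each other) and drops toward zero for statistically independent images, which is what makes it a useful alignment criterion.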
  • Figure 7 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest over a time period.
  • in DCE imaging, usually a few slices (or locations) of the tissue of interest are acquired at each time point (t).
  • t: time point; n_s: number of slices acquired at each time point; n_t: number of time points in the DCE dataset.
  • each slice at time t_k will be registered with the targeted slice at t_1 by the mutual information method, yielding optimal transformation parameters (rotation and translation) as well as the corresponding value of mutual information.
  • the slice with the maximum mutual information will be selected as the desired candidate. This registration process is illustrated in Figure 11A.
  • Tracer curve refers to a change in concentration of a contrast agent at a voxel or in an organ's region-of-interest as derived from registered DCE images.
  • a contrast agent typically refers to a substance that is comparatively opaque to x-rays and is introduced into an area of a body so as to contrast an internal part with its surrounding tissue in radiographic visualization.
  • a voxel typically refers to a value on a regular grid in three-dimensional space.
  • Kinetic modeling is usually performed by fitting a kinetic model on a tracer curve derived at a voxel or for the organ's region-of-interest.
  • the generalized Tofts model is commonly used for fitting of DCE images, and its residue function can be written as:

    R(t) = v_v δ(t) + K^trans exp(-(K^trans / v_e) t)

  • where δ(t) is the Dirac delta function, K^trans is the volume transfer constant, and v_v and v_e denote the fractional vascular and interstitial volumes, respectively.
  • the free parameters used for fitting the AATH model are {v_e, v_p, F_p, PS}.
  • Tissue tracer concentration-time curves C_tiss(t) sampled from DCE images are fitted using the above tracer kinetic models by minimizing the least-squares cost function Σ_i [C_tiss(t_i) - Ĉ_tiss(t_i; θ)]^2, where Ĉ_tiss(t; θ) is the model-predicted concentration for parameter vector θ.
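Least-squares fitting of a tracer kinetic model can be sketched with `scipy.optimize.curve_fit`; the extended Tofts form, the toy arterial input function, and the parameter values below are illustrative assumptions for a synthetic example, not the patent's data or exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, vp, cp):
    """Extended Tofts model on a uniform time grid:
    C_tiss(t) = vp*Cp(t) + Ktrans * (Cp convolved with exp(-Ktrans*t/ve))."""
    dt = t[1] - t[0]
    kernel = np.exp(-ktrans * t / ve)
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

t = np.linspace(0, 5, 100)              # time in minutes (toy grid)
cp = 5.0 * t * np.exp(-t / 0.6)         # toy arterial input function
c_tiss = tofts(t, 0.25, 0.30, 0.05, cp) # synthetic "measured" curve

# least-squares fit: minimize sum_i (C_tiss(t_i) - model(t_i; theta))^2
popt, _ = curve_fit(lambda tt, k, ve, vp: tofts(tt, k, ve, vp, cp),
                    t, c_tiss, p0=[0.1, 0.2, 0.02],
                    bounds=([0, 0, 0], [2, 1, 1]))
```

Because the synthetic curve is noiseless and generated by the same model, the fit recovers the generating parameters; with real DCE data, outlier handling (discussed below for Figures 9A-9B) precedes this step.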
  • FIG. 8 shows a tracer concentration-time curve 810 obtained by sampling from a descending aorta visible on the DCE image 800.
  • the graph 810 plots concentration 814 against time 812.
  • Figure 9A illustrates a tissue concentration-time curve C_tiss(t) before outlier data points in the tissue concentration-time curve are removed.
  • Figure 9A shows outlier data points 902, 904 and 906 that are distant from other data points (e.g. data point 910) on the curve 920.
  • the outlier data points 902, 904 and 906 can be detected using robust regression based methods.
  • Figure 9B illustrates a tissue concentration-time curve 940 after outlier data points in the tissue concentration-time curve are removed.
  • the outlier data points 902, 904 and 906 shown in Figure 9A are removed and replaced by interpolated values as shown in Figure 9B.
  • Data points that are not distant from the other data points on the curve are not removed, for example data point 910. In this manner, outlier data points can be detected and removed before model-fitting.
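One simple robust scheme along these lines flags points whose residual from a smoothed trend exceeds a median-absolute-deviation threshold, then replaces them by interpolation; the moving-average smoother, the threshold, and the function name are illustrative choices, not the specific robust-regression method of the source:

```python
import numpy as np

def remove_outliers(t, c, thresh=3.5):
    """Flag points far from a moving-average trend using the median
    absolute deviation (MAD) of residuals, then interpolate over them."""
    trend = np.convolve(c, np.ones(5) / 5, mode="same")
    resid = c - trend
    med = np.median(resid)
    mad = np.median(np.abs(resid - med)) + 1e-12
    outliers = np.abs(resid - med) / (1.4826 * mad) > thresh  # robust z-score
    clean = c.copy()
    clean[outliers] = np.interp(t[outliers], t[~outliers], c[~outliers])
    return clean, outliers
```

The 1.4826 factor scales the MAD to be consistent with the standard deviation of a normal distribution, so the threshold behaves like a conventional z-score cutoff while remaining insensitive to the outliers themselves.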
  • Figures 10A-10D illustrate how propagated shape information can help to resolve the problem of under-segmentation due to adjoining tissues.
  • Figure 10A shows an image 1020 of the organ's region-of-interest with a selected location x_0 shown as 1002. However, during imaging, neighboring tissues 1004 and structures could be displaced relative to each other. A main difficulty in segmentation is the occasional presence of adjoined neighboring tissues, which could result in under-segmentation.
  • Figure 10B shows an image 1040 of the organ's region-of-interest after a preliminary segmentation in which neighboring tissues in contact with the selected location x_0 are also segmented.
  • Figure 10B shows a location 1006 which includes the selected location 1002 and the neighboring tissues 1004.
  • Figure 10C shows an image 1060 of the organ's region-of-interest after propagating control points 1008 obtained from the previous frame or slice.
  • the control points 1008 are generated based on the derived contour of a previous frame and in accordance with the B-spline function.
  • Figure 10D shows the final result 1080 of the selected location 1010 after the adjoining tissues have been removed in accordance with the B-spline control points.
  • Figure 11A illustrates images 1102 obtained during registration in accordance with the method of mutual information over a time period.
  • Figure 11 B illustrates images 1104 during registration in accordance with the method of gradient correlation over a time period.
  • Mutual information compares image intensities, while gradient correlation computes the similarity of image gradients.
  • image gradient is more sensitive to noise than image intensity.
  • image gradient could be more discriminative in object description if the object features are consistent throughout the aligned images. This may not be true for tumors in DCE imaging due to the wash-in and wash-out of tracer.
  • the tumor at time points 27s and 32s are better aligned with the neighboring frames by the mutual information method than the gradient correlation method.
  • tissue intensity will change along with the wash-in and wash-out of a contrast agent which flows through the organ's region-of-interest over a time period. Consequently, tissue image features will be inconsistent throughout the process. For example, in Figure 11A the central region of the tumor appears to be hypo-intense after about 26 seconds. Segmentation which relies on the detection of object features, such as active contours, would yield inconsistent results among the DCE images at different time points.
  • Figure 12A shows images 1202, 1204 and 1206 obtained by segmentation by a conventional method, for example a B-snake method.
  • Figure 12B shows images 1252, 1254 and 1256 obtained by a method according to an embodiment of the invention.
  • the B-snake method evolves the contour according to object boundaries where the contour is encoded using B-splines.
  • the B-snake method over-segments these images because the contour is trapped by the hypo-intense region within the tumor.
  • Figure 13A shows maps of tissue physiological parameters 1302, 1304, 1306 and 1308 associated with the kinetic model in accordance with an embodiment of the invention, for example the AATH model.
  • Figure 13B shows maps of tissue physiological parameters 1310, 1312, 1314 and 1316 associated with the kinetic model without registering the images.
  • the maps generated with image registration 1302, 1304, 1306 and 1308, shown in Figure 13A, clearly delineate a region near the center of the tissue, which has lower PS, lower v_p and higher v_e mentioned in Equations (11), (12) and (14).
  • the maps without registration 1310, 1312, 1314 and 1316, shown in Figure 13B, reveal a similar region, but are generally noisier.
  • the computing device 1400 further includes a display interface 1402 which performs operations for rendering images to an associated display 1430, and an audio interface 1432 for performing operations for playing audio content via associated speaker(s) 1434.
  • the term "computer program product” may refer, in part, to removable storage unit 1418, removable storage unit 1422, a hard disk installed in hard disk drive 1412, or a carrier wave carrying software over communication path 1426 (wireless link or cable) to communication interface 1424 via an interface 1450.
  • a computer readable medium can include magnetic media, optical media, or other recordable media, or media that transmits a carrier wave or other signal.
  • These computer program products are devices for providing software to the computing device 1400.
  • Computer readable storage medium refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computing device 1400 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc™, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computing device 1400.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computing device 1400 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • the computer programs are stored in main memory 1408 and/or secondary memory 1410. Computer programs can also be received via the communication interface 1424. Such computer programs, when executed, enable the computing device 1400 to perform one or more features of embodiments discussed herein. In various embodiments, the computer programs, when executed, enable the processor 1404 to perform features of the above-described embodiments via a communication infrastructure 1406. Accordingly, such computer programs represent controllers of the computer system 1400.
  • Software may be stored in a computer program product and loaded into the computing device 1400 using the removable storage drive 1414, the hard disk drive 1412, or the interface 1420.
  • the computer program product may be downloaded to the computer system 1400 over the communications path 1426.
  • the software when executed by the processor 1404, causes the computing device 1400 to perform functions of embodiments described herein.
  • It is to be understood that the embodiment of Figure 14 is presented merely by way of example. Therefore, in some embodiments one or more features of the computing device 1400 may be omitted. Also, in some embodiments, one or more features of the computing device 1400 may be integrated. Additionally, in some embodiments, one or more features of the computing device 1400 may be split into one or more component parts. [0082] It will be appreciated by a person skilled in the art that numerous variations and/or modifications may be made to the present invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects to be illustrative and not restrictive.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

A method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region-of-interest is provided. The method includes deriving at least a contour of an exterior of the organ's region-of-interest from one or more of a plurality of images; generating a spline function in response to the derived contour of the exterior of the organ's region-of-interest from the one or more of the plurality of images; registering the plurality of images wherein the organ's region-of-interest has been segmented; deriving a tracer curve for the organ's region-of-interest in the registered images, the tracer curve indicating a change in concentration of a contrast agent flowing through the organ's region-of-interest over a time period; and kinetic modeling by fitting a kinetic model to the tracer curve to generate one or more maps of tissue physiological parameters associated with the kinetic model.

Description

AUTOMATIC REGION-OF-INTEREST SEGMENTATION AND REGISTRATION OF DYNAMIC CONTRAST-ENHANCED IMAGES
OF COLORECTAL TUMORS
FIELD OF THE INVENTION
[0001] The present invention relates to processing image sequences arising from dynamic contrast-enhanced (DCE) imaging for tumor disease diagnosis. In particular, it relates to pre-DCE modeling for segmenting and registering organ tissue of interest.
BACKGROUND
[0002] An important aspect of research related to biomedical science is the detection and analysis of tumors in organs. With current technology, images of the organs are often analyzed manually in order to detect presence of a tumor. However, manual analysis of images is both time-consuming and tedious.
[0003] One conventional technique commonly used is magnetic resonance imaging (MRI). MRI techniques are widely used to image soft tissue within human (or animal) bodies and there is much work in developing techniques to perform the analysis in a way which characterizes the tissue being imaged, for instance as normal or diseased. However, to date, conventional MRI only provides information about the tissue morphology and does not provide information about tissue physiology.
[0004] Malignant tissues or tumors have a number of distinguishing characteristics. For example, to sustain their aggressive growth they generate millions of tiny "micro-vessels" that increase the local blood supply around the tumor to sustain its abnormal growth. A technique which is based on this physiology is dynamic contrast-enhanced (DCE) imaging.
[0005] DCE imaging using computed tomography (CT) or magnetic resonance imaging (MRI) is a functional imaging technique that can be used for in vivo assessment of tumor microcirculation. In recent years, DCE imaging has attracted increasing research interest as a potential biomarker for antiangiogenic drug treatment.
[0006] DCE image analysis requires outlining regions-of-interest, as well as image registration to correct for any body movement during imaging, before tracer kinetic analysis. Ideally, image registration should be performed with respect to the tissue of interest instead of the whole image domain, which implies that the tissue of interest must be segmented first. This would typically require the user to manually outline the region-of-interest on multiple (usually about 50 or more) DCE images, which is both time-consuming and tedious.
[0007] Thus, what is needed is a method that is able to non-manually process images for segmenting an organ's region-of-interest. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the disclosure.
SUMMARY
[0008] According to a first aspect of the invention, a method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region-of-interest is provided. The method includes deriving at least a contour of an exterior of the organ's region-of-interest from one or more of a plurality of images; generating a spline function in response to the derived contour of the exterior of the organ's region-of-interest from the one or more of the plurality of images; registering the plurality of images wherein the organ's region-of-interest has been segmented; deriving a tracer curve for the organ's region-of-interest in the registered images, the tracer curve indicating a change in concentration of a contrast agent flowing through the organ's region-of-interest over a time period; and kinetic modeling by fitting a kinetic model to the tracer curve to generate one or more maps of tissue physiological parameters associated with the kinetic model.
[0009] In accordance with another aspect, a method for registering an organ's region-of-interest is provided. The method includes deriving mutual information in response to each of a plurality of dynamic contrast enhanced (DCE) images; and aligning segments in the organ's region-of-interest in response to the mutual information.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The accompanying figures where like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to illustrate various embodiments and to explain various principles and advantages in accordance with a present embodiment.
[0011] Figure 1 shows a flowchart of a method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region-of-interest in accordance with an embodiment.
[0012] Figures 2A - 2D illustrate images of an organ's region-of-interest during preliminary segmentation.
[0013] Figures 3A - 3C illustrate images of an organ's region-of-interest during a spline function generation.
[0014] Figures 4A - 4D illustrate images of an organ's region-of-interest during segmentation in accordance with the spline function.
[0015] Figure 5 shows a flowchart of a method for segmenting each of a plurality of DCE images.
[0016] Figure 6 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest.
[0017] Figure 7 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest over a time period.
[0018] Figure 8 shows a tracer concentration-time curve.
[0019] Figure 9A illustrates a tissue concentration-time curve before outlier data points in the tissue concentration-time curve are removed.
[0020] Figure 9B illustrates a tissue concentration-time curve after outlier data points in the tissue concentration-time curve are removed.
[0021] Figures 10A-10D illustrate how propagated shape information can help to resolve the problem of under-segmentation due to adjoining tissues.
[0022] Figure 11A illustrates images during registration in accordance with the method of mutual information over a time period.
[0023] Figure 11 B illustrates images during registration in accordance with the method of gradient correlation over a time period.
[0024] Figure 12A shows images obtained by segmentation by a conventional method.
[0025] Figure 12B shows images obtained by a method according to an embodiment of the invention.
[0026] Figure 13A shows maps of tissue physiological parameters associated with the kinetic model in accordance with an embodiment of the invention.
[0027] Figure 13B shows maps of tissue physiological parameters associated with the kinetic model without registering the images.
[0028] Figure 14 shows an exemplary computing device in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
[0029] It should further be appreciated that the exemplary embodiments are only examples, and are not intended to limit the scope, applicability, operation, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention, it being understood that various changes may be made in the function and arrangement of elements and method of operation described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.
[0030] Some portions of the description which follows are explicitly or implicitly presented in terms of algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the data processing arts to convey most effectively the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
[0031] Unless specifically stated otherwise, and as apparent from the following, it will be appreciated that throughout the present specification, discussions utilizing terms such as "deriving", "segmenting", "registering", "kinetic modeling", "scanning", "calculating", "determining", "replacing", "generating", "initializing", "processing", "outputting", or the like, refer to the action and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical quantities within the computer system into other data similarly represented as physical quantities within the computer system or other information storage, transmission or display devices.
[0032] The present specification also discloses apparatus for performing the operations of the methods. Such apparatus may be specially constructed for the required purposes, or may comprise a general purpose computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose machines may be used with programs in accordance with the teachings herein. Alternatively, the construction of more specialized apparatus to perform the required method steps may be appropriate. The structure of a conventional general purpose computer will appear from the description below.
[0033] In addition, the present specification also implicitly discloses a computer program, in that it would be apparent to the person skilled in the art that the individual steps of the method described herein may be put into effect by computer code. The computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein. Moreover, the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the spirit or scope of the invention.
[0034] Furthermore, one or more of the steps of the computer program may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general purpose computer. The computer readable medium may also include a hard-wired medium such as exemplified in the Internet system, or wireless medium. The computer program when loaded and executed on such a general-purpose computer effectively results in an apparatus that implements the steps of the preferred method.
[0035] With reference to Figure 1 , there is provided a method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region- of-interest in accordance with an embodiment. The method (designated generally as reference numeral 100) comprises the following steps:
[0036] Step 102: Input DCE images.
For example, at least one or more DCE images are input into an apparatus for processing and kinetic modeling of an organ's region-of-interest. DCE images are typically acquired at multiple time points and multiple slice locations of a tumor.
[0037] Step 104: Seed point selection.
Following step 102, the next step is to select a seed point. A user only needs to indicate the tissue of interest by making a selection on the tissue. This is also known as deriving at least a contour of an exterior of the organ's region-of-interest.
[0038] Step 106: Seed point.
The location selected by the user in step 104 will serve as a seed point denoted by f_seed(x_0), which represents the image intensity at location x_0. An example seed point can be seen in Figure 2A.
[0039] Step 108: Image segmentation.
Following step 106, the next step is to segment the DCE image. The DCE image may be segmented into foreground (consisting of tissues) and background (mostly composed of air). In an embodiment, this can be accomplished by using a method like Otsu's method. Otsu's method is used to automatically perform clustering-based image thresholding, or reduce a graylevel image to a binary image. The algorithm assumes that the image contains two classes of pixels following a bi-modal histogram (foreground pixels and background pixels) and then calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal.
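A minimal sketch of Otsu's thresholding, written directly from the histogram description above (the function name and binning are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Pick the threshold that minimizes intra-class variance
    (equivalently, maximizes between-class variance)."""
    hist, edges = np.histogram(np.ravel(image), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(p)                # probability of the "background" class
    w1 = 1.0 - w0                    # probability of the "foreground" class
    m0 = np.cumsum(p * centers)      # unnormalized cumulative class mean
    mT = m0[-1]                      # global mean

    # Between-class variance for every candidate threshold; ignore
    # division warnings where one class is empty.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m0) ** 2 / (w0 * w1)
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]
```

Pixels above the returned threshold would form the foreground (tissue) mask, and pixels below it the background (air).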
In a DCE imaging dataset, there exists additional information from neighboring slices (from 3D spatial domain) and other time frames (from temporal domain) which relates to the tissue in the current image by either spatial continuity or temporal movement. This information is reflected in the similarity in object shape between neighboring slices and time frames. As such, a spline function is generated in response to the derived contour of the exterior of the organ's region-of-interest.
[0040] Step 110: Image registration.
Following step 108, the next step is to register the DCE images. In an embodiment, the DCE images are registered to focus on the tissue of interest and to crop the segmented region-of-interest.
[0041] Step 112: DCE modeling.
Following step 110, the next step is to derive a tracer curve for each pixel in the organ's region-of-interest in the registered images, the tracer curve indicating a change in concentration of a contrast agent flowing through the organ's region-of-interest over a time period. The tracer curve can be fitted to a kinetic model to derive values of the associated physiological parameters.
[0042] Step 114: Output parametric maps.
Following step 112, the next step is to fit a kinetic model to the tracer curve to generate one or more maps of tissue physiological parameters associated with the kinetic model.
[0043] The method 100 includes segmentation, registration and kinetic modeling of an organ's region-of-interest. The segmentation and registration are important post-processing steps that can improve subsequent analysis of DCE images by kinetic modeling.
[0044] Figures 2A - 2D illustrate images of the organ's region-of-interest during a preliminary segmentation. As mentioned in the above, a user only needs to indicate the tissue of interest by making a selection on the tissue. As shown in Figure 2A as image 220, the location selected by the user in step 104 will serve as the seed point denoted by f_seed(x_0), which represents the image intensity at location x_0, shown as 202. Advantageously, the user does not have to manually outline a tissue or lesion to indicate a region-of-interest.
[0045] Figure 2B shows an image 240 of the organ's region-of-interest after being segmented into foreground which mainly includes the tissues of the organ's region-of-interest. The methods that are conventionally used to segment the image do not take into account the spatial relationship between pixels. As such, the resulting segmentation map 240, shown in Figure 2B, could be subject to intensity outliers, leading to the presence of holes within the segmented region-of-interest. Also, the tissue of interest would likely be in contact with another tissue and the boundary of the tissue of interest may not be well delineated.
[0046] Figure 2C shows an image 260 of the organ's region-of-interest after being morphologically processed. Typically, morphological image processing is a collection of non-linear operations which probe an image with a small shape or template called a structure element. The outputs of morphological image processing rely on the relative ordering of pixel values. The structure element is positioned at all possible locations in the image and compared with the corresponding neighborhood of pixels. The position where the structure element has been placed is called the reference pixel, the choice of which is arbitrary. For example, let A be an image, such as a binary image, and S a structuring element. S_(i,j) denotes the operation of placing S with its reference pixel at the pixel (i, j) of A. Two basic morphological operations, the erosion of A by S (denoted as A ⊖ S) and the dilation of A by S (denoted as A ⊕ S), can be defined as A ⊖ S = {(i, j): S_(i,j) ⊆ A} and A ⊕ S = (A^c ⊖ S)^c, where A^c = 1 − A is the complement of A.
[0047] From this one can define the two widely used morphological operations: the opening of A by S (denoted as ψ_S(A)) and the closing of A by S (denoted as φ_S(A)), defined as ψ_S(A) = (A ⊖ S) ⊕ S′ and φ_S(A) = (A ⊕ S) ⊖ S′, where S′ is the reflection of S (namely rotation by 180° around its reference pixel). Figure 2C illustrates the image 260 after performing a morphological image processing (for example, hole-filling and object-opening) using a disk-shaped structure element with a radius of 6 voxels. Typically, the shape and size of the structure element will depend on the image tissue characteristics.
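Using SciPy's binary morphology, the hole-filling and object-opening step with a disk-shaped structure element of radius 6 voxels might be sketched as follows (the names `disk` and `clean_mask` are illustrative, not from the patent):

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Disk-shaped structure element of the given radius (in voxels)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def clean_mask(mask, radius=6):
    """Fill holes caused by intensity outliers, then open the mask
    to remove small specks and thin connections to adjoining tissue."""
    filled = ndimage.binary_fill_holes(mask)
    return ndimage.binary_opening(filled, structure=disk(radius))
```

Opening removes structures smaller than the disk, while the prior hole-filling prevents intensity outliers inside the tissue from punching holes in the mask.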
[0048] After that, the tissue containing the seed tissue point is segmented, as shown in Figure 2D as image 280. For example, the seed tissue point is segmented by applying an 8-neighbourhood connected component analysis. More detail on the neighbourhood connected component analysis will be provided below. The image 280 shows the tissue containing the seed point after segmentation.
[0049] Subsequent to the preliminary segmentation of the images of the organ's region-of-interest, a spline function is generated. In an embodiment, a B-spline function is generated. A B-spline is a piecewise polynomial function of degree k defined in a range u_0 ≤ u ≤ u_m. The points u = u_i are the meeting places of the polynomial pieces, known as knots. The i-th B-spline basis function of degree k, B_(i,k)(u), can be derived recursively by the Cox-de Boor recursion formula:
B_(i,0)(u) = 1 if u_i ≤ u < u_(i+1), and 0 otherwise, (1)
B_(i,k)(u) = ((u − u_i) / (u_(i+k) − u_i)) B_(i,k−1)(u) + ((u_(i+k+1) − u) / (u_(i+k+1) − u_(i+1))) B_(i+1,k−1)(u). (2)
Basis function B_(i,k)(u) is non-zero on [u_i, u_(i+k+1)). On any knot span [u_i, u_(i+1)), at most k + 1 basis functions of degree k are non-zero. Given n + 1 control points P_0, P_1, …, P_n and m + 1 knots u_0, u_1, …, u_m, the B-spline curve of degree k defined by these control points and knots is
C(u) = Σ_(i=0)^n B_(i,k)(u) P_i. (3)
The shape of the B-spline curve can be changed through modifying the positions of control points, the positions of knots or the degree of the curve. Note that n, m and k satisfy m = n + k + 1. By repeating certain knots and control points, the start and end of the generated curve can be joined together forming a closed loop, which can conveniently be utilized to encode the shape information of a segmented object.
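Equations (1)-(3) can be implemented directly; the sketch below evaluates the Cox-de Boor recursion naively and sums the weighted control points (an unoptimized illustration, not the patent's implementation):

```python
import numpy as np

def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion, equations (1)-(2): B_{i,k}(u)."""
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (u - knots[i]) / d1 * bspline_basis(i, k - 1, u, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - u) / d2 * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def bspline_curve(ctrl, k, knots, us):
    """Equation (3): C(u) = sum_i B_{i,k}(u) P_i, sampled at parameters us."""
    ctrl = np.asarray(ctrl, dtype=float)
    pts = []
    for u in us:
        coeffs = np.array([bspline_basis(i, k, u, knots) for i in range(len(ctrl))])
        pts.append(coeffs @ ctrl)
    return np.array(pts)
```

With a clamped knot vector [0, 0, 0, 0, 1, 1, 1, 1] and four control points, the degree-3 curve reduces to the familiar cubic Bezier curve, which makes the sketch easy to check.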
[0050] After the preliminary segmentation, the shape of the segmented ROI can be characterized by its contour, using n + 1 control points that are uniformly spaced. Then a B-spline curve can be reconstructed through equations (2) to (3) which will be an approximation of the object contour.
[0051] Figures 3A - 3C illustrate images of an organ's region-of-interest during a spline function generation. Figure 3A shows an image 320 obtained from the preliminary segmentation. In an embodiment, image 320 is identical to image 280. Figure 3B shows image 340 having sampled control points 302 from the derived contour of the organ's region-of-interest. Figure 3C shows a segmented image 360 having a seed point 304 that is constructed based on a B-spline function.
[0052] Tissues in consecutive neighboring frames and slices could share similarities in shape information, which can be used for further refinement of the segmented regions. Let P_last denote the n + 1 control points in the last frame/slice, and P_last(i) refer to the i-th control point. P_curr may be used to denote the control points in the current frame or slice. Various approaches can be implemented to impose shape similarity constraints. For example, the positions of control points in P_curr can be varied so that a predefined objective function of P_last and P_curr becomes optimal.
[0053] For example, on the current image, let the region identified by the preliminary segmentation procedure be denoted by Ω_curr, and ∂Ω_curr denote its boundary. From ∂Ω_curr, the point which is nearest to the i-th control point in P_last can be determined:
P_i^curr = argmin_(x ∈ ∂Ω_curr) d(x, P_last(i)), (4)
where d(·) stands for a distance function (for example, the Euclidean distance). The collection of points {P_i^curr, i = 0, …, n} can serve as the control points of the new tissue region, which adds the shape information from the last frame or slice to the segmentation result based on information from the current image alone.
[0054] Figures 4A - 4D illustrate images of the organ's region-of-interest during segmentation in accordance with the spline function. Figure 4A shows an image 420 obtained from a preliminary segmentation. In an embodiment, image 420 is identical to image 280. Figure 4B shows the image 440 having control points 404 from the previous frames, which are obtained in accordance with the spline function, around a selection point 402. Figure 4C shows an image 460 of the organ's region-of-interest with refined control points 406 in the current frame. Figure 4D shows the image 480 of the organ's region-of-interest after segmentation in accordance with the spline function. In an embodiment, the image 480 is identical to image 280.
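The refinement of equation (4), picking for each previous-frame control point the nearest point on the current boundary, is a few lines of NumPy (array layout and function name are assumptions for illustration):

```python
import numpy as np

def refine_control_points(boundary, last_ctrl):
    """For each control point from the last frame/slice, pick the nearest
    point on the current segmentation boundary (equation (4)).
    boundary:  (m, 2) array of boundary coordinates of Omega_curr.
    last_ctrl: (n+1, 2) control points from the previous frame/slice."""
    boundary = np.asarray(boundary, dtype=float)
    refined = []
    for p in np.asarray(last_ctrl, dtype=float):
        d = np.linalg.norm(boundary - p, axis=1)   # Euclidean distance d(.)
        refined.append(boundary[np.argmin(d)])
    return np.array(refined)
```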
[0055] Figure 5 shows a flowchart 500 of a method for segmenting each of a plurality of DCE images. At least one or more DCE images are input into an apparatus for processing and kinetic modeling of an organ's region-of-interest in step 502. The one or more DCE images are segmented into foreground (consisting of tissues) and background (mostly composed of air) in step 504. The segmented images are then morphologically processed in step 506. A connected component analysis is carried out in step 508. For a connected component analysis, in order for two pixels (say p, q) to be connected, their values must both be 1 for a binary image and the pixels should be neighbors. In image processing, there are usually two types of neighborhoods. For a pixel p with coordinates (i, j), the 4-neighborhood is the set of pixels given by:
N_4(p) = {(i + 1, j), (i − 1, j), (i, j + 1), (i, j − 1)}. (5)
This is also known as a 4-neighborhood connected component analysis.
In order to determine an 8-neighborhood connected component, the following equation is used:
N_8(p) = N_4(p) ∪ {(i + 1, j + 1), (i + 1, j − 1), (i − 1, j + 1), (i − 1, j − 1)}. (6)
Two pixels are considered 8-neighborhood connected if both pixels have value 1 and are within the 8-neighborhood of one another.
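An 8-neighborhood connected component containing the seed point, per equation (6), can be extracted with a simple breadth-first flood fill (a sketch; a production implementation would use an optimized labeling routine):

```python
from collections import deque
import numpy as np

def connected_component(mask, seed):
    """Return the 8-connected component of a binary mask that contains
    the seed point, as in the seed-tissue segmentation step."""
    mask = np.asarray(mask, dtype=bool)
    out = np.zeros_like(mask)
    if not mask[seed]:
        return out
    h, w = mask.shape
    q = deque([seed])
    out[seed] = True
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):            # 8-neighborhood of (i, j), equation (6)
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not out[ni, nj]:
                    out[ni, nj] = True
                    q.append((ni, nj))
    return out
```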
[0056] The results of the connected component analysis in step 508 and a seed point selection in step 509 are used to generate a tissue mask in step 510. During the tissue mask generation 510, a determination step 512 is carried out to find out whether the tissue mask is from the first frame. In the event that it is the first frame, control points are generated in step 514. The control points are obtained in step 516. In the event that it is not the first frame, control points are evolved in step 518. In step 518, if the pixel p at location (i, j) of a previous frame is a control point, then the pixel at location (i, j) of the current frame will be labelled as a control point. This process of evolving the information of control points from the previous frame to the current frame is called control point evolution. The control points obtained in steps 514 and 518 are used to refine the segmentation in step 520. The control points and seed point are updated in step 521. This information is helpful for segmenting the tissue of interest in the next frame or slice. These processes repeat until the last frame or slice, and the output results are generated in step 522.
[0057] Figure 6 illustrates images of an organ's region-of-interest during a registration of an organ's region-of-interest. Figure 6 shows an image 220 having a location x (or 202) which has been selected from the organ's region-of-interest. Figure 6 also shows an image 280 which is obtained from the preliminary segmentation. Figure 6 further shows how the selected location x is cropped in image 280, registered and then magnified for better resolution, as shown in image 600. Advantageously, registration of DCE images focuses on the organ's region-of-interest by cropping and magnifying the selected area.
[0058] The image registration process is to find a transform T that will map one image f onto another image g so that a predefined criterion that measures the similarity of the two images is optimized. Both rigid and non-rigid transformations can be applied for registration. While rigid transformation allows for only translational and rotational displacements between two images, non-rigid methods allow for deformable changes in tissue shape and size, with the deletion of existing voxels (or information) and the generation of new voxels through some form of interpolation or approximation methods. Advantageously, rigid methods tend to preserve the original information in the aligned region and are usually more robust and efficient than non-rigid methods.
[0059] In an embodiment, the registration process is used on colorectal tumors. For colorectal tumors, deformation of the tumor is usually not drastic and only involves minor variations in the sizes of the cropped images. A rigid registration method based on mutual information (MI) which minimizes variation in the cropped images may be used. For the application of the rigid registration method, possible missing or additional data due to variations in cropped image sizes can be treated as outliers in the tissue concentration-time curves, which can be detected before kinetic analysis by model-fitting. More details on the MI method are provided below, followed by its application on DCE images.
[0060] Shannon's Entropy
Let p_i denote the frequency of intensity i appearing in an image, and p(f) refer to the histogram (i.e., intensity distribution) of image f. Then Shannon's entropy is defined as
H(f) = −Σ_i p_i log p_i, (7)
which measures the dispersion in the image histogram.
[0061] Joint entropy
Let p_(i,j) denote the frequency of the pair of intensities (i, j) occurring in a pair of images, and p(f, g) refer to the resulting 2-D histogram of images f and g. If the images are perfectly registered, the occurrence of intensity pairs will be very focused (localized) in the 2-D histogram. On the other hand, the dispersion in the 2-D histogram will be large for misaligned images. Naturally, to quantify the degree of histogram dispersion, a joint entropy can be defined in a similar way:
H(f, g) = −Σ_(i,j) p_(i,j) log p_(i,j). (8)
Images can be registered by minimizing the joint entropy.
[0062] Mutual information
The mutual information of two images can be defined as:
I(f, g) = Σ_(i,j) p_(i,j) log (p_(i,j) / (p_i p_j)), (9)
which measures the inherent dependence between the images. If the images are independent, then p_(i,j) = p_i p_j and the mutual information is 0. It can be shown that
I(f, g) = H(f) + H(g) − H(f, g), (10)
and thus, maximizing the mutual information is equivalent to minimizing the joint entropy. Mutual information includes the entropy of each individual image, which is advantageous in registering images.
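Equations (7)-(10) translate into a short joint-histogram estimate of mutual information (the bin count and function name are illustrative choices):

```python
import numpy as np

def mutual_information(f, g, bins=32):
    """I(f, g) = H(f) + H(g) - H(f, g), equations (7)-(10), estimated
    from the joint intensity histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(np.ravel(f), np.ravel(g), bins=bins)
    p_fg = joint / joint.sum()
    p_f = p_fg.sum(axis=1)          # marginal histogram of f
    p_g = p_fg.sum(axis=0)          # marginal histogram of g

    def entropy(p):
        p = p[p > 0]                # 0 log 0 is taken as 0
        return -np.sum(p * np.log(p))

    return entropy(p_f) + entropy(p_g) - entropy(np.ravel(p_fg))
```

For registration, a search over rotations and translations would evaluate this quantity for each candidate transform and keep the one with the largest value.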
[0063] Figure 7 illustrates images of an organ's region-of-interest during registration of the organ's region-of-interest over a time period. For DCE imaging, usually a few slices (or locations) of the tissue of interest are obtained at each time point t. Suppose there are ns slices acquired at each time point and nt time points in the DCE dataset. For a particular slice at one time point tx, the problem is to identify the slice among all slices at time tk that is most aligned with the targeted slice at tx. To achieve this, each slice at time tk is registered with the targeted slice at tx by the mutual information method, yielding optimal transformation parameters (rotation and translation) as well as the corresponding value of mutual information. After registering all ns slices at time tk, the slice with the maximum mutual information is selected as the desired candidate. This registration process is illustrated in Figure 11A.
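The slice-selection step of paragraph [0063] can be sketched as follows. This is an illustrative stand-in, not the disclosed implementation: the exhaustive integer-translation search below replaces the full rigid (rotation and translation) optimization, and all names are ours:

```python
import numpy as np

def mi(f, g, bins=16):
    """Mutual information of two equally sized image slices (Eq. (9))."""
    hist, _, _ = np.histogram2d(f.ravel(), g.ravel(), bins=bins)
    p = hist / hist.sum()
    p_i, p_j = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (p_i * p_j)[nz])))

def best_slice(target, candidates, shifts=range(-2, 3)):
    """Register each candidate slice at time tk to the target slice at tx and
    return the index of the candidate whose optimized MI is maximal.
    (Translations only here; a real implementation would also optimize rotation.)"""
    scores = [max(mi(target, np.roll(np.roll(s, dy, 0), dx, 1))
                  for dy in shifts for dx in shifts)
              for s in candidates]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
target = rng.random((32, 32))
candidates = [rng.random((32, 32)),
              np.roll(target, 1, axis=0),   # the target itself, shifted one row
              rng.random((32, 32))]
print(best_slice(target, candidates))       # 1
```

The shifted copy of the target wins because the translation search recovers perfect alignment, at which the joint histogram is maximally localized.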
[0064] A tracer curve refers to a change in concentration of a contrast agent at a voxel or in an organ's region-of-interest as derived from registered DCE images. A contrast agent typically refers to a substance that is comparatively opaque to x-rays and is introduced into an area of a body so as to contrast an internal part with its surrounding tissue in radiographic visualization. A voxel typically refers to a value on a regular grid in three-dimensional space. Kinetic modeling is usually performed by fitting a kinetic model to a tracer curve derived at a voxel or for the organ's region-of-interest.
[0065] Generalized Tofts model
The generalized Tofts model is commonly used for fitting of DCE images, and its residue function is given by:
R_Tofts(t) = Ktrans exp{-(Ktrans / ve) t} + vp δ(t),   (11)
where δ(t) is the Dirac delta function and Ktrans is the transfer constant. vp and ve denote the fractional vascular and interstitial volumes, respectively.
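As a minimal numerical sketch of Equation (11) (our own illustration, with illustrative parameter values; the Dirac delta is discretized as vp/Δt in the first sample of a uniform time grid):

```python
import numpy as np

def r_tofts(t, ktrans, ve, vp, dt):
    """Generalized Tofts residue function, Eq. (11), on a uniform time grid.
    The delta term vp*delta(t) is represented discretely as vp/dt at t = 0."""
    r = ktrans * np.exp(-(ktrans / ve) * t)
    r[0] += vp / dt
    return r

# Sanity check: the area under R(t) approaches ve + vp, since the exponential
# term integrates to ve and the delta term to vp.
dt = 0.05
t = np.arange(0, 300, dt)
r = r_tofts(t, ktrans=0.2, ve=0.3, vp=0.05, dt=dt)
print(np.sum(r) * dt)   # close to ve + vp = 0.35
```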
[0066] Adiabatic approximation to tissue homogeneity model
The residue function of the adiabatic approximation to tissue homogeneity (AATH) model is given by:
R_AATH(t) = H(t) - H(t - Tc) + E H(t - Tc) exp{-(E Fp / ve)(t - Tc)},   (12)
where H(t) is the Heaviside step function, Tc is the capillary transit time, Fp is the blood (plasma) flow and E is the extraction fraction. These parameters are related to the fractional vascular volume vp and the permeability-surface area product PS by:
Tc = vp / Fp,   (13)
E = PS / (Fp + PS).   (14)
Thus, the free parameters used for fitting the AATH model are {ve, vp, Fp, PS}.
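Equations (12)-(14) can be sketched numerically as follows (illustrative code and parameter values, not the disclosed implementation):

```python
import numpy as np

def r_aath(t, ve, vp, Fp, PS):
    """AATH residue function, Eq. (12), built from the free parameters
    {ve, vp, Fp, PS} via Tc = vp/Fp (Eq. (13)) and E = PS/(Fp + PS) (Eq. (14))."""
    Tc = vp / Fp                              # capillary transit time
    E = PS / (Fp + PS)                        # extraction fraction
    H = lambda x: (x >= 0).astype(float)      # Heaviside step function
    return H(t) - H(t - Tc) + E * H(t - Tc) * np.exp(-(E * Fp / ve) * (t - Tc))

# During the capillary transit (t < Tc) the residue is 1; afterwards only the
# extracted fraction E remains and washes out exponentially.
t = np.array([0.0, 0.05, 0.2, 5.0])
print(r_aath(t, ve=0.3, vp=0.05, Fp=0.5, PS=0.25))
```

With these values Tc = 0.1 and E = 1/3, so the residue equals 1 before t = 0.1 and drops to at most E afterwards.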
[0067] Model-fitting
Tissue tracer concentration-time curves Ctiss(t) sampled from DCE images are fitted using the above tracer kinetic models by minimizing the following cost function:
χ²(ϑ) = Σ_{tk} (Ctiss(tk) - Fp Cp ⊗ R(tk))²,   (15)
where ϑ represents the parameters in the tracer kinetic model and ⊗ is the convolution operator. Cp(t) refers to the tracer concentration-time curve in a feeding artery and is commonly called the arterial input function. Figure 8 shows a tracer concentration-time curve 810 obtained by sampling from a descending aorta visible on the DCE image 800. The graph 810 plots concentration 814 against time 812.
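The cost of Equation (15) can be sketched with a discrete convolution on a uniform time grid. In this illustration (all names and curves are ours), any residue function R, such as those of Equations (11) or (12), can be plugged in; a synthetic tissue curve generated from known parameters gives zero cost at the truth and positive cost elsewhere:

```python
import numpy as np

def chi2(Fp, residue, t, c_tiss, c_p, dt):
    """Cost of Eq. (15): sum over t_k of (Ctiss(t_k) - Fp*(Cp (x) R)(t_k))^2,
    with the convolution evaluated as a discrete Riemann sum."""
    fitted = Fp * np.convolve(c_p, residue)[: len(t)] * dt
    return float(np.sum((c_tiss - fitted) ** 2))

dt = 0.1
t = np.arange(0, 5, dt)
c_p = t * np.exp(-t)                       # a toy arterial input function Cp(t)
residue = np.exp(-0.5 * t)                 # a toy residue function R(t)
c_tiss = 0.4 * np.convolve(c_p, residue)[: len(t)] * dt   # synthetic Ctiss, Fp = 0.4
print(chi2(0.4, residue, t, c_tiss, c_p, dt))      # 0.0 at the true parameters
print(chi2(0.5, residue, t, c_tiss, c_p, dt) > 0)  # True away from them
```

In practice the full parameter vector ϑ would be optimized with a nonlinear least-squares routine over this cost.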
[0068] Figure 9A illustrates a tissue concentration-time curve Ctiss(t) before outlier data points in the tissue concentration-time curve are removed. Figure 9A shows outlier data points 902, 904 and 906 that are distant from the other data points (e.g., data point 910) on the curve 920. The outlier data points 902, 904 and 906 can be detected using robust regression based methods.
[0069] Figure 9B illustrates a tissue concentration-time curve 940 after outlier data points in the tissue concentration-time curve are removed. The outlier data points 902, 904 and 906 shown in Figure 9A are removed and replaced by interpolated values as shown in Figure 9B. Data points that are not distant from the other data points on the curve are not removed, for example data point 910. In this manner, outlier data points can be detected and removed before model-fitting.
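The detect-and-interpolate procedure of paragraphs [0068]-[0069] can be sketched with a simple robust rule: flag points whose deviation from a running median exceeds k robust standard deviations (1.4826·MAD), then replace them by linear interpolation. This is our illustrative stand-in for the robust-regression methods mentioned above; so simple a rule may also over-flag samples near the curve's ends:

```python
import numpy as np

def remove_outliers(t, c, window=5, k=3.0):
    """Detect points far from a running median and replace them by linear
    interpolation of the remaining samples."""
    med = np.array([np.median(c[max(0, i - window // 2): i + window // 2 + 1])
                    for i in range(len(c))])
    resid = c - med
    mad = np.median(np.abs(resid - np.median(resid)))   # robust spread estimate
    outliers = np.abs(resid) > k * 1.4826 * max(mad, 1e-12)
    cleaned = c.copy()
    cleaned[outliers] = np.interp(t[outliers], t[~outliers], c[~outliers])
    return cleaned, outliers

t = np.arange(20.0)
c_true = np.sin(t / 5.0)          # a smooth toy tissue concentration-time curve
c = c_true.copy()
c[[4, 11]] += 5.0                 # inject two outlier data points
cleaned, outliers = remove_outliers(t, c)
print(outliers[4], outliers[11])  # True True
```

The injected spikes are flagged and the interpolated replacements fall back near the underlying smooth curve, ready for model-fitting.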
[0070] Figures 10A-10D illustrate how propagated shape information can help to resolve the problem of under-segmentation due to adjoining tissues. Figure 10A shows an image 1020 of the organ's region-of-interest with a selected location x0 shown as 1002. However, during imaging, neighboring tissues 1004 and structures could be displaced relative to each other. A main difficulty in segmentation is the occasional presence of adjoined neighboring tissues, which could result in under-segmentation.
[0071] Figure 10B shows an image 1040 of the organ's region-of-interest after a preliminary segmentation in which neighboring tissues in contact with the selected location x0 are also segmented. Figure 10B shows a location 1006 which includes the selected location 1002 and the neighboring tissues 1004.
[0072] Figure 10C shows an image 1060 of the organ's region-of-interest after propagating control points 1008 obtained from the previous frame or slice. The control points 1008 are generated based on the derived contour of a previous frame and in accordance with the B-spline function. Figure 10D shows the final result 1080 of the selected location 1010 after the adjoining tissues have been removed in accordance with the B-spline control points. [0073] Figure 11A illustrates images 1102 obtained during registration in accordance with the method of mutual information over a time period. Figure 11B illustrates images 1104 obtained during registration in accordance with the method of gradient correlation over a time period. In both Figures 11A and 11B, the image 1106 at t=0 serves as the template to which subsequent images 1108, 1110, 1112, 1114, 1116 and 1118 are registered. Mutual information compares image intensity, whereas gradient correlation computes the similarity of image gradient. Typically, image gradient is more sensitive to noise than image intensity. However, image gradient could be more discriminative in object description if the object features are consistent throughout the aligned images. This may not be true for tumors in DCE imaging due to the wash-in and wash-out of tracer. From images 1112 and 1114 in Figures 11A and 11B, it can be seen that the tumor at time points 27s and 32s is better aligned with the neighboring frames by the mutual information method than by the gradient correlation method.
[0074] During DCE imaging, tissue intensity will change along with the wash-in and wash-out of a contrast agent which flows through the organ's region-of-interest over a time period. Consequently, tissue image features will be inconsistent throughout the process. For example, in Figure 11A the central region of the tumor appears to be hypo-intense after about 26 seconds. Segmentation which relies on the detection of object features, such as active contours, would yield inconsistent results among the DCE images at different time points.
[0075] Figure 12A shows images 1202, 1204 and 1206 obtained by segmentation by a conventional method, for example a B-snake method. Figure 12B shows images 1252, 1254 and 1256 obtained by a method according to an embodiment of the invention. As an active contour model, the B-snake method evolves the contour according to object boundaries, where the contour is encoded using B-splines. The B-snake method over-segments these images because the contour is trapped by the hypo-intense region within the tumor. [0076] Figure 13A shows maps of tissue physiological parameters 1302, 1304, 1306 and 1308 associated with the kinetic model in accordance with an embodiment of the invention, for example the AATH model. Figure 13B shows maps of tissue physiological parameters 1310, 1312, 1314 and 1316 associated with the kinetic model without registering the images. The maps generated with image registration 1302, 1304, 1306 and 1308, shown in Figure 13A, clearly delineate a region near the center of the tissue, which has lower PS, lower vp and higher ve (see Equations (11), (12) and (14)). In contrast, the maps without registration 1310, 1312, 1314 and 1316, shown in Figure 13B, reveal a similar region, but are generally noisier.
[0077] As shown in Figure 14, the computing device 1400 further includes a display interface 1402 which performs operations for rendering images to an associated display 1430 and an audio interface 1432 which performs operations for playing audio content via associated speaker(s) 1434.
[0078] As used herein, the term "computer program product" may refer, in part, to removable storage unit 1418, removable storage unit 1422, a hard disk installed in hard disk drive 1412, or a carrier wave carrying software over communication path 1426 (wireless link or cable) to communication interface 1424 via an interface 1450. A computer readable medium can include magnetic media, optical media, or other recordable media, or media that transmits a carrier wave or other signal. These computer program products are devices for providing software to the computing device 1400. Computer readable storage medium refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computing device 1400 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc™, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computing device 1400. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computing device 1400 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0079] The computer programs (also called computer program code) are stored in main memory 1408 and/or secondary memory 1410. Computer programs can also be received via the communication interface 1424. Such computer programs, when executed, enable the computing device 1400 to perform one or more features of embodiments discussed herein. In various embodiments, the computer programs, when executed, enable the processor 1404 to perform features of the above-described embodiments via a communication infrastructure 1406. Accordingly, such computer programs represent controllers of the computer system 1400.
[0080] Software may be stored in a computer program product and loaded into the computing device 1400 using the removable storage drive 1414, the hard disk drive 1412, or the interface 1420. Alternatively, the computer program product may be downloaded to the computer system 1400 over the communications path 1426. The software, when executed by the processor 1404, causes the computing device 1400 to perform functions of embodiments described herein.
[0081] It is to be understood that the embodiment of Figure 14 is presented merely by way of example. Therefore, in some embodiments one or more features of the computing device 1400 may be omitted. Also, in some embodiments, one or more features of the computing device 1400 may be integrated. Additionally, in some embodiments, one or more features of the computing device 1400 may be split into one or more component parts. [0082] It will be appreciated by a person skilled in the art that numerous variations and/or modifications may be made to the present invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects to be illustrative and not restrictive.

Claims

What is claimed is:
1. A method for dynamic contrast enhanced (DCE) image processing and kinetic modeling of an organ's region-of-interest, the method comprising: deriving at least a contour of an exterior of the organ's region-of-interest from one or more of a plurality of images;
generating a spline function in response to the derived contour of the exterior of the organ's region-of-interest from the one or more of the plurality of images;
segmenting the organ's region-of-interest in accordance with the spline function;
registering the plurality of images wherein the organ's region-of-interest has been segmented;
deriving a tracer curve for the organ's region-of-interest in the registered images, the tracer curve indicating a change in concentration of a contrast agent flowing through the organ's region-of-interest over a time period; and
kinetic modeling by fitting a kinetic model to the tracer curve to generate one or more maps of tissue physiological parameters associated with the kinetic model.
2. The method according to claim 1, wherein segmenting the organ's region-of-interest comprises generating segments for each of the plurality of images which include the organ's region-of-interest.
3. The method according to claim 2, wherein segmenting each of the plurality of images comprises: identifying a plurality of voxels of the organ's region-of-interest in each of the plurality of images; and dividing the organ's region-of-interest in each image into segments in response to the identified plurality of voxels.
4. The method according to claim 3, wherein generating the spline function comprises encoding information on the derived contour of the exterior of the organ's region-of-interest for facilitating the segmentation of a successive image.
5. The method according to claim 1, wherein the organ's region-of-interest is a tumor.
6. The method according to claim 5, wherein the tumor is a colorectal tumor.
7. The method according to claim 1, wherein the kinetic model is based on a Tofts model.
8. The method according to claim 1, wherein the kinetic model is based on an adiabatic approximation to tissue homogeneity (AATH) model.
9. The method according to claim 1, wherein the spline function is a B-spline function.
10. A method for registering an organ's region-of-interest, the method comprising: deriving mutual information in response to each of a plurality of dynamic contrast enhanced (DCE) images; and aligning segments in the organ's region-of-interest in response to the mutual information.
PCT/SG2014/000481 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors WO2016060611A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US15/519,145 US10176573B2 (en) 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors
PCT/SG2014/000481 WO2016060611A1 (en) 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors
SG11201703074SA SG11201703074SA (en) 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors
CN201480083962.6A CN107004268A (en) 2014-10-13 2014-10-13 The automatic region of interest regional partition and registration of the Dynamic constrasted enhancement image of colorectal carcinoma
EP14904094.1A EP3207522A4 (en) 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2014/000481 WO2016060611A1 (en) 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors

Publications (1)

Publication Number Publication Date
WO2016060611A1 true WO2016060611A1 (en) 2016-04-21

Family

ID=55747022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2014/000481 WO2016060611A1 (en) 2014-10-13 2014-10-13 Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors

Country Status (5)

Country Link
US (1) US10176573B2 (en)
EP (1) EP3207522A4 (en)
CN (1) CN107004268A (en)
SG (1) SG11201703074SA (en)
WO (1) WO2016060611A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110018529A (en) * 2019-02-22 2019-07-16 南方科技大学 Rainfall measurement method, rainfall measurement device, computer equipment and storage medium
CN111447171A (en) * 2019-10-26 2020-07-24 泰州市海陵区一马商务信息咨询有限公司 Automated content data analysis platform and method

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10810740B2 (en) * 2016-07-20 2020-10-20 Tel Hashomer Medical Research Infrastructure And Services Ltd. System and method for automated characterization of solid tumors using medical imaging
US10832403B2 (en) * 2018-05-14 2020-11-10 Koninklijke Philips N.V. Systems, methods, and apparatuses for generating regions of interest from voxel mode based thresholds
CN109949274B (en) * 2019-02-25 2020-12-25 腾讯科技(深圳)有限公司 Image processing method, device and system
EP3939003B1 (en) 2019-03-12 2024-04-03 Bayer HealthCare, LLC Systems and methods for assessing a likelihood of cteph and identifying characteristics indicative thereof
US11030742B2 (en) * 2019-03-29 2021-06-08 GE Precision Healthcare LLC Systems and methods to facilitate review of liver tumor cases
ES2955349T3 (en) 2019-09-18 2023-11-30 Bayer Ag MRI image prediction using a prediction model trained by supervised learning
JP7535575B2 (en) * 2019-09-18 2024-08-16 バイエル、アクチエンゲゼルシャフト Systems, methods, and computer program products for predicting, forecasting, and/or assessing tissue properties - Patents.com
CN110930362B (en) * 2019-10-23 2023-10-27 北京图知天下科技有限责任公司 Screw safety detection method, device and system
CN111127586B (en) * 2019-12-14 2021-10-29 深圳先进技术研究院 Artery input function curve generation method and device
CN112489093A (en) * 2020-11-19 2021-03-12 哈尔滨工程大学 Sonar image registration method, sonar image registration device, terminal equipment and storage medium
CN113673521B (en) * 2021-08-27 2024-09-24 中汽创智科技有限公司 Segmentation data labeling method and device, electronic equipment and storage medium
CN117496277B (en) * 2024-01-02 2024-03-12 达州市中心医院(达州市人民医院) Rectal cancer image data modeling processing method and system based on artificial intelligence
CN118154602B (en) * 2024-05-10 2024-09-03 天津市肿瘤医院(天津医科大学肿瘤医院) Image analysis method and system based on colorectal polyp CT image dataset

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007059615A1 (en) * 2005-11-23 2007-05-31 The Medipattern Corporation Method and system of computer-aided quantitative and qualitative analysis of medical images
WO2010014712A1 (en) * 2008-07-29 2010-02-04 Board Of Trustees Of Michigan State University System and method for differentiating benign from malignant contrast-enhanced lesions
US20110257519A1 (en) * 2010-04-16 2011-10-20 Oslo Universitetssykehus Hf Estimating and correcting for contrast agent extravasation in tissue perfusion imaging
US20130202173A1 (en) * 2008-02-19 2013-08-08 vascuVis Inc. Classification of biological tissue by multi-mode data registration, segmentation and characterization

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100470587C (en) * 2007-01-26 2009-03-18 清华大学 Method for segmenting abdominal organ in medical image
CN101623198A (en) * 2008-07-08 2010-01-13 深圳市海博科技有限公司 Real-time tracking method for dynamic tumor


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3207522A4 *


Also Published As

Publication number Publication date
EP3207522A4 (en) 2018-06-13
CN107004268A (en) 2017-08-01
US20170243349A1 (en) 2017-08-24
SG11201703074SA (en) 2017-05-30
EP3207522A1 (en) 2017-08-23
US10176573B2 (en) 2019-01-08

Similar Documents

Publication Publication Date Title
US10176573B2 (en) Automatic region-of-interest segmentation and registration of dynamic contrast-enhanced images of colorectal tumors
US10357218B2 (en) Methods and systems for extracting blood vessel
WO2018023917A1 (en) Method and system for extracting lower limb blood vessel
Nasor et al. Detection and Localization of Early‐Stage Multiple Brain Tumors Using a Hybrid Technique of Patch‐Based Processing, k‐means Clustering and Object Counting
US8326006B2 (en) Method for breast screening in fused mammography
JP6570145B2 (en) Method, program, and method and apparatus for constructing alternative projections for processing images
JP2019512361A (en) Method and system for segmentation of vasculature in a volumetric image data set
JP2014507231A (en) Method and apparatus for identifying latent anomalies in imaging data and its application to medical images
Abbas et al. Combined spline and B-spline for an improved automatic skin lesion segmentation in dermoscopic images using optimal color channel
Ganvir et al. Filtering method for pre-processing mammogram images for breast cancer detection
Sagar et al. Color channel based segmentation of skin lesion from clinical images for the detection of melanoma
Sivanesan et al. Unsupervised medical image segmentation with adversarial networks: From edge diagrams to segmentation maps
Hasan A hybrid approach of using particle swarm optimization and volumetric active contour without edge for segmenting brain tumors in MRI scan
Lee et al. Detection and segmentation of small renal masses in contrast-enhanced CT images using texture and context feature classification
Jodas et al. Lumen segmentation in magnetic resonance images of the carotid artery
Myint et al. Effective kidney segmentation using gradient based approach in abdominal CT images
Pandey et al. Morphological active contour based SVM model for lung cancer image segmentation
Eapen et al. Medical image segmentation for anatomical knowledge extraction
Abdalla et al. Automatic Segmentation and Detection System for Varicocele Using Ultrasound Images.
Anwar et al. Segmentation of liver tumor for computer aided diagnosis
KR101494975B1 (en) Nipple automatic detection system and the method in 3D automated breast ultrasound images
Almi'ani et al. A modified region growing based algorithm to vessel segmentation in magnetic resonance angiography
Nikravanshalmani et al. Segmentation and separation of cerebral aneurysms: A multi-phase approach
Habib et al. Automatic segmentation of abdominal aortic aneurysm
Fooladivanda et al. Breast-region segmentation in MRI using chest region atlas and SVM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14904094

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014904094

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 11201703074S

Country of ref document: SG

Ref document number: 15519145

Country of ref document: US