US20070253609A1 - Method, Apparatus and Computer Program Product for Automatic Segmenting of Cardiac Chambers - Google Patents


Info

Publication number
US20070253609A1
Authority
US
Grant status
Application
Patent type
Prior art keywords
based
image
segmentation
method
machine
Prior art date
Legal status
Granted
Application number
US11380681
Other versions
US7864997B2 (en)
Inventor
Jean-Paul Aben
Current Assignee
Pie Medical Imaging BV
Original Assignee
Pie Medical Imaging BV
Priority date
Filing date
Publication date

Classifications

    • G06T 7/149 — Segmentation; edge detection involving deformable models, e.g. active contour models
    • G06T 7/11 — Region-based segmentation
    • G06T 2207/100764 — 4D tomography; time-sequential 3D tomography
    • G06T 2207/10088 — Magnetic resonance imaging [MRI]
    • G06T 2207/30048 — Heart; cardiac

Abstract

A method, apparatus and computer program product for machine-based segmentation of the cardiac chambers in four-dimensional magnetic resonance image data that contain at least short-axis cine sequences. The segmentation is based on temporal variations in the data set and on features extracted from the image data. In particular, the method and apparatus process the image data to automatically segment the endocard based on fuzzy object extraction and to automatically segment the epicard by finding a radial minimum cost with dynamic sign definition and tissue classification. In addition, the image data are preferably preprocessed to eliminate high local image variances and inhomogeneity, and further processed to identify the positions of the base slice, the apex slice, end-diastole, end-systole and the LVRV connection.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates broadly to cardiothoracic imaging. More particularly, this invention relates to segmenting cardiac chambers in a four-dimensional magnetic resonance image.
  • 2. State of the Art
  • Accurate and reproducible assessment of the left and right ventricular volumes and ejection fraction is essential for the prognosis of patients with heart disease and for evaluating therapeutic response. Magnetic resonance (MR) imaging has proven to be an accurate and reproducible imaging technique for the quantitative analysis of left and right ventricular function. Ventricular volumes and mass, as well as regional functional parameters such as wall motion and wall thickening, especially for the left ventricle, can be obtained in the true short-axis plane of the ventricle. These short-axis views have various advantages. Since wall motion and wall thickening take place mainly perpendicular to the viewing direction, they can best be followed in this direction. Furthermore, partial volume effects are minimized. A major advantage of MR imaging is that it does not use ionizing radiation and is non-invasive since no contrast agent is used. By imaging multiple slices over time, a combination of spatial and temporal coordinates is obtained.
  • Chris A. Cocosco, in “Automatic cardiac region-of-interest computation in cine 3D structural MRI”, CARS and Elsevier, 2004, describes a method for extracting a region of interest around the heart in short-axis MRI cardiac scans. The method produces a region within four-dimensional (4D) datasets wherein the left and right ventricles are localized. No attempt is made to differentiate between the left and the right ventricle. Neither are the base and apex locations of the ventricle(s) defined. The method may in particular fail whenever high local variances are present in the 4D dataset that may originate from main blood vessels (lung/aorta) or from specifically superimposed text such as trigger time. The method may also fail when the 4D image contains an inhomogeneity. This inhomogeneity can be caused by lower sensitivity at a larger distance between the heart and the imaging coils of the MR apparatus which then leads to lower pixel intensity in the image.
  • SUMMARY OF THE INVENTION
  • The present invention presents a more systematic approach, in that it does not rely on anatomical assumptions. First, it locates the left and right ventricles. Then the base and apex short-axis slices of the left ventricle are located. Next, the start conditions for the segmentation process are determined: the locations of the left ventricle, the left ventricle epicard and finally the right ventricle endocard are detected approximately. Based on these start conditions, the segmentation process starts. Note that various standard acquisitions of MR cardiac imaging are used, such as the short-axis (SA) and long-axis (LA) orientations. Finally, the papillary muscles of the left ventricle are detected. Advantageously, the methodology of the present invention is fully automatic, without user interactions based on intermediate results, which can yield 100% reproducibility.
  • Two procedures can be followed. In the first procedure, a 3D fuzzy extraction is followed by a fine tuning step to accurately locate the left ventricle endocard and epicard. In the second procedure, a new contour propagation algorithm locates the left ventricle endocard and epicard. Neither of the two above procedures relies on any anatomy assumption, which makes the segmenting also suitable to use with diseased ventricles that may be deformed to a certain extent.
  • The segmentation process is improved over the prior art segmentation techniques, which range from being purely data driven (using edge detection and region growing) on the one hand, to being model driven on the other hand.
  • One such prior art segmentation technique, the so-called “snake” model, presents a framework that is the basis of boundary-driven image segmentation operations. The “snake” model is described by M. Kass in “Snakes: Active Contour Models”, Int. Journal of Computer Vision, 1, pp. 321-331, 1988. Briefly, the snake model refers to an energy minimization technique that seeks the lowest potential of a curve-based function. This function is a compromise between boundary terms and image terms. However, this segmentation method is quite sensitive to noise and to image data containing physical artifacts. Various modifications have been proposed, but the method remains sensitive to user-defined start conditions, which endangers the reproducibility of the segmentation.
  • Another example of a prior art segmentation technique is described in U.S. Pat. No. 5,239,591, “Contour extraction in multi-phase, multi-slice cardiac MRI studies by propagation of seed contours between images”, which is incorporated by reference herein in its entirety.
  • The segmentation process of the present invention corrects for data acquisition and/or subject-caused movement in the acquired MR image data set by using system coordinates from an MR scanner apparatus. In this way, accuracy is improved without disturbing the sequence of image pick-up operations. Moreover, the segmentation of the endocardial and epicardial features starts from an operator-defined seed contour and is based on a radial minimum cost with dynamic sign definition and tissue classification. Apart from such elementary user input, all further operations are machine-controlled, which yields excellent reproducibility.
  • In addition, the methodology of the present invention automatically detects the Left Ventricle-Right Ventricle (LVRV) connection position. Clinicians recognize this information as highly helpful.
  • Moreover, the methodology of the present invention estimates volume(s) of the heart which is/are derived from the spatial location of the heart's apex position on a curve that is obtained from the long axis cine sequences. In addition, such volume estimates preferably are derived from a mitral valve position obtained from long axis cine views. Such processing improves the accuracy of such volume estimate(s).
  • Additional objects and advantages of the invention will become apparent to those skilled in the art upon reference to the detailed description taken in conjunction with the provided figures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow chart illustrating various processing steps of the methodology of the present invention;
  • FIG. 2 is a schematic representation of in-plane movement which is identified and corrected by the methodology of the present invention;
  • FIG. 3 is a three-dimensional visualization of breathing movement of a subject person;
  • FIG. 4 is a flow chart illustrating the operations in locating the heart's left and right ventricle in accordance with the present invention;
  • FIG. 5 is an exemplary pixel intensity histogram of the heart's left ventricle;
  • FIG. 6 illustrates a mapping procedure from long-axes (LA) slices to short-axis (SA) slices or inversely;
  • FIG. 7 is a flow chart illustrating segmentation operations that start from user-supplied long-axis (LA) input;
  • FIG. 8 is a high-level schematic diagram of an exemplary MRI apparatus.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Turning now to FIG. 1, there is shown a flow chart of various processing steps in accordance with the present invention. First, in block 10 (localize Apex/Base of left ventricle, estimate epicard, find end of diastole, end of systole), the 4D MRI dataset is processed to localize the left and right ventricles therein. This dataset contains a sequence of short axis cine images that cover the heart, combined with two-chamber and four-chamber cine sequences.
  • As an alternative to the above, it is possible to work with short-axis images only. Within this operation, the base and apex short axis slices are located as well as the end systole and end diastole heart phase in time. In this respect, FIG. 4 illustrates a flow diagram of the locating process for the heart's left and right ventricles, as a substitute for block 10 in FIG. 1. The remainder of FIG. 1 will be discussed later.
  • Turning to FIG. 4, in block 40, the images are preprocessed to eliminate high local variance and inhomogeneity according to expression (1) below. Therein, Ip is the 2D image array that is subjected to conventional Fourier transform methods to render the result more reliable:

    I_p = F⁻¹{ F(I ⊖ b) · H } · T(−P_shift)   (1)

    where
      H = 1 − e^(−r²/(2σ²))  in case of inhomogeneity,
      H = 1  otherwise,
      b is a square structuring element, and
      P_shift is the displacement caused by the acquisition.
  • In the preprocessing operations of block 40, grey level erosion or “thinning” is effected, after which a filter eliminates low frequencies caused by inhomogeneity. The inhomogeneity is detected by resampling along a scan line that runs from the centre of the two chamber and four chamber epipolar intersection lines, called l2ch and l4ch respectively, in the first frame of the middle short-axis slice. This is followed by executing a first Gaussian derivative operation.
  • From the centre, the positions of the steepest absolute slopes lp and rp to the left and right are located. Between those positions a linear fit is performed based on the resampled image data. Whenever the absolute slope of this linear fit exceeds a certain threshold it is assumed that the short axis image data set contains an inhomogeneity.
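The inhomogeneity test just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: a central-difference derivative stands in for the first Gaussian derivative, and the slope threshold value is an assumed placeholder.

```python
def detect_inhomogeneity(profile, slope_threshold=0.5):
    """Sketch of the block-40 inhomogeneity test: locate the steepest
    absolute slopes left/right of the line centre, fit a line between
    them, and flag inhomogeneity when the fitted |slope| exceeds a
    threshold (threshold value assumed, not from the text)."""
    n = len(profile)
    centre = n // 2
    # first derivative approximated by central differences
    deriv = [0.0] + [(profile[i + 1] - profile[i - 1]) / 2.0
                     for i in range(1, n - 1)] + [0.0]
    # steepest absolute slope positions left (lp) and right (rp) of centre
    lp = max(range(centre), key=lambda i: abs(deriv[i]))
    rp = max(range(centre, n), key=lambda i: abs(deriv[i]))
    # least-squares linear fit of the profile between lp and rp
    xs = list(range(lp, rp + 1))
    ys = profile[lp:rp + 1]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return abs(slope) > slope_threshold
```

A steadily ramping profile (a global intensity trend across the image) is flagged, while a flat profile is not.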
  • Moreover, in the preprocessing operations of block 40, the displacement caused by the data acquisition is derived and corrected for. For acquiring the short-axis data set, typically 8 mm-thick cine short-axis slices, acquired with breath-holding by the patient, encompass the entire left ventricle from apex to base. During this process the MR acquisition parameters for the acquired spatial slice location could have changed such that they do not fit perfectly in the acquired bounding box. By using the absolute system coordinates obtained from the MR system (through the “headers” of the conventional DICOM file format), it is possible to correct for these ‘in-plane’ movements.
  • In this respect, FIG. 2 illustrates a spatial configuration of this in-plane movement. In this FIG. 2, item 100 is the bounding box, two successive planes have been labelled 102 and 104, respectively, and the plane wherein the actual in-plane movement occurs has been labelled 106. The line 108 should run continuously along the respective first elements that have been shown as perspective squares (110), of which the lowest one as shown has been shifted to the front of the bounding box.
  • Turning back to FIG. 4, the operations continue to block 41 to correct for such “in-plane” movement. These operations begin by identifying the starting plane (102) and then computing the three-dimensional (3D) line l that runs perpendicularly through this starting plane (102) at the first image element (pixel/voxel) (110) of the image frame identified by starting plane (102). Next, for each of a number of successor planes, the displacement between the first element of the successor plane under investigation and the 3D line l is computed:

    V₁ = P₁ + μ·R₁ + γ·C₁
    l = P₁ + (R₁ × C₁)·α   (2)

  • Next, the intersection point P_3D of the line l with the successor plane is defined in the 3D world coordinate system:

    P_3D = l ∩ Vₙ, n > 1   (3)

  • Finally, the intersection point P_3D is transformed into the two-dimensional (2D) coordinate system of the successor plane:

    P_2D = [1 1 0] · S(1/voxelsizeₙ) · Rₙ · T(−Pₙ) · P_3D   (4)

    Here, the rotation matrix R is built from the plane direction vectors and their cross product:

    R = | R₁x C₁x N₁x |
        | R₁y C₁y N₁y |
        | R₁z C₁z N₁z |,   N = R × C   (5)
  • For a particular plane (e.g., slice), the displacement is defined as P2D. All frames of the current plane will be shifted according to this vector P2D. The step is repeated for the remaining planes along the short-axis stack.
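The displacement computation of expressions (2)-(5) can be sketched as a plane/line intersection. The function below is an illustrative reading of those expressions, with the plane origins and direction vectors passed in explicitly (in practice they would come from the DICOM headers); the homogeneous-transform chain of expression (4) is replaced by equivalent dot products.

```python
import numpy as np

def in_plane_displacement(P1, R1, C1, Pn, Rn, Cn, voxelsize):
    """Intersect the line through the starting plane's first element P1,
    running along that plane's normal, with a successor plane (origin Pn,
    row/column direction vectors Rn, Cn); then map the intersection point
    into the successor plane's 2D pixel coordinates."""
    P1, R1, C1 = map(np.asarray, (P1, R1, C1))
    Pn, Rn, Cn = map(np.asarray, (Pn, Rn, Cn))
    N1 = np.cross(R1, C1)                 # normal of the starting plane
    Nn = np.cross(Rn, Cn)                 # normal of the successor plane
    # line l: P1 + alpha * N1; solve Nn . (l - Pn) = 0 for alpha
    alpha = np.dot(Nn, Pn - P1) / np.dot(Nn, N1)
    P3D = P1 + alpha * N1                 # expression (3)
    # expression (4): translate to plane origin, project on Rn, Cn, scale
    d = P3D - Pn
    return np.array([np.dot(d, Rn), np.dot(d, Cn)]) / voxelsize
```

All frames of the successor plane would then be shifted by the returned in-plane vector.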
  • In addition to such ‘in-plane’ movement, the patient can move between the acquired short-axis cine sequences, the two-chamber and the four-chamber cine sequence. This movement is specific to each examination and its magnitude depends mainly on how well the patient reproduces each breath-hold in an MR cine sequence. Typically, the motion ranges from zero to 10 mm. In block 42 a, a method is presented that gives the physician the ability to evaluate, in a quick and easy-to-use environment, the quality of the MR acquisition with respect to slice movement. Thus, the physician can decide whether some cine sequences should be acquired again while the patient is still available for MR acquisition. This improves the acquired dataset for further therapeutic usage without requiring an expensive patient recall for another MR examination.
  • The method in block 42 a allows the physician to inspect the acquired MR dataset by viewing the images from each acquired plane (used for treatment decisions) in a 3D environment conforming to the absolute system coordinates from the MR system. Generally, each image contains visible landmarks, such as organs and their details, e.g. the left ventricle and right ventricle. Viewing each image plane in a 3D environment, spatially conforming to the 3D coordinates, allows the physician to quickly localize any movement visually.
  • FIG. 3 illustrates a three-dimensional visualization of the breathing movement of a subject. Rotating this 3D visualisation makes it straightforward to evaluate the quality of the MR image dataset with respect to movements. In FIG. 3, item 130 presents a single short-axis slice, which has the shape of a closed form. Item 132 shows a two-chamber cine sequence, which has the shape of a curve that is closed at one end, the apex, and open at the other. Combining the two without intermediate movement, item 134 shows the two curves intersecting. With breathing (or other patient-caused) movement, item 136 shows the two curves relatively displaced. This may lead to the decision to retake an image for correction in block 42 a. Even better visualisation is obtained by image enhancement on the image dataset of each plane.
  • Automatic correction for breathing movements is achieved in block 42 b by image registration techniques. In such techniques, a two-chamber image is created by resampling each short-axis image along the intersection line (epipolar line) of the two-chamber image, which gives an image that can be used for registration against the true two-chamber plane found during the acquisition. The breathing movement can then be computed by correlating, in the Fourier domain, each resampled line against the corresponding two-chamber intersection line. This method can be extended with the four-chamber view.
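The Fourier-domain correlation step might look as follows for a single resampled line; the circular-correlation formulation and the integer-shift readout are assumptions of this sketch, not details given in the text.

```python
import numpy as np

def line_shift_fourier(a, b):
    """Estimate the integer shift between a short-axis line resampled
    along the two-chamber epipolar line (a) and the corresponding true
    two-chamber line (b) via cross-correlation in the Fourier domain."""
    A = np.fft.fft(a)
    B = np.fft.fft(b)
    corr = np.fft.ifft(A * np.conj(B)).real   # circular cross-correlation
    shift = int(np.argmax(corr))
    n = len(a)
    if shift > n // 2:                        # wrap to a negative shift
        shift -= n
    return shift
```

Repeating this per slice gives a per-slice breathing displacement along the epipolar line.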
  • In block 43, after preprocessing all short-axis frames, the total variance image of the complete short-axis dataset is generated. Because the short-axis images are perpendicular to the long axis of the ventricle (its z-axis corresponds to the voxel thickness) and the heart generally remains in motion, it is possible to effectively compress the 4D motion/variance into a single 2D image:

    VAR = (1/n_frames) · Σ_{i=1..n_frames} (I_pⁱ − Ī_p)²
    I_variance = MAXₙ( VAR(short-axis slice n) ) + Σ_{n=1..n_slices} VAR(short-axis slice n) / n_slices   (6)

    The mean intersection points derived from the two-chamber and four-chamber epipolar lines projected on each short-axis slice are smoothed and added to the overall variance image:

    p̄ = (1/n_slices) · Σ ( l_2ch ∩ l_4ch )
    l_2ch = V_SAᵢ ∩ V_2ch mapped on SAᵢ
    l_4ch = V_SAᵢ ∩ V_4ch mapped on SAᵢ
    σ_p = STDEV(p)   (7)

    The mapping back onto the SA slice is performed by:

    P_2D^SA = [1 1 0] · S(1/voxelsize_SA) · R_SA · T(−P_SA) · P_3D   (8)

    The variance image is then adjusted with this information according to:

    I_totalvariance(x,y) = I_variance(x,y) · Σ_{i=p̄_y−σp_y..p̄_y+σp_y} Σ_{j=p̄_x−σp_x..p̄_x+σp_x} I_variance(i,j) − ( (x−p̄_x)²·σp_x + (y−p̄_y)²·σp_y )   (9)
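A minimal sketch of the per-slice variance compression of expression (6); the epipolar-line adjustment of expressions (7)-(9) is omitted, and the array layout (slices, frames, height, width) is an assumption of this sketch.

```python
import numpy as np

def total_variance_image(stack):
    """Compress a 4D short-axis stack into one 2D variance image.
    `stack` is assumed to be shaped (n_slices, n_frames, h, w).  Per
    slice, the temporal variance is the mean squared deviation over the
    frames; the slice variances are then combined as the per-pixel
    maximum plus the per-pixel mean over slices, as in expression (6)."""
    stack = np.asarray(stack, dtype=float)
    var = stack.var(axis=1)   # (n_slices, h, w): (1/n)Σ(I_p^i − Ī_p)²
    return var.max(axis=0) + var.mean(axis=0)
```

Moving structures (the beating heart) light up in the result, while static background stays dark.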
  • Next in block 44 (threshold, dilate, convex hull), the generated variance image Itotal variance is smoothed with a Gaussian. Also, a dual threshold is defined. Preferably, the dual threshold is defined according to the Otsu threshold method described in N. Otsu, “A threshold selection method from gray-level histograms”, IEEE Trans. Syst. Man Cybern. SMC-9 (1979), pp. 62-66, herein incorporated by reference in its entirety.
  • After threshold selection, the occurrence of the detected objects is determined. When the first object covers at least 85% of the histogram area (FIG. 5), the threshold is based on the single-threshold method. Otherwise the threshold is selected between the second and third objects derived from the histogram of Itotal variance.
  • After thresholding/binarization, all holes in the binary image are filled based on a morphology operation. In a labelling operation, the object with the largest area is extracted. The grey-level erosion is undone by a binary dilation with the same kernel as the earlier grey-level erosion. Then a binary dilation with a kernel size of approximately the myocardial wall thickness is applied. Finally, a convex hull is fitted, which localizes the complete left and right ventricles in the 4D short-axis dataset, defined as mask M. The short-axis slices are perpendicular to the long axis of the left ventricle, which allows the left ventricle to be approximated by a circle in these short-axis slices.
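The labelling step that extracts the largest-area object could be sketched as a BFS flood fill; the hole filling, dilation and convex-hull fit that surround it in block 44 are omitted from this illustration.

```python
import numpy as np
from collections import deque

def largest_object(binary):
    """Extract the connected object with the largest area from a binary
    image (4-connectivity), as done in the block-44 labelling step."""
    binary = np.asarray(binary, dtype=bool)
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    best_label, best_area, next_label = 0, 0, 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                area, queue = 0, deque([(sy, sx)])
                labels[sy, sx] = next_label
                while queue:                       # BFS flood fill
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                if area > best_area:
                    best_area, best_label = area, next_label
    return labels == best_label
```

In practice a library routine (e.g. scipy.ndimage.label) would replace this loop; the sketch only makes the selection criterion explicit.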
  • In block 45, an adaptive Hough transform, combined with the generated mask of the heart area, enables automatic location of the left ventricle. The Hough transform is well-known to persons skilled in the art for detecting features that can be described by a parametric equation. For a circle with centre coordinates a and b and radius r, this is:
    x(t)=r·cos(t)+a
    y(t)=r·sin(t)+b  (10)
  • The localization of the left ventricle is done on the middle short-axis slice. Within this slice the most likely end-diastole frame is found by selecting the frame which has the most elements within the mask M and above the threshold as defined during the localization phase of the heart.
  • Within this frame, the edges are preferably located by a so-called Canny edge detector, which is well-known to persons skilled in the art. First, the frame is smoothed with a Gaussian filter to reduce noise and unwanted details. Then, the magnitude and direction of the gradient are calculated:

    ∇I = √(G_x² + G_y²) ∩ M
    θ = tan⁻¹(G_y / G_x) ∩ M   (11)

    The kernels used to derive G_x and G_y are:

    G_x = | −1 −2 −1 |      G_y = | −1 0 1 |
          |  0  0  0 |            | −2 0 2 |
          |  1  2  1 |            | −1 0 1 |   (12)
  • The gradient magnitude image ∇I is binarized to Ibin using a predefined threshold. This gives a feature map that contains the edges and background. The edges are likely located on the perimeter of the circle that must be detected. Therefore, every detected edge point at (x,y) of Ibin is parsed and assumed to lie on a circle boundary with radius r between rmin and rmax. The position of the centre (Cx,Cy) is then predicted from the edge location (x,y), the gradient direction θ and the radius r.
    C_x = x − r·cos(θ)
    C_y = y − r·sin(θ)   (13)

      • where r ∈ [r_min, r_max].
        All found elements (C_x, C_y) are accumulated into the Hough image space H, which is processed with a second-order Gaussian derivative within the mask M:

    ∇²H = H · ( ((x² + y²) − 2σ²) / σ⁴ ) · e^(−(x²+y²)/(2σ²)) ∩ M   (14)
  • Now the position with the highest value in ∇²H is selected as the centre (Cx,Cy) of the circle, thus yielding the centre point of the left ventricle.
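The gradient-directed voting of expression (13) can be sketched as follows. The second-order Gaussian-derivative post-processing of expression (14) is omitted, so the raw accumulator maximum stands in for the ∇²H maximum; the inputs (a binary edge map plus a gradient-direction map) match the Canny outputs described above.

```python
import numpy as np

def hough_circle_centre(binary_edges, grad_dir, r_min, r_max):
    """Each edge pixel votes for candidate centres
    C = (x - r*cos(theta), y - r*sin(theta)) over r in [r_min, r_max];
    the accumulator maximum is taken as the circle centre."""
    h, w = binary_edges.shape
    acc = np.zeros((h, w))
    ys, xs = np.nonzero(binary_edges)
    for y, x in zip(ys, xs):
        theta = grad_dir[y, x]
        for r in range(r_min, r_max + 1):
            cx = int(round(x - r * np.cos(theta)))
            cy = int(round(y - r * np.sin(theta)))
            if 0 <= cx < w and 0 <= cy < h:
                acc[cy, cx] += 1                 # one vote per (r, edge)
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return int(cx), int(cy)
```

Because votes from all edge points of a true circle coincide at its centre, the accumulator peak is robust even when the radius is only known as a range.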
  • The frame Ip is preferably binarized with the Otsu threshold method within the mask M, so the threshold is based on the histogram:

    H_p^M = HIST(I_p ∩ M)   (15)
  • Because the threshold is based on the frame histogram Hpm, it could happen that, besides blood tissue and myocardium tissue, background is also represented in the histogram Hpm. Therefore, a second threshold is defined based on the histogram within the circle defined by its centre (Cx,Cy) and radius rmin. The threshold preferably used during the binarizing process is:

    Single Otsu on H_p^M → T_M; (μ_M0, σ_M0); (μ_M1, σ_M1)
    Dual Otsu on H_p^M → T_Md1, T_Md2; (μ_Md0, σ_Md0); (μ_Md1, σ_Md1); (μ_Md2, σ_Md2)
    Single Otsu on H_p^circle → T_C; (μ_C0, σ_C0); (μ_C1, σ_C1)

    T = μ_C1 − σ_C1   if (T_M > μ_C1) ∧ (T_Md1 > μ_C1)
    T = T_Md1         if (T_M > μ_C1) ∧ (T_Md1 ≤ μ_C1)
    T = T_M           otherwise   (16)
  • Within this binarized image, the object is extracted in which the centre (Cx,Cy) and the mean position p̄ derived from the two- and four-chamber intersection lines are located. The left ventricle may contain papillary muscles, so the centre (Cx,Cy) or p̄ need not lie within the extracted object. In that case the process is repeated for the next slice, above and/or below.
  • After successfully locating the left ventricle according to block 45, the end-diastole and end-systole phases are automatically determined in block 46 (estimate ED/ES heart phase), by:
    ED=arg max(Areai)
    ES=arg min(Areai)  (17)
    Herein, Areai is the area of the object extracted with the threshold T obtained through applying expression 16 on the frame i from the current short-axis slice. Again the binarization is performed within the mask M.
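Expression (17) amounts to an argmax/argmin over the per-frame areas of the segmented object:

```python
def ed_es_phases(areas):
    """Given the segmented object area per frame of the current
    short-axis slice, the end-diastole frame is the one with maximal
    area and end-systole the one with minimal area (expression 17)."""
    ed = max(range(len(areas)), key=lambda i: areas[i])
    es = min(range(len(areas)), key=lambda i: areas[i])
    return ed, es
```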
  • Finally, in block 47 the base short-axis slice and apex short-axis slice are defined at the end-diastolic phase as computed by expression 17. For each frame Ip within the end-diastolic phase the frame is binarized according to expression 16 which gives Bi. The process starts at the “seed” frame in which the left ventricle centre position has been determined previously, looking at the frame above and below.
  • Within each frame the following steps are performed, looking once at the frames spatially above and once at the frames spatially below:
      • Perform thresholding on frame Ip according to expression 16, which gives Bi.
      • Intersect with mask M.
      • Fill holes located in Bi by using a morphology flood-fill algorithm.
      • Extract the object that contains the centre position Cp. The extracted object gives a global estimation of the endocardial contour location Bendo i.
      • Define Areai and Perimeteri of the extracted object Bendo i.
      • Compute the circularity (C) of the extracted object by:

        C = 4·π·Area_i / Perimeter_i²   (18)

        • If the circularity is below a predefined threshold, or if the computed area (Areai) is more than three times the area of the frame in which the left ventricle position is defined, the process stops and a ventricle boundary is defined as Ventricle boundary[ ]=i.
      • The centre position Cp is adjusted to the object centroid according to:

        C_p:  x = Σ_pixels x / Area_i,   y = Σ_pixels y / Area_i   (19)

      • The global estimation of the epicardial border is defined as Bepi i.
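The circularity stop criterion of the slice-propagation loop can be sketched as below; the circularity threshold value 0.3 is an assumed placeholder, since the text only says "a predefined threshold".

```python
import math

def circularity(area, perimeter):
    """Expression (18): 1.0 for a perfect circle, smaller for elongated
    or ragged objects."""
    return 4.0 * math.pi * area / perimeter ** 2

def stop_propagation(area, perimeter, seed_area,
                     min_circularity=0.3, area_factor=3.0):
    """Stop the slice-to-slice propagation when the extracted object is
    insufficiently circular or when its area exceeds three times the
    area found in the seed frame (min_circularity value assumed)."""
    return (circularity(area, perimeter) < min_circularity
            or area > area_factor * seed_area)
```

A disc of radius 10 (area 100π, perimeter 20π) has circularity exactly 1 and does not trigger the stop; a long thin object does.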
  • The algorithm for segmenting the epicardial border is based on a radial minimum cost algorithm and uses as its starting entity the estimated endocardial boundary Bendo i. A detailed description will be given below when describing the segmentation of the left ventricle epicardium.
  • The mean area is found from the frames above and below the seed frame. Therewith the apex and base slices are defined as:

    Ārea₁ = Σ_{i=seedframe..Vb₁} Area_i / (Vb₁ − seedframe)
    Ārea₂ = Σ_{i=Vb₂..seedframe} Area_i / (seedframe − Vb₂)
    apex_slice = Vb₁ if Ārea₁ < Ārea₂, else Vb₂
    base_slice = Vb₁ if Ārea₁ ≥ Ārea₂, else Vb₂   (20)
    The right ventricle is localized by subtracting the global estimation of the epicardial border in each frame (Bepi i) from the image Ip thresholded according to expression 16 and masked with M. This yields a global estimation of the right ventricle's endocardial border, defined as RVendo i. This ends the discussion of FIG. 4.
  • Now, turning back to FIG. 1, in block 10 the location of the left ventricle is identified. Thus, the left/right ventricle position, the end-diastolic phase, the end-systolic phase, the apex slice, the base slice and the global boundary positions of the left ventricle epicardium have been indicated in the 4D stack.
  • In block 11, the segmentation of the endocardial boundary is performed in a complete slice, which gives a spatiotemporal 3D image (i.e. 2D plus time), within the area eroded n times around the estimated epicardial boundary and masked with M when segmenting the left ventricle. During the segmentation of the right ventricle, the estimated endocardial boundary masked with M is used. During the erosion a disc-structured element with radius one is used. For the segmentation a fuzzy connectedness algorithm can be used. The algorithm is preferably based upon the well-known concept of fuzzy objects, in which image elements (pixels/voxels) exhibit a similarity or “hanging togetherness” both in geometry and in grey scale. See J. K. Udupa, S. Samarasekera, “Fuzzy Connectedness and Object Definition: Theory, Algorithms and Applications in Image Segmentation”, Graphical Models and Image Processing, vol. 58, pp. 246-261, 1996, herein incorporated by reference in its entirety.
  • Pixels/voxels relate to each other by their adjacency and similarity. For simplicity, adjacency is defined as two neighbouring pixels in a (3D equivalent of) 4-connectivity sense. Therefore, only left/right, top/bottom and front/back neighbours are considered to have a relation. This relation is expressed by their similarity with respect to pixel grey values (also called affinity). It depends on the application which relation is used for finding similarity. Among various standard relations, in the preferred embodiment affinity μ is expressed as a weighted sum of a Gaussian and an additive Gaussian. The additive Gaussian is based on the intensity (grey value) P and the Gaussian on the intensity difference (gradient) G.
    μ(P_C, P_D) = F(P)·N(G)
    with
    P = (P_C + P_D)/2
    G = P_C − P_D   (21)

      • in which P_C and P_D are the intensities of two neighbouring pixels. The function N is a Gaussian or standard normal function, specified by the mean μ and standard deviation σ.
  • The function F is an adaptive Gaussian: the pixels inside the endocardium have an intensity distribution given by the right peak in the image histogram in FIG. 5. First, this distribution could be modelled as a Gaussian N₂. However, additional knowledge is present: pixels outside the endocardium may not be included in the object to be extracted. Therefore, these pixels are penalised by adding a second term in F:

    F = S·(1 − N₁)
    S = N₂                  if P ≤ μ₂
    S = N₂ with σ = 3σ₂     if P > μ₂   (22)
  • In this respect, FIG. 5 illustrates an exemplary pixel intensity histogram of the heart's left ventricle. In particular, S generally equals a Gaussian distribution function N2, but for intensities above the mean μ2 it is broadened by taking a standard deviation that is 3 times the true value. This pseudo-Gaussian causes pixels with large intensities to also be extracted when they would otherwise be excluded. This takes into account the fact that one or two frames sometimes show hyper-intensities within the endocardium, caused by blood flow which leads to locally high concentrations of protons. The second term in expression (22) is the penalty term: grey values below the threshold (dashed line in FIG. 5 separating the myocardium from the endocardium) will have an extra low affinity.
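The affinity of expressions (21)-(22) could be sketched as follows. The text does not fully specify N1, so it is taken here as a Gaussian around the myocardium peak acting as the penalty term; all parameter names are illustrative assumptions, not taken from the patent.

```python
import math

def gaussian(x, mu, sigma):
    # unnormalized Gaussian, peak value 1 at x == mu
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def affinity(pc, pd, myo_mu, myo_sigma, blood_mu, blood_sigma, grad_sigma):
    """Affinity of two adjacent pixels with intensities pc, pd.  N2
    (blood_mu, blood_sigma) models the blood-pool intensity peak and is
    broadened by a factor 3 above its mean; N1 is interpreted as a
    Gaussian around the myocardium peak (the penalty term); N(G) is a
    zero-mean Gaussian on the gradient."""
    p = (pc + pd) / 2.0                          # P = (P_C + P_D) / 2
    g = pc - pd                                  # G = P_C - P_D
    sigma = blood_sigma if p <= blood_mu else 3.0 * blood_sigma
    s = gaussian(p, blood_mu, sigma)             # pseudo-Gaussian S
    f = s * (1.0 - gaussian(p, myo_mu, myo_sigma))
    return f * gaussian(g, 0.0, grad_sigma)      # mu = F(P) * N(G)
```

With a blood peak at 150 and a myocardium peak at 50, two bright neighbours get near-maximal affinity, while dark pairs and strong-gradient pairs are suppressed.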
  • To start extracting the fuzzy object, a seed point is needed which should be located inside the expected object. Taking the symbol fo(c) to represent the scene value at position c, the pseudo code for the algorithm follows:
      • 1. All scene values are set to zero and the scene value at the seed point is set to 1.
      • 2. Push each neighbour of the seed point to the queue if its affinity strength is greater than 0.
      • 3. Enter a while loop:
        • a. Remove an element from the queue. Its position is denoted by c. If queue is empty then the extraction is completed.
        • b. Find maximum of weakest affinity strength around this element. For each neighbour d the affinity strength is computed and the minimum (weakest) of this strength and the existing scene value fo(d) at the position of the neighbour is taken. The maximum fmax of these values for all 6 neighbours is derived then.
        • c. If the maximum value fmax is larger than the existing scene value fo(c) then
          • 1. set scene value at c equal to the found maximum value, so fo(c)=fmax.
          • 2. push all elements around c with affinity strength >0 to queue.
        • d. endif
      • 4. endwhile
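The queue-based extraction above can be sketched in Python. The `affinity` and `neighbours` callbacks are hypothetical stand-ins for the affinity function F and the 6-neighbourhood of the actual implementation; this is a minimal sketch, not the patent's code:

```python
from collections import deque

def fuzzy_extract(seed, affinity, neighbours):
    """Queue-based fuzzy object extraction (sketch of the steps above).

    seed       -- seed position (tuple) assumed inside the object
    affinity   -- affinity(c, d) -> strength in [0, 1]; stands in for F
    neighbours -- neighbours(c) -> adjacent positions (6-neighbourhood in 3D)
    """
    scene = {seed: 1.0}                 # step 1: all zero except the seed
    # step 2: push each neighbour of the seed with positive affinity
    queue = deque(d for d in neighbours(seed) if affinity(seed, d) > 0)
    while queue:                        # step 3: process until queue empties
        c = queue.popleft()
        # step b: maximum over neighbours of the weakest link to the seed
        fmax = max((min(affinity(c, d), scene.get(d, 0.0))
                    for d in neighbours(c)), default=0.0)
        if fmax > scene.get(c, 0.0):    # step c: scene value improved
            scene[c] = fmax
            queue.extend(d for d in neighbours(c) if affinity(c, d) > 0)
    return scene
```

On a 1-D toy scene the returned dictionary gives each position the strength of its best path to the seed, where a path is only as strong as its weakest link.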
  • The output of the algorithm is a so-called “scene” giving, at each position in the 3D image/stack, the connectivity or measure of “hanging-togetherness” with respect to the seed point. Here too, a segmentation step must decide whether each scene value belongs to the object or not. The scene is thresholded according to:
$$T_{scene} = \{scene_i \mid \sigma_x > min_x \wedge \sigma_y > min_y \wedge \sigma_g > min_g \wedge i \ge min_n\} \tag{23}$$
  • Here, σx and σy are the standard deviations in the x and y directions, respectively, of the elements included in the thresholded scene, and σg is the standard deviation of their grey values when applying the threshold. From the seed frame a threshold is deduced from this segmented image: minx, miny and ming are derived, and minn is the number of elements found in this segmented frame.
  • After thresholding the scene with the value defined by expression (23), the endocardial contours for the left ventricle or right ventricle are extracted. The final step in segmenting the left ventricle's endocardial contour is the fine-tuning step, for which two methods are presented. In the first method the endocardial contour is fine-tuned by means of a convex hull operation. In the second method the endocardial contour is forced into a smooth circular shape, which is more realistic from a physiological point of view. This applies especially when analysing regional functional parameters of the left ventricle, such as wall thickness, wall thickening and wall motion. It may mean that the endocardial volume is overestimated, for example because the papillary muscle volume is added to the left ventricle blood volume. Detecting the papillary muscles and subtracting their volume from the endocardial volume yields the correct blood volume, as described hereinafter. During detection of the right ventricle endocardial contour the fine-tuning step is always the first method. The right ventricle's epicardial boundary is not detected by the current procedure, owing to the limited spatial resolution of the MR images.
  • Block 12 presents the left ventricle epicardial segmentation. This segments the epicardial contour based on a radial minimum cost algorithm. The detected endocardial contour is used as seed data. From the endocardial contour a polar model is created with an origin from which radial lines originate, the radials being separated by n degrees. The origin is the centre of gravity of the endocardial contour. This polar model is used to resample the original image.
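As a rough illustration of such polar resampling (not the patent's exact procedure), the following sketch samples radial scanlines from a given centre; nearest-neighbour sampling is assumed for brevity, where bilinear interpolation would be used in practice:

```python
import math

def polar_resample(image, centre, n_radials=360, n_samples=64, max_radius=None):
    """Resample an image along radial scanlines from `centre` (sketch).

    image is a list of rows; scanline j follows the radial at
    j * (360 / n_radials) degrees, sampled at n_samples points
    from the centre out to max_radius.
    """
    h, w = len(image), len(image[0])
    if max_radius is None:
        max_radius = min(h, w) / 2.0
    cx, cy = centre
    polar = []
    for j in range(n_radials):
        theta = 2.0 * math.pi * j / n_radials
        line = []
        for i in range(n_samples):
            r = max_radius * i / (n_samples - 1)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            # outside the image, pad with zero
            line.append(image[y][x] if 0 <= x < w and 0 <= y < h else 0)
        polar.append(line)
    return polar
```

Edge detection then runs along each row of the returned polar image, so a radial boundary becomes a roughly vertical edge in polar space.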
  • The epicardial boundary also contains various transitions in grey levels, alternating from BRIGHT_TO_DARK and vice versa. Because derivatives are used for edge detection, the sign of the edge must be given. To that end, the following approach with dynamic sign definition is introduced. For each polar scanline:
      • Compute the position where the absolute derivative is maximal:
$$pos = \arg\max_x \left| \frac{\partial}{\partial x} g(x) \right| \tag{24}$$
        • g represents the scanline
      • Define the sign by:
$$l_{mean} = \frac{\sum_{i=0}^{pos} g(i)}{pos}, \qquad r_{mean} = \frac{\sum_{i=pos}^{resample\_width} g(i)}{resample\_width - pos}$$
$$\text{if } (l_{mean} < r_{mean})\colon\ sign = 1\ (\text{DARK\_TO\_BRIGHT}); \qquad \text{if } (l_{mean} \ge r_{mean})\colon\ sign = -1\ (\text{BRIGHT\_TO\_DARK}) \tag{25}$$
  • The epicardial contour is now described by:
$$C_s(i) = \min_j \left( -C_S(i,j) \mid \text{samples } j \text{ on scanline} \right)$$
$$C_S(i,j) = \left\{ \left( \alpha \cdot sign(j) \cdot \frac{\partial}{\partial i} g_j(i) + (1-\alpha) \cdot sign(j) \cdot \frac{\partial^2}{\partial i^2} g_j(i) \right) + \frac{1}{2}\, n(i,j) \right\} \cdot grey\_cost(i,j)$$
$$n(i,j) = \frac{\partial}{\partial i} g_{previous\,j}(i) + \frac{\partial}{\partial i} g_{next\,j}(i)$$
$$grey\_cost(i,j) = \begin{cases} e^{-0.5\left((g_j(i) - (\mu - \sigma))^2 / \sigma^2\right)} & \text{if } sign = \text{BRIGHT\_TO\_DARK} \\ e^{-0.5\left((g_j(i) - (\mu + \sigma))^2 / \sigma^2\right)} & \text{otherwise} \end{cases} \tag{26}$$
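The per-scanline sign determination of expressions (24) and (25) can be sketched as follows; a simple forward-difference derivative is assumed:

```python
def scanline_sign(g):
    """Dynamic sign definition for one polar scanline (sketch).

    g is a list of grey values along one radial scanline. The edge
    position is where the absolute first derivative is maximal; the
    sign follows from comparing the mean grey values left and right
    of that position.
    """
    # forward differences as a simple derivative estimate
    deriv = [g[i + 1] - g[i] for i in range(len(g) - 1)]
    pos = max(range(len(deriv)), key=lambda i: abs(deriv[i]))
    l_mean = sum(g[:pos + 1]) / (pos + 1)
    r_mean = sum(g[pos + 1:]) / (len(g) - pos - 1)
    # DARK_TO_BRIGHT -> +1, BRIGHT_TO_DARK -> -1
    return 1 if l_mean < r_mean else -1
```

The returned sign then selects which branch of the grey-value cost in expression (26) applies to that scanline.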
  • Based on the sign determination described earlier, the posterior junction of the right ventricle and the interventricular septum (the LVRV connection) is defined in block 13. For each frame in the short-axis dataset, the area with the largest closed DARK_TO_BRIGHT sign is defined using expressions (24) and (25). Then the first scanline sign in this frame going from dark to bright, in a clockwise resample orientation of the above polar model, is defined as the LVRV connection point.
  • An alternative segmentation for the left ventricle is presented in block 16. This method is based on expression (25). Therein, both the left ventricle endocardial and epicardial borders are segmented with a minimum cost approach. The main difference is that this method assumes an approximately correct location of a seed contour. From this assumption, segmentation propagates in time with each short-axis frame. The previous location of the contours is introduced into the cost function, and the sign definition and tissue statistics are updated for each frame. The cost function is given by:
$$C_S(i,j) = \left( \alpha \cdot sign(j) \cdot \frac{\partial}{\partial i} g_j(i) + (1-\alpha) \cdot sign(j) \cdot \frac{\partial^2}{\partial i^2} g_j(i) \right) \cdot grey\_cost(i,j) \cdot prev\_cost(i,j)$$
$$\text{where } prev\_cost(i,j) = e^{-\left((contour\_position_j - i)^2 / \sigma_p^2\right)} \tag{27}$$
    This approach is suitable for contour corrections in short-axis images.
  • The final step in segmenting the left ventricle is the identification/segmentation of the papillary muscles in block 14, as follows. Knowing the boundary of the myocardium, the endocardial contour and the epicardial contour, the grey level distribution of the myocardium and the grey level associated with the blood pool can be derived accurately. This is done by computing the normal distribution of both tissues; the associated threshold separating the two distributions is then found with the Otsu threshold method in each frame Ip. In places where papillary muscles are located, no blood can be present. The segmentation of the papillary muscles is then reduced to thresholding the area enclosed by the endocardial contour and extracting, via binary morphology, all segmented objects that exceed a predefined area.
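The Otsu step can be illustrated with a generic histogram-based implementation. This is the standard formulation of Otsu's method, not necessarily the patent's exact variant:

```python
def otsu_threshold(values, n_bins=256):
    """Otsu's method: pick the threshold maximising between-class variance."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo                        # degenerate: a single grey value
    hist = [0] * n_bins
    for v in values:
        hist[min(int((v - lo) / (hi - lo) * (n_bins - 1)), n_bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(n_bins):
        w0 += hist[t]                    # pixels at or below bin t
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    # map the centre of the optimal bin back to the grey-value range
    return lo + (best_t + 0.5) / (n_bins - 1) * (hi - lo)
```

Applied to grey values sampled from the myocardium and the blood pool, the returned threshold falls between the two intensity clusters, so thresholding the area inside the endocardial contour isolates the dark papillary muscle pixels.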
  • Block 15 presents a further improvement: the use of the two-chamber and four-chamber image sets to define the 3D spatial left ventricle virtual apex position. Due to partial volume effects and the relatively thick short-axis slices used during acquisition, the short-axis apex is difficult to localize, even for an experienced clinician. In the current state of the MRI art, cine acquisition slices are about 8 mm thick in order to have sufficient signal-to-noise ratio. Knowing the 3D spatial position of the left ventricle apex, it is possible to correct the volume of the left ventricle afterwards.
  • The spatial 3D apex position is computed in the end-diastolic phase and in the end-systolic phase as identified by expression 17. The derived endocardial contours in the short-axis frames of the end-diastolic phase are mapped onto the two-chamber and four-chamber frames, starting from the base slice down to the apex slice as found when localizing the left ventricle. The mapping is done by:
$$p_{2ch}[\,] = endo \cap l_{2ch}$$
$$p_{4ch}[\,] = endo \cap l_{4ch} \tag{28}$$
  • The point lists p2ch and p4ch are mapped onto the long-axis frames by transforming them first into the 3D world and then back onto the long-axis plane, as follows:
$$P_{3D} = T(P_{SA}) \cdot R_{SA} \cdot S(voxelsize_{SA}) \cdot point2D \tag{29}$$
$$point2D_{LA} = \begin{bmatrix} 1 & 1 & 0 \end{bmatrix} \cdot S(1/voxelsize_{LA}) \cdot R_{LA} \cdot T(P_{LA}) \cdot P_{3D} \tag{30}$$
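The forward transform chain of expression (29) can be sketched with homogeneous 4×4 matrices. The helper names are illustrative, and the slice rotation R_SA is assumed to be given as a 4×4 homogeneous matrix:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def translation(p):
    """4x4 homogeneous translation by the 3-vector p."""
    t = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for i in range(3):
        t[i][3] = p[i]
    return t

def scaling(voxelsize):
    """4x4 homogeneous scaling by the voxel size (sx, sy, sz)."""
    s = [[0.0] * 4 for _ in range(4)]
    for i in range(3):
        s[i][i] = voxelsize[i]
    s[3][3] = 1.0
    return s

def sa_point_to_world(point2d, pos_sa, rot_sa, voxel_sa):
    """Expression (29): P3D = T(P_SA) . R_SA . S(voxelsize_SA) . point2D."""
    m = matmul(matmul(translation(pos_sa), rot_sa), scaling(voxel_sa))
    x, y = point2d
    p = [[x], [y], [0.0], [1.0]]        # homogeneous in-plane point, z = 0
    out = matmul(m, p)
    return [out[0][0], out[1][0], out[2][0]]
```

With an identity rotation, a pixel is first scaled by the voxel size and then offset by the slice position, which is exactly the order of the matrix chain in expression (29).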
  • The above steps are repeated for all short-axis frames until the apex short-axis slice is reached. In this respect, FIG. 6 illustrates a mapping procedure from long-axis (LA) slices to short-axis (SA) slices, or inversely. Through the various nodes, indicated by black dots for both 2CH and 4CH cine sequences, a well-known Catmull-Rom spline curve is fitted. The slice containing the apex is also shown:
$$s(t) = \begin{bmatrix} x = a_x t^3 + b_x t^2 + c_x t + d_x \\ y = a_y t^3 + b_y t^2 + c_y t + d_y \end{bmatrix}, \qquad t \in [0,1] \tag{31}$$
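A single segment of a uniform Catmull-Rom spline evaluates to the cubic form of expression (31); the coefficient formulas below are the standard ones for uniform parameterisation:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom segment between p1 and p2 at t in [0, 1].

    Points are (x, y) tuples; a, b, c, d correspond to the cubic
    coefficients of expression (31), derived per axis from the four
    control points.
    """
    def axis(v0, v1, v2, v3):
        a = -0.5 * v0 + 1.5 * v1 - 1.5 * v2 + 0.5 * v3
        b = v0 - 2.5 * v1 + 2.0 * v2 - 0.5 * v3
        c = -0.5 * v0 + 0.5 * v2
        d = v1
        # Horner evaluation of a*t^3 + b*t^2 + c*t + d
        return ((a * t + b) * t + c) * t + d
    return (axis(p0[0], p1[0], p2[0], p3[0]),
            axis(p0[1], p1[1], p2[1], p3[1]))
```

The curve interpolates its inner control points (t = 0 gives p1, t = 1 gives p2), which is why it is convenient for fitting a smooth curve through the mapped contour nodes.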
  • This model is used to segment the left ventricle's endocardial border in the long-axis view of the end-diastolic phase. Again, a radial minimum cost algorithm localizes the endocardial border, wherein both the 2CH and 4CH cine sequences contribute two border points, in the lower part of the Figure. The following function (expression 32) defines the boundary, using CS as a cost function; herein, g represents the image resampled by the polar model.
$$C_s(i) = \min_j \left( -C_S(i,j) \mid \text{samples } j \text{ on scanline} \right)$$
$$C_S(i,j) = \left( \alpha \cdot \frac{\partial}{\partial i} g_j(i) + (1-\alpha) \cdot \frac{\partial^2}{\partial i^2} g_j(i) \right) \cdot (I_{ES} - I_{ED}) \cdot e^{-0.5\left((g_j(i) - (\mu + \sigma))^2 / \sigma^2\right)} \tag{32}$$
  • The apex position is now located (expression 33) in both the two-chamber and four-chamber long-axis views, the 3D spatial apex being defined by expression 34:
$$p_{midbase} = \frac{p_{1\,base} + p_{2\,base}}{2}, \qquad p_{apex\,LA} = \arg\max_i \; dist(endo_{LA\,i} - p_{midbase}) \tag{33}$$
$$p_{apex} = \frac{T(P_{2CH}) \cdot R_{2CH} \cdot S(voxelsize_{2CH}) \cdot p_{apex\,2CH} + T(P_{4CH}) \cdot R_{4CH} \cdot S(voxelsize_{4CH}) \cdot p_{apex\,4CH}}{2} \tag{34}$$
  • The same steps are repeated for the end-systolic phase. The apex position in the remaining phases is interpolated or extrapolated from the spatial 3D apex positions in the end-diastolic and end-systolic phases: this yields the correct volume contribution for the apex slice, whether or not the slice intersects the apex position proper. In fact, the left ventricle volume computed through Simpson's rule is now corrected by fitting an ellipsoid through the 3D apex position and the apex slice.
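In short-axis cardiac analysis, Simpson's rule amounts to summing contour area times slice thickness over the covered slices. A minimal sketch follows; the shoelace helper for the contour area is an illustrative assumption, not taken from the patent:

```python
def polygon_area(points):
    """Shoelace area of a closed contour given as (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def simpson_volume(slice_areas, slice_thickness):
    """Simpson's rule for ventricular volume: sum of area x thickness."""
    return sum(a * slice_thickness for a in slice_areas)
```

The apex and mitral-valve corrections described above then adjust this raw slice-summation volume for the partial slices at the ends of the ventricle.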
  • In this respect, FIG. 7 illustrates starting the segmentation from user LA input. Briefly, the user identifies the start conditions in the two- and four-chamber image datasets. The advantage of this method is that the volume can be corrected according to the spatial location of the mitral valve, derived from the long-axis views as well. First, in block 70, the endocardial and epicardial left ventricle borders in the 2-chamber and 4-chamber views must be present for both the end-diastole and end-systole cardiac phases, for instance by manually drawing a spline-based contour based on expression 32. Although this represents operator input, all further blocks in FIG. 7 are executed automatically. Next, in block 71, ‘in-plane’ movement and breathing movement are corrected as described with reference to blocks 41 and 42 of FIG. 4. Then, in block 72, these long-axis contours are mapped onto the short-axis dataset. In pseudo code:
  • Walk through all slices
    l_2ch = SA(slice) ∩ 2CH    (SA slice transformed into the 2CH plane)
    l_4ch = SA(slice) ∩ 4CH    (SA slice transformed into the 4CH plane)
    pt_2ch[ ] = s_2ch(t) ∩ l_2ch    (s(t): 2-chamber spline)
    pt_4ch[ ] = s_4ch(t) ∩ l_4ch    (s(t): 4-chamber spline)  (35)
  • Convert the point lists pt2ch and pt4ch into the SA slice
  • Define the slices which are part of the ventricle
  • The segmenting of the left ventricle endocardial contour (block 11), epicardial contour (block 12), LVRV connection point (block 13) and the papillary muscles (block 14) is identical to the previously described procedures. Next, in block 73, the volume is computed by Simpson's rule and corrected according to the spatial left ventricle apex and mitral valve positions, while excluding the volume ‘outside’ the left ventricle. Thereupon, the procedure is finished again.
  • FIG. 8 illustrates a high-level functional block diagram of an exemplary MRI apparatus. For clarity, only three subsystems are shown. Block 112 represents the MRI apparatus proper, which operates under commands from user interface module 116 and provides data to data processing module 114. The latter may be any computer processing platform, including a workstation, personal computer and the like; it generates intermediate data and final results for user interface module 116, which furthermore allows a dialog with data processing module 114. User interface 116 represents all kinds of input and output devices; it may also be embodied in a program storage device (e.g., a CD-ROM or DVD, or one or more downloadable files) that stores a computer program which is loaded onto and executed by a computer processing platform to carry out the inventive operations described herein. Further steps and procedures have either been detailed supra or are common knowledge to persons skilled in the art.
  • There have been described and illustrated herein several embodiments of a machine-based methodology for segmenting the image of the chambers of the human heart. While particular embodiments of the invention have been described, it is not intended that the invention be limited thereto, as it is intended that the invention be as broad in scope as the art will allow and that the specification be read likewise. It will therefore be appreciated by those skilled in the art that yet other modifications could be made to the provided invention without deviating from its spirit and scope as claimed.

Claims (20)

  1. A method for machine-based segmentation of cardiac chambers in a four-dimensional magnetic resonance image that contains at least short-axis cine sequences, the segmentation based on temporal variations in the image, the method comprising:
    preprocessing image data to eliminate high local image variances and inhomogeneity;
    processing image data to identify positions of a base slice, an apex slice, an end-diastole phase, an end-systole phase and an LVRV connection, the LVRV connection being a posterior junction between a right ventricle and the interventricular septum;
    followed by machine-based segmentation of an endocard based on fuzzy object extraction, and machine-based segmentation of an epicard through finding a radial minimum cost with dynamic sign definition and tissue classification.
  2. A method as claimed in claim 1, wherein:
    said cine sequences furthermore comprise two-chamber cine sequences and four-chamber cine sequences.
  3. A method as claimed in claim 1, further comprising:
    correcting for data acquisition movements.
  4. A method as claimed in claim 1, further comprising:
    automatic segmentation of the papillary muscles of the heart.
  5. A method as claimed in claim 4, further comprising:
    automatic detection of the position of the LVRV connection.
  6. A method as claimed in claim 1, further comprising:
    allowing for operator inspection of a subject-caused movement in the acquired MR image dataset by using system coordinates from an MR scanner apparatus.
  7. A method as claimed in claim 6, further comprising:
    automatic correction for subject-caused movement in the acquired MR image dataset by using system coordinates from an MR scanner apparatus.
  8. A method as claimed in claim 1, wherein:
    segmentation of the endocard and epicard starts from an operator-defined seed contour and is based on radial minimum cost with dynamic sign definition and tissue classification.
  9. A method as claimed in claim 1, wherein:
    volume of a ventricle is determined by a correction that is derived from the spatial location of the heart's apex position on a curve which is obtained from the long axis cine sequences.
  10. A method as claimed in claim 9, wherein:
    said correction takes into account a mitral valve position obtained from long axis cine views.
  11. An apparatus for machine-based segmentation of cardiac chambers in a four-dimensional magnetic resonance image that contains at least short-axis cine sequences, the segmentation based on temporal variations in the data set and on features extracted from the image, the apparatus comprising:
    preprocessing means for preprocessing image data to eliminate high local image variances and inhomogeneity;
    means for processing image data to identify positions of a base slice, an apex slice, an end-diastole, an end-systole and an LVRV connection, the LVRV connection being a posterior junction between a right ventricle and an interventricular septum, and
    means for machine-based segmentation of an endocard based on fuzzy object extraction, and means for machine-based segmentation of an epicard through finding a radial minimum cost with dynamic sign definition and tissue classification.
  12. An apparatus as claimed in claim 11, wherein:
    said cine sequences furthermore comprise two-chamber cine sequences and four-chamber cine sequences.
  13. An apparatus as claimed in claim 11, wherein:
    machine-based segmentation over time of endocardial, epicardial and papillary muscles is based on a seed contour.
  14. A computer program product that is installable onto a computer processing platform, the computer program product readable by the computer processing platform, tangibly embodying a program of instructions executable by the computer processing platform to perform method steps for machine-based segmentation of cardiac chambers in a four-dimensional magnetic resonance image that contains at least short-axis cine sequences, the segmentation based on temporal variations in the image and on features extracted from the image, the method steps comprising:
    preprocessing image data to eliminate high local image variances and inhomogeneity;
    processing image data to identify positions of a base slice, an apex slice, an end-diastole phase, an end-systole phase and an LVRV connection, the LVRV connection being a posterior junction between a right ventricle and the interventricular septum;
    followed by machine-based segmentation of an endocard based on fuzzy object extraction, and machine-based segmentation of an epicard through finding a radial minimum cost with dynamic sign definition and tissue classification.
  15. A computer program product as claimed in claim 14, the method steps further comprising:
    correcting for data acquisition movements.
  16. A computer program product as claimed in claim 14, the method steps further comprising:
    automatic segmentation of the papillary muscles of the heart.
  17. A computer program product as claimed in claim 14, the method steps further comprising:
    allowing for operator inspection of a subject-caused movement in the acquired MR image dataset by using system coordinates from an MR scanner apparatus.
  18. A computer program product as claimed in claim 17, the method steps further comprising:
    automatic correction for subject-caused movement in the acquired MR image dataset by using system coordinates from an MR scanner apparatus.
  19. A method for machine-based segmentation of cardiac chambers in a four-dimensional magnetic resonance image that contains short-axis cine sequences, two-chamber cine sequences and four-chamber cine sequences, the segmentation based on temporal variations in the image and on features extracted from the image, the method comprising:
    interacting with an operator to indicate the epicard and endocard in a long axis data set; and
    automatic segmentation of the endocard based on fuzzy object extraction, and automatic segmentation of the epicard based on radial minimum cost with dynamic sign definition and tissue classification.
  20. An apparatus for machine-based segmentation of cardiac chambers in a four-dimensional magnetic resonance image that contains short-axis cine sequences, two-chamber cine sequences and four-chamber cine sequences, the segmentation based on temporal variations in the image and on features extracted from the image, the apparatus comprising:
    means for interacting with an operator to indicate the epicard and endocard in a long axis data set; and
    means for automatic segmentation of the endocard based on fuzzy object extraction, and means for automatic segmentation of the epicard based on radial minimum cost with dynamic sign definition and tissue classification.
US11380681 2006-04-28 2006-04-28 Method, apparatus and computer program product for automatic segmenting of cardiac chambers Active 2029-11-04 US7864997B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11380681 US7864997B2 (en) 2006-04-28 2006-04-28 Method, apparatus and computer program product for automatic segmenting of cardiac chambers


Publications (2)

Publication Number Publication Date
US20070253609A1 true true US20070253609A1 (en) 2007-11-01
US7864997B2 US7864997B2 (en) 2011-01-04

Family

ID=38648360

Family Applications (1)

Application Number Title Priority Date Filing Date
US11380681 Active 2029-11-04 US7864997B2 (en) 2006-04-28 2006-04-28 Method, apparatus and computer program product for automatic segmenting of cardiac chambers

Country Status (1)

Country Link
US (1) US7864997B2 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187180A1 (en) * 2007-02-06 2008-08-07 Kabushiki Kaisha Toshiba Magnetic resonance imaging apparatus and method of analyzing images provided thereby
US20090232371A1 (en) * 2008-03-12 2009-09-17 Seimens Corporate Research, Inc. Automatic Recovery of the Left Ventricular Blood Pool in Cardiac Cine Magnetic Resonance Images
US20090290777A1 (en) * 2008-05-23 2009-11-26 Siemens Corporate Research, Inc. Automatic localization of the left ventricle in cardiac cine magnetic resonance imaging
WO2011010243A1 (en) * 2009-07-20 2011-01-27 Koninklijke Philips Electronics N.V. Apparatus and method for influencing and/or detecting magnetic particles
US20110105931A1 (en) * 2007-11-20 2011-05-05 Siemens Medical Solutions Usa, Inc. System for Determining Patient Heart related Parameters for use in Heart Imaging
US20110210734A1 (en) * 2010-02-26 2011-09-01 Robert David Darrow System and method for mr image scan and analysis
US20110211744A1 (en) * 2010-02-26 2011-09-01 Robert David Darrow System and method for mr image scan and analysis
US20120177269A1 (en) * 2010-09-22 2012-07-12 Siemens Corporation Detection of Landmarks and Key-frames in Cardiac Perfusion MRI Using a Joint Spatial-Temporal Context Model
WO2012118459A1 (en) 2011-03-01 2012-09-07 Ulusoy Ilkay An object based segmentation method
US20130253307A1 (en) * 2009-09-18 2013-09-26 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
RU2522038C2 (en) * 2008-09-01 2014-07-10 Конинклейке Филипс Электроникс Н.В. Segmentation in mr-visualisation of heart in long axis projection with late contrast enhancement
WO2016013773A1 (en) * 2014-07-25 2016-01-28 Samsung Electronics Co., Ltd. Magnetic resonance image processing method and magnetic resonance image processing apparatus
KR20160051355A (en) * 2014-11-03 2016-05-11 삼성전자주식회사 Medical imaging apparatus and processing method thereof
US20170140530A1 (en) * 2015-11-17 2017-05-18 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image processing in magnetic resonance imaging
US20170178285A1 (en) * 2015-12-22 2017-06-22 Shanghai United Imaging Healthcare Co., Ltd. Method and system for cardiac image segmentation

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005009220A3 (en) * 2003-07-21 2005-11-17 Univ Johns Hopkins Registration of ultrasound to fluoroscopy for real time optimization of radiation implant procedures
US8315449B2 (en) * 2008-06-24 2012-11-20 Medrad, Inc. Identification of regions of interest and extraction of time value curves in imaging procedures
WO2011008906A1 (en) * 2009-07-15 2011-01-20 Mayo Foundation For Medical Education And Research Computer-aided detection (cad) of intracranial aneurysms
US8948484B2 (en) * 2010-11-11 2015-02-03 Siemens Corporation Method and system for automatic view planning for cardiac magnetic resonance imaging acquisition
KR20150013580A (en) 2012-05-14 2015-02-05 바이엘 메디컬 케어 인크. Systems and methods for determination of pharmaceutical fluid injection protocols based on x-ray tube voltage
US20130322718A1 (en) * 2012-06-01 2013-12-05 Yi-Hsuan Kao Method and apparatus for measurements of the brain perfusion in dynamic contrast-enhanced computed tomography images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5239591A (en) * 1991-07-03 1993-08-24 U.S. Philips Corp. Contour extraction in multi-phase, multi-slice cardiac mri studies by propagation of seed contours between images
US6438403B1 (en) * 1999-11-01 2002-08-20 General Electric Company Method and apparatus for cardiac analysis using four-dimensional connectivity
US20060241376A1 (en) * 2003-04-24 2006-10-26 Koninklijke Philips Electronics N.V. Non-invasive left ventricular volume determination
US7646901B2 (en) * 2001-04-30 2010-01-12 Chase Medical, L.P. System and method for facilitating cardiac intervention


Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187180A1 (en) * 2007-02-06 2008-08-07 Kabushiki Kaisha Toshiba Magnetic resonance imaging apparatus and method of analyzing images provided thereby
US8488856B2 (en) * 2007-02-06 2013-07-16 Kabushiki Kaisha Toshiba Magnetic resonance imaging apparatus and method of analyzing images provided thereby
US20110105931A1 (en) * 2007-11-20 2011-05-05 Siemens Medical Solutions Usa, Inc. System for Determining Patient Heart related Parameters for use in Heart Imaging
US8433113B2 (en) * 2008-03-12 2013-04-30 Siemens Aktiengesellschaft Automatic recovery of the left ventricular blood pool in cardiac cine magnetic resonance images
US20090232371A1 (en) * 2008-03-12 2009-09-17 Seimens Corporate Research, Inc. Automatic Recovery of the Left Ventricular Blood Pool in Cardiac Cine Magnetic Resonance Images
US20090290777A1 (en) * 2008-05-23 2009-11-26 Siemens Corporate Research, Inc. Automatic localization of the left ventricle in cardiac cine magnetic resonance imaging
US8218839B2 (en) * 2008-05-23 2012-07-10 Siemens Aktiengesellschaft Automatic localization of the left ventricle in cardiac cine magnetic resonance imaging
RU2522038C2 (en) * 2008-09-01 2014-07-10 Конинклейке Филипс Электроникс Н.В. Segmentation in mr-visualisation of heart in long axis projection with late contrast enhancement
WO2011010243A1 (en) * 2009-07-20 2011-01-27 Koninklijke Philips Electronics N.V. Apparatus and method for influencing and/or detecting magnetic particles
CN102469951A (en) * 2009-07-20 2012-05-23 皇家飞利浦电子股份有限公司 Apparatus and method for influencing and/or detecting magnetic particles
US8981770B2 (en) 2009-07-20 2015-03-17 Koninklijke Philips N.V. Apparatus and method for influencing and/or detecting magnetic particles
US9839366B2 (en) 2009-09-18 2017-12-12 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
US9585576B2 (en) * 2009-09-18 2017-03-07 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
US9474455B2 (en) 2009-09-18 2016-10-25 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
US20130253307A1 (en) * 2009-09-18 2013-09-26 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
US10058257B2 (en) 2009-09-18 2018-08-28 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and magnetic resonance imaging method
US20110211744A1 (en) * 2010-02-26 2011-09-01 Robert David Darrow System and method for mr image scan and analysis
US8594400B2 (en) 2010-02-26 2013-11-26 General Electric Company System and method for MR image scan and analysis
US20110210734A1 (en) * 2010-02-26 2011-09-01 Robert David Darrow System and method for mr image scan and analysis
US8811699B2 (en) * 2010-09-22 2014-08-19 Siemens Aktiengesellschaft Detection of landmarks and key-frames in cardiac perfusion MRI using a joint spatial-temporal context model
US20120177269A1 (en) * 2010-09-22 2012-07-12 Siemens Corporation Detection of Landmarks and Key-frames in Cardiac Perfusion MRI Using a Joint Spatial-Temporal Context Model
WO2012118459A1 (en) 2011-03-01 2012-09-07 Ulusoy Ilkay An object based segmentation method
JP2014511533A (en) * 2011-03-01 2014-05-15 ウルソイ イルカイULUSOY, Ilkay Object-based segmentation method
EP3171774A4 (en) * 2014-07-25 2017-12-27 Samsung Electronics Co., Ltd. Magnetic resonance image processing method and magnetic resonance image processing apparatus
KR101616029B1 (en) 2014-07-25 2016-04-27 삼성전자주식회사 Magnetic resonance imaging processing method and apparatus thereof
WO2016013773A1 (en) * 2014-07-25 2016-01-28 Samsung Electronics Co., Ltd. Magnetic resonance image processing method and magnetic resonance image processing apparatus
KR20160012807A (en) * 2014-07-25 2016-02-03 삼성전자주식회사 Magnetic resonance imaging processing method and apparatus thereof
KR20160051355A (en) * 2014-11-03 2016-05-11 삼성전자주식회사 Medical imaging apparatus and processing method thereof
KR101652046B1 (en) 2014-11-03 2016-08-29 삼성전자주식회사 Medical imaging apparatus and processing method thereof
US10042028B2 (en) 2014-11-03 2018-08-07 Samsung Electronics Co., Ltd. Medical imaging apparatus and method of processing medical image
US20170140530A1 (en) * 2015-11-17 2017-05-18 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image processing in magnetic resonance imaging
US20170178285A1 (en) * 2015-12-22 2017-06-22 Shanghai United Imaging Healthcare Co., Ltd. Method and system for cardiac image segmentation

Also Published As

Publication number Publication date Type
US7864997B2 (en) 2011-01-04 grant

Similar Documents

Publication Publication Date Title
Masutani et al. Computerized detection of pulmonary embolism in spiral CT angiography based on volumetric image analysis
Gerard et al. Efficient model-based quantification of left ventricular function in 3-D echocardiography
Mitchell et al. Multistage hybrid active appearance model matching: segmentation of left and right ventricles in cardiac MR images
Sluimer et al. Toward automated segmentation of the pathological lung in CT
Rezaee et al. A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering
Jacob et al. A shape-space-based approach to tracking myocardial borders and quantifying regional left-ventricular function applied in echocardiography
Dehmeshki et al. Segmentation of pulmonary nodules in thoracic CT scans: a region growing approach
Bai et al. A probabilistic patch-based label fusion model for multi-atlas segmentation with registration refinement: application to cardiac MR images
Unal et al. Shape-driven segmentation of the arterial wall in intravascular ultrasound images
US20070116332A1 (en) Vessel segmentation using vesselness and edgeness
US7336809B2 (en) Segmentation in medical images
US7953266B2 (en) Robust vessel tree modeling
US20050281447A1 (en) System and method for detecting the aortic valve using a model-based segmentation technique
Van der Geest et al. Comparison between manual and semiautomated analysis of left ventricular volume parameters from short-axis MR images
US20020141626A1 (en) Automated registration of 3-D medical scans of similar anatomical structures
US20030069494A1 (en) System and method for segmenting the left ventricle in a cardiac MR image
US7421101B2 (en) System and method for local deformable motion analysis
US7925064B2 (en) Automatic multi-dimensional intravascular ultrasound image segmentation method
Lynch et al. Automatic segmentation of the left ventricle cavity and myocardium in MRI data
US20080292169A1 (en) Method for segmenting objects in images
US20070003117A1 (en) Method and system for volumetric comparative image analysis and diagnosis
US20060064007A1 (en) System and method for tracking anatomical structures in three dimensional images
US20030099390A1 (en) Lung field segmentation from CT thoracic images
US20070031019A1 (en) System and method for coronary artery segmentation of cardiac CT volumes
US7876938B2 (en) System and method for whole body landmark detection, segmentation and change quantification in digital images

Legal Events

Date Code Title Description
AS Assignment

Owner name: PIE MEDICAL IMAGING B. V., NETHERLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ABEN, JEAN-PAUL MARIA MICHEL;REEL/FRAME:017611/0846

Effective date: 20060413

FPAY Fee payment

Year of fee payment: 4

MAFP

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8