WO2001082787A2 - Method for determining the contour of an in vivo organ using multiple image frames of the organ

Method for determining the contour of an in vivo organ using multiple image frames of the organ

Info

Publication number
WO2001082787A2
Authority
WO
WIPO (PCT)
Prior art keywords
boundary
image
class
heart
pixel
Prior art date
Application number
PCT/US2001/014341
Other languages
French (fr)
Other versions
WO2001082787A3 (en)
Inventor
Florence Sheehan
Robert Haralick
Lei Sui
Original Assignee
University Of Washington
Priority date
Filing date
Publication date
Application filed by University Of Washington filed Critical University Of Washington
Priority to AU2001261163A1
Publication of WO2001082787A2
Publication of WO2001082787A3

Classifications

    • A61B 6/504: Apparatus for radiation diagnosis; clinical applications involving diagnosis of blood vessels, e.g. by angiography
    • A61B 6/481: Apparatus for radiation diagnosis; diagnostic techniques involving the use of contrast agents
    • G06T 7/12: Image analysis; segmentation; edge-based segmentation
    • G06T 7/155: Image analysis; segmentation; edge detection involving morphological operators
    • G06T 2207/10116: Indexing scheme for image analysis; image acquisition modality; X-ray image
    • G06T 2207/20036: Indexing scheme for image analysis; special algorithmic details; morphological image processing
    • G06T 2207/30048: Indexing scheme for image analysis; subject of image; biomedical image processing; heart; cardiac

Definitions

  • the present invention generally pertains to a method for determining a boundary or contour of an internal organ based upon image data, and more specifically, is directed to a method for determining the contour of the organ based on processing image data for multiple image frames.
  • Contrast ventriculography is a procedure that is routinely performed in clinical practice during cardiac catheterization. Catheters must be intravascularly inserted within the heart, for example, to measure cardiac volume and/or flow rate. Ventriculograms are X-ray images that graphically represent the inner or endocardial surface of the ventricular chamber. These images are typically used to determine tracings of the endocardial boundary at end diastole (ED), when the heart is filled with blood, and at end systole (ES), when the heart is at the end of a contraction during the cardiac cycle.
  • a physician can determine the size and function of the left ventricle and can diagnose certain abnormalities or defects in the heart.
  • Of the end systole and end diastole images, the former is perhaps the most useful in diagnosing cardiac abnormalities.
  • a radio opaque contrast fluid is injected into the left ventricle (LV) of a patient's heart.
  • An X-ray source is aligned with the heart, producing a projected image representing, in silhouette, the endocardial surface of the heart (myocardium) muscle.
  • the silhouette image of the LV is visible because of the contrast between the radio opaque fluid and other surrounding physiological structure.
  • Manual delineation of the endocardial boundary by a trained medical practitioner is normally employed to determine the contour, but this procedure requires time and considerable training and experience to accomplish accurately.
  • a medical practitioner can visually assess the ventriculogram image to estimate the endocardial contour, but such evaluation is subject to more interobserver variability and therefore less accuracy than a quantitative analysis.
  • an automated border detection technique that can produce more accurate results, in much less time than the manual evaluation, would be preferred.
  • the method disclosed in the patent includes the steps of edge detection, block generation, block classification, block refinement, and bit map generation. More specifically, after the edges of the object are detected in the first step, the image is broken into a set of non-overlapping, contiguous blocks of pixels, which are classified into foreground, background, and object, on a block-by-block basis.
  • the block classification step determines in which of ten possible states each block belongs using a set of clinically and empirically determined decision rules. By evaluating the fine structure within each block, the block classification is refined so that a two-valued or binary image is produced that functions as a template for any further image processing to be done on the image.
  • Another technique related to automated border detection is based upon identifying a gradient of the gray scale values comprising an image.
  • a gray scale threshold gradient value is applied to process the gray scale image data of a ventriculogram in order to identify the boundary of the LV, and further processing may be employed to improve the accuracy with which the border is identified.
  • landmarks or recognizable shapes or gray scale value combinations can be tracked over time to determine the direction and velocity of motion, which are represented as flow vectors. By analyzing the pattern of flow vectors, motion of the organ can be assessed. However, these flow vectors do not directly indicate the contour of the organ.
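By way of illustration only, the gradient-threshold idea described above can be sketched as follows; the Sobel operator and the fixed threshold are assumptions, since the text does not specify how the gradient is computed:

```python
import numpy as np
from scipy import ndimage

def gradient_boundary_mask(image, threshold=40.0):
    """Flag candidate boundary pixels where the gray scale gradient
    magnitude exceeds a threshold (illustrative only)."""
    image = image.astype(float)
    gx = ndimage.sobel(image, axis=1)  # horizontal gradient
    gy = ndimage.sobel(image, axis=0)  # vertical gradient
    return np.hypot(gx, gy) > threshold
```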
  • Yet another technique that is sometimes employed to determine the contour of an organ is based on digital subtraction.
  • a mask image is recorded prior to introduction of a radio opaque contrast substance into the organ.
  • This mask image may contain radio opaque structures such as ribs and vertebrae, which tend to interfere with discerning the contour of the organ.
  • the mask image is digitally subtracted from the second image, thereby removing the clutter in the second image that is not the organ in question.
  • this technique is difficult to implement because registration between the mask image and the subsequent second image of the organ made perceptible by the radio opaque contrast substance is difficult to achieve.
  • a variation of this technique employs time interval delay subtraction, wherein an image that was previously made close in time is subtracted from an image being analyzed, so that a difference image is produced that contains only the part of the organ that moved during the time interval between the two images. However, any part of the organ that does not move between the times that the two images were made cannot be delineated.
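A minimal sketch of the digital subtraction technique follows; it assumes the mask and contrast frames are already registered 8-bit images, which, as noted above, is the difficult part in practice:

```python
import numpy as np

def digital_subtraction(contrast_frame, mask_frame):
    """Subtract the pre-contrast mask from the contrast-enhanced
    frame; the residual (clipped back to 8 bits) ideally contains
    only the opacified organ."""
    diff = contrast_frame.astype(np.int32) - mask_frame.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)
```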
  • Morphological operators can also be employed to process image data in order to define the boundaries of objects. Such techniques are often more general in application, e.g., relating to artificial vision systems, and are therefore not constrained by physiological considerations.
  • an automated method should employ as much information derived from the imaging as possible to delineate the surface of the organ. Further, the automated system should accomplish the task more efficiently and quickly than a human. Toward that end, it has become apparent that more information than can be obtained from a single image will improve the accuracy with which an automated technique can determine the contour of an organ. Analysis of more than one image can provide the additional information needed for an automated method to achieve greater accuracy and can provide the physician more information about the heart or other organ than current techniques. Methods for automatically determining the boundary of the LV (or other organ) are disclosed in commonly assigned U.S. Patent Nos. 5,570,430 and 5,734,739 (a continuation-in-part of U.S. Patent No. 5,570,430).
  • One of the problems was due to inherent differences in the position and orientation of the heart within the image data being analyzed relative to that of hearts in the images comprising the training data. Such differences can affect the accuracy of the initial estimate of the LV region and determination of the contour at ES (and ED), since the differences introduce errors in the classification of pixels of the image data based upon the training data. Steps added in the second of these two patents included rotating and translating the training data so that the data are consistent in position and orientation to the LV of the heart being analyzed, prior to using the training data to classify pixels when producing the initial estimate of the LV region.
  • One problem with the method of U.S. Patent No. 5,570,430 was an uncertainty in the automated detection of the border of the LV arising in the inferior portion of the contour, adjacent to the patient's diaphragm.
  • the gray scale values for the tissue comprising the diaphragm are, for some patients' images, very similar to the gray scale value of injected dye in the images being analyzed.
  • the uncertainty in the automated detection of the border of the LV in the inferior region bordering the diaphragm typically causes the process to define a border that extends outwardly of its actual location, creating an error.
  • U.S. Patent No. 5,734,739 describes steps for more accurately determining the contour of the LV in this region.
  • Another area for improving the technique disclosed in the two commonly assigned patents noted above involves the optimization of the various parameters used in the method so that the resulting ejection fraction error is minimized.
  • parameters that can be optimized prior to the application of the method for evaluating the shape of a patient's LV, and by using optimized parameters in the process, a substantial reduction in the error of the ejection fraction that is determined for each such patient should be achievable.
  • the ES boundary regression should be improved, for example, by using information about the ED boundary.
  • a method for automatically determining a contour of at least a portion of an organ in a patient, based upon digital image data of a region within the body of the patient in which the organ is disposed.
  • the image data represent a sequence of image frames of the region made over an interval of time during which the organ has completed at least one cycle of a periodic movement.
  • Each image frame of the sequence includes a plurality of pixels.
  • the organ is a heart and the method is applied to determine the contour of the LV.
  • the invention can also be applied to determine the contour of other chambers in the heart and the contour of other organs in a patient's body.
  • the term “maximum volume” corresponds to the ED volume (and related boundary)
  • the term “minimum volume” corresponds to the ES volume (and related boundary) of the heart or a selected chamber of interest of the heart, such as the LV.
  • the first step of the method includes preprocessing and normalization of the image data to produce principal component images in which the image data are "smoothed" and normalized in regard to pixel size and gray scale.
  • the present invention provides for classifying pixels represented in the principal component images as being in one of three classes.
  • the classification step produces pixel class images for the ED/ES boundary (i.e., all pixels within the ED boundary but not within the ES boundary), and for the end systole boundary (i.e., all pixels within the ES boundary); the third class is assigned to the pixels in the background.
  • the maximum volume and minimum volume boundaries for the organ are extracted from the pixel class images, using parameters derived from training data previously produced through manual evaluation of a plurality of image frames over at least one cycle of periodic movement in each of a plurality of organs in other individuals. These two boundaries define the contour of the organ comprising a region of interest in the image data.
  • the step of normalizing preferably includes the steps of normalizing each pixel in regard to dimensional size, to produce square pixel image data.
  • a gray scale morphological opening and closing process is applied to reduce noise and smooth the pixel image data.
  • an image sample rate used to acquire the image data is normalized over at least one cardiac cycle, to produce a predefined number of bands of image data.
  • a cumulative distribution function (CDF) is employed to normalize the gray scale in each of the predefined number of bands of image data, producing a CDF sequence of image frames, each of which includes a plurality of pixels. Further, each pixel in the CDF sequence of image frames is projected onto a new space having a predefined number of dimensions, using parameters derived from the training data, to produce the principal component images.
  • the step of non-parametrically classifying includes the step of aligning a reference axis and an anatomical feature in the organ of the patient with a corresponding axis and corresponding anatomical feature in prior probability images that are derived from the training data.
  • the prior probability images, which are obtained by aligning images in the training data into a common coordinate system, thus indicate the probability of each class at each pixel location.
  • this step produces aligned prior probability images for the ED-not-ES and ES regions of the heart of a patient during the cardiac cycle.
  • the principal component images of pixels within a region of interest are selected, and based upon a probability for each class, a conditional class probability for each of the pixels is determined.
  • the conditional class probability is the likelihood of a pixel belonging to a class given its principal component images.
  • a probability for each class at each pixel location is determined to initially classify the pixels in a region, e.g., within the maximum volume boundary but not the minimum volume boundary, i.e., within the ED-not-ES region if the organ is the heart.
  • a probability is determined for each class at each pixel location to initially classify the pixels in a region within the minimum volume boundary using the aligned prior probability image, the conditional class probability for each pixel, and regression data representing the maximum volume boundary.
  • the regression data representing the maximum volume boundary are combined with probabilities associated with distances between the maximum volume boundary and the minimum volume boundary to produce a refined aligned prior probability image for use with the conditional class probability in the step of determining a minimum volume boundary pixel class image.
  • the step of extracting the maximum volume boundary preferably comprises the steps of forming a maximum volume boundary class region image that includes all pixels classified as being either within the maximum volume boundary and not within the minimum volume boundary OR within the minimum volume boundary. Any pixel that is inconsistent with the maximum volume boundary class region image is deleted, producing a refined maximum volume boundary class region image.
  • the refined maximum volume boundary class region image is then smoothed, and the maximum volume boundary class region is processed to extract a classifier maximum volume boundary having a predefined number of points that represent the maximum volume boundary.
  • the step of extracting the minimum volume boundary includes the steps of forming a minimum volume boundary class region image that includes all pixels classified within the minimum volume boundary, and smoothing the minimum volume boundary class region image.
  • the minimum volume boundary class region is processed to extract a classifier minimum volume boundary having a predefined number of points that represent the minimum volume boundary.
  • Regression coefficients derived from the training data are applied to the maximum volume boundary and to the minimum volume boundary to determine a regressed maximum volume boundary and a regressed minimum volume boundary.
  • a plurality of parameters are determined for the organ, based upon the regressed maximum volume boundary, the regressed minimum volume boundary, and classifier boundaries obtained from the training data.
  • rejection classifier coefficients are determined from the training data.
  • a rejection cost threshold is employed to determine if the contour of the organ thus determined is accepted or rejected. If the contour is rejected, it is flagged for manual review by an operator.
  • This method also includes the step of employing at least one optimized parameter in determining the contour of the organ.
  • the organ comprises a heart of the patient, and the region of interest preferably comprises the LV.
  • FIGURE 1 is a schematic diagram illustrating equipment used to produce X-ray images of an internal organ and the apparatus used for processing the images to determine the contour of the organ in accordance with the present invention;
  • FIGURE 2 is a cross-sectional view of a human heart, illustrating the shape or contour of the LV, which the present invention can automatically determine;
  • FIGURE 3 is a flow chart showing an overview of the process steps employed to determine the contour of the LV at multiple times during a cardiac cycle;
  • FIGURE 4 is a flow chart showing the preprocessing and normalization steps applied to the raw image data for the LV;
  • FIGURE 5 is a flow chart illustrating the steps for determining the initial ED/ES region classification;
  • FIGURE 6 is a flow chart showing the logic for the ED post processing and for the extraction of the ED boundary;
  • FIGURE 7 is a flow chart showing the logical steps applied to determine the ED regressed boundary;
  • FIGURE 8 is a flow chart illustrating the logical steps employed in producing the ES region classification image;
  • FIGURE 9 is a flow chart illustrating details of the steps for post processing the ES region and for extracting the ES boundary;
  • FIGURE 10 is a flow chart showing the steps for determining the regressed ES boundary;
  • FIGURE 11 is a flow chart showing the steps for determining the classifier stroke volume and classifier ejection fraction;
  • FIGURE 12 is a flow chart illustrating the steps for determining the regressed stroke volume and regressed ejection fraction;
  • FIGURE 13 is a flow chart showing the steps for determining the ED boundary error and ES boundary error; and
  • FIGURE 14 is a flow chart illustrating the logic employed when automatically accepting or rejecting a boundary determined by the present invention.
  • A generally conventional X-ray imaging facility 10 is shown in FIGURE 1. Also shown is the equipment necessary to process the X-ray images produced by the apparatus in accordance with the present invention, so that a contour of an organ can be determined and displayed.
  • an X-ray source 12 is energized with electrical current supplied by a control 14, which determines the level of the current, the voltage applied to the X-ray source, and the time interval during which the electrical current is provided.
  • X-ray source 12 produces a beam 18 of X-rays that pass through the chest of a patient 20 and into a detector 16.
  • X-ray source 12 and detector 16 are aligned along a common longitudinal axis, so that beam 18 passes through a region of interest in the chest of patient 20.
  • the organ that is being imaged is generally disposed within the center of the region of interest and within the center of beam 18.
  • After passing through the chest of patient 20, beam 18 enters an image intensifier 22 in detector 16, which converts the beam of X-rays into a corresponding optical image.
  • a radio opaque contrast substance, such as an iodine compound, is injected into the organ to absorb or block more of the X-ray beam energy than the surrounding soft tissue does.
  • the interior of the organ in the image produced by image intensifier 22 appears relatively brighter, compared to the surrounding background.
  • a partially silvered mirror 24 is disposed along the longitudinal axis of detector 16, so that a portion of the light comprising the image produced by image intensifier 22 is reflected in a direction transverse to the longitudinal axis of the detector and into the lens of a cine camera 26.
  • the cine camera 26 records the images onto cine film 23.
  • the remainder of the light comprising the image is transmitted through partially silvered mirror 24 along the longitudinal axis of detector 16 and into the lens of a video camera 28.
  • a cine projector 25 located outside the X-ray imaging facility is used to project the cine film images into a video camera 29 to produce an analog signal.
  • the analog signal comprises a voltage for each pixel of the image, and the level of the voltage is indicative of a gray scale value or intensity level at the pixel.
  • An analog-to-digital converter (ADC) 30 converts the voltage representing the gray scale value for each pixel to a corresponding digital value. It should be noted that digital video cameras are able to directly produce a digital output signal, and if such a video camera is used, cine camera 26, cine film 23, cine projector 25, video camera 29, and ADC 30 can be eliminated.
  • the gray scale level can range between 0 and 255 for any pixel in the digital image data (but typically, the darkest and brightest pixels will lie within a much smaller range).
  • the digital image data produced by the X-ray system at each spaced-apart interval of time are referred to throughout this specification and in the claims that follow as an "image frame.”
  • a plurality of image frames depicting the organ at spaced-apart times are produced by setting control 14 so that X-ray source 12 is repetitively energized for brief periods of time.
  • the region of interest imaged by beam 18 includes the heart of patient 20, and more specifically, the LV of the heart.
  • the preferred embodiment of the present invention automatically determines the contour of the endocardial surface of the LV at multiple times during the cardiac cycle. In a typical application of the present invention, it will be most useful to determine the contour at ED and at ES, since the contours of the LV at these points in the cardiac cycle are usable to determine an ejection fraction and other criteria indicative of a condition of the heart in patient 20.
  • the previous methods disclosed in the commonly assigned patents for automatically determining the contour of the LV at ED and ES have too often yielded an unacceptable error in the determination of ejection fraction for a patient's heart, compared to the results obtained using the ED and ES contours determined manually by highly skilled and experienced medical personnel.
  • the present invention extends the capabilities of these previous methods to achieve automated determination of ED and ES contours with mean absolute boundary errors within 1.8 mm, yielding a mean absolute error in the ejection fraction (compared to the ejection fraction determined with ED and ES boundaries that have been determined by the manual method) of 7%.
  • the digital image data for each of the image frames are stored in an image data storage device 32, which preferably comprises a large capacity hard drive or other non-volatile storage media.
  • Image data storage device 32 is coupled bi-directionally to a processor 34, which preferably comprises a desktop personal computer or a workstation.
  • a keyboard 36 and a mouse 38 are coupled to processor 34 to enable the operator to input data, such as identification of the locations of the ED and ES aortic valve points (or axis) and the apex, and/or commands used for controlling the software running on processor 34.
  • This software is used to implement the method of the present invention, which enables the digital image data stored as image frames in image data storage device 32 to be processed automatically to determine a contour of the LV at different times during the cardiac cycle.
  • the contour of the LV in an image frame that is thus automatically determined by this method is presented to the user on a display 40, which is coupled to processor 34.
  • data defining the contour determined by processor 34 can optionally be stored in image data storage device 32, for later analysis and evaluation, and the contours that are determined will optionally be employed to compute criteria of interest, such as the ejection fraction of the heart.
  • FIGURE 2 shows a cross-sectional view of a portion of a human heart 60, corresponding to a projection angle typically used for recording ventriculograms; the heart has a shape defined by its outer surface 62.
  • Prior to imaging an LV 64 of heart 60, a radio opaque contrast material is preferably injected into the LV, so that the plurality of image frames produced using the X-ray apparatus include a relatively bright area within LV 64.
  • However, the bright silhouette bounded by the contour of an endocardium (or inner surface) 66 of LV 64 is not clearly delineated.
  • the present method processes the image frames produced with the X-ray source to obtain a contour for at least the ED and ES that closely approximates the endocardium of the patient's LV at those points in the cardiac cycle.
  • image frames produced using an X-ray source are disclosed as the type of image data processed in the preferred embodiment of this invention, the present method is also applicable to processing other types of image data from other sources, including ultrasound and nuclear magnetic resonance image data.
  • the image frames produced by any of these techniques are difficult to interpret to determine the contour of the LV, since the contour of the LV (or of any other organ being examined using these imaging techniques) is typically not clearly delineated.
  • With the LV as the exemplary organ of interest, the lower left portion of the contour, i.e., an inferior border 71 disposed left of an apex 72, characteristically appears with much less clarity than the upper portion of the LV.
  • the upper portion of the LV is disposed adjacent aortic valve 68, which opens into aorta 70, and mitral valve 76, which opens into part of left atrium 74.
  • the poor contrast of the inferior portion of the LV contour is primarily due to the proximity of the diaphragm.
  • the shape of LV 64 varies and its cross-sectional area and volume changes from a maximum at ED to a minimum at ES.
  • the cross-sectional area and the shape defined by the contour of the endocardium surface change during this cycle as portions of the wall of the heart contract and expand.
  • the physician is alerted to a possible problem if the contour does not change shape from frame to frame in a manner consistent with the functioning of a normal heart. For example, if a portion of the LV wall includes a weakened muscle, the condition will be evident to a physician studying the relative changes in the contour of the LV in that portion of the wall, compared to other portions, since the portion of the endocardium comprising the weakened muscle will fail to contract over several image frames during systole in a normal and vigorous manner. Similarly, over multiple cardiac cycles, valve leakage will be evident from an evaluation of the contours. At the very least, physicians are interested in comparing the contour of the LV at ED with that at ES.
  • a primary emphasis of the present invention is in automatically determining the contour of the LV within the ED and ES image frames, and in using the contour to evaluate the condition of the heart, e.g., by determining the ejection fraction, although the contour can automatically be determined for other image frames during the cardiac cycle in the same manner.
  • the capability to automatically determine the contour of the LV immediately after the images are acquired (or in real time) can enable a physician to more readily evaluate the condition of the heart during related medical procedures.
  • the present method has been shown to be capable of automatically determining the contours of the LV or other chamber of the heart (or the contour of another organ) during one or more cardiac cycles, with an accuracy approaching that of an expert skilled in evaluating such images, and the present method accomplishes this task substantially faster than a human expert. Moreover, the present method ensures that the contour is accurately determined by relating a position and orientation of the patient's heart in the image data to an anatomical feature, namely, the aortic valve plane and the apex. In addition, the present method provides a generally accurate determination of the contour adjacent to tissue such as the diaphragm, which tends to interfere with the automated process.
  • raw systolic phase left ventriculogram (LVG) images 80 are processed to automatically identify the endocardial boundaries at the ED and ES points during the systolic phase of the cardiac cycle.
  • a plurality of gray scale eigenvectors 82 derived from the training data are input with the raw systolic phase LVG images to a preprocessing and normalization step 88.
  • the gray scale eigenvectors are obtained from the second moment matrix of gray scale vectors. They are the orthogonal axes reflecting the gray scale variance direction.
  • each pixel will have a defined vertical and horizontal scale, as indicated in a block 84.
  • the pixels in the raw systolic phase LVG image data are sometimes not square, as would be preferred, but instead are rectangular, having a different vertical and horizontal size.
  • the size of the pixels produced by the image acquisition equipment is thus input to preprocessing and normalization step 88, along with a set of tools known as morphological structuring elements 86 that are determined previously from the training data. Further details of the preprocessing and normalization step carried out in block 88 are shown in FIGURE 4.
  • a pixel size calibration step is applied to the raw systolic phase LVG image data, as indicated in a block 90.
  • This step transforms the pixels in these image data into square pixel LVG data, as indicated in a block 92.
  • each image frame is normalized to a resolution of 384x512 square pixels, and the image sequence begins with a frame at ED and ends at the following ES image frame in the systolic phase of the cardiac cycle.
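A minimal sketch of this calibration step is given below; the bilinear interpolation and the physical pixel sizes passed in are assumptions, since only the 384x512 square pixel target is specified:

```python
import numpy as np
from scipy.ndimage import zoom

def calibrate_to_square_pixels(frame, pixel_h_mm, pixel_w_mm,
                               out_shape=(384, 512)):
    """Resample a frame with rectangular pixels into the normalized
    384x512 square pixel resolution (up to interpolation rounding)."""
    frame = frame.astype(float)
    # Equalize the physical scale of the two axes, then resize.
    square = zoom(frame, (pixel_h_mm / pixel_w_mm, 1.0), order=1)
    factors = (out_shape[0] / square.shape[0],
               out_shape[1] / square.shape[1])
    return zoom(square, factors, order=1)
```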
  • the square pixel LVG data produced during the step in block 92 are then input to a block 94 in which a gray scale morphological opening and closing operation is applied to the data.
  • the step referenced in block 94 is a filtering process, producing "smoothed" raw image sequence data, as indicated in a block 96.
  • the step carried out in block 94 uses morphological closing operators on the LVG data that tend to lighten relatively darker spots disposed in brighter regions of the image frames and morphological opening operators that tend to darken relatively brighter spots disposed in darker regions of the image frames.
  • each image frame is a two-dimensional (2D) plane in which each pixel projects vertically upward in the third dimension to a height that depends upon its gray scale value.
  • morphological closing and opening operators used in this step to remove noise and smooth the data is that they do not significantly alter the spatial location of sharp transitions between relatively light regions and dark regions.
  • A general primer discussing morphological opening and closing operators is provided in the paper entitled "Image Analysis Using Mathematical Morphology," by R.M. Haralick, S.R. Sternberg, and X. Zhuang, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 4, pp. 532-550 (July 1987).
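The smoothing of block 94 can be sketched with standard gray scale morphology; the flat 5x5 structuring element is a placeholder, since the actual structuring elements 86 are derived from the training data:

```python
from scipy.ndimage import grey_closing, grey_opening

def smooth_frame(frame, size=(5, 5)):
    """Gray scale morphological smoothing: opening darkens small
    bright spots, closing lightens small dark spots, while leaving
    sharp light/dark transitions essentially in place."""
    return grey_closing(grey_opening(frame, size=size), size=size)
```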
  • the resulting smoothed raw image sequence is then input to a block 98 which carries out a 12 band systolic phase normalization.
  • the effect of this normalization step is to compensate for the different numbers of image frames that can be captured during one cardiac cycle for different patients. It will be apparent that, at a fixed sample rate, a different number of image frames will be included in a cardiac systolic phase for one patient than for another who has a different cardiac rate.
  • a normalized 12 band image sequence is produced as indicated in a block 100, in which only 12 image frames are included, for a complete systolic phase of the patient.
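A sketch of the 12 band normalization follows; linear interpolation between neighboring frames is an assumption, the text specifying only that a fixed number of twelve bands results:

```python
import numpy as np

def normalize_to_12_bands(frames, n_bands=12):
    """Temporally resample a systolic sequence (first frame at ED,
    last at ES) of arbitrary length onto exactly 12 frames."""
    frames = np.asarray(frames, dtype=float)          # (T, H, W)
    t_src = np.linspace(0.0, 1.0, frames.shape[0])
    t_dst = np.linspace(0.0, 1.0, n_bands)
    idx = np.clip(np.searchsorted(t_src, t_dst, side='right') - 1,
                  0, frames.shape[0] - 2)
    w = (t_dst - t_src[idx]) / (t_src[idx + 1] - t_src[idx])
    return (1.0 - w)[:, None, None] * frames[idx] \
        + w[:, None, None] * frames[idx + 1]
```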
  • a CDF gray scale normalization step is applied to the normalized 12 band image sequence data.
  • This step has the effect of spreading the gray scale data for each image based upon a histogram produced for the gray scale data in all 12 images of the normalized 12 band image sequence and has the effect of increasing the range of the gray scale values of these image frames over much more of the available 0 - 255 range.
  • the result of the step carried out in block 102 is a 12 frame CDF sequence, as indicated in block 104.
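The CDF normalization amounts to histogram equalization driven by the pooled histogram of the 12 bands; a minimal sketch for 8-bit data follows:

```python
import numpy as np

def cdf_normalize(bands):
    """Map gray scale values through the pooled cumulative
    distribution function so the 12 band sequence uses much more of
    the available 0-255 range (assumes 8-bit input)."""
    bands = np.asarray(bands, dtype=np.uint8)
    hist = np.bincount(bands.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[bands]
```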
  • the sequence is then input to a block 106, which carries out a four-dimensional (4D) eigenspace projection using the gray scale eigenvectors developed from the training data.
  • an eigenspace projection is an inner multiplication of the gray scale vector and the eigenvector.
  • the result of the multiplication is called a principal component.
  • This multiplication can be repeated for each of the gray scale eigenvectors, and in this preferred embodiment, there are four gray scale eigenvectors. Therefore, the step carried out in block 106 implements a matrix multiplication, producing an orthogonal projection of each pixel onto a new space having four dimensions.
  • the result is a 4D principal component image 108. Referring back to FIGURE 3, it will be noted that the 4D principal component image is input to a block 110 in which an initial ED/ES region classifier step is applied to the image.
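A sketch of the eigenspace projection of block 106 follows; it treats the four gray scale eigenvectors 82 as the columns of a 12x4 matrix, which is an assumption about their layout:

```python
import numpy as np

def eigenspace_projection(cdf_sequence, eigenvectors):
    """Project each pixel's 12-band gray scale vector onto four
    eigenvectors (inner products), producing a 4D principal
    component image."""
    seq = np.asarray(cdf_sequence, dtype=float)   # (12, H, W)
    pixels = seq.reshape(12, -1).T                # (H*W, 12)
    pcs = pixels @ eigenvectors                   # (H*W, 4)
    return pcs.T.reshape((4,) + seq.shape[1:])    # (4, H, W)
```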
  • In this step, a user is required to enter the locations of two ED aortic valve points and an apex point. The locations of these points are manually determined by observation of the ED image frame of the patient.
  • Other inputs to this step include an ED gain matrix 114, data in a lookup table (LUT) 116, and a prior probability image 120, the LUT and prior probability image being derived from the training data.
  • the classifier step is implemented only for a region of interest in which the left ventricle is disposed, as indicated by a block 118. Details of the procedure carried out in block 110 are shown in FIGURE 5.
  • the user entered ED aortic valve points and the apex in block 112 are input along with prior probability image 120 to a procedure block 130 that provides for aligning the aortic valve and long axis of the prior probability image with that determined from the user-entered ED aortic valve points and apex.
  • the prior related methods provided only for aligning the prior probability image based upon the apex and aortic valve, or alternatively, attempting to automatically determine the location of the aortic valve plane for use as a reference in the alignment step.
  • the present invention requires that the user specify the aortic valve points and the apex of the left ventricle at ED to facilitate aligning the aortic valve angle and long axis of the patient's heart at ED with the corresponding elements in the prior probability image.
  • the result of the step carried out in block 130 is an aligned prior probability image 132, which is input to a block 134 to carry out a matrix multiply operation.
  • Region of interest 118 will typically be disposed proximate the central portion of each image frame. However, the region of interest can be more discretely specified to minimize the amount of background involved in processing the data included in the images.
  • the 4D principal component image and region of interest are thus input to a block 122 that automatically selects pixels inside the region of interest. This step yields 4D principal components that are inside the region of interest, as indicated in a block 124.
  • LUT 116 is generated from ground truth class images derived from the training data, and from a selected number of gray scale principal components and will have a specified size that is preferably optimized.
  • Each entry in the LUT includes three probabilities, including the probability that a pixel will have a quantized feature vector given that the true class is the background, the ED/ES (i.e., ED-not-ES) region, or the ES region.
  • the background class includes all of the area outside of the boundary of the LV, while the ED/ES class represents all of the area that is inside the ED boundary but not within the ES boundary. Finally, the ES class represents the area that is inside the ES boundary.
  • the LUT is created using a non-parametric relationship that quantizes the measurement space into rectangular hyperboxes; the number of hyperboxes is the optimized LUT size, m, and a smoothing function parameter, k-NN, is also preferably optimized.
  • the quantization is determined so that the marginal distribution of the quantized feature vector is approximately uniform.
  • the class conditional probabilities in each entry in the LUT are initially estimated by simply counting how many times the quantized feature vector falls into the associated hyperbox. These initial counts are then normalized by the total number of feature vectors in the training data for the class. The initial probability data are then smoothed by a k-NN technique to produce the final estimates.
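The LUT construction can be sketched as below; the per-dimension quantile quantization is one way to make the quantized marginals roughly uniform, and the final k-NN smoothing of the estimates is omitted:

```python
import numpy as np

def build_lut(features, labels, bins_per_dim=8, n_classes=3):
    """Count quantized 4D feature vectors per (hyperbox, class) and
    normalize by each class's total, giving the class conditional
    probability stored in every LUT entry."""
    features = np.asarray(features, dtype=float)       # (N, 4)
    n_dims = features.shape[1]
    edges = [np.quantile(features[:, d],
                         np.linspace(0, 1, bins_per_dim + 1)[1:-1])
             for d in range(n_dims)]
    idx = np.stack([np.digitize(features[:, d], edges[d])
                    for d in range(n_dims)], axis=1)   # (N, 4)
    lut = np.zeros((bins_per_dim,) * n_dims + (n_classes,))
    for box, c in zip(idx, labels):
        lut[tuple(box)][c] += 1.0
    totals = np.bincount(labels, minlength=n_classes).astype(float)
    return lut / np.maximum(totals, 1.0), edges
```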
  • a block 126 indicates that the information contained in the LUT is accessed in processing each of the 4D principal components, which are the feature vector inside the region of interest to determine a class conditional probability, as indicated in a block 128.
  • Each element of the 4D principal components is quantized and addressed in the corresponding dimension of the 4D LUT, thereby determining the location of the hyperbox to which the 4D feature vector belongs in the LUT.
  • the class conditional probability is an indication of the probability that each pixel within the region of interest is in a specific one of the three classes noted above.
  • the class conditional probability is input to block 134, which provides for implementing a matrix multiplication of the aligned prior probability image in block 132 by the class conditional probability.
  • the result of this matrix multiplication is a posterior class probability at each pixel location, as noted in a block 136.
  • the posterior class probability at a pixel location is the probability that the pixel's true class is C, given the observed feature vector of the pixel and given the pixel location.
  • the posterior class probability at each pixel location is input to a maximal gain decision-making step, as noted in a block 138.
  • ED gain matrix 114 which includes nine optimized numbers that provide a weighting for determining the class by adjusting the posterior class probability in the maximal gain decision-making step, is input to this step, producing an ED/ES pixel class image, as indicated in a block 140.
  • each pixel is labeled as being in one of the three classes noted above, the class selected having been assigned based upon the maximum probability that the pixel was in that class determined in block 138.
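Combining the aligned prior, the class conditional probability, and the gain matrix can be sketched as follows; the (H, W, 3) image layouts are assumptions:

```python
import numpy as np

def maximal_gain_decision(prior, cond_prob, gain_matrix):
    """Form the (unnormalized) posterior class probability at each
    pixel as prior * conditional, then label each pixel with the
    class whose row of the 3x3 gain matrix maximizes the expected
    gain sum_j G[c, j] * posterior[j]."""
    posterior = prior * cond_prob                       # (H, W, 3)
    gains = np.einsum('cj,hwj->hwc', gain_matrix, posterior)
    return np.argmax(gains, axis=-1)                    # (H, W)
```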
  • the ED/ES pixel class image in block 140 is input to a process block 142 in which an ED region post-processing and boundary extraction procedure is carried out, based upon structuring elements provided in a block 144.
  • the ED/ES pixel class image in block 140 is input to a procedure block 146 that provides for the grouping of connected components (grouping of pixels assigned to the same class and lying within a contiguous area) and the filling of any holes within the grouped components.
  • a procedure block 146 that provides for the grouping of connected components (grouping of pixels assigned to the same class and lying within a contiguous area) and the filling of any holes within the grouped components.
  • all pixels that have been assigned a common class and are connected together to form part of a region for that class are grouped together. Any pixels that are not part of the class but are encompassed within the region of the group are assigned to the same class as the majority of the other pixels in the group so that the small holes in the group are eliminated.
  • This step yields the ED and ES class region image, as indicated in a block 148.
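A sketch of this grouping and hole filling, using standard connected component labeling, is given below; keeping only the largest component per class is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes, label

def largest_filled_region(class_image, target_class):
    """Group connected pixels of one class, keep the largest group,
    and fill any holes enclosed within it."""
    mask = class_image == target_class
    labeled, n = label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                        # ignore the background label
    return binary_fill_holes(labeled == np.argmax(sizes))
```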
  • This image is then input to a procedure block 150 that implements ED and ES pixel location checking.
  • This step ensures that pixels assigned to the ES class are disposed within the boundary limits of the pixels assigned to the ED class and if not, modifies the pixel assigned class to ensure consistency with the logical expectation for the pixel disposition.
  • the result is a refined ED class region image as indicated in a block 152.
  • structuring elements from block 144 are applied in a step indicated in a procedure block 154 that implements the binary morphological opening and closing smoothing of the refined ED class region image.
  • the morphological opening and closing tends to smooth out a region boundary, eliminating its "peninsulas" and "inlets."
  • the result of this step is a smoothed ED region, as indicated in a block 156.
  • the next procedure, which is carried out in a block 158, is to trace the boundary of the smoothed ED region, producing a 100-point classifier ED boundary as indicated in a block 160.
  • the step carried out in block 158 identifies 100 spaced-apart points along the periphery of the smoothed ED region that comprise the classifier ED boundary.
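The boundary tracing of block 158 can be sketched with a standard contour tracer followed by arc length resampling; the use of skimage's find_contours is an implementation assumption:

```python
import numpy as np
from skimage import measure

def classifier_boundary(region_mask, n_points=100):
    """Trace the boundary of a smoothed binary region and resample
    it to n_points spaced evenly by arc length."""
    contours = measure.find_contours(region_mask.astype(float), 0.5)
    contour = max(contours, key=len)               # outer boundary
    steps = np.diff(contour, axis=0)
    arc = np.concatenate(
        [[0.0], np.cumsum(np.hypot(steps[:, 0], steps[:, 1]))])
    targets = np.linspace(0.0, arc[-1], n_points, endpoint=False)
    rows = np.interp(targets, arc, contour[:, 0])
    cols = np.interp(targets, arc, contour[:, 1])
    return np.stack([rows, cols], axis=1)          # (n_points, 2)
```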
  • the 100-point classifier ED boundary is input to an ED boundary regression process in a block 162 and also serves as an input to a rejection classifier procedure in a block 176 (see FIGURE 3).
  • Also input to the ED boundary regression step are the user-entered ED aortic valve points and apex from block 112 and the ED boundary eigenvectors (P1), as indicated in a block 166. Details of the steps carried out in the ED boundary regression process are shown in FIGURE 7. As shown therein, the ED boundary eigenvectors (P1), which are obtained from a second moment matrix of the manually traced boundaries in the training data, are applied to the 100-point classifier ED boundary from block 160 to produce a projected classifier boundary, as indicated in a block 170. The projection of the 100-point classifier ED boundary using the eigenvectors is carried out in a block 168.
  • the projected classifier boundary from block 170 is input to a procedure block 172 that carries out an ED boundary regression using an ED three-point full quadratic augmentation.
  • The quadratic augmentation appends 28 components to the projected classifier boundary vector: one component has the value 1, six components are the coordinates of the three user-entered points, and 21 components are the products produced by multiplying each coordinate value by itself and by the other coordinate values.
  • the ED boundary regression coefficients from block 164 are applied and used in connection with the user-entered ED aortic valve points and apex that quadratically augment the projected classifier boundary.
  • the ED boundary regression coefficients have previously been determined from the training data and will thus be available to carry out the step in block 172.
  • the result of this regression is a 100-point regressed ED boundary as indicated as a block 174.
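The augmentation and regression can be sketched as follows; the exact vector layouts and coefficient shapes are assumptions, only the 1 + 6 + 21 = 28 augmentation components being stated above:

```python
import numpy as np

def full_quadratic_augmentation(three_points_xy):
    """Build the 28 augmentation components from the six coordinates
    of the two aortic valve points and the apex: a constant 1, the
    six coordinates, and the 21 degree-two monomials."""
    v = np.asarray(three_points_xy, dtype=float).ravel()    # (6,)
    quad = [v[i] * v[j] for i in range(6) for j in range(i, 6)]
    return np.concatenate([[1.0], v, quad])                 # (28,)

def regress_boundary(projected_boundary, augmentation, coefficients):
    """Apply training-data regression coefficients to the projected
    classifier boundary plus the augmentation, yielding the 200
    coordinates of a 100-point regressed boundary."""
    x = np.concatenate([projected_boundary, augmentation])
    return (coefficients @ x).reshape(100, 2)
```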
  • the 100-point regressed ED boundary in block 174 is provided as an input to the rejection classifier process step in a block 176. Also input to the rejection classifier process are rejection classifier coefficients 178, which will previously have been determined from the training data.
  • the rejection coefficient vector is a vector of weights that is used in the rejection classifier.
  • the 100-point regressed ED boundary is also input to an ES region classifier processing step in a block 180. Before discussing the details of the rejection classifier step, it will first be helpful to discuss how the other inputs relating to the ES boundary are obtained.
  • the ES region classifier step in block 180 is carried out using inputs from the 4D principal component image in block 108, the prior probability image in block 120, the LUT in block 116, user-entered ES aortic valve points and apex from a block 182, and both a distance histogram in a block 194 and an ES gain matrix in a block 204.
  • FIGURE 8 Details of the ES region classifier process are illustrated in FIGURE 8.
  • user-entered ES aortic valve points and apex from block 182 and the prior probability image derived from the training data are input to a process block 184 that provides for aligning the aortic valve angle and long axis of the prior probability image at ES with that in the image of the patient's heart at ES based upon the user-entered data from block 182.
  • This step is analogous to that carried out for the ED image in block 110, as discussed above.
  • the result of this alignment is an aligned prior probability image as indicated in a block 186.
  • This image is input to a block 188 in which a step is carried out for refining the aligned prior probability image based on distance considerations.
  • Also input to block 188 is a distance transform that is carried out in a block 190, based upon the regressed ED boundary from block 174.
  • The distance transform defines a distance image with respect to the ED boundary, as indicated in a block 192.
  • the distance histogram provides a probability for each of a plurality of distances between the ED boundary and the ES boundary.
  • the procedure in block 188 refines the aligned prior probability image as a function of distance, to produce the refined aligned prior probability image in a block 196.
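One plausible reading of this refinement is sketched below: the ES class prior at each pixel is reweighted by the training-data probability of that pixel's distance from the regressed ED boundary, and the three class probabilities are then renormalized. The binning and mask conventions are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def refine_prior_by_distance(aligned_prior, inside_ed_mask,
                             distance_hist, bin_width, es_class=2):
    """Refine an aligned (H, W, 3) prior using the ED/ES distance
    histogram; `inside_ed_mask` is the filled region enclosed by
    the regressed ED boundary."""
    dist = distance_transform_edt(inside_ed_mask)  # pixels to boundary
    bins = np.minimum((dist / bin_width).astype(int),
                      len(distance_hist) - 1)
    refined = aligned_prior.copy()
    refined[..., es_class] *= distance_hist[bins]
    norm = np.maximum(refined.sum(axis=-1, keepdims=True), 1e-12)
    return refined / norm
```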
  • the refined aligned prior probability image in block 196 is input to a matrix multiply step in a block 198.
  • Block 198 also receives an input from the class conditional probability in a block 128, which is determined as previously described above.
  • the matrix multiply operation carried out in block 198 produces a posterior class probability at each pixel location, as indicated in a block 200.
  • The posterior class probability indicates the probability of each pixel in the image being in each of the three classes.
  • the probability data are input to a process for carrying out a maximal gain decision in a block 202, which uses an ES gain matrix that includes nine optimized numbers indicating the weighting for determining the class when making the maximal gain decision.
  • the result of the operation in block 202 is the ES pixel class image indicated in a block 206. Referring back to FIGURE 3, it will be noted that the ES pixel class image is input to a procedure for processing the ES region and extracting its boundary, which is carried out in a block 208 using structuring elements from a block 216.
  • the ES pixel class image from block 206 is input to a procedure for connecting the components by grouping, and filling in holes within the groups, as set forth in a block 210.
  • This step is similar to that carried out in connection with the ED image frame data.
  • the result of the step is an ES class region image, as indicated in a block 212.
  • This image is input to a process in a block 214 that provides for applying a binary morphological opening and closing process using structuring elements 216, which were obtained from the training data, to produce a smoothed ES region in a block 218.
  • this step acts as a filter to smooth a region boundary, removing its "peninsulas" and "inlets."
  • the smoothed ES region in block 218 is input to a procedure for tracing the boundary of the ES region in a block 220, yielding 100 spaced-apart points around a classifier ES boundary, as indicated in a block 222.
  • ES boundary eigenvectors (P2) in a block 226 are input to a procedure for projecting the 100-point classifier ES boundary in accord with the eigenvectors.
  • the result is a projected classifier ES boundary, as indicated in a block 230.
  • the user-entered ES aortic valve points and apex position in block 182 are input to a full quadratic augmentation process in a block 240 to produce an ES augmentation in a block 242.
  • the user-entered ED aortic valve points and apex in a block 234 are input to a partial quadratic augmentation process in a block 236, producing an ED augmentation in a block 238.
  • the ES augmentation in block 242 and the ED augmentation in block 238 are input to a block 232, along with ES boundary regression coefficients determined from the training data in a block 244 and the projected classifier ES boundary, so that the 100 point ES regressed boundary can be determined by the regression with ES full quadratic and ED partial quadratic augmentation applied to the projected classifier ES boundary, as indicated in a block 246.
  • FIGURES 11 through 13 show how these various parameters are developed.
  • The 100-point classifier ED boundary 160 is input to a process block 248, which computes a classifier ED area in a block 250.
  • the 100-point classifier ED boundary is also used in a block 252, to compute a classifier ED volume in a block 254.
  • the 100-point classifier ES boundary in block 222 is input to blocks 258 and 262 to respectively compute a classifier ES area in a block 260 and a classifier ES volume in a block 264.
  • The classifier ED and ES volumes are input to a block 256, which computes the stroke volume for the left ventricle, yielding a classifier stroke volume in a block 270.
  • Both the classifier ED and ES volumes are also input to a block 266, which computes the ejection fraction, yielding a classifier ejection fraction in a block 268.
  • the 100-point regressed ED boundary in block 174 is input to blocks 272 and 276, which respectively determine a regressed ED area in a block 274 and a regressed ED volume in a block 278.
  • the 100-point regressed ES boundary is input to blocks 282 and 286 for computing a regressed ES area in a block 284 and a regressed ES volume in a block 288, respectively.
  • the regressed ED volume in block 278 and the regressed ES volume in block 288 are input to both blocks 280 and 290, which respectively provide for computing a regressed stroke volume in a block 294 and a regressed ejection fraction in a block 292.
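The volume-derived quantities reduce to two simple formulas, shown here with an illustrative example in the comment:

```python
def stroke_volume_and_ejection_fraction(edv_ml, esv_ml):
    """Stroke volume SV = EDV - ESV; ejection fraction EF = SV / EDV.
    For example, EDV = 120 ml and ESV = 50 ml give SV = 70 ml and
    EF = 0.583."""
    sv = edv_ml - esv_ml
    return sv, sv / edv_ml
```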
  • The 100-point classifier ED boundary in block 160 and the 100-point regressed ED boundary in block 174 are input to a block 296, which provides for computing a boundary error based upon the difference between the classifier ED boundary and the regressed ED boundary. The ED boundary error in a block 298 is produced by this step.
  • the 100-point classifier ES boundary in block 222 and the 100-point regressed ES boundary in block 246 are input to a block 300, which computes a boundary error between them, yielding an ES boundary error in a block 302.
  • all of the parameters necessary to implement the rejection classifier step in block 176 (FIGURE 3) are available. Details of this step are shown in FIGURE 14.
  • FIGURES 11 through 13 have already been discussed above in regard to how the parameters input to the rejection classifier of FIGURE 14 are determined: an ED boundary error 298, classifier ED and ES areas 250 and 260, classifier ED and ES volumes 254 and 264, and a classifier stroke volume 270 and classifier ejection fraction 268. Similarly, the corresponding regressed parameters for ED and ES, including area, volume, and the regressed stroke volume and regressed ejection fraction, all determined as discussed above, are input to a procedure block 308.
  • the mean absolute boundary difference between the classifier ED boundary and the regressed ED boundary in block 298 and the mean absolute boundary difference between the classifier ES boundary and the regressed ES boundary in block 302 are also input to procedure block 308, which provides for forming a rejection classifier feature vector, as indicated in a block 310.
  • a dot product of the rejection classifier feature vector and the rejection classifier coefficients in block 178 is carried out in a block 312 and the result is compared to a rejection cost threshold t2 in a block 304.
  • the rejection cost threshold t2 is optimized to ensure that the ED and ES boundaries that have been determined are automatically accepted or rejected in a manner that minimizes the rejection of boundaries that should not be rejected and minimizes the acceptance of boundaries that should not be accepted.
  • the acceptance or rejection of the automated boundary delineation provided by the procedure is thus achieved, as indicated in a block 306.
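The acceptance test itself is a single inner product compared against the optimized threshold t2; the direction of the inequality in the sketch below is an assumption:

```python
import numpy as np

def accept_boundaries(feature_vector, rejection_coefficients, t2):
    """Dot the rejection classifier feature vector (areas, volumes,
    stroke volumes, ejection fractions, and mean absolute boundary
    differences) with the trained coefficient vector and compare
    against the rejection cost threshold t2."""
    score = float(np.dot(feature_vector, rejection_coefficients))
    return score <= t2          # True: accept; False: flag for review
```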
  • If the boundary is rejected, a human operator is alerted by a flag appearing on the display. The operator then has the option of applying human expertise in accepting or rejecting the boundary, or alternatively of modifying the boundary that was automatically determined and rejected, to make the boundary acceptable. It is contemplated that the need for operator intervention should be minimal, given the relatively high accuracy of the present invention in determining the boundary of an organ such as the left ventricle.

Abstract

An automated method for evaluating image data taken over a sequence of image frames to determine a contour of a left ventricle (LV). Also provided are training data determined from manually-drawn contours of hearts of other individuals. End diastole (ED) and end systole (ES) aortic valve points and an apex point determined by viewing the sequence of images for the heart being analyzed are entered, and an initial ED/ES region classifier (110) is developed for the image data using a probability table (120). An ED/ES pixel class image (140) and ES pixel class image (206) are produced, yielding 100-point classifier ED and ES boundaries (160,222). Regression coefficients (164,244) derived from the training data are applied to determine a 100-point regressed ED boundary (162) and a 100-point regressed ES boundary (224), which can be used to calculate an ejection fraction for the heart.

Description

METHOD FOR DETERMINING THE CONTOUR OF AN IN VIVO ORGAN USING MULTIPLE IMAGE FRAMES OF THE ORGAN
Field of the Invention
The present invention generally pertains to a method for determining a boundary or contour of an internal organ based upon image data, and more specifically, is directed to a method for determining the contour of the organ based on processing image data for multiple image frames.
Background of the Invention
Contrast ventriculography is a procedure that is routinely performed in clinical practice during cardiac catheterization. Catheters must be intravascularly inserted within the heart, for example, to measure cardiac volume and/or flow rate. Ventriculograms are X-ray images that graphically represent the inner or endocardial surface of the ventricular chamber. These images are typically used to determine tracings of the endocardial boundary at end diastole (ED), when the heart is filled with blood, and at end systole (ES), when the heart is at the end of a contraction during the cardiac cycle. By manually tracing the contour or boundary of the endocardial surface of the heart at these two extremes in the cardiac cycle, a physician can determine the size and function of the left ventricle and can diagnose certain abnormalities or defects in the heart. Of the end systole and end diastole images, the former is perhaps the most useful in diagnosing cardiac abnormalities.
To produce a ventriculogram, a radio opaque contrast fluid is injected into the left ventricle (LV) of a patient's heart. An X-ray source is aligned with the heart, producing a projected image representing, in silhouette, the endocardial surface of the heart muscle (myocardium). The silhouette image of the LV is visible because of the contrast between the radio opaque fluid and other surrounding physiological structure. Manual delineation of the endocardial boundary by a trained medical practitioner is normally employed to determine the contour, but this procedure requires time and considerable training and experience to accomplish accurately. Alternatively, a medical practitioner can visually assess the ventriculogram image to estimate the endocardial contour, but such evaluation is subject to more interobserver variability and therefore less accuracy than a quantitative analysis. Clearly, an automated border detection technique that can produce more accurate results, in much less time than the manual evaluation, would be preferred.
Several automatic border detection algorithms have been developed to address the above-noted problem. In U.S. Patent No. 5,268,967, a number of different prior art methods are discussed for improving the definition with which images can be resolved to identify specific portions of the body. It is suggested by this reference that a histogram-based tone-scale transformation is a simple and effective way to adjust the contrast of an image, but the reference also indicates that other techniques must be employed to distinguish the desired foreground portion of an image from the background clutter and to distinguish the object in question from the foreground and background. After discussing what other researchers have done to achieve this goal and explaining the problems with these techniques, the patent discloses a method that can be applied to any digital radiographic input image. The method disclosed in the patent includes the steps of edge detection, block generation, block classification, block refinement, and bit map generation. More specifically, after the edges of the object are detected in the first step, the image is broken into a set of non-overlapping, contiguous blocks of pixels, which are classified into foreground, background, and object, on a block-by-block basis. The block classification step determines in which of ten possible states each block belongs using a set of clinically and empirically determined decision rules. By evaluating the fine structure within each block, the block classification is refined so that a two-valued or binary image is produced that functions as a template for any further image processing to be done on the image.
Another technique related to automated border detection is based upon identifying a gradient of the gray scale values comprising an image. In this prior art technique, a gray scale threshold gradient value is applied to process the gray scale image data of a ventriculogram in order to identify the boundary of the LV, and further processing may be employed to improve the accuracy with which the border is identified. Alternatively, it is suggested that landmarks or recognizable shapes or gray scale value combinations can be tracked over time to determine the direction and velocity of motion, which are represented as flow vectors. By analyzing the pattern of flow vectors, motion of the organ can be assessed. However, these flow vectors do not directly indicate the contour of the organ. Yet another technique that is sometimes employed to determine the contour of an organ is based on digital subtraction. A mask image is recorded prior to introduction of a radio opaque contrast substance into the organ. This mask image may contain radio opaque structures such as ribs and vertebrae, which tend to interfere with discerning the contour of the organ. After the radio opaque contrast substance is introduced into the organ and a second image is produced, the mask image is digitally subtracted from the second image, thereby removing the clutter in the second image that is not the organ in question. In practice, this technique is difficult to implement because registration between the mask image and the subsequent second image of the organ made perceptible by the radio opaque contrast substance is difficult to achieve. A variation of this technique employs time interval delay subtraction, wherein an image that was previously made close in time is subtracted from an image being analyzed, so that a difference image is produced that contains only the part of the organ that moved during the time interval between the two images. However, any part of the organ that does not move between the times that the two images were made cannot be delineated.
Morphological operators can also be employed to process image data in order to define the boundaries of objects. Such techniques are often more general in application, e.g., relating to artificial vision systems, and are therefore not constrained by physiological considerations.
A paper entitled "Medical Image Analysis using Model-Based Optimization" by James S. Duncan, Lawrence H. Staib, Thomas Birkhδlzer, Randall Owen, P. Anandan, and Isil Bosma (IEEE, 1990), suggests the use of mathematical models based on empirically determined data for analysis of diagnostic medical images. In the second example discussed in this paper, a parametric shape model with an image-derived measure of boundary strength is emnloved. Use of the emDirical data for the model imoroves its accuracv. but the.
analyzing the image, an automated method should employ as much information derived from the imaging as possible to delineate the surface of the organ. Further, the automated system should accomplish the task more efficiently and quickly than a human. Toward that end, it has become apparent that more information than can be obtained from a single image will improve the accuracy with which an automated technique can determine the contour of an organ. Analysis of more than one image can provide the additional information needed for an automated method to achieve greater accuracy and can provide the physician more information about the heart or other organ than current techniques.
Methods for automatically determining the boundary of the LV (or other organ) are disclosed in commonly assigned U.S. Patent Nos. 5,570,430 and 5,734,739 (a continuation-in-part of U.S. Patent No. 5,570,430), both having the same title as the present case. Because of their relevance to the present invention, the disclosures and drawings of these two patents are hereby specifically incorporated herein by reference. In the method disclosed in the first of these two patents, training data developed by manually evaluating the image data for the hearts of over 300 different individuals are employed to derive parameters used for classifying pixels in the images of a patient's heart to form an initial estimate of the region of the LV. Once the initial estimate of that region is made, it can be used to estimate a contour or border of the LV. However, several problems were encountered in applying automated techniques to determine the boundary of the LV in the method described in this first patent. One of the problems was due to inherent differences in the position and orientation of the heart within the image data being analyzed relative to that of hearts in the images comprising the training data. Such differences can affect the accuracy of the initial estimate of the LV region and determination of the contour at ES (and ED), since the differences introduce errors in the classification of pixels of the image data based upon the training data. Steps added in the second of these two patents included rotating and translating the training data so that the data are consistent in position and orientation to the LV of the heart being analyzed, prior to using the training data to classify pixels when producing the initial estimate of the LV region.
Another problem with the method disclosed in U.S. Patent No. 5,570,430 is an uncertainty in the automated detection of the border of the LV that arises in the inferior portion of the contour, adjacent to the patient's diaphragm. The gray scale values for the tissue comprising the diaphragm are, for some patients' images, very similar to the gray scale value of injected dye in the images being analyzed. The uncertainty in the automated detection of the border of the LV in the inferior region bordering the diaphragm typically causes the process to define a border that extends outwardly of its actual location, creating an error. Accordingly, U.S. Patent No. 5,734,739 describes steps for more accurately determining the contour of the LV in this region.
Although these problems were addressed in the second patent, it is now apparent that the method disclosed therein to determine the shape of a patient's LV can still produce unacceptable errors. The errors in determining the shape of the LV using the method described in the second patent become readily apparent when the shape is used for the determination of ejection fraction for the patient's LV. Based upon the shape of a patient's LV determined by the method taught in U.S. Patent No. 5,734,739, the average error in the ejection fraction was found to be in the range of 12-15%, which is unacceptably large.
Clearly, additional factors must be considered in the automated determination of the contour of a patient's heart from image data if the results are to be sufficiently accurate to enable the ejection fraction, volume, and other criteria to be determined within an acceptable error limit. For example, in determining the shape of a LV using image data, the previous approach disclosed in the second of the two commonly assigned patents required that each pixel in the data be assigned to one of 13 different classes. This requirement is computationally intensive and creates errors in the determination of the classes to which each pixel is assigned. Also, the previous approach estimated a class conditional probability for the pixel classification based on a mean/covariance matrix and aligned the class prior probability image as a function only of the aortic valve angle. This previous approach thus introduced errors in the estimate and in the alignment of the image. Clearly, it would be preferable to simplify both the pixel classification scheme and the estimate of class conditional probability and thereby improve the accuracy with which these steps are accomplished. Also, additional points should be used in aligning the prior probability image produced using the training data with the image of the patient's heart. It would further be desirable to condition the ES classifier on the ED boundary, to improve the accuracy with which the ES boundary is determined.
Another area for improving the technique disclosed in the two commonly assigned patents noted above involves the optimization of the various parameters used in the method so that the resulting ejection fraction error is minimized. There are a number of parameters that can be optimized prior to the application of the method for evaluating the shape of a patient's LV, and by using optimized parameters in the process, a substantial reduction in the error of the ejection fraction that is determined for each such patient should be achievable. To further reduce the errors in the determination of the organ shape, it would also be helpful to flag results of the automatic determination of organ contour that appear to be unreliable, to enable a trained human operator to intercede in determining the disposition of or changes in the flagged results.
In the previous technique, there were 12 feature dimensions for each image, which made non-parametric pixel classification impossible. It would be preferable to reduce the number of feature dimensions sufficiently to enable non-parametric pixel classification, which should improve the accuracy with which each pixel is classified, for example, as being within the ED (but outside the ES), or as being inside the ES, or part of the background. A related step would be the normalization of the gray scale in the image data using a cumulative distribution function, so that feature vectors can be separated between different classes.
Instead of down sampling boundary points, as is done in other prior art techniques, it is preferable to employ a method that preserves as much of the information contained in the original boundary of the imaged organ as possible. In addition, the ES boundary regression should be improved, for example, by using information about the ED boundary. By employing such steps, each of which contributes to more accurately determining the shape of the LV being imaged, it should be possible to reduce the average error in the ejection fraction calculated based upon this shape to within an acceptable limit, e.g., on the order of 3-5%.
Summary of the Invention
In accord with the present invention, a method is defined for automatically determining a contour of at least a portion of an organ in a patient, based upon digital image data of a region within the body of the patient in which the organ is disposed. The image data represent a sequence of image frames of the region made over an interval of time during which the organ has completed at least one cycle of a periodic movement. Each image frame of the sequence includes a plurality of pixels. In a preferred application of the invention, the organ is a heart and the method is applied to determine the contour of the LV. However, the invention can also be applied to determine the contour of other chambers in the heart and the contour of other organs in a patient's body. In this discussion and in the claims that follow, it will be understood that if the organ being imaged is the heart, the term "maximum volume" corresponds to the ED volume (and related boundary), while the term "minimum volume" corresponds to the ES volume (and related boundary) of the heart or a selected chamber of interest of the heart, such as the LV.
The first step of the method includes preprocessing and normalization of the image data to produce principal component images in which the image data are "smoothed" and normalized in regard to pixel size and gray scale. Unlike the prior art, the present invention provides for classifying pixels represented in the principal component images as being in one of three classes. When the organ being imaged is the heart, the classification step produces pixel class images for the ED/ES region (i.e., all pixels within the ED boundary but not within the ES boundary) and for the ES region (i.e., all pixels within the ES boundary); the third class is assigned to the pixels in the background. The maximum volume and minimum volume boundaries for the organ (i.e., the ED and ES boundaries, if the organ is the heart) are extracted from the pixel class images, using parameters derived from training data previously produced through manual evaluation of a plurality of image frames over at least one cycle of periodic movement in each of a plurality of organs in other individuals. These two boundaries define the contour of the organ comprising a region of interest in the image data.
The step of normalizing preferably includes normalizing each pixel in regard to dimensional size, to produce square pixel image data. A gray scale morphological opening and closing process is applied to reduce noise and smooth the pixel image data. Also, an image sample rate used to acquire the image data is normalized over at least one cardiac cycle, to produce a predefined number of bands of image data. A cumulative distribution function (CDF) is employed to normalize the gray scale in each of the predefined number of bands of image data, producing a CDF sequence of image frames, each of which includes a plurality of pixels. Further, each pixel in the CDF sequence of image frames is projected onto a new space having a predefined number of dimensions, using parameters derived from the training data, to produce the principal component images.
The step of non-parametrically classifying includes the step of aligning a reference axis and an anatomical feature in the organ of the patient with a corresponding axis and corresponding anatomical feature in prior probability images that are derived from the training data. The prior probability images, which are obtained by aligning images in the training data into a common coordinate system, thus indicate the probability of each class at each pixel location. When the organ is a heart, this step produces aligned prior probability images for the ED-not-ES and ES regions of the heart of a patient during the cardiac cycle. The principal component images of pixels within a region of interest are selected, and based upon a probability for each class, a conditional class probability for each of the pixels is determined. The conditional class probability is the likelihood of a pixel belonging to a class given its principal component images. Using the aligned prior probability image and the conditional class probability for each pixel, a probability for each class at each pixel location is determined to initially classify the pixels in a region within the maximum volume boundary but not within the minimum volume boundary (i.e., the ED-not-ES region, if the organ is the heart). Similarly, a probability is determined for each class at each pixel location to initially classify the pixels in a region within the minimum volume boundary using the aligned prior probability image, the conditional class probability for each pixel, and regression data representing the maximum volume boundary. The regression data representing the maximum volume boundary are combined with probabilities associated with distances between the maximum volume boundary and the minimum volume boundary to produce a refined aligned prior probability image for use with the conditional class probability in the step of determining a minimum volume boundary pixel class image.
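For illustration only (this sketch is not part of the patent disclosure), the per-pixel combination just described amounts to multiplying the aligned prior by the class conditional probability and normalizing; the array layout, with the three classes (background, ED-not-ES, ES) stacked on the last axis, is an assumption.

```python
import numpy as np

def posterior_class_probability(prior, likelihood):
    """Combine an aligned prior probability image with per-pixel class
    conditional probabilities, both of assumed shape [rows, cols, 3], into
    posterior class probabilities at each pixel location."""
    post = prior * likelihood                       # elementwise Bayes numerator
    norm = post.sum(axis=-1, keepdims=True)         # normalize over the 3 classes
    return np.divide(post, norm, out=np.zeros_like(post), where=norm > 0)
```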
The step of extracting the maximum volume boundary preferably comprises the steps of forming a maximum volume boundary class region image that includes all pixels classified as being either within the maximum volume boundary and not within the minimum volume boundary, or within the minimum volume boundary. Any pixel that is inconsistent with the maximum volume boundary class region image is deleted, producing a refined maximum volume boundary class region image. The refined maximum volume boundary class region image is then smoothed, and the maximum volume boundary class region is processed to extract a classifier maximum volume boundary having a predefined number of points that represent the maximum volume boundary. The step of extracting the minimum volume boundary includes the steps of forming a minimum volume boundary class region image that includes all pixels classified within the minimum volume boundary, and smoothing the minimum volume boundary class region image. The minimum volume boundary class region is processed to extract a classifier minimum volume boundary having a predefined number of points that represent the minimum volume boundary.
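A minimal sketch of this kind of region post-processing, assuming the class image is a 2D integer label array; scipy.ndimage supplies the connected component grouping, hole filling, and binary smoothing, while the consistency check between the two boundaries is omitted here.

```python
import numpy as np
from scipy import ndimage

def extract_class_region(class_image, target_class, structure):
    """Group connected components of `target_class`, keep the largest one,
    fill its holes, and smooth it with a binary morphological opening and
    closing; `structure` stands in for the structuring elements that the
    method derives from the training data."""
    mask = class_image == target_class
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)
    filled = ndimage.binary_fill_holes(largest)
    opened = ndimage.binary_opening(filled, structure)
    return ndimage.binary_closing(opened, structure)
```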
Regression coefficients derived from the training data are applied to the maximum volume boundary and to the minimum volume boundary to determine a regressed maximum volume boundary and a regressed minimum volume boundary. In addition, a plurality of parameters are determined for the organ, based upon the regressed maximum volume boundary, the regressed minimum volume boundary, and classifier boundaries obtained from the training data. A rejection classifier feature vector is derived from the plurality of parameters and evaluated using rejection classifier coefficients determined from the training data. A rejection cost threshold is employed to determine if the contour of the organ thus determined is accepted or rejected. If the contour is rejected, it is flagged for manual review by an operator. This method also includes the step of employing at least one optimized parameter in determining the contour of the organ. Preferably, the organ comprises a heart of the patient, and the region of interest preferably comprises the LV.
Brief Description of the Drawing Figures
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
FIGURE 1 is a schematic diagram illustrating equipment used to produce X-ray images of an internal organ and the apparatus used for processing the images to determine the contour of the organ in accordance with the present invention;
FIGURE 2 is a cross-sectional view of a human heart, illustrating the shape or contour of the LV, which the present invention can automatically determine;
FIGURE 3 is a flow chart showing an overview of the process steps employed to determine the contour of the LV at multiple times during a cardiac cycle;
FIGURE 4 is a flow chart showing the preprocessing and normalization steps applied to the raw image data for the LV;
FIGURE 5 is a flow chart illustrating the steps for determining the initial ED/ES region classification;
FIGURE 6 is a flow chart showing the logic for the ED post processing and for the extraction of the ED boundary;
FIGURE 7 is a flow chart showing the logical steps applied to determine the ED regressed boundary;
FIGURE 8 is a flow chart illustrating the logical steps employed in producing the ES region classification image;
FIGURE 9 is a flow chart illustrating details of the steps for post processing the ES region and for extracting the ES boundary;
FIGURE 10 is a flow chart showing the steps for determining the regressed ES boundary;
FIGURE 11 is a flow chart showing the steps for determining the classifier stroke volume and classifier ejection fraction;
FIGURE 12 is a flow chart illustrating the steps for determining the regressed stroke volume and regressed ejection fraction;
FIGURE 13 is a flow chart showing the steps for determining the ED boundary error and ES boundary error; and
FIGURE 14 is a flow chart illustrating the logic employed when automatically accepting or rejecting a boundary determined by the present invention.
Description of the Preferred Embodiment
The Imaging Apparatus
A generally conventional X-ray imaging facility 10 is shown in FIGURE 1. Also shown is the equipment necessary to process the X-ray images produced by the apparatus in accordance with the present invention, so that a contour of an organ can be determined and displayed. In X-ray imaging facility 10, an X-ray source 12 is energized with electrical current supplied by a control 14, which determines the level of the current, the voltage applied to the X-ray source, and the time interval during which the electrical current is provided. In response to the signals supplied by control 14, X-ray source 12 produces a beam 18 of X-rays that pass through the chest of a patient 20 and into a detector 16. X-ray source 12 and detector 16 are aligned along a common longitudinal axis, so that beam 18 passes through a region of interest in the chest of patient 20. The organ that is being imaged is generally disposed within the center of the region of interest and within the center of beam 18. After passing through the chest of patient 20, beam 18 enters an image intensifier 22 in detector 16, which converts the beam of X-rays into a corresponding optical image.
To more clearly delineate the organ that is disposed in the region of interest from surrounding soft tissue, a radio opaque contrast substance, such as an iodine compound, is injected into the organ to absorb or block more of the X-ray beam energy than in the surrounding soft tissue. As a result, the interior of the organ in the image produced by image intensifier 22 appears relatively brighter, compared to the surrounding background. Preferably, a partially silvered mirror 24 is disposed along the longitudinal axis of detector 16, so that a portion of the light comprising the image produced by image intensifier 22 is reflected in a direction transverse to the longitudinal axis of the detector and into the lens of a cine camera 26. The cine camera 26 records the images onto cine film 23. The remainder of the light comprising the image is transmitted through partially silvered mirror 24 along the longitudinal axis of detector 16 and into the lens of a video camera 28.
In the illustrated embodiment, a cine projector 25 located outside the X-ray imaging facility is used to project the cine film images into a video camera 29 to produce an analog signal. The analog signal comprises a voltage for each pixel of the image, and the level of the voltage is indicative of a gray scale value or intensity level at the pixel. An analog-to-digital converter (ADC) 30 converts the voltage representing the gray scale value for each pixel to a corresponding digital value. It should be noted that digital video cameras are able to directly produce a digital output signal, and if such a video camera is used, cine camera 26, cine film 23, cine projector 25, video camera 29, and ADC 30 can be eliminated.
In a preferred embodiment of the present invention, the gray scale level can range between 0 and 255 for any pixel in the digital image data (but typically, the darkest and brightest pixels will lie within a much smaller range). It should also be noted that the digital image data produced by the X-ray system at each spaced-apart interval of time are referred to throughout this specification and in the claims that follow as an "image frame." Further, in this preferred embodiment, a plurality of image frames depicting the organ at spaced-apart times are produced by setting control 14 so that X-ray source 12 is repetitively energized for brief periods of time. In the exemplary application of the present invention discussed below, the region of interest imaged by beam 18 includes the heart of patient 20, and more specifically, the LV of the heart. By processing a sequence of digital image frames of the LV that have been made over at least one cardiac cycle, the preferred embodiment of the present invention automatically determines the contour of the endocardial surface of the LV at multiple times during the cardiac cycle. In a typical application of the present invention, it will be most useful to determine the contour at ED and at ES, since the contours of the LV at these points in the cardiac cycle are usable to determine an ejection fraction and other criteria indicative of a condition of the heart in patient 20. As noted above in the Background of the Invention, the previous methods disclosed in the commonly assigned patents for automatically determining the contour of the LV at ED and ES have too often yielded an unacceptable error in the determination of ejection fraction for a patient's heart, compared to the results obtained using the ED and ES contours determined manually by highly skilled and experienced medical personnel. However, the present invention extends the capabilities of these previous methods to achieve automated determination of ED and ES contours with mean absolute boundary errors within 1.8 mm, yielding a mean absolute error in the ejection fraction (compared to the ejection fraction determined with ED and ES boundaries that have been determined by the manual method) of 7%.
In the embodiment of the present invention shown in FIGURE 1, the digital image data for each of the image frames are stored in an image data storage device 32, which preferably comprises a large capacity hard drive or other non-volatile storage media. Image data storage device 32 is coupled bi-directionally to a processor 34, which preferably comprises a desktop personal computer or a workstation. A keyboard 36 and a mouse 38 (or other pointing device) are coupled to processor 34 to enable the operator to input data, such as identification of the locations of the ED and ES aortic valve points (or axis) and the apex, and/or commands used for controlling the software running on processor 34. This software is used to implement the method of the present invention, which enables the digital image data stored as image frames in image data storage device 32 to be processed automatically to determine a contour of the LV at different times during the cardiac cycle. The contour of the LV in an image frame that is thus automatically determined by this method is presented to the user on a display 40, which is coupled to processor 34. In addition, data defining the contour determined by processor 34 can optionally be stored in image data storage device 32, for later analysis and evaluation, and the contours that are determined will optionally be employed to compute criteria of interest, such as the ejection fraction of the heart.
Object of the Method
Referring now to FIGURE 2, a portion of a human heart 60 is shown in a cross-sectional view corresponding to a projection angle typically used for recording ventriculograms; the heart has a shape defined by its outer surface 62. Prior to imaging an LV 64 of heart 60, a radio opaque contrast material is preferably injected into the LV, so that the plurality of image frames produced using the X-ray apparatus include a relatively bright area within LV 64. However, those of ordinary skill in the art will appreciate that in typical X-ray images of the LV, the bright silhouette bounded by the contour of an endocardium (or inner surface) 66 of LV 64 is not clearly delineated. The present method processes the image frames produced with the X-ray source to obtain a contour for at least the ED and ES that closely approximates the endocardium of the patient's LV at those points in the cardiac cycle.
It should be noted that although image frames produced using an X-ray source are disclosed as the type of image data processed in the preferred embodiment of this invention, the present method is also applicable to processing other types of image data from other sources, including ultrasound and nuclear magnetic resonance image data. The image frames produced by any of these techniques are difficult to interpret to determine the contour of the LV, since the contour of the LV (or of any other organ being examined using these imaging techniques) is typically not clearly delineated. With regard to the LV as the exemplary organ of interest, the lower left portion of the contour, i.e., an inferior border 71 disposed left of an apex 72, characteristically appears with much less clarity than the upper portion of the LV. The upper portion of the LV is disposed adjacent aortic valve 68, which opens into aorta 70, and mitral valve 76, which opens into part of left atrium 74. The poor contrast of the inferior portion of the LV contour is primarily due to the proximity of the diaphragm.
During the cardiac cycle, the shape of LV 64 varies, and its cross-sectional area and volume change from a maximum at ED to a minimum at ES. The cross-sectional area and the shape defined by the contour of the endocardium surface change during this cycle as portions of the wall of the heart contract and expand. By evaluating the changes in the contour of the LV from image frame to image frame over one or more cardiac cycles, a physician can diagnose organic problems in the patient's heart, such as a leaking mitral valve or a weakened myocardium (muscle) along a portion of the wall of the LV. These physiological dysfunctions of the heart are more readily apparent to a physician provided with contours of the heart over the cardiac cycle. The physician is alerted to a possible problem if the contour does not change shape from frame to frame in a manner consistent with the functioning of a normal heart. For example, if a portion of the LV wall includes a weakened muscle, the condition will be evident to a physician studying the relative changes in the contour of the LV in that portion of the wall, compared to other portions, since the portion of the endocardium comprising the weakened muscle will fail to contract over several image frames during systole in a normal and vigorous manner. Similarly, over multiple cardiac cycles, valve leakage will be evident from an evaluation of the contours. At the very least, physicians are interested in comparing the contour of the LV at ED with that at ES. Thus, a primary emphasis of the present invention is in automatically determining the contour of the LV within the ED and ES image frames, and in using the contour to evaluate the condition of the heart, e.g., by determining the ejection fraction, although the contour can automatically be determined for other image frames during the cardiac cycle in the same manner. The capability to automatically determine the contour of the LV immediately after the images are acquired (or in real time) can enable a physician to more readily evaluate the condition of the heart during related medical procedures. The present method has been shown to be capable of automatically determining the contours of the LV or other chamber of the heart (or the contour of another organ) during one or more cardiac cycles, with an accuracy approaching that of an expert skilled in evaluating such images, and the present method accomplishes this task substantially faster than a human expert. Moreover, the present method ensures that the contour is accurately determined by relating a position and orientation of the patient's heart in the image data to anatomical features, namely, the aortic valve plane and the apex. In addition, the present method provides a generally accurate determination of the contour adjacent to tissue such as the diaphragm, which tends to interfere with the automated process.
Overview of the Method
With reference to FIGURE 3, the steps implemented by processor 34 of the X-ray imaging facility in carrying out the present invention are illustrated. This Figure and the other Figures used to disclose the method of the present invention include oval shapes to indicate processes and rectangles to indicate objects. In this exemplary application of the present invention, raw systolic phase left ventriculogram (LVG) images 80 are processed to automatically identify the endocardial boundaries at the ED and ES points during the systolic phase of the cardiac cycle.
In carrying out this process, it should be understood that training data derived by manually determining the contours of the LV from a number of ventriculogram images for a range of hearts in other persons have previously been obtained so that the results of the manual process are now available during the automated processing of a current patient's cardiac images. Initially, a plurality of gray scale eigenvectors 82 derived from the training data are input with the raw systolic phase LVG images to a preprocessing and normalization step 88. The gray scale eigenvectors are obtained from the second moment matrix of gray scale vectors. They are the orthogonal axes reflecting the gray scale variance direction. Therefore, most of the information contained in the original gray scale vectors can be represented by only a few eigenvectors. Depending upon characteristics of the particular image acquisition equipment used to produce the LVG data, each pixel will have a defined vertical and horizontal scale, as indicated in a block 84. The pixels in the raw systolic phase LVG image data are sometimes not square, as would be preferred, but instead are rectangular, having a different vertical and horizontal size. The size of the pixels produced by the image acquisition equipment is thus input to preprocessing and normalization step 88, along with a set of tools known as morphological structuring elements 86 that were previously determined from the training data. Further details of the preprocessing and normalization step carried out in block 88 are shown in FIGURE 4.
Turning now to FIGURE 4, a pixel size calibration step is applied to the raw systolic phase LVG image data, as indicated in a block 90. This step transforms the pixels in these image data into square pixel LVG data, as indicated in a block 92. In the preferred embodiment, each image frame is normalized to a resolution of 384x512 square pixels, and the image sequence begins with a frame at ED and ends at the following ES image frame in the systolic phase of the cardiac cycle.
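As an illustration of the pixel size calibration in block 90, the following sketch resamples a frame with rectangular pixels onto a square pixel grid; the physical pixel sizes and the use of scipy's bilinear zoom are assumptions, and the subsequent resize to the 384x512 working resolution is omitted.

```python
import numpy as np
from scipy import ndimage

def to_square_pixels(frame, row_size_mm, col_size_mm):
    """Resample one frame so its pixels are square, zooming the coarser
    axis up to the finer physical spacing to preserve detail."""
    target = min(row_size_mm, col_size_mm)
    zoom = (row_size_mm / target, col_size_mm / target)
    return ndimage.zoom(frame.astype(float), zoom, order=1)  # bilinear
```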
The square pixel LVG data produced during the step in block 92 are then input to a block 94 in which a gray scale morphological opening and closing operation is applied to the data. The step referenced in block 94 is a filtering process, producing "smoothed" raw image sequence data, as indicated in a block 96. The step carried out in block 94 uses morphological closing operators on the LVG data that tend to lighten relatively darker spots disposed in brighter regions of the image frames and morphological opening operators that tend to darken relatively brighter spots disposed in darker regions of the image frames. To better understand the effect of the morphological structuring elements, it is helpful to think of each image frame as a two-dimensional (2D) plane in which each pixel projects vertically upward in the third dimension to a height that depends upon its gray scale value. Thus, in a larger region of bright pixels having relatively high gray scale values, a small region of pixels having substantially lower gray scale values would appear as a small depression in the third dimension. By applying the morphological closing operators, the small region of pixels would have the gray scale values for each pixel increased to be more consistent with the gray scale values of the brighter pixels surrounding them. Similarly, the morphological opening process reduces relatively high gray scale values of pixels that appear as bright spots in a relatively darker region, so that they are more consistent with the lower gray scale values of pixels surrounding them. One of the characteristics of the morphological closing and opening operators used in this step to remove noise and smooth the data is that they do not significantly alter the spatial location of sharp transitions between relatively light regions and dark regions. A general primer discussing morphological opening and closing operators is provided in a paper entitled "Image Analysis Using Mathematical Morphology," by R.M. Haralick, S.R. Sternberg, and X. Zhuang, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 4, pp. 532-550 (July 1987).
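A minimal sketch of this smoothing step using scipy's gray scale morphology; the flat 5x5 window is only a stand-in for the structuring elements that the method derives from the training data.

```python
from scipy import ndimage

def smooth_gray(frame, size=(5, 5)):
    """Gray scale opening darkens small bright spots in darker regions and
    closing lightens small dark spots in brighter regions, while leaving
    sharp light/dark transitions essentially in place."""
    opened = ndimage.grey_opening(frame, size=size)
    return ndimage.grey_closing(opened, size=size)
```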
The resulting smoothed raw image sequence is then input to a block 98 which carries out a 12 band systolic phase normalization. The effect of this normalization step is to compensate for the different numbers of image frames that can be captured during one cardiac cycle for different patients. It will be apparent that, at a fixed sample rate, a different number of image frames will be included in a cardiac systolic phase for one patient than for another who has a different cardiac rate. However, by normalizing the smoothed raw image sequence, a normalized 12 band image sequence is produced as indicated in a block 100, in which only 12 image frames are included, for a complete systolic phase of the patient.
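To make the 12 band normalization concrete, here is a sketch that linearly resamples a systolic sequence of arbitrary length to 12 bands; the use of linear interpolation in time is an assumption, since the patent does not specify the resampling mechanics.

```python
import numpy as np

def normalize_to_bands(frames, n_bands=12):
    """Resample a systolic sequence (ED frame first, ES frame last) of any
    length to a fixed number of bands; `frames` has shape
    [n_frames, rows, cols]."""
    n = frames.shape[0]
    src = np.linspace(0.0, n - 1.0, n_bands)   # fractional source frame indices
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    w = (src - lo)[:, None, None]              # interpolation weights per band
    return (1.0 - w) * frames[lo] + w * frames[hi]
```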
Next, in a block 102, a CDF gray scale normalization step is applied to the normalized 12 band image sequence data. This step spreads the gray scale data for each image, based upon a histogram produced for the gray scale data in all 12 images of the normalized 12 band image sequence, and has the effect of increasing the range of the gray scale values of these image frames over much more of the available 0-255 range. The result of the step carried out in block 102 is a 12 frame CDF sequence, as indicated in block 104. The sequence is then input to a block 106, which carries out a four-dimensional (4D) eigenspace projection using the gray scale eigenvectors developed from the training data. Given an original gray scale vector and a gray scale eigenvector obtained as described above, an eigenspace projection is an inner product of the gray scale vector and the eigenvector. The result of the multiplication is called a principal component. This multiplication can be repeated for each of the gray scale eigenvectors, and in this preferred embodiment, there are four gray scale eigenvectors. Therefore, the step carried out in block 106 implements a matrix multiplication, producing an orthogonal projection of each pixel onto a new space having four dimensions. The result is a 4D principal component image 108.
Referring back to FIGURE 3, it will be noted that the 4D principal component image is input to a block 110 in which an initial ED/ES region classifier step is applied to the image. In this step, a user is required to enter the location of two ED aortic valve points and an apex point. The locations of these points are manually determined by observation of the ED image frame of the patient. Other inputs to this step include an ED gain matrix 114, data in a lookup table (LUT) 116, and a prior probability image 120, the LUT and prior probability image being derived from the training data. Further, the classifier step is implemented only for a region of interest in which the left ventricle is disposed, as indicated by a block 118. Details of the procedure carried out in block 110 are shown in FIGURE 5.
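Before turning to the details of FIGURE 5, the two steps just described can be sketched as follows; the 8-bit gray scale assumption and the [12, rows, cols] band layout are illustrative, and the eigenvectors would come from the training data.

```python
import numpy as np

def cdf_normalize(bands):
    """Spread gray scale values over the full 0-255 range using the CDF of
    all 12 bands pooled together (assumes 8-bit gray scale input)."""
    hist = np.bincount(bands.astype(np.uint8).ravel(), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    return (255.0 * cdf)[bands.astype(np.uint8)]

def project_principal_components(cdf_bands, eigenvectors):
    """Project each pixel's 12-component gray scale vector onto four
    training-derived eigenvectors (assumed shape [12, 4]), giving a 4D
    principal component image of shape [rows, cols, 4]."""
    pixels = np.moveaxis(cdf_bands, 0, -1)     # [12, rows, cols] -> [rows, cols, 12]
    return pixels @ eigenvectors
```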
Referring now to FIGURE 5, the user-entered ED aortic valve points and the apex in block 112 are input along with prior probability image 120 to a procedure block 130 that provides for aligning the aortic valve and long axis of the prior probability image with those determined from the user-entered ED aortic valve points and apex. In contrast to this step, the prior related methods provided only for aligning the prior probability image based upon the apex and aortic valve, or, alternatively, attempted to automatically determine the location of the aortic valve plane for use as a reference in the alignment step. However, because of the problems that have been identified with these two approaches, the present invention requires that the user specify the aortic valve points and the apex of the left ventricle at ED to facilitate aligning the aortic valve angle and long axis of the patient's heart at ED with the corresponding elements in the prior probability image. The result of the step carried out in block 130 is an aligned prior probability image 132, which is input to a block 134 to carry out a matrix multiply operation.
Region of interest 118 will typically be disposed proximate the central portion of each image frame. However, the region of interest can be specified more precisely to minimize the amount of background involved in processing the data included in the images. The 4D principal component image and region of interest are thus input to a block 122 that automatically selects pixels inside the region of interest. This step yields 4D principal components that are inside the region of interest, as indicated in a block 124.
LUT 116 is generated from ground truth class images derived from the training data and from a selected number of gray scale principal components, and will have a specified size that is preferably optimized. Each entry in the LUT includes three probabilities: the probability that a pixel will have a quantized feature vector given that the true class is the background, the ED/ES (i.e., ED-not-ES) region, or the ES region. The background class includes all of the area outside of the boundary of the LV, while the ED/ES class represents all of the area that is inside the ED boundary but not within the ES boundary. Finally, the ES class represents the area that is inside the ES boundary. The LUT is created using a non-parametric technique that quantizes the measurement space into rectangular hyperboxes; the number of hyperboxes is the LUT size, m, which is preferably optimized, as is a k-nearest-neighbor (k-NN) smoothing parameter. The quantization is determined so that the marginal distribution of the quantized feature vector is approximately uniform. The class conditional probabilities in each entry of the LUT are initially estimated by simply counting how many times the quantized feature vector falls into the associated hyperbox. These initial counts are then normalized by the total number of feature vectors in the training data for the class. The initial probability data are then smoothed by a k-NN technique to produce the final estimates.
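A sketch of how such a class conditional LUT could be built; quantile-based bin edges give the roughly uniform marginal distribution described above, while the k-NN smoothing of the counts is omitted. The bin count and all array shapes are assumptions.

```python
import numpy as np

def build_lut(features, labels, bins_per_dim=8, n_classes=3):
    """Estimate P(quantized feature vector | class) for a 4D feature space.
    `features` is [n_samples, 4]; `labels` holds class indices in {0, 1, 2}
    for the background, ED-not-ES, and ES classes."""
    n_dims = features.shape[1]
    edges = [np.quantile(features[:, d], np.linspace(0, 1, bins_per_dim + 1)[1:-1])
             for d in range(n_dims)]
    codes = np.stack([np.digitize(features[:, d], edges[d]) for d in range(n_dims)],
                     axis=1)                    # each code lies in 0..bins_per_dim-1
    lut = np.zeros((bins_per_dim,) * n_dims + (n_classes,))
    for code, c in zip(codes, labels):
        lut[tuple(code)][c] += 1.0              # count hits per hyperbox and class
    class_totals = np.bincount(np.asarray(labels), minlength=n_classes).astype(float)
    return lut / np.maximum(class_totals, 1.0)  # normalize per class
```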
A block 126 indicates that the information contained in the LUT is accessed in processing each 4D principal component vector, which serves as the feature vector for a pixel inside the region of interest, to determine a class conditional probability, as indicated in a block 128. Each element of the 4D principal components is quantized and addressed in the corresponding dimension of the 4D LUT, thereby determining the location of the hyperbox to which the 4D feature vector belongs in the LUT. The class conditional probability is an indication of the probability that each pixel within the region of interest is in a specific one of the three classes noted above. The class conditional probability is input to block 134, which provides for implementing a matrix multiplication of the aligned prior probability image in block 132 by the class conditional probability. The result of this matrix multiplication is a posterior class probability at each pixel location, as noted in a block 136. The posterior class probability at a pixel location is the probability that the pixel's true class is C, given the observed feature vector of the pixel and given the pixel location. The posterior class probability at each pixel location is input to a maximal gain decision-making step, as noted in a block 138. ED gain matrix 114, which includes nine optimized numbers that provide a weighting for determining the class by adjusting the posterior class probability in the maximal gain decision-making step, is input to this step, producing an ED/ES pixel class image, as indicated in a block 140. In the ED/ES pixel class image, each pixel is labeled as being in one of the three classes noted above, the assigned class being the one having the maximum gain-adjusted probability, as determined in block 138.
Referring back to FIGURE 3, the ED/ES pixel class image in block 140 is input to a process block 142 in which an ED region post-processing and boundary extraction procedure is carried out, based upon structuring elements provided in a block 144.
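The maximal gain decision in block 138 can be sketched as follows; the nine gain values themselves are the optimized numbers described above and are treated here as given.

```python
import numpy as np

def maximal_gain_decision(posterior, gain_matrix):
    """Label each pixel with the class of largest expected gain; `posterior`
    is [rows, cols, 3] and `gain_matrix` is the 3x3 table of weights, with
    entry [i, j] the gain of deciding class i when the true class is j."""
    expected_gain = posterior @ gain_matrix.T   # [rows, cols, 3]
    return np.argmax(expected_gain, axis=-1)
```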
Referring now to FIGURE 6, details of the steps carried out to implement the ED region post-processing and boundary extraction are illustrated. The ED/ES pixel class image in block 140 is input to a procedure block 146 that provides for the grouping of connected components (grouping of pixels assigned to the same class and lying within a contiguous area) and the filling of any holes within the grouped components. In essence, all pixels that have been assigned a common class and are connected together to form part of a region for that class are grouped together. Any pixels that are not part of the class but are encompassed within the region of the group are assigned to the same class as the majority of the other pixels in the group so that the small holes in the group are eliminated. This step yields the ED and ES class region image, as indicated in a block 148. This image is then input to a procedure block 150 that implements ED and ES pixel location checking. This step ensures that pixels assigned to the ES class are disposed within the boundary limits of the pixels assigned to the ED class and, if not, modifies the pixel's assigned class to ensure consistency with the logical expectation for the pixel disposition. The result is a refined ED class region image, as indicated in a block 152. Next, structuring elements from block 144 are applied in a step indicated in a procedure block 154 that implements the binary morphological opening and closing smoothing of the refined ED class region image. The morphological opening and closing tends to smooth out a region boundary, eliminating its "peninsulas" and "inlets." The result of this step is a smoothed ED region, as indicated in a block 156. The next procedure, which is carried out in a block 158, is to trace the boundary of the smoothed ED region, producing a 100-point classifier ED boundary, as indicated in a block 160. The step carried out in block 158 identifies 100 spaced-apart points along the periphery of the smoothed ED region that comprise the classifier ED boundary.
The 100-point classifier ED boundary is input to an ED boundary regression process in a block 162 and also serves as an input to a rejection classifier procedure in a block 176 (see FIGURE 3). Also input to the ED boundary regression step are the user-entered ED aortic valve points and apex from block 112 and ED boundary eigenvectors (P1), as indicated in a block 166. Details of the steps carried out in the ED boundary regression process are shown in FIGURE 7. As shown therein, the ED boundary eigenvectors (P1), which are obtained from a second moment matrix of the manually traced boundaries from block 166, are applied to the 100-point classifier ED boundary from block 160 to produce a projected classifier boundary, as indicated in a block 170. The projection of the 100-point classifier ED boundary using the eigenvectors is carried out in a block 168. Next, the projected classifier boundary from block 170 is input to a procedure block 172 that carries out an ED boundary regression using an ED three-point full quadratic augmentation. The quadratic augmentation appends 28 components to the projected classifier boundary vector: one component has the value 1, six components are the coordinates of the three user-entered points, and 21 components are the products formed by multiplying each coordinate value by itself and by the other coordinate values.
In this step, the ED boundary regression coefficients from block 164 are applied and used in connection with the user-entered ED aortic valve points and apex that quadratically augment the projected classifier boundary. The ED boundary regression coefficients have previously been determined from the training data and will thus be available to carry out the step in block 172. The result of this regression is a 100-point regressed ED boundary, as indicated in a block 174.
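The augmentation and regression just described can be sketched as follows; the dimensions of the projected boundary and of the coefficient matrix are assumptions, with 100 boundary points corresponding to 200 output coordinates.

```python
import numpy as np

def full_quadratic_augmentation(points):
    """Build the 28-component augmentation from the three user-entered
    points: one constant 1, the six coordinates, and the 21 products of
    each coordinate with itself and with the other coordinates."""
    coords = np.asarray(points, dtype=float).ravel()        # length 6
    prods = [coords[i] * coords[j] for i in range(6) for j in range(i, 6)]
    return np.concatenate(([1.0], coords, prods))           # 1 + 6 + 21 = 28

def regress_boundary(projected, augmentation, coefficients):
    """Apply trained regression coefficients to the projected classifier
    boundary plus its augmentation, yielding the 200 coordinates of a
    100-point regressed boundary."""
    x = np.concatenate((projected, augmentation))
    return coefficients @ x                                  # [200, len(x)] @ [len(x)]
```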
As shown in FIGURE 3, the 100-point regressed ED boundary in block 174 is provided as an input to the rejection classifier process step in a block 176. Also input to the rejection classifier process are rejection classifier coefficients 178, which will previously have been determined from the training data. The rejection coefficient vector is a vector of weights that is used in the rejection classifier. The 100-point regressed ED boundary is also input to an ES region classifier processing step in a block 180. Before discussing the details of the rejection classifier step, it will first be helpful to discuss how the other inputs relating to the ES boundary are obtained. It will be noted in FIGURE 3 that the ES region classifier step in block 180 is carried out using inputs from the 4D principal component image in block 108, the prior probability image in block 120, the LUT in block 116, user-entered ES aortic valve points and apex from a block 182, and both a distance histogram in a block 194 and an ES gain matrix in a block 204.
Details of the ES region classifier process are illustrated in FIGURE 8. With reference to this Figure, user-entered ES aortic valve points and apex from block 182 and the prior probability image derived from the training data are input to a process block 184 that provides for aligning the aortic valve angle and long axis of the prior probability image at ES with that in the image of the patient's heart at ES based upon the user-entered data from block 182. This step is analogous to that carried out for the ED image in block 110, as discussed above. The result of this alignment is an aligned prior probability image as indicated in a block 186. This image is input to a block 188 in which a step is carried out for refining the aligned prior probability image based on distance considerations. To implement the step carried out in block 188, it is necessary to use a distance transform that is carried out in a block 190, based upon the regressed ED boundary from block 174. A distance transform defines a distance image with respect to the ED boundary as indicated in a block 192. The distance histogram provides a probability for each of a plurality of distances between the ED boundary and the ES boundary. Using the distance histograms in block 194, the aligned prior probability image, and the distance image with respect to the ED boundary in block 192, the procedure in block 188 refines the aligned prior probability image as a function of distance, to produce the refined aligned prior probability image in a block 196.
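A sketch of the distance-based refinement in block 188, assuming a simple Euclidean distance transform and a binned distance histogram; the bin width and the final rescaling are illustrative choices, not the patent's specification.

```python
import numpy as np
from scipy import ndimage

def refine_prior_with_distance(prior_es, ed_mask, distance_hist, bin_mm=1.0):
    """Weight the aligned ES prior by the training probability of each
    pixel's distance inward from the regressed ED boundary; `ed_mask` is
    True inside the ED boundary, and `distance_hist[k]` is the probability
    of an ED-to-ES boundary gap falling in bin k."""
    dist = ndimage.distance_transform_edt(ed_mask)   # 0 outside, grows inward
    bins = np.minimum((dist / bin_mm).astype(int), len(distance_hist) - 1)
    refined = prior_es * distance_hist[bins]
    return refined / max(refined.max(), 1e-12)       # rescale into [0, 1]
```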
The refined aligned prior probability image in block 196 is input to a matrix multiply step in a block 198. Block 198 also receives an input from the class conditional probability in a block 128, which is determined as described above. The matrix multiply operation carried out in block 198 produces a posterior class probability at each pixel location, as indicated in a block 200. The posterior class probability indicates the probability of each pixel in the image being in each of the three classes. The probability data are input to a process for carrying out a maximal gain decision in a block 202, which uses an ES gain matrix that includes nine optimized numbers indicating the weighting for determining the class when making the maximal gain decision. The result of the operation in block 202 is the ES pixel class image indicated in a block 206. Referring back to FIGURE 3, it will be noted that the ES pixel class image is input to a procedure for processing the ES region and extracting its boundary, which is carried out in a block 208 using structuring elements from a block 216.
Details of the steps implemented in block 208 are shown in FIGURE 9. As shown therein, the ES pixel class image from block 206 is input to a procedure for connecting the components by grouping, and filling in holes within the groups, as set forth in a block 210. This step is similar to that carried out in connection with the ED image frame data. The result of the step is an ES class region image, as indicated in a block 212. This image is input to a process in a block 214 that provides for applying a binary morphological opening and closing process using structuring elements 216, which were obtained from the training data, to produce a smoothed ES region in a block 218. As explained above, this step acts as a filter to smooth a region boundary, removing its "peninsulas" and "inlets." The smoothed ES region in block 218 is input to a procedure for tracing the boundary of the ES region in a block 220, yielding 100 spaced-apart points around a classifier ES boundary, as indicated in a block 222.
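The tracing steps in blocks 158 and 220 each reduce a region boundary to 100 spaced-apart points. One way to obtain such points from a traced contour is to resample it at equal arc-length intervals, as in the sketch below; the equal-spacing rule is an assumption about how the spacing is defined.

```python
import numpy as np

def resample_boundary(contour, n_points=100):
    """Resample a traced closed contour ([n, 2] vertices) to a fixed number
    of points equally spaced by arc length, so every boundary becomes the
    same 100-point representation."""
    closed = np.vstack([contour, contour[:1]])       # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))      # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_points, endpoint=False)
    rows = np.interp(targets, s, closed[:, 0])
    cols = np.interp(targets, s, closed[:, 1])
    return np.stack([rows, cols], axis=1)
```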
Referring to FIGURE 10, a procedure for determining the ES boundary regression (shown in a block 224 in FIGURE 3) is illustrated and makes use of the 100-point classifier ES boundary in block 222 that was just determined. In addition, ES boundary eigenvectors (P2) in a block 226 are input to a procedure for projecting the 100-point classifier ES boundary in accord with the eigenvectors. The result is a projected classifier ES boundary, as indicated in a block 230.
The user-entered ES aortic valve points and apex position in block 182 are input to a full quadratic augmentation process in a block 240 to produce an ES augmentation in a block 242. Similarly, the user-entered ED aortic valve points and apex in a block 234 are input to a partial quadratic augmentation process in a block 236, producing an ED augmentation in a block 238. The ES augmentation in block 242 and the ED augmentation in block 238 are input to a block 232, along with ES boundary regression coefficients determined from the training data in a block 244 and the projected classifier ES boundary, so that the 100-point regressed ES boundary can be determined by the regression with ES full quadratic and ED partial quadratic augmentation applied to the projected classifier ES boundary, as indicated in a block 246.
Before proceeding with the rejection classifier process in block 176 (FIGURE 3), several additional parameters must be determined. FIGURES 11 through 13 show how these various parameters are developed. With reference to FIGURE 11, the 100-point classifier ED boundary in block 160 is input to a process block 248, which computes a classifier ED area in a block 250. The 100-point classifier ED boundary is also used in a block 252 to compute a classifier ED volume in a block 254.
Similarly, the 100-point classifier ES boundary in block 222 is input to blocks 258 and 262 to respectively compute a classifier ES area in a block 260 and a classifier ES volume in a block 264. Using both the classifier ED volume and the classifier ES volume, a block 256 provides for computing the stroke volume for the left ventricle, yielding a classifier stroke volume in a block 270. Both the classifier ED and ES volumes are also input to a block 266, which computes the ejection fraction, yielding a classifier ejection fraction in a block 268. Referring now to FIGURE 12, the 100-point regressed ED boundary in block 174 is input to blocks 272 and 276, which respectively determine a regressed ED area in a block 274 and a regressed ED volume in a block 278. Similarly, the 100-point regressed ES boundary is input to blocks 282 and 286 for computing a regressed ES area in a block 284 and a regressed ES volume in a block 288, respectively. Then, the regressed ED volume in block 278 and the regressed ES volume in block 288 are input to both blocks 280 and 290, which respectively provide for computing a regressed stroke volume in a block 294 and a regressed ejection fraction in a block 292.
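Given the two volumes, the derived quantities follow directly: the stroke volume is the difference between the end diastolic and end systolic volumes, and the ejection fraction is that difference divided by the end diastolic volume.

```python
def stroke_volume(edv, esv):
    """Stroke volume: the volume ejected per beat (EDV - ESV)."""
    return edv - esv

def ejection_fraction(edv, esv):
    """Ejection fraction: the fraction of the ED volume ejected per beat."""
    return (edv - esv) / edv
```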
In FIGURE 13, the 100-point classifier ED boundary in block 160 and the 100-point regressed ED boundary in block 174 are input to a block 296, which provides for computing a boundary error based upon the difference between the classifier ED boundary and the regressed ED boundary. The ED boundary error in a block 298 is produced by this step. Further, the 100-point classifier ES boundary in block 222 and the 100-point regressed ES boundary in block 246 are input to a block 300, which computes a boundary error between them, yielding an ES boundary error in a block 302.

At this point, all of the parameters necessary to implement the rejection classifier step in block 176 (FIGURE 3) are available. Details of this step are shown in FIGURE 14. FIGURES 11 through 13, discussed above, show how the parameters applied by the procedure in a block 304 on FIGURE 14 are determined: the ED boundary error 298, the classifier ED and ES areas 250 and 260, the classifier ED and ES volumes 254 and 264, the classifier stroke volume 270, and the classifier ejection fraction 268. Similarly, the corresponding regressed parameters for ED and ES, including area and volume, together with the regressed stroke volume and regressed ejection fraction, all determined as discussed above, are input to a procedure block 308. The mean absolute boundary difference between the classifier ED boundary and the regressed ED boundary in block 298, and the mean absolute boundary difference between the classifier ES boundary and the regressed ES boundary in block 302, are also input to procedure block 308, which provides for forming a rejection classifier feature vector, as indicated in a block 310. A dot product of the rejection classifier feature vector and the rejection classifier coefficients in block 178 is carried out in a block 312, and the result is compared to a rejection cost threshold t2 in a block 304. The rejection cost threshold t2 is optimized so that the ED and ES boundaries are automatically accepted or rejected in a manner that minimizes both the rejection of boundaries that should be accepted and the acceptance of boundaries that should be rejected. The acceptance or rejection of the automated boundary delineation provided by the procedure is thus achieved, as indicated in a block 306.

In the event that a boundary is rejected as falling outside the rejection cost threshold t2, a human operator is alerted by a flag appearing on the display. The operator then has the option of applying human expertise in accepting or rejecting the boundary, or alternatively in modifying the boundary that was automatically determined and rejected, to make the boundary acceptable. It is contemplated that the need for operator intervention should be minimal, given the relatively high accuracy of the present invention in determining the boundary of an organ such as the left ventricle.
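The decision just described reduces to a dot product and a threshold comparison; the sketch below also includes the mean absolute boundary difference used as a feature. The feature ordering and the direction of the comparison are assumptions.

```python
import numpy as np

def mean_abs_boundary_error(b1, b2):
    """Mean absolute distance between corresponding points of two
    (100, 2) boundaries, e.g. classifier vs. regressed."""
    return float(np.mean(np.linalg.norm(b1 - b2, axis=1)))

def accept_boundaries(features, coeffs, t2):
    """features: rejection classifier feature vector (boundary errors,
    areas, volumes, stroke volumes, ejection fractions); coeffs: trained
    rejection classifier coefficients; t2: optimized rejection cost
    threshold. Returns True when the delineation is accepted."""
    cost = float(np.dot(features, coeffs))
    return cost <= t2  # costs above t2 are flagged for operator review
```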
Although the present invention has been described in connection with the preferred form of practicing it, those of ordinary skill in the art will understand that many modifications can be made thereto within the scope of the claims that follow. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow.

Claims

The invention in which an exclusive right is claimed is defined by the following:
1. A method for automatically determining a contour of at least a portion of an organ in a patient, based upon digital image data of a region in which the organ is disposed, said image data representing a sequence of image frames of the region made over an interval of time during which the organ has completed at least one cycle of periodic movement, each image frame of said sequence of image frames comprising a plurality of pixels, said method comprising the steps of:
(a) normalizing the image data, producing principal component images therefrom;
(b) non-parametrically classifying pixels represented in the principal component images as being in one of three classes, producing pixel class images for a maximum volume but not a minimum volume, and for the minimum volume; and
(c) extracting a minimum volume boundary and a maximum volume boundary for the organ from the pixel class images, using parameters derived from training data previously produced through manual evaluation of a plurality of image frames over at least one cycle of periodic movement in each of a plurality of organs in other individuals, said minimum volume and said maximum volume boundaries defining the contour of at least said portion of the organ.
2. The method of Claim 1, wherein the step of normalizing includes the steps of:
(a) normalizing each pixel in regard to dimensional size, producing square pixel image data;
(b) using a gray scale morphological opening and closing process to reduce noise in the pixel image data;
(c) normalizing an image sample rate used to acquire the image data over a cycle of movement of the organ, to produce a predefined number of bands of image data;
(d) applying a cumulative distribution function to normalize a gray scale in each of the predefined number of bands of image data, producing a cumulative distribution function (CDF) sequence of image frames that each include a plurality of pixels; and
(e) orthogonally projecting each pixel in the CDF sequence of image frames onto a new space having a predefined number of dimensions, using parameters derived from the training data, to produce the principal component images.
3. The method of Claim 1, wherein the step of non-parametrically classifying comprises the steps of:
(a) aligning a reference axis and an anatomical feature in the organ of the patient with a corresponding axis and corresponding anatomical feature in prior probability images derived from the training data, producing aligned prior probability images for the maximum and minimum volume boundaries of the organ during the cycle;
(b) selecting pixels within a region of interest inside the principal component images, and based upon a probability for each class, determining a conditional class probability for each of said pixels;
(c) using the aligned prior probability image and the conditional class probability for each pixel, determining a probability for each class at each pixel location to initially classify the pixels in a region within the maximum volume boundary; and
(d) using the aligned prior probability image, the conditional class probability for each pixel, and regression data representing the maximum volume boundary, determining a probability for each class at each pixel location to initially classify the pixels in a region within the minimum volume boundary.
4. The method of Claim 3, wherein the regression data representing the maximum volume boundary are combined with probabilities associated with distances between the maximum volume boundary and the minimum volume boundary to produce a refined aligned prior probability image for use with the conditional class probability in the step of determining a minimum volume boundary pixel class image.
5. The method of Claim 1, wherein the step of extracting the maximum volume boundary comprises the steps of:
(a) forming a maximum volume boundary and minimum volume boundary class region image that includes all pixels classified either:
(i) within the maximum volume boundary AND not within the minimum volume boundary; or
(ii) within the minimum volume boundary;
(b) deleting any pixel that is inconsistent with the maximum volume boundary and minimum volume boundary class region image, producing a refined maximum volume boundary class region image;
(c) smoothing the refined maximum volume boundary class region image; and
(d) automatically tracing the maximum volume boundary class region to extract a classifier maximum volume boundary having a predefined number of points that represent the maximum volume boundary.
6. The method of Claim 1, wherein the step of extracting the minimum volume boundary comprises the steps of:
(a) forming a minimum volume boundary class region image that includes all pixels classified as being within the minimum volume boundary;
(b) smoothing the minimum volume boundary class region image; and
(c) automatically tracing the minimum volume boundary class region to extract a classifier minimum volume boundary having a predefined number of points that represent the minimum volume boundary.
7. The method of Claim 1, further comprising the step of applying regression coefficients derived from the training data to the maximum volume boundary and to the minimum volume boundary to determine a regressed maximum volume boundary and a regressed minimum volume boundary.
8. The method of Claim 7, further comprising the steps of:
(a) determining a plurality of parameters for the organ based upon the regressed maximum volume boundary, the regressed minimum volume boundary, and classifier boundaries obtained from the training data; and
(b) as a function of a rejection classifier feature vector derived from the plurality of parameters, rejection classifier coefficients determined from the training data, and a rejection cost threshold, determining if the contour of at least said portion of the organ is accepted or rejected.
9. The method of Claim 8, wherein if said contour is rejected, further comprising the step of flagging the contour for manual review by an operator.
10. The method of Claim 1, wherein the organ comprises a heart and the cycle comprises a cardiac cycle of the heart, said maximum volume boundary corresponding to an end diastole boundary of a chamber of the heart, and said minimum volume boundary corresponding to an end systole boundary of the chamber of the heart.
11. The method of Claim 1, further comprising the step of employing at least one optimized parameter in determining the contour.
12. A method for automatically determining a contour of a chamber of a heart in a patient, based upon digital image data of a region in which the heart is disposed, said image data representing a sequence of image frames of the region made over an interval of time during which the heart has completed at least one cardiac cycle, each image frame of said sequence of image frames comprising a plurality of pixels, said method comprising the steps of:
(a) optimizing a plurality of parameters for use in processing the image data;
(b) normalizing the image data in regard to pixel size and gray scale, producing normalized image data;
(c) classifying the pixels in the normalized image data into:
(i) pixels that are within an end diastole boundary and not within an end systole boundary of the chamber;
(ii) pixels that are within the end systole boundary of the chamber; and
(iii) all other pixels, said step of classifying using an optimized gain matrix produced in step (a) to produce a plurality of pixel class images; and
(d) processing the pixel class images to extract an end diastole boundary and an end systole boundary for the chamber of the heart, using criteria derived from training data previously produced through manual evaluation of a plurality of image frames over at least one cardiac cycle for each of a plurality of hearts in other individuals, and parameters optimized in step (a), said end diastole boundary and said end systole boundary defining the contour of said chamber of the heart in the patient.
13. The method of Claim 12, further comprising the step of applying boundary regression using the training data, to produce regressed end systole and regressed end diastole boundaries.
14. The method of Claim 13, wherein the step of applying boundary regression includes the step of using anatomical features for the heart in the patient to augment determining the end systole boundary and end diastole boundary.
15. The method of Claim 12, further comprising the step of rejecting or accepting the contour of the chamber as a function of the training data and as a function of a threshold value that is one of the parameters optimized in step (a).
16. The method of Claim 15, wherein the threshold value is optimized to minimize an ejection fraction error for the chamber.
17. The method of Claim 12, wherein the step of classifying includes the step of aligning an axis of the chamber and an anatomical feature for prior probability images derived from the training data with a corresponding axis and corresponding anatomical feature in the image frames for the heart in the patient.
18. The method of Claim 12, wherein the step of classifying includes the step of employing a lookup table to determine probabilities of each class assigned to the pixels, said step of optimizing including the step of optimizing a size of the lookup table.
19. The method of Claim 12, wherein the parameters that are optimized include a plurality of parameters selected from:
(a) a size of a lookup table used in the step of classifying;
(b) a smoothing parameter used in minimizing noise in the image data;
(c) a prior probability image comprising a smoothing template derived from the training data;
(d) a set of distance histogram sectors that specify probabilities for distances between points on the end diastole boundary and the end systole boundary;
(e) gain matrices applied during the step of classifying;
(f) a regression dimension used in extracting the end diastole boundary and the end systole boundary; and
(g) a rejection threshold employed in determining whether to accept or reject a contour of the heart in the patient.
20. The method of Claim 12, further comprising the step of reducing a feature dimension in the image data using a principal component method that combines image data frames.
21. A method for automatically determining a contour of a chamber of a heart in a patient, based upon digital image data of a region in which the heart is disposed, said image data representing a sequence of image frames of the region made over an interval of time during which the heart has completed at least one cardiac cycle, each image frame of said sequence of image frames comprising a plurality of pixels, said method comprising the steps of:
(a) using criteria derived from training data previously produced through manual evaluation of a plurality of image frames over at least one cardiac cycle for each of a plurality of hearts in other individuals, processing the image data for the heart in the patient to produce an end diastole boundary and an end systole boundary of the chamber that define the contour of the chamber;
(b) computing a plurality of parameters for the heart in the patient, as a function of the training data, the end diastole boundary, and the end systole boundary;
(c) using the parameters to determine a characteristic of the chamber of the heart; and
(d) making a determination to accept or reject the contour by comparing the characteristic with a rejection threshold.
22. The method of Claim 21, wherein the characteristic is an ejection fraction error and wherein the rejection threshold defines an acceptable range for the ejection fraction error.
23. The method of Claim 21, further comprising the step of producing a regressed end diastole boundary and a regressed end systole boundary using eigenvectors derived from the training data, said regressed end diastole boundary and regressed end systole boundary being employed in the step of computing the parameters.
24. The method of Claim 21, wherein the step of processing the image data comprises the steps of:
(a) normalizing the image data in regard to gray scale using a cumulative distribution function, and in regard to pixel size;
(b) smoothing image frames in the sequence;
(c) normalizing an image sample rate; and
(d) orthogonally projecting each pixel onto a new space having a predetermined number of dimensions.
25. The method of Claim 21, wherein the step of processing comprises the steps of:
(a) classifying pixels non-parametrically into three classes;
(b) estimating class conditional probability using a lookup table;
(c) aligning class prior probability images from the training data with image frames for the patient, in regard to an axis and an anatomical feature in the prior probability images and the image frames for the patient; and
(d) determining the end systole classifier as a function of the end diastole boundary.
26. The method of Claim 24, wherein the step of normalizing includes the step of substantially reducing a feature dimension of the image frames for the heart in the patient.
27. The method of Claim 21, wherein the chamber of the heart comprises a left ventricle.
PCT/US2001/014341 2000-05-03 2001-05-03 Method for determining the contour of an in vivo organ using multiple image frames of the organ WO2001082787A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001261163A AU2001261163A1 (en) 2000-05-03 2001-05-03 Method for determining the contour of an in vivo organ using multiple image frames of the organ

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US56406100A 2000-05-03 2000-05-03
US09/564,061 2000-05-03

Publications (2)

Publication Number Publication Date
WO2001082787A2 (en) 2001-11-08
WO2001082787A3 WO2001082787A3 (en) 2002-03-21

Family

ID=24253003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/014341 WO2001082787A2 (en) 2000-05-03 2001-05-03 Method for determining the contour of an in vivo organ using multiple image frames of the organ

Country Status (2)

Country Link
AU (1) AU2001261163A1 (en)
WO (1) WO2001082787A2 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5570430A (en) * 1994-05-31 1996-10-29 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US5734739A (en) * 1994-05-31 1998-03-31 University Of Washington Method for determining the contour of an in vivo organ using multiple image frames of the organ
US5533510A (en) * 1994-07-15 1996-07-09 Hewlett-Packard Company Real time ultrasound endocardial displacement display
US5533085A (en) * 1995-02-27 1996-07-02 University Of Washington Automatic indexing of cine-angiograms
US5797396A (en) * 1995-06-07 1998-08-25 University Of Florida Research Foundation Automated method for digital image quantitation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAO X. ET AL.: 'Adaptive movable window enhancement in digital chest radiography' IEEE 1997, pages 551 - 552, XP002948151 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7295709B2 (en) 2001-05-25 2007-11-13 The Victoria University Of Manchester Object identification
US7306561B2 (en) 2004-09-02 2007-12-11 Scimed Life Systems, Inc. Systems and methods for automatic time-gain compensation in an ultrasound imaging system
US7460716B2 (en) 2004-09-13 2008-12-02 Boston Scientific Scimed, Inc. Systems and methods for producing a dynamic classified image
US9451929B2 (en) 2008-04-17 2016-09-27 Boston Scientific Scimed, Inc. Degassing intravascular ultrasound imaging systems with sealed catheters filled with an acoustically-favorable medium and methods of making and using
US8545412B2 (en) 2009-05-29 2013-10-01 Boston Scientific Scimed, Inc. Systems and methods for making and using image-guided intravascular and endocardial therapy systems
US20180330518A1 (en) * 2017-05-11 2018-11-15 Verathon Inc. Probability map-based ultrasound scanning
CN110753517A (en) * 2017-05-11 2020-02-04 韦拉索恩股份有限公司 Ultrasound scanning based on probability mapping

Also Published As

Publication number Publication date
WO2001082787A3 (en) 2002-03-21
AU2001261163A1 (en) 2001-11-12

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AT AU BG BR CA CH CN DE DK ES FI GB HU ID IL IN IS JP KR LK LU MX NO NZ PL RU SE SG TR VN YU ZA

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AT AU BG BR CA CH CN DE DK ES FI GB HU ID IL IN IS JP KR LK LU MX NO NZ PL RU SE SG TR VN YU ZA

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP