WO2005036457A2 - Virtual endoscopy methods and systems - Google Patents

Virtual endoscopy methods and systems Download PDF

Info

Publication number
WO2005036457A2
Authority
WO
WIPO (PCT)
Prior art keywords
colon
image
voxel
residue
region
Prior art date
Application number
PCT/US2004/033888
Other languages
French (fr)
Other versions
WO2005036457A3 (en)
Inventor
Dongqing Chen
Sarang Lakare
Kevin Kreeger
Mark R. Wax
Arie E. Kaufman
Zhengrong Liang
Original Assignee
Viatronix Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Viatronix Incorporated filed Critical Viatronix Incorporated
Priority to EP04795095A priority Critical patent/EP1716535A2/en
Publication of WO2005036457A2 publication Critical patent/WO2005036457A2/en
Publication of WO2005036457A3 publication Critical patent/WO2005036457A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30028Colon; Small intestine
    • G06T2207/30032Colon polyp

Definitions

  • the present invention relates generally to virtual endoscopy systems and methods for medical diagnosis and evaluation of anatomical objects such as organs with hollow lumens or cavities. More specifically, the invention relates to 3D imaging systems and methods for processing and rendering volumetric images of an organ for virtual endoscopy applications that enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ (e.g., virtual colonoscopy to detect colonic polyps).
  • virtual endoscopy systems implement methods for processing 3D image datasets to enable examination and evaluation of organs with hollow lumens or cavities, such as colons, bladders, lungs, arteries, etc., and to enable virtual simulation of endoscopic examination of such organs.
  • virtual endoscopy procedures include an organ preparation process whereby an organ to be evaluated is prepared in a manner that enables the anatomical features of the organ (e.g., organ tissue) to be contrasted from surrounding anatomical objects or materials in subsequently acquired medical images, followed by image acquisition and image processing to construct 2D or 3D models of the organ from the acquired image data.
  • Such 2D/3D models can be displayed in different rendering modes for inspecting for organ abnormalities.
  • virtual endoscopy applications provide 3D visualization (e.g., "fly-through" visualization) of the inner surface of the organ, which is referred to as the endoluminal view.
  • Virtual endoscopy is continuing to gain wider acceptance in the medical field as a non-invasive, patient-comfortable method for examining and evaluating organs.
  • Virtual endoscopy will eventually eliminate the need for invasive screening/testing endoscopic procedures such as optical colonoscopies, which require long instruments (catheters/endoscopes) to be inserted into the patient.
  • Such invasive procedures pose a risk of injury, including organ perforation, infection, hemorrhage, etc.
  • invasive endoscopic procedures can be highly uncomfortable and stressful to the patient.
  • exemplary embodiments of the invention include virtual endoscopy systems and methods for medical diagnosis and evaluation of anatomical objects such as organs with hollow lumens or cavities. More specifically, exemplary embodiments of the invention include 3D imaging systems and methods for processing and rendering volumetric images of an organ for virtual endoscopy applications that enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ (e.g., virtual colonoscopy to detect colonic polyps).
  • an imaging method that can be implemented for virtual endoscopy applications includes a process of obtaining an image dataset of an organ, processing the acquired image dataset to obtain feature data, and rendering a multi-dimensional representation of the imaged organ using the obtained feature data, wherein processing includes obtaining image intensity feature data from the image dataset, processing the image intensity feature data to obtain gradient feature data representing intensity change along each of a plurality of directions in a region of interest in the acquired image dataset; and processing the gradient feature data to determine boundary layers between anatomical features of the imaged organ and surrounding objects or materials in the region of interest.
  • an imaging method which can be implemented for virtual colonoscopy applications includes a process of obtaining an image dataset comprising image data of a colon that is prepared to tag regions of colonic residue in a manner that enhances a contrast between tagged regions of colonic residue in a lumen of the colon and a colon wall; segmenting a region of interest in the image dataset, the region of interest comprising the colon lumen, the colon wall, and regions of tagged residue in the colon lumen; electronically cleaning the tagged residue in the colon lumen using gradient feature data obtained from the image dataset via a maximum directional gradient feature analysis; and rendering a volumetric image comprising an endoluminal view at a region within the imaged colon.
  • FIG. 1 is a flow diagram illustrating a virtual endoscopy method according to an exemplary embodiment of the invention.
  • FIG. 2 is a flow diagram illustrating virtual colonoscopy methods according to exemplary embodiments of the invention.
  • FIGs. 3A-3D are diagrams illustrating bowel preparation methods for virtual colonoscopy, according to exemplary embodiments of the invention.
  • FIGs. 4A-4C are diagrams illustrating various types of foods/meals that can be used for bowel preparation methods according to exemplary embodiments of the invention.
  • FIG. 5 is a flow diagram illustrating an electronic organ cleaning method for virtual endoscopy according to an exemplary embodiment of the invention.
  • FIGs. 6A and 6B are exemplary images illustrating results of an electronic organ cleaning process according to the invention.
  • FIG. 7 is an exemplary image depicting a condition in which a portion of a colon wall has a similar intensity as that of a puddle of tagged residue.
  • FIG. 8 is a flow diagram illustrating a boundary extraction method according to an exemplary embodiment of the invention.
  • FIG. 9 is an exemplary diagram illustrating directional gradients that can be used for extracting boundary layers, according to an exemplary embodiment of the invention.
  • FIG. 10 is an exemplary diagram illustrating a method for computing directional gradients for extracting boundary layers, according to an exemplary embodiment of the invention.
  • FIG. 11 is a flow diagram illustrating a method for reconstructing boundary layers according to an exemplary embodiment of the invention.
  • exemplary embodiments of the invention as described in detail hereafter include organ preparation methods and image processing methods for virtual endoscopy applications. It is to be understood that exemplary imaging systems and methods according to the invention as described herein may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof.
  • virtual endoscopy systems and methods described herein can be implemented in software comprising program instructions that are tangibly embodied on one or more program storage devices (e.g., hard disk, magnetic floppy disk, RAM, CD-ROM, DVD, ROM, flash memory, etc.), and executable by any device or machine comprising suitable architecture.
  • FIG. 1 is a high-level flow diagram illustrating a method for virtual endoscopy according to an exemplary embodiment of the invention.
  • FIG. 1 depicts a virtual endoscopy process including methods for preparing a target organ for imaging and methods for processing volumetric images of the organ to enable virtual endoscopic examination of the organ.
  • the exemplary method of FIG. 1 depicts a general framework for virtual endoscopy which can be implemented with various imaging modalities for virtually examining various types of objects, such as organs with hollow lumens/cavities, e.g., colons, tracheobronchial airways, bladders, and the like.
  • an exemplary virtual endoscopy method includes an initial process of preparing an organ to be imaged (step 10).
  • organ preparation methods according to the invention are designed to enhance the contrast between anatomical features of an organ under evaluation and surrounding objects and material in subsequently acquired medical images of the organ.
  • organ preparation methods according to the invention are designed to be non-invasive and comfortable for the individual whose organ is to be virtually examined.
  • An organ preparation process according to the invention will vary depending on factors such as, e.g., the type of organ to be imaged and the imaging modalities used for acquiring medical images of the organ.
  • exemplary embodiments of the invention include bowel preparation methods for preparing a colon for virtual colonoscopy by an individual being administered contrast agent(s) and following a specific diet regime for a period of time prior to imaging the colon. More specifically, exemplary embodiments of the invention include laxative-free/suppository-free bowel preparation methods which use a combination of diet management and administration of contrast agents to effectively "tag" colonic residue (e.g., stool, fluid) that may be present in the lumen of the colon such that the colonic residue can be distinguished from surrounding tissues in subsequently acquired medical images of the colon.
  • contrast agents and methods of administering contrast agents will vary depending on, e.g., the application and the organ under evaluation.
  • contrast agents can be administered according to a particular protocol either orally, intravenously, or a combination thereof.
  • the type of contrast agent is not limited to medicine or specific chemicals, and could be normal food or even natural water.
  • natural water can be utilized as the contrast agent for preparing a bladder for imaging.
  • organ preparation may include drinking a specified amount of water before image acquisition.
  • the organ will be imaged to acquire a 3D volumetric image dataset (step 11) using one or more imaging modalities that are suitable for the given application.
  • imaging modalities that may be used for acquiring medical images of an organ include, for example, x-ray CT (Computed Tomography), MRI (Magnetic Resonance Imaging), US (ultrasound), PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computed Tomography).
  • the medical images can be 3D volumetric image datasets that are directly acquired as a result of a scan.
  • 3D volumetric image datasets can be generated by acquiring an image dataset comprising multiple 2D or "slice" images and then stacking and interpolating between the 2D images to produce a 3D volumetric image dataset.
  • volume rendering methods known to those of ordinary skill in the art can be implemented to combine adjacent 2D image planes (slices) including, for example, maximum intensity projection, minimum intensity projection, and surface rendering techniques in combination with voxel texture information, depth information, gradient shading, etc.
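  • For illustration, a minimal sketch of stacking 2D slice images into a 3D volume and computing a maximum intensity projection follows; Python/NumPy, the array sizes, and the function names are our assumptions, not part of the patent.

      import numpy as np

      def stack_slices(slices):
          """Stack equally sized 2D slice arrays into one 3D volume (z, y, x)."""
          return np.stack(slices, axis=0)

      def max_intensity_projection(volume, axis=0):
          """Collapse the volume along one axis, keeping the brightest voxel."""
          return volume.max(axis=axis)

      # Usage with synthetic data standing in for 300 CT slices:
      slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(300)]
      volume = stack_slices(slices)            # shape (300, 512, 512)
      mip = max_intensity_projection(volume)   # 512 x 512 projection image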
  • the 3D volumetric images are processed using known techniques for segmenting a region of interest (ROI), e.g., the target organ, from the 3D image dataset (step 12).
  • any suitable automated or semi-automated segmentation method may be implemented to extract the image data volume that corresponds to the target organ from the original 3D volume space.
  • methods are implemented for segmenting a colon region from a 3D image dataset, wherein the colon region includes the colon wall, colon lumen, and tagged colonic residue in the colon.
  • the segmentation methods will vary depending on the target organ of interest and imaging modality and generally include methods for segmenting features or anatomies of interest by reference to known or anticipated image characteristics, such as edges, identifiable structures, boundaries, changes or transitions in colors or intensities, changes or transitions in spectrographic information, a priori anatomical knowledge, etc.
  • Various types of segmentation methods that can be implemented are well known to those of ordinary skill in the art, and a detailed discussion thereof is unnecessary and beyond the scope of the claimed inventions.
  • the image data corresponding to the extracted ROI is processed using one or more feature analysis methods to extract feature data of relevant clinical significance with respect to the organ under investigation (step 13).
  • the feature analysis methods that are implemented for extracting relevant features, data, or image parameters from image data will vary depending on the type(s) of anatomical structures or organs under consideration and imaging modality. For example, methods may be implemented for detecting potential anatomical abnormalities in the organ of interest or extracting boundaries or edges between different tissues/materials/structures in the imaged organ. For example, as explained below, methods according to the invention are provided for processing volumetric images of a colon to detect and remove tagged regions of colonic residue and reconstruct boundaries between the colon wall and lumen in regions in which colonic residue is removed.
  • Various types of feature extraction methods that can be implemented for various medical applications and image domains are well known to those of ordinary skill in the art.
  • the extracted feature data can be transformed and fused with the original volumetric images (step 14) and the fused volumetric images can be rendered and displayed (step 15) in a manner that facilitates physician inspection of abnormalities.
  • any type of feature data that is relevant to the given clinical application can be fused with an original volumetric dataset.
  • a fusion process can be a transformation process that maps feature data into the intensity of the volumetric images. For instance, with virtual colonoscopy, the results of an automated polyp detection process can be fused with an original volume dataset such that potential polyps are rendered and displayed in a particular color.
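  • As a sketch of such a fusion-by-intensity transformation (the reserved intensity value and the binary detection mask are illustrative assumptions, not the patent's method): flagged voxels are remapped into a value range that the renderer's color map can paint in a distinct color.

      import numpy as np

      RESERVED_POLYP_VALUE = 3000  # assumed value above the normal CT intensity range

      def fuse_detection_mask(volume, polyp_mask):
          """Remap voxels flagged by polyp detection into a reserved intensity
          range so the volume renderer's color map shows them distinctly."""
          fused = volume.copy()
          fused[polyp_mask] = RESERVED_POLYP_VALUE
          return fused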
  • in MRI virtual cystoscopy images, a lesion that invades the wall of the organ will demonstrate a different texture as compared to the normal wall.
  • texture information can be extracted and overlaid on the endoluminal view to facilitate the early detection of lesions and cancer, especially when abnormalities in the organ wall are small and present a subtle shape deformation on the inner wall surface. If texture information is fused in the original volumetric images in an appropriate way, it is possible that lesion regions will be enhanced in the endoluminal view.
  • FIG. 1 provides a general framework for virtual endoscopy methods according to the invention.
  • systems and methods for laxative-free virtual colonoscopy will be discussed in detail, which are based on the exemplary general framework of FIG. 1, but nothing herein shall be construed as limiting the scope of the invention.
  • FIG. 2 is a flow diagram that illustrates methods for virtual colonoscopy according to exemplary embodiments of the invention.
  • Virtual colonoscopy methods according to the invention implement bowel preparation methods, which are based on diet management and contrast agent administration protocols (step 20).
  • exemplary embodiments of the invention include laxative-free bowel preparation methods, which are based on low-residue diets and administration of sufficient dosages of one or more types of contrast agents over a period of time prior to colon imaging, and which provide good residual stool and fluid tagging quality.
  • Exemplary bowel preparation methods according to the invention will be explained in detail below with reference to FIGs. 3A-3D and 4A-4C, for example.
  • the colon preparation process is followed by image acquisition.
  • a colon distention process is performed prior to scanning (step 21).
  • the patient's colon is expanded by forcing approximately 2 to 3 liters of room air or carbon dioxide into the patient's colon.
  • Other methods for distending a colon known to those of ordinary skill in the art may be implemented.
  • a region of interest of the individual, which includes the colon, is scanned using an imaging device to acquire image data of the colon (step 22), and the image data is processed to generate one or more volumetric image datasets (step 23).
  • Image acquisition can be implemented using one or more imaging modalities.
  • CT images of the colon can be acquired using a routine CT virtual colonoscopy scanning protocol.
  • a helical CT scanner can be configured to provide 3 to 5 mm collimation, 1:1 to 2:1 pitch, 120 kVp, and 100 mA. Such a scan configuration will generate 300 to 550 2D slice images for each scan series.
  • the patient can be scanned twice in different body positions, e.g., the supine position (face up) and the prone position (face down), to acquire two image datasets, which can be separately processed and subsequently fused for rendering and displaying.
  • multiple imaging modalities are used for acquiring images of the colon. If multiple modalities of images are available, they can be fused and displayed in a single view.
  • PET and CT images can be acquired using a dual-modality scanner.
  • the high-uptake focal spots in PET images can be fused into the 3D CT images for display, facilitating polyp detection and differentiation between benign and malignant lesions.
  • image acquisition could be performed in multiple time sections or phases.
  • the images can be acquired in both pre-contrast and post-contrast phases.
  • the pre-contrast images may serve as a mask for digital subtraction from the post-contrast images.
  • the subtracted images provide better contrast on contrast-enhanced tissue.
  • An MIP (maximum intensity projection) rendering mode can be applied to the subtracted images to provide clearer images as compared to the original images.
  • the organ can be scanned twice - once without contrast agent and once with contrast agent (e.g., for IV contrast agent injection applications).
  • the pre-contrast images can then be subtracted from the post-contrast images, such that the tagged region will only be enhanced in the subtraction images.
  • the subtracted images can then be fused with the original volume for 2D/3D rendering.
  • the subtracted images can be used for segmenting tagged regions.
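  • A minimal sketch of this subtraction workflow follows, assuming the pre- and post-contrast volumes are already registered voxel-for-voxel (registration itself is outside the sketch, and the segmentation threshold is illustrative).

      import numpy as np

      def subtract_phases(pre, post):
          """Digital subtraction: keep only the contrast enhancement."""
          diff = post.astype(np.float32) - pre.astype(np.float32)
          return np.clip(diff, 0.0, None)  # negative differences are treated as noise

      def tagged_region_mask(diff, threshold=200.0):
          """Binary mask of tagged (enhanced) regions in the subtraction volume."""
          return diff > threshold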
  • the acquired volumetric image dataset(s) is/are further processed using image processing methods to electronically "clean" tagged residue (residual stool and fluid) in the imaged colon (step 24).
  • methods for electronic cleaning include methods for detecting and removing tagged residue regions inside the colon lumen and reconstructing a boundary between the colon lumen and colon wall in regions of the image where tagged colonic residue is removed. Exemplary methods for electronic cleaning will be discussed below with reference to the illustrative diagrams of FIGs. 5-11, for example.
  • electronic cleansing methods according to the invention are robust and effective for processing laxative-free virtual colonoscopy CT images, for example.
  • electronically cleaned volumetric images can be rendered and displayed to enable examination of the imaged colon by a physician.
  • An electronic cleaning process according to the invention enables a physician or radiologist, for example, to inspect the 3D endoluminal colon surface where the colon surface is covered by residual fluid.
  • feature data of clinical significance which is obtained after image acquisition (step 23) (e.g., during the electronic cleaning process (step 24)) can be fused with the original or electronically cleaned volumetric images and displayed (step 27) to facilitate physician review of the cleansed volumetric images.
  • methods may be implemented to enhance volumetric images and facilitate lesion detection.
  • image enhancement may be implemented via image filtering, feature rendering, and fusion of multi-modal images.
  • with image filtering, various types of filters can be applied for noise reduction and edge enhancement.
  • with feature rendering, the intensity feature data or volume data of a lesion region could be extracted. Specific rendering modes could be selected to display the feature data. The feature-rendered image could be further superimposed on the original 3D volumetric image data to highlight the lesion when displayed.
  • volume rendering techniques can be extended using methods described herein for reconstructing boundary layers between the air and colon wall tissue.
  • boundary layer reconstruction methods include methods for intensity transformation of the original volumetric data, which transformation is based on information of both residue tagging and anatomy.
  • the cleaned volumetric images thus represent a fusion of the available information on both tagged residue and anatomy.
  • the electronically cleaned images can be processed using automatic polyp detection methods (step 25).
  • the result of polyp detection represents all suspicious regions or areas in the images that are automatically detected using suitable image processing methods.
  • the computer-assisted polyp detection (CAPD) results can be fused with the original electronically cleaned volumetric images (step 26).
  • the results of CAPD can be a list of labeled regions for potential polyps together with an indication of the likelihood that the labeled region is indeed a real polyp.
  • One example of a method for fusing the CAPD result with the original volumetric images is to color-code the suspicious regions in the volumetric images such that the results are rendered as colored regions when the volumetric images with fused feature data are displayed (step 27).
  • Another approach to fusing the CAPD results is to transform the intensity of a suspicious region into a certain range and adjust the color map for volume rendering so that the suspicious region is shown in a different color from surrounding normal tissue in a 3D endoluminal view.
  • the clean colon wall and the wall coated with tagged stool can be shown in different colors by adjusting volume rendering color maps, as is understood by those of ordinary skill in the art.
  • Bowel Preparation Methods: Exemplary bowel preparation methods for virtual colonoscopy applications according to the invention will now be discussed in detail with reference to the exemplary embodiments of FIGs. 3A-3D and FIGs. 4A-4C.
  • FIGs. 3A-3D are diagrams illustrating laxative/suppository-free bowel preparation methods for virtual colonoscopy according to exemplary embodiments of the invention, which are based on contrast administration and diet management protocols.
  • FIGs. 4A-4C are diagrams illustrating various types of foods/meals that can be used for the exemplary bowel preparation methods of FIGs. 3A-3D, according to exemplary embodiments of the invention.
  • FIGs. 3A and 3B illustrate bowel preparation methods in which an individual follows contrast administration and diet management protocols during a period of about 36 hours prior to image acquisition of the individual's colon.
  • FIG. 3A is a chart that depicts a method for administering doses of contrast agents (30) and (31) at various times (at mealtimes) during a three-day period (including the day on which the colon is to be scanned and the two days prior to the scan day).
  • the first contrast agent (30) comprises a dose of barium sulfate solution (2.1%, 250 ml), and the second contrast agent (31) comprises a dose of a non-ionic iodinated agent (e.g., diatrizoate meglumine and diatrizoate sodium solution, 367 mg/ml, 120 ml); these serve as colon residue contrast agents.
  • These contrast agents (30) and (31) are taken orally at mealtimes as shown in FIG. 3A.
  • the contrast agent solutions (30) and (31) are commercially available under different name brands.
  • the contrast solution (31) is available under the name brand GASTROVIEW.
  • GASTROVIEW can be mixed with a soft drink, such as soda, if that makes it easier for the patient to drink.
  • the various commercially available contrast agent solutions (30) and (31) provide similar tagging quality with the same dosage.
  • the invention is not limited to specific brand named contrast agents.
  • FIG. 3B is a chart that depicts different diet regimes (Diet 1, Diet 2, Diet 3) that an individual can follow for a four-day period (including the day on which the colon is to be scanned and the three days prior to the scan day), according to exemplary embodiments of the invention. These diets are preferably designed to be easy to follow using readily available food products.
  • a bowel preparation method according to an exemplary embodiment of the invention includes following one of the 3-day diets depicted in FIG. 3B combined with administration of contrast agents according to the method of FIG. 3A, for example.
  • the exemplary diets include a combination of meals based on a food list, a meal kit, liquid foods and water.
  • FIG. 4A is a table that lists various types of foods in different food groups which can be included in the food list.
  • FIG. 4B illustrates the food contents of a predefined meal kit, according to an exemplary embodiment of the invention.
  • FIG. 4C illustrates various liquid foods that may be included for exemplary liquid diets, according to exemplary embodiments of the invention
  • Diet 3 is a low-residue diet, which is similar to a normal diet. If a patient follows Diet 3, he/she is only required to avoid, for 3 days prior to imaging, several kinds of food that can possibly result in more colonic residue. Diet 2 allows the patient to follow the same food list on Days 3 and 2 prior to the day of the scan and requires the patient to eat food from a predetermined meal kit on Day 1 prior to the day of the scan. Diet 1 requires the patient to follow a liquid food diet on Day 1 prior to the scan. On the other 2 days, Diet 1 is the same as Diets 2 and 3.
  • the patient will eat small portions at each meal and avoid foods such as whole-grain flour or cereals, dried or raw fruits, raw or deep-fried vegetables, tough fibrous meats, caffeinated liquids, nuts and seeds, and yogurt.
  • FIG. 3C is a diagram that illustrates a bowel preparation method according to another exemplary embodiment of the invention.
  • the exemplary method of FIG. 3C includes a bowel preparation method in which 3 doses of the first contrast agent (30) and 2 doses of the second contrast agent (31) are administered in about a 24-hour period prior to image acquisition.
  • the exemplary method of FIG. 3C includes a diet regime in which the individual will eat food from the food list (FIG. 4A) for each meal on the 2nd day before the scan and follow a liquid diet in about a 24-hour period prior to image acquisition.
  • FIG. 3D is a diagram that illustrates a bowel preparation method according to yet another exemplary embodiment of the invention.
  • the exemplary method of FIG. 3D includes a bowel preparation method in which 3 doses of the first contrast agent (30) and 2 doses of the second contrast agent (31) are administered in about a 24-hour period prior to image acquisition.
  • the exemplary method of FIG. 3D includes a diet regime in which the individual will follow a liquid diet in about a 24-hour period prior to image acquisition.
  • the exemplary bowel preparation methods described above with reference to FIGs. 3 and 4 facilitate uniform tagging of colonic residue, mitigate stool sticking to the colon wall, and are sufficient for laxative/suppository-free preparation of the colon.
  • the exemplary bowel preparation methods may include administration of laxatives or suppositories, if desired.

Electronic Cleaning
  • exemplary embodiments of the invention for virtual colonoscopy implement methods for electronically cleaning volumetric images of a colon.
  • exemplary methods for electronically cleaning volumetric images of a colon include methods for automatically detecting and removing tagged residue inside the colonic lumen and automatically reconstructing the image of the boundaries between tagged residue and the colon wall into boundaries between air (lumen) and the colon wall.
  • colonic residue such as residual stool can attach around the colon wall.
  • in conventional virtual colonoscopy methods, laxatives/suppositories are used to prepare the colon for examination, which causes colonic residue to form puddles with a flat level due to gravity.
  • exemplary embodiments of the invention include methods that take into consideration the above factors to achieve better quality of electronic cleaning for disparate colonic residue tagging conditions and morphologies of colonic residue regions.
  • FIG. 5 is a flow diagram illustrating an electronic cleaning method according to an exemplary embodiment of the invention.
  • the exemplary method of FIG. 5 can be implemented for the electronic cleaning process (step 24) in FIG. 2.
  • an initial process includes processing the acquired volumetric image dataset using methods for segmenting or extracting the image data associated with the colon region in the volumetric dataset (step 50).
  • the colon region that is segmented comprises air (in the hollow lumen/cavity), colonic residue, and the colon wall.
  • any suitable method may be implemented for segmenting the colon region from other anatomical objects/structures within a volumetric image dataset.
  • Anatomical knowledge (e.g., the thickness of the colon wall is deemed to be within a certain range, e.g., less than 5 mm) can be used for segmentation.
  • once the colon lumen, including both air and residue, is found, the colon wall region could be determined by dilating the region of the colon lumen, as in the sketch below.
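  • A minimal sketch of that dilation step follows, assuming roughly 1 mm voxels so that five iterations approximate the 5 mm wall-thickness bound; the SciPy call and mask conventions are our assumptions.

      from scipy.ndimage import binary_dilation

      def wall_from_lumen(lumen_mask, wall_voxels=5):
          """Estimate the colon wall shell: dilate the lumen mask (air plus
          tagged residue) and remove the lumen itself."""
          dilated = binary_dilation(lumen_mask, iterations=wall_voxels)
          return dilated & ~lumen_mask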
  • a low-level voxel classification segmentation process can be implemented using the methods disclosed in the above-incorporated U.S. Patent No. 6,331,116. Briefly, a low-level voxel classification method can be applied to all regions of tagged residue. For each region of tagged residue, the image data is processed to group the voxels within the region into several clusters (e.g., around 10 clusters). The voxels with similar intensity properties are assigned to the same group (cluster).
  • voxels that are located in the center of a uniformly tagged region are grouped (clustered) together in the same cluster, and voxels that are located around an edge of the uniformly tagged region are grouped together in another cluster, different from the center cluster.
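  • The general grouping idea can be sketched with generic k-means over voxel intensities; this is an illustrative stand-in, not the specific low-level classification method of the above-incorporated patent.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      def cluster_region_intensities(volume, region_mask, k=10):
          """Label each voxel of a tagged region with one of k intensity clusters."""
          samples = volume[region_mask].astype(np.float64).reshape(-1, 1)
          _, labels = kmeans2(samples, k, minit='++')
          label_volume = np.full(volume.shape, -1, dtype=np.int32)
          label_volume[region_mask] = labels
          return label_volume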
  • the classification results basically represent intensity properties of the entire volumetric images.
  • the voxels in the same cluster might represent different clinical meanings due to being in different tagged regions with different tagging conditions.
  • a fold that is covered by tagged fluid might have an intensity similar to that of tagged fluid residue due to the partial volume effect.
  • electronic cleaning methods according to the invention are self-adaptive to variations in tagging conditions in different tagged regions.
  • the image data is further processed to determine the boundaries between the different tissues or materials (step 52). More specifically, for virtual colonoscopy, boundary layer extraction methods according to exemplary embodiments of the invention are applied for automatically detecting and extracting boundaries between air (in the lumen/cavity), the colon wall, and tagged residue in the segmented colon region. Exemplary methods for extracting boundary layers will be discussed below with reference to FIGs. 8, 9 and 10.
  • a boundary layer reconstruction process is independently implemented for each spatially separate region of tagged residue that is removed from the image data. Exemplary methods for reconstructing boundary layers will be discussed in further detail below with reference to FIG. 11.
  • FIGs. 6A and 6B are exemplary image diagrams that illustrate results of electronic cleaning in virtual colonoscopy images.
  • FIG. 6 A is an exemplary image of a portion of a colon showing a plurality of tagged residue regions (60, 61 and 62) in the colon lumen.
  • the tagged residue region (60) is a small tagged stool that is attached to the colon wall.
  • the tagged residue regions (61) and (62) are puddles that can have non-uniform tagging.
  • FIG. 6B is an exemplary image of FIG. 6A after applying an electronic cleaning process according to the invention, wherein the tagged residue regions (60, 61 and 62) are removed and shown as part of the lumen (air) region within the colon, and wherein the boundary layers between the tagged residue regions and surrounding tissue in the image of FIG. 6A are replaced by boundary layers between air and colon wall tissue in the image of FIG. 6B.
  • Exemplary electronic cleaning methods for virtual colonoscopy according to the invention will now be discussed in further detail with reference to FIGs. 7-11, for example.
  • exemplary methods for boundary layer extraction will be described with reference to FIGs. 7-10 and exemplary methods for boundary layer reconstruction will be described with reference to FIG. 11.
  • boundary extraction can be implemented using known techniques such as thresholding, edge detection, etc., which are suitable for the given modality for determining the boundaries.
  • boundary extraction in CT images may not be achieved using a simple thresholding approach due to non-uniform tagging and partial volume effects in the CT images.
  • colon wall tissue often has an intensity around [-100, 50] HU and well-tagged residue has an intensity around [200, 800] HU. If a thin haustral fold is submerged in a well-tagged residue puddle, the intensity of the colon wall of the thin fold may be greater than 600 HU.
  • FIG. 7 is an exemplary diagram of a CT image, wherein a region (70) of the image includes a portion (71) of a colon wall that has a similar intensity value as that of tagged residue portion (72) within the region (70).
  • the shadowed region (71), which is a thin haustral fold, is depicted as having an intensity of 719 HU, and the brighter region (72), which is a tagged puddle of residual fluid, is depicted as having an intensity of 823 HU.
  • edge detecting methods which are based on intensity difference rather than intensity range may be used for determining such boundaries.
  • conventional edge detection methods may not be applicable for CT images.
  • edge detection methods are not particularly accurate for processing volumetric CT images having anisotropic voxel size.
  • the images can be re-sampled or interpolated to an isotropic voxel size, but at the cost of extra computation time.
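  • A minimal sketch of such resampling with trilinear interpolation follows (the spacing values are illustrative).

      from scipy.ndimage import zoom

      def to_isotropic(volume, spacing=(3.0, 0.7, 0.7), target=0.7):
          """Resample an anisotropic volume (spacing given as (dz, dy, dx) in mm)
          to cubic voxels of the target size."""
          factors = tuple(s / target for s in spacing)
          return zoom(volume, factors, order=1)  # order=1: trilinear interpolation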
  • electronic cleaning methods according to the invention are preferably designed to extract a boundary layer (e.g., a plurality of voxels in thickness) rather than an edge curve (e.g., a single voxel in width).
  • the layer of the boundary represents the partial volume effect, which results from the physical limitations of the CT scanner.
  • it is well known that most edge detection methods are sensitive to noise, and the noise level of CT images is usually high, which is undesirable.
  • FIG. 8 is a flow diagram that illustrates boundary detection methods according to exemplary embodiments of the invention.
  • boundary detection methods according to the invention include methods for extracting a boundary layer between different tissues/materials using a maximal gradient approach, wherein the gradient is defined as a directional first-order derivative.
  • Exemplary boundary detection methods can be implemented with medical images in which the integer grids have different unit lengths along different coordinate directions due to the variation of scanning protocols. Indeed, anisotropic voxel size is considered when computing a discrete first-order derivative in the digital images.
  • exemplary boundary detection methods according to the invention take into consideration the partial volume effect in CT images, wherein the tissue boundary usually forms a thick layer with a certain range (as opposed to an edge or a sharp curved line). The intensity of voxels in a boundary layer changes from one tissue type to another.
  • the extracted boundary layers include more information with respect to tissue abnormalities or other tissue features, as compared to the information provided by sharp boundary lines.
  • FIG. 8 a flow diagram illustrates a boundary layer detection method according to an exemplary embodiment of the invention, which implements a maximal gradient process.
  • An initial step is to select n discrete directions in a 3-D image grid, which are to be used for computing gradients (step 80).
  • FIG. 9 is an exemplary diagram that illustrates an image grid coordinate system (95) and a plurality of selected directions (D1-D5).
  • the X, Y, Z directions (D1, D2 and D3) correspond to the orthogonal axial directions of the image grid coordinate system (95).
  • the directions D4 and D5 denote diagonal directions on the X-Y plane.
  • the selected directions will vary depending on the application and should uniformly cover all directions so as to render the process rotation independent (i.e., independent of differences in position (e.g., slight rotation) of the patient during image acquisition), while balancing against the cost of computation (e.g., selecting too many directions may be computationally expensive).
  • Each voxel of the imaged organ (e.g., the segmented colon region) is then processed using various methods described hereafter to identify voxels that are part of boundary layers.
  • the boundary layers within the imaged colon include air/colon wall, tagged residue/colon wall and air/tagged residue boundaries. More specifically, for a selected voxel (step 81), a first-order derivative is computed along each of the n directions for the selected voxel (step 82).
  • a GFV (gradient feature value) is then determined for the selected voxel from the n directional derivatives, e.g., as the derivative of greatest magnitude.
  • FIG. 9 is an exemplary diagram illustrating 5 directions (D1-D5) in a Cartesian coordinate system (95) in which directional gradients are computed for a given voxel.
  • FIG. 10 is an exemplary diagram that illustrates neighbor voxels that are used for computing selected directional gradients for a given voxel (Vcurrent).
  • X, Y, and Z are the orthogonal axial directions (D1), (D2) and (D3), respectively, of the image grid coordinate system (95).
  • the other directions (D4) and (D5) are diagonal directions on the X-Y plane.
  • the directional derivative computation is designed for spiral CT images, wherein the Z direction (D3) typically has a longer step length than the step lengths of the X and Y directions (D1 and D2).
  • the derivatives along diagonal directions in the X-Z and Y-Z planes should also be considered.
  • y1 = (x12 - 8x4 + 8x6 - x14) / (12 · δxy)
  • y2, y3, y4 = analogous central 5-point differences along the remaining non-Z directions, with neighbor voxels as shown in FIG. 10
  • y5 = (x16 - x15) / (2 · δz)     (1)
  • the term δ* denotes the step length along direction *, and such terms are scaled using scaling factors to account for the different lengths in the different directions for the non-isotropic voxels.
  • the GFV for a given voxel represents the direction and magnitude of the greatest change in intensity with respect to the given voxel.
  • the GFV is much less sensitive to image noise than the value of a single directional derivative.
  • the exemplary expression (1) comprises a central 5-point formula for non-Z directions, wherein the directional gradient is computed over a length of 5 voxels, and a central 3-point formula for the Z direction, wherein the directional gradient is computed over a length of 3 voxels. It is to be understood, however, that the methods of FIGs. 9 and 10 are merely exemplary; the invention is not limited to expression (1) and the neighborhood of FIG. 10, and other formulas can be readily envisioned by one of ordinary skill in the art for computing a first-order derivative taking into account voxel size for other applications. Referring again to FIG. 8, after the GFV of the current voxel is determined, the GFV is compared against a threshold GFV (step 85).
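  • For illustration, a minimal sketch of the GFV computation along directions D1-D5 follows; the voxel spacings are illustrative defaults and the helper names are ours, not the patent's. The voxel is assumed to lie at least two voxels from the volume border.

      import numpy as np

      def deriv_5pt(v, z, y, x, dy, dx, step):
          """Central 5-point first derivative along an in-plane direction (dy, dx)."""
          return (v[z, y - 2*dy, x - 2*dx] - 8*v[z, y - dy, x - dx]
                  + 8*v[z, y + dy, x + dx] - v[z, y + 2*dy, x + 2*dx]) / (12.0 * step)

      def gfv(v, z, y, x, d_xy=0.7, d_z=3.0):
          """Gradient feature value: the largest-magnitude directional derivative
          over D1-D5, with step lengths scaled for anisotropic voxels."""
          d_diag = np.hypot(d_xy, d_xy)  # step length along the in-plane diagonals
          derivs = (
              deriv_5pt(v, z, y, x, 0, 1, d_xy),                # D1: X
              deriv_5pt(v, z, y, x, 1, 0, d_xy),                # D2: Y
              (v[z + 1, y, x] - v[z - 1, y, x]) / (2.0 * d_z),  # D3: Z (3-point)
              deriv_5pt(v, z, y, x, 1, 1, d_diag),              # D4: X-Y diagonal
              deriv_5pt(v, z, y, x, 1, -1, d_diag),             # D5: other diagonal
          )
          return max(abs(d) for d in derivs)

      def is_boundary_voxel(v, z, y, x, threshold=56.0):
          """Threshold test; 56 is the exemplary CT threshold cited in the text."""
          return gfv(v, z, y, x) > threshold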
  • the boundary layer can be separated from homogeneous tissue regions, since such regions have a much lower GFV than tissue boundaries, where the intensity changes dramatically. If the voxel GFV exceeds the threshold (affirmative result in step 86), the voxel is tagged as a real boundary voxel (step 89).
  • the GFV threshold can be pre-set for a specific application. In one exemplary embodiment for virtual colonoscopy CT images, the threshold can be set to 56. Therefore, in FIG. 8, if the GFV exceeds this threshold, the voxel is deemed a boundary voxel (step 89); otherwise, the voxel can be deemed to be in the homogeneous tissue region.
  • the GFV threshold cannot be set too small.
  • the GFV threshold also cannot be too large, since the extracted boundary layer may then not enclose the entire colon lumen. This is the result of the non-uniform range of the partial volume effect along different directions. If the threshold for GFV thresholding is reduced, the chance of enclosing the colon lumen increases. However, reducing the GFV threshold increases the risk of overestimating the boundary layer.
  • the thresholded GFV result can be fused with the low-level voxel classification results (step 51, FIG. 5).
  • the information from both voxel classification and GFV are combined to achieve optimal estimation of the boundary layers.
  • a given voxel will be deemed a real boundary voxel (step 89) if the GFV of the voxel is greater than the preset GFV threshold (affirmative result in step 86), or if, based on the classification results, the voxel is determined to be located at the edge of a region where all voxels are in the same cluster (i.e., the voxel is part of a boundary layer cluster as determined by classifying the voxels of the tagged region) (affirmative result in step 87).
  • otherwise, the voxel will be tagged as a non-boundary voxel (step 88) (e.g., a tissue voxel or lumen voxel).
  • the exemplary methods are repeated for each remaining voxel until all the voxels in the desired image dataset have been processed (negative determination in step 90).
  • the result of the exemplary boundary layer extraction process of FIG. 8 is that each voxel is classified as a boundary or non-boundary voxel, which enables identification of boundary layers in the imaged colon region between air/tagged region, air/colon wall, and tagged region/colon wall.
  • FIG. 11 is a flow diagram that illustrates a boundary layer reconstruction method according to an exemplary embodiment of the invention. As noted above, boundary layer reconstruction is applied to each region of tagged residue.
  • each voxel that is tagged as a non-boundary voxel is deemed part of the tagged residue, and its intensity is set to an average air intensity Tair (step 101).
  • air has an intensity of -850 HU, for example.
  • the remaining voxels for the selected tagged residue region include boundary voxels.
  • there are three types of boundary voxels in the tagged residue region: (1) boundary voxels that are part of the air/tagged residue boundary; (2) boundary voxels that are part of the colon wall/residue boundary; and (3) boundary voxels that are in proximity to air, colon wall and tagged residue voxels.
  • the first type of boundary voxel is readily detected because such voxels are close to the air region and have a greater GFV (e.g., a GFV larger than 500), since the intensity change from air to the tagged region is the most dramatic in CT images.
  • a conditional region growing algorithm is applied to expand the region (the air/tagged residue boundary layer) into the tagged residue and air regions (step 102).
  • a 6-connected 3-D region growing process which may be implemented (step 102) is as follows:
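  • One plausible realization is sketched below; the queue-based traversal and the caller-supplied accept predicate (which encodes the growing conditions and bounds checks) are our assumptions, while Tair = -850 HU follows the example above.

      from collections import deque

      SIX_NEIGHBORS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1))

      def grow_region_6(volume, seeds, accept, t_air=-850):
          """Grow from seed voxels through 6-connected neighbors satisfying
          accept(z, y, x), setting every grown voxel to the air intensity."""
          seeds = [tuple(s) for s in seeds]
          queue = deque(seeds)
          visited = set(seeds)
          while queue:
              z, y, x = queue.popleft()
              volume[z, y, x] = t_air
              for dz, dy, dx in SIX_NEIGHBORS:
                  nbr = (z + dz, y + dy, x + dx)
                  if nbr not in visited and accept(*nbr):
                      visited.add(nbr)
                      queue.append(nbr)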
  • the exemplary region growing process expands the region of the air/tagged residue boundary layer (i.e., the partial volume layer between the tagged residue and the air lumen), and the intensities of voxels in the expanded region are set to Tair.
  • the expanded region may also cover part of the boundary layer having the third type of voxels described above.
  • the remaining boundary voxels are transformed into an air/colon wall boundary layer (step 103). More specifically, in one exemplary embodiment of the invention, an intensity transformation process is applied to the boundary voxels in which, e.g., Vf = Tair when Vc > Tave, where:
  • Tave denotes the average intensity of all boundary layer voxels of the second and third types;
  • Ttissue denotes the average intensity of the soft tissue around the colon lumen (e.g., -50 HU);
  • Vc denotes the intensity of the current boundary voxel; and
  • Vf denotes the transformed intensity value of the current voxel.
  • in another exemplary embodiment, an intensity transformation process having a penalty factor based on the GFV is applied to the boundary voxels, e.g., If (Vc > Tave) then Vf = Tair, where P(gf) is the penalty function and gf is the GFV value of the current voxel.
  • the range of the penalty function can be [0, B], where B > 1.
  • After applying an intensity transformation as above, electronic cleansing for the given tagged residue region is complete. If there are remaining tagged residue regions to process (affirmative determination in step 104), the above process (steps 101-103) is repeated for the next selected tagged residue region. The boundary layer reconstruction process terminates when all tagged regions have been processed.
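  • As an illustration, a minimal sketch of such a transformation follows; the Vc > Tave branch and the symbol roles follow the text above, while the else branch (a linear blend between Ttissue and Tair) and the specific penalty P(gf) are assumptions for illustration only.

      def transform_boundary_voxel(vc, gf, t_ave, t_air=-850.0, t_tissue=-50.0, b=2.0):
          """Map one remaining boundary voxel into the reconstructed air/wall layer."""
          if vc > t_ave:
              return t_air  # the mostly-residue side of the layer becomes air
          p = min(b, gf / 100.0)  # assumed penalty P(gf), clipped to [0, B] with B > 1
          frac = p * (vc - t_tissue) / max(t_ave - t_tissue, 1e-6)
          frac = max(0.0, min(1.0, frac))  # brighter voxels move toward air
          return t_tissue + frac * (t_air - t_tissue)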
  • electronic cleansing methods can be implemented with any modality of images having similar intensity characteristics.
  • a maximum directional gradient process as described herein can also be applied to any modality of medical images for extraction of tissue boundary layers with partial volume effects.
  • the bladder wall region could be extracted with the exemplary maximum gradient method described above.
  • a texture analysis can be applied to the bladder wall region.
  • the texture indexes associated with voxels can be mapped back to the original volumetric data and volume-rendered in the endoluminal view to facilitate detection of abnormalities in the bladder wall.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

Imaging systems and methods for processing and rendering volumetric images of a selected organ for virtual endoscopy applications are provided, which enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ. For example, methods are provided for preparing 3D volumetric images of colons for virtual endoscopy applications which do not require administration of oral laxatives or suppositories for patient bowel preparation. In particular, an imaging method which can be implemented for virtual colonoscopy applications includes a process of obtaining an image dataset comprising image data of a colon (step 11) that is prepared (step 10) to tag regions of colonic residue in a manner that enhances a contrast between tagged regions of colonic residue in a lumen of the colon and a colon wall; segmenting (step 12) a region of interest in the image dataset, the region of interest comprising the colon lumen, the colon wall, and regions of tagged residue in the colon lumen; electronically cleaning (step 13) the tagged residue in the colon lumen using gradient feature data obtained from the image dataset via a maximum directional gradient feature analysis; and rendering (step 14) a volumetric image comprising an endoluminal view at a region within the imaged colon.

Description

VIRTUAL ENDOSCOPY METHODS AND SYSTEMS

Cross-Reference to Related Application
This application claims priority to U.S. Provisional Application No. 60/510,237, filed on October 10, 2003, which is fully incorporated herein by reference.

Technical Field of the Invention
The present invention relates generally to virtual endoscopy systems and methods for medical diagnosis and evaluation of anatomical objects such as organs with hollow lumens or cavities. More specifically, the invention relates to 3D imaging systems and methods for processing and rendering volumetric images of an organ for virtual endoscopy applications that enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ (e.g., virtual colonoscopy to detect colonic polyps).
Background

In general, virtual endoscopy systems implement methods for processing 3D image datasets to enable examination and evaluation of organs with hollow lumens or cavities, such as colons, bladders, lungs, arteries, etc., and to enable virtual simulation of endoscopic examination of such organs. In general, virtual endoscopy procedures include an organ preparation process whereby an organ to be evaluated is prepared in a manner that enables the anatomical features of the organ (e.g., organ tissue) to be contrasted from surrounding anatomical objects or materials in subsequently acquired medical images, followed by image acquisition and image processing to construct 2D or 3D models of the organ from the acquired image data. Such 2D/3D models can be displayed in different rendering modes for inspecting for organ abnormalities. For example, virtual endoscopy applications provide 3D visualization (e.g., "fly-through" visualization) of the inner surface of the organ, which is referred to as the endoluminal view.
Virtual endoscopy is continuing to gain wider acceptance in the medical field as a non-invasive, patient-comfortable method for examining and evaluating organs. Virtual endoscopy will eventually eliminate the need for invasive screening/testing endoscopic procedures such as optical colonoscopies, which require long instruments (catheters/endoscopes) to be inserted into the patient. Such invasive procedures pose a risk of injury, including organ perforation, infection, hemorrhage, etc. Moreover, invasive endoscopic procedures can be highly uncomfortable and stressful to the patient.
Therefore, significant effort is being directed towards developing and improving virtual endoscopy procedures and applications. For instance, advances in image processing technology will enable accurate and rapid rendering and displaying of virtual endoscopy volumetric images. Moreover, improvements in the organ preparation process can provide more optimal imaging results. Indeed, with respect to virtual colonoscopy, for example, the existence of air, residual stool, and residual fluid in the human colon can make it difficult to detect abnormalities (e.g., polyps) related to the colon. Moreover, conventional methods in which laxatives or suppositories are used for bowel preparation are invasive and highly uncomfortable for patients. Thus, non-invasive, patient-comfortable bowel preparation methods for virtual colonoscopy are highly desirable.
Summary of the Invention

In general, exemplary embodiments of the invention include virtual endoscopy systems and methods for medical diagnosis and evaluation of anatomical objects such as organs with hollow lumens or cavities. More specifically, exemplary embodiments of the invention include 3D imaging systems and methods for processing and rendering volumetric images of an organ for virtual endoscopy applications that enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ (e.g., virtual colonoscopy to detect colonic polyps). For example, in one exemplary embodiment of the invention, an imaging method that can be implemented for virtual endoscopy applications includes a process of obtaining an image dataset of an organ, processing the acquired image dataset to obtain feature data, and rendering a multi-dimensional representation of the imaged organ using the obtained feature data, wherein processing includes obtaining image intensity feature data from the image dataset, processing the image intensity feature data to obtain gradient feature data representing intensity change along each of a plurality of directions in a region of interest in the acquired image dataset; and processing the gradient feature data to determine boundary layers between anatomical features of the imaged organ and surrounding objects or materials in the region of interest.
Exemplary embodiments of the invention further include virtual colonoscopy methods, as well as methods for preparing 3D volumetric images of colons for virtual endoscopy which do not require administration of oral laxatives or suppositories for patient bowel preparation. For example, in one exemplary embodiment of the invention, an imaging method which can be implemented for virtual colonoscopy applications includes a process of obtaining an image dataset comprising image data of a colon that is prepared to tag regions of colonic residue in a manner that enhances a contrast between tagged regions of colonic residue in a lumen of the colon and a colon wall; segmenting a region of interest in the image dataset, the region of interest comprising the colon lumen, the colon wall, and regions of tagged residue in the colon lumen; electronically cleaning the tagged residue in the colon lumen using gradient feature data obtained from the image dataset via a maximum directional gradient feature analysis; and rendering a volumetric image comprising an endoluminal view at a region within the imaged colon.
These and other exemplary embodiments, aspects, features and advantages of the present invention will become apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings.
Brief Description of the Drawings

FIG. 1 is a flow diagram illustrating a virtual endoscopy method according to an exemplary embodiment of the invention. FIG. 2 is a flow diagram illustrating virtual colonoscopy methods according to exemplary embodiments of the invention.
FIGs. 3A-3D are diagrams illustrating bowel preparation methods for virtual colonoscopy, according to exemplary embodiments of the invention.
FIGs. 4A-4C are diagrams illustrating various types of foods/meals that can be used for bowel preparation methods according to exemplary embodiments of the invention.
FIG. 5 is a flow diagram illustrating an electronic organ cleaning method for virtual endoscopy according to an exemplary embodiment of the invention.
FIGs. 6A and 6B are exemplary images illustrating results of an electronic organ cleaning process according to the invention. FIG. 7 is an exemplary image depicting a condition in which a portion of a colon wall has a similar intensity as that of a puddle of tagged residue.
FIG. 8 is a flow diagram illustrating a boundary extraction method according to an exemplary embodiment of the invention.
FIG. 9 is an exemplary diagram illustrating directional gradients that can be used for extracting boundary layers, according to an exemplary embodiment of the invention.
FIG. 10 is an exemplary diagram illustrating a method for computing directional gradients for extracting boundary layers, according to an exemplary embodiment of the invention.
FIG. 11 is a flow diagram illustrating a method for reconstructing boundary layers according to an exemplary embodiment of the invention. Detailed Description of Exemplary Embodiments
In general, exemplary embodiments of the invention as described in detail hereafter include organ preparation methods and image processing methods for virtual endoscopy applications. It is to be understood that exemplary imaging systems and methods according to the invention as described herein may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. In one exemplary embodiment of the invention, virtual endoscopy systems and methods described herein can be implemented in software comprising program instructions that are tangibly embodied on one or more program storage devices (e.g., hard disk, magnetic floppy disk, RAM, CD-ROM, DVD, ROM, flash memory, etc.), and executable by any device or machine comprising suitable architecture. It is to be further understood that because the exemplary imaging systems and methods depicted in the accompanying Figures can be implemented in software, system configurations and processing steps as described herein may differ depending upon the manner in which the application is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
FIG. 1 is a high-level flow diagram illustrating a method for virtual endoscopy according to an exemplary embodiment of the invention. In particular, FIG. 1 depicts a virtual endoscopy process including methods for preparing a target organ for imaging and methods for processing volumetric images of the organ to enable virtual endoscopic examination of the organ. It is to be appreciated that the exemplary method of FIG. 1 depicts a general framework for virtual endoscopy which can be implemented with various imaging modalities for virtually examining various types of objects such as organs with hollow lumens/cavities such as colons, tracheobronchial airways, bladders, and the like. Referring now to FIG. 1, an exemplary virtual endoscopy method includes an initial process of preparing an organ to be imaged (step 10). In general, organ preparation methods according to the invention are designed to enhance the contrast between anatomical features of an organ under evaluation and surrounding objects and material in subsequently acquired medical images of the organ. Moreover, organ preparation methods according to the invention are designed to be non-invasive and comfortable for the individual whose organ is to be virtually examined. An organ preparation process according to the invention will vary depending on factors such as, e.g., the type of organ to be imaged and the imaging modalities used for acquiring medical images of the organ. For instance, exemplary embodiments of the invention include bowel preparation methods for preparing a colon for virtual colonoscopy by an individual being administered contrast agent(s) and following a specific diet regime for a period of time prior to imaging the colon. More specifically, exemplary embodiments of the invention include laxative-free/suppository-free bowel preparation methods which use a combination of diet management and administration of contrast agents to effectively "tag" colonic residue (e.g., stool, fluid) that may be present in the lumen of the colon such that the colonic residue can be distinguished from surrounding tissues in subsequently acquired medical images of the colon.
The types of contrast agents and methods of administering contrast agents will vary depending on, e.g., the application and the organ under evaluation. For instance, contrast agents can be administered according to a particular protocol either orally, intravenously, or a combination thereof. The type of contrast agent is not limited to medicine or specific chemicals, and could be normal food or even natural water. For example, natural water can be utilized as the contrast agent for preparing a bladder for imaging. In such instance, organ preparation may include drinking a specified amount of water before image acquisition.
After completion of an organ preparation process, the organ will be imaged to acquire a 3D volumetric image dataset (step 11) using one or more imaging modalities that are suitable for the given application. For instance, imaging modalities that may be used for acquiring medical images of an organ include, for example, x-ray CT (Computed Tomography), MRI (Magnetic Resonance Imaging), US (ultrasound), PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computed Tomography). Depending on the imaging protocol, the medical images can be 3D volumetric image datasets that are directly acquired as a result of a scan. Alternatively, 3D volumetric image datasets can be generated by acquiring an image dataset comprising multiple 2D or "slice" images and then stacking and interpolating between the 2D images to produce a 3D volumetric image dataset. For example, volume rendering methods known to those of ordinary skill in the art can be implemented to combine adjacent 2D image planes (slices) including, for example, maximum intensity projection, minimum intensity projection, and surface rendering techniques in combination with voxel texture information, depth information, gradient shading, etc.
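By way of a concrete (and deliberately simplified) illustration, the following Python sketch stacks 2D slices into a volume and linearly interpolates along the scan axis toward isotropic voxels. The function name, the example spacings, and the use of NumPy/SciPy are assumptions made for illustration and are not part of the disclosed method.

    import numpy as np
    from scipy.ndimage import zoom

    def slices_to_volume(slices, spacing_xy=0.7, spacing_z=1.0):
        """Stack 2D axial slices into a 3D volume, then linearly interpolate
        along Z so the voxel spacing becomes roughly isotropic.
        spacing_xy / spacing_z are the in-plane and inter-slice spacings (mm)."""
        volume = np.stack(slices, axis=0)      # shape: (Z, Y, X)
        z_factor = spacing_z / spacing_xy      # e.g., 1.0 / 0.7
        return zoom(volume, (z_factor, 1.0, 1.0), order=1)  # order=1: linear

    # Toy usage: 30 simulated 64x64 slices at 0.7 x 0.7 x 1.0 mm voxels.
    slices = [np.zeros((64, 64), dtype=np.float32) for _ in range(30)]
    print(slices_to_volume(slices).shape)      # roughly (43, 64, 64)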
Next, the 3D volumetric images are processed using known techniques for segmenting a ROI (region of interest), e.g., the target organ, from the 3D image dataset (step 12). More specifically, any suitable automated or semi-automated segmentation method may be implemented to extract the image data volume that corresponds to the target organ from the original 3D volume space. For example, for virtual colonoscopy applications according to the invention, methods are implemented for segmenting a colon region from a 3D image dataset, wherein the colon region includes the colon wall, colon lumen, and tagged colonic residue in the colon. The segmentation methods will vary depending on the target organ of interest and the imaging modality, and generally include methods for segmenting features or anatomies of interest by reference to known or anticipated image characteristics, such as edges, identifiable structures, boundaries, changes or transitions in colors or intensities, changes or transitions in spectrographic information, a priori anatomical knowledge, etc. Various types of segmentation methods that can be implemented are well known to those of ordinary skill in the art, and a detailed discussion thereof is not necessary and is beyond the scope of the claimed inventions.
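The text deliberately leaves the choice of segmentation technique open. Purely as a toy sketch of the connected-component idea that many such approaches build on (the threshold, function name, and use of SciPy are assumptions; real colon segmentation must also remove lungs and bone and handle tagged residue, as described later):

    import numpy as np
    from scipy.ndimage import label

    def largest_air_component(volume_hu, air_threshold=-800):
        """Mark voxels darker than `air_threshold` as candidate lumen air,
        then keep only the largest connected component as the ROI seed."""
        candidates = volume_hu < air_threshold
        labeled, n = label(candidates)         # default 6-connectivity in 3D
        if n == 0:
            return np.zeros(volume_hu.shape, dtype=bool)
        sizes = np.bincount(labeled.ravel())
        sizes[0] = 0                           # label 0 is the background
        return labeled == sizes.argmax()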
The image data corresponding to the extracted ROI is processed using one or more feature analysis methods to extract feature data of relevant clinical significance with respect to the organ under investigation (step 13). It is to be appreciated that the feature analysis methods that are implemented for extracting relevant features, data, or image parameters from image data will vary depending on the type(s) of anatomical structures or organs under consideration and the imaging modality. For example, methods may be implemented for detecting potential anatomical abnormalities in the organ of interest or extracting boundaries or edges between different tissues/materials/structures in the imaged organ. For example, as explained below, methods according to the invention are provided for processing volumetric images of a colon to detect and remove tagged regions of colonic residue and reconstruct boundaries between the colon wall and lumen in regions in which colonic residue is removed. Various types of feature extraction methods that can be implemented for various medical applications and image domains are well known to those of ordinary skill in the art.

Depending on the application, the extracted feature data can be transformed and fused with the original volumetric images (step 14) and the fused volumetric images can be rendered and displayed (step 15) in a manner that facilitates physician inspection of abnormalities. In general, any type of feature data that is relevant to the given clinical application can be fused with an original volumetric dataset. A fusion process can be a transformation process that maps feature data into the intensity of the volumetric images. For instance, with virtual colonoscopy, the results of an automated polyp detection process can be fused with an original volume dataset such that potential polyps are rendered and displayed in a particular color. By way of further example, with MRI virtual cystoscopy images, a lesion that invades the wall of the organ will demonstrate a different texture compared to the normal wall. In such instance, texture information can be extracted and overlaid on the endoluminal view to facilitate the early detection of lesions and cancer, especially when abnormalities in the organ wall are small and present a subtle shape deformation on the inner wall surface. If texture information is fused into the original volumetric images in an appropriate way, it is possible that lesion regions will be enhanced in the endoluminal view.
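One way to realize such an intensity-mapping fusion is sketched below: voxels flagged by a feature-analysis step are remapped into a reserved intensity band so that a volume-rendering color map can key on that band. The band limits and names are assumptions for illustration; the text does not fix specific values.

    import numpy as np

    def fuse_feature_mask(volume_hu, feature_mask, band=(2000.0, 2100.0)):
        """Remap flagged voxels into a reserved intensity band, preserving
        their relative contrast, so a transfer function can color them."""
        fused = volume_hu.astype(np.float32).copy()
        sel = feature_mask.astype(bool)
        if sel.any():
            v = fused[sel]
            span = float(v.max() - v.min()) or 1.0  # guard divide-by-zero
            fused[sel] = band[0] + (v - v.min()) / span * (band[1] - band[0])
        return fused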
As noted above, the exemplary method of FIG. 1 provides a general framework for virtual endoscopy methods according to the invention. For purposes of illustration, systems and methods for laxative-free virtual colonoscopy will be discussed in detail, which are based on the exemplary general framework of FIG. 1, but nothing herein shall be construed as limiting the scope of the invention. For instance, FIG. 2 is a flow diagram that illustrates methods for virtual colonoscopy according to exemplary embodiments of the invention. Virtual colonoscopy methods according to the invention implement bowel preparation methods, which are based on diet management and contrast agent administration protocols (step 20). More specifically, exemplary embodiments of the invention include laxative-free bowel preparation methods, which are based on low-residue diets and administration of sufficient dosages of one or more types of contrast agents over a period of time prior to colon imaging, and which provide good residual stool and fluid tagging quality. Exemplary bowel preparation methods according to the invention will be explained in detail below with reference to FIGs. 3A-3D and 4A-4C, for example. The colon preparation process is followed by image acquisition. In particular, to facilitate scanning and examination of the colon, a colon distention process is performed prior to scanning (step 21). For example, in one exemplary embodiment of the invention, the patient's colon is expanded by forcing approximately 2 to 3 liters of room air or carbon dioxide into the patient's colon. Other methods for distending a colon known to those of ordinary skill in the art may be implemented.
Next, a region of interest of the individual, which includes the colon, is scanned using an imaging device to acquire image data of the colon (step 22) and the image data is processed to generate one or more volumetric image datasets (step 23). Image acquisition can be implemented using one or more imaging modalities. For instance, CT images of the colon can be acquired using a routine CT virtual colonoscopy scanning protocol. By way of specific example, a helical CT scanner can be configured to provide 3 to 5 mm collimation, 1:1 to 2:1 pitch, 120 kVp, and 100 mA. Such a scan configuration will generate 300 to 550 2D slice images for each scan series. In one embodiment, the patient can be scanned twice in different body positions, e.g., supine position (face up) and prone position (face down), to acquire two image datasets, which can be separately processed and subsequently fused for rendering and displaying. In other embodiments, multiple imaging modalities are used for acquiring images of the colon. If multiple image modalities are available, they can be fused and displayed in a single view. For virtual colonoscopy, PET and CT images can be acquired using a dual-modality scanner. The high-uptake focal spot in PET images can be fused and displayed in the 3D CT images for facilitating polyp detection and differentiating between benign and malignant lesions. Moreover, image acquisition could be performed in multiple time sections or phases. For example, in MRI based virtual colonoscopy, the images can be acquired in both pre-contrast and post-contrast phases. The pre-contrast images may serve as a mask for digital subtraction from the post-contrast images. The subtracted images provide better contrast on contrast-enhanced tissue. An MIP rendering mode can be applied to the subtracted images to provide clearer images as compared to the original images. More specifically, in other exemplary embodiments of the invention, to detect or segment tagged ROIs, the organ can be scanned twice - once without contrast agent and once with contrast agent (e.g., for IV contrast agent injection applications). The pre-contrast images can then be subtracted from the post-contrast images, such that the tagged region will only be enhanced in the subtraction images. The subtracted images can then be fused with the original volume for 2D/3D rendering. The subtracted images can be used for segmenting tagged regions. These methods can be applied with MRI-based virtual colonoscopy, for example.
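In its simplest registered-volume form, the pre-/post-contrast subtraction described above reduces to a voxelwise difference; the sketch below assumes the two volumes already share one grid (real pipelines need registration first), and the noise floor is a made-up value:

    import numpy as np

    def subtract_precontrast(post, pre, noise_floor=30.0):
        """Digital subtraction: structures common to both phases cancel,
        so only contrast-enhanced (tagged) regions survive."""
        diff = post.astype(np.float32) - pre.astype(np.float32)
        diff[diff < noise_floor] = 0.0         # suppress residual noise
        return diff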
The acquired volumetric image dataset(s) is/are further processed using image processing methods to electronically "clean" tagged residue (residual stool and fluid) in the imaged colon (step 24). In general, methods for electronic cleaning according to exemplary embodiments of the invention include methods for detecting and removing tagged residue regions inside the colon lumen and reconstructing a boundary between the colon lumen and colon wall in regions of the image where tagged colonic residue is removed. Exemplary methods for electronic cleaning will be discussed below with reference to the illustrative diagrams of FIGs. 5-11, for example. As explained below, electronic cleansing methods according to the invention are robust and effective for processing laxative-free virtual colonoscopy CT images, for example. In one exemplary embodiment of the invention, electronically cleaned volumetric images can be rendered and displayed to enable examination of the imaged colon by a physician. An electronic cleaning process according to the invention enables a physician or radiologist, for example, to inspect the 3D endoluminal colon surface where the colon surface is covered by residual fluid. Moreover, feature data of clinical significance which is obtained after image acquisition (step 23) (e.g., during the electronic cleaning process (step 24)) can be fused with the original or electronically cleaned volumetric images and displayed (step 27) to facilitate physician review of the cleansed volumetric images. In other applications, methods may be implemented to enhance volumetric images and facilitate lesion detection. For example, image enhancement may be implemented via image filtering, feature rendering, and fusion of multi-modal images. As for image filtering, various types of filters can be applied for noise reduction and edge enhancement. As for feature rendering, the intensity feature data or volume data of a lesion region could be extracted. Specific rendering modes could be selected to display the feature data. The feature rendered image could be further superimposed on the original 3D volumetric image data for highlighting the lesion when displayed.
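The text names filtering only generically; unsharp masking is one common combination of smoothing and edge enhancement and is sketched here with made-up parameter values:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(volume, sigma=1.0, amount=0.7):
        """Edge enhancement by amplifying the high-frequency residual left
        after Gaussian smoothing; sigma trades noise suppression against
        blurring, and amount scales the edge boost."""
        v = volume.astype(np.float32)
        smoothed = gaussian_filter(v, sigma=sigma)
        return v + amount * (v - smoothed)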
It is to be understood that volume rendering techniques can be extended using methods described herein for reconstructing boundary layers between the air and colon wall tissue. By reconstructing the air/wall layer from the original residue/wall layer, it is feasible to apply the same rendering color map to render the entire cleaned colon lumen. As explained below, boundary layer reconstruction methods according to the invention include methods for intensity transformation of the original volumetric data, which transformation is based on information of both residue tagging and anatomy. The result of the cleaned volumetric images is the fusion of available information of both tagged residue and anatomy.
In other exemplary embodiments of the invention as depicted in FIG. 2, the electronically cleaned images can be processed using automatic polyp detection methods (step 25). The result of polyp detection represents all suspicious regions or areas in the images that are automatically detected using suitable image processing methods. The computer-assisted polyp detection (CAPD) results can be fused with the original electronically cleaned volumetric images (step 26). For example, the results of CAPD can be a list of labeled regions for potential polyps together with an indication of the likelihood that the labeled region is indeed a real polyp. One example of a method to fuse the CAPD result with the original volumetric images is to color code the suspicious regions in the volumetric images such that the results are rendered as colored regions when the volumetric images with fused feature data are displayed (step 27). Another approach to fuse the CAPD results is to transform the intensity of the suspicious region into a certain range and adjust the color map for volume rendering to show the suspicious region in a different color from surrounding normal tissue in a 3D endoluminal view. For example, the clean colon wall and the wall coated with tagged stool can be shown in different colors by adjusting volume rendering color maps, as is understood by those of ordinary skill in the art. Bowel Preparation Methods Exemplary bowel preparation methods for virtual colonoscopy applications according to the invention will now be discussed in detail with reference to the exemplary embodiments of FIGs. 3A-3D and FIGs. 4A-4C. In particular, FIGs. 3A-3D are diagrams illustrating laxative/suppository free bowel preparation methods for virtual colonoscopy according to exemplary embodiments of the invention, which are based on contrast administration and diet management protocols. FIGs. 4A-4C are diagrams illustrating various types of foods/meals that can be used for the exemplary bowel preparation methods of FIGs. 3A-3D, according to exemplary embodiments of the invention.
More specifically, FIGs. 3A and 3B illustrate bowel preparation methods in which an individual follows contrast administration and diet management protocols during a period of about 36 hours prior to image acquisition of the individual's colon. In particular, FIG. 3A is a chart that depicts a method for administering doses of contrast agents (30) and (31) at various times (at mealtimes) during a three-day period (including the day on which the colon is to be scanned and the two days prior to the scan day). In one exemplary method, the first contrast agent (30) comprises a dose of barium sulfate solution (2.1%, 250 ml) and the second contrast agent (31) comprises a dose of a non-ionic iodinated agent (e.g. diatrizoate meglumine and diatrizoate sodium solution, 367 mg/ml, 120 ml) as colon residue contrast agents. These contrast agents (30) and (31) are taken orally at mealtimes as shown in FIG. 3A.
The contrast agent solutions (30) and (31) are commercially available under different brand names. For instance, the contrast solution (31) is available under the brand name GASTROVIEW. GASTROVIEW can be mixed with a soft drink, such as soda, if it is easier for the patient to drink. The various commercially available contrast agent solutions (30) and (31) provide similar tagging quality at the same dosage. Hence, the invention is not limited to specific brand-name contrast agents.
The above method describes an exemplary process in which a total of about 6 doses of contrast agents (30) and (31) are administered within about a 36 hour period prior to image acquisition, coupled with one of the exemplary diet regimes listed in FIG. 3B. In particular, FIG. 3B is a chart that depicts different diet regimes (Diet 1, Diet 2, Diet 3) that an individual can follow for a four-day period (including the day on which the colon is to be scanned and the three days prior to the scan day), according to exemplary embodiments of the invention. These diets are preferably designed to be easy to follow using readily available food products. A bowel preparation method according to an exemplary embodiment of the invention includes following one of the 3-day diets depicted in FIG. 3B combined with administration of contrast agents according to the method of FIG. 3A, for example.
As depicted in FIG. 3B, the exemplary diets include a combination of meals based on a food list, a meal kit, liquid foods and water. FIG. 4A is a table that lists various types of foods in different food groups which can be included in the food list. FIG. 4B illustrates the food contents of a predefined meal kit, according to an exemplary embodiment of the invention. FIG. 4C illustrates various liquid foods that may be included for exemplary liquid diets, according to exemplary embodiments of the invention.
In FIG. 3B, Diet 3 is a low-residue diet, which is similar to a normal diet. If a patient follows Diet 3, he/she is only required to avoid, for 3 days prior to imaging, several kinds of food that can result in more colonic residue. Diet 2 allows the patient to follow the same food list in Days 3 and 2 prior to the day of the scan and requires the patient to eat food from a predetermined meal kit in Day 1 prior to the day of the scan. Diet 1 requires the patient to follow a liquid food diet in Day 1 prior to the scan. In the other 2 days, Diet 1 is the same as Diets 2 and 3. For each of the exemplary diet regimes, the patient will eat small portions at each meal and avoid foods such as whole grain flour or cereals, dried or raw fruits, raw or deep fried vegetables, tough fibrous meats, caffeinated liquids, nuts and seeds, and yogurt.
FIG. 3C is a diagram that illustrates a bowel preparation method according to another exemplary embodiment of the invention. In general, the exemplary method of FIG. 3C includes a bowel preparation method in which 3 doses of the first contrast agent (30) and 2 doses of the second contrast agent (31) are administered in about a 24 hour period prior to image acquisition. Moreover, the exemplary method of FIG. 3C includes a diet regime in which the individual will eat food from the food list (FIG. 4A) for each meal on the 2nd day before the scan and follow a liquid diet in about a 24 hour period prior to image acquisition.
FIG. 3D is a diagram that illustrates a bowel preparation method according to yet another exemplary embodiment of the invention. In general, the exemplary method of FIG. 3D includes a bowel preparation method in which 3 doses of the first contrast agent (30) and 2 doses of the second contrast agent (31) are administered in about a 24 hour period prior to image acquisition. Moreover, the exemplary method of FIG. 3D includes a diet regime in which the individual will follow a liquid diet in about a 24 hour period prior to image acquisition. The exemplary bowel preparation methods described above with reference to FIGs. 3 and 4 facilitate uniform tagging of colonic residue and mitigate stool sticking to the colon wall, and are sufficient for laxative/suppository free preparation of the colon. However, in other embodiments of the invention, the exemplary bowel preparation methods may include administration of laxatives or suppositories, if desired. Electronic Cleaning
As noted above, exemplary embodiments of the invention for virtual colonoscopy implement methods for electronically cleaning volumetric images of a colon. In general, exemplary methods for electronically cleaning volumetric images of a colon include methods for automatically detecting and removing tagged residue inside the colonic lumen and automatically reconstructing the image of the boundaries between tagged residue and the colon wall into boundaries between air (lumen) and the colon wall.
There are various factors that are considered for implementing robust and effective electronic cleansing methods for laxative-free virtual colonoscopy images according to the invention. For instance, with CT volumetric images, separating tagged residue within the imaged colon from the backbone can be difficult due to the fact that the intensity range of tagged residue is similar to that of bone tissue in CT images, and that tagged residue in the rectum and sigmoid frequently touches the backbone in the supine position. Furthermore, depending on the efficacy of the organ preparation, the colonic residue may be tagged in a non-uniform manner such that a given parameter set may not be sufficient for accurately cleaning all tagged residue. Moreover, due to the partial volume effect with CT imaging, thin haustral fold tissue that may be submerged in tagged residue may have intensities that are similar to tagged residue.
Furthermore, for exemplary embodiments of the invention for virtual colonoscopy in which no laxative or suppository is applied for bowel preparation, colonic residue such as residual stool can attach to the colon wall. This is to be contrasted with conventional virtual colonoscopy methods in which laxatives/suppositories are used to prepare the colon for examination, which causes colonic residue to form puddles with a flat level due to gravity. As explained below, exemplary embodiments of the invention include methods that take into consideration the above factors to achieve better quality of electronic cleaning for disparate colonic residue tagging conditions and morphologies of colonic residue regions.
For example, FIG. 5 is a flow diagram illustrating an electronic cleaning method according to an exemplary embodiment of the invention. The exemplary method of FIG. 5 can be implemented for the electronic cleaning process (step 24) in FIG. 2. Referring now to FIG. 5, an initial process includes processing the acquired volumetric image dataset using methods for segmenting or extracting the image data associated with the colon region in the volumetric dataset (step 50). For virtual colonoscopy, the colon region that is segmented comprises air (in the hollow lumen/cavity), colonic residue, and the colon wall. It is to be understood that any suitable method may be implemented for segmenting the colon region from other anatomical objects/structures within a volumetric image dataset. For example, the methods disclosed in U.S. Patent No. 6,331,116 to Kaufman, et al., entitled "System and Method for Performing a Three-Dimensional Virtual Segmentation and Examination", U.S. Patent No. 6,343,936 to Kaufman, et al., entitled "System and Method for Performing a Three-Dimensional Virtual Examination, Navigation and Visualization", and U.S. Patent No. 6,514,082 to Kaufman, et al., entitled "System and Method for Performing a Three-Dimensional Examination With Collapse Correction", which are all incorporated herein by reference, can be implemented for segmenting the colon region. In CT images, the intensity of bone is similar to that of tagged residue and the lungs are also filled with air. In general, these patents disclose methods for detecting and removing regions of both lung and bone while keeping and labeling the region of colon lumen.
Anatomical knowledge (e.g., the thickness of the colon wall is deemed to be within a certain range, e.g., less than 5 mm) can be used for segmentation. When the colon lumen, including both air and residue, is found, the colon wall region can be determined by dilating the region of colon lumen.
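A minimal sketch of that dilation step, assuming a boolean lumen mask and the less-than-5-mm wall prior quoted above (the iteration heuristic is an assumption; a distance transform would respect anisotropic voxels more exactly):

    import numpy as np
    from scipy.ndimage import binary_dilation

    def wall_shell_from_lumen(lumen_mask, voxel_mm=(0.7, 0.7, 1.0), wall_mm=5.0):
        """Grow the lumen mask (air + residue) outward by roughly wall_mm
        and return only the shell, as an estimate of the colon wall."""
        iters = int(round(wall_mm / min(voxel_mm)))   # ~7 for 0.7 mm voxels
        grown = binary_dilation(lumen_mask, iterations=iters)
        return grown & ~lumen_mask                    # shell around the lumen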
After segmenting the colon region, methods are applied for automatically detecting and classifying tagged regions within the imaged colon (step 51). In one exemplary embodiment of the invention, a low-level voxel classification segmentation process can be implemented using the methods disclosed in the above-incorporated U.S. Patent No. 6,331,116. Briefly, a low-level voxel classification method can be applied to all regions of tagged residue. For each region of tagged residue, the image data is processed to group the voxels within the region into several clusters (e.g., around 10 clusters). The voxels with similar intensity properties are assigned to the same group (cluster). For example, voxels that are located in the center of a uniformly tagged region are grouped (clustered) together in the same cluster and voxels that are located around an edge of the uniformly tagged region are grouped together in another cluster that is different from that of the center cluster. The classification results basically represent intensity properties of the entire volumetric images. The voxels in the same cluster might represent different clinical meanings due to being in different tagged regions with different tagging conditions. For example, as noted above, for CT images, a fold that is covered by tagged fluid might have an intensity that is similar to that of tagged fluid residue due to the partial volume effect. However, electronic cleaning methods according to the invention are self-adaptive to variations in tagging conditions in different tagged regions.
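The incorporated patent's classifier uses more than raw intensity, but as a hedged stand-in, a one-dimensional k-means over the intensities of a tagged region yields the kind of roughly-10-cluster grouping described above (function name and parameters are illustrative):

    import numpy as np

    def cluster_by_intensity(intensities, k=10, iters=20, seed=0):
        """Minimal 1-D k-means: returns one cluster label per voxel plus
        the cluster centers. `intensities` is a flat array with >= k values."""
        rng = np.random.default_rng(seed)
        centers = rng.choice(intensities, size=k, replace=False).astype(np.float64)
        labels = np.zeros(len(intensities), dtype=np.int64)
        for _ in range(iters):
            # Assign each voxel to its nearest center, then re-estimate centers.
            labels = np.argmin(np.abs(intensities[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                members = intensities[labels == j]
                if members.size:
                    centers[j] = members.mean()
        return labels, centers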
Next, using the classification results, the image data is further processed to determine the boundaries between the different tissues or materials (step 52). More specifically, for virtual colonoscopy, boundary layer extraction methods according to exemplary embodiments of the invention are applied for automatically detecting and extracting boundaries between air (in the lumen/cavity), the colon wall, and tagged residue in the segmented colon region. Exemplary methods for extracting boundary layers will be discussed below with reference to FIGs. 8, 9 and 10, for example.
After extraction of boundary layers (step 52), methods are implemented for cleaning the images of tagged residue (step 53) by removing the tagged residue image data and reconstructing the boundary layers between colon wall and tagged residue into boundary layers between air and colon wall (step 54). In one exemplary embodiment of the invention, a boundary layer reconstruction process is independently implemented for each spatially separate region of tagged residue which is removed from the image data. Exemplary methods for reconstructing boundary layers will be discussed in further detail below with reference to FIG. 11, for example.
With such methods, the reconstruction method is the same for each tagged region of residue, but the parameters of the reconstruction method are adaptively adjusted to take into account the different tagging conditions of each residue region. FIGs. 6A and 6B are exemplary image diagrams that illustrate results of electronic cleaning in virtual colonoscopy images. FIG. 6A is an exemplary image of a portion of a colon showing a plurality of tagged residue regions (60, 61 and 62) in the colon lumen. The tagged residue region (60) is a small tagged stool that is attached to the colon wall. The tagged residue regions (61) and (62) are puddles that can have non-uniform tagging. FIG. 6B is an exemplary image of FIG. 6A after applying an electronic cleaning process according to the invention, wherein the tagged residue regions (60, 61 and 62) are removed and shown as part of the lumen (air) region within the colon and wherein the boundary layers between the tagged residue regions and surrounding tissue in the image of FIG. 6A are replaced by boundary layers between air and colon wall tissue in the image of FIG. 6B. Exemplary electronic cleaning methods for virtual colonoscopy according to the invention will now be discussed in further detail with reference to FIGs. 7-11, for example. In particular, exemplary methods for boundary layer extraction will be described with reference to FIGs. 7-10 and exemplary methods for boundary layer reconstruction will be described with reference to FIG. 11.
Depending on the imaging modality and the organ under examination, boundary extraction can be implemented using known techniques such as thresholding, edge detection, etc., which are suitable for the given modality for determining the boundaries. For CT volumetric colon images, however, certain factors are taken into consideration. For instance, although the intensities of air, colon wall, and tagged residue typically fall within different intensity ranges, accurate boundary extraction in CT images may not be achieved using a simple thresholding approach due to non-uniform tagging and partial volume effects in the CT images. For example, colon wall tissue often has an intensity around [-100, 50] HU and well-tagged residue has an intensity around [200, 800] HU. If a thin haustral fold is submerged in a well-tagged residue puddle, the intensity of the colon wall tissue of the thin fold may be greater than 600 HU.
By way of specific example, FIG. 7 is an exemplary diagram of a CT image, wherein a region (70) of the image includes a portion (71) of a colon wall that has a similar intensity value as that of the tagged residue portion (72) within the region (70). The shadowed region (71), which is a thin haustral fold, is depicted as having an intensity of 719 HU, and the brighter region (72), which is a tagged puddle of residual fluid, is depicted as having an intensity of 823 HU.
Moreover, edge detection methods, which are based on intensity difference rather than intensity range, may be used for determining such boundaries. However, conventional edge detection methods may not be applicable for CT images. First, edge detection methods are not particularly accurate for processing volumetric CT images having anisotropic voxel size. Of course, the images can be re-sampled or interpolated into isotropic voxel size, but at the cost of extra computation time. Second, electronic cleaning methods according to the invention are preferably designed to extract a boundary layer (e.g., a plurality of voxels in thickness) rather than an edge curve (e.g., a single voxel in thickness). The boundary layer represents the partial volume effect, which results from the physical limitations of the CT scanner. Third, it is well known that most edge detection methods are sensitive to noise, and the noise level of CT images is usually high, which is not desirable.
FIG. 8 is a flow diagram that illustrates boundary detection methods according to exemplary embodiments of the invention. In general, boundary detection methods according to the invention include methods for extracting a boundary layer between different tissues/materials using a maximal gradient approach, wherein the gradient is defined as a directional first-order derivative. Exemplary boundary detection methods can be implemented with medical images in which the integer grids have different unit lengths along different coordinate directions due to the variation of scanning protocols. Indeed, anisotropic voxel size is considered when computing a discrete first-order derivative in the digital images. Moreover, exemplary boundary detection methods according to the invention take into consideration the partial volume effect in CT images, wherein the tissue boundary usually forms a thick layer with a certain range (as opposed to an edge or a sharp curved line). The intensity of voxels in a boundary layer changes from one tissue type to another. The extracted boundary layers include more information with respect to tissue abnormalities or other tissue features, as compared to the information provided by sharp boundary lines.
Referring to FIG. 8, a flow diagram illustrates a boundary layer detection method according to an exemplary embodiment of the invention, which implements a maximal gradient process. An initial step is to select n discrete directions in a 3-D image grid, which are to be used for computing gradients (step 80). For instance, FIG. 9 is an exemplary diagram that illustrates an image grid coordinate system (95) and a plurality of selected directions (D1-D5). In the exemplary embodiment, the X, Y, Z directions (D1, D2 and D3) correspond to the orthogonal axial directions of the image grid coordinate system (95). The directions D4 and D5 denote diagonal directions on the X-Y plane. The selected directions will vary depending on the application and should uniformly cover all directions so as to render the process rotation-independent (independent of differences of position (e.g., slight rotation) of the patient during image acquisition) while balancing against the cost of computation (e.g., selection of too many directions may be computationally expensive).
Each voxel of the imaged organ (e.g., the segmented colon region) is then processed using various methods described hereafter to identify voxels that are part of boundary layers. In the exemplary embodiment, the boundary layers within the imaged colon include air/colon wall, tagged residue/colon wall and air/tagged residue boundaries. More specifically, for a selected voxel (step 81), a first-order derivative is computed along each of the n directions for the selected voxel (step 82). A GFV (gradient feature value) is determined for the selected voxel by determining the maximum of the absolute values of the first-order derivatives associated with the voxel (step 83) and then setting the determined maximum value as the GFV of that voxel (step 84). The above process will be described by way of example with reference to the exemplary diagrams of FIGs. 9 and 10. FIG. 9 is an exemplary diagram illustrating 5 directions (D1-D5) in a Cartesian coordinate system (95) in which directional gradients are computed for a given voxel, and FIG. 10 is an exemplary diagram that illustrates the neighbor voxels that are used for computing selected directional gradients for a given voxel (Vcurrent). For the following example, it is assumed that the imaged organ comprises a CT volumetric image wherein the voxel size is approximately X:Y:Z = 0.7:0.7:1.0 mm, and five directional derivatives are computed for each voxel in the image, in the directions (D1-D5) depicted in FIG. 9. X, Y, and Z are the orthogonal axial directions (D1), (D2) and (D3), respectively, of the image grid coordinate system (95). The other directions (D4) and (D5) are diagonal directions on the X-Y plane. In this example, the directional derivative computation is designed for spiral CT images, wherein the Z direction (D3) typically has a longer step length than the step lengths of the X and Y directions (D1 and D2). In other exemplary embodiments of the invention where, e.g., CT images are acquired using multi-slice CT with isotropic voxel size, the derivatives along diagonal directions in the X-Z and Y-Z planes should also be considered.
An exemplary process for determining the first-order derivatives will now be discussed with reference to expression (1) and FIG. 10. Assume x_i denotes the intensity of the i-th neighbor voxel and Y = {y_i : i = 1, 2, ..., 5} denotes the vector of directional derivatives. The following expression (1) illustrates an exemplary method for determining the elements y_i of Y for a given voxel, wherein the exemplary neighborhood depicted in FIG. 10 is used for computing the selected directional gradients for the given voxel (Vcurrent):

    y1 = | x12 - 8x4 + 8x6 - x14 | / (12 · λxy)
    y2 = | x9 - 8x1 + 8x3 - x11 | / (12 · λy)
    y3 = | x13 - 8x5 + 8x7 - x15 | / (12 · λxy)        (1)
    y4 = | x10 - 8x2 + 8x0 - x8 | / (12 · λx)
    y5 = | x16 - x17 | / (2 · λz)
In expression (1), the term λ* denotes the step length along direction "*", and these terms act as scaling factors to account for the different unit lengths along the different directions for non-isotropic voxels. In the above example, the GFV for a given voxel is determined as max{y_i : i = 1, 2, ..., 5}. In other words, the GFV for a given voxel represents the direction and magnitude of the greatest change in intensity with respect to the given voxel. The GFV is much less sensitive to image noise than the value of a single directional derivative. The exemplary expression (1) comprises a central 5-point formula for the non-Z directions, wherein the directional gradient is computed over a length of 5 voxels, and a central 3-point formula for the Z direction, wherein the directional gradient is computed over a length of 3 voxels. It is to be understood, however, that the methods of FIGs. 9 and 10 are merely exemplary, that the invention is not limited to expression (1) and the neighborhood of FIG. 10, and that other formulas for computing a first-order derivative taking into account voxel size can be readily envisioned by one of ordinary skill in the art for other applications.
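A vectorized sketch of this maximal-directional-gradient computation follows. It assumes the stencils as reconstructed in expression (1), takes the diagonal step length as the Euclidean combination of the in-plane steps (an assumption), and uses np.roll for neighbor access, which wraps at the volume border, so border voxels would need masking in practice:

    import numpy as np

    def gradient_feature_volume(vol, step=(0.7, 0.7, 1.0)):
        """GFV at every voxel: central 5-point first derivatives along X, Y
        and the two X-Y diagonals, a central 3-point derivative along Z,
        each scaled by its physical step length; the GFV is the maximum
        absolute directional derivative. `vol` is indexed [x, y, z]."""
        sx, sy, sz = step
        s_xy = np.hypot(sx, sy)                # assumed diagonal step length
        directions = [((1, 0, 0), sx, True), ((0, 1, 0), sy, True),
                      ((1, 1, 0), s_xy, True), ((1, -1, 0), s_xy, True),
                      ((0, 0, 1), sz, False)]
        gfv = np.zeros(vol.shape, dtype=np.float32)
        for (dx, dy, dz), h, five_point in directions:
            def shifted(k):
                # Neighbor k steps along the direction (wraps at borders).
                return np.roll(vol, (-k * dx, -k * dy, -k * dz), axis=(0, 1, 2))
            if five_point:  # |f(-2h) - 8f(-h) + 8f(+h) - f(+2h)| / (12h)
                d = np.abs(shifted(-2) - 8*shifted(-1) + 8*shifted(1) - shifted(2)) / (12 * h)
            else:           # |f(+h) - f(-h)| / (2h)
                d = np.abs(shifted(1) - shifted(-1)) / (2 * h)
            gfv = np.maximum(gfv, d)
        return gfv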
Referring again to FIG. 8, after the GFV of the current voxel is determined, the GFV is compared against a threshold GFV (step 85). By thresholding the GFV, the boundary layer can be separated from homogeneous tissue regions, since such regions have a much lower GFV than tissue boundaries, where the intensity changes dramatically. If the voxel GFV exceeds the threshold (affirmative result in step 86), the voxel is tagged as a real boundary voxel (step 89). The GFV threshold can be pre-set for a specific application. In one exemplary embodiment for virtual colonoscopy CT images, the threshold can be set to 56. Therefore, in FIG. 8, if the GFV of the selected voxel is greater than 56, the voxel is deemed a boundary voxel (step 89); otherwise, the voxel can be deemed to be in a homogeneous tissue region.

Furthermore, since the intensity of a tagged residue region having non-uniform tagging varies slowly, such intensity change does not contribute much to the GFV. As such, the GFV threshold cannot be set too small. However, the GFV threshold also cannot be too large, since the extracted boundary layer may then fail to enclose the entire colon lumen. This is the result of the non-uniform range of the partial volume effect along different directions. If the GFV threshold is reduced, the chance of enclosing the colon lumen increases; however, reducing the GFV threshold increases the risk of overestimating the boundary layer.
To render the boundary layer extraction method robust against sub-optimal GFV thresholding, the thresholded GFV result can be fused with the low-level voxel classification results (step 51, FIG. 5). In other words, the information from both voxel classification and GFV are combined to achieve an optimal estimation of the boundary layers. Referring to FIG. 8, a given voxel will be deemed a real boundary voxel (step 89) if the GFV of the voxel is greater than the preset GFV threshold (affirmative result in step 86), or if, based on the classification results, the voxel is determined to be located at the edge of a region where all voxels are in the same cluster (i.e., the voxel is part of a boundary layer cluster as determined by classifying the voxels of the tagged region) (affirmative result in step 87). If neither condition (step 86 or 87) is met, the voxel will be tagged as a non-boundary voxel (step 88) (e.g., tissue voxel or lumen voxel). The exemplary methods (steps 81-89) are repeated for each remaining voxel until all the voxels in the desired image dataset have been processed (negative determination in step 90). The result of the exemplary boundary layer extraction process of FIG. 8 is that each voxel is classified as a boundary or non-boundary voxel, which enables identification of boundary layers in the imaged colon region between air/tagged region, air/colon wall, and tagged region/colon wall.

As noted above, after extraction of the boundary layers, the image data is further processed to remove the tagged residue regions and to transform the boundary layer between colon wall and tagged residue into a boundary layer between air and colon wall. The reconstruction is implemented on each spatially separate region of tagged residue independently using the same reconstruction process, but with different parameters to adaptively adjust the process to each residue region to account for different tagging conditions. More specifically, FIG. 11 is a flow diagram that illustrates a boundary layer reconstruction method according to an exemplary embodiment of the invention. As noted above, boundary layer reconstruction is applied to each region of tagged residue. In particular, for a selected region of tagged residue (step 100), each voxel that is tagged as a non-boundary voxel is deemed part of the tagged residue, and the intensity of the voxel is set to an average air intensity Tair (step 101). For example, in CT images, air has an intensity of about -850 HU.
The remaining voxels of the selected tagged residue region include boundary voxels. In particular, there are three types of boundary voxels in the tagged residue region: (1) boundary voxels that are part of the air/tagged residue boundary; (2) boundary voxels that are part of the colon wall/residue boundary; and (3) boundary voxels that are in proximity to air, colon wall and tagged residue voxels. The first type of boundary voxel is readily detected because such voxels are close to the air region and have a greater GFV (e.g., a GFV larger than 500), since the intensity change from air to the tagged region is the most dramatic in CT images.
For the boundary layer region having the first type of boundary voxel, a conditional region growing algorithm is applied to expand the region (the air/tagged residue boundary layer) into the tagged residue and air regions (step 102). In one exemplary embodiment of the invention, a 6-connected 3-D region growing process that may be implemented (step 102) is as follows:
    Push the seed into the Queue
    While (Queue is not empty)
    {
        Get the front voxel in the Queue and denote its intensity as Vc.
        Label the current voxel as in the region.
        Check the 6 closest 3-D neighbors of the current voxel.
        {
            If the neighbor is not labeled as in the region, denote the neighbor intensity as Vn.
            {
                If ((Vc > 150 HU) && (Vc + 50 HU < Vn) && (Vn < 350 HU))
                {
                    Add this neighbor into the Queue
                }
                Else if ((Vc < -100 HU) && (Vc - 50 HU > Vn) && (Vn > Tair))
                {
                    Add this neighbor into the region
                }
            } // End of checking this neighbor
        } // End of checking all neighbors
        Pop the front voxel from the Queue.
    } // While()
The exemplary region growing process expands the region of the air/tagged residue boundary layer (i.e., the partial volume layer between the tagged residue and the air lumen), and the intensities of voxels in the expanded region are set to Tair. The expanded region may also cover part of the boundary layer having the third type of voxels as described above.
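A runnable breadth-first rendition of the conditional region growing above is sketched here. For simplicity it enqueues neighbors accepted by either branch, whereas the pseudocode only re-queues the residue-side branch; T_AIR is the -850 HU air value quoted in the text:

    import numpy as np
    from collections import deque

    T_AIR = -850.0   # average air intensity in HU (per the text)

    def grow_air_residue_boundary(vol, seed):
        """6-connected BFS from `seed` (an (x, y, z) index inside the
        air/tagged-residue boundary layer of HU volume `vol`): grows into
        the residue side (150..350 HU, rising) and toward air (below
        -100 HU, falling, but above T_AIR)."""
        in_region = np.zeros(vol.shape, dtype=bool)
        in_region[seed] = True
        queue = deque([seed])
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            p = queue.popleft()
            vc = vol[p]
            for d in offsets:
                n = (p[0]+d[0], p[1]+d[1], p[2]+d[2])
                if any(c < 0 or c >= s for c, s in zip(n, vol.shape)) or in_region[n]:
                    continue
                vn = vol[n]
                if (vc > 150 and vc + 50 < vn and vn < 350) or \
                   (vc < -100 and vc - 50 > vn and vn > T_AIR):
                    in_region[n] = True
                    queue.append(n)
        return in_region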
Next, the remaining boundary layers having the second and third type of boundary voxels are transformed into an air/colon wall boundary layer (step 103). More specifically, in one exemplary embodiment of the invention, an intensity transformation process is applied to the boundary voxels as follows:
    If (Vc > Tave)
    {
        Vf = Tair;
    }
    Else
    {
        Vf = ((Ttissue - Tair) / (Tave - Ttissue)) * (Tave - Vc) + Tair;
    },

where Tave denotes the average intensity of all boundary layer voxels of the second and third type, Ttissue denotes the average intensity of soft tissue around the colon lumen (e.g., -50 HU), Vc denotes the intensity of the current boundary voxel, and Vf denotes the transformed intensity value of the current voxel.
In another exemplary embodiment, an intensity transformation process having a penalty factor based on the GFV is applied to the boundary voxels as follows:

    If (Vc > Tave)
    {
        Vf = Tair;
    }
    Else
    {
        Vf = P(gf) * ((Ttissue - Tair) / (Tave - Ttissue)) * (Tave - Vc) + Tair;
    },

where P(gf) is the penalty function, and where gf is the GFV value of the current voxel. The range of the penalty function can be [0, B], where B > 1.
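A vectorized sketch of this transformation is given below. The linear ramp reproduces the formulas as reconstructed above, and the clipped-linear shape of P(gf) is purely an assumption (the text only constrains its range to [0, B] with B > 1):

    import numpy as np

    T_AIR, T_TISSUE = -850.0, -50.0   # HU values quoted in the text

    def reconstruct_boundary(vc, gfv, t_ave, penalty_cap=2.0):
        """Transform wall/residue boundary-voxel intensities `vc` (with
        gradient feature values `gfv`) into air/wall boundary intensities."""
        p = np.clip(gfv / 500.0, 0.0, penalty_cap)      # assumed P(gf)
        ramp = (T_TISSUE - T_AIR) / (t_ave - T_TISSUE)  # linear coefficient
        vf = p * ramp * (t_ave - vc) + T_AIR
        return np.where(vc > t_ave, T_AIR, vf)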
After applying an intensity transformation as above, electronic cleansing for the given tagged residue region is complete. If there are remaining tagged residue regions to process (affirmative determination in step 104), the above process (steps 101-103) is repeated for the next selected tagged residue region. The boundary layer reconstruction process terminates when all tagged regions have been processed.
It is to be appreciated that electronic cleansing methods according to exemplary embodiments of the invention can be implemented with any modality of images having similar intensity characteristics. Moreover, a maximum directional gradient process as described herein can also be applied to any modality of medical images for extraction of tissue boundary layers with partial volume effects. For example, with MRI virtual cystoscopy, the bladder wall region could be extracted with the exemplary maximum gradient method described above. Then, a texture analysis can be applied to the bladder wall region. The texture indexes associated with the voxels can be mapped back to the original volumetric data and volume rendered in the endoluminal view to facilitate detection of abnormalities in the bladder wall.
It is to be further appreciated that the methods and systems described herein could be applied to virtually examine animals or inanimate objects. In addition to medical applications, methods described herein can be used to detect the contents of sealed objects which cannot be opened, where some of the contents can be dyed to show contrast in the images. Although exemplary embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the invention described herein is not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention. All such changes and modifications are intended to be included within the scope of the invention as defined by the appended claims.

Claims

What Is Claimed Is:
1. An imaging method, comprising: obtaining an image dataset of an organ; processing the acquired image dataset to obtain feature data, wherein processing comprises: obtaining image intensity feature data from the image dataset; processing the image intensity feature data to obtain gradient feature data representing intensity change along each of a plurality of directions in a region of interest in the acquired image dataset; and processing the gradient feature data to determine boundary layers between anatomical features of the imaged organ and surrounding objects or materials in the region of interest; and rendering a multi-dimensional representation of the imaged organ using the obtained feature data.
2. The method of claim 1, wherein the image intensity feature data comprises intensity values for voxels of the image dataset, and wherein processing the image intensity feature data to obtain gradient feature data comprises determining a gradient feature value (GFV) for each voxel in the region of interest using a maximum directional gradient method.
3. The method of claim 2, wherein determining a GFV for a given voxel comprises: determining an intensity gradient value for each of the plurality of directions based on intensity values of neighboring voxels of the given voxel along each of the directions; determining a maximum absolute value of the intensity gradient values; and setting the GFV of the given voxel as the maximum absolute intensity gradient value.
4. The method of claim 3, wherein determining an intensity gradient value for each of the plurality of directions comprises applying a scaling factor for unit lengths along the directions based on the voxel dimensions.
5. The method of claim 2, wherein processing the gradient feature data to determine boundary layers comprises: comparing the GFV of each voxel to a threshold GFV; and labeling each voxel having a GFV that exceeds the threshold GFV as a boundary layer voxel.
6. The method of claim 5, further comprising applying a voxel classification method to cluster the voxels into a plurality of groups based on intensity values of the voxels.
7. The method of claim 6, further comprising using the classification results to determine if a given voxel is a boundary layer voxel when the GFV of the given voxel does not exceed the GFV threshold.
8. The method of claim 1, wherein rendering a multi-dimensional representation of the imaged organ comprises rendering a three-dimensional model of the organ that enables virtual endoscopic examination of the imaged organ.
9. The method of claim 1, wherein the organ comprises a colon and wherein the boundary layers include a lumen/colon wall boundary layer, a lumen/colonic residue boundary layer and a colonic residue/colon wall boundary layer.
10. The method of claim 1, wherein processing the acquired image dataset comprises segmenting the region of interest, wherein the region of interest comprises a cavity or lumen of the organ, tagged regions within the cavity or lumen, and a wall of the organ.
11. The method of claim 10, wherein processing the acquired image dataset further comprises: detecting tagged regions; and changing image intensity values of the detected tagged region to an intensity value of other tissue or material in the region of interest.
12. The method of claim 1, wherein processing the acquired image dataset further comprises reconstructing a first type of boundary layer into a second type of boundary layer.
13. The method of claim 1, wherein processing the acquired image dataset comprises automatically detecting organ abnormalities, if any, in the region of interest.
14. The method of claim 1 , wherein rendering a multi-dimensional representation of the imaged organ using the obtained feature data comprises rendering a volumetric image using image intensity feature data of an original volumetric image dataset.
15. The method of claim 14, further comprising: fusing additional feature data with the original volumetric image dataset to generate a fused volumetric image dataset; and rendering a multi-dimensional representation of the fused volumetric image dataset.
16. The method of claim 1 , wherein rendering is performed by volume rendering the image dataset.
17. The method of claim 16, wherein volume rendering comprises volume rendering an inner surface of the organ at a viewpoint from within a cavity or lumen of the organ.
18. The method of claim 17, wherein volume rendering the inner surface of the organ comprises adjusting a color map to display potential abnormalities on the inner surface with different color than normal regions of the inner surface of the organ.
19. The method of claim 1, wherein acquiring an image dataset comprises acquiring image data in a plurality of modalities and generating the image dataset by fusing image data of different modalities.
20. An imaging method, comprising: obtaining an image dataset comprising image data of a colon that is prepared to tag regions of colonic residue in a manner that enhances a contrast between tagged regions of colonic residue in a lumen of the colon and a colon wall; segmenting a region of interest in the image dataset, the region of interest comprising the colon lumen, the colon wall, and regions of tagged residue in the colon lumen; electronically cleaning the tagged residue in the colon lumen using gradient feature data obtained from the image dataset using a maximum directional gradient feature analysis; and rendering a volumetric image comprising an endoluminal view at a region within the imaged colon.
21. The method of claim 20, wherein segmenting comprises detecting and removing regions of lung and bone in the image dataset by applying a low-level voxel classification process and using a priori knowledge of anatomy.
22. The method of claim 20, wherein segmenting is performed using a low-level voxel classification process and a priori knowledge of anatomy.
23. The method of claim 20, wherein the maximum directional gradient feature analysis comprises: selecting several discrete directions in an image grid space of the image dataset; for each voxel in the segmented region of interest, determining a first-order derivative along each selected direction from the voxel, determining a maximum absolute value of the determined directional first-order derivatives; and setting a GFV (gradient feature value) of the voxel as the determined maximum absolute value.
24. The method of claim 23, wherein electronically cleaning comprises: extracting boundary layers in the region of interest using the determined GFV of the voxels in the region of interest, the boundary layers including a lumen/colon wall boundary layer, a lumen/tagged residue boundary layer, and a tagged residue/colon wall boundary layer; removing tagged residue regions; and transforming a tagged residue/colon wall boundary layer into a lumen/colon wall boundary layer.
25. The method of claim 24, wherein extracting boundary layers comprises: comparing the GFV of each voxel to a threshold GFV; and labeling each voxel having a GFV that exceeds the threshold GFV as a boundary layer voxel.
26. The method of claim 25, further comprising applying a voxel classification method to cluster the voxels into a plurality of groups based on intensity values of the voxels.
27. The method of claim 26, further comprising using the classification results to determine if a given voxel is a boundary layer voxel when the GFV of the given voxel does not exceed the GFV threshold.
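Claims 25-27 describe a two-tier labeling: a GFV threshold catches strong boundaries, and an intensity classification backstops weak ones. A hedged sketch follows; the equal-width intensity binning and the neighborhood test for claim 27 are illustrative assumptions, since the claims do not name a specific classifier.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def label_boundary_voxels(vol, gfv, gfv_threshold, n_groups=4):
    """Two-tier boundary labeling per claims 25-27 (classifier assumed)."""
    # Claim 25: a voxel whose GFV exceeds the threshold is a boundary voxel.
    boundary = gfv > gfv_threshold
    # Claim 26: cluster voxels into groups by intensity. Equal-width bins
    # stand in for whatever voxel classification the method actually uses.
    edges = np.linspace(float(vol.min()), float(vol.max()), n_groups + 1)
    groups = np.digitize(vol, edges[1:-1])
    # Claim 27 (one interpretation): a low-GFV voxel whose 3x3x3 neighborhood
    # spans more than one intensity group also sits on a boundary layer.
    mixed = maximum_filter(groups, size=3) != minimum_filter(groups, size=3)
    return boundary | mixed
```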
28. The method of claim 24, wherein removing tagged residue regions comprises transforming the intensity of voxels in the tagged residue regions into a preset intensity.
29. The method of claim 28, wherein the preset intensity comprises an average intensity of air voxels in the volumetric image.
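Claims 28-29 reduce to a masked assignment. A minimal sketch, assuming the residue and air masks are available from the earlier segmentation step (the mask names are assumptions):

```python
import numpy as np

def remove_tagged_residue(vol, residue_mask, air_mask):
    """Claims 28-29: overwrite tagged-residue voxels with the average air
    intensity, so cleansed regions render like open colon lumen."""
    cleaned = vol.astype(np.float32)          # astype returns a copy
    cleaned[residue_mask] = float(cleaned[air_mask].mean())
    return cleaned
```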
30. The method of claim 24, wherein transforming a tagged residue/colon wall boundary layer into a lumen/colon wall boundary layer is performed using a linear transformation process.
31. The method of claim 30, wherein the linear transformation process uses the GFV of each voxel as a penalty term.
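Claims 30-31 do not spell out the transformation; the sketch below is one plausible reading, not the patent's disclosed formula. Boundary intensities are linearly remapped from the residue-to-wall range onto the air-to-wall range, with the voxel's GFV subtracted as the penalty term; the mean-intensity parameters and the 0.5 weight are assumptions.

```python
import numpy as np

def restore_wall_boundary(vol, gfv, boundary_mask,
                          air_mean, wall_mean, residue_mean,
                          penalty_weight=0.5):
    """One plausible reading of claims 30-31: linearly remap residue/wall
    boundary intensities onto the air/wall range, penalized by GFV."""
    out = vol.astype(np.float32)
    i = out[boundary_mask]
    # t runs from 0 at pure residue intensity to 1 at pure wall intensity.
    t = (i - residue_mean) / (wall_mean - residue_mean)
    out[boundary_mask] = (air_mean + t * (wall_mean - air_mean)
                          - penalty_weight * gfv[boundary_mask])
    return out
```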
32. The method of claim 23, wherein determining a first-order derivative along each selected direction from the voxel is performed using equations for different ranges to reduce the effects of noise.
33. The method of claim 23, wherein determining a first-order derivative along each selected direction from the voxel is performed using equations that include scaling factors based on dimensions of the voxels.
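Claims 32-33 together suggest derivatives that are both noise-aware and anisotropy-aware. The sketch below divides each directional difference by the physical step length (CT slice spacing typically differs from in-plane voxel size) and zeroes out differences inside a small band; the 5 HU band and the example spacing are illustrative assumptions standing in for the patent's actual range-dependent equations.

```python
import numpy as np

def scaled_directional_derivative(vol, direction, spacing_mm):
    """Anisotropy-aware directional difference (claims 32-33 reading)."""
    dz, dy, dx = direction
    sz, sy, sx = spacing_mm        # physical voxel size, e.g. (3.0, 0.7, 0.7)
    # Scaling factor: the physical length of one step along this direction.
    step = np.sqrt((dz * sz) ** 2 + (dy * sy) ** 2 + (dx * sx) ** 2)
    v = vol.astype(np.float32)
    diff = (np.roll(v, (-dz, -dy, -dx), axis=(0, 1, 2))
            - np.roll(v, (dz, dy, dx), axis=(0, 1, 2)))
    # Noise handling: zero differences inside a small band (5 HU here is an
    # illustrative stand-in for the claim's range-dependent equations).
    diff[np.abs(diff) < 5.0] = 0.0
    return diff / (2.0 * step)
```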
34. The method of claim 20, wherein rendering a volumetric image comprises volume rendering an inner surface of the colon wall.
35. The method of claim 34, wherein volume rendering comprises rendering the inner surface of the colon lumen with a specific color map to display voxels of the colon wall in different colors based on intensity ranges.
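As an illustration of the intensity-banded color map of claim 35 (the HU bands and RGB values below are assumptions chosen for readability, not values disclosed by the patent):

```python
def wall_color(hu):
    """Intensity-banded color map in the spirit of claim 35; the HU bands
    and RGB values are illustrative assumptions."""
    if hu > 200:                    # residual tagging / enhanced intensity
        return (0.2, 0.6, 1.0)      # cool tint flags the suspicious band
    if hu > -300:                   # ordinary soft-tissue colon wall
        return (1.0, 0.8, 0.6)      # neutral flesh tone
    return (0.0, 0.0, 0.0)          # air/lumen, typically rendered transparent
```

A ray-casting renderer would evaluate such a transfer function per sample, so wall voxels in the enhanced range stand out in a distinct color during the endoluminal fly-through.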
36. The method of claim 20, further comprising automatically detecting potential polyps, if any, on the colon wall.
37. The method of claim 36, wherein rendering a volumetric image comprises fusing the polyp detection results into the electronically cleaned volumetric image.
38. The method of claim 20, wherein rendering a volumetric image comprises rendering an electronically cleaned volumetric image.
39. The method of claim 20, wherein obtaining the image dataset comprises acquiring image data using two or more imaging modalities and fusing the image data to form a volumetric image dataset.
40. A bowel preparation method for preparing a colon for imaging, comprising administering doses of contrast agents to an individual in combination with the individual following a low-residue diet regimen for a period of time prior to acquiring image data of the individual's colon.
41. The method of claim 40, wherein the contrast agents comprise a first contrast agent including a barium sulfate solution and a second contrast agent comprising a diatrizoate meglumine and diatrizoate sodium solution.
42. The method of claim 41, wherein administering doses of contrast agents comprises administering 4 doses of the first contrast agent and 2 doses of the second contrast agent in about a 36 hour period prior to image acquisition and wherein following a low-residue diet regimen comprises following a fluid diet in about a 24 hour period prior to image acquisition and eating meals including food items from a suggested food list for about 2 days before the 24 hour period.
43. The method of claim 41, wherein administering doses of contrast agents comprises administering 4 doses of the first contrast agent and 2 doses of the second contrast agent in about a 36 hour period prior to image acquisition and wherein following a low-residue diet regimen comprises eating food from a meal kit in about a 24 hour period prior to image acquisition and eating meals including food items from a suggested food list for about 2 days before the 24 hour period.
44. The method of claim 41, wherein administering doses of contrast agents comprises administering 4 doses of the first contrast agent and 2 doses of the second contrast agent in about a 36 hour period prior to image acquisition and wherein following a low-residue diet regimen comprises eating meals including food items from a suggested food list for about 3 days before image acquisition.
45. The method of claim 41, wherein administering doses of contrast agents comprises administering 3 doses of the first contrast agent and 2 doses of the second contrast agent in about a 24 hour period prior to image acquisition and wherein following a low-residue diet regimen comprises following a fluid diet in about a 24 hour period prior to image acquisition.
46. The method of claim 45, wherein following a low-residue diet regimen further comprises eating meals including food items from a suggested food list for about 1 day before the 24 hour period.
47. The method of claim 40, wherein the bowel preparation process is a laxative- and suppository-free bowel preparation process.
48. The method of claim 40, further comprising distending the colon prior to image acquisition.
49. The method of claim 48, wherein distending the colon comprises forcing air or carbon dioxide into the colon.
50. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform imaging method steps, the imaging method steps comprising: obtaining an image dataset of an organ; processing the acquired image dataset to obtain feature data, wherein processing comprises: obtaining image intensity feature data from the image dataset; processing the image intensity feature data to obtain gradient feature data representing intensity change along each of a plurality of directions in a region of interest in the acquired image dataset; and processing the gradient feature data to determine boundary layers between anatomical features of the imaged organ and surrounding objects or materials in the region of interest; and rendering a multi-dimensional representation of the imaged organ using the obtained feature data.
51. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform imaging method steps, the imaging method steps comprising: obtaining an image dataset comprising image data of a colon that is prepared to tag regions of colonic residue in a manner that enhances a contrast between tagged regions of colonic residue in a lumen of the colon and a colon wall; segmenting a region of interest in the image dataset, the region of interest comprising the colon lumen, the colon wall, and regions of tagged residue in the colon lumen; electronically cleaning the tagged residue in the colon lumen using gradient feature data obtained from the image dataset via a maximum directional gradient feature analysis; and rendering a volumetric image comprising an endoluminal view at a region within the imaged colon.
PCT/US2004/033888 2003-10-10 2004-10-12 Virtual endoscopy methods and systems WO2005036457A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP04795095A EP1716535A2 (en) 2003-10-10 2004-10-12 Virtual endoscopy methods and systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US51023703P 2003-10-10 2003-10-10
US60/510,237 2003-10-10

Publications (2)

Publication Number Publication Date
WO2005036457A2 (en) 2005-04-21
WO2005036457A3 (en) 2008-11-20

Family

ID=34435075

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/033888 WO2005036457A2 (en) 2003-10-10 2004-10-12 Virtual endoscopy methods and systems

Country Status (2)

Country Link
EP (1) EP1716535A2 (en)
WO (1) WO2005036457A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10937542B1 (en) 2020-09-18 2021-03-02 Vent Creativity Corporation Patient specific treatment planning
US11410769B1 (en) 2021-12-08 2022-08-09 Vent Creativity Corporation Tactile solutions integration for patient specific treatment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5920319A (en) * 1994-10-27 1999-07-06 Wake Forest University Automatic analysis in virtual endoscopy
US6377833B1 (en) * 1999-01-25 2002-04-23 Douglas Albert System and method for computer input of dynamic mental information
US6876760B1 (en) * 2000-12-04 2005-04-05 Cytokinetics, Inc. Classifying cells based on information contained in cell images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BARTZ ET AL.: 'Integration of Navigation, Optical and Virtual Endoscopy in Neurosurgery and Oral and Maxillofacial Surgery' COMPUTER AIDED MEDICINE 12 November 2001 - 13 November 2001, pages 1 - 3 *
BYEONG-SEOK ET AL.: 'An Efficient Virtual Endoscopy System for Stereotactic NeuroNavigation' APCMBE 1999, pages 1 - 3 *
CHEN D. ET AL.: 'A Novel Approach to Extract Colon Lumen from CT Images for Virtual Colonoscopy' IEEE TRANSACTIONS ON MEDICAL IMAGING vol. 19, no. 12, December 2000, pages 1220 - 1226, XP001003262 *
HARDERS ET AL.: 'Improving Virtual Endoscopy for the Intestinal Tract' MICCAI 2002, pages 20 - 27 *
KOLOSZAR ET AL.: 'Accelerating Virtual Endoscopy' WSCG vol. 11, no. 1, 03 February 2003 - 07 February 2003, pages 1 - 6 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9299156B2 (en) 2005-10-17 2016-03-29 The General Hospital Corporation Structure-analysis system, method, software arrangement and computer-accessible medium for digital cleansing of computed tomography colonography images
EP1955263A2 (en) * 2005-10-17 2008-08-13 The General Hospital Corporation Structure-analysis system, method, software arrangement and computer-accessible medium for digital cleansing of computed tomography colonography images
US20090304248A1 (en) * 2005-10-17 2009-12-10 Michael Zalis Structure-analysis system, method, software arrangement and computer-accessible medium for digital cleansing of computed tomography colonography images
EP1955263A4 (en) * 2005-10-17 2011-12-21 Gen Hospital Corp Structure-analysis system, method, software arrangement and computer-accessible medium for digital cleansing of computed tomography colonography images
EP2000075A1 (en) * 2006-03-28 2008-12-10 Olympus Medical Systems Corp. Medical image processing device and medical image processing method
EP2000075A4 (en) * 2006-03-28 2010-12-22 Olympus Medical Systems Corp Medical image processing device and medical image processing method
EP1884894A1 (en) * 2006-07-31 2008-02-06 iCad, Inc. Electronic subtraction of colonic fluid and rectal tube in computed colonography
US7840051B2 (en) 2006-11-22 2010-11-23 Toshiba Medical Visualization Systems Europe, Limited Medical image segmentation
US10084846B2 (en) 2009-05-28 2018-09-25 Ai Visualize, Inc. Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US10726955B2 (en) 2009-05-28 2020-07-28 Ai Visualize, Inc. Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US11676721B2 (en) 2009-05-28 2023-06-13 Ai Visualize, Inc. Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US8701167B2 (en) 2009-05-28 2014-04-15 Kjaya, Llc Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US10930397B2 (en) 2009-05-28 2021-02-23 Al Visualize, Inc. Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US9106609B2 (en) 2009-05-28 2015-08-11 Kovey Kovalan Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US9749389B2 (en) 2009-05-28 2017-08-29 Ai Visualize, Inc. Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
WO2010138691A3 (en) * 2009-05-28 2011-08-04 Kjaya, Llc Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US10497124B2 (en) 2013-03-15 2019-12-03 Kabushiki Kaisha Topcon Optic disc image segmentation method and apparatus
EP2779095A3 (en) * 2013-03-15 2016-03-02 Kabushiki Kaisha TOPCON Optic disc image segmentation method
US10586332B2 (en) 2013-05-02 2020-03-10 Smith & Nephew, Inc. Surface and image integration for model evaluation and landmark determination
US9747688B2 (en) 2013-05-02 2017-08-29 Smith & Nephew, Inc. Surface and image integration for model evaluation and landmark determination
US9454643B2 (en) 2013-05-02 2016-09-27 Smith & Nephew, Inc. Surface and image integration for model evaluation and landmark determination
US11145121B2 (en) 2013-05-02 2021-10-12 Smith & Nephew, Inc. Surface and image integration for model evaluation and landmark determination
US11704872B2 (en) 2013-05-02 2023-07-18 Smith & Nephew, Inc. Surface and image integration for model evaluation and landmark determination

Also Published As

Publication number Publication date
WO2005036457A3 (en) 2008-11-20
EP1716535A2 (en) 2006-11-02

Similar Documents

Publication Publication Date Title
Kostis et al. Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images
US7620442B2 (en) Colonography on an unprepared colon
US7809177B2 (en) Lumen tracking in computed tomographic images
Wyatt et al. Automatic segmentation of the colon for virtual colonoscopy
EP2279490B1 (en) Automatic detection and accurate segmentation of abdominal aortic aneurysm
US8515200B2 (en) System, software arrangement and method for segmenting an image
US8311301B2 (en) Segmenting an organ in a medical digital image
US20090226057A1 (en) Segmentation device and method
US20070263915A1 (en) System and method for segmenting structures in a series of images
CA2727736C (en) Method and system for determining an estimation of a topological support of a tubular structure and use thereof in virtual endoscopy
US20080317310A1 (en) Method and system for image processing and assessment of blockages of heart blood vessels
Sato et al. An automatic colon segmentation for 3D virtual colonoscopy
JP5854561B2 (en) Image data filtering method, image data filtering system, and use of image data in virtual endoscopy
WO2005036457A2 (en) Virtual endoscopy methods and systems
Alyassin et al. Semiautomatic bone removal technique from CT angiography data
Liang Virtual colonoscopy: An alternative approach to examination of the entire colon
Wyatt et al. Segmentation in virtual colonoscopy using a geometric deformable model
Taimouri et al. Colon segmentation for prepless virtual colonoscopy
Carston et al. CT colonography of the unprepared colon: an evaluation of electronic stool subtraction
Chen et al. Electronic colon cleansing by colonic material tagging and image segmentation for polyp detection: detection model and method evaluation
Huang et al. Volume visualization for improving CT lung nodule detection
CHENCHEN Virtual colon unfolding for polyp detection
KR101305678B1 (en) Method for automatic electronic colon cleansing in virtual colonoscopy and apparatus thereof
Schoonenberg et al. Effects of filtering on colorectal polyp detection in ultra low dose CT
Sadleir Enhanced computer assisted detection of polyps in CT colonography

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2004795095

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2004795095

Country of ref document: EP