EP1716535A2 - Methods and systems for virtual endoscopy - Google Patents

Methods and systems for virtual endoscopy

Info

Publication number
EP1716535A2
Authority
EP
European Patent Office
Prior art keywords
colon
image
voxel
residue
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04795095A
Other languages
English (en)
French (fr)
Inventor
Dongqing Chen
Sarang Lakare
Kevin Kreeger
Mark R. Wax
Arie E. Kaufman
Zhengrong Liang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Viatronix Inc
Original Assignee
Viatronix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Viatronix Inc
Publication of EP1716535A2
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30028 Colon; Small intestine
    • G06T2207/30032 Colon polyp

Definitions

  • the present invention relates generally to virtual endoscopy systems and methods for medical diagnosis and evaluation of anatomical objects such as organs with hollow lumens or cavities. More specifically, the invention relates to 3D imaging systems and methods for processing and rendering volumetric images of an organ for virtual endoscopy applications that enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ (e.g., virtual colonoscopy to detect colonic polyps).
  • virtual endoscopy systems implement methods for processing 3D image datasets to enable examination and evaluation of organs with hollow lumens or cavities, such as colons, bladders, lungs, arteries, etc., and to enable virtual simulation of endoscopic examination of such organs.
  • virtual endoscopy procedures include an organ preparation process whereby an organ to be evaluated is prepared in a manner to enable the anatomical features of the organ (e.g., organ tissue) to be contrasted from surrounding anatomical objects or materials in subsequently acquired medical images, followed by image acquisition and image processing to construct 2D or 3D models of the organ from the acquired image data.
  • Such 2D/3D models can be displayed in different rendering modes for inspecting for organ abnormalities.
  • virtual endoscopy applications provide 3D visualization (e.g., "fly-through" visualization) of the inner surface of the organ, which is referred to as endoluminal view.
  • Virtual endoscopy is continuing to gain wider acceptance in the medical field as a non-invasive, patient-comfortable method for examining and evaluating organs.
  • Virtual endoscopy will eventually eliminate the need for invasive screening/testing endoscopic procedures such as optical colonoscopies which require long instruments (catheters/endoscopes) to be inserted into the patient.
  • Such invasive procedures pose risks of injury including organ perforation, infection, hemorrhage, etc.
  • invasive endoscopic procedures can be highly uncomfortable and stressful to the patient.
  • exemplary embodiments of the invention include virtual endoscopy systems and methods for medical diagnosis and evaluation of anatomical objects such as organs with hollow lumens or cavities. More specifically, exemplary embodiments of the invention include 3D imaging systems and methods for processing and rendering volumetric images of an organ for virtual endoscopy applications that enable visualization and navigation of the imaged organ from within a lumen/cavity of the organ (e.g., virtual colonoscopy to detect colonic polyps).
  • an imaging method that can be implemented for virtual endoscopy applications includes a process of obtaining an image dataset of an organ, processing the acquired image dataset to obtain feature data, and rendering a multi-dimensional representation of the imaged organ using the obtained feature data, wherein processing includes obtaining image intensity feature data from the image dataset, processing the image intensity feature data to obtain gradient feature data representing intensity change along each of a plurality of directions in a region of interest in the acquired image dataset; and processing the gradient feature data to determine boundary layers between anatomical features of the imaged organ and surrounding objects or materials in the region of interest.
  • an imaging method which can be implemented for virtual colonoscopy applications includes a process of obtaining an image dataset comprising image data of a colon that is prepared to tag regions of colonic residue in a manner that enhances a contrast between tagged regions of colonic residue in a lumen of the colon and a colon wall; segmenting a region of interest in the image dataset, the region of interest comprising the colon lumen, the colon wall, and regions of tagged residue in the colon lumen; electronically cleaning the tagged residue in the colon lumen using a gradient feature data obtained from the image dataset using a maximum directional gradient feature analysis; and rendering a volumetric image comprising an endoluminal view at a region within the imaged colon.
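The maximum directional gradient analysis named above can be sketched as follows. This is a minimal illustration on a 2D slice, not the patent's actual operator: the 8-neighbor difference stencil, the function name, and the toy data are all assumptions made for the sketch.

```python
import numpy as np

def max_directional_gradient(img):
    """Return, per pixel, the largest absolute intensity change among
    the 8 neighbor directions (a sketch of directional-gradient
    analysis; the operator scale used in practice is not specified)."""
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    best = np.zeros((h, w))
    for dy, dx in dirs:
        # view of the padded image shifted by (dy, dx)
        shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        best = np.maximum(best, np.abs(shifted - img))
    return best

# toy "slice": dark lumen air (0) next to bright tagged residue (1000)
img = np.zeros((5, 5))
img[:, 3:] = 1000.0
g = max_directional_gradient(img)
# the gradient peaks along the air/residue interface (columns 2 and 3)
```

Thresholding such a gradient map is one simple way a boundary layer between the lumen, tagged residue, and wall could be localized.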
  • FIG. 1 is a flow diagram illustrating a virtual endoscopy method according to an exemplary embodiment of the invention.
  • FIG. 2 is a flow diagram illustrating virtual colonoscopy methods according to exemplary embodiments of the invention.
  • FIGs. 3A-3D are diagrams illustrating bowel preparation methods for virtual colonoscopy, according to exemplary embodiments of the invention.
  • FIGs. 4A-4C are diagrams illustrating various types of foods/meals that can be used for bowel preparation methods according to exemplary embodiments of the invention.
  • FIG. 5 is a flow diagram illustrating an electronic organ cleaning method for virtual endoscopy according to an exemplary embodiment of the invention.
  • FIGs. 6A and 6B are exemplary images illustrating results of an electronic organ cleaning process according to the invention.
  • FIG. 7 is an exemplary image depicting a condition in which a portion of a colon wall has a similar intensity as that of a puddle of tagged residue.
  • FIG. 8 is a flow diagram illustrating a boundary extraction method according to an exemplary embodiment of the invention.
  • FIG. 9 is an exemplary diagram illustrating directional gradients that can be used for extracting boundary layers, according to an exemplary embodiment of the invention.
  • FIG. 10 is an exemplary diagram illustrating a method for computing directional gradients for extracting boundary layers, according to an exemplary embodiment of the invention.
  • FIG. 11 is a flow diagram illustrating a method for reconstructing boundary layers according to an exemplary embodiment of the invention.

Detailed Description of Exemplary Embodiments
  • exemplary embodiments of the invention as described in detail hereafter include organ preparation methods and image processing methods for virtual endoscopy applications. It is to be understood that exemplary imaging systems and methods according to the invention as described herein may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof.
  • virtual endoscopy systems and methods described herein can be implemented in software comprising program instructions that are tangibly embodied on one or more program storage devices (e.g., hard disk, magnetic floppy disk, RAM, CD-ROM, DVD, ROM, flash memory, etc.), and executable by any device or machine comprising suitable architecture.
  • FIG. 1 is a high-level flow diagram illustrating a method for virtual endoscopy according to an exemplary embodiment of the invention.
  • FIG. 1 depicts a virtual endoscopy process including methods for preparing a target organ for imaging and methods for processing volumetric images of the organ to enable virtual endoscopic examination of the organ.
  • the exemplary method of FIG. 1 depicts a general framework for virtual endoscopy which can be implemented with various imaging modalities for virtually examining various types of objects such as organs with hollow lumens/cavities such as colons, tracheobronchial airways, bladders, and the like.
  • an exemplary virtual endoscopy method includes an initial process of preparing an organ to be imaged (step 10).
  • organ preparation methods according to the invention are designed to enhance the contrast between anatomical features of an organ under evaluation and surrounding objects and material in subsequently acquired medical images of the organ.
  • organ preparation methods according to the invention are designed to be non-invasive and comfortable for the individual whose organ is to be virtually examined.
  • An organ preparation process according to the invention will vary depending on factors such as, e.g., the type of organ to be imaged and the imaging modalities used for acquiring medical images of the organ.
  • exemplary embodiments of the invention include bowel preparation methods for preparing a colon for virtual colonoscopy by an individual being administered contrast agent(s) and following a specific diet regime for a period of time prior to imaging the colon. More specifically, exemplary embodiments of the invention include laxative-free/suppository-free bowel preparation methods which use a combination of diet management and administration of contrast agents to effectively "tag" colonic residue (e.g., stool, fluid) that may be present in the lumen of the colon such that the colonic residue can be distinguished from surrounding tissues in subsequently acquired medical images of the colon.
  • contrast agents and methods of administering contrast agents will vary depending on, e.g., the application and the organ under evaluation.
  • contrast agents can be administered according to a particular protocol either orally, intravenously, or a combination thereof.
  • the type of contrast agent is not limited to medicine or specific chemicals, and could be normal food or even natural water.
  • natural water can be utilized as the contrast agent for preparing a bladder for imaging.
  • organ preparation may include drinking a specified amount of water before image acquisition.
  • the organ will be imaged to acquire a 3D volumetric image dataset (step 11) using one or more imaging modalities that are suitable for the given application.
  • imaging modalities that may be used for acquiring medical images of an organ include, for example, x-ray CT (Computed Tomography), MRI (Magnetic Resonance Imaging), US (ultrasound), PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computed Tomography).
  • the medical images can be 3D volumetric image datasets that are directly acquired as a result of a scan.
  • 3D volumetric image datasets can be generated by acquiring an image dataset comprising multiple 2D or "slice" images and then stacking and interpolating between the 2D images to produce a 3D volumetric image dataset.
  • volume rendering methods known to those of ordinary skill in the art can be implemented to combine adjacent 2D image planes (slices) including, for example, maximum intensity projection, minimum intensity projection, and surface rendering techniques in combination with voxel texture information, depth information, gradient shading, etc.
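The stacking/interpolation and maximum intensity projection steps described above can be sketched as follows. This is a hedged illustration assuming uniform slice spacing and plain linear interpolation; real reconstruction pipelines account for slice thickness, spacing, and reconstruction kernels.

```python
import numpy as np

def stack_slices(slices, upsample=2):
    """Stack 2D slices into a 3D volume, linearly interpolating
    `upsample` extra steps between adjacent slices (illustrative)."""
    vol = [slices[0].astype(float)]
    for a, b in zip(slices[:-1], slices[1:]):
        for k in range(1, upsample + 1):
            t = k / upsample
            vol.append((1 - t) * a + t * b)
    return np.stack(vol)

def mip(volume, axis=0):
    """Maximum intensity projection along one axis of the volume."""
    return volume.max(axis=axis)

slices = [np.full((4, 4), 0.0), np.full((4, 4), 100.0)]
vol = stack_slices(slices, upsample=2)  # 3 planes: 0, 50, 100
proj = mip(vol)                         # a 4x4 plane of 100s
```

The same `mip` helper applies equally to a minimum intensity projection by swapping `max` for `min`.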
  • the 3D volumetric images are processed using known techniques for segmenting a region of interest (ROI) comprising the target organ from the 3D image dataset (step 12).
  • any suitable automated or semi-automated segmentation method may be implemented to extract the image data volume that corresponds to the target organ from the original 3D volume space.
  • methods are implemented for segmenting a colon region from a 3D image dataset, wherein the colon region includes the colon wall, colon lumen, and tagged colonic residue in the colon.
  • the segmentation methods will vary depending on the target organ of interest and imaging modality and generally include methods for segmenting features or anatomies of interest by reference to known or anticipated image characteristics, such as edges, identifiable structures, boundaries, changes or transitions in colors or intensities, changes or transitions in spectrographic information, a priori anatomical knowledge, etc.
  • Various types of segmentation methods that can be implemented are well known to those of ordinary skill in the art, and a detailed discussion thereof is unnecessary and beyond the scope of the claimed inventions.
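Although the source leaves the segmentation method open, a minimal intensity-threshold sketch conveys the idea of segmenting by transitions in intensities. The HU cutoffs and names below are illustrative assumptions, not values taken from the source.

```python
import numpy as np

# Illustrative HU thresholds -- actual values depend on the scanner,
# contrast agent, and tagging protocol; they are assumptions here.
AIR_MAX = -800      # lumen air is strongly negative in HU
TAGGED_MIN = 200    # well-tagged residue is strongly positive

def label_colon_voxels(hu):
    """Coarse three-way labeling of a CT sub-volume:
    0 = air (lumen), 1 = soft tissue / wall, 2 = tagged residue."""
    labels = np.ones_like(hu, dtype=np.uint8)  # default: soft tissue
    labels[hu < AIR_MAX] = 0
    labels[hu > TAGGED_MIN] = 2
    return labels

hu = np.array([[-1000.0, -900.0, 40.0, 600.0]])
labels = label_colon_voxels(hu)  # [[0, 0, 1, 2]]
```

A practical segmenter would combine such thresholds with connectivity and anatomical constraints rather than use them in isolation.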
  • the image data corresponding to the extracted ROI is processed using one or more feature analysis methods to extract feature data of relevant clinical significance with respect to the organ under investigation (step 13).
  • the feature analysis methods that are implemented for extracting relevant features, data, or image parameters from image data will vary depending on the type(s) of anatomical structures or organs under consideration and imaging modality. For example, methods may be implemented for detecting potential anatomical abnormalities in the organ of interest or extracting boundaries or edges between different tissues/materials/structures in the imaged organ. For example, as explained below, methods according to the invention are provided for processing volumetric images of a colon to detect and remove tagged regions of colonic residue and reconstruct boundaries between the colon wall and lumen in regions in which colonic residue is removed.
  • Various types of feature extraction methods that can be implemented for various medical applications and image domains are well known to those of ordinary skill in the art.
  • the extracted feature data can be transformed and fused with the original volumetric images (step 14) and the fused volumetric images can be rendered and displayed (step 15) in a manner that facilitates physician inspection of abnormalities.
  • any type of feature data that is relevant to the given clinical application can be fused with an original volumetric dataset.
  • a fusion process can be a transformation process that maps feature data into the intensity of the volumetric images. For instance, with virtual colonoscopy, the results of an automated polyp detection process can be fused with an original volume dataset such that potential polyps are rendered and displayed in a particular color.
  • in MRI virtual cystoscopy images, a lesion that invades the wall of the organ will demonstrate a different texture as compared to the normal wall.
  • texture information can be extracted and overlaid on the endoluminal view to facilitate the early detection of lesions and cancer, especially when abnormalities in the organ wall are small and present a subtle shape deformation on the inner wall surface. If texture information is fused in the original volumetric images in an appropriate way, it is possible that lesion regions will be enhanced in the endoluminal view.
  • FIG. 1 provides a general framework for virtual endoscopy methods according to the invention.
  • systems and methods for laxative-free virtual colonoscopy will be discussed in detail, which are based on the exemplary general framework of FIG. 1, but nothing herein shall be construed as limiting the scope of the invention.
  • FIG. 2 is a flow diagram that illustrates methods for virtual colonoscopy according to exemplary embodiments of the invention.
  • Virtual colonoscopy methods according to the invention implement bowel preparation methods, which are based on diet management and contrast agent administration protocols (step 20).
  • exemplary embodiments of the invention include laxative-free bowel preparation methods, which are based on low-residue diets and administration of sufficient dosages of one or more types of contrast agents over a period of time prior to colon imaging, and which provide good residual stool and fluid tagging quality.
  • Exemplary bowel preparation methods according to the invention will be explained in detail below with reference to FIGs. 3A-3D and 4A-4C, for example.
  • the colon preparation process is followed by image acquisition.
  • a colon distention process is performed prior to scanning (step 21).
  • the patient's colon is expanded by forcing approximately 2 to 3 liters of room air or carbon dioxide into the patient's colon.
  • Other methods for distending a colon known to those of ordinary skill in the art may be implemented.
  • a region of interest of the individual, which includes the colon, is scanned using an imaging device to acquire image data of the colon (step 22), and the image data is processed to generate one or more volumetric image datasets (step 23).
  • Image acquisition can be implemented using one or more imaging modalities.
  • CT images of the colon can be acquired using a routine CT virtual colonoscopy scanning protocol.
  • a helical CT scanner can be configured to provide 3 to 5 mm collimation, 1:1 to 2:1 pitch, 120 kVp, and 100 mA. Such a scan configuration will generate 300 to 550 2D slice images for each scan series.
  • the patient can be scanned twice in different body positions, e.g., supine position (face up) and prone position (face down), to acquire two image datasets, which can be separately processed and subsequently fused for rendering and display.
  • multiple imaging modalities are used for acquiring images of the colon. If images from multiple modalities are available, they can be fused and displayed in a single view.
  • PET and CT images can be acquired using a dual-modality scanner.
  • the high-uptake focal spot in PET images can be fused to be displayed in the 3D CT images for facilitating polyp detection and differentiating between benign and malignant lesions.
  • image acquisition could be performed in multiple time sections or phases.
  • the images can be acquired in both pre-contrast and post-contrast phases.
  • the pre-contrast images may serve as a mask for digital subtraction from the post-contrast images.
  • the subtracted images provide better contrast on contrast-enhanced tissue.
  • An MIP rendering mode can be applied to the subtracted images to provide clearer images as compared to the original images.
  • the organ can be scanned twice, once without contrast agent and once with contrast agent (e.g., for IV contrast agent injection applications).
  • the pre-contrast images can then be subtracted from the post-contrast images, such that the tagged region will only be enhanced in the subtraction images.
  • the subtracted images can then be fused with the original volume for 2D/3D rendering.
  • the subtracted images can be used for segmenting tagged regions.
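The pre-/post-contrast digital subtraction described above can be sketched as follows. It assumes the two volumes are already spatially registered; the function name and clipping floor are illustrative choices, not part of the source.

```python
import numpy as np

def digital_subtraction(pre, post, floor=0.0):
    """Subtract a registered pre-contrast volume from the
    post-contrast volume: only contrast-enhanced (tagged) voxels
    remain strongly positive in the result."""
    diff = post.astype(float) - pre.astype(float)
    return np.maximum(diff, floor)  # clip un-enhanced noise at the floor

pre = np.array([[0.0, 40.0, 40.0]])
post = np.array([[0.0, 40.0, 640.0]])  # last voxel enhanced by contrast
sub = digital_subtraction(pre, post)   # [[0, 0, 600]]
```

The subtraction image can then be thresholded to segment the tagged regions, or fused back into the original volume for rendering, as the surrounding text describes.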
  • the acquired volumetric image dataset(s) is/are further processed using image processing methods to electronically "clean" tagged residue (residual stool and fluid) in the imaged colon (step 24).
  • methods for electronic cleaning include methods for detecting and removing tagged residue regions inside the colon lumen and reconstructing a boundary between the colon lumen and colon wall in regions of the image where tagged colonic residue is removed. Exemplary methods for electronic cleaning will be discussed below with reference to the illustrative diagrams of FIGs. 5-11, for example.
  • electronic cleansing methods according to the invention are robust and effective for processing laxative-free virtual colonoscopy CT images, for example.
  • electronically cleaned volumetric images can be rendered and displayed to enable examination of the imaged colon by a physician.
  • An electronic cleaning process according to the invention enables a physician or radiologist, for example, to inspect the 3D endoluminal colon surface where the colon surface is covered by residual fluid.
  • feature data of clinical significance which is obtained after image acquisition (step 23) (e.g., during the electronic cleaning process (step 24)) can be fused with the original or electronically cleaned volumetric images and displayed (step 27) to facilitate physician review of the cleansed volumetric images.
  • methods may be implemented to enhance volumetric images and facilitate lesion detection.
  • image enhancement may be implemented via image filtering, feature rendering, and fusion of multi-modal images.
  • For image filtering, various types of filters can be applied for noise reduction and edge enhancement.
  • For feature rendering, the intensity feature data or volume data of a lesion region can be extracted. Specific rendering modes can be selected to display the feature data. The feature-rendered image can be further superimposed on the original 3D volumetric image data to highlight the lesion when displayed.
  • volume rendering techniques can be extended using methods described herein for reconstructing boundary layers between the air and colon wall tissue.
  • boundary layer reconstruction methods include methods for intensity transformation of the original volumetric data, which transformation is based on information of both residue tagging and anatomy.
  • the cleaned volumetric images are thus a fusion of the available information on both tagged residue and anatomy.
  • the electronically cleaned images can be processed using automatic polyp detection methods (step 25).
  • the result of polyp detection represents all suspicious regions or areas in the images that are automatically detected using suitable image processing methods.
  • the computer-assisted polyp detection (CAPD) results can be fused with the original electronically cleaned volumetric images (step 26).
  • the results of CAPD can be a list of labeled regions for potential polyps together with an indication of the likelihood that the labeled region is indeed a real polyp.
  • One example of a method to fuse the CAPD results with the original volumetric images is to color-code the suspicious regions in the volumetric images such that the results are rendered as colored regions when the volumetric images with fused feature data are displayed (step 27).
  • Another approach to fusing the CAPD results is to transform the intensities of a suspicious region into a certain range and adjust the color map for volume rendering so that the suspicious region is shown in a different color from the surrounding normal tissue in a 3D endoluminal view.
  • the clean colon wall and the wall coated with tagged stool can be shown in different colors by adjusting volume rendering color maps, as is understood by those of ordinary skill in the art.
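One way to realize the intensity-transform approach to fusing CAPD results might look like the following sketch. The reserved intensity range, function name, and likelihood encoding are assumptions for illustration, not values from the source.

```python
import numpy as np

# Hypothetical intensity range reserved for suspicious regions; the
# renderer's color map would map this range to a distinct color.
SUSPICIOUS_LO, SUSPICIOUS_HI = 3000.0, 3100.0

def encode_capd(volume, suspicious_mask, likelihood):
    """Shift suspicious voxels into a reserved intensity range so a
    volume-rendering color map can display them in a distinct color.
    `likelihood` in [0, 1] sets where in the range they land."""
    out = volume.astype(float).copy()
    span = SUSPICIOUS_HI - SUSPICIOUS_LO
    out[suspicious_mask] = SUSPICIOUS_LO + likelihood * span
    return out

vol = np.zeros((2, 2))
mask = np.array([[True, False], [False, False]])
fused = encode_capd(vol, mask, likelihood=0.5)  # marked voxel -> 3050
```

Encoding the detection likelihood into the reserved range lets a single color-map edit convey both the location of a candidate polyp and how suspicious it is.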
Bowel Preparation Methods

  • Exemplary bowel preparation methods for virtual colonoscopy applications according to the invention will now be discussed in detail with reference to the exemplary embodiments of FIGs. 3A-3D and 4A-4C.
  • In particular, FIGs. 3A-3D are diagrams illustrating laxative/suppository-free bowel preparation methods for virtual colonoscopy according to exemplary embodiments of the invention, which are based on contrast administration and diet management protocols.
  • FIGs. 4A-4C are diagrams illustrating various types of foods/meals that can be used for the exemplary bowel preparation methods of FIGs. 3A-3D, according to exemplary embodiments of the invention.
  • FIGs. 3A and 3B illustrate bowel preparation methods in which an individual follows contrast administration and diet management protocols during a period of about 36 hours prior to image acquisition of the individual's colon.
  • FIG. 3A is a chart that depicts a method for administering doses of contrast agents (30) and (31) at various times (at mealtimes) during a three-day period (including the day on which the colon is to be scanned and the two days prior to the scan day).
  • the first contrast agent (30) comprises a dose of barium sulfate solution (2.1%, 250 ml)
  • the second contrast agent (31) comprises a dose of a non-ionic iodinated agent (e.g. diatrizoate meglumine and diatrizoate sodium solution, 367 mg/ml, 120 ml) as colon residue contrast agents.
  • These contrast agents (30) and (31) are taken orally at mealtimes as shown in FIG. 3A.
  • the contrast agent solutions (30) and (31) are commercially available under different name brands.
  • the contrast solution (31) is available under the name brand GASTROVIEW.
  • GASTROVIEW can be mixed with a soft drink, such as soda, if that makes it easier for the patient to drink.
  • the various commercially available contrast agent solutions (30) and (31) provide similar tagging quality with the same dosage.
  • the invention is not limited to specific brand named contrast agents.
  • FIG. 3B is a chart that depicts different diet regimes (Diet 1, Diet 2, Diet 3) that an individual can follow for a four-day period (including the day on which the colon is to be scanned and the three days prior to the scan day), according to exemplary embodiments of the invention. These diets are preferably designed to be easy to follow using readily available food products.
  • a bowel preparation method according to an exemplary embodiment of the invention includes following one of the 3-day diets depicted in FIG. 3B combined with administration of contrast agents according to the method of FIG. 3A, for example.
  • the exemplary diets include a combination of meals based on a food list, a meal kit, liquid foods and water.
  • FIG. 4A is a table that lists various types of foods in different food groups which can be included in the food list.
  • FIG. 4B illustrates the food contents of a predefined meal kit, according to an exemplary embodiment of the invention.
  • FIG. 4C illustrates various liquid foods that may be included for exemplary liquid diets, according to exemplary embodiments of the invention
  • Diet 3 is a low-residue diet, which is similar to a normal diet. If a patient follows Diet 3, he/she is required only to avoid, for the 3 days prior to imaging, several kinds of food that could result in more colonic residue. Diet 2 allows the patient to follow the same food list on Days 3 and 2 prior to the day of the scan and requires the patient to eat food from a predetermined meal kit on Day 1 prior to the day of the scan. Diet 1 requires the patient to follow a liquid food diet on Day 1 prior to the scan. On the other 2 days, Diet 1 is the same as Diets 2 and 3.
  • the patient will eat small portions at each meal and avoid foods such as whole grain flour or cereals, dried or raw fruits, raw or deep-fried vegetables, tough fibrous meats, caffeinated liquids, nuts and seeds, and yogurt.
  • FIG. 3C is a diagram that illustrates a bowel preparation method according to another exemplary embodiment of the invention.
  • the exemplary method of FIG. 3C includes a bowel preparation method in which 3 doses of the first contrast agent (30) and 2 doses of the second contrast agent (31) are administered in about a 24 hour period prior to image acquisition.
  • the exemplary method of FIG. 3C includes a diet regime in which the individual will eat food from the food list (FIG. 4A) for each meal on the 2nd day before the scan and follow a liquid diet in about a 24 hour period prior to image acquisition.
  • FIG. 3D is a diagram that illustrates a bowel preparation method according to yet another exemplary embodiment of the invention.
  • the exemplary method of FIG. 3D includes a bowel preparation method in which 3 doses of the first contrast agent (30) and 2 doses of the second contrast agent (31) are administered in about a 24 hour period prior to image acquisition.
  • the exemplary method of FIG. 3D includes a diet regime in which the individual will follow a liquid diet in about a 24 hour period prior to image acquisition.
  • the exemplary bowel preparation methods described above with reference to FIGs. 3 and 4 facilitate uniform tagging of colonic residue, mitigate stool sticking to the colon wall, and are sufficient for laxative/suppository-free preparation of the colon.
  • the exemplary bowel preparation methods may include administration of laxatives or suppositories, if desired.

Electronic Cleaning
  • exemplary embodiments of the invention for virtual colonoscopy implement methods for electronically cleaning volumetric images of a colon.
  • exemplary methods for electronically cleaning volumetric images of a colon include methods for automatically detecting and removing tagged residue inside the colonic lumen and automatically reconstructing the image of the boundaries between tagged residue and the colon wall into boundaries between air (lumen) and the colon wall.
  • colonic residue such as residual stool can attach around the colon wall.
  • in conventional virtual colonoscopy methods, laxatives/suppositories are used to prepare the colon for examination, which causes colonic residue to form puddles with flat surfaces due to gravity.
  • exemplary embodiments of the invention include methods that take into consideration the above factors to achieve better quality of electronic cleaning for disparate colonic residue tagging conditions and morphologies of colonic residue regions.
  • FIG. 5 is a flow diagram illustrating an electronic cleaning method according to an exemplary embodiment of the invention.
  • the exemplary method of FIG. 5 can be implemented for the electronic cleaning process (step 24) in FIG. 2.
  • an initial process includes processing the acquired volumetric image dataset using methods for segmenting or extracting the image data associated with the colon region in the volumetric dataset (step 50).
  • the colon region that is segmented comprises air (in the hollow lumen/cavity), colonic residue, and the colon wall.
  • any suitable method may be implemented for segmenting the colon region from other anatomical objects/structures within a volumetric image dataset.
  • Anatomical knowledge (e.g., that the thickness of the colon wall is deemed to fall within a certain range, such as less than 5 mm) can be used for segmentation.
  • once the colon lumen, including both air and residue, is found, the colon wall region can be determined by dilating the region of the colon lumen.
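The dilation-based determination of the colon wall can be sketched as follows; this NumPy/SciPy snippet is illustrative only (the function name `wall_shell` and the voxel-count thickness bound, standing in for the under-5 mm anatomical constraint, are assumptions):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def wall_shell(lumen_mask: np.ndarray, max_wall_voxels: int = 3) -> np.ndarray:
    """Approximate the colon wall as a thin shell: dilate the segmented
    lumen (air + tagged residue) and subtract the lumen itself.
    `max_wall_voxels` bounds the wall thickness in voxels, reflecting the
    anatomical assumption that the wall is thinner than ~5 mm."""
    dilated = binary_dilation(lumen_mask, iterations=max_wall_voxels)
    return dilated & ~lumen_mask
```

The default structuring element of `binary_dilation` grows the mask by face neighbors (6-connectivity) per iteration, so the shell thickness tracks the iteration count.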
  • a low-level voxel classification segmentation process can be implemented using the methods disclosed in the above-incorporated U.S. Patent No. 6,331,116. Briefly, a low-level voxel classification method can be applied to all regions of tagged residue. For each region of tagged residue, the image data is processed to group the voxels within the region into several clusters (e.g., around 10 clusters). The voxels with similar intensity properties are assigned to the same group (cluster).
  • voxels that are located in the center of a uniformly tagged region are grouped (clustered) together in one cluster, and voxels that are located around an edge of the uniformly tagged region are grouped together in another cluster that is different from the center cluster.
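The grouping of a tagged region's voxels by intensity can be illustrated with a simple one-dimensional k-means; this is a sketch, not the low-level voxel classification method of U.S. Patent No. 6,331,116 itself (the quantile initialization, default `k`, and iteration count are assumptions):

```python
import numpy as np

def cluster_intensities(intensities, k=10, iters=25):
    """Group voxel intensities into k clusters with Lloyd's algorithm,
    so that voxels with similar intensity share a cluster label.
    Returns per-voxel labels and the cluster centers."""
    x = np.asarray(intensities, dtype=float)
    # initialize cluster centers at spread-out quantiles of the data
    centers = np.quantile(x, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters unchanged
                centers[j] = x[labels == j].mean()
    return labels, centers
```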
  • the classification results basically represent intensity properties of the entire volumetric images.
  • the voxels in the same cluster might have different clinical meanings because they lie in different tagged regions with different tagging conditions.
  • a fold that is covered by tagged fluid might have an intensity similar to that of the tagged fluid residue due to the partial volume effect.
  • electronic cleaning methods according to the invention are self-adaptive to variations in tagging conditions in different tagged regions.
  • the image data is further processed to determine the boundaries between the different tissues or materials (step 52). More specifically, for virtual colonoscopy, boundary layer extraction methods according to exemplary embodiments of the invention are applied for automatically detecting and extracting boundaries between air (in the lumen/cavity), the colon wall, and tagged residue in the segmented colon region. Exemplary methods for extracting boundary layers will be discussed below with reference to FIGs. 8, 9 and 10.
  • a boundary layer reconstruction process is independently implemented for each spatially separate region of tagged residue that is removed from the image data. Exemplary methods for reconstructing boundary layers will be discussed in further detail below with reference to FIG. 11.
  • FIGs. 6A and 6B are exemplary image diagrams that illustrate results of electronic cleaning in virtual colonoscopy images.
  • FIG. 6 A is an exemplary image of a portion of a colon showing a plurality of tagged residue regions (60, 61 and 62) in the colon lumen.
  • the tagged residue region (60) is a small tagged stool that is attached to the colon wall.
  • the tagged residue regions (61) and (62) are puddles that can have non-uniform tagging.
  • FIG. 6B is an exemplary image of FIG. 6A after applying an electronic cleaning process according to the invention, wherein the tagged residue regions (60, 61 and 62) are removed and shown as part of the lumen (air) region within the colon, and wherein the boundary layers between the tagged residue regions and surrounding tissue in the image of FIG. 6A are replaced by boundary layers between air and colon wall tissue in the image of FIG. 6B.
  • Exemplary electronic cleaning methods for virtual colonoscopy according to the invention will now be discussed in further detail with reference to FIGs. 7-11, for example.
  • exemplary methods for boundary layer extraction will be described with reference to FIGs. 7-10 and exemplary methods for boundary layer reconstruction will be described with reference to FIG. 11.
  • boundary extraction can be implemented using known techniques such as thresholding, edge detection, etc., which are suitable for the given modality for determining the boundaries.
  • boundary extraction in CT images may not be achieved using a simple thresholding approach due to non-uniform tagging and partial volume effects in the CT images.
  • colon wall tissue often has an intensity around [-100, 50] HU, and well-tagged residue has an intensity around [200, 800] HU. If a thin haustral fold is submerged in a well-tagged residue puddle, the intensity of the colon wall of the thin fold may be greater than 600 HU.
  • FIG. 7 is an exemplary diagram of a CT image, wherein a region (70) of the image includes a portion (71) of a colon wall that has a similar intensity value as that of tagged residue portion (72) within the region (70).
  • the shadowed region (71), which is a thin haustral fold, is depicted as having an intensity of 719 HU
  • the brighter region (72), which is a tagged puddle of residual fluid, is depicted as having an intensity of 823 HU.
  • edge detecting methods which are based on intensity difference rather than intensity range may be used for determining such boundaries.
  • conventional edge detection methods may not be applicable for CT images.
  • edge detection methods are not particularly accurate for processing volumetric CT images having anisotropic voxel size.
  • the images can be re-sampled or interpolated to an isotropic voxel size, but at the cost of extra computation time.
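The isotropic resampling alternative mentioned above can be sketched with `scipy.ndimage.zoom` (the spacing convention `(dz, dy, dx)` and the linear interpolation order are assumptions):

```python
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(vol, spacing, order=1):
    """Resample an anisotropic CT volume to cubic voxels at the finest
    of its input spacings, by (default linear) interpolation.
    `spacing` is the physical voxel size per axis, e.g. (dz, dy, dx) in mm.
    As noted above, this costs extra computation and memory."""
    spacing = np.asarray(spacing, dtype=float)
    return zoom(vol, spacing / spacing.min(), order=order)
```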
  • electronic cleaning methods according to the invention are preferably designed to extract a boundary layer (e.g., a plurality of voxels in thickness) rather than an edge line or curve (e.g., a single voxel).
  • the boundary layer represents the partial volume effect, which results from the physical limitations of the CT scanner.
  • it is well known that most edge detection methods are sensitive to noise, and the noise level of CT images is usually high, which makes such methods undesirable here.
  • FIG. 8 is a flow diagram that illustrates boundary detection methods according to exemplary embodiments of the invention.
  • boundary detection methods according to the invention include methods for extracting a boundary layer between different tissues/materials using a maximal gradient approach, wherein the gradient is defined as a directional first-order derivative.
  • Exemplary boundary detection methods can be implemented with medical images in which the integer grids have different unit lengths along different coordinate directions due to variations in scanning protocols. Indeed, anisotropic voxel size is taken into consideration when computing discrete first-order derivatives in the digital images.
  • exemplary boundary detection methods according to the invention take into consideration the partial volume effect in CT images, wherein a tissue boundary usually forms a thick layer of a certain range (as opposed to an edge or a sharp curved line). The intensity of voxels in a boundary layer changes gradually from one tissue type to another.
  • the extracted boundary layers include more information with respect to tissue abnormalities or other tissue features, as compared to the information provided by sharp boundary lines.
  • Referring to FIG. 8, a flow diagram illustrates a boundary layer detection method according to an exemplary embodiment of the invention, which implements a maximal gradient process.
  • An initial step is to select n discrete directions in a 3-D image grid, which are to be used for computing gradients (step 80).
  • FIG. 9 is an exemplary diagram that illustrates an image grid coordinate system (95) and a plurality of selected directions (D1-D5).
  • the X, Y, Z directions (D1, D2 and D3) correspond to the orthogonal axial directions of the image grid coordinate system (95).
  • the directions D4 and D5 denote diagonal directions on the X-Y plane.
  • the selected directions will vary depending on the application and should uniformly cover all directions so as to render the process rotation independent (i.e., independent of differences in position (e.g., slight rotation) of the patient during image acquisition), while balancing the cost of computation (e.g., selecting too many directions may be computationally expensive).
  • Each voxel of the imaged organ (e.g., the segmented colon region) is then processed using various methods described hereafter to identify voxels that are part of boundary layers.
  • the boundary layers within the imaged colon include air/colon wall, tagged residue/colon wall and air/tagged residue boundaries. More specifically, for a selected voxel (step 81), a first-order derivative is computed along each of the n directions for the selected voxel (step 82).
  • a gradient feature value (GFV) is then determined for the selected voxel as the directional derivative with the greatest magnitude among the n computed derivatives.
  • FIG. 9 is an exemplary diagram illustrating 5 directions (D1-D5) in a Cartesian coordinate system (95) in which directional gradients are computed for a given voxel.
  • FIG. 10 is an exemplary diagram that illustrates neighbor voxels that are used for computing selected directional gradients for a given voxel (Vcurrent).
  • X, Y and Z are the orthogonal axial directions (D1), (D2) and (D3), respectively, of the image grid coordinate system (95).
  • the other directions (D4) and (D5) are diagonal directions on the X-Y plane.
  • the directional derivative computation is designed for spiral CT images, wherein the Z direction (D3) typically has a longer step length than the step lengths of the X and Y directions (D1 and D2).
  • the derivatives along diagonal directions in the X-Z and Y-Z planes should also be considered.
  • y1 = (x12 - 8·x4 + 8·x6 - x14) / (12 · δxy)
  • y2, y3 and y4 are computed by analogous central 5-point formulas along their respective directions, using the corresponding neighbor voxels shown in FIG. 10
  • y5 = (x16 - x15) / (2 · δz)
  • the term δ* denotes the step length along direction "*", and such terms are scaled using scaling factors to account for the different lengths in the different directions for non-isotropic voxels.
  • the GFV for a given voxel represents the direction and magnitude of the greatest change in intensity with respect to the given voxel.
  • the GFV is much less sensitive to image noise than the value of a single directional derivative.
  • the exemplary expression (1) comprises a central 5-point formula for non-Z directions, wherein the directional gradient is computed over a length of 5 voxels, and a central 3-point formula for the Z direction, wherein the directional gradient is computed over a length of 3 voxels. It is to be understood, however, that the methods of FIGs. 9 and 10 are merely exemplary: the invention is not limited to the expression (1) and the neighborhood of FIG. 10, and other formulas can be readily envisioned by one of ordinary skill in the art for computing a first-order derivative taking voxel size into account for other applications. Referring again to FIG. 8, after the GFV of the current voxel is determined, the GFV is compared against a threshold GFV (step 85).
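The maximal gradient computation over the five directions of FIG. 9 can be sketched as follows (illustrative only: array shifts with periodic boundaries replace the specific neighbor labels of FIG. 10, and the default spacings `dxy`, `dz` are assumptions, not values from the patent):

```python
import numpy as np

def gfv(vol, dxy=0.7, dz=1.5):
    """Gradient feature value per voxel: the largest magnitude among the
    directional first derivatives along X, Y, the two X-Y diagonals
    (central 5-point formula) and Z (central 3-point formula), with step
    lengths scaled for anisotropic voxels.  np.roll gives periodic
    boundaries, so only interior voxels are meaningful."""
    v = np.asarray(vol, dtype=float)

    def shift(dz_, dy_, dx_):
        # value of the neighbor voxel offset by (dz_, dy_, dx_)
        return np.roll(v, (-dz_, -dy_, -dx_), axis=(0, 1, 2))

    def central5(dy_, dx_, step):
        # f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)
        return (shift(0, -2 * dy_, -2 * dx_) - 8 * shift(0, -dy_, -dx_)
                + 8 * shift(0, dy_, dx_) - shift(0, 2 * dy_, 2 * dx_)) / (12 * step)

    derivs = [
        central5(0, 1, dxy),                            # D1: X
        central5(1, 0, dxy),                            # D2: Y
        central5(1, 1, dxy * np.sqrt(2.0)),             # D4: X-Y diagonal
        central5(1, -1, dxy * np.sqrt(2.0)),            # D5: other diagonal
        (shift(1, 0, 0) - shift(-1, 0, 0)) / (2 * dz),  # D3: Z, 3-point
    ]
    return np.abs(np.stack(derivs)).max(axis=0)
```

A voxel whose GFV exceeds the preset threshold would then be tagged as a boundary voxel.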
  • the boundary layer can be separated from homogeneous tissue regions, since such regions have a much lower GFV than tissue boundaries, where the intensity changes dramatically. If the voxel GFV exceeds the threshold (affirmative result in step 86), the voxel is tagged as a real boundary voxel (step 89).
  • the GFV threshold can be pre-set for a specific application. In one exemplary embodiment for virtual colonoscopy CT images, the threshold can be set to 56. Therefore, in FIG. 8, if the GFV of the current voxel exceeds this threshold, the voxel is deemed a boundary voxel (step 89); otherwise, the voxel can be deemed to be in a homogeneous tissue region.
  • the GFV threshold cannot be set too small, but it also cannot be too large, since the extracted boundary layer may then fail to enclose the entire colon lumen. This is a result of the non-uniform range of the partial volume effect along different directions. If the threshold for GFV thresholding is reduced, the chance of enclosing the colon lumen increases; however, reducing the threshold GFV increases the risk of over-estimating the boundary layer.
  • the thresholded GFV result can be fused with the low-level voxel classification results (step 51, FIG. 5).
  • the information from both voxel classification and GFV are combined to achieve optimal estimation of the boundary layers.
  • a given voxel will be deemed a real boundary voxel (step 89) if the GFV of the voxel is greater than the preset GFV threshold (affirmative result in step 86), or if, based on the classification results, the voxel is determined to be located at the edge of a region where all voxels are in the same cluster (i.e., the voxel is part of a boundary layer cluster as determined by classifying the voxels of the tagged region) (affirmative result in step 87).
  • the voxel will be tagged as a non-boundary voxel (step 88) (e.g., tissue voxel or lumen voxel).
  • the exemplary methods are repeated for each remaining voxel until all the voxels in the desired image dataset have been processed (negative determination in step 90).
  • the result of the exemplary boundary layer extraction process of FIG. 8 is that each voxel is classified as a boundary or non-boundary voxel, which enables identification of the boundary layers in the imaged colon region between air/tagged region, air/colon wall, and tagged region/colon wall.
  • FIG. 11 is a flow diagram that illustrates a boundary layer reconstruction method according to an exemplary embodiment of the invention. As noted above, boundary layer reconstruction is applied to each region of tagged residue.
  • each voxel that is tagged as a non-boundary voxel is deemed part of the tagged residue, and the intensity of the voxel is set to an average air intensity Tair (step 101).
  • air has an intensity of -850 HU, for example.
  • the remaining voxels for the selected tagged residue region include boundary voxels.
  • there are three types of boundary voxels in the tagged residue region: (1) boundary voxels that are part of the air/tagged residue boundary; (2) boundary voxels that are part of the colon wall/residue boundary; and (3) boundary voxels that are in proximity to air, colon wall and tagged residue voxels.
  • the first type of boundary voxel is readily detected because such voxels are close to the air region and have a greater GFV (e.g., a GFV larger than 500), since the intensity change from air to the tagged region is the most dramatic in CT images.
  • a conditional region growing algorithm is applied to the region to expand the region (air/tagged residue boundary layer) into the tagged residue and air regions (step 102).
  • a 6-connected 3-D region growing process may be implemented (step 102).
  • the exemplary region growing process expands the region of the air/tagged residue boundary layer (i.e., the partial volume layer between the tagged residue and the air lumen), and the intensities of voxels in the expanded region are set to Tair.
  • the expanded region may also cover part of the boundary layer having the third type of voxels as described above.
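One possible form of the conditional 6-connected 3-D region growing of step 102 is sketched below; since the excerpt does not reproduce the algorithm itself, the breadth-first formulation, the acceptance predicate and the helper name `grow_and_fill` are assumptions:

```python
import numpy as np
from collections import deque

def grow_and_fill(vol, seeds, accept, t_air=-850.0):
    """Conditional 6-connected 3-D region growing: starting from the
    air/tagged-residue boundary voxels (`seeds`), expand across face
    neighbors whose intensity satisfies `accept`, then set the grown
    region to the average air intensity T_air."""
    out = np.asarray(vol, dtype=float).copy()
    grown = np.zeros(out.shape, dtype=bool)
    queue = deque(seeds)
    for seed in seeds:
        grown[seed] = True
    faces = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz_, dy_, dx_ in faces:
            n = (z + dz_, y + dy_, x + dx_)
            if (all(0 <= c < d for c, d in zip(n, out.shape))
                    and not grown[n] and accept(out[n])):
                grown[n] = True
                queue.append(n)
    out[grown] = t_air
    return out, grown
```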
  • boundary voxels are transformed into an air/colon wall boundary layer (step 103). More specifically, in one exemplary embodiment of the invention, an intensity transformation process is applied to the boundary voxels as follows:
  • If (Vc > Tave), then Vf = Tair;
  • Tave denotes the average intensity of all boundary layer voxels of the second and third types
  • Ttissue denotes the average intensity of the soft tissue around the colon lumen (e.g., -50 HU)
  • Vc denotes the intensity of the current boundary voxel
  • Vf denotes the transformed intensity value of the current voxel
  • an intensity transformation process having a penalty factor based on the GFV is applied to the boundary voxels as follows: If (Vc > Tave), then Vf = Tair;
  • P(gf) is the penalty function
  • gf is the GFV value of the current voxel.
  • the range of the penalty function can be [0, B], where B > 1.
  • After applying an intensity transformation as above, electronic cleansing for the given tagged residue region is complete. If there are remaining tagged residue regions to process (affirmative determination in step 104), the above process (steps 101-103) is repeated for the next selected tagged residue region. The boundary layer reconstruction process terminates when all tagged regions have been processed.
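To make the transformation of step 103 concrete, a sketch follows; only the branch Vc > Tave (giving Vf = Tair) is stated in the text above, so the else branch and the penalty function P(gf) used here (linear in the GFV, clipped to [0, B]) are illustrative assumptions, not the patent's formulas:

```python
def transform_boundary(v_c, g_f, t_ave, t_air=-850.0, t_tissue=-50.0,
                       b=2.0, g_scale=500.0):
    """Remap a residue/colon-wall boundary voxel toward an air/colon-wall
    boundary.  The branch for v_c > t_ave follows the text; the blend
    below, weighted by a hypothetical penalty P(g_f) in [0, b] with b > 1,
    is an assumption (g_scale is likewise illustrative)."""
    if v_c > t_ave:
        return t_air
    penalty = min(max(g_f / g_scale, 0.0), b)  # P(g_f) clipped to [0, b]
    weight = min(penalty, 1.0)                 # clamp for interpolation
    # large gradients (deep partial-volume voxels) pull toward tissue
    return t_air + weight * (t_tissue - t_air)
```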
  • electronic cleansing methods can be implemented with any modality of images having similar intensity characteristics.
  • a maximum directional gradient process as described herein can also be applied to any modality of medical images for extraction of tissue boundary layer with partial volume effects.
  • the bladder wall region could be extracted with the exemplary maximum gradient method described above.
  • a texture analysis can be applied to the bladder wall region.
  • the texture indexes associated with the voxels can be mapped back to the original volumetric data and volume rendered in the endoluminal view to facilitate detection of abnormalities in the bladder wall.
EP04795095A 2003-10-10 2004-10-12 Verfahren und systeme zur virtuellen endoskopie Withdrawn EP1716535A2 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US51023703P 2003-10-10 2003-10-10
PCT/US2004/033888 WO2005036457A2 (en) 2003-10-10 2004-10-12 Virtual endoscopy methods and systems

Publications (1)

Publication Number Publication Date
EP1716535A2 true EP1716535A2 (de) 2006-11-02

Family

ID=34435075

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04795095A Withdrawn EP1716535A2 (de) 2003-10-10 2004-10-12 Verfahren und systeme zur virtuellen endoskopie

Country Status (2)

Country Link
EP (1) EP1716535A2 (de)
WO (1) WO2005036457A2 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10937542B1 (en) 2020-09-18 2021-03-02 Vent Creativity Corporation Patient specific treatment planning
US11410769B1 (en) 2021-12-08 2022-08-09 Vent Creativity Corporation Tactile solutions integration for patient specific treatment

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
US9299156B2 (en) * 2005-10-17 2016-03-29 The General Hospital Corporation Structure-analysis system, method, software arrangement and computer-accessible medium for digital cleansing of computed tomography colonography images
JP2007260144A (ja) * 2006-03-28 2007-10-11 Olympus Medical Systems Corp 医療用画像処理装置及び医療用画像処理方法
US20080027315A1 (en) * 2006-07-31 2008-01-31 Icad, Inc. Processing and presentation of electronic subtraction for tagged colonic fluid and rectal tube in computed colonography
US7840051B2 (en) 2006-11-22 2010-11-23 Toshiba Medical Visualization Systems Europe, Limited Medical image segmentation
US10726955B2 (en) 2009-05-28 2020-07-28 Ai Visualize, Inc. Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US8701167B2 (en) 2009-05-28 2014-04-15 Kjaya, Llc Method and system for fast access to advanced visualization of medical scans using a dedicated web portal
US10497124B2 (en) * 2013-03-15 2019-12-03 Kabushiki Kaisha Topcon Optic disc image segmentation method and apparatus
US9454643B2 (en) 2013-05-02 2016-09-27 Smith & Nephew, Inc. Surface and image integration for model evaluation and landmark determination

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US5920319A (en) * 1994-10-27 1999-07-06 Wake Forest University Automatic analysis in virtual endoscopy
US6377833B1 (en) * 1999-01-25 2002-04-23 Douglas Albert System and method for computer input of dynamic mental information
US6876760B1 (en) * 2000-12-04 2005-04-05 Cytokinetics, Inc. Classifying cells based on information contained in cell images

Non-Patent Citations (1)

Title
See references of WO2005036457A2 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
US10937542B1 (en) 2020-09-18 2021-03-02 Vent Creativity Corporation Patient specific treatment planning
EP3971907A1 (de) 2020-09-18 2022-03-23 Vent Creativity Corporation Patientenspezifische behandlungsplanung
US11410769B1 (en) 2021-12-08 2022-08-09 Vent Creativity Corporation Tactile solutions integration for patient specific treatment
EP4193954A1 (de) 2021-12-08 2023-06-14 Vent Creativity Corporation Integration taktiler lösungen für patientenspezifische behandlung

Also Published As

Publication number Publication date
WO2005036457A3 (en) 2008-11-20
WO2005036457A2 (en) 2005-04-21

Similar Documents

Publication Publication Date Title
Kostis et al. Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images
US7620442B2 (en) Colonography on an unprepared colon
US7809177B2 (en) Lumen tracking in computed tomographic images
Wyatt et al. Automatic segmentation of the colon for virtual colonoscopy
EP2279490B1 (de) Automatische detektion und genaue segmentierung des bauchaortenaneurysma
US8515200B2 (en) System, software arrangement and method for segmenting an image
US8311301B2 (en) Segmenting an organ in a medical digital image
US20090226057A1 (en) Segmentation device and method
US20070263915A1 (en) System and method for segmenting structures in a series of images
CA2727736C (en) Method and system for determining an estimation of a topological support of a tubular structure and use thereof in virtual endoscopy
US20080317310A1 (en) Method and system for image processing and assessment of blockages of heart blood vessels
Sato et al. An automatic colon segmentation for 3D virtual colonoscopy
JP5854561B2 (ja) 画像データのフィルタリング方法、画像データのフィルタリングシステム、及び仮想内視鏡検査における画像データの使用
WO2005036457A2 (en) Virtual endoscopy methods and systems
Alyassin et al. Semiautomatic bone removal technique from CT angiography data
Liang Virtual colonoscopy: An alternative approach to examination of the entire colon
Wyatt et al. Segmentation in virtual colonoscopy using a geometric deformable model
Taimouri et al. Colon segmentation for prepless virtual colonoscopy
Carston et al. CT colonography of the unprepared colon: an evaluation of electronic stool subtraction
Chen et al. Electronic colon cleansing by colonic material tagging and image segmentation for polyp detection: detection model and method evaluation
Huang et al. Volume visualization for improving CT lung nodule detection
CHENCHEN Virtual colon unfolding for polyp detection
KR101305678B1 (ko) 가상 대장내시경에서 전자적 장세척 방법 및 장치
Schoonenberg et al. Effects of filtering on colorectal polyp detection in ultra low dose CT
Sadleir Enhanced computer assisted detection of polyps in CT colonography

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20060818

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

PUAK Availability of information related to the publication of the international search report

Free format text: ORIGINAL CODE: 0009015

18D Application deemed to be withdrawn

Effective date: 20080503