MXPA01009388A - System and method for performing a three-dimensional virtual segmentation and examination

System and method for performing a three-dimensional virtual segmentation and examination

Info

Publication number
MXPA01009388A
MXPA01009388A
Authority
MX
Mexico
Prior art keywords
colon
image
graphic data
data
volumetric elements
Application number
MXPA/A/2001/009388A
Other languages
Spanish (es)
Inventor
Arie E Kaufman
Zhengrong Liang
Mark R Wax
Ming Wan
Dongqing Chen
Original Assignee
The Research Foundation Of State University Of New York
Application filed by The Research Foundation Of State University Of New York
Publication of MXPA01009388A

Abstract

A system and method for generating a three-dimensional visualization image of an object such as an organ using volume visualization techniques and exploring the image using a guided navigation system which allows the operator to travel along a flight path and to adjust the view to a particular portion of the image of interest in order, for example, to identify polyps, cysts or other abnormal features in the visualized organ. An electronic biopsy can also be performed on an identified growth or mass in the visualized object. Virtual colonoscopy can be enhanced by electronically removing residual stool, fluid and non-colonic tissue from the image of the colon, by employing bowel preparation followed by image segmentation operations. Methods are also employed for virtually expanding regions of colon collapse using image segmentation results.

Description

SYSTEM AND METHOD FOR PERFORMING A THREE-DIMENSIONAL VIRTUAL SEGMENTATION AND EXAMINATION TECHNICAL FIELD The present invention relates to a system and method for performing a volume-based three-dimensional virtual examination using planned and guided navigation techniques; one such application is virtual endoscopy.
BACKGROUND OF THE INVENTION Colon cancer remains one of the leading causes of death worldwide. Early detection of cancerous growths, which in the human colon manifest as polyps, can greatly improve a patient's chances of recovery. Currently there are two conventional methods for detecting polyps and other growths in a patient's colon. The first is colonoscopy, which uses a flexible fiber-optic tube called a colonoscope to visually examine the colon after rectal insertion. The doctor can maneuver the tube to look for any abnormal growth in the colon. Although reliable, colonoscopy is relatively expensive, time-consuming, and an uncomfortable, invasive and painful procedure for the patient.
The second detection technique consists of administering a barium enema and taking two-dimensional X-ray images of the colon. The barium enema is used to coat the colon with barium, and a two-dimensional X-ray image is taken to capture the colon. However, barium enemas do not always provide a view of the entire colon, require intensive prior treatment and handling of the patient, are operator-dependent, expose the patient to excessive radiation, and can be less accurate than colonoscopy. Given the deficiencies of the conventional methods described above, a more reliable, less intrusive and less expensive way to check the colon for polyps is desirable. A method for examining other human organs, such as the lungs, for growths, in a manner that is reliable, cost-effective and less uncomfortable for the patient, is also desirable.
Two-dimensional visualization of human organs using medical imaging devices, such as computed tomography and magnetic resonance imaging, has been widely used for patient diagnosis. Three-dimensional images can be formed by stacking and interpolating the two-dimensional images obtained from the scanning machines. Imaging an organ and visualizing its volume in three dimensions is beneficial because it involves no physical intrusion and the data can be handled easily. However, the analysis of a three-dimensional volumetric image must be performed properly in order to fully exploit the advantages of virtually observing an organ from within.
When viewing a virtual three-dimensional volumetric image of an environment, a functional model must be used to explore the virtual space. One possible model is a virtual camera that the observer can use as a reference point for exploring the virtual space. Camera control in the context of navigation within a general three-dimensional virtual environment has been studied previously. There are two conventional ways of controlling the camera when navigating virtual spaces.
In the first, the operator fully controls the camera, placing it in different positions and at different angles to achieve the desired view. Being literally the "pilot" of the camera, the operator can explore a particular area while ignoring others. However, absolute control of the camera over a wide field would be tedious and tiring, and the operator might not see all the important features between the start and end points of the exploration. The camera can also easily "get lost" in remote areas or "crash" into one of the walls through operator carelessness or because of various unexpected obstacles.
The second technique for controlling the camera is a planned navigation method, which assigns the camera a predetermined route to follow that cannot be modified by the operator. This is equivalent to having an "autopilot": the operator can concentrate on the virtual space being observed without having to worry about maneuvering within the environment being analyzed. However, this second technique gives the observer no option to modify the course or investigate an interesting area observed along the trajectory.
Ideally, a combination of the two described navigation techniques would be used to take advantage of their strengths while minimizing their drawbacks. It would be desirable to apply a flexible navigation technique to the examination of human or animal organs in a virtual three-dimensional space in order to perform a thorough, painless and non-intrusive examination. Such a navigation technique would also allow the operator to flexibly and completely examine the exterior and interior of an organ in a virtual three-dimensional space. It would further be desirable to display the exploration of the organ in real time using a technique that minimizes the computations necessary to visualize the organ. The desired technique should also be applicable to the exploration of any virtual object.
SUMMARY OF THE INVENTION The invention generates a three-dimensional visualization image of an object, such as a human organ, using volume visualization techniques, and explores the virtual image using a guided navigation system that allows the operator to travel along a predefined path and to adjust both the position and the viewing angle toward an area of interest in the image away from the predefined path, in order to identify polyps, cysts or other abnormal features of the organ.
The novel technique for a three-dimensional virtual examination of an object includes producing a discrete representation of the object in volumetric elements, defining the portion of the object to be examined, performing a guided navigation within the virtual object, and displaying the virtual object in real time during the navigation.
The novel technique for a three-dimensional virtual examination, when applied to a patient's organ, consists of preparing the organ for scanning if necessary, scanning the organ and converting the data into volumetric elements, defining the portion of the organ to be examined, performing a guided navigation within the virtual organ, and displaying the virtual organ in real time during the guided navigation. During a virtual examination, it is often desirable to observe one specific type of material while removing other materials from the image. To perform such an operation, a method for electronically cleaning an image can be carried out by converting the graphic data into a plurality of volumetric elements, each of which has an intensity value. A classification operation is then performed in order to categorize the volumetric elements into a plurality of groups according to their intensity values. Once the classification is complete, at least one group of volumetric elements can be removed from the graphic data.
The classification can be performed by evaluating each of a plurality of volumetric elements of the graphic data with respect to a plurality of neighboring volumetric elements, in order to determine a similarity value for that volumetric element relative to its neighbors.
In addition, the groups can be refined by applying a mixture probability function to categorize voxels whose intensity value results from the inclusion of more than one type of material.
An alternative classification includes performing a feature vector analysis on at least one of the groups comprising graphic data related to a material of interest, followed by a high-level feature extraction to remove from the image those volumetric elements that are not significant indicators of the material of interest.
The method for electronically cleaning an image is well suited to applications where the graphic data represent a region of the human body including at least a portion of the colon and the material of interest is colonic tissue. In colon imaging applications, the removal operation can eliminate volumetric elements representing intracolonic fluid, residual stool in the colon, bone, and non-colonic tissue.
An object of the invention is to provide a system and method for performing a relatively painless, inexpensive and non-intrusive in vivo examination of an organ, where the actual analysis of the scanned colon can be performed without the patient being present. The colon can be scanned and visualized in real time, or the stored data can be visualized at a later time.
Another object of the invention is to generate three-dimensional volumetric representations of an object, such as an organ, where regions of the object can be peeled off layer by layer in order to analyze a subsurface of a region of the imaged object. A surface of the object (for example, an organ) can be rendered transparent or translucent in order to view further objects within or behind the object's wall. The object can also be sliced to examine a specific cross-section of the object.
A further object of the invention is to provide a system and method of guided navigation through a three-dimensional volumetric representation of an object, such as an organ, using potential fields.
Another object of the invention is to calculate the centerline of an object, such as an organ, for a virtual flight path, using a "detached layers" technique as described herein.
Another object of the invention is to use a modified Z-buffer technique in order to reduce the number of computations necessary to generate the displayed view. Another object of the invention is to assign an opacity coefficient to each volumetric element in the representation, in order to render specific volumetric elements transparent or translucent to varying degrees so as to customize the visualization of the portion of the object being viewed. A section of the object can also be composited using the opacity coefficients.
BRIEF DESCRIPTION OF THE DRAWINGS Further objects, features and advantages of the invention will be apparent from the following detailed description and the accompanying drawings, which illustrate a preferred embodiment of the invention, and in which:
Figure 1 is a flow diagram of the steps for virtually examining an object, specifically a colon, according to the invention;
Figure 2 is an illustration of a "submarine" camera model that performs the guided navigation within the virtual organ;
Figure 3 is an illustration of a pendulum used to model the roll of the "submarine" camera;
Figure 4 is a diagram illustrating a two-dimensional cross-section of a volumetric colon in which two blocking walls are identified;
Figure 5 is a diagram illustrating a two-dimensional cross-section of a volumetric colon in which the start and end volume elements are selected;
Figure 6 is a diagram illustrating a two-dimensional cross-section of a volumetric colon showing a discrete sub-volume delimited by the blocking walls and the colon surface;
Figure 7 is a diagram illustrating a two-dimensional cross-section of a volumetric colon with multiple detached layers;
Figure 8 is a diagram illustrating a two-dimensional cross-section of a volumetric colon containing the remaining flight path;
Figure 9 is a flow diagram of the steps for generating a volumetric visualization of the scanned organ;
Figure 10 is an illustration of a virtual colon that has been subdivided into cells;
Figure 11A is a graphical representation of an organ being examined virtually;
Figure 11B is a graphical representation of a tree diagram generated when depicting the organ in Figure 11A;
Figure 11C is a further graphical representation of a tree diagram generated while depicting the organ in Figure 11A;
Figure 12A is a graphical representation of a scene to be rendered, with objects within certain cells of the scene;
Figure 12B is a graphical representation of a tree diagram generated when depicting the scene in Figure 12A;
Figures 12C-12E are further graphical representations of tree diagrams generated while depicting the image in Figure 12A;
Figure 13 is a two-dimensional representation of a virtual colon containing a polyp whose layers can be detached;
Figure 14 is a diagram of a system used to virtually examine a human organ according to the invention;
Figure 15 is a flow diagram depicting an improved method of image segmentation;
Figure 16 is a graph of voxel intensity versus frequency for a typical CT scan of the abdomen;
Figure 17 is a perspective view of the structure of an intensity vector including a voxel of interest and its selected neighbors;
Figure 18A is a typical slice of an image obtained by CT scanning of the abdominal region of a human being, primarily showing an area that includes the lungs;
Figure 18B is a pictorial diagram showing the identification of the lung area in the image slice of Figure 18A;
Figure 18C is a pictorial diagram showing the removal of the volume identified in Figure 18B;
Figure 19A is a typical slice of an image obtained by CT scanning of the abdominal region of a human being, mainly showing an area that includes part of the colon and bone;
Figure 19B is a pictorial diagram showing the identification of the colon and bone area in the image slice of Figure 19A;
Figure 19C is a pictorial diagram showing the scanned image of Figure 19A with the bone regions removed; and
Figure 20 is a flow diagram showing a method for applying texture to monochromatic image data.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Although the methods and systems described in this application can be applied to any object to be examined, the preferred embodiment described here is the examination of an organ of the human body, specifically the colon. The colon is long and twisted, which makes it especially suited to a virtual examination; the patient saves money and avoids the discomfort and risks of a physical probe. Other examples of organs that can be examined in this way include the lungs, the stomach and parts of the gastrointestinal system, and the heart and blood vessels. Figure 1 illustrates the steps necessary to perform a virtual colonoscopy using volume visualization techniques. In step 101 the colon to be scanned is prepared for viewing and examination, if required by the doctor or the particular scanning instrument. This preparation may include cleansing the colon with a "cocktail" or liquid that enters the colon after being administered orally and passing through the stomach. The cocktail forces the patient to expel the stool present in the colon. One example of such a purgative is Golytely. Additionally, in the case of the colon, air or CO2 can be insufflated in order to distend the colon and facilitate its scanning and examination. This is accomplished by introducing a small tube into the rectum and pumping in approximately 1,000 cc of air to expand the colon. Depending on the type of scanner used, it may be necessary for the patient to drink a contrast substance, such as barium, to coat any unexpelled stool and thus distinguish it from the colon walls. Alternatively, the method for virtually examining the colon can remove the virtual fecal matter before or during the virtual examination, as explained later in this specification. Step 101 does not need to be performed for every examination, as indicated by the dotted line in Figure 1.
In step 103 the organ to be examined is scanned. The scanner can be a device well known in the art, such as a helical CT scanner for scanning a colon, or a Zenith magnetic resonance machine for scanning a lung labeled with xenon gas, for example. The scanner must be able to take multiple images from different positions around the body during a suspended-breath interval, in order to generate the data necessary for the volume visualization. For example, a single CT image might use an X-ray beam 5 mm wide, with a pitch of 1:1 to 2:1 and a 40 cm field of view, running from the top of the splenic flexure to the rectum.
Besides scanning, there are other methods of obtaining a discrete data representation of an object. Voxel data representing an object can be derived from a geometric model by the techniques described in U.S. Patent No. 5,038,302, entitled "Method of Converting Continuous Three-Dimensional Geometrical Representations into Discrete Three-Dimensional Voxel-Based Representations Within a Three-Dimensional Voxel-Based System" by Kaufman, issued August 8, 1991, filed July 26, 1988, which is hereby incorporated by reference as if set forth in its entirety herein. Additionally, data can be produced by a computer model of an image, which can be converted into three-dimensional voxels and explored in accordance with this invention. One example of this type of data is a computer simulation of the turbulence surrounding a space shuttle.
In step 104 the scanned images are converted into three-dimensional volumetric elements (voxels). In the preferred embodiment for examining the colon, the scan data is reformatted into 5-mm-thick slices at increments of 1 mm or 2.5 mm, each slice represented as a matrix of 512 by 512 pixels. Depending on the length of the scan, a large number of two-dimensional slices is generated. This set of two-dimensional slices is then reconstructed into three-dimensional voxels. The process of converting the scanner's two-dimensional images into three-dimensional voxels can be performed by the scanner itself or by a separate machine, such as a computer, using techniques well known in the art (for example, see U.S. Patent No. 4,985,856, entitled "Method and Apparatus for Storing, Accessing, and Processing Voxel-based Data" by Kaufman et al., issued January 15, 1991, filed November 11, 1988, which is hereby incorporated by reference as if set forth in its entirety herein).
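The conversion in step 104 can be pictured as stacking the reconstructed two-dimensional slices along the body axis into a single voxel grid. The following sketch, which is illustrative rather than part of the patent and assumes the slices are already available as arrays, shows the idea in Python:

```python
# Sketch: assembling reformatted 2D CT slices into a 3D voxel volume.
# The dtype and helper name are illustrative assumptions.
import numpy as np

def build_voxel_volume(slices):
    """Stack 2D arrays (one per axial cut) into a voxel grid.

    slices: iterable of 512 x 512 arrays ordered along the body axis.
    Returns an array of shape (n_slices, 512, 512) whose cells are voxels.
    """
    return np.stack([np.asarray(s, dtype=np.int16) for s in slices], axis=0)

# Synthetic data standing in for 300-450 reconstructed cuts:
fake_slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(300)]
colon_volume = build_voxel_volume(fake_slices)
print(colon_volume.shape)  # (300, 512, 512)
```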
Step 105 allows the operator to define which portion of the selected organ is to be examined. A physician may be interested in a particular section of the colon that is prone to developing polyps. The physician can view a two-dimensional slice overview map to indicate the section to be examined. A starting point and an end point for the path to be viewed can be indicated by the physician or operator. A conventional computer and interface (for example, a keyboard, mouse or space ball) can be used to designate the portion of the colon that is to be inspected. A grid system with coordinates can be used for keyboard entry of the desired points, or the points can be marked by clicking with the mouse. The complete image of the colon can also be viewed if desired.
In step 107, the guided or planned navigation of the virtual organ being examined is performed. Guided navigation is defined as navigating through an environment along a predefined or automatically determined route, which the operator can manually adjust at any time. After the scan data has been converted into three-dimensional voxels, the interior of the organ must be traversed from the selected start point to the selected end point. The virtual examination is modeled on a tiny camera traveling through the virtual space with its lens pointed toward the end point. The guided navigation technique provides a level of interaction with the camera, so that the camera can navigate automatically through a virtual environment when the operator does not intervene and, at the same time, allows the operator to manipulate the camera when necessary. The preferred embodiment of guided navigation employs a physically based camera model that uses potential fields to control the movement of the camera, as described in detail with reference to Figures 2 and 3. In step 109, which can be performed concurrently with step 107, the interior of the organ is displayed from the viewpoint of the camera model along the route chosen for the guided navigation. Three-dimensional visualizations can be generated using techniques well known in the art, such as marching cubes. However, to visualize the colon in real time, a technique is required that reduces the very large number of data computations necessary to display the virtual organ. Figure 9 describes this visualization step in more detail.
The method described in Figure 1 can also be applied to scanning multiple organs of the body at the same time. For example, a patient can be examined for cancerous growths in both the colon and the lungs. The method of Figure 1 would be modified to scan all areas of interest in step 103 and to select the organ to be examined in step 105. For example, the physician or operator might initially choose to explore the colon virtually, and later the lung. Alternatively, two doctors with different specialties could virtually explore different scanned organs related to their respective specialties. Following step 109, the next organ to be examined is selected, and its portion is defined and explored. This continues until all organs that need examination have been processed.
The steps described in relation to Figure 1 can also be applied to the exploration of any object that can be represented by volumetric elements. For example, an architectural structure or an inanimate object can be represented and studied in the same way.
Figure 2 depicts a "submarine" camera control model that performs the guided navigation technique of step 107. When there is no operator control during the guided navigation, the default navigation is similar to planned navigation, which automatically directs the camera along a flight path from one selected end of the colon to the other. During the planned navigation phase, the camera remains at the center of the colon to obtain better views of the colonic surface. When an interesting region is encountered, the operator of the virtual camera using guided navigation can interactively bring the camera close to the specific region and direct its motion and angle to study the area of interest in detail, without unintentionally colliding with the walls of the colon. The operator can control the camera with a standard interface device, such as a keyboard or mouse, or with a non-standard device, such as a space ball. In order to fully operate a camera in a virtual environment, six degrees of freedom are required: the camera must be able to move in the horizontal, vertical and Z directions (axes 217), and to rotate about another three degrees of freedom (axes 219), so that the camera can move to and scan all sides and angles of a virtual environment. The camera model for guided navigation includes an inextensible, weightless rod 201 connecting two particles, x1 203 and x2 205, both of which are subjected to a potential field. The potential field is defined to be highest at the walls of the organ in order to push the camera away from the walls.
x1 and x2 give the positions of the two particles, and they are assumed to have the same mass m. A camera is attached at the head of the submarine, x1 203, and its viewing direction coincides with $\overrightarrow{x_2 x_1}$. The submarine can perform translation and rotation around the center of mass x of the model as the two particles are affected by the forces from the potential field V(x), which is defined below, by any friction forces, and by any simulated external force. The relations between x1, x2 and x are as follows:

$$\mathbf{x} = (x, y, z),\quad \mathbf{r} = (r\sin\theta\cos\phi,\; r\sin\theta\sin\phi,\; r\cos\theta),\quad \mathbf{x}_1 = \mathbf{x} + \mathbf{r},\quad \mathbf{x}_2 = \mathbf{x} - \mathbf{r},$$

where r, θ and φ are the polar coordinates of the vector $\overrightarrow{x\,x_1}$.
The kinetic energy of the model, T, is defined as the sum of the kinetic energies of the movements of x1 and x2:

$$T = \frac{m}{2}\left(\dot{\mathbf{x}}_1^2 + \dot{\mathbf{x}}_2^2\right) = m\left(\dot{x}^2 + \dot{y}^2 + \dot{z}^2\right) + m r^2\left(\dot{\theta}^2 + \dot{\phi}^2\sin^2\theta\right). \qquad (2)$$

Then, the equations for the motion of the submarine model are obtained by using LaGrange's equation:

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_j}\right) - \frac{\partial T}{\partial q_j} = \sum_{i=1}^{2} \mathbf{F}_i \cdot \frac{\partial \mathbf{x}_i}{\partial q_j}, \qquad (3)$$

where the q_j are the generalized coordinates of the model, which can be regarded as functions of time t:

$$(q_1, q_2, q_3, q_4, q_5, q_6) = (x, y, z, \theta, \phi, \psi) = \mathbf{q}(t), \qquad (4)$$

with ψ denoting the roll angle of the camera system, which will be explained later. The F_i are called the generalized forces. The submarine is controlled by applying a simulated external force to x1,

$$\mathbf{F}_{ext} = (F_x, F_y, F_z),$$

and it is assumed that both x1 and x2 are affected by the forces from the potential field and by friction, which acts in the direction opposite to each particle's velocity. Consequently, the generalized forces are formulated as follows:

$$\mathbf{F}_1 = -m\nabla V(\mathbf{x}_1) - k\dot{\mathbf{x}}_1 + \mathbf{F}_{ext}, \qquad \mathbf{F}_2 = -m\nabla V(\mathbf{x}_2) - k\dot{\mathbf{x}}_2, \qquad (5)$$

where k denotes the friction coefficient of the system. The operator applies the external force F_ext simply by clicking the mouse button in the desired direction 207 in the generated image, as shown in Figure 2. The camera model then moves in that direction. This allows the operator to control at least five degrees of freedom of the camera with a single click of the mouse button. From Equations (2), (3) and (5), the accelerations of the five parameters of the submarine model can be derived as:

$$\ddot{x} = -\frac{1}{2}\left[\frac{\partial V(\mathbf{x}_1)}{\partial x} + \frac{\partial V(\mathbf{x}_2)}{\partial x}\right] - \frac{k}{m}\dot{x} + \frac{F_x}{2m},$$
$$\ddot{y} = -\frac{1}{2}\left[\frac{\partial V(\mathbf{x}_1)}{\partial y} + \frac{\partial V(\mathbf{x}_2)}{\partial y}\right] - \frac{k}{m}\dot{y} + \frac{F_y}{2m},$$
$$\ddot{z} = -\frac{1}{2}\left[\frac{\partial V(\mathbf{x}_1)}{\partial z} + \frac{\partial V(\mathbf{x}_2)}{\partial z}\right] - \frac{k}{m}\dot{z} + \frac{F_z}{2m},$$
$$\ddot{\theta} = \dot{\phi}^2\sin\theta\cos\theta - \frac{1}{2r}\left[\cos\theta\cos\phi\left(\frac{\partial V(\mathbf{x}_1)}{\partial x} - \frac{\partial V(\mathbf{x}_2)}{\partial x}\right) + \cos\theta\sin\phi\left(\frac{\partial V(\mathbf{x}_1)}{\partial y} - \frac{\partial V(\mathbf{x}_2)}{\partial y}\right) - \sin\theta\left(\frac{\partial V(\mathbf{x}_1)}{\partial z} - \frac{\partial V(\mathbf{x}_2)}{\partial z}\right)\right] - \frac{k}{m}\dot{\theta} + \frac{1}{2mr}\left(F_x\cos\theta\cos\phi + F_y\cos\theta\sin\phi - F_z\sin\theta\right),$$
$$\ddot{\phi} = \frac{1}{\sin\theta}\left\{-2\dot{\theta}\dot{\phi}\cos\theta - \frac{1}{2r}\left[-\sin\phi\left(\frac{\partial V(\mathbf{x}_1)}{\partial x} - \frac{\partial V(\mathbf{x}_2)}{\partial x}\right) + \cos\phi\left(\frac{\partial V(\mathbf{x}_1)}{\partial y} - \frac{\partial V(\mathbf{x}_2)}{\partial y}\right)\right] - \frac{k}{m}\dot{\phi}\sin\theta + \frac{1}{2mr}\left(-F_x\sin\phi + F_y\cos\phi\right)\right\}, \qquad (6)$$

where $\dot{x}$ and $\ddot{x}$ denote the first and second derivatives of x, respectively, and $\partial V(\mathbf{x})/\partial x$ denotes the potential gradient at a point x.
The terms $\dot{\phi}^2\sin\theta\cos\theta$ in $\ddot{\theta}$ and $-2\dot{\theta}\dot{\phi}\cos\theta$ in $\ddot{\phi}$ are called the centrifugal force and the Coriolis force, respectively, and they are concerned with the exchange of angular velocities of the submarine. Since the model does not have a moment of inertia defined along the submarine's rod, these terms tend to cause an overflow in the numerical calculation of φ. Fortunately, these terms become significant only when the angular velocities of the submarine model are significant, which essentially means that the camera is moving too fast. Since it makes no sense to allow the camera to move forward so fast that the organ cannot be viewed properly, these terms are minimized in our implementation to avoid the overflow problem.
From the first three formulas of Equation (6), it can be seen that the submarine cannot be propelled by the external force against the potential field if the following condition is satisfied:

$$\left|\mathbf{F}_{ext}\right| \le m\,\left|\nabla V(\mathbf{x})\right|.$$

Since the velocity of the submarine and the external force F_ext have upper limits in our implementation, by assigning sufficiently high potential values at the boundaries of the objects it can be guaranteed that the submarine never collides with the surrounding objects or walls.
As mentioned above, the roll angle ψ of the camera system needs to be considered. One possible option allows the operator full control of the angle ψ. However, although the operator could then rotate the camera freely around the rod of the model, he or she could easily become disoriented. The preferred technique assumes that the up direction of the camera is connected to a pendulum of mass m2 301 that rotates freely around the rod of the submarine, as shown in Figure 3. The direction of the pendulum, r2, is expressed as:

$$\mathbf{r}_2 = r_2\left(\cos\theta\cos\phi\sin\psi + \sin\phi\cos\psi,\;\; \cos\theta\sin\phi\sin\psi - \cos\phi\cos\psi,\;\; -\sin\theta\sin\psi\right).$$
Although it is possible to calculate the precise motion of this pendulum together with the motion of the submarine, doing so makes the system equations too complicated. It is therefore assumed that all generalized coordinates except the roll angle ψ are constants, which defines an independent kinetic energy for the pendulum system:

$$T_p = \frac{m_2}{2}\,\dot{\mathbf{r}}_2^{\,2} = \frac{m_2 r_2^2}{2}\,\dot{\psi}^2.$$

This simplifies the model for the roll angle. Since it is assumed in this model that the gravitational force

$$\mathbf{F}_g = m_2\mathbf{g} = (m_2 g_x,\; m_2 g_y,\; m_2 g_z)$$

acts at the mass point m2, the acceleration of ψ can be derived using LaGrange's equation as:

$$\ddot{\psi} = \frac{1}{r_2}\left[g_x\left(\cos\theta\cos\phi\cos\psi - \sin\phi\sin\psi\right) + g_y\left(\cos\theta\sin\phi\cos\psi + \cos\phi\sin\psi\right) - g_z\sin\theta\cos\psi\right]. \qquad (7)$$

From Equations (6) and (7), the generalized coordinates q(t) and their derivatives $\dot{\mathbf{q}}(t)$ are calculated asymptotically using Taylor series:

$$\mathbf{q}(t+h) = \mathbf{q}(t) + h\,\dot{\mathbf{q}}(t) + \frac{h^2}{2}\,\ddot{\mathbf{q}}(t) + O(h^3),$$
$$\dot{\mathbf{q}}(t+h) = \dot{\mathbf{q}}(t) + h\,\ddot{\mathbf{q}}(t) + O(h^2),$$

to move the submarine freely. To smooth the submarine's motion, the value chosen for the time step h is a trade-off: as small as possible for smooth motion, but as large as necessary to keep the computational cost down.
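The Taylor update above amounts to an explicit time-stepping scheme. The following Python sketch, an illustration rather than the patent's implementation, advances only the translational coordinates (x, y, z) using the first three formulas of Equation (6); the toy potential, mass, friction coefficient and step size are assumed values:

```python
# Sketch: one time step of the submarine's translational motion using the
# second-order Taylor update given above. All numeric values are stand-ins.
import numpy as np

def step_submarine(x, v, h, grad_V, f_ext, m=1.0, k=0.5, r=np.zeros(3)):
    """Advance center-of-mass position x and velocity v by time interval h.

    grad_V: callable returning the potential gradient at a 3D point.
    f_ext:  simulated external force from the operator's mouse click.
    r:      rod half-vector, so that x1 = x + r and x2 = x - r.
    """
    # Acceleration per Equation (6): field force on both particles,
    # friction opposing the velocity, and the operator's external force.
    a = -0.5 * (grad_V(x + r) + grad_V(x - r)) - (k / m) * v + f_ext / (2 * m)
    x_new = x + h * v + 0.5 * h**2 * a   # q(t+h)
    v_new = v + h * a                    # q'(t+h)
    return x_new, v_new

# Toy potential: a quadratic bowl that pulls the camera toward the centerline.
grad = lambda p: 2.0 * p
x, v = np.array([1.0, 0.0, 0.0]), np.zeros(3)
for _ in range(100):
    x, v = step_submarine(x, v, h=0.05, grad_V=grad, f_ext=np.zeros(3))
```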
Definition of the potential field The potential field of the submarine model in Figure 2 defines the boundaries (walls or other matter) in the virtual organ by assigning a high potential to the boundary, in order to ensure that the submarine camera does not collide with the walls or other boundaries. If the operator attempts to move the camera model into a high-potential area, the camera model will be restrained from doing so unless the operator wishes to examine the organ beyond the boundary or inside a polyp, for example. In the case of a virtual colonoscopy, a potential field value is assigned to each piece of volumetric colon data (each volumetric element).
When a particular region of interest is designated in step 105 of Figure 1 with a start point and an end point, the voxels within the selected area of the scanned colon are identified using conventional blocking operations. Subsequently, a potential value is assigned to every voxel x of the selected volume based on the following three distance values: the distance from the end point, dt(x); the distance from the colon surface, ds(x); and the distance from the longitudinal axis of the colon space, dc(x). dt(x) is calculated using a conventional growing strategy. The distance from the colon surface, ds(x), is computed using a conventional growing technique from the surface voxels inwards. To determine dc(x), the longitudinal axis of the colon is first extracted from the voxel data, and dc(x) is then computed using the conventional growing strategy from the longitudinal axis of the colon.
To calculate the longitudinal axis of the selected colon area, defined by the user-specified start and end points, the maximum value of ds(x) is located and denoted dmax. Then, each voxel within the area of interest is assigned a cost value of dmax − ds(x). Thus, voxels close to the colon surface have high cost values, while those close to the longitudinal axis have relatively low cost values. Then, based on the cost assignment, the single-source shortest-path technique, well known in the art, is applied to efficiently compute a minimum-cost path from the start point to the end point. This low-cost line indicates the longitudinal axis, or skeleton, of the colon section to be explored. This technique for determining the longitudinal axis is the preferred technique of the invention.
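As a concrete illustration of this preferred technique, the sketch below runs a standard single-source shortest-path (Dijkstra) computation over the voxel grid with cost dmax − ds(x); the 6-connected neighborhood and the helper names are assumptions, not details taken from the patent:

```python
# Sketch: extracting the longitudinal axis as a minimum-cost voxel path.
# Voxels near the surface are expensive (cost dmax - ds), central ones cheap.
import heapq
import numpy as np

def centerline(ds, inside, start, end):
    """ds: distance-from-surface field; inside: boolean colon mask;
    start, end: (z, y, x) voxel coordinates chosen by the operator."""
    cost = ds.max() - ds
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == end:
            break
        if d > dist.get(u, np.inf):
            continue
        z, y, x = u
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            v = (z + dz, y + dy, x + dx)
            if all(0 <= v[i] < ds.shape[i] for i in range(3)) and inside[v]:
                nd = d + cost[v]
                if nd < dist.get(v, np.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
    path, node = [end], end          # walk back from the end point
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]                # ordered start -> end: the skeleton
```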
To calculate the potential value V(x) of a voxel x within the area of interest, the following formula is employed:

$$V(\mathbf{x}) = C_1\,d_t(\mathbf{x})^{\mu} + C_2\left(\frac{d_s(\mathbf{x})}{d_c(\mathbf{x})}\right)^{-\nu},$$

where C1, C2, μ and ν are constants chosen for the task. In order to avoid any collision between the virtual camera and the virtual colonic surface, a sufficiently large potential value is assigned to all points outside the colon. The gradient of the potential field therefore becomes so significant that the submarine model camera will never collide with the colon wall during operation.
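Once the three distance fields are available, assigning the potential reduces to an elementwise computation. The sketch below follows the formula given above; the constants and the guard value used outside the colon are illustrative assumptions:

```python
# Sketch: computing V(x) from the distance fields dt, ds and dc.
import numpy as np

def potential_field(dt, ds, dc, inside, c1=1.0, c2=1.0, mu=1.0, nu=2.0,
                    v_outside=1e6):
    """dt, ds, dc: distance fields described above; inside: colon mask."""
    eps = 1e-6                                   # avoid division by zero
    v = c1 * dt**mu + c2 * ((ds + eps) / (dc + eps))**(-nu)
    # Assign a very large potential to all points outside the colon so the
    # camera model can never collide with the colon wall.
    return np.where(inside, v, v_outside)
```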
Another technique for determining the longitudinal axis of the route in the colon is called the "detached layer" technique and is shown in Figures 4 to 8.
Figure 4 shows a two-dimensional cross-section of the volumetric colon, together with its two side walls 401 and 403. Two blocking walls are selected by the operator in order to define the section of the colon to be examined. Nothing beyond the blocking walls can be viewed; this helps reduce the number of computations needed to display the virtual representation. The blocking walls, together with the side walls, identify the contained volumetric shape of the colon that is to be explored.
Figure 5 shows the two end points of the virtual examination path: the start volume element 501 and the end volume element 503. The start and end points are selected by the operator in step 105 of Figure 1. The voxels between the start and end points and the colon sides are identified and marked, as indicated by the area designated with "x"s in Figure 6. The voxels are three-dimensional representations of the picture elements.
The detached-layers technique is then applied to the identified and marked voxels of Figure 6. The outermost layer of all the voxels (the layer closest to the colon walls) is peeled off, then the next layer, and so on, until only one inner layer of voxels remains. Stated differently, each voxel furthest from the center point is removed, as long as its removal does not break the connection of the path between the start voxel and the end voxel. Figure 7 shows the intermediate result after a number of peeling iterations have been performed on the virtual colon. The voxels closest to the colon walls have been removed. Figure 8 shows the final flight path of the camera model down the center of the colon after all the peeling iterations are complete. In essence, this produces a skeleton at the center of the colon, which becomes the desired flight path for the camera model.
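A simplified version of this peeling loop can be written with standard morphological operations. The sketch below uses binary erosion as the peeling step and a connected-component check as the stopping test; scipy is an assumed dependency, and a production implementation would instead use a topology-preserving thinning:

```python
# Sketch: peel the marked voxel region layer by layer (Figures 6-8) until
# removing another layer would unlink the start voxel from the end voxel.
import numpy as np
from scipy import ndimage

def peel_to_skeleton(marked, start, end, max_iters=500):
    """marked: boolean mask of voxels between the blocking walls;
    start, end: (z, y, x) coordinates of the path's end points."""
    region = marked.copy()
    for _ in range(max_iters):
        thinner = ndimage.binary_erosion(region)
        thinner[start] = thinner[end] = True   # never peel the end points away
        labels, _ = ndimage.label(thinner)
        if labels[start] != labels[end]:
            break                              # one more peel would unlink them
        region = thinner
    return region                              # remaining central layer
```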
Z-buffer assisted visibility Figure 9 describes a real-time visibility technique for displaying the virtual images seen by the camera model in the three-dimensional virtual volumetric representation of an organ. Figure 9 shows a display technique using a modified Z-buffer, which corresponds to step 109 of Figure 1. The number of voxels that could possibly be seen from the camera model is extremely large. Unless the total number of elements (or polygons) that must be computed and displayed is reduced from the entire set of voxels in the scanned environment, the sheer number of computations makes the display of a large internal area exceedingly slow. In the present invention, however, display computations are needed only for those images that are visible on the colon surface. The scanned environment can be subdivided into smaller sections, or cells. The Z-buffer technique then renders only the portion of the cells that are visible from the camera. The Z-buffer technique is also used for three-dimensional voxel representation. The use of a modified Z-buffer reduces the number of visible voxels that must be computed and allows a physician or medical technician to examine the virtual colon in real time.
The area of interest, for which the longitudinal axis was calculated in step 107, is subdivided into cells before the display technique is applied. Cells are groups of voxels that become a unit of visibility; the voxels in each cell will be displayed as a group. Each cell contains a number of portals through which the other cells can be seen. The colon is subdivided starting at the selected start point and moving along the longitudinal axis 1001 toward the end point. The colon is then partitioned into cells (for example, cells 1003, 1005 and 1007 in Figure 10) whenever a predefined threshold distance along the longitudinal axis is reached. The threshold distance is based on the specifications of the platform on which the visualization technique is executed and on its storage and processing capabilities. The cell size is directly related to the number of voxels that the platform can store and process. One example of a threshold distance is 5 cm, although the distance can vary greatly. Each cell has two cross-sections that act as portals for viewing outside of the cell, as shown in Figure 10.
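The subdivision itself only requires walking the computed longitudinal axis and accumulating distance. The following sketch, with an assumed point-list representation of the axis, starts a new cell each time the threshold distance has been covered:

```python
# Sketch: cutting the colon into visibility cells every ~5 cm of centerline,
# as in Figure 10. Units and the axis representation are assumptions.
import numpy as np

def subdivide_into_cells(axis_points, threshold_cm=5.0):
    """axis_points: ordered (z, y, x) samples of the longitudinal axis, in cm."""
    cells, current, travelled = [], [axis_points[0]], 0.0
    for a, b in zip(axis_points, axis_points[1:]):
        travelled += float(np.linalg.norm(np.subtract(b, a)))
        current.append(b)
        if travelled >= threshold_cm:   # portal: cell boundary cross-section
            cells.append(current)
            current, travelled = [b], 0.0
    if len(current) > 1:
        cells.append(current)
    return cells
```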
Step 901 of Figure 9 identifies the cell within the selected organ that currently contains the camera. That cell will be displayed along with all the other cells that are visible given the camera's orientation. In step 903, a hierarchical tree diagram of the cells potentially visible from the camera (through defined portals) is built, as described in more detail below. The tree diagram contains one node for every cell that may be visible to the camera. Some of the cells may be transparent, having no blocking bodies present, so that more than one cell may be visible in a single direction. In step 905, the subset of voxels of a cell that is intersected by the edges of adjacent cells is stored at the outer edge of the tree diagram in order to determine more efficiently which cells are visible.
In step 907, the tree diagram is checked for any double nodes. A double node occurs when two or more edges of a single cell border on the same adjacent cell. This can occur when a single cell is surrounded by another cell. If a double node is identified in the tree diagram, the method continues at step 909. If there are no double nodes, the process proceeds to step 911.
In step 909, the two cells forming the double node are collapsed to form a large node. This corrects the tree diagram and eliminates the problem of seeing the same cell twice due to the double node. This step is repeated with each of the double nodes that have been detected. The process subsequently proceeds to step 911.
In step 911 the Z-buffer is initialized with the largest Z value, where the Z value defines the distance from the camera along the skeleton path. The tree is then traversed, checking the intersection values at each node. If a node intersection is covered, meaning that the current portal sequence is occluded (as determined by the Z-buffer test), the traversal of that branch of the tree is stopped. In step 913 each branch is traversed to check whether its nodes are covered, and they are displayed if not.
In step 915 the image to be displayed on the operator's screen is constructed from the volumetric elements within the visible cells identified in step 913.
This is accomplished using one of a number of techniques known in the art, such as volume rendering by compositing. Only those cells identified as potentially visible are displayed.
This technique limits the number of cells that require computation in order to achieve real-time display and correspondingly increases the display speed for better performance. It is an improvement over prior techniques, which compute all the possibly visible data points whether or not they can actually be seen.
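Conceptually, steps 901 through 913 amount to a pruned traversal of the cell adjacency graph. The sketch below abstracts the Z-buffer occlusion test into a callable and is an illustrative reading of the method, not the patent's code:

```python
# Sketch: collect the cells to render, starting from the camera's cell and
# pruning any branch whose portal is already occluded (the Z-buffer test).
def visible_cells(graph, camera_cell, portal_occluded):
    """graph: dict mapping each cell to the cells seen through its portals."""
    visible, stack, seen = [], [camera_cell], {camera_cell}
    while stack:
        cell = stack.pop()
        visible.append(cell)              # this cell must be rendered
        for neighbor in graph.get(cell, ()):
            if neighbor not in seen and not portal_occluded(cell, neighbor):
                seen.add(neighbor)
                stack.append(neighbor)
    return visible

# Example mirroring Figure 11A, with cell C already merged into B':
graph = {"A": ["B'"], "B'": ["D"], "D": []}
print(visible_cells(graph, "A", lambda a, b: False))  # ['A', "B'", 'D']
```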
Figure 11A is a pictorial representation of an organ that is being explored by guided navigation and needs to be displayed to an operator. The organ 1101 shows two side walls 1102 and an object 1105 in the center of the path. The organ has been divided into four cells: A 1151, B 1153, C 1155 and D 1157. The camera 1103 is facing toward cell D 1157 and has a field of vision defined by vision vectors 1107, 1108 which identify a cone-shaped field. The cells that can potentially be viewed are B 1153, C 1155 and D 1157. Cell C 1155 is completely surrounded by cell B and thus constitutes a double node.
Figure 11B is a representation of the tree diagram built from the cells in Figure 11A. Node A 1109, which contains the camera, is at the root of the tree. A sight line or sight cone, an unblocked viewing path, is drawn to node B 1110. Node B has direct visible sight lines to both node C 1112 and node D 1114, which are indicated by the connecting arrows. The sight line of node C 1112 in the direction of the camera combines with node B 1110. Node C 1112 and node B 1110 are therefore collapsed into one large node B' 1122, as shown in Figure 11C.
Figure 11C shows node A 1109, which contains the camera and is adjacent to node B' 1122 (which contains both node B and node C) and to node D 1114. Nodes A, B' and D will be displayed at least partially to the operator.
Figures 12A-12E illustrate the use of the modified Z-buffer with cells that contain objects which block the view. An object could be some waste material in a portion of the virtual colon. Figure 12A shows a virtual environment with 10 potential cells: A 1251, B 1253, C 1255, D 1257, E 1259, F 1261, G 1263, H 1265, I 1267 and J 1269. Some of the cells contain objects. If the camera 1201 is positioned in cell I 1267 and is facing toward cell F 1261, as indicated by the vision vectors 1203, then a tree diagram is generated according to the technique illustrated by the flow diagram of Figure 9. Figure 12B shows the generated tree diagram with the intersection nodes for the virtual representation shown in Figure 12A. Figure 12B shows cell I 1267 as the root node of the tree because it contains the camera 1201. Node I 1211 points to node F 1213 (as indicated by an arrow), because cell F is directly connected to the sight line of the camera. Node F 1213 points to both node B 1215 and node E 1219. Node B 1215 points to node A 1217. Node C 1202 is completely blocked from the line of sight of the camera 1201, so it does not appear in the tree diagram.
Figure 12C shows the tree diagram after node I 1211 has been displayed on the operator's monitor. Node I 1211 is then removed from the tree diagram because it has already been displayed, and node F 1213 becomes the root. Figure 12D shows that node F 1213 is now rendered to join node I 1211. The next nodes in the tree connected by arrows are then checked to see if they are already covered (already processed). In this example, all of the intersected nodes seen from the camera placed in cell I 1267 have been covered, so that node B 1215 (and therefore its dependent node A 1217) does not need to be displayed on the monitor.
Figure 12E shows node E 1219 being checked to determine whether its intersection has been covered. Since it has, the only nodes rendered in this example of Figures 12A-12E are nodes I and F, while nodes A, B and E are not visible and their cells do not need to be prepared for display. The modified Z-buffer technique described in Figure 9 allows for fewer computations and can be applied to an object that has been represented by voxels or by other data elements, such as polygons.
Figure 13 shows a two-dimensional virtual view of a colon with a large polyp present along one of its walls. Figure 13 shows a selected section of a patient's colon to be examined further. The view shows two colon walls 1301 and 1303, with the growth indicated as 1305. Layers 1307, 1309 and 1311 are inner layers of the growth. It is desirable for a physician to be able to peel away the layers of the polyp or tumor to look inside the mass for any cancerous or otherwise harmful material. This process in effect performs a virtual biopsy of the mass without surgery. Once the colon is represented virtually by voxels, the process of peeling away layers of an object is easily performed in a manner similar to that described in conjunction with Figures 4 to 8. The mass can also be sliced so that a particular cross-section can be examined. In Figure 13, a planar cut 1313 can be made so that a particular portion of the growth can be examined. Additionally, a user-defined slice 1319 can be made in any manner through the growth. The voxels 1319 can be peeled away or otherwise modified, as explained below.
A transfer function can be applied to each voxel in the area of interest, making the object transparent, semi-transparent or opaque by altering coefficients that represent the translucency of each voxel. An opacity coefficient is assigned to each voxel based on its density. A mapping function then transforms the density value into a coefficient representing its translucency. A high-density voxel will indicate either a wall or other dense matter, rather than open space. An operator or a program routine can then change the opacity coefficient of a voxel, or of a group of voxels, to make them appear transparent or semi-transparent to the submarine camera model. For example, an operator can view a tumor inside or outside an entire growth. Or a transparent voxel can be made to appear as if it were not present for the display step of Figure 9. A composite of a section of the object can be created using a weighted average of the opacity coefficients of the voxels in that section.
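A minimal sketch of such a transfer function is given below; the density breakpoints and the transparent band are illustrative stand-ins for the density ranges discussed in this specification:

```python
# Sketch: map voxel density to an opacity coefficient, optionally forcing
# one density band (e.g. tagged stool) to zero opacity so that material
# becomes transparent to the virtual camera.
import numpy as np

def opacity(density, transparent_range=None):
    """Return opacities in [0, 1]; higher density -> more opaque."""
    alpha = np.clip((density - 100.0) / 900.0, 0.0, 1.0)
    if transparent_range is not None:          # electronically remove a material
        lo, hi = transparent_range
        alpha = np.where((density >= lo) & (density <= hi), 0.0, alpha)
    return alpha

densities = np.array([50.0, 500.0, 2300.0])
print(opacity(densities, transparent_range=(2200.0, 2500.0)))  # [0. 0.444... 0.]
```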
If a physician wishes to view the various layers of a polyp to look for cancerous areas, this can be done by removing the outer layer of polyp 1305, yielding a first inner layer 1307. Additionally, the first inner layer 1307 can be peeled back to view a second inner layer 1309, the second inner layer can be peeled back to view a third inner layer 1311, and so on. The physician can also slice the polyp 1305 and view only those voxels within a desired section. The slicing area can be completely user-defined.
Adding an opacity coefficient can also be used in other ways to aid the exploration of a virtual system. If waste material is present, with a density and other properties within a certain known range, the waste can be made transparent to the virtual camera by changing its opacity coefficient during the examination. This spares the patient from having to ingest a bowel cleansing agent before the procedure and makes the examination faster and easier. Other objects can be made to disappear in a similar fashion, depending on the actual application. Additionally, some objects, such as polyps, can be enhanced electronically by a contrast agent followed by the use of an appropriate transfer function.
Figure 14 shows a system for performing the virtual examination of an object, such as a human organ, using the techniques described in this specification. Patient 1401 lies on a platform 1402 while the scanning device 1405 scans the area containing the organ or organs to be examined. The scanning device 1405 contains a scanning portion 1403, which takes the images of the patient, and an electronics portion 1406. The electronics portion 1406 comprises an interface 1407, a central processing unit (CPU) 1409, a memory 1411 for temporarily storing the scan data, and a second interface 1413 for sending the data to the virtual navigation platform. Interfaces 1407 and 1413 can be included in a single interface component, or they can be the same component.
The components in portion 1406 are connected by conventional connectors. In system 1400, the data from the scanning portion 1403 of the device is transferred to the electronics portion 1406 for processing and is stored in memory 1411. The central processing unit 1409 converts the scanned two-dimensional data into three-dimensional voxel data and stores the results in another portion of memory 1411. Alternatively, the converted data can be sent directly to the interface unit 1413 to be transferred to the virtual navigation terminal 1416. The conversion of the two-dimensional data could also take place at the virtual navigation terminal 1416 after the data has been transmitted from interface 1413. In the preferred embodiment, the converted data is transmitted over a carrier wave 1414 to the virtual navigation terminal 1416 so that an operator can perform the virtual examination. The data can also be transferred by other conventional means, such as storing it on a storage medium and physically transporting it to terminal 1416, or by satellite transmission.
The scanned data will not be converted to its three-dimensional representation until it is required by the machine that generates the visualization. In this way, computational steps and memory storage space are avoided.
The virtual navigation terminal 1416 includes a monitor 1417 for viewing the virtual organ or other scanned image, an electronics portion 1415 and an interface control 1419 such as a keyboard, mouse or space ball. The electronics portion 1415 comprises an interface port 1421, a central processing unit 1423, other components 1427 necessary to run the terminal, and a memory 1425. The components of terminal 1416 are connected by conventional connectors. The converted voxel data is received at interface port 1421 and stored in memory 1425. The central processing unit 1423 then assembles the three-dimensional voxels into a virtual representation and runs the submarine camera model described in Figures 2 and 3 to perform the virtual examination. As the submarine camera travels through the virtual organ, the visibility technique described in Figure 9 is used to compute only those areas visible from the virtual camera and to display them on monitor 1417. A graphics accelerator can also be used in generating the representations. The operator can use interface device 1419 to indicate which portion of the scanned body should be explored. Interface device 1419 can further be used to control and move the submarine camera as desired, as described in Figure 2 and its accompanying description. Terminal portion 1415 can be, for example, the dedicated Cube-4 system, available from the Department of Computer Science at the State University of New York at Stony Brook.
Scanning device 1405 and terminal 1416, or parts thereof, can be part of the same unit. A single platform would be used to receive the scanned image data, convert it to three-dimensional voxels if necessary, and perform the guided navigation. An important feature of system 1400 is that the virtual organ can be examined at a later time without the patient being present. Additionally, the virtual examination could take place while the patient is being scanned. The scan data can also be sent to multiple terminals, which would allow more than one doctor to view the inside of the organ simultaneously. Thus a doctor in New York could be viewing the same portion of a patient's organ at the same time as a doctor in California while the two discuss the case. Alternatively, the data can be viewed at different times. Two or more doctors could perform their own examinations of the same data in a difficult case. Multiple virtual navigation terminals can be used to view the same scan data. By reproducing the organ as a virtual organ with a discrete set of data, there are numerous benefits in areas such as accuracy, cost and possible data manipulation.
The techniques described above can be further improved for virtual colonoscopy applications through the use of an electronic colon cleansing technique, which employs modified bowel preparation operations followed by image segmentation operations, so that fluid and stool remaining in the colon during a computed tomography (CT) or magnetic resonance imaging (MRI) scan can be detected and removed from the virtual colonoscopy images. Through the use of such techniques, the inconvenience and discomfort of conventional physical washing of the colon is minimized or avoided entirely.
Referring to Figure 15, the first step in electronic colon cleansing is bowel preparation (step 1510), which takes place prior to the CT or MRI scan and is intended to create conditions under which residual stool and fluid remaining in the colon present image properties entirely different from those of the gas-filled colon interior and of the colon wall. An exemplary bowel preparation operation includes ingesting three 250 cc doses of a 2.1% W/V barium sulfate suspension, such as that manufactured by EZ-EM, Inc. of Westbury, New York, during the day before the CT or MRI scan. The three doses should be spread out over the course of the day and can be taken together with the three meals. The barium sulfate serves to enhance the image of any stool remaining in the colon. In addition to the intake of barium sulfate, fluid intake is preferably increased during the day before the scan. Cranberry juice is preferred because it provides increased bowel fluids, although water can also be ingested. To enhance the image properties of the colonic fluid, 60 ml of a diatrizoate meglumine and diatrizoate sodium solution, manufactured under the brand name MD-Gastroview by Mallinckrodt, Inc. of St. Louis, Missouri, should be consumed during the evening before the scan. Sodium phosphate can also be added to the solution to liquefy the stool in the colon, providing more uniform enhancement of the colonic fluid and of the residual stool.
The preliminary operation to prepare the intestines described above by way of example can make conventional colonic lavage protocols unnecessary, which require ingesting a gallon of Golytely solution before a tomography.
To minimize collapse of the colon, 1 ml of glucagon, manufactured by Eli Lilly and Company of Indianapolis, Indiana, can be administered by intravenous injection just before the CT scan. The colon can then be insufflated with approximately 1000 cc of compressed gas, such as CO2, or of room air, introduced through a rectal tube. After this, a conventional CT scan is performed to acquire data from the region of the colon (step 1520). For example, data can be acquired using a GE/CTI spiral scanner operating in helical mode with 5 mm spacing between spirals and a pitch of 1.5-2.0:1, with the pitch adjusted, as is customary, according to the patient's height. A routine imaging protocol of 120 kVp and 200-280 mA can be used for this operation. The data can be acquired and reconstructed as 1-mm-thick slice images with an array size of 512 by 512 pixels in the field of view, which varies from 34 to 40 cm depending on the patient's size. The number of such slices under these conditions usually varies from 300 to 450, depending on the patient's height. The image data set is converted into volume elements, or voxels (step 1530).
Image segmentation can be performed in a number of ways. In one present method of image segmentation, a local neighbor technique is used to classify the voxels of the image data according to similar intensity values. In this method, each voxel of the acquired image is evaluated with respect to a group of neighboring voxels. The voxel of interest is referred to as the central voxel and has an associated intensity value. A classification indicator for each voxel is established by comparing the value of the central voxel with the value of each of its neighbors. If a neighbor has the same value as the central voxel, the value of the classification indicator is incremented; if the neighbor has a value different from that of the central voxel, the classification indicator for the central voxel is decremented. The central voxel is then classified into the category that yields the maximum indicator value, which indicates the most uniform neighborhood among the local neighbors. Each classification corresponds to a particular intensity range, which in turn represents one or more types of material in the image. The method can be further refined by applying a mixture probability function to the similarity classifications obtained.
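The sketch below gives a simplified, vectorized reading of this local neighbor evaluation, comparing each voxel against its six face neighbors. The coarse class boundaries are stand-ins, and the actual method compares similarity classifications per category rather than thresholded bins:

```python
# Sketch: increment an indicator for neighbors in the same intensity class
# as the central voxel and decrement it for differing neighbors.
import numpy as np

def classify_local(volume, thresholds=(140, 900, 1080, 2200)):
    bins = np.digitize(volume, thresholds)       # coarse class per voxel
    indicator = np.zeros(volume.shape, dtype=np.int16)
    for axis in range(3):
        for shift in (1, -1):
            # np.roll wraps at the volume borders; a real implementation
            # would pad instead.
            neighbor = np.roll(bins, shift, axis=axis)
            indicator += np.where(neighbor == bins, 1, -1)
    # Low indicator values flag mixed neighborhoods worth re-examination,
    # e.g. with the mixture probability function mentioned above.
    return bins, indicator
```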
A second image segmentation process is carried out as two main operations: low-level processing and high-level feature extraction. During low-level processing, regions outside the body contour are excluded from further processing, and voxels within the body contour are roughly classified according to well-defined intensity classes. For example, a computed tomography of the abdominal region generates a data series that tends to show a well-defined intensity distribution. The graph of Figure 16 illustrates such an intensity distribution as a typical histogram with four well-defined peaks (1602, 1604, 1606 and 1608) that can be separated according to intensity thresholds.
The voxels of the abdominal tomography data series are roughly classified by intensity threshold into four groups (step 1540). For example, Grouping 1 may include voxels with intensity less than 140; this grouping generally corresponds to the regions of lowest density, inside the gas-filled colon. Grouping 2 may include voxels with intensity values greater than 2200; these intensity values correspond to enhanced feces and fluid within the colon, as well as to bone. Grouping 3 may include voxels with intensities in the range of 900 to 1080; this intensity range generally represents soft tissue, such as fat and muscle, which is unlikely to be associated with the colon wall. The remaining voxels can be grouped together as Grouping 4, and probably relate to the colon wall (including the mucosa and partial-volume mixtures around the colon wall), as well as lung tissue and soft bone.
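The coarse four-group split can be sketched as follows, using the example thresholds quoted above (the exact values vary with the scanner and imaging protocol); the helper name is hypothetical.

    import numpy as np

    def coarse_clusters(volume):
        """Assign each voxel to one of the four example groupings by intensity threshold."""
        clusters = np.zeros(volume.shape, dtype=np.uint8)
        clusters[volume < 140] = 1                        # Grouping 1: gas-filled colon interior
        clusters[volume > 2200] = 2                       # Grouping 2: enhanced stool/fluid, bone
        clusters[(volume >= 900) & (volume <= 1080)] = 3  # Grouping 3: fat, muscle
        clusters[clusters == 0] = 4                       # Grouping 4: colon wall, lung, soft bone
        return clusters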
Groupings 1 and 3 are not particularly useful for identifying the colon wall and, therefore, are not subjected to substantial processing during the image segmentation procedures for virtual colonoscopy. The voxels of Grouping 2 are important for separating feces and fluid from the colon wall, so they receive further processing during the high-level feature extraction operations. Low-level processing is concentrated on the fourth grouping, which is most likely to correspond to colon tissue (step 1550).
For each voxel in the fourth grouping, an intensity vector is generated using the voxel itself and its neighbors. The intensity vector provides an indication of the change in intensity in the immediate vicinity of a given voxel. The number of neighbor voxels used to establish the intensity vector is not critical, but implies a trade-off between processing cost and accuracy. For example, a simple voxel intensity vector can be established with seven (7) voxels: the voxel of interest, its front and back neighbors, its left and right neighbors, and its top and bottom neighbors, which surround the voxel of interest on three mutually perpendicular axes. Figure 17 is a perspective view of a typical intensity vector in the form of a 25-voxel intensity vector model, which includes the selected voxel 1702 as well as its first-, second- and third-order neighbors. The selected voxel 1702 is the central point of this model and is called the fixed voxel. A planar slice of voxels, which includes 12 neighbors in the same plane as the fixed voxel, is called the fixed slice 1704. On the planes adjacent to the fixed slice are the two nearest slices 1706, with five voxels each. Adjacent to these first nearest slices 1706 are the two next nearest slices 1708, each of which contains a single voxel. The collection of intensity vectors for each voxel in the fourth grouping is called the series of local vectors.
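A sketch of assembling such a 25-voxel local intensity vector is given below; the exact offset layout (12 in-plane neighbors, five in each adjacent slice, one in each next nearest slice) is one plausible reading of the model of Figure 17, not a definitive specification.

    import numpy as np

    # One plausible 25-voxel layout: center + 12 in-plane + 2x5 adjacent + 2x1 next slices.
    IN_PLANE = [(0, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    IN_PLANE += [(0, -2, 0), (0, 2, 0), (0, 0, -2), (0, 0, 2)]            # 12 in-plane neighbors
    ADJACENT = [(dz, dy, dx) for dz in (-1, 1)
                for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1))]  # 5 per adjacent slice
    NEXT = [(-2, 0, 0), (2, 0, 0)]                                         # 1 per next slice
    OFFSETS = [(0, 0, 0)] + IN_PLANE + ADJACENT + NEXT                     # 25 voxels total

    def local_vector(volume, z, y, x):
        """Gather the 25 intensities surrounding the fixed voxel at (z, y, x)."""
        return np.array([volume[z + dz, y + dy, x + dx] for dz, dy, dx in OFFSETS])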
Because the data series corresponding to an abdominal image usually includes more than 300 slice images, each with a matrix of 512x512 voxels, and each voxel has an associated 25-voxel local vector, it is advisable to apply a feature analysis (step 1560) to the series of local vectors to reduce the computational burden. One such feature analysis is principal component analysis (PCA), which can be applied to the series of local vectors to determine the dimension of a series of feature vectors and an orthogonal transformation matrix for the voxels of Grouping 4.
It has been found that the histogram (Figure 16) of the intensity of the tomographic images tends to be fairly constant from patient to patient for a given scanner, given equivalent preparation and scanning parameters. Based on this observation, the orthogonal transformation matrix can be established as a predetermined matrix obtained by using several series of training data acquired with the same scanner under similar conditions. From these data, a transformation matrix such as the Karhunen-Loève (K-L) transform can be derived in a known manner. The transformation matrix is applied to the series of local vectors to obtain a series of feature vectors. Once in the spatial domain of the feature vectors, vector quantization techniques can be used to classify the series of feature vectors.
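A minimal sketch of deriving such an orthogonal transformation from training vectors, and of projecting local vectors into feature space, is given below; retaining four components matches the R^4 feature vectors used in the classification that follows, and the function names are illustrative.

    import numpy as np

    def kl_transform_matrix(training_vectors, n_components=4):
        """Derive a 25 x n_components orthogonal (Karhunen-Loeve style) transform."""
        centered = training_vectors - training_vectors.mean(axis=0)
        cov = np.cov(centered, rowvar=False)            # 25 x 25 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]               # largest variance first
        return eigvecs[:, order[:n_components]]

    def to_feature_vectors(local_vectors, transform):
        """Project the N x 25 local vectors to N x 4 feature vectors."""
        return local_vectors @ transform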
A self-adaptive analytical algorithm can be used to classify the feature vectors. In defining this algorithm, let {X_i ∈ R^4 : i = 1, 2, 3, ..., N} be the series of feature vectors, where N is the number of feature vectors, let K denote the maximum number of classes, and let T be a threshold that adapts to the data series. For each class, the algorithm generates a representative element. Let a_k be the representative element of class k, and let n_k be the number of feature vectors in that class.
The algorithm can then be outlined as follows (K' denotes the current number of classes):

    1. Set n_1 = 1; a_1 = X_1; K' = 1;

    2. Obtain the class number K' and the class parameters (a_k, n_k):
       for (i = 1; i < N; i++)
           for (j = 1; j < K'; j++)
               calculate d_j = dist(X_i, a_j);
           index = arg min d_j;
           if ((d_index < T) or (K' = K))
               update the class parameters:
                   a_index = (n_index * a_index + X_i) / (n_index + 1);
                   n_index = n_index + 1;
           else
               generate a new class:
                   a_(K'+1) = X_i; n_(K'+1) = 1; K' = K' + 1;

    3. Label each feature vector to a class according to the nearest-neighbor rule:
       for (i = 1; i < N; i++)
           for (j = 1; j < K'; j++)
               calculate d_j = dist(X_i, a_j);
           index = arg min d_j;
           label voxel i with class index.

In this algorithm, dist(x, y) is the Euclidean distance between vector x and vector y, and arg min d_j gives the integer j that yields the minimum value of d_j.

The algorithm described above depends only on the parameters T and K. However, the value of K, which relates to the number of classes within each group of voxels, is not critical and can be set to a constant value such as K = 18. In contrast, T, the vector similarity threshold, greatly influences the classification results. If the chosen value of T is too large, only a single class will be generated; if the value of T is too small, the resulting classes will exhibit undesirable redundancy. By setting the value of T equal to the maximum component variation of the series of feature vectors, the maximum number of distinct classes is obtained.

As a result of this initial classification process, each voxel selected within the grouping is assigned to a class (step 1570). In this exemplary case of virtual colonoscopy, there are several classes within Grouping 4. Therefore, the next task is to determine which of the several classes in Grouping 4 corresponds to the colon wall. The first coordinate of the feature vector, which is the one that shows the greatest variation, reflects the average intensity of the three-dimensional local voxels. The remaining coordinates of the feature vector contain the directional intensity change information within the local neighborhood. Because the colon wall voxels are usually very close to the gas voxels of Grouping 1, a threshold interval can be determined by choosing data samples of typical colon wall intensities from typical tomography data, in order to roughly distinguish the colon wall voxel candidates. A particular threshold value is chosen for each particular imaging protocol and device. This threshold interval can then be applied to all tomography data series obtained from the same machine using the same imaging protocol. If the first coordinate of the representative element falls within the threshold interval, the corresponding class is considered to be the colon wall class, and all the voxels in that class are labeled as colon-wall-like voxels.
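For reference, a minimal Python rendering of the self-adaptive classification algorithm above might read as follows; the constants reflect the text (K = 18, T set to the maximum component variation of the data), and the helper name is an assumption.

    import numpy as np

    def self_adaptive_classify(X, K=18, T=None):
        """Online clustering of N x 4 feature vectors, then nearest-neighbor labeling."""
        if T is None:
            # Maximum per-component variation of the feature vectors, as suggested above.
            T = float(np.max(X.max(axis=0) - X.min(axis=0)))
        reps = [X[0].copy()]                 # class representatives a_k
        counts = [1]                         # class sizes n_k
        for x in X[1:]:
            d = [np.linalg.norm(x - a) for a in reps]
            j = int(np.argmin(d))
            if d[j] < T or len(reps) == K:   # absorb into the nearest class
                reps[j] = (counts[j] * reps[j] + x) / (counts[j] + 1)
                counts[j] += 1
            else:                            # open a new class
                reps.append(x.copy())
                counts.append(1)
        # Final nearest-neighbor labeling pass.
        labels = np.array([int(np.argmin([np.linalg.norm(x - a) for a in reps]))
                           for x in X])
        return labels, np.array(reps)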
Each colon-wall-like voxel is a candidate colon wall voxel. There are three cases in which a voxel does not belong to the colon wall. The first case involves voxels close to the stool/fluid inside the colon. The second case occurs when voxels lie in lung tissue regions. The third case involves mucosa voxels. Clearly, then, the low-level classification carries a degree of uncertainty, and the causes of that uncertainty vary. For example, the partial volume effect, in which voxels contain more than one type of material (i.e., fluid and colon wall), leads to the first case of uncertainty, while the second and third cases are due both to the partial volume effect and to the low contrast of the tomographic images. Additional information is required to resolve this uncertainty. Therefore, a high-level feature extraction procedure is used in the present method to further distinguish candidates for colon wall voxels from other colon-wall-like voxels, based on a priori anatomical knowledge of the tomographic images (step 1580).
An initial step of the high-level feature extraction procedure can be to remove the lung tissue region from the results of the low-level classification. Figure 18A is a typical slice image that clearly shows the lung region 1802. The lung region 1802 is identifiable as a generally contiguous three-dimensional volume bounded by colon-wall-like voxels, as illustrated in Figure 18B. Given this characteristic, the lung region can be identified using a region-growing strategy. The first step of this technique is to find a seed voxel within the region to be grown. The operator performing the tomography usually sets the imaging range so that the first slice of the tomography does not contain any colon voxels. Because the interior of the lung is filled with air, the low-level classification provides the seed simply by choosing an air voxel. Once the contour of the lung region of Figure 18B is determined, the lung volume can be removed from the image slice (Figure 18C).
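A minimal sketch of the region-growing step is shown below as a breadth-first flood fill over the air class, assuming a class-label volume like the one produced by the low-level classification; the seed selection and 6-connectivity are illustrative choices.

    from collections import deque
    import numpy as np

    def grow_region(classes, seed, target_class=1):
        """Flood-fill the contiguous region of `target_class` voxels reachable from `seed`."""
        grown = np.zeros(classes.shape, dtype=bool)
        queue = deque([seed])
        grown[seed] = True
        neighbors = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbors:
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < classes.shape[0] and 0 <= ny < classes.shape[1]
                        and 0 <= nx < classes.shape[2] and not grown[nz, ny, nx]
                        and classes[nz, ny, nx] == target_class):
                    grown[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
        return grown  # True inside the contiguous air region reached from the seed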
A next step in the high-level feature extraction can be to separate the bone voxels from the enhanced stool/fluid voxels in Grouping 2. The bone tissue voxels 1902 are usually relatively far from the colon wall and outside the colon volume. Conversely, the residual feces 1906 and fluid 1904 are contained within the colon volume. The approximate colon volume is obtained by combining the a priori proximity information with the colon wall information obtained from the low-level classification process. Any voxel separated by more than a predetermined number (e.g., 3) of voxel units from the colon wall, and lying outside the colon volume, is labeled as bone and subsequently removed from the image (a sketch of this distance test appears below). The remaining voxels in Grouping 2 can be assumed to represent fecal matter and fluid within the colon volume (see Figures 19A-C).

The voxels within the colon volume identified as feces 1906 and fluid 1904 can be removed from the image to generate a clean image of the colon lumen and wall. In general, there are two kinds of stool/fluid regions. One kind consists of small residual areas of feces 1906 attached to the colon wall. The other kind consists of large volumes of fluid 1904 that collect in basin-like colonic folds (see Figures 19A-C). The regions of residual feces attached to the colon wall 1906 can be identified and removed because they lie within the approximate colon volume generated during the low-level classification process. The fluid 1904 in the colonic folds usually has a horizontal surface 1908 due to the effect of gravity, and a gas region of very high contrast with respect to the fluid intensity always lies above that surface. Therefore, the surface interface of the fluid regions is easy to mark.
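The distance test for separating bone from enhanced stool and fluid could be sketched as follows, assuming SciPy's Euclidean distance transform is available; the 3-voxel cutoff follows the example in the text, and the mask names are hypothetical.

    import numpy as np
    from scipy import ndimage

    def split_cluster2(cluster2_mask, colon_volume_mask, max_dist=3):
        """Split Grouping 2 voxels into bone vs. stool/fluid by distance from the colon."""
        # Distance (in voxel units) from each voxel to the nearest colon-volume voxel.
        dist = ndimage.distance_transform_edt(~colon_volume_mask)
        bone = cluster2_mask & (dist > max_dist)   # far from the colon volume: bone
        stool_fluid = cluster2_mask & ~bone        # inside or near the colon: stool and fluid
        return bone, stool_fluid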
Using a region-growing strategy, the contour of the regions of feces 1906 attached to the colon wall can be delineated, and the portion remote from the colon wall volume can be removed. Similarly, the contour of the fluid regions 1904 can be delineated. After the horizontal surfaces 1908 are removed, the colon wall contour is revealed and a clean colon wall is obtained.
It is difficult to distinguish mucosa voxels from colon wall voxels. Although the three-dimensional processing described above can remove some mucosa voxels, it is difficult to remove all of them. In optical colonoscopy, physicians inspect the colonic mucosa directly and search for lesions based on its color and texture. In virtual colonoscopy, most of the mucosa voxels on the colon wall can be left intact in order to preserve more information, which can be very useful for three-dimensional volume rendering.
It is possible to extract the inner and outer surfaces of the colon, as well as the colon wall itself, from the segmented colon wall volume, and to view them as virtual objects. This represents a clear advantage over conventional optical colonoscopy, because both the exterior and the interior of the colon wall can be examined. Moreover, the colon wall and the colon lumen can be obtained separately from the segmentation.
Due to the substantial evacuation of the colon prior to imaging, collapse of the colon lumen in some segments is a common problem. Although insufflating the colon with compressed gas, air or CO2, reduces the frequency of collapsed regions, it does not eliminate them entirely. When performing a virtual colonoscopy, it is advisable to automatically continue the navigation route through the collapsed regions and to use the graphic data from the scan to at least partially recreate the colon lumen in those regions. Since the image segmentation methods described above yield both the inner and outer colon walls, this information can be used to improve the determination of the route through the collapsed regions.
The first step in extending the route through collapsed regions of the colon, or in distending those regions, is to detect them. To detect areas where the colon has collapsed, an entropy analysis can be used, based on the premise that the grayscale values of the graphic data outside the colon wall change more markedly than the grayscale values within the colon wall itself and within other regions such as fat, muscle and other kinds of tissue.
The degree of change in grayscale value, for example along the longitudinal axis, can be expressed and measured as an entropy value. To calculate the entropy value, voxels on the outer surface of the colon wall are selected; such points have already been identified by the image segmentation techniques described above. A 5x5x5 cubic window can be applied to the pixels, centered on the pixel of interest. Before the entropy value is calculated, a smaller (3x3x3) window can be applied to the pixels of interest in order to filter noise out of the graphic data. The entropy value of a window around a pixel can then be determined by the equation

    E = Σ C(i) ln(C(i)),

where E is the entropy and C(i) is the number of points in the window with the grayscale value i (i = 0, 1, 2, ..., 255). The entropy values calculated for each window are then compared against a predetermined threshold value. For air regions, the entropy values will be much smaller than for tissue regions. Therefore, a collapsed region is indicated along the longitudinal axis of the colon lumen where the entropy values increase and exceed the predetermined threshold value. The exact threshold value is not critical and will depend in part on the imaging protocol and the particulars of the imaging device.
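A minimal sketch of this entropy measure, assuming intensities rescaled to 0-255 gray levels and NumPy/SciPy, follows; the mean filter standing in for the 3x3x3 noise-filtering window is an assumption, as the text does not specify the filter type.

    import numpy as np
    from scipy import ndimage

    def denoise(volume):
        """3x3x3 smoothing pass before the entropy computation (mean filter assumed)."""
        return ndimage.uniform_filter(volume.astype(float), size=3)

    def window_entropy(volume, z, y, x, half=2):
        """E = sum over i of C(i) * ln(C(i)) for the 5x5x5 window centered at (z, y, x)."""
        window = volume[z-half:z+half+1, y-half:y+half+1, x-half:x+half+1]
        gray = np.clip(window, 0, 255).astype(np.int64).ravel()
        counts = np.bincount(gray, minlength=256)     # C(i) for i = 0..255
        nonzero = counts[counts > 0]                  # ln(0) terms are excluded
        return float(np.sum(nonzero * np.log(nonzero)))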
Once a collapsed region is detected, the previously determined route along the longitudinal axis can be extended by piercing through the center of the collapsed segment with a navigation line one voxel wide.
In addition to automatically continuing the route of the virtual camera through the colon lumen, the collapsed region of the colon can be virtually opened using a physical modeling technique to recover some of the properties of the collapsed region. In this technique, a model of the physical properties of the colon wall is created. From this model, motion parameters, mass density, damping density, and stretching and bending coefficients are estimated for a Lagrange equation. Next, an expansive force model (i.e., a fluid or gas, such as air, pumped into the colon) is formulated and applied in accordance with the elastic properties of the colon, as defined by the Lagrange equation, so that the image of the collapsed region of the colon recovers its natural shape.
To model the colon, a finite element model can be applied to the collapsed or obstructed regions of the colon lumen. This can be done by sampling the elements on a regular grid, such as an 8-voxel brick, and then applying traditional volume rendering techniques. Another option is to apply an irregular volume representation approach, such as tetrahedra, to the collapsed regions.
In applying the external force model (air insufflation) to the colon model, the magnitude of the external force is first determined so as to properly separate the collapsed colon wall regions. A three-dimensional growth model can be used to trace the inner and outer surfaces of the colon wall in parallel. The surfaces are marked from a starting point in the collapsed region to a growth point, and the force model is applied to distend the surfaces in a natural manner. The region between the inner and outer surfaces (i.e., the colon wall) is classified as a shared region. The external repulsive force model is applied to the shared regions to separate and distend the collapsed colon wall segments naturally.
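One way to picture the expansive force model is a damped mass-spring surface advanced by explicit time steps, as sketched below; all coefficients and the data layout are illustrative stand-ins for the Lagrange-equation parameters described above, not the actual model.

    import numpy as np

    def expand_step(pos, vel, rest_neighbors, normals,
                    k_stretch=1.0, damping=0.5, pressure=0.2, dt=0.05, mass=1.0):
        """One explicit time step of a damped mass-spring surface under inflation.

        pos: (N, 3) surface points; vel: (N, 3) velocities;
        rest_neighbors: per-point list of (neighbor_index, rest_length) pairs;
        normals: (N, 3) outward surface normals.
        """
        force = pressure * normals                       # inflation force along the normals
        for i, nbrs in enumerate(rest_neighbors):
            for j, rest in nbrs:
                d = pos[j] - pos[i]
                length = np.linalg.norm(d)
                if length > 1e-9:
                    # Linear spring pulling toward the rest length (stretch resistance).
                    force[i] += k_stretch * (length - rest) * d / length
        force -= damping * vel                           # damping term
        vel = vel + dt * force / mass
        return pos + dt * vel, vel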
To visualize more clearly the characteristics of a virtual object, such as the colon, that is being examined virtually, it is useful to provide a rendering of the object's various textures. Such textures, which can be observed in the color images presented during optical colonoscopy, are usually lost in the black and white, grayscale images provided by the graphic data of a tomography. A system and method for generating textured images during a virtual examination is therefore required.
Figure 20 is a flow chart showing the present method for generating textured virtual objects. The purpose of this method is to map the textures obtained from optical colonoscopy images in the red-green-blue color space, such as those of the Visible Human Project, onto the grayscale monochrome graphic data used to generate the virtual objects. The optical colonoscopy images are acquired by conventional digital image acquisition techniques, such as an image grabber 1429 that receives analog optical images from a camera (a video camera, for example) and converts the images into digital data that can be sent to the CPU 1423 via an interface port 1431 (Figure 14). The first step in this process is to segment the graphic data of the tomography (step 2010). The image segmentation techniques described above can be applied to choose grayscale intensity thresholds and classify the tomographic graphic data into various tissue types: bone, colon wall tissue, air, and the like.
In addition to performing image segmentation on the tomographic graphic data, the texture characteristics of the optical image must be extracted from the optical graphic data (step 2020). To do this, a Gaussian filter can be applied to the optical graphic data, followed by subsampling to decompose the data into a multiresolution pyramid. A Laplacian filter and a steerable filter can also be applied to the pyramid to obtain the oriented and unoriented features of the data. Although this method is effective at extracting and capturing texture features, it demands considerable memory and processing capacity.
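A minimal sketch of the pyramid decomposition, assuming SciPy's Gaussian filtering, is given below; the number of levels and the difference-of-Gaussians stand-in for the Laplacian features are illustrative choices.

    import numpy as np
    from scipy import ndimage

    def gaussian_pyramid(image, levels=4, sigma=1.0):
        """Blur then subsample repeatedly to build a multiresolution pyramid."""
        pyramid = [image.astype(float)]
        for _ in range(levels - 1):
            blurred = ndimage.gaussian_filter(pyramid[-1], sigma)
            pyramid.append(blurred[::2, ::2])   # subsample by 2 along each axis
        return pyramid

    def laplacian_features(pyramid):
        """Difference between each level and its blurred self: unoriented band-pass features."""
        return [level - ndimage.gaussian_filter(level, 1.0) for level in pyramid]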
An alternative approach to extracting the texture features of the optical image is to use a wavelet transform. However, although wavelet transforms are usually computationally very efficient, conventional wavelet transforms are limited in that they only capture features with orientations parallel to the axes and cannot be applied directly to a region of interest. To overcome these limitations, a non-separable filter can be employed. For example, a lifting scheme can be used to build filter banks for wavelet transformation in any dimension, using a two-step prediction and update approach. Such filter banks can be synthesized by the Boor-Rom algorithm for multidimensional polynomial interpolation.

After the texture features are extracted from the optical graphic data, models must be generated to describe them (step 2030). This can be done, for example, by using a non-parametric multi-scale statistical model based on estimating and manipulating the entropy of the non-Gaussian distributions attributable to natural textures.

Once texture models have been generated from the optical graphic data, texture matching must be performed to relate these models to the segmented tomographic graphic data (step 2040). In regions of the tomographic graphic data where the texture is continuous, the corresponding texture classes are easy to match. However, in boundary regions between two or more textures the process is more complex. Segmentation of the tomographic data around a boundary region usually yields fuzzy results, that is, they reflect a percentage of texture from each material or tissue and vary depending on the respective weights. The weighting percentages can be used to set the importance of the matching criteria.
In the case of the non-parametric multi-scale statistical model, a cross-entropy or Kullback-Leibler divergence algorithm can be used to measure the distribution of the different textures in a boundary region.
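A sketch of the Kullback-Leibler measure on two texture distributions follows; representing each texture as a normalized histogram of feature responses is an assumption made for illustration.

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """KL divergence between two texture distributions given as histograms."""
        p = np.asarray(p, dtype=float) + eps   # eps guards against log(0)
        q = np.asarray(q, dtype=float) + eps
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))   # small when the textures match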
After texture matching, texture synthesis is performed on the tomographic graphic data (step 2050). This is done by fusing the textures from the optical graphic data into the tomographic graphic data. For isotropic texture patterns, such as those presented by bone, a sample of the texture can be taken from the optical data and fused directly with the segmented tomographic graphic data. For regions of anisotropic texture, such as the colon mucosa, a multiresolution sampling procedure is preferred, in which repeated selective sampling of homogeneous and heterogeneous regions is employed.

In addition to image augmentation, the techniques described above can also form the basis of a system for performing virtual electronic biopsy of a region under examination, providing a flexible and non-invasive biopsy.
Volume rendering techniques employ a defined transfer function to map different ranges of sample values of the original volumetric data to different colors and opacities. Once a suspicious area is detected during a virtual examination, the physician can interactively change the transfer function used during volume rendering so that the wall being viewed becomes substantially transparent and the interior of the area can be observed (a schematic sketch of such a transfer function follows below).

In addition to performing virtual biopsy, the present system and methods can also automatically detect polyps, which occur, for example, inside the colon, generally as convex, hill-like structures extending from the colon wall. This geometry is distinct from the folds of the colon wall. Consequently, a differential geometry model can be used to detect such polyps on the colon wall.
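A schematic sketch of such an interactively adjustable transfer function is given below; the intensity ranges and color/opacity values are illustrative only, and real systems typically implement this as a lookup table on the rendering hardware.

    def make_transfer_function(wall_opacity=0.05):
        """Map intensity ranges to (r, g, b, a); lowering wall_opacity 'opens' the wall."""
        table = [
            (0,    140,  (0.0, 0.0, 0.0, 0.00)),           # air: fully transparent
            (140,  900,  (0.9, 0.6, 0.5, wall_opacity)),   # wall/mucosa: near-transparent
            (900,  1080, (0.8, 0.3, 0.3, 0.30)),           # soft tissue
            (1080, 4096, (1.0, 1.0, 0.9, 0.80)),           # dense material
        ]
        def tf(value):
            for lo, hi, rgba in table:
                if lo <= value < hi:
                    return rgba
            return (0.0, 0.0, 0.0, 0.0)
        return tf

    # The physician interactively lowers the wall opacity to look inside a suspect mass.
    tf = make_transfer_function(wall_opacity=0.01)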
The surface of the colon lumen can be represented using a C-2 smooth surface model. In this model, each voxel on the surface has an associated geometric feature with a Gaussian curvature, giving rise to Gaussian curvature fields. A convex hill on the surface, which can be indicative of a polyp, presents a distinctive local feature in the Gaussian curvature fields. Consequently, polyps can be detected by searching the Gaussian curvature fields for these specific local features.
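A sketch of flagging polyp candidates from per-vertex Gaussian curvature values follows; how the curvature field itself is computed is left to the surface model, and the threshold and adjacency structure are illustrative assumptions.

    import numpy as np

    def polyp_candidates(gaussian_curvature, neighbors, k_min=0.05):
        """Flag hill-like local features in a per-vertex Gaussian curvature field.

        gaussian_curvature: (N,) values per surface vertex;
        neighbors: adjacency lists (list of vertex indices per vertex).
        """
        candidates = []
        for i, k in enumerate(gaussian_curvature):
            # Convex in every local direction (positive curvature patch): mound-like.
            if k > k_min and all(gaussian_curvature[j] > 0 for j in neighbors[i]):
                candidates.append(i)
        return candidates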
Each of the aforementioned methods can be implemented by means of a system such as that of Figure 14, with suitable software being provided to control the operation of CPU 1409 and CPU 1423.
The foregoing merely illustrates the principles of the invention. Those skilled in the art will accordingly be able to devise various systems, apparatus and methods which, although not explicitly shown or described herein, embody those principles and thus fall within the spirit and scope of the present invention as defined by its claims.
It will further be appreciated that the methods and systems described herein can be applied to virtually examine an animal, a fish or an inanimate object. Beyond the stated uses in the medical field, the technique can be used to inspect the contents of sealed objects that cannot be opened. The technique could also be applied inside an architectural structure, such as a building or a cavern, allowing the operator to navigate through the structure.

Claims (34)

1. A method for electronically cleaning a virtual object formed from graphic data, consisting of: converting the graphic data into a plurality of volumetric elements that constitute the virtual object, each volumetric element having an intensity value; classifying the volumetric elements into a plurality of groups according to the intensity values; and removing at least one group of the volumetric elements from the virtual object.
2. The method for electronically cleaning a virtual object according to claim 1, wherein the classifying further consists of evaluating a plurality of volumetric elements of the graphic data in relation to a plurality of neighboring volumetric elements in order to determine a similarity value between the neighboring volumetric elements.
3. The method for electronically cleaning a virtual object according to claim 2, wherein the groups are classified according to the similarity value between the volumetric elements.
4. The method for electronically cleaning a virtual object according to claim 2, wherein the groups are further classified by applying a probability mixture function to the groups in order to categorize the voxels whose intensity value derives from the inclusion of more than one type of material.
5. The method for electronically cleaning a virtual object according to claim 1, wherein the classifying consists of: performing a feature vector analysis on at least one of the groups comprising graphic data for a material of interest; and performing a high-level feature extraction in order to remove from the virtual object volumetric elements that do not represent significant indicators of the material of interest.
6. The method for electronically cleaning a virtual object according to claim 5, wherein the graphic data represents a region of the human body including at least a part of the colon and the material of interest is the tissue of the colon.
7. The method for electronically cleaning a virtual object according to claim 1, wherein the graphic data represents a region of the human body comprising at least a part of the colon.
8. The method for electronically cleaning a virtual object according to claim 7, wherein the removing removes volumetric elements representing at least one of intracolonic fluid, residual feces within the colon, bone and non-colonic tissue.
9. A method for preparing graphic data for a virtual colonoscopy, consisting of: acquiring a series of graphic data including at least a part of the colon; converting the series of graphic data into a plurality of volumetric elements, each volumetric element having an intensity value; classifying the volumetric elements into a plurality of groups according to the intensity values, each group representing at least one material in the vicinity of the colon; and removing at least one group of volumetric elements from the series of graphic data.
10. The method for preparing graphic data for a virtual colonoscopy according to claim 9, further comprising increasing the intensity values of the volumetric elements of fluid and feces remaining in the colon before the acquiring operation.
11. The method for performing a virtual colonoscopy according to claim 10, wherein the increasing operation comprises the ingestion by the patient of an edible material that increases the image intensity of feces and fluid within the colon.
12. The method for performing a virtual colonoscopy according to claim 11, wherein the ingested material comprises at least one of a barium sulfate solution and a meglumine and sodium diatrizoate solution.
13. The method for performing a virtual colonoscopy according to claim 10, wherein the at least one material in the vicinity of the colon comprises colon wall tissue and at least one of bone, fluid, feces and non-colonic material.
14. The method for performing a virtual colonoscopy according to claim 13, wherein one of the plurality of groups includes the volumetric elements of greatest intensity, representing fluid and feces, and this group is eliminated during the removing operation.
15. The method for performing a virtual colonoscopy according to claim 9, wherein one of the plurality of groups includes volumetric elements of the colon wall and of other materials similar to the colon wall, and a further classification is carried out on this group in order to identify the volumetric elements of the colon wall.
16. The method for performing a virtual colonoscopy according to claim 15, further comprising: identifying the interior of the colon; generating a central line for navigating through the interior of the colon; detecting a collapsed region of the interior of the colon; and extending the central line through the collapsed region.
17. The method for performing a virtual colonoscopy according to claim 16, wherein entropy values are calculated with respect to the intensity values near the central line, and the detecting encompasses identifying at least one of the entropy values that lies above a threshold value.
18. The method for performing a virtual colonoscopy according to claim 17, further comprising virtually dilating a detected collapsed region in accordance with the properties of the colon wall.
19. A method for mapping optical texture from at least one optical image onto an acquired monochrome data series, consisting of: dividing the acquired monochrome data series into a plurality of classifications representing a plurality of textures; dividing the optical image into a plurality of color classifications representing a second plurality of textures; generating a texture model for the plurality of color classifications; matching the texture models with the plurality of classifications of the monochrome graphic data; and applying the texture models to the monochrome graphic data.
20. An image generation system comprising an image segmentation feature, consisting of: an image generating scanner for obtaining graphic data; a processor, said processor converting the graphic data into a plurality of volumetric elements that form a volumetric element data series, each volumetric element having an intensity value, the processor performing image segmentation on the volumetric element data series by classifying the volumetric elements into a plurality of groups according to the intensity values and removing at least one group of volumetric elements from the graphic data; and a monitor operatively coupled to the processor for displaying a representation of the graphic data with the at least one group of volumetric elements removed.
21. The image generation system according to claim 20, wherein the classification performed by the processor further consists of evaluating a plurality of volumetric elements of the graphic data with respect to a plurality of neighboring volumetric elements in order to determine a similarity value between the neighboring volumetric elements.
22. The image generation system according to claim 21, wherein the classification performed by the processor classifies the groups according to the similarity value of the volumetric elements.
23. The image generation system according to claim 20, wherein the classification performed by the processor comprises a probability mixture algorithm in order to classify the voxels whose intensity value derives from the inclusion of more than one type of material.
24. The image generation system according to claim 20, wherein the classification performed by the processor further consists of: performing a feature vector analysis on at least one of the groups that contains the graphic data for the material of interest; and carrying out a high-level feature extraction in order to eliminate volumetric elements of the image that do not represent basic indicators of the material of interest.
25. The image generation system according to claim 24, wherein the image generating scanner is adapted to obtain graphic data of a human body that includes at least a part of the colon and the material of interest is the tissue of the colon.
26. The image generation system according to claim 20, wherein the image generating scanner is adapted to obtain graphic data of a region of the human body including at least a part of the colon.
27. The image generation system according to claim 20, wherein the classification performed by the processor eliminates the volumetric elements of the volumetric data series representing at least one of intracolonic fluid, residual feces, bone and non-colonic tissue.
28. An image generating system for mapping optical texture from at least one optical image onto an acquired monochrome data series, consisting of: an image generating scanner for obtaining the monochrome data series; a processor, said processor dividing the acquired monochrome data series into a plurality of classifications representing a plurality of textures, dividing the optical image into a plurality of color classifications representing a second plurality of textures, generating a texture model for the plurality of color classifications, matching the texture models to the plurality of classifications of the monochrome graphic data, and applying the texture models to the monochrome graphic data; and a monitor operatively coupled to the processor for displaying a representation of the graphic data with the texture models applied.
29. The image generating system for mapping optical texture according to claim 28, wherein the image generating scanner is a computed tomography scanner.
30. The image generating system for mapping optical texture according to claim 28, wherein the image generating scanner is a magnetic resonance scanner.
31. The image generating system for mapping optical texture according to claim 28, wherein the graphic data with the texture models applied is a color representation of the object being imaged.
32. A system for performing a virtual colonoscopy, comprising: an image generating scanner for obtaining graphic data of a colon; a processor, said processor receiving the graphic data, identifying the interior of the colon, generating a central line for navigating through the interior of the colon, detecting a collapsed region of the interior of the colon, and extending the central line through the collapsed region; and a monitor operatively coupled to the processor for displaying a representation of the data.
33. The system for performing a virtual colonoscopy according to claim 32, wherein the processor detects the collapsed region by calculating entropy values in relation to the intensity values of the graphic data near the central line and identifying at least one of the entropy values that lies above a threshold value.
34. The system for performing a virtual colonoscopy according to claim 32, wherein the processor virtually dilates a detected collapsed region of the colon in accordance with the properties of the colon wall.
MXPA/A/2001/009388A 1999-03-18 2001-09-18 System and method for performing a three-dimensional virtual segmentation and examination MXPA01009388A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US60/125,041 1999-03-18
US09343012 1999-06-29

Publications (1)

Publication Number Publication Date
MXPA01009388A 2002-06-05
