CA2365062A1 - Fast review of scanned baggage, and visualization and extraction of 3D objects of interest from the scanned baggage 3D dataset


Info

Publication number
CA2365062A1
Authority
CA
Canada
Prior art keywords
rendering
dataset
volume
segmentation
binary
Prior art date
Legal status
Abandoned
Application number
CA002365062A
Other languages
French (fr)
Inventor
Vittorio Accomazzi
Harald Zachmann
Arun Menawat
Current Assignee
Cedara Software Corp
Original Assignee
Cedara Software Corp
Priority date
Filing date
Publication date
Application filed by Cedara Software Corp filed Critical Cedara Software Corp
Priority to CA002365062A
Publication of CA2365062A1
Status: Abandoned


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/08: Volume rendering
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01V: GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 5/00: Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V 5/20: Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics


Abstract

A system for allowing fast review of scanned baggage represented by a 3D data set is disclosed. The system comprises a scanning device producing a 3D data set of a piece of baggage; a computer (workstation) with a monitor and software as detailed below; a custom-made button box with a built-in track ball to control a pointer on the monitor; and a network connection between the scanning device and the computer to transfer the image data from the scanning device to the workstation. A system for visualizing and extracting 3D objects of interest from scanned baggage represented by a 3D data set is also disclosed. This system comprises a scanning device producing a 3D data set of a piece of baggage; a computer (workstation) with a keyboard, pointing device, monitor, and software; and a network connection between the scanning device and the computer to transfer the image data from the scanning device to the workstation.

Description

Fast Review of Scanned Baggage, and Visualization and Extraction of 3D Objects of Interest from the Scanned Baggage 3D Dataset

Field of the Invention

The present invention relates to the use of multiple volume-rendered views of scanned baggage data sets to speed up the baggage inspection process. The present invention also relates to the visualization and extraction of 3D objects of interest from scanned baggage represented by a 3D data set.
Summary and Advantages of the Invention

The present invention provides a technology which allows fast review of scanned baggage represented by a 3D data set, which is generated using computed tomography (CT), magnetic resonance (MR), or another 3D imaging technology. A human operator who is screening baggage for potential threats, drugs, or other objects of interest (OOI) is presented with multiple 3D views of a bag on a computer screen. Each view shows one or more materials of different densities contained in the bag, such as metal or inorganic materials, from different directions in different colors. The method of the invention also allows the operator to make homogeneous regions translucent, and thus make thick objects appear more translucent, using a method called "opacity modulation". A larger 3D view shows the bag rotating around multiple axes, thus giving the operator a quick insight into the contents of the bag from multiple directions.

Furthermore, the present invention provides a technology which allows visualization and extraction of 3D objects of interest from scanned baggage represented by a 3D data set, generated using computed tomography (CT), magnetic resonance (MR), or another 3D imaging technology. A human operator who is screening baggage for potential threats, drugs, or other objects of interest (OOI) uses a pointer device, e.g. a mouse, to select one or more points of the OOI on a 3D view of the whole bag presented on a computer screen. The method automatically extracts and displays the object, whose final outlines can be fine-tuned using a set of parameters.
A 3D view is the visualization of a three-dimensional data set in a two-dimensional image. The most common methods for this kind of visualization are orthogonal or perspective projections (basically a simulation of an X-ray image) and orthogonal or perspective volume rendering (see "Introduction to Volume Rendering" (Hewlett-Packard Professional Books), Barthold Lichtenbelt, Randy Crane, Shaz Naqvi).
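As a concrete illustration of the two projection models just mentioned, the following minimal sketch (not part of the patent; the NumPy-based approach and names are our assumptions) computes an X-ray-like orthogonal projection and a maximum intensity projection of a 3D data set:

    import numpy as np

    def xray_projection(volume, axis=0):
        # Summing densities along one axis approximates a traditional
        # projection X-ray radiograph.
        return volume.sum(axis=axis)

    def mip_projection(volume, axis=0):
        # Maximum intensity projection keeps only the densest material
        # along each ray, which makes metal stand out.
        return volume.max(axis=axis)

    # Example: a random stand-in for a scanned bag, viewed along z.
    vol = np.random.rand(64, 128, 128)
    print(xray_projection(vol).shape)  # (128, 128)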
The present invention is designed to increase the throughput of scanned baggage at security checkpoints at airports, buildings, cruise ships, etc., by speeding up the process of inspecting the baggage. It also helps the operator to quickly recognize the contents of a densely packed, scanned bag, where multiple objects of materials with different densities, or the conveyor belt, obstruct each other and prevent easy recognition of all materials in just one view. Further, the invention is devised for almost automated extraction of objects of interest from the 3D volume representing the scanned object for the purpose of 3D visualization, volume measurements, or ultimately identification of that object.
Conventional technologies have used visual inspection of 2D X-ray images of scanned baggage by human operators, possibly with the help of highlighting parts of the image using a pseudo-color display. To date, objects hidden behind other objects of the same or different densities could only be made visible using automatic detection algorithms. Different colors are used for materials with different densities or identified by using automatic detection algorithms.
Advantages of the present invention are as follows:
(1) Easy identification of objects of interest which are not easily identifiable in a 2D or 3D view of the complete bag because they are obscured by other, thicker objects of denser materials or by the conveyor belt.

(2) Minimal user interaction, by presenting the contents of the bag from different viewing angles. It has to be faster than hand-searching the bag.

(3) No tedious manual editing of the 3D view required.

(4) No tedious manual segmentation of the object of interest required.

(5) Use of different colors for different material densities makes objects visually distinguishable without prior segmentation or classification.

All of these lead to a faster inspection process per bag.
A further understanding of the other features, aspects, and advantages of the present invention will be realized by reference to the following description, appended claims, and accompanying drawings.
Brief Description of the Drawings

Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
Figure 1 illustrates basic principles of the present invention and also specific embodiments according to the invention.
Detailed Description of the Preferred Embodiment(s)

According to one aspect of the present invention, there is provided a system and method (algorithm) for allowing fast review of scanned baggage represented by a 3D data set.
The system of the invention comprises a scanning device producing a 3D data set of a piece of baggage; a computer (workstation) with a monitor and software as detailed below; a custom-made button box with a built-in track ball to control a pointer on the monitor; and a network connection between the scanning device and the computer to transfer the image data from the scanning device to the workstation.
The following steps summarize the first example of the invention:

1. At a carry-on baggage checkpoint at an airport, a piece of baggage is scanned in a CT scanner.

2. The 3D reconstructed image data is transferred from the CT scanner over the network connection to the workstation.

3. Upon arrival of the images at the workstation, the operator pushes a button on the button box to display the next bag. This causes the 3D data set to be automatically rendered three-dimensionally in multiple windows on the display device using a volume rendering technique. For example, a layout like the one in Fig. 1 can be presented. The display of fewer or more viewports in different layouts is possible through the use of configuration parameters. Each 3D view in a window shows the bag in a different way, e.g. from a different angle, exposing materials of different densities in different colors. The details of the different display possibilities are explained in the following.
Layouts

(1) The inventive concept includes presenting one large interactive 3D view and a number of smaller, static 3D views. The smaller views are intended to present 3D snapshots that each show all objects in the bag which are of one material type, i.e. organic, inorganic, or metallic, in pseudo color. Shaded volume rendering is most suited for this task. Another option is to show one of the views in X-ray mode, i.e. to create an artificial projection view like a traditional projection X-ray radiograph. This will enable the operator to quickly identify any threat objects that are well isolated and not hidden by other objects. The smaller views are rendered at a lower resolution (e.g. downsampled to half in each direction) and thus faster than the large, interactive view.
(2) The large 3D view is intended to let the operator apply different visualization modes through viewport interaction, buttons, and use of GUI elements, but there will also be automated modes that allow the operator to just watch the screen. The large 3D view will be displayed in a two-phase approach: first at high speed at half the resolution, then at high quality at the finest resolution. A sketch of this two-phase display is given after this list.
(3) The operator can swap the contents of two viewports at any time by dragging the mouse pointer from one viewport into the other. This allows the operator to quickly get to the object of interest in case he spots something in the smaller views and wants to inspect it more closely using the large view.
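The two-phase display of item (2) can be sketched as follows; this is a minimal illustration, with a hypothetical render_volume function standing in for the actual renderer:

    import numpy as np

    def render_volume(volume):
        # Placeholder renderer: a maximum intensity projection.
        return volume.max(axis=0)

    def two_phase_render(volume, show):
        # Phase 1: high speed at half the resolution in each direction.
        show(render_volume(volume[::2, ::2, ::2]))
        # Phase 2: high quality at the finest resolution.
        show(render_volume(volume))

    two_phase_render(np.random.rand(64, 64, 64),
                     show=lambda img: print(img.shape))  # (32, 32), then (64, 64)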
Visualization options

Automated features

These features don't require the operator to use any interaction device.
(1) Auto-Rotate: The large 3D view rotates in a tumble motion, showing the bag from all sides. The operator can turn material types individually on or off while the bag is rotating.
(2) Auto-Explore: A spherical volume of interest (the radius is configurable) is automatically moved around the bag in a random motion, exposing different parts of the bag and thus focusing the operator's attention on one part of the volume at a time. This gives the operator the impression that he is looking at the bag with a flashlight. Note that this will also give the operator a view into the inside of the bag. A sketch of this motion is given after this list.
(3) Always hide the conveyor belt by applying a clipping region as described in the additional descriptions A, B and C attached hereto.
Note that the operator can turn the automatic motion off at any time and take control of the motion for the first two items using the track ball.
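The Auto-Explore motion of item (2) can be sketched as follows; the function names and the random-step scheme are illustrative assumptions, not the patent's implementation:

    import numpy as np

    def spherical_voi_mask(shape, center, radius):
        # Boolean mask that is True inside a sphere of the given radius.
        zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
        dist2 = ((zz - center[0]) ** 2 + (yy - center[1]) ** 2
                 + (xx - center[2]) ** 2)
        return dist2 <= radius ** 2

    def auto_explore_step(volume, center, radius, step=4):
        # Move the VOI center by a small random step, clamped to the volume,
        # and zero out everything outside the sphere before rendering.
        center = np.clip(center + np.random.randint(-step, step + 1, size=3),
                         0, np.array(volume.shape) - 1)
        masked = np.where(spherical_voi_mask(volume.shape, center, radius),
                          volume, 0)
        return masked, center  # `masked` is what gets rendered this frame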
Interactive features

These more advanced features require the operator to use the track ball on the button box to either interact directly with the objects in the viewport or use the GUI. More frequently used operations would also be accessible via special-purpose buttons on the button box. These interactions only affect the large viewport:
(1) Rotate, zoom, pan, and window/level through viewport interaction using the track ball.

(2) Turn different material types in the whole volume individually on and off (for these we would likely want buttons on the button box). This includes showing all materials by pressing a special button.

(3) Enhance object contours individually for each material type, using a feature called "opacity modulation" in the additional descriptions A, B and C attached hereto; a sketch of one common realization follows this list.

(4) Suppress all objects of a certain material type smaller than a certain volume (the volume could be controlled by the operator using a slider bar).
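The patent defers the details of "opacity modulation" to the additional descriptions; one common realization (our assumption, not necessarily the patented formula) scales each voxel's opacity by the local gradient magnitude, so that homogeneous interiors become translucent while material boundaries stay visible:

    import numpy as np

    def opacity_modulation(volume, base_opacity):
        # `base_opacity` maps density -> opacity in [0, 1]; the result is
        # modulated by the normalized gradient magnitude, which is near zero
        # inside homogeneous regions and large at object contours.
        gz, gy, gx = np.gradient(volume.astype(np.float64))
        grad_mag = np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
        grad_mag /= max(grad_mag.max(), 1e-12)  # normalize to [0, 1]
        return base_opacity(volume) * grad_mag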
According to another aspect of the present invention, there is provided a method and system for visualizing and extracting 3D objects of interest from scanned baggage represented by a 3D data set. The system of the invention comprises a scanning device producing a 3D data set of a piece of baggage; a computer (workstation) with a keyboard, pointing device, monitor, and software as detailed below; and a network connection between the scanning device and the computer to transfer the image data from the scanning device to the workstation.
The following steps illustrate one embodiment of the present invention.
(1) At a carry-on baggage checkpoint at an airport, a piece of baggage is scanned in a CT scanner.

(2) The 3D reconstructed image data is transferred from the CT scanner over the network connection to the workstation.

(3) Upon arrival at the workstation, the images are automatically rendered three-dimensionally on the display device using a volume rendering technique. Shaded volume rendering is most suitable here; see the attached additional description and "Introduction to Volume Rendering" (Hewlett-Packard Professional Books), Barthold Lichtenbelt, Randy Crane, Shaz Naqvi.

(4) The operator uses the pointing device to specify a point of interest on the volume-rendered image, i.e. a point within an object that he wants to inspect more closely (seeding).

(5) The method finds the point in the 3D data set which corresponds to the point specified by the operator on the volume-rendered image in the previous step (coordinate query). The operator may repeat steps 4 and 5 any number of times to specify more seed points.

(6) The method automatically identifies the parts of the 3D data set that belong to the 3D seed point(s) based on some user-definable criteria such as similarity in voxel values or proximity (classification).

(7) The method renders just that object in a 3D view.

(8) The user may fine-tune the outline of the extracted object by modifying the criteria that were used for the classification in step 6 and review the result in an iterative manner until he is satisfied with the visualization.
20 Details of the above embodiment will be described as follows:
(a) In order to perform step 5, the 3D point which is most likely the point the operator wanted to select has to be automatically determined. This point lies somewhere on the ray that originates at the point that the user selected in the image and intersects the 3D data set according to the chosen projection model. The point is determined according to the method detailed in the additional descriptions A, B and C attached hereto.
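Under the simplifying assumption of an orthogonal projection along the z axis, the coordinate query of step 5 can be sketched as marching along the ray behind the clicked pixel and returning the first voxel whose opacity exceeds a threshold; the names and the threshold value are illustrative, not the patented method:

    def coordinate_query(volume, opacity, x, y, threshold=0.1):
        # March front to back along the ray behind pixel (x, y) and return
        # the first sufficiently opaque voxel, i.e. the point the operator
        # most likely wanted to select; None if the ray hits nothing.
        for z in range(volume.shape[0]):
            if opacity(volume[z, y, x]) > threshold:
                return (z, y, x)
        return None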

(b) One possible set of criteria to be applied in step 6, defining which voxels in the 3D data set belong to the same object as the seed point, is defined in detail in the attached additional descriptions A, B and C. One important criterion is based on the degree of connectivity between the seed point and a given point. The application of this criterion results in a new 3D data set representing the degree of connectivity between the seed points and any point in the original 3D data set. A pre-defined threshold for the minimum level of connectivity is applied to the new 3D data set, resulting in a binary 3D object that is then used in step 7 as a mask in the volume rendering process. This means only voxels that are part of the binary 3D object are used as input to the volume rendering process, thus hiding all structures outside the 3D object.
(c) As described in step 8, the user may fine-tune the binary 3D object, which was calculated using the method described in (b), using one of the following parameters:

- Manually adjusting the threshold for the minimum level of connectivity mentioned in the additional descriptions A, B and C attached hereto.

- Defining a so-called "contrast table" as detailed in the section "Contrast table" in the attached additional descriptions. This is a look-up table mapping the values in the original 3D data set into contrast-enhanced values, essentially suppressing all values outside the range of interest.

- Requesting the application of an additional distance criterion that reduces the connectivity of voxels farther away from the seed points relative to voxels close to the seed points. This is described in detail in the additional descriptions A, B and C.

- Eliminating all connected components of the binary 3D object that are smaller than a user-defined volume; a sketch of this filter follows below. The volume could have been determined by using the method described in one of the co-pending patent applications filed by the same applicant.
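A minimal sketch of the last parameter, using scipy.ndimage (an implementation choice of ours, not the patent's): label the connected components of the binary 3D object and keep only those at least as large as the user-defined voxel count:

    import numpy as np
    from scipy import ndimage

    def drop_small_components(binary_object, min_voxels):
        # Label the 3D connected components of the binary object.
        labels, count = ndimage.label(binary_object)
        if count == 0:
            return binary_object
        sizes = np.bincount(labels.ravel())  # voxels per component
        keep = sizes >= min_voxels
        keep[0] = False                      # label 0 is the background
        return keep[labels]                  # True only inside large components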
(d) Other seed-based region growing methods to determine the 3D object in step 6 may be applied. These are described in the attached additional descriptions A, B and C, and in "Digital Picture Processing", 2nd ed., Rosenfeld, A. and Kak, A.C., Academic Press, New York, 1982.
The present invention will be further understood by the additional descriptions A, B and C attached hereto.
While the present invention has been described with reference to specific embodiments, the description is illustrative of the invention and is not to be construed as limiting the invention. Various modifications may occur to those skilled in the art without departing from the true spirit and scope of the invention as defined by the appended claims.

Additional Description A
"IAP PrS Segmentation Architecture"

Introduction

The Imaging Application Platform (IAP) is a well-established platform product specifically targeted at medical imaging. IAP supports a wide set of functionality including database, hardcopy, DICOM services, image processing, and reconstruction. The design is based on a client/server architecture and each class of functionality is implemented in a separate server. This paper will focus on the image processing server (further referred to as the processing server or prserver) and in particular on the segmentation functionality.
Segmentation, or feature extraction, is an important feature for any medical imaging application. When a radiologist or a physician looks at an image, he/she will mentally isolate the structure relevant to the diagnosis. If the structure has to be measured and/or visualized by the computer, the radiologist or physician has to identify such a structure on the original images using the software, and this process is called segmentation. For the purpose of this document, segmentation is a process in which the user (radiologist, technician, or physician) identifies which part of one image, or a set of images, belongs to a specific structure.
The scope of this white paper is to describe the tools available in the IAP processing server that can automate and facilitate the segmentation process. They are presented in terms of how they operate and how they can be used in combination with the visualization and measurement functionality. The combination of these functionalities allows the application to build a very effective system from the user's perspective, in which the classification and measurements are carried out with a simple click. This is referred to as Point and Click Classification (PCC).
Several segmentation tools have been published. The majority of them are designed to segment a particular structure in a specific image modality. In the IAP processing server we implemented algorithms which are proven and have a large applicability in clinical practice. The tools have been chosen in order to cover a large set of clinical requirements.
Since we recognize that we cannot provide the best solution for all segmentation needs, our architecture is designed to be extensible. If the application requires a specific segmentation algorithm, it is possible to extend the functionality supported by the processing server through a DLL or a shared library.
This white paper assumes that the reader is familiar with the IAP processing server architecture and has minimal experience with the IAP 3D visualization functionality. The reader can also refer to the "IAP-PrS Image Processing" White Paper and the "PrS Rendering Architecture" White Paper.
Glossary

Volume Rendering: A technique used to visualize three-dimensional sampled data that does not require any geometrical intermediate structures.

Surface Rendering: A technique used to visualize three-dimensional surfaces, represented by either polygons or voxels, that have been previously extracted from a sampled dataset.

Interpolation: A set of techniques used to generate missing data between known samples.

Voxel: A three-dimensional discrete sample.

Shape Interpolation: An interpolation technique for binary objects that allows users to smoothly connect arbitrary contours.

Multiplanar or Curved Reformatting: Arbitrary cross sections of a three-dimensional sampled dataset.

Binary Object (Bitvol): A structure which stores which voxels in a slice stack satisfy a specific property (for example, the voxels belonging to an anatomical structure).

ROI: Region of interest. An irregular region which includes only the voxels that have to be processed. It is very often represented as a bitvol.

Segmentation: A process which leads to the identification of a set of voxels in an image, or set of images, which satisfy a specific property.
Segmentation Tools

The segmentation process can vary considerably from application to application. This is usually due to the level of automation and the workflow, and is related to how the application uses the tools rather than to the tools themselves. The IAP processing server doesn't force any workflow. A general approach could be to automate the process as much as possible and allow the user to review and correct the segmentation. The goal then is to minimize the user intervention rather than make the segmentation entirely automatic.
Overview

The IAP processing server supports both binary tools, which have been proven through the years as reliable, as well as advanced tools with very sophisticated functionality.
The binary tools operate on a binary object; they do not use the original density from the images. These tools include binary region growing and extrusion.
The advanced tools operate on a gray level image. They typically allow a higher level of automation. These tools are based on gray level region growing.
Figure 1.0 shows a schematic diagram of how these tools operate together.

Figure 1.0: Taxonomy of the segmentation tools supported by the IAP processing server. Shape interpolation, thresholding, extrusion, and binary region growing produce or operate on a binary object, while gray level region growing operates on the gray level object.
The scope of each tool in Figure 1.0 is as follows:

- Shape Interpolation: Reconstructs a binary object by interpolating an anisotropic stack of 2D ROIs. This functionality is implemented in the Recon object.

- Extrusion: Generates a binary object by extruding in one direction. This functionality is implemented in the ExtBv object.

- Thresholding: Generates a binary object by selecting all the voxels in the slice stack within a range of densities. This functionality is implemented in the Thr3 object.

- Binary Region Growing: Connectivity is evaluated on the binary image. This functionality is implemented in the Seed3 object.

- Gray Level Region Growing: Connectivity is evaluated on the gray level image, with no thresholding necessary before the segmentation process. This functionality is implemented in the Seg3 object.
The IAP processing server architecture allows these objects to be connected in any possible way. This is a very powerful feature, since the segmentation is usually accomplished in several steps. For example, the slice stack can be thresholded and then the structure isolated using a region growing. Figure 1.1 was generated using this technique.

Figure 1.1: Region growing after a normal threshold can be used to isolate an object very efficiently. Image A is the result of the threshold with the bone window in the CT dataset. The bed is removed with a single click of the mouse. The resulting binary object is used as a mask for the Volume Renderer.
Figure 1.2 shows the part of the pipeline which implements the segmentation in Figure 1.1. The Ss object is the input slice stack and contains the original images. The Thr3 object performs the thresholding, and the Seed3 object performs the region growing from a point specified by the user.

Figure 1.2: The pipeline used for the generation of the binary object in Figure 1.1: Ss -> Thr3 -> Seed3.
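A numpy/scipy sketch of what this Ss -> Thr3 -> Seed3 pipeline computes (our stand-in for the IAP objects, assuming six-neighbor connectivity): threshold the slice stack to a density range, then keep only the component connected to the seed point:

    import numpy as np
    from scipy import ndimage

    def threshold_then_seed(stack, low, high, seed):
        binary = (stack >= low) & (stack <= high)  # Thr3: select density range
        labels, _ = ndimage.label(binary)          # six-neighbor components
        seed_label = labels[seed]                  # Seed3: component under seed
        if seed_label == 0:
            return np.zeros_like(binary)           # seed fell on background
        return labels == seed_label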
The IAP processing server also supports several features for manipulating binary objects directly:
- Erosion
- Dilation
- Intersection
- Union
- Difference
- Complement

For example, as we'll see in the next section, it is necessary to dilate a binary object before using it as a clipping region for Volume Rendering. In the pipeline in Figure 1.3, the binary object has to be dilated before the connection to the Volume Rendering pipeline. This can be accomplished by simply adding a new Bv object at the end of the pipeline, as shown in Figure 1.3.

Figure 1.3: The pipeline of Figure 1.2 with a Bv object added, which performs a dilation on the result of the region growing.
Binary Tools

The tools presented in this section implement well-known techniques that have been used for several years in the medical market. Some of them, like Seed3, extend the standard functionality in order to minimize the user intervention.
Thresholding (Thr3)

Thresholding is one of the most basic tools and is often used as a starting point in order to perform more complicated operations, as shown in Figure 1.1. The Thr3 object reconstructs a binary object by selecting all the voxels in a specific density range in the slice stack. If the slice stack is not isotropic, the application can choose to generate the binary object using cubic or nearest neighbor interpolation.

Thr3 also supports an automatic dilation in the X and Y directions. This is useful in situations, like the one in Figure 1.3, where the binary object has to be used in the Volume Rendering pipeline.
Extrusion (ExtBv)

Extrusion projects a 2D shape along any direction. This feature is very powerful when it is used in conjunction with 3D visualization. In fact, it allows irrelevant structures to be eliminated very quickly and naturally, as shown in Figure 1.4.

Figure 1.4: Extrusion is a mechanism which works very well in conjunction with 3D visualization. The user draws an ROI which defines the region of the dataset in which he is interested. The data outside the region is removed.

Figures 1.4.A and 1.4.B show the application from the user's perspective. The user outlines the relevant anatomy and only that area will be rendered. Figure 1.4.C shows the binary object that has been generated through extrusion. This object has been used to restrict the area for the volume renderer, and so eliminate unwanted structures.
Shape Interpolator (Recon)

The shape interpolator reconstructs a 3D binary object from a stack of 2D ROIs. The stack of 2D ROIs can be unequally spaced and present branching, as shown in Figure 1.5. The Recon object supports nearest neighbor and cubic interpolation kernels, which are guaranteed to generate smooth surfaces (see Figure 1.6). This functionality is used when the user manually draws some ROIs on the original slices or retouches an ROI generated through a threshold.

Figure 1.5: The shape interpolation process reconstructs a 3D binary object from a set of 2D ROIs, even if they are not equally spaced and have branching.

Figure 1.6: The shape interpolation can be accomplished with cubic interpolation (A) or nearest neighbor interpolation (B).
Binary Connectivity (Seed3)

Connectivity is a well-proven technique used to quickly and efficiently isolate a structure in a binary object. The user, or the application, identifies a few points belonging to the structure of interest, called seed points. All the voxels that are connected to the seed points are extracted, typically removing the background. This process is also referred to as "region growing".

Region growing is very often used to 'clean' an object that has been obtained by thresholding. Figure 1.1 shows an example of a CT dataset where the bed has been removed simply by selecting a point on the skull.

This functionality is very effective when combined with the Coordinate Query functionality of the prserver volume renderer. Coordinate Query allows the application to identify the 3D location in the stack when the user clicks on the rendered image. By combining these two tools, the entire operation of segmentation and clean-up can be done entirely in 3D, as shown in Figure 1.7. See the "PrS 3D Rendering Architecture" White Paper for more details on the Coordinate Query.
Figure 1.7: MR peripheral angio dataset. The Volume Rendering visualization of the vasculature also includes other unrelated structures (A). By just clicking on the vessel, the user can eliminate the irrelevant background structures (B).
The Seed3 object implements a six-neighbor connectivity. It also supports the following advanced functionality in order to facilitate the identification of the structure:

1. Tolerance: When a seed point is not actually on the object, Seed3 will move it to the nearest voxel belonging to the binary object, if the distance is less than the tolerance specified by the application. This functionality allows the application to compensate for rounding errors and imprecision from the user.

2. Filling: This functionality removes all the holes from the segmented object. It is sometimes used for volume measurements.

3. Small links: When a bitvol is generated using a noisy dataset, several parts could be connected by narrow structures. The Seed3 object allows the "smallest bridge" through which the region growing can grow to be specified. This functionality allows the system to be insensitive to noise. Figure 1.8 shows how this feature can extract the brain in an MR dataset.
Figure 1.8: MR dataset of the brain. The seed point is set in the brain. In (A) the region growing fails to extract the brain, since there are small connections from the brain to the skin. In (B) the brain is extracted because the small connections are not followed.
4. Disarticulation: This is the separation of different anatomical structures that are connected. The application can specify two sets of seed points, one for the object to be kept and one for the object to be removed. Seed3 will perform erosion until these two sets of points are no longer connected, and then perform a conditional dilation of the same amount as the erosion. This operation is computationally intensive. It works well if the bitvol has a well-defined structure, i.e. the regions to be separated do not have holes inside them and narrow bridges link them. On the other hand, if the regions are thin and their thickness is comparable to the thickness of the bridges, then the result may not be optimal. Figure 1.9 shows how this feature can be applied to select half of the hip in a CT dataset.

Figure 1.9: In the binary volume in (A) the user sets one seed point to select the part to include (green) and one seed point to select the part to remove (red). The system identifies the part of the two structures with minimal connection and separates the structures there. (B) shows the result.
Gray Level Connectivity

The concept of connectivity introduced for binary images can be extended to gray-level images. The gray level connectivity between two voxels measures the "level of confidence" with which these voxels belong to the same structure. This definition leads to algorithms which enable a more automated method of segmentation and reduce user intervention. Gray level connectivity tends to work very well when the structure under investigation has a narrow range of densities with respect to the entire dynamic range of the images.
The Basic Algorithm

The algorithm takes a slice stack and a set of seed points as input. For each voxel in the stack it calculates the "level of confidence" with which this voxel belongs to the structure identified by the seeds. Voxels far from the seeds, or with a different density than the seeds, have a low confidence value, whereas voxels close to the seed points and with a similar density have high confidence values. Note that no thresholds are required for this process. From the mathematical point of view, the "confidence level" is defined as the connectivity from a specific voxel to a seed point. The definition of connectivity from a voxel to a seed point according to Rosenfeld is

$$C(\text{seed},\text{voxel}) = \max_{P \in \mathcal{P}(\text{seed},\text{voxel})} \left[ \min_{z \in P} \mu(z) \right]$$

where $\mathcal{P}(\text{seed},\text{voxel})$ is the set of all possible paths from the seed point to the voxel, and $\mu(\cdot)$ is a function that assigns a value between 0 and 1 to each element in the stack. In our application $\mu(\cdot)$ is defined as follows:

$$\mu(\text{voxel}) = 1 - \left| \text{density}(\text{voxel}) - \text{density}(\text{seed}) \right|$$

The connectivity is therefore computed as:

$$C(\text{seed},\text{voxel}) = 1 - \min_{P \in \mathcal{P}(\text{seed},\text{voxel})} \left[ \max_{z \in P} \left| \text{density}(z) - \text{density}(\text{seed}) \right| \right]$$

In simple terms, the connectivity of a voxel to a seed point is obtained by:

1. Considering all the paths from the seed point to the voxel.

2. Labeling each path with the maximum difference between the seed point's density and the density of each voxel in the path.

3. Selecting the path with the minimum label value.

4. Setting the connectivity to 1.0 minus that label value.

For multiple seeds, the algorithm in step 2 uses the average density of the seed points.
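Enumerating all paths explicitly is intractable; a standard way (our implementation choice, not spelled out in this paper) to compute the same minimax quantity is a Dijkstra-style propagation from the seed points, where a path's label is the maximum density difference from the average seed density seen along it:

    import heapq
    import numpy as np

    def connectivity_map(stack, seeds):
        # Returns C(seed, voxel) = 1 - min over paths of the maximum
        # |density(z) - density(seed)| along the path, for every voxel.
        dens = stack.astype(np.float64)
        dens = (dens - dens.min()) / max(dens.max() - dens.min(), 1e-12)
        seed_val = np.mean([dens[s] for s in seeds])  # average seed density
        best = np.full(stack.shape, np.inf)           # lowest path label so far
        heap = []
        for s in seeds:                               # seeds are (z, y, x) tuples
            best[s] = abs(dens[s] - seed_val)
            heapq.heappush(heap, (best[s], s))
        while heap:
            cost, (z, y, x) = heapq.heappop(heap)
            if cost > best[z, y, x]:
                continue                              # stale queue entry
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)          # six-neighbor step
                if all(0 <= n[i] < stack.shape[i] for i in range(3)):
                    ncost = max(cost, abs(dens[n] - seed_val))
                    if ncost < best[n]:
                        best[n] = ncost
                        heapq.heappush(heap, (ncost, n))
        return 1.0 - best                             # connectivity map in [0, 1]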
The algorithm computes the connectivity values for each voxel in the stack and produces an output slice stack which is called the "connectivity map". Figure 1.10 shows a 2D example using an MR image and its connectivity map. The area in which the seed has been placed has connectivity values higher than the rest of the image, and so it appears brighter.

Figure 1.10: Image A shows an MR slice and the place where the seed point has been set. Image B shows the connectivity map.

Figure 1.11 shows a 3D example in which the connectivity map is rendered using MIP. In this example the dataset is an MR acquisition of the head, and several seed points have been set in the brain, which appears to be the brightest region.

Figure 1.11: MIP image of a connectivity map. In this example several seed points have been set in the brain of this MR dataset. The MIP image shows that the brain is the brightest area.
The connectivity map is thresholded at different values in order to extract the area of interest as a bitvol. Note that in this case the user needs to control only one threshold value. The connectivity map always has to be thresholded from the highest value (which represents the seed points) down to a lower one defined by the user. By increasing the threshold, the user removes irrelevant structures and refines the anatomical structure where the seed has been planted. From the user's perspective this method is quite natural and effective: the user visually browses the possible solutions interactively. Figure 1.12 shows an example of user interaction; the connectivity map shown in Figure 1.11 is thresholded at increasing values until the brain is extracted.

In Figure 1.12 the binary object computed by thresholding the connectivity map is applied as a mask for the volume renderer. As mentioned in the previous section, a small dilation is necessary before the connection to the volume rendering pipeline. Figure 1.13 shows the complete pipeline.
Figure 1.12: The connectivity map shown in Figure 1.11 is thresholded and the binary object is used to extract the anatomical feature from the original dataset. This process is done interactively.

Figure 1.13: Pipeline used for the generation of the images in Figure 1.12: Seg3 -> Thr3 -> Bv -> Bvf, with Ss -> Vol -> Cproj forming the rendering branch.

The connectivity map is thresholded interactively by changing the threshold of the Thr3 object. The Bvf object is used to avoid the pre-processing in the Vol object. Please refer to the "PrS 3D Rendering Architecture" White Paper for a detailed explanation of the possible usage of the clipping tools in the Volume Rendering pipeline.
Contrast Table

In order to optimize the segmentation in terms of accuracy and speed, the Seg3 object can use a "contrast table" which enhances the contrast between the anatomical structure under investigation and the rest of the anatomy. The region growing process will operate on the image once it has been remapped with the contrast table. The connectivity will be calculated as follows:

$$C(\text{seed},\text{voxel}) = 1 - \min_{P \in \mathcal{P}(\text{seed},\text{voxel})} \left[ \max_{z \in P} \left| \text{contrast\_table}(\text{density}(z)) - \text{contrast\_table}(\text{density}(\text{seed})) \right| \right]$$
The application can take advantage of this functionality in several ways:

- Reducing the noise in the image.

- Increasing the accuracy of the segmentation by eliminating densities that are not part of the anatomy under investigation. For example, in a CTA dataset the liver doesn't contain intensities as high as the bone, so these can be remapped to zero (background) and excluded from the segmentation.

- Limiting user mistakes: if the user sets a seed point in a region which is remapped to a low value (as defined by the application), the seed point will be ignored. For example, if the user, intending to segment the liver in a CTA dataset, sets a seed point on the bone, it will not be considered during the region growing process.

The application is not forced to use the contrast table; when it is not used, the system will operate on the original density values. For example, the brain in Figure 1.12 was extracted without a contrast table.

The application can expose this functionality directly to the user or, if appropriate, use the rendering settings used to view the images:

- In order to segment structures with high density, the window level that the user set in the 2D view can be used as a contrast table.

- The opacity curve used to render a specific tissue can be used as a remapping table. In order to visualize the tissues properly, the user has to set the opacity to 100% for all the densities in the tissue, and then lower values for densities which are partially shared with other tissues. So the opacity curve implicitly maximizes the contrast between the tissue and the rest of the anatomy.
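A sketch of such a contrast table for integer-valued stacks (the density range and the linear stretch are illustrative assumptions): a look-up table that suppresses everything outside the range of interest and stretches the contrast inside it:

    import numpy as np

    def make_contrast_table(max_density, low, high):
        # LUT mapping density -> contrast-enhanced value; densities outside
        # [low, high] become 0 (background) and are ignored by the growing.
        table = np.zeros(max_density + 1, dtype=np.float64)
        inside = np.arange(low, high + 1)
        table[inside] = (inside - low) / max(high - low, 1)  # stretch to [0, 1]
        return table

    # Usage: remap an integer slice stack before computing the connectivity.
    # remapped = make_contrast_table(4095, 1100, 1400)[stack]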
The Extended Algorithm

The basic algorithm is a very powerful tool, as shown in Figure 1.12. In order to extend its functionality, the Seg3 object implements two extensions:
1. Distance path. In some situations the structure which the user is trying to extract is connected with something else. For example, for the treatment planning of the AVM shown in Figure 1.14, the feeding artery and the draining vein have to be segmented from the nodule of the AVM. The density of these three structures is the same (since it is the same blood which flows in all of them) and they are connected.

Figure 1.14: MR dataset of the head region showing an AVM. Image A is the MIP of the dataset; image B is the binary segmentation of the dataset. Binary region growing is not able to segment the three anatomical structures (veins, artery, AVM) required for the treatment planning.

In order to facilitate the segmentation of these structures, Seg3 can reduce the connectivity value of any voxel proportionally to the distance along the path from the seed point. Seeding the vein, as shown in Figure 1.15, will cause voxels with the same density, but in the nodule of the AVM, to have a lower connectivity value, and hence exclude them. Note that the distance is measured along the vessel, since the densities outside the vessel's range will be remapped to zero by the contrast table and not considered. The user, browsing through the possible solutions, will visually see the segmented area following the vein, as shown in Figure 1.15.
Cedars Software Corp. Page 17 IAP PrS Segmentation Architecture Figure 1.15 Vein segmented at different threshold value. The user changing the threshold can visually follow the vein. In order to generate these images the pipeline in figure 1.13 has been used. Distance path is usually used in conjucdon with contrast table.
Figure 1.16 shows the eitample analyz~l by Dr. Eldrige. In this case the functionality was used not just for segmentation but for increasing the understanding of the pathology following several vessels and see how they are interacting.
As we mention in the section "Binary Connectivity' disarticulation can be similar situation. However disarticulation is mainly designed for bone stnzctures and doesn't allow any level of control. Distance path is instead designed for vessels and allows the user to have a fine control on he. region segrt~ented.
2. Growth along an axis. In some protocols, like peripheral angiography, the vessel will follow mainly one axis of the dataset. The application can use this information to facilitate the segmentation process and force the region growing process to follow the vessel along the main axis, and so select the main lumen instead of the small branches. Figure 1.17 shows an example of this functionality.

Figure 1.16: The example of Figure 1.14 analyzed by Dr. Eldrige, who used the distance path functionality to follow the vessels involved in the aneurysm and analyze their interaction.

Figure 1.17: Segmentation of a vessel in an MR peripheral angiography. Seg3 allows weights to be associated with each of the axes; each weight represents an incremental reduction of the connectivity value for the region growing to follow that axis.
Embedding the Knowledge

The benefit of the algorithm goes beyond the fact that the application doesn't have to set a priori thresholds. The application can embed the knowledge of the structure that the user is segmenting in several ways:

1. As presented in the previous section, the contrast table is the simplest and most effective way for the application to guide the region growing.

2. The number of densities expected in the object can be used to guide the region growing. Note that the process requires only the number of densities, not a specification of which densities are included. The threshold in the connectivity map identifies the number of connectivity values included in the solution, and hence the number of densities, as defined by the C(seed, voxel) formula. Note that when the distance path or the growth along an axis is used, voxels with the same contrast value can have different connectivity according to their distance from the seed point.

3. The volume size (as a number of voxels) of the object can be used to guide the region growing. The volume size of the object can be simply measured by querying the histogram of the connectivity map and adding all the values from the threshold to the maximum value.

4. The relative position of the seed points can be used to guide the application in forcing the region growing process to follow a particular axis. For example, in the dataset in Figure 1.17 the user will set several seed points along the Y axis. By evaluating the displacement of the points in the ZX plane, the application can estimate how much the vessel follows the Y axis and so how much the region growing has to be bound to it.
The information gathered with the previous four points can be used by the application for two different purposes:

- Optimize for performance. The time required by the Seg3 object is proportional to the number of voxels selected. Avoiding the inclusion of unwanted structures will speed up the process. For example, in the protocol used for the dataset in Figure 1.12, the volume of the brain cannot be more than 30% of the entire volume, since the whole head is in the field of view. So the first two solutions might not even be included in the connectivity map, since the region growing would have been stopped before.

- Identify the best solution, the one that most likely is what the user is looking for. This solution can be proposed as the default.
More specifically, the previous information can be used for these two purposes in the following way:

Contrast Table
- Optimize for performance: Setting to zero the densities which are guaranteed not to belong to the tissue to segment will improve performance.
- Identify best solution: The contrast table also improves the quality of the segmentation by reducing the number of possible solutions.

Number of Densities
- Optimize for performance: The Seg3 object accepts as an input the number of densities to include in the solution after the contrast table has been applied. Once these densities have been included, the process will stop.
- Identify best solution: The threshold set in the connectivity map is actually the number of densities to be included in the solution after the contrast table has been applied.

Volume Size
- Optimize for performance: The Seg3 object accepts as an input the number of voxels to include.
- Identify best solution: This value can be used to limit the threshold in the connectivity map. By querying the histogram of the connectivity map, the application can estimate the volume of the object segmented for each threshold.

Relative Position of the Seed Points
- Optimize for performance: Constraining the region growing process will indirectly reduce the number of voxels to include. Using this functionality will increase the cost associated with the evaluation of each voxel.
- Identify best solution: Not applicable.
The application is expected to use conservative values for the volume size and the number of densities to stop the region growing process, and realistic values to select the default solution.

The ability to embed the knowledge of the object under investigation makes gray level region growing well suited to being protocol-driven. For each protocol the application can define a group of presets which target the relevant anatomical structures.
Binary versus Gray Level Region Growing

The basic algorithm as presented in the previous section can be proven to be equivalent to a binary region growing where the thresholds are known in advance. So this process doesn't have to be validated for accuracy, since the binary region growing is already in use right now. Avoiding a priori knowledge of the threshold values has major advantages for the application:

1. The number of solutions that the user has to review is limited and pre-calculated. Requiring the user to set only one value for the selection of the solution means that the user has to evaluate (at worst) 256 solutions for an 8-bit dataset, while using the binary region growing the user would have to evaluate 256 x 256 = 65536 solutions, since all the combinations of the low and high thresholds potentially have to be analyzed.
2. Finding the best threshold is not natural from the user's perspective. Figure 1.18.B shows a CTA of the abdominal region in which the Aorta has been segmented. To obtain this result the user has seeded the Aorta with the settings shown in Figure 1.18.A.

Figure 1.18: Image A shows the Aorta extracted from the dataset shown in image B. In this case only one seed point was used.

In order to obtain the same result with the binary region growing, the user has to manually identify the best threshold for the Aorta, which is shown in Figure 1.19, and then seed it. Looking at Figure 1.19.A it is not clear that these are the best settings for the segmentation, and so they can be easily overlooked.

Figure 1.19: Threshold settings necessary to extract the Aorta as in Figure 1.18. Image A appears with several holes and it is not clear whether the Aorta is still connected with the bone.
3. The threshold set by the user can be dictated by the volume of the object rather than the density values.

Our experience shows that the quality of the result achievable with this functionality is not achievable with normal thresholding. Even in situations in which the thresholds are known in advance, it is preferable to use this information as a contrast table and avoid binary segmentation.
Advanced Usage

Gray level region growing reduces the time needed to perform the segmentation from the user's perspective. It guides the user in the selection of the best threshold for the structure under investigation.

In the previous section we have been using the connectivity map for the extraction of the binary object using the pipeline in Figure 1.13. In some situations it can be valuable to use the connectivity map to enhance the visualization. The connectivity map tends to have bright and uniform values in the structure seeded and darker values elsewhere. This characteristic can be exploited in the MIP visualization to enhance the vessels in MRA and CTA datasets. Figure 1.20 shows an example of this application: image 1.20.A is the MIP of an MRA dataset, while 1.20.B is the MIP of the connectivity map. Figure 1.21 shows the MIP of the dataset in which the connectivity map and the original dataset have been averaged, compared with the MIP of the original dataset in the upper left corner. In this case it is visible that the contribution of the connectivity map helps to suppress the background values, enhancing the vessels.
Figure 1.20: Image A is the MIP of an MRA peripheral dataset. Image B shows the connectivity map of the same region when the main vessel has been seeded.

Figure 1.21: MIP of the dataset obtained by averaging the connectivity map and the original dataset shown in Figure 1.20. In the upper left corner the MIP of the original dataset is superimposed. It is visible that the averaging helps in suppressing the background density while preserving the details of the vessel.
In some situations it is not necessary to have the user set the seeds for the identification of the anatomical structures directly. The seed points can be identified as the result of a threshold on the images under investigation. Note that the threshold is necessary to identify only some points in the structure, not the entire structure, so the application can use very conservative values. For example, in a CTA a very high threshold can be used to generate some seed points on the bone. In an MRA a very high threshold can be used to generate seed points on the vessels. The seed points in Figures 1.21 and 1.20 have been generated using this method. Figure 1.22 shows an example of this technique.

Figure 1.22: Image A shows the seed points generated in a CTA dataset for the identification of the bone. Image B shows the bone segmented from the same dataset. In this example the user intervention is minimized to only the selection of the best threshold.

Seg3 supports this technique, since it is designed to accept a bitvol as an input for the identification of the seed points. The bitvol can be generated by a threshold, and edited automatically or manually.

Visualization and Segmentation

Visualization is the process that usually follows the segmentation. It is used for the validation of the segmentation as well as correction through the tools presented in the previous section.

The IAP processing server supports two rendering engines that can be used for this task: a very sophisticated Volume Renderer and a Solid Renderer. Please refer to the "PrS 3D Rendering Architecture" White Paper for a detailed description of these two engines. This section will focus mainly on how these engines deal with binary objects.
Volume Rendering

Volume Rendering allows the direct visualization of the densities in a dataset, using opacity and colour tables, which are usually referred to as a classification.

The IAP processing server extends the definition of classification: it allows several regions (obtained by segmentation) to be defined and a colour and opacity to be associated with each one of them. We refer to this as local classification, since it allows colour and opacity to be specified based on a voxel's density and location. It is described in detail in the "Local Classification" section of the "PrS 3D Rendering Architecture" White Paper.

Local classification is necessary in situations in which several anatomical structures share the same densities, a situation extremely common in medical imaging. For example, in Figure 1.18 the density of the Aorta appears also in the bone due to the partial volume artifact.

So the goal of an application using Volume Rendering as a primary rendering tool is to classify the dataset, not necessarily to segment the data. The goal is to allow the user to point at an anatomical structure on the 3D image and have the system classify it properly. This is the target of the "Point and Click Classification (PCC)" developed in Cedara's applications, which is based on the tools and techniques presented in this white paper.

As we'll describe in this section, segmentation and classification are tied together.
Segmentation as a Mask

The IAP volume renderer uses a binary object to define the area for each classification. When an application classifies an anatomy that shares densities with other anatomical structures, it has to remove the ambiguities on the shared densities by defining a region (binary object) which includes only the densities of the anatomy.

The binary object has to loosely contain all the relevant (visible) densities of the anatomy; it doesn't have to define the precise boundary of the anatomy. It is supposed to mask out the shared densities which do not belong to the anatomy. The opacity function allows the Volume Renderer to visualize the fine details in the dataset. The section "Local Classification" of the "PrS 3D Rendering Architecture" White Paper describes this concept as well.

Figure 2.0 shows an example of this important concept. 2.0.A is the rendering of the entire dataset, where the densities of the brain are shared with the skin and other structures. Although the dataset in 2.0.A has a very detailed brain, it is not visible since the skin is on top of it. Using the shape interpolation the user can define the mask 2.0.B, which loosely contains the brain; this mask removes the ambiguities about which are the densities of the brain and allows the proper visualization of the brain, 2.0.C.
Figure 2.0 MR dataset in which the brain has been extracted. The dataset A has been masked with the binary object B to obtain the visualization of the brain C. In this case Shape Interpolation was used to generate the mask.
This approach has two benefits:
1. The definition of the mask is typically time-effective; even in the case of figure 2.0, where the mask is created manually, it takes a trained user about 1-2 minutes.
2. The mask doesn't have to precisely define the object; small errors or imprecision are tolerated. In figure 2.0 the user doesn't have to outline the brain in fine detail, but rather to outline where it is present on a few slices.
Opacity and Segmentation

The segmentation settings used for the definition of the binary object (mask) are related to the opacity used for the visualization. For example, in figure 2.0, if the user lowers the opacity enough, the mask 2.0.B will be visualized instead of the brain 2.0.C.
This happens when the segmentation is based on one set of criteria and the visualization on a different one.
Figure 2.1 shows an example in which the user clicks on the skull in order to remove it from the bed in the background. The application uses region growing on the dataset thresholded by the opacity, and the result is shown in figure 2.1.B (the pipeline in figure 1.3 was used). The generated mask contains the skull of the dataset: only the bone densities connected to the seed point. If the user lowers the opacity, he will eventually see the mask itself, 2.1.C.
Figure 2.1 In order to identify the object selected by the user, the application can threshold the dataset based on the opacity and apply a binary region growing, as shown in image B. This method will generate a mask C which is dependent on the opacity used for the threshold.
In general, if a mask has been generated using a threshold [t1,t2], it can only be used with a classification in which the densities outside [t1,t2] are set to zero.
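To make the coupling concrete, here is a minimal Python sketch of the threshold-plus-seeding step described above (the Thr3 and Seed3 objects of the pipeline in figure 1.3). The function name and signature are illustrative only, not part of the IAP API, and the sketch assumes the seed voxel itself lies inside the threshold range.

import numpy as np
from scipy import ndimage

def opacity_threshold_mask(volume, seed, t1, t2):
    # Thr3 step: keep the densities made visible by the opacity range [t1, t2].
    binary = (volume >= t1) & (volume <= t2)
    # Seed3 step: binary region growing, i.e. keep only the connected
    # component that contains the seed point.
    labels, _ = ndimage.label(binary)
    return labels == labels[tuple(seed)]

Any mask produced this way is only consistent with classifications that render the densities outside [t1,t2] as transparent, which is exactly the limitation discussed here.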
There is a very simple method to get around this limitation if necessary.
The mask can be regenerated each time the opacity is changed. This gives a behavior that is more natural from the user's perspective, as shown in figure 2.2.
Figure 2.2 The opacity mask can be recomputed at each opacity change. Image A shows the original opacity settings; image B shows the result of the thresholding and seeding of the bone structure. Once the opacity is lowered from B, the mask is recomputed, and the result is shown in image C.
The pipeline used in 2.2 is shown in figure 2.3. The difference from the pipeline used in 1.3 is that the connection between the Pvx opacity object and the Thr3 object is kept after the mask is generated.
[Pipeline diagram: Ss feeding Thr3 -> Seed3 -> Bv -> Bvf, alongside Vol -> Cproj]
Figure 2.3 Keeping the connection of the Thr3 object with the opacity Pvx allows regeneration of the mask on the fly.
Pipeline 2.3 resolves the limitation imposed by the threshold used for the region growing, but it triggers a possibly expensive computation for each opacity change. It will hence limit the interactivity of this operation. The performance of rotation and colour changes will remain unchanged.
Normal Replacement

As explained in the "PrS 3D Rendering Architecture" White Paper, the shading is computed using the normal of each voxel. The normal is approximated using the central difference or Sobel operators, which use the neighbouring densities of the voxel. Once the dataset is clipped with a binary object, the neighbourhood of the voxels on the surface changes, since the voxels outside the binary object are not used during the projection. So the normals of these voxels have to be recomputed.
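The following sketch (not from the original paper) shows the central difference approximation in NumPy; np.gradient reads the neighbouring densities along each axis, which is precisely the neighbourhood that becomes invalid on the cut surface after clipping.

import numpy as np

def central_difference_normals(volume):
    # Gradient by central differences along each axis, stacked into a
    # per-voxel vector and normalized to unit length.
    g = np.stack(np.gradient(volume.astype(np.float32)), axis=-1)
    mag = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(mag, 1e-6)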
Figure 2.4 shows why this operation is necessary. When the bitvol clips in a homogeneous region, as in 2.4.A, the normals point in several directions, and so the cut surface will look uneven with several dark points. Replacing the normal will make the surface look flat, as expected by the user.


Figure 2.4 Image A shows the cut surface if the normal replacement doesn't take place. The surface looks uneven with some dark points, since the normal can be zero in the homogeneous region. When the normal is replaced with the binary object's normal, image B, the surface looks flat as expected by the user.
The replacement of the normal is necessary when the application uses the binary object to cut the anatomy, as in figure 2.4, not when it uses it to extract the anatomy as in figures 2.0 or 2.1.
Since the same binary object can be used for both purposes at the same time, the IAP renderer will replace the normal all the time. In situations in which the binary object is used to extract some feature, the application can simply dilate the binary mask to avoid the effects of the normal replacement. In this situation the mask is based on a threshold or gray level region growing, and the dilation guarantees that the boundary of the mask will fall on transparent voxels and hence be invisible. A dilation of 2 or 3 voxels is suggested.
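A sketch of the suggested dilation, using SciPy's morphology (the helper name is hypothetical):

from scipy import ndimage

def dilate_extraction_mask(bitvol, margin=3):
    # Grow the mask by 2-3 voxels so its boundary falls on transparent
    # voxels; the replaced normals then affect only invisible voxels.
    return ndimage.binary_dilation(bitvol, iterations=margin)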
Note that when a binary object is used to extract the anatomy and to cut it at the same time, it is generated as the intersection of binary objects. The binary object used for the extraction has to be dilated before the intersection. In this way the quality of the final render is guaranteed while still allowing the user to edit the object. Figure 2.5 shows an example of this scenario.

Figure 2.5 Image A shows the skull, which has been extracted as described in 2.1. This object is cut using extrusion, as shown in 1.4. The bitvol used for the extraction is dilated, while the extruded bitvol is not; image B shows the final binary object superimposed on image A. The cutting area is exactly as defined by the user, while the mask is dilated to guarantee the image quality.
For unshaded volume rendering, the dilation is not necessary, and it will not affect the final quality.
Multiple Classifications

As presented in the "PrS 3D Rendering Architecture" White Paper, the IAP processing server allows the definition of several classifications in the same dataset, each one associated with its own binary object.
The binary objects can overlap, and so several classifications can be defined at the same location. This reflects the fact that in the dataset one voxel can contain several anatomical structures; it is in the nature of the data. This is also the same reason these structures share densities. The next section will analyze this situation in detail, since it is relevant for the measurements.
The IAP processing server supports two policies for overlapping classifications, and the application can extend these with a customized extension.
Surface Rendering

The Surface Rendering engine renders the binary object directly. The IAP processing server supports the visualization of several objects, each one with its own rotation matrix. Other features include texture mapping of the original gray level, and depth shading. Please refer to the "PrS 3D Rendering Architecture" White Paper for a full description of the functionality.
Measurements and Segmentation

One of the most important functions in any medical imaging application is to quantify the abnormality of the anatomy. This information is used to decide on the treatment to apply and to quantify the progress of the treatment delivered.
The IAP processing server supports a large number of 2D and 3D measurements. In this white paper we'll describe how the measurements can be used in conjunction with segmentation.
The measurement model has to follow the visualization model in order to be consistent with it and measure what the user is actually seeing: binary measurements for Surface Rendering, or gray level measurements for Volume Rendering.
In this white paper we'll focus on surface and volume measurements, since they are the most commonly used. For a complete list of the measurements supported, please refer to the Meas3 man page.
Definition of the Measurements

During the sampling process the object is not necessarily aligned with the main axes. As a result some voxels are only partially occluded; in other terms, the volume sampled by such a voxel is only partially occupied by the object being sampled. This is known as the "Partial Volume Artifact", and it causes the object to span voxels with several different densities.
Figure 3.0 shows this situation graphically. Object A is next to object B. dA is the density of a voxel 100% filled with object A, which we'll also consider homogeneous. dB is the density of a voxel completely filled with object B. We'll also assume in this example that dA < dB and that the density of the background is zero.
Figure 3.0 In this example there are two objects, A and B, which have been sampled along the grid shown in the picture.
The voxels included by object A can be divided into three different regions:
- Red Region: voxels completely covered by object A. They have density dA.
- Yellow Region: voxels partially covered by object A and the background.
- Blue Region: voxels covered by objects A and B.
Since in the scanning process the density of a voxel is the average density of the materials in the volume covered by the voxel, we can conclude that the yellow densities have values less than dA, and the blue range greater than dA. The picture also highlights the green area: voxels partially occluded by material B. The range of densities of the green voxels overlaps with the red, yellow and blue. The distribution of the densities is shown graphically in figure 2.1.

Figure 2.1 The yellow region has voxels partially occluded by object A and the background, hence the density will be a weighted sum of dA and the background density, which is zero. Similarly, the blue region has voxels with densities between dA and dB. The green region has voxels partially occluded by object B and the background; hence they can have the full range of densities.
Depending on the model used by the application (binary or gray level), the volume and the surface of object A can be computed according to the following rules:
Volume - Gray Level model: for each voxel in the dataset, the fraction of its volume covered by object A is added. Binary model: the voxels of the binary object representing object A are counted.

Surface - Gray Level model: Volume Rendering doesn't define a surface for the object, so this measurement is not applicable. Binary model: the voxels at the boundary of the binary object representing A are counted.
Gray Level

Let us assume that with the previous segmentation tools we can identify all the voxels in which object A lies, even partially. Figure 2.2 shows the object and also the histogram of the voxels in this area.
Figure 2.2 The outlined area represents the voxels identified by the segmentation process. The histogram of the voxels in this area is shown on the left. The difference between this histogram and the one in figure 2.1 is that the voxels in the green region are not present.
The difference between the histograms in figures 2.2 and 2.1 is that the voxels of the green area are removed. As described in the section "Segmentation as a Mask", this corresponds to removing the densities which are shared across the objects and do not belong to the segmented object.
Note that inside the mask the density of each voxel represents the occlusion (percentage) of object A in the volume of the voxel, regardless of its location.
To correctly visualize and measure the volume of object A we have to define, for each density, the percentage of the volume occupied. Since the density is the average of the objects present in the voxel, the estimate is straightforward:
1. For the voxels in the yellow region the occlusion is simply density/dA, since the density of the background is zero.
2. For the voxels in the blue region the occlusion for a given density is (dB - density)/(dB - dA).

Setting the opacity according to these criteria guarantees good quality of the volume rendered image. This opacity curve is shown in figure 2.3.
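The two rules translate directly into an opacity curve; a small sketch under the assumptions of this example (homogeneous objects, zero background density) follows. The function name is illustrative.

def occlusion(density, dA, dB):
    # Fraction of the voxel occupied by object A.
    if density <= 0 or density >= dB:
        return 0.0
    if density <= dA:
        return density / dA            # rule 1: yellow region
    return (dB - density) / (dB - dA)  # rule 2: blue region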
Figure 2.3 Opacity curve for the voxels in the segmented region. The opacity for each density represents the amount of object A in the region covered by the voxel.
So, in order to measure the volume, the application can request the histogram of the density values in the segmented area and weight each density by its opacity:
(*) Volume = sum over d in densities of histogram(d) * opacity(d)

As explained in the sections "Opacity and Segmentation" and "Normal Replacement", the application should dilate the bitvol if it is generated through a threshold and region growing. This operation will only include semi-transparent voxels, and so it will not affect the measurement of the volume as defined in (*).

It is not really possible to measure the surface of the object, since its borders are not known. However, looking at picture 2.2 it is clear that it can be estimated from the yellow and blue areas, since these are the voxels in which the border of the object lies.
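Returning to (*), the computation is short once the histogram of the segmented area is available. A sketch, assuming non-negative integer densities and any vectorized opacity mapping (for instance numpy.interp over the curve of figure 2.3); the names are illustrative:

import numpy as np

def gray_level_volume(volume, mask, opacity, voxel_volume=1.0):
    # (*): Volume = sum over d of histogram(d) * opacity(d), restricted to
    # the segmented (masked) region.
    hist = np.bincount(volume[mask].ravel())
    d = np.arange(hist.size)
    return voxel_volume * float(np.sum(hist * opacity(d)))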
The reader can find more information and a more detailed mathematical description in "Volume Rendering", R. A. Drebin et al., Computer Graphics, August 1988.
Binary

To segment an object the application typically sets two thresholds; depending on these, some parts of the yellow and blue regions can be included in or excluded from the segmentation. Figure 2.4 shows an example of this situation. The area of the histogram between the two thresholds is the volume of the object created.
Figure 2.4 When the application sets two thresholds, some voxels in the blue and yellow regions will be included. The picture on the left shows the voxels included in the segmentation, while the figure on the right shows the histogram and the two thresholds; the area covered by the histogram between the two thresholds (in green) represents the volume of the segmented object.
The surface of the object can simply be estimated as the number of voxels on the boundary of the binary object.
IAP Object Model

The examples described in the previous sections explain how the application can measure volume and surface on binary and gray level objects. In real case scenarios there are several objects involved and overlapping, and they usually don't have constant densities. However, the same principles are still applicable to obtain an estimate of the measurements.
The IAP processing server, with the Meas3 object, supports all the functionality required to perform the measurements described in this section. Meas3 computes the histogram, average value, max, min and standard deviation of the stack. If a binary object is connected to it, the measurements will be limited to the area included.
Meas3 also computes the volume and surface of binary objects. It supports two different policies for the surface measurements:
1. Edges: measure the perimeters of each plane in the binary object.
2. Coin: count the voxels in the bitvol which have at least one neighbour not belonging to the bitvol (i.e., the voxels on the surface of the bitvol).
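A sketch of the binary measurements, with the "Coin" surface policy expressed through a morphological erosion; the IAP implementation is not necessarily written this way, and bitvol is assumed to be a boolean 3D array:

import numpy as np
from scipy import ndimage

def binary_volume_and_surface(bitvol):
    # Volume: number of voxels in the binary object.
    volume = int(np.count_nonzero(bitvol))
    # "Coin" policy: voxels with at least one neighbour outside the bitvol,
    # i.e. what remains after removing the eroded interior.
    interior = ndimage.binary_erosion(bitvol)
    surface = int(np.count_nonzero(bitvol & ~interior))
    return volume, surface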
Additional Description B
"PrS 3D Rendering Architecture - Part 1"

PrS 3D Rendering Architecture White Paper

Introduction

Imaging Application Platform (IAP) is a well-established platform product specifically targeted at medical imaging. Its goal is to accelerate the development of applications for medical and biological applications. IAP has been used since 1991 to build applications ranging from review stations to advanced post-processing workstations. IAP supports a wide set of functionality including database, hardcopy, DICOM services, image processing, and reconstruction. The design is based on a client/server architecture and each class of functionality is implemented in a separate server. All the servers are based on a data reference pipeline model; an application instantiates objects and connects them together to obtain a live entity that responds to direct or indirect stimuli. This paper will focus on the image processing server (further referred to as the processing server or PrS) and, in particular, on the three-dimensional (3D) rendering architecture.
Two different rendering engines are at the core of 3D in IAP: a well proven and established solid renderer, and an advanced volume renderer, called Multi Mode Renderer (MMR). These two renderers together support a very large set of clinical applications. The integration between these two technologies is very tight. Data structures can be exchanged between the two renderers, making it possible to share functionality such as reconstruction, visualization and measurement. A set of tools is provided that allows interactive manipulation of volumes, variation of opacity and color, cut surfaces, camera, light positions and shading parameters. The motivation for this architecture is that our clinical exposure has led to the observation that there are several different rendering techniques available, each of which is optimal for a different visualization task.
It is rare that a clinical task does not benefit from combining several of these rendering techniques in a specific protocol. IAP also extends the benefits of the 3D architecture with its infrastructure by making additional functionality available to the MMR: extremely flexible adaptation to various memory scenarios, support for multi-threading, and asynchronous behavior.
All this comes together in a state of the art platform product that can handle not only best-case trade-show demonstrations but also real-life clinical scenarios easily and efficiently. The Cedara 3D platform technology is powerful, robust, and well-balanced, and it has no rivals on the market in terms of usage in the field, clinical insight, and breadth of functionality.
Glossary

Volume Rendering - A technique used to visualize 3D sampled data that does not require any geometrical intermediate representation.
Surface Rendering - A technique used to visualize 3D surfaces, represented by either polygons or voxels, that have been previously extracted from a sampled dataset.
Interpolation - A set of techniques used to generate missing data between known samples.
Voxel - A 3D discrete sample.
Shape Interpolation - An interpolation technique for binary objects that allows users to smoothly connect arbitrary contours.
Multiplanar or Curved Reformatting - Arbitrary cross-sections of a 3D sampled dataset.
MIP - Maximum Intensity Projection. A visualization technique that projects the maximum value along the viewing direction. Typically used for visualizing angiographic data.
ROI - Region of Interest. An irregular region which includes only the voxels that have to be processed.

Application Scenarios

Binary and Gray Level Functionality

For many years a controversy has raged about which technology is better for the visualization of biological data: volume rendering or surface rendering. Each technique has advantages and drawbacks. Which technique is best for a specific rendering task is a choice that is deliberately deferred to application designers and clinicians. In fact, the design of the processing server makes it possible to combine the best of both techniques.
The processing server provides a unifying framework where data structures used by these two technologies can be easily shared and exchanged. In fact, IAP is designed for visual data processing where the primary sources of data are sampled image datasets. For this reason, all our data structures are voxel-based. The two most important are binary solids and gray level slice stacks. Several objects in our visualization pipelines accept both types of data, providing a high level of flexibility and interoperability. For example, a binary solid can be directly visualized or used as a mask for the volume rendering of a gray level slice stack. Conversely, a gray level slice stack can be rendered directly and also used to texture map the rendering of a solid object or to provide gradient information for gray level gradient shading.
A gray level slice stack is generally composed of a set of parallel cross-sectional images acquired by a scanner, and the slices can be arbitrarily spaced and offset relative to each other. Several pixel types are supported, from 8-bit unsigned to 16-bit signed with a floating point scaling factor. Planar, arbitrary, and curved reformats (with user-defined thickness) are available for slice stacks and binary solids. Slice stacks can also be interpolated to generate isotropic volumes or resampled at a different resolution. They can be volume rendered with a variety of compositing modes, such as Maximum Intensity Projection (MIP), Shaded Volume Rendering, and Unshaded Volume Rendering. Multiple stacks can be registered and rendered together, each with a different compositing mode.
To generate binary solids, a wide range of segmentation operations are provided: simple thresholding, geometrical ROIs, seeding, morphological operations, etc. Several of these operations are available in both 2D and 3D.
Binary solids can be reconstructed from a slice stack, and shape interpolation can be applied during the reconstruction phase. When binary solids have been reconstructed, logic operations (e.g., intersection) are available together with disarticulation and cleanup functionality. Texture mapping and arbitrary visibility filters are also available for binary solids.
Clinical Scenarios

The following list is not meant to be exhaustive, but it does capture the most significant clinical requirements for a complete rendering engine. We want to underline the requirements of a real-life 3D rendering system for medical imaging because the day-to-day clinical situations are normally quite different from the best-case approach normally shown in demonstrations at trade shows.
The Cedara platform solution has no rivals in the market in terms of usage in the field, clinical insight and breadth of functionality.
Tumors

Tumors disturb the surrounding vascular structure and, in some cases, do not have well defined boundaries. Consequently, a set of rendering options needs to be available for them which can be mixed in a single image, including surface rendering (allowing volume calculations and providing semi-transparent surfaces at various levels of translucency, representation of segmentation confidence, etc.), volume rendering, and vasculature rendering techniques such as MIP. An image in which multiple rendering modes are used should still allow for the full range of measurement capabilities. For example, it should be possible to make the skin and skull translucent to allow a tumor to be seen, while still allowing measurements to be made along the skin surface.
Figure 1 - Volume rendered tumors.
In (a), the skin is semi-transparent and allows the user to see the tumor and its relationship with the vasculature. In (b), the location of the tumor is shown relative to the brain and vasculature.
Display of Correlated Data from Different Modalities

There is a need to display and explore data from multiple acquisitions in a single image (also called multi-channel data). Some of the possibilities include: pre- and post-operative data, metabolic PET data with higher resolution MR anatomic info, pre- and post-contrast studies, MR and MRA, etc.

The renderer is able to fuse them during the ray traversal, with a real 3D fusion, not just a 2D overlay of the images.
Figure 2 - Renderings of different modalities.
In (a), the rendering of an ultrasound kidney dataset. The red shows the visualization of the power mode; the gray shows the visualization of the B mode. Data provided by Sonoline Elegra. In (b), the volume rendering of a liver study. The dataset was acquired with a Hawkeye scanner. The low resolution CT shows the anatomy while the SPECT data highlights the hot spots. Data provided by Rambam Hospital, Israel. In (c), the MRA provides the details of the vasculature in red, while the MRI provides the details of the anatomy. (In this image, only the brain has been rendered.)
Dental Package

A dental package requires orthogonal, oblique, and curved reformatting plus accurate measurement capabilities. In addition, our experience suggests that rendering should be part of a dental package.
In 3D, surface rendering with cut surfaces corresponding to the orthogonal, oblique and curved dental reformats is required. In addition, the ability to label the surface of a 3D object with lines corresponding to the intersection of the object surface with the reformat planes would be useful. Since a dental package is typically used to provide information on where and how to insert dental implants, it should be possible to display geometric implant models in dental images and to obtain sizing and drilling trajectory information through extremely accurate measurements and surgical simulation. The discussion above on prosthesis design applies here also.
Large Size Dataset

Modern scanners are able to acquire a large amount of data; for example, a normal study from a spiral CT scan can easily be several hundred megabytes. The system must be able to handle such a study without any performance penalty and without slowing down the workflow of the radiologist. The processing server can accomplish this since it directly manages the buffers and optimizes, or completely avoids, swapping these buffers to disk. For more information on this functionality, please refer to the Memory Management section in the PrS White Paper.
Since volume rendering requires large data sizes, Memory Management is extremely relevant in these scenarios. Figure 4 shows the rendering of the CT dataset of the Visible Human Project. The processing server allows you to rotate this dataset without any swapping to disk after the pre-processing has been completed.
Figure 3 - Application of a curvilinear reformat for a dental package.

Figure 4 - Volume rendering of the Visible Human Project.

Since the processing server performs the interpolation before the rendering, the rendered dataset has a size of 512x512x1024, 12 bits per pixel. Including gradient information, the dataset size is 1 Gigabyte. Since the processing server performs some compression of the data and optimizes the memory buffers, there is no swapping during rotation, even on a 1 Gigabyte system.
Volume Rendering

The Basics

Volume rendering is a flexible technique for visualizing sampled image data (e.g., CT, MR, Ultrasound, Nuclear Medicine). The key benefit of this technique is the ability to display the sampled data directly, without using a geometrical representation and without the need for segmentation. This makes all the sampled data available during the visualization and, by using a variable opacity transfer function, allows inner structures to appear semi-transparent.
The process of generating an image can be described intuitively using the ray-casting idea. Casting a ray from the observer through the volume generates each pixel in the final image. Samples have to be interpolated along the ray during the traversal. Each sample is classified, and the image generation process is a numerical approximation of the volume rendering integral.

Figure 5 - Conceptual illustration of volume rendering.
In volume rendering, a fundamental role is played by the classification. The classification defines the color and opacity of each voxel in the dataset. The opacity defines "how much" of the voxel is visible by associating a value from 0 (fully transparent) to 1 (fully opaque). Using a continuous range of values avoids aliasing because it is not a binary threshold. The color allows the user to distinguish between the densities (which represent different tissues in a CT study, for example) in the 3D image. Figure 6 shows three opacity settings for the same CT dataset.
A classification in which each voxel depends only on the density is called global, while a classification in which each voxel depends on its position and density is called local. A global classification function works quite well with CT data, since each tissue is characterized by a range of Hounsfield units. In MRI, the global classification function has very limited applicability, as shown in Figure 7. To handle MR data properly, a local classification is necessary.

Figure 6 - Different opacity settings for a CT dataset.

The first row shows the volume rendered image, while the second row shows the histogram plus the opacity curve. Different colors have been used to differentiate the soft tissue and the bone in the 3D view.

Figure 7 - Different opacity settings for an MR dataset.
In MR, the underlying physics is not compatible with the global opacity transfer function; it is not possible to select different tissues just by changing it. As will be explained in the next section, the processing server overcomes this problem by allowing the application to use a local classification.
The IAP renderer also takes advantage of optimization techniques that leverage several view-independent steps of the rendering. Specifically, some of the view-independent processing optimizations implemented are:
- interpolation;
- gradient generation; and
- background suppression.
Naturally, all these optimizations are achieved at the cost of the initial computation and memory necessary to store this pre-calculated information and, therefore, can be selectively disabled depending on the available configuration. The application developer also has several controllable configurations available that allow the developer to gracefully decrease the amount of information cached depending on the system resources available.
Rendering Modes As we saw, the general model that the IAP volume renderer uses to generate a projection is to composite all the voxels that lie "behind" a pixel in the image plane. The compositing process differs for every rendering mode, and the most important are illustrated here.
In the compositing process, the following variables are involved:
- Op(d): opacity for density d. The opacity values range from 0.0 (completely transparent) to 1.0 (fully opaque).
- Color(d): color for density d.
- V(x,y,z): density in the volume at the location x, y, z.
- I(u,v): intensity of the output image at pixel location u,v.
- (x,y,z) = SampleLocation(Ray, i): this notation represents the computation of the i-th sample point along the ray. This can be accomplished with nearest neighbor or trilinear interpolation.
- Ray = ComputeRay(u,v): this notation represents the computation of the ray passing through the pixel u,v in the image plane. Typically, this involves the definition of the sampling step and the direction of the ray.
- Normal(x,y,z): normal of the voxel at location x,y,z. Typically, this is approximated with a central difference operator.
- L: light vector.
- °: represents the dot product.

Figure 8 shows a graphical representation of these values.

Figure 8 - Values involved in the ray casting process.

Maximum Intensity Projection (MIP)

This is the pseudo-code for the MIP:
for every pixel u,v in I {
    ray = ComputeRay(u,v)
    for every sample point i along ray {
        (x,y,z) = SampleLocation(ray, i)
        if( I(u,v) < V(x,y,z) )
            I(u,v) = V(x,y,z)
    }
}

Density Volume Rendering (DVR)

This is the pseudo-code for DVR (referred to as "Multicolor" in the IAP man pages):
for every pixel u,v in I {
    ray = ComputeRay(u,v)
    ray_opacity = 0.0
    for every sample point i along ray {
        (x,y,z) = SampleLocation(ray, i)
        alpha = ( 1.0 - ray_opacity ) * Op(V(x,y,z))
        I(u,v) += Color(V(x,y,z)) * alpha
        ray_opacity += alpha
    }
}

Shaded Volume Rendering

This is the pseudo-code for Shaded Volume Rendering (referred to as "Shaded Multicolor" in the IAP man pages):
for every pixel u,v in I {
    ray = ComputeRay(u,v)
    ray_opacity = 0.0
    for every sample point i along ray {
        (x,y,z) = SampleLocation(ray, i)
        alpha = ( 1.0 - ray_opacity ) * Op(V(x,y,z))
        shade = Normal(x,y,z) ° L
        I(u,v) += Color(V(x,y,z)) * alpha * shade
        ray_opacity += alpha
    }
}

Figure 9 shows the same dataset rendered with these three rendering modes.
Figure 9 - Rendering modes.
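For reference, here is a runnable Python transcription of the DVR compositing loop for a single ray; the names are illustrative, with 'op' and 'color' standing in for the classification tables. The early exit once the ray becomes opaque is a standard optimization not shown in the pseudo-code above.

def composite_ray(samples, op, color):
    intensity, ray_opacity = 0.0, 0.0
    for density in samples:
        alpha = (1.0 - ray_opacity) * op(density)  # remaining transparency
        intensity += color(density) * alpha
        ray_opacity += alpha
        if ray_opacity >= 1.0:                     # early ray termination
            break
    return intensity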
Object Model

For a generic description of the IAP object model, please refer to the PrS White Paper. In this section, we illustrate in detail the objects involved in the volume rendering pipeline.
In Figure 10 you can see the IAP volume rendering pipeline. The original slices are kept in a number of raster objects (Ras). Those Ras objects are collected in a single slice stack (Ss) and then passed to the interpolator object (Sinterp).
Here, the developer has the choice of several interpolation techniques (i.e., cubic, linear, nearest neighbor). The output of the Sinterp then goes into the view-independent preprocessor (Vol) and then finally to the projector object (Cproj).
From that point on, the pipeline becomes purely two-dimensional. If Cproj is used in gray scale mode (no color table applied during the projection, as in Figure 9), the application can apply window/level to the output image in V2. If Cproj is in color mode, V2 will ignore the window/level setting. The application can add window/level in color mode using a dynamic extension of the Filter2 object, as reported in Appendix C.
Figure 10 - Pipeline used for volume rendering.
One of the keys to correct interpretation of a volume dataset is motion. The amount of data in today's clinical datasets is so high that it is very difficult to perceive all the details in a static picture. Historically, this has been done using a pre-recorded cine-loop, but now it can be done by interactively changing the view position. Apart from the raw speed of the algorithm, it is also convenient to enhance the "perceived speed" of a volume rendering algorithm by using a progressive refinement update. Progressive refinement can be implemented in several ways in IAP, but the easiest and most common way to do it for volume rendering is to set up a second downsampled data-stream. The two branches, each processing the input data at a different resolution, are rendered in alternation.
Figure 11 shows how the pipeline has to be modified in order to achieve progressive refinement in IAP. Two volume rendering pipelines (Sinterp, Volume, Cproj) are set up in parallel, fed with the same stack, and output in the same window. One of these pipelines, called High Speed, uses a downsampled dataset (the downsampling is performed in Sinterp). This allows rendering at a very high rate (more than 10 frames per second) even on a general purpose PC.
The second pipeline is called High Quality. It renders the original dataset, usually interpolated with a linear or cubic kernel.

Figure 11 - High speed/high quality pipelines.
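The idea of the two branches can be sketched in a few lines of Python (the names are illustrative; 'project' stands in for the Cproj projection):

def progressive_render(stack, project, factor=4):
    # High Speed branch: renders a dataset downsampled by 'factor'
    # for immediate feedback.
    yield project(stack[::factor, ::factor, ::factor])
    # High Quality branch: re-renders the full-resolution dataset.
    yield project(stack)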
High Speed and High Quality are joined at the progressive refinement object (Pref). The High Quality pipeline is always interruptible to guarantee application responsiveness, while the High Speed pipeline is normally in atomic mode (non-interruptible) and executes at a higher priority. The application developer has control of all these options, and can alter the settings depending on the requirements. For example, if a cine-loop has to be generated, the High Speed pipeline is disconnected because the intermediate renderings are not used in this scenario. For rendering modes that require opacity and color control, a set of Pvx objects specifying a pixel value transformation can be connected to Vol.
The transformation is specified independently from the pixel type and can be a simple linear ramp or fully piece-wise linear. Using the full piece-wise linear specification, the application writer can implement several different types of opacity and color editors specific to each medical imaging modality. Another common addition to the volume rendering pipeline is a cut plane object, Lim3, or a Binary object. These objects specify the region of the stack that has to be projected and can be connected either to Sinterp or to Vol.
The processing server can also render the channels of a multi-channel dataset together. As discussed in "Clinical Scenarios", this feature is very important in modalities such as MR and ultrasound, which can acquire multiple datasets. The renderer will interleave the datasets during the projection, which guarantees that it is a real 3D fusion, rather than a 2D overlay. Examples of this functionality are in Figure 2. Several stacks are rendered together simply by connecting several Volume objects to the same Cproj object, as shown in Figure 12. As in the case of one Volume object, the developer can use a High Speed pipeline and a High Quality pipeline to enhance the interactivity.

Advanced Features

As we have seen in "Clinical Scenarios", a renderer needs to have a much higher level of sophistication to be effective in clinical practice. The processing server provides functionality which goes beyond "standard" volume rendering. The most relevant features are presented here.
Clipping Tools

In 3D visualization the user is usually interested in the relationship between the anatomical structures. In order to investigate that relationship, it is necessary that the application be allowed to clip part of the dataset to reveal the internal structure. Our experience shows that there are two kinds of clipping:
- Permanent: part of the dataset (normally irrelevant structures) is permanently removed.
- Temporary: part of the dataset (normally relevant anatomy) is temporarily removed to reveal internal structures.
To accommodate these requirements, and in order to maximize performance, the processing server implements two kinds of clipping:

Clipping at pre-processing time. This feature implements "permanent" clipping. The voxels clipped in this stage are not included in the data structure used by the renderer, and are not even computed, if possible.

Clipping at rendering time. This feature implements "temporary" clipping. The voxels are only skipped during the rendering process and are kept in the rendering data structure.
Figure 12 - Rendering of a multi-channel dataset.

Using clipping at pre-processing time optimizes rendering performance, but any change to the clipping region will require a (potentially) expensive pre-processing step, which usually prevents using this feature interactively. Clipping at rendering time instead allows the application to clip the dataset interactively: since the clipping is performed by skipping some voxels during the projection, changing this region is fully interactive.
Usually the application generates two types of clipping regions:

- Geometrical clipping regions (e.g., generated by a bounding box or oblique plane).
- Irregular clipping regions (e.g., generated by outlining an anatomical region).

The processing server supports these two kinds of clipping both at pre-processing time and at rendering time.
Geometrical - Pre-processing: the object that performs the pre-processing, Vol, accepts a bounding box for clipping. Rendering: the render object, Cproj, accepts a bounding box as input; in the case of a multi-channel dataset, each volume can be clipped independently, and it is also possible to clip each volume with an oblique plane or a box rotated with respect to the orthogonal axes.

Irregular - Pre-processing: the Vol object accepts a bitvol as input, and will preprocess only the voxels included in the bitvol. Rendering: any binary volume can be used as a clipping region, and the processing server also allows this region to be translated interactively.
Irregular clipping at rendering time is very useful in several situations, particularly during MIP rendering. Normally several anatomical structures overlap in the MIP image, so interactively moving a clipping region (e.g., cylinder, oblique slab, or sphere) can clarify the ambiguities in the image. See Figure 13 for an example.
Another very common way to clip the dataset is based on the density. For example, on a CT scan the densities below the Hounsfield value of water are background and hence carry no diagnostic information. Even in this case, the processing server supports clipping at pre-processing time (using the Vol object) and at rendering time (by simply changing the opacity).

Figure 13 - Interactive clipping on the MIP view.
The user can interactively move the semi-sphere that clips the dataset and identify the region of the AVM. Usually the clipping region is also overlaid in the orthogonal MIP views.
Local Classification

The global classification is a very simple model and has limited applicability. As shown in Figure 7, the MR dataset does not allow the clinician to visualize any specific tissue or organ. In the CTA study, it is not possible to distinguish between contrast agent and bone. Both cases significantly reduce the ability to understand the anatomy. In order to overcome these problems, it is necessary to use the local classification.
By definition, the local classification is a function that assigns color and opacity to each voxel in the dataset based on its density and location. In reality the color is associated with the voxel depending on the anatomical structure to which it belongs. So it is possible to split the dataset into several regions (representing the anatomical structures) and assign a colormap and opacity to each one of them. In other terms, the local classification is implemented with a set of classifications, each one applied to a different region of the dataset. For this reason it is very often referred to as Multiple Classification.
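As an illustration, here is a minimal sketch of a local classification; the names are hypothetical, and overlaps are resolved here by letting the last region win, whereas the IAP MclVol object applies configurable policies, as described below.

import numpy as np

def local_classification(volume, regions, tables, global_table):
    # Start from the global classification, then override each region
    # (a binary object) with its own colour/opacity table.
    rgba = global_table(volume)
    for mask, table in zip(regions, tables):
        rgba[mask] = table(volume[mask])
    return rgba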
The processing server supports Multiple Classification in a very easy and efficient manner. The application can create several binary objects (the regions) and apply a different classification to each one of them. Multiple Classification is directly supported by the MclVol object, which allows several classifications to be applied to the output of the Vol object. Figure 14 shows the case in which two classifications are defined in the same stack.

Figure 14 - Pipeline to use two classifications in the same volume.
Each classification (C1) is defined in the area of the bitvol (Bv) object associated with it. The Bvf object has been added to the pipeline to allow it to interactively translate the area of the classification to obtain, for example, features as shown in Figure 13.
Since all the classifications share the same data structure (produced by Vol), the amount of memory required is independent of the number of classifications. This design also allows you to add/remove/move a classification with minimal pre-processing.
The MclVol object allows controlling the classification in the areas in which one or more regions overlap. This is implemented through a set of policies that can be extended with run-time extensions. This feature is a powerful tool for an application which knows in advance what each classification represents.
Figure 15 shows how this pipeline can produce the correct result on a CTA study. The aorta is filled with a contrast agent that has the same densities as the bone. Defining one bitvol (shown in Figure 15a) which defines the area of the aorta allows the application to apply the proper classification to the stack. Note that the bitvol does not have to exactly match the structure of interest, but rather loosely contains it or excludes the obstructing anatomies. The direct volume rendering process, by using an appropriate opacity transfer function, will allow for the visualization of all the details contained in the bitvols.
This approach can be applied to a variety of different modalities. In Figure 1, it is applied to an MR study. The main benefit of our approach is that the creation of the bitvols is very time effective. Please refer to the "IAP PrS Segmentation Architecture" White Paper for a complete list of the functionalities.
Figure 15 - Local classification applied to a CTA study.

In (a), a bitvol is used to define the region of the aorta. In (b), the CTA of the abdominal aorta is classified using local classification.
The processing server functionality for the creation/manipulation of bitvols includes:
- Shape Interpolation - Allows the clinician to reconstruct a smooth binary object from a set of 2D ROIs. It is very effective for extracting complex organs that have an irregular shape, like the brain. Figure 16 shows how this functionality can be applied.
- Extrusion - Allows the clinician to extrude a 2D ROI along any direction. It is very useful for eliminating structures that obscure relevant parts of the anatomy, and is often used in ultrasound applications.
- Seeding - 3D region growing in the binary object. This functionality very naturally eliminates noise or irrelevant structures around the area of interest.
- Disarticulation - A kind of region growing which allows the clinician to separate objects which are loosely connected and represent different anatomical structures.
- Dilation/Erosion - Each binary object can be dilated or eroded in 3D by an arbitrary number of pixels for each axis.
- Union, Intersection and Complement of the binary objects.
- Gray Level region growing - Allows the clinician to segment the object without requiring knowledge of thresholds.
The processing server does not limit the manipulation of the binary object to these tools. If the application has the knowledge of the region to define its own segmentation (e.g., using an anatomical atlas it can delineate the brain and directly generate the bitvol used in Figure 16), it is straightforward to introduce this bitvol into the rendering pipeline.
Figure 16 - Example of shape interpolation.
To extract a complex organ, the user roughly outlines the brain contours on a few slices, and then shape interpolation reconstruction generates the bitvol. The outlining of the brain does not have to be precise, but just has to include the region of interest. The volume rendering process, with a proper opacity table, will extract the details of the brain.
The functionality supported by MclVol can also be used to arbitrarily cut the dataset and texture map the original gray level onto the cutting surface. One of the main benefits of this feature is that the gray level from the MPRs can be embedded in the 3D visualization. Figure 17 shows how this can be done in two different ways. In Figure 17a, the MPR planes are visualized with the original gray level information. This can be used to correlate the MPRs shown by the application with the dataset. In Figure 17b, a cut into the dataset shows the depth of the fracture in the bone.
Figure 18 shows another example of this feature. In this case, a spherical classification is used to reveal the bone structures in a CT dataset (Figure 18a) or the meningioma in an MR dataset (Figure 18b).

Figure 17 - Example of embedding gray level from the MPRs.

The functionality of MclVol allows you to embed the information from the MPRs in the visualization. This can be used for showing the location of the MPRs in space or for investigating the depth of some structure.
Figure 18 - Example of clipping achievable with MclVol.

In (a), there are two classifications in the CT dataset: the skin, which is applied to the whole dataset; and the bone, which is applied to the sphere. In (b), there are four different classifications.
To minimize memory consumption, MclVol supports multiple data-stream connections so that several Cproj objects can share the same data structure. It is possible to associate a group of classifications with a specific Cproj, so each Cproj object can display a different set of objects. Note that in order to minimize memory consumption, Volume keeps only one copy of the processed dataset. Thus, if some Cproj objects connected to MclVol are using Shaded Volume Rendering while others are using MIP or Unshaded Volume Rendering, performance will be severely affected, since Volume will remove the gradient data structure when the MIP or Unshaded Volume Rendering is scheduled for computation and recompute it when the Shaded Volume Rendering is scheduled. In this scenario, we suggest you use two Vols and two MclVols, one for Shaded and one for Unshaded Volume Rendering, and connect each Cproj to the MclVol depending on the rendering mode used.

Appendix C
Technical Report "PrS 3D Rendering Architecture - Part 2"

Open MMR
There is a class of applications that requires anatomical data to be rendered with synthetic objects, usually defined by polygons. Typically, applications oriented for planning (i.e., radiotherapy or surgery) require this feature.
The processing server supports this functionality in a very flexible way. The application can provide to the renderer images with an associated Z buffer that have to be embedded in the scene. The Z buffer stores, for each pixel in the image, the distance from the polygon to the image plane. The Z buffer is widely used in computer graphics and supported by several libraries. The application has the freedom to choose the 3D library and, if necessary, write its own.
Figure 1 shows an example of this functionality. The cone, which has been rendered using OpenGL, is embedded in the dataset. Notice that some voxels in front of the cone are semi-transparent, while the voxels behind the cone are not rendered at all. This functionality doesn't require any pre-processing, so changing the geometry can be fully interactive.
The Z buffer is a very simple and effective way to embed geometry in the Volume, but imposes some limitations. For each pixel in the image there must be only one polygon projected onto it. This restricts the application to fully opaque polygons or non-overlapping semi-transparent polygons.
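A sketch of how an opaque polygon's Z buffer can be folded into the traversal of one ray follows; the names are illustrative, and this shows the idea, not the IAP implementation.

def composite_ray_with_geometry(samples, depths, geom_depth, geom_color, op, color):
    intensity, ray_opacity = 0.0, 0.0
    for density, depth in zip(samples, depths):
        if depth >= geom_depth:
            # The geometry absorbs the remaining transparency; samples
            # behind it are never rendered.
            return intensity + (1.0 - ray_opacity) * geom_color
        alpha = (1.0 - ray_opacity) * op(density)
        intensity += color(density) * alpha
        ray_opacity += alpha
    return intensity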
Figure 1 - Geometry embedded in the volumetric dataset.
Opacity Modulation

One technique widely used in volume rendering to enhance the quality of the image is changing the opacity of each voxel depending on the magnitude of the gradient at the voxel's location. This technique, called opacity modulation, allows the user to enhance the transitions between anatomical structures (characterized by strong gradient magnitude) and suppress homogeneous regions in the dataset. It greatly improves the effect of translucency: when the homogeneous regions are rendered with low opacity, they tend to become dark and suppress details. Using opacity modulation, the contribution of these regions can be completely suppressed.
The processing server supports opacity modulation in a very flexible manner. A modulation table, which defines a multiplication factor for the opacity of each voxel depending on its gradient, is defined for each range of densities. In Figure 2, for example, the modulation has been applied only to the soft tissue densities in the CTA dataset.
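The per-voxel rule can be sketched as follows (hypothetical names; 'modulation_for' returns the modulation table assigned to a density's range, or None when that range is not modulated):

def modulated_opacity(density, grad_mag, op, modulation_for):
    table = modulation_for(density)
    factor = table(grad_mag) if table is not None else 1.0
    return op(density) * factor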

Figure 2 - Opacity modulation used to enhance translucency.
Suppressing the contribution of a homogeneous region, which usually appears dark with low opacity, allows the user to visualize the inner details of the data.
The effect of the modulation is also evident in Figure 3 which shows two slices of the dataset with the same settings used in Figure 2. The bone structure is the same in both images, while in Figure 20b, only the border of the soft tissue is visible (characterized by strong magnitude), while the inner part has been removed (characterized by low magnitude).
Figure 3 - Cross-section of the dataset using the settings from Figure 20.
b.
As mentioned before, opacity modulation can also enhance details in the dataset. In Figure 4, the vasculature rendered is enhanced by increasing the opacity at the border of the vessels characterized by high gradient magnitude.
As you can see, the vessel in Figure 22b appears more solid and better defined.
Cedara Software Corp. Page 5 PrS 3D Rendering Architecture White Paper ~ ' Figure 4 - Opacity modulation used to enhance vasculature visualization.
b.
One more application of opacity modulation is to minimize the effect of "color bleeding" due to the partial volume artifact in the CTA dataset. In this case, the opacity modulation is applied to the densities of the contrast agent and the strong magnitude is suppressed. Typically, the user tunes this value for each specific dataset. Figure 5 shows an example of a CTA of the neck region.
Figure 5 - Opacity modulation used to suppress "color bleeding".
In (b) the opacity modulation is used to suppress the high gradient magnitude associated with the contrast agent densities.
b.
In this situation, opacity modulation represents a good tradeoff between user intervention and image quality. To use the local classification, the user has to segment the carotid from the dataset, while in this case, the user has only to change a slider to interactively adjust for the best image quality. Note that when using this technique, it is not possible to measure the volume of the carotid, since the system does not have airy information about the object. Measurement requires the segmentation of the anatomical structure, and hence muhiple classification. Please refer to the "IAP PrS Segmentation Architecture" V~bite PrS 3D Rendering Architecture White Paper Paper for a detailed description of the measurements and their relationship with the local classification.
The processing server allows the application to set a modulation table for each density range of each Volume object rendered in the scene. Currently, opacity modulation is supported when the input rasters are 8 bit or 12 bit.
Mixing of Rendering Modes The application usually chooses the rendering method based on the feature that the user is looking for in the dataset. Typically, MM11' is used to show the vasculature on an MR dataset, while Unshaded or Shaded Volume Rendering is more appropriate to show the anatomy (like brain or tumor). In some situations, both these features have to be visualized together, hence different rendering modes have to be mixed.
The processing server provides this functionality because we have seen some clinical benefit. For example, Figure 6 shows a G"TA dataset in which two regions have been classified; the carotid and the bone structure. In Figure 24a, both regions are rendered with Shaded Volume Rendering. The user can .
appreciate the shape of the anatomy but cannot see the calcification inside the carotid. In Figure 24b, the carotid is rendered using MIP while the bone using Shaded Volume Rendering. In this image, the calcification is easily visible.
The processing server merges the two regions during the rendering, not as a 2D overlay. This guarantees that the relative positions of the anatomical structures are correct.
Figure 6 - CTA study showing mixed rendering.
In (a), the carotid and bone are both rendered with Shaded Volume Rendering. In (b), the carotid is rendered with MIP and the bone with Shaded Volume Rendering.
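The per-ray merge just described can be sketched as a single front-to-back traversal. This is a simplified illustration under stated assumptions (the MIP maximum is blended at its own depth, attenuated by the shaded material accumulated in front of it); it is not the processing server's actual implementation.

#include <vector>

enum class Mode { Shaded, MIP };

struct Sample {          // one resampled point along a ray
    float value;         // shaded color or raw density
    float opacity;       // classified (and possibly modulated) opacity
    Mode  mode;          // render mode of the region this voxel belongs to
};

// Merge MIP and composited regions in one front-to-back ray traversal.
// The MIP maximum is attenuated by whatever shaded material lies in front
// of it, so the relative positions of the structures stay correct.
float castMixedRay(const std::vector<Sample>& samples)
{
    float color = 0.0f, alpha = 0.0f;        // front-to-back accumulators
    float mipMax = 0.0f, mipWeight = 1.0f;   // max value and transparency in front of it
    for (const Sample& s : samples) {
        if (s.mode == Mode::MIP) {
            if (s.value > mipMax) {          // new maximum: remember how much
                mipMax = s.value;            // shaded material is in front of it
                mipWeight = 1.0f - alpha;
            }
        } else {
            color += (1.0f - alpha) * s.opacity * s.value;
            alpha += (1.0f - alpha) * s.opacity;
            if (alpha >= 0.99f) break;       // early termination: later samples
        }                                    // would be invisible anyway
    }
    return color + mipWeight * mipMax;       // merged at render time, not as 2D overlay
}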
Another situation in which this functionality can be used is with a multi-channel dataset. In Figure 7, an NM and a CT dataset are rendered together. In this example, the user is looking for the "hot spots" in the NM dataset and their relationship with the surrounding anatomy. The hot spots are by their nature visualized using MIP rendering, while the anatomy in the CT dataset can be properly rendered using Shaded Volume Rendering. Figure 7b shows the mixed rendering mode: the hot spots are clearly visible in the body. Figure 7a depicts both datasets rendered using Shaded mode; in this case, the location of the hot spots in the body is not as clear.
Currently, the processing server allows the fusion only of MIP with Shaded or Unshaded Volume Rendering.
Figure 7 - Different modalities and rendering modes.
In (a), an NM dataset and a CT dataset are both rendered using Shaded Volume Rendering. In (b), the NM dataset is rendered using MIP and the CT dataset is rendered using Shaded Volume Rendering. Data provided by Rambam Hospital, Israel.
Coordinate Query
In the application, there is often the need to correlate the 3D images with the 2D MPRs. For example, Figure 8 shows that the user can see the stenosis in the MIP view, but needs the MPR to estimate the degree of stenosis.

Since a voxel can be semi-transparent in volume rendering, selecting a point in the 3D view does not uniquely determine a location inside the stack; rather, it determines a list of voxels which contribute to the color of the selected pixel.
Several algorithms can be used to select the voxel in this list to be considered as the "selected point". For example, the first non-transparent voxel can be selected, or the first opaque voxel can be used instead. Our experience shows that the algorithm described in "A method for specifying 3D interested regions on Volume Rendered Images and its evaluation for Virtual Endoscopy System" (Toyofumi Saito, Kensaku Mori, Yasuhito Suenaga, Jun-ichi Hasegawa, Jun-ichiro Toriwaki, and Kazuhiro Katada, CARS 2000, San Francisco) works very well. The algorithm selects the voxel that contributes most to the color of the selected pixel. It is fully automatic (no threshold is required) and it works in a very intuitive way; a sketch of this selection rule is given below.
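The following minimal sketch illustrates that rule: the contribution of a voxel is its opacity attenuated by the transparency accumulated in front of it. The RayVoxel structure is an illustrative stand-in for the per-voxel data returned by the coordinate query, not the actual interface.

#include <cstddef>
#include <vector>

// Voxels along the ray through the selected pixel, ordered front to back.
struct RayVoxel { float opacity; /* plus density, position, ... */ };

// Select the voxel that contributes most to the final pixel color
// (the selection rule of Saito et al., CARS 2000). No threshold is needed.
size_t selectMostContributingVoxel(const std::vector<RayVoxel>& ray)
{
    float transparency = 1.0f;   // product of (1 - opacity) of voxels in front
    float best = -1.0f;
    size_t bestIdx = 0;
    for (size_t i = 0; i < ray.size(); ++i) {
        float contribution = transparency * ray[i].opacity;
        if (contribution > best) { best = contribution; bestIdx = i; }
        transparency *= (1.0f - ray[i].opacity);
    }
    return bestIdx;
}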
This functionality can also be used to create a 3D marker. When a specific point is selected in stack space, the processing server can track its position across rotations and the application can query the list of voxels that contribute to that pixel. The application can then verify whether the point is visible. Since the marker is rendered as a 2D overlay, the thickness of the marker line does not increase when the image is zoomed. See Figure 9.
Figure 8 - Localization of an anatomical point using a MIP view.
The user clicks on the stenosis visible on the MIP view (note the position of the pointer). The application then moves the MPR planes to that location to allow the user to estimate the degree of stenosis.
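A minimal sketch of this application-side behavior follows, with hypothetical plane-setter functions (they are not part of any documented interface): the stack-space voxel picked on the MIP view drives the three orthogonal MPR planes.

#include <cstdio>

struct VoxelPos { int x, y, z; };  // selected point in stack space

// Hypothetical plane setters; a real application would drive its MPR views.
void setAxialSlice(int z)    { std::printf("axial -> %d\n", z); }
void setCoronalSlice(int y)  { std::printf("coronal -> %d\n", y); }
void setSagittalSlice(int x) { std::printf("sagittal -> %d\n", x); }

// Move the three orthogonal MPR planes through the picked voxel.
void syncMPRsToSelection(const VoxelPos& p)
{
    setAxialSlice(p.z);     // plane perpendicular to the z axis
    setCoronalSlice(p.y);   // plane perpendicular to the y axis
    setSagittalSlice(p.x);  // plane perpendicular to the x axis
}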

Figure 9 - Example of a 3D marker.
A specific point at the base of the orbit is tracked during the interactive rotation of the dataset. In (a), the point is visible to the user and hence the marker is drawn in red. In (b), the point is occluded by bony structures, hence the marker is drawn in blue.
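One plausible way to make the visibility decision is to check how much opacity has accumulated in front of the tracked point in the coordinate-query list. This is a sketch only; the 0.5 visibility threshold is our assumption, not a documented value.

#include <vector>

// Per-voxel data from a coordinate query at the marker's projected pixel.
struct QueryVoxel { float opacity; bool isTrackedPoint; };

// The tracked point counts as visible (marker drawn in red) if little
// opacity has accumulated in front of it; otherwise it is occluded (blue).
bool markerVisible(const std::vector<QueryVoxel>& frontToBack)
{
    float accumulated = 0.0f;
    for (const QueryVoxel& v : frontToBack) {
        if (v.isTrackedPoint)
            return accumulated < 0.5f;           // mostly unobstructed
        accumulated += (1.0f - accumulated) * v.opacity;
    }
    return false;                                // point never reached: occluded
}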
The processing server returns to the application all of the voxels projected onto the selected pixel. For each voxel, the following values are returned:
- The density of the voxel: V(xyz)
- The opacity of the voxel: op[V(xyz)]
- If color mode is used, the color of the voxel
- The accumulated opacity: ray_opacity
- The accumulated color, if color mode is used, or the accumulated density: I(xy)
During coordinate query, nearest-neighbor interpolation is used. The application can then scan this list to determine the selected voxel using its own algorithm.
Note that the IAP processing server also returns the voxels that do not contribute to the final pixel. This is done on purpose so that the application can, for example, determine the size/thickness of the object (e.g., vessel or bone) on which the user has clicked.
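For example, a simple (hypothetical) thickness estimate scans the full list for the contiguous run of non-transparent voxels around the selected one; the opacity threshold and sample spacing here are illustrative values, not part of the interface.

#include <cstddef>
#include <vector>

// Estimate the thickness of the clicked structure from the full voxel list
// returned by the coordinate query, including non-contributing voxels.
float estimateThickness(const std::vector<float>& opacities, size_t selected,
                        float sampleSpacingMm, float threshold = 0.05f)
{
    size_t first = selected, last = selected;
    while (first > 0 && opacities[first - 1] > threshold) --first;
    while (last + 1 < opacities.size() && opacities[last + 1] > threshold) ++last;
    return (last - first + 1) * sampleSpacingMm;   // run length times spacing
}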

Perspective Projection
In medical imaging, the term "3D image" very often refers to an image generated using parallel projection. The word parallel indicates that the projector rays (the rays described in the ray-casting pseudo code of this white paper) are all parallel.
This technique creates an impression of three-dimensionality in the image but does not simulate human vision accurately. In human vision, the rays are not parallel but instead converge on a point, the eye of the viewer (more specifically, the retina). This rendering geometry is very often referred to as perspective projection. Perspective projection allows the generation of more realistic images than parallel projection. Figure 10 and Figure 11 illustrate the difference between these two projection methods.
Figure 10 - Parallel and perspective projections (diagram labels: image plane, focal point).
Figure 11 - Parallel projection and perspective projection schemes.
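The geometric difference can be stated compactly in code. The sketch below shows only how the per-pixel rays are generated in each scheme; it assumes a generic ray-casting setup rather than the Cproj/Pproj internals.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Ray { Vec3 origin, direction; };

// Parallel projection: every pixel's ray has the same direction; only the
// origin shifts across the image plane.
Ray parallelRay(Vec3 pixelOnImagePlane, Vec3 viewDirection) {
    return { pixelOnImagePlane, normalize(viewDirection) };
}

// Perspective projection: every ray starts at the focal point (the "eye")
// and diverges through its pixel on the image plane.
Ray perspectiveRay(Vec3 focalPoint, Vec3 pixelOnImagePlane) {
    Vec3 d = { pixelOnImagePlane.x - focalPoint.x,
               pixelOnImagePlane.y - focalPoint.y,
               pixelOnImagePlane.z - focalPoint.z };
    return { focalPoint, normalize(d) };
}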
There is an important implication of using perspective projection in medical imaging. In perspective projection, different parts of the object are magnified by different factors: parts close to the eye look bigger than parts further away. This implies that on a perspective image, it is not possible to compare object sizes or distances. An example of this is shown in Figure 12.
Figure 12 - Perspective magnifies parts of the dataset by different factors.
In (a), the image is rendered with parallel projection. The yellow marker shows that the two vessels do not have the same size. In (b), the image is rendered with perspective projection. The red marker shows that the same two vessels appear to have the same size.
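Under an idealized pinhole model (our simplifying assumption, not a formula from this paper), the depth dependence of the magnification is explicit:

$$s_{\text{proj}} = \frac{f}{z}\, s$$

where $s$ is the true size of a structure, $z$ its distance from the focal point, and $f$ the focal distance to the image plane. Two equal-sized vessels at depths $z_1 < z_2$ are therefore drawn with size ratio $z_2/z_1$, whereas in parallel projection $s_{\text{proj}} = s$ at every depth, which is why measurements are only meaningful on parallel projections.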

Although not suitable for measurement, perspective projection is useful in medical imaging for several reasons: it can simulate the view of an endoscope and the geometry of radiographic acquisition.
The geometry of radiographic acquisition is, by its nature, a perspective projection. Using a CT dataset, it is theoretically possible to reconstruct the radiograph of a patient from any position. This process is referred to as DRR (Digitally Reconstructed Radiography) and is used in Radiotherapy Planning. One of the technical difficulties of DRR is that the x-ray used in the CT scanner has a different energy (and hence different characteristics) compared to that used in x-ray radiography. The IAP processing server allows correction of those differences by using the opacity map as an x-ray absorption curve for each specific voxel. Figure 13 shows two examples of DRR.
Figure 13 - DRRs of a CT dataset.
To correct for the different x-ray absorption between CT x-rays and radiographic x-rays, the opacity curve has been approximated by an exponential function. The function is designed to highlight the bone voxels in the dataset, which absorb more in radiography than in CT.
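A minimal sketch of the attenuation along one DRR ray follows, assuming the opacity map is reused as a per-density absorption curve. The Beer-Lambert form and all names here are our illustration, not the IAP implementation.

#include <cmath>
#include <cstdint>
#include <vector>

// Accumulate attenuation along a DRR ray: I = I0 * exp(-sum(mu_i * ds)),
// where mu is read from an absorption curve indexed by voxel density.
// Shaping that curve exponentially emphasizes bone, as in the caption above.
float drrRayIntensity(const std::vector<uint16_t>& densitiesAlongRay,
                      const std::vector<float>& absorptionCurve,  // indexed by density
                      float stepLengthMm, float incidentIntensity)
{
    float pathIntegral = 0.0f;
    for (uint16_t d : densitiesAlongRay)
        pathIntegral += absorptionCurve[d] * stepLengthMm;   // sum of mu * ds
    return incidentIntensity * std::exp(-pathIntegral);
}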
The perspective projection can also simulate the acquisition from an endoscope.
This allows the fly-through of the dataset to perform, for example, virtual colonoscopy. Figure 14 shows two images generated from inside the CTA dataset.
Figure 14 - Perspective allows "fly-through" animations.
These images are frames from the CTA chest dataset. The white objects are calcification in the aorta. In (a), the aneurysm is shown from inside the aorta. In (b), the branch to the iliac arteries is shown. Refer to the IAP Image Gallery for the full animation.
Since perspective projection involves different optimizations than parallel projection, it has been implemented as a separate object, Pproj, which supports the same input and output connections as Cproj. The pipeline shown in this white paper can be used in perspective mode by simply switching Cproj with Pproj (although currently not all the functionality available in Cproj is available in Pproj).
Because of the sampling scheme implemented in the Perspective renderer, it is possible to obtain a high level of detail even when the images are magnified several times.
Figure 15 - Perspective renderer allows a high level of detail.
The perspective rendering engine is released as a separate dll on win32. The application can replace this dll with its own implementation as long as it is compliant with the same interface and uses the same data structure.

Claims (2)

What is claimed is:
1. A system for allowing fast review of scanned baggage represented by a 3D data set, the system comprising a scanning device producing a 3D data set of a piece of baggage, a computer (workstation) with a monitor and software as detailed below, a custom-made button box with a built-in track ball to control a pointer on the monitor, and a network connection between the scanning device and the computer to transfer the image data from the scanning device to the workstation.
2. A system for visualizing and extracting 3D objects of interest from scanned baggage represented by a 3D data set, the system comprising a scanning device producing a 3D data set of a piece of baggage, a computer (workstation) with a keyboard, pointing device, a monitor, and software, and a network connection between the scanning device and the computer to transfer the image data from the scanning device to the workstation.
CA002365062A 2001-12-14 2001-12-14 Fast review of scanned baggage, and visualization and extraction of 3d objects of interest from the scanned baggage 3d dataset Abandoned CA2365062A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002365062A CA2365062A1 (en) 2001-12-14 2001-12-14 Fast review of scanned baggage, and visualization and extraction of 3d objects of interest from the scanned baggage 3d dataset

Publications (1)

Publication Number Publication Date
CA2365062A1 true CA2365062A1 (en) 2003-06-14

Family

ID=4170843

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002365062A Abandoned CA2365062A1 (en) 2001-12-14 2001-12-14 Fast review of scanned baggage, and visualization and extraction of 3d objects of interest from the scanned baggage 3d dataset

Country Status (1)

Country Link
CA (1) CA2365062A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009094042A1 (en) * 2008-01-25 2009-07-30 Analogic Corporation Image combining
CN101933046B (en) * 2008-01-25 2012-10-03 模拟逻辑有限公司 Image combining
US8942411B2 (en) 2008-01-25 2015-01-27 Analogic Corporation Image combining
WO2016097168A1 (en) * 2014-12-19 2016-06-23 Thales Method for discrimination and identification of objects of a scene by 3-d imaging
FR3030847A1 (en) * 2014-12-19 2016-06-24 Thales Sa METHOD OF DISCRIMINATION AND IDENTIFICATION BY 3D IMAGING OBJECTS OF A SCENE
US10339698B2 (en) 2014-12-19 2019-07-02 Thales Method for discrimination and identification of objects of a scene by 3-D imaging
EP3223235A1 (en) * 2016-03-24 2017-09-27 Ecole Nationale de l'Aviation Civile Object definition in virtual 3d environment
CN107230253A (en) * 2016-03-24 2017-10-03 国立民用航空学院 Object definition in virtual 3d environment
US10438398B2 (en) 2016-03-24 2019-10-08 Ecole Nationale De L'aviation Civile Object definition in virtual 3D environment
CN114113172A (en) * 2021-12-23 2022-03-01 北京航星机器制造有限公司 CT security inspection method
CN114113172B (en) * 2021-12-23 2024-01-09 北京航星机器制造有限公司 CT security inspection method


Legal Events

Date Code Title Description
FZDE Discontinued