CA2365323A1 - Method of measuring 3d object and rendering 3d object acquired by a scanner - Google Patents

Method of measuring 3d object and rendering 3d object acquired by a scanner Download PDF

Info

Publication number
CA2365323A1
CA2365323A1 (application CA002365323A)
Authority
CA
Canada
Prior art keywords
rendering
dataset
volume
image
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002365323A
Other languages
French (fr)
Inventor
Vittorio Accomazzi
Harald Zachmann
Arun Menawat
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cedara Software Corp
Original Assignee
Cedara Software Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cedara Software Corp filed Critical Cedara Software Corp
Priority to CA002365323A priority Critical patent/CA2365323A1/en
Publication of CA2365323A1 publication Critical patent/CA2365323A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A system for measuring and rendering a 3D object acquired by a scanner is disclosed. The system comprises (i) an acquisition scanner, (ii) a network over which the acquired 2D images, representing cross-sections of the bag, are transmitted to a computer, and (iii) the computer, or workstation, where the images are displayed and measured.

Description

Application number / Numéro de demande: CA002365323A

Method of Measuring 3D Object and Rendering 3D Object Acquired by a Scanner
Field of the Invention

The present invention relates to a method for measuring a 3D object acquired by a scanner and for rendering a 3D object acquired by a scanner.
Summary and Advantages of the Invention

The present invention can be used to detect threats (guns, explosives, etc.) from the images acquired by a 3D security scanner. It can also be applied in the medical field. Security scanners are similar to CT scanners, but they are designed to scan checked-in baggage or carry-on baggage. The images produced are cross-sections of the bags. The pixel value (brightness) on the image represents the absorption of the X-ray by the material inside the bag.
Also, new developments in the security scanner allow the acquisition of the 2D cross-sections of the luggage, called slices. These images can be generated using computed tomography (CT), magnetic resonance (MR), or other 3D imaging technology. Stacking all the images acquired creates a 3D representation of the object scanned, also called Volume or dataset or 3D Image. In the past years, techniques have been developed for the direct visualization of volumes, and this technology is called "Volume Rendering"; see "Introduction to Volume Rendering" (Hewlett-Packard Professional Books), Barthold Lichtenbelt, Randy Crane, Shaz Naqvi.
In the security market, objects with a specific characteristic (i.e. X-ray absorption density) are considered threats only if they are bigger than a certain volume. For example, a gun is composed of metal objects with volume greater than 1000 mm3. The present invention can be deployed, for example, to measure the volume of all the metal objects detected by the security scanner, and so it can be used to efficiently identify whether further investigation of the bag is necessary.
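As a small worked illustration of the volume test described above, the physical volume of a detected object follows directly from its voxel count. The 1000 mm3 figure is the example threshold from the text; the voxel spacing values below are illustrative assumptions, not scanner specifications.

```python
# Sketch: estimate an object's physical volume from its voxel count.
# The voxel spacing is an assumed example, not a real scanner parameter.
def object_volume_mm3(voxel_count, spacing_mm=(1.0, 1.0, 1.5)):
    """Volume = number of voxels * volume of one voxel."""
    dx, dy, dz = spacing_mm
    return voxel_count * dx * dy * dz

# A metal object of 800 voxels at 1.0 x 1.0 x 1.5 mm spacing:
volume = object_volume_mm3(800)          # 1200.0 mm^3
needs_investigation = volume > 1000.0    # exceeds the example threat threshold
```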

The volume rendering techniques developed so far are designed to generate an image from a dataset, a view direction and a classification. The classification describes which materials are visible in the dataset and the color associated with each one of them. Fig. 6 shows the same dataset, rendered from the same view direction, with two different classifications. For more information, see "Introduction to Volume Rendering" (Hewlett-Packard Professional Books), Barthold Lichtenbelt, Randy Crane, Shaz Naqvi.
A conventional security scanner is designed to acquire only 2D images, and not 3D volumes. The problem in the past was that the security scanner was detecting only a 2D image of the bag, not volumes. In the medical field, where the CT scanner acquires a 3D image, the problem was that these scanners are not in continuous acquisition mode: the operator defines the area to acquire and then visualizes the result.
Advantages of the present invention are the ability to efficiently measure the number of objects in the bag and to efficiently measure the volume of each object in the bag. Further, it can also be used to trigger further analysis.
The present invention provides a large performance improvement with respect to the conventional rendering technique, since a 2D processing of the newly acquired slice is necessary rather than a 3D processing of a (potentially very large) dataset.
The improvements according to the present invention are achieved both in terms of: (1) memory usage: the algorithm requires only the previous image and the new slice to project; if shading is necessary, one more image is required; and (2) performance: the algorithm updates the image with a simple 2D post-processing of the newly acquired image, rather than an expensive post-processing of the entire dataset acquired so far.
The algorithm of the present invention can also be used in scenarios in which the dataset acquired has infinite dimension and/or cannot be kept in memory for processing.

The present invention also provides a technology to generate a Volume Rendering image based on a previously rendered image. More specifically, it describes how to update an image when the scanner acquires new slices.
A further understanding of the other features, aspects, and advantages of the present invention will be realized by reference to the following description, appended claims, and accompanying drawings.
Brief Description of the Drawings

Embodiments of the invention will now be described with reference to the accompanying drawings, in which:
Figures 1 to 10 illustrate basic principles of the present invention and several specific embodiments according to the invention.
Detailed Description of the Preferred Embodiment(s)

According to one aspect of the present invention, there is provided a system and method (algorithm) for efficiently measuring a 3D object acquired by a scanner.
The system according to one embodiment of the invention comprises (a) an acquisition scanner, (b) a network over which the acquired 2D images, representing the cross-sections of the bag, are transmitted to (c) a computer, or workstation, where the images are displayed/measured, as illustrated in Fig. 1.
The algorithm according to the invention can be implemented as part of the scanner (a) or in the computer (c) to trigger further action by the user, for example to display some image of the bag. For purposes of explanation, each object is identified with a different color.
A pixel for which the color has been defined (i.e. it is defined to which object it belongs) is called classified, with one exception: the white pixels are not yet assigned to an object but are still considered to be classified.
The algorithm identifies (classifies) one slice at a time based on the previous slice. When a new slice is acquired, it is classified by looking at the connectivity with the previous one. During the classification, the volume of each object is updated. Once the slice is fully classified, it is kept for the classification of the next one, and the previous one is discarded.
The algorithm of the invention comprises the following steps:
Step 1 - The slice, which is acquired, is thresholded. The threshold is selected in order to identify the material under investigation; for example, if the system is intended to measure metal objects, then the threshold for the metal has to be used. See Fig. 2.
Step 2 - The pixels set to white (inside the threshold) are checked against the pixels of the previous slice, which has been classified (each color represents a different object), according to the following rules:
a - If a white pixel has the same x and y coordinates as a classified pixel, then it belongs to that specific classification and has to be classified as such, with all the pixels which are connected to it. The volume of the object has to be increased accordingly. See Fig. 3.
b - If a classified pixel has the same x and y coordinates as a pixel of a different color object, then the object with the different colour has to be turned into the classified pixel's object and its volume added to the volume of the classified pixel. See Fig. 4.
Step 3 - The colors which appear in the previous image, but not in the current one, are considered fully processed and their volume can be estimated.
Step 4 - For each white pixel in the current image, a new classification (object) is created and all the connected pixels are classified. The volume has to be updated as well. See Fig. 5.
The pseudocode that represents the algorithm of the invention is as follows:

current_slice[][]     : 2D matrix of the slice acquired. Each entry is an intensity
                        value. Dimensions are nx and ny.
cl_current_slice[][]  : classification of the current slice. Each entry value is a
                        classification (object) or white or background. Dimensions are nx, ny.
cl_previous_slice[][] : classification of the previous slice. Each entry value is a
                        classification (object) or white or background. Dimensions are nx, ny.
cl_list[]             : list of all the classifications (objects) identified. Each entry is
                        a record with the following fields: color, volume, used. n_cl
                        represents the number of classifications in the list.

/* Initialization of cl_list: insert background and white. */
cl_list[0].colour = black      // background
cl_list[0].volume = 0
cl_list[0].used   = True
cl_list[1].colour = white      // to be classified
cl_list[1].volume = 0
cl_list[1].used   = True
n_cl = 2

while( current_slice = GetCurrentSliceFromScanner() ) {

    // Step 1
    cl_current_slice = Threshold( current_slice )

    // Initialize all the classifications as unused, in order to identify
    // efficiently which are used in this slice.
    for( n = 2; n < n_cl; n++ ) {
        cl_list[n].used = False
    }

    // Step 2
    for( y = 0; y < ny; y++ ) {
        for( x = 0; x < nx; x++ ) {
            if( cl_current_slice[x][y] != black ) {            // it is not background
                if( cl_previous_slice[x][y] != black ) {
                    if( cl_current_slice[x][y] == white ) {
                        // rule a
                        extend_classification( cl_current_slice, cl_previous_slice[x][y], x, y )
                        cl_list[cl_previous_slice[x][y]].used = True
                    } else {
                        // rule b
                        remove_classification( cl_current_slice, cl_previous_slice,
                                               cl_previous_slice[x][y], x, y )
                    }
                }
                // Note that if cl_current_slice[x][y] == white and
                // cl_previous_slice[x][y] == black, it is not processed here.
            }
        }
    }

    // Step 3
    for( n = 2; n < n_cl; n++ ) {
        if( cl_list[n].used == False ) {
            // This classification (object) has been fully measured.
            // The UI or detection algorithm should be notified.
            cl_list[n].volume = 0    // mark this entry as available
        }
    }

    // Step 4
    for( y = 0; y < ny; y++ ) {
        for( x = 0; x < nx; x++ ) {
            if( cl_current_slice[x][y] == white ) {
                color = create_classification( cl_list, n_cl )
                extend_classification( cl_current_slice, color, x, y )
            }
        }
    }

    cl_previous_slice = cl_current_slice
}

extend_classification( cl_slice, color, x, y ) {
    // Performs a 2D region growing on 'cl_slice', marking with 'color' the pixels
    // connected to (x, y). See (1) for examples of these algorithms.
    // Updates cl_list[color].volume with the list of pixels selected.
}

remove_classification( cl_slice1, cl_slice2, color, x, y ) {
    // Change all the pixels in 'cl_slice1' with color cl_slice1[x][y] into 'color'.
    // Change all the pixels in 'cl_slice2' with color cl_slice2[x][y] into 'color'.
    cl_list[color].volume += cl_list[cl_slice2[x][y]].volume
    cl_list[cl_slice2[x][y]].volume = 0    // mark unused
    cl_list[cl_slice2[x][y]].used = False
}

create_classification( cl_list, n_cl ) {
    // Identify an entry in cl_list in which volume is set to zero.
}

Threshold( slice ) {
    // Set all the pixels inside the threshold to white in the output slice.
}
It is noted that (1) the function remove_classification can be implemented in a much more efficient way by keeping a list of connected colors in cl_list[]; this is omitted for simplicity; (2) there are alternative ways to flag a classification as unused rather than setting its volume to zero; (3) the method is not restricted to any particular region-growing algorithm; and (4) the algorithm can be easily extended to support several non-overlapping thresholds, to detect, for example, the volume of metal objects and organic objects at the same time. See "Computer Graphics: Principles and Practice", Foley, van Dam, Hughes, Addison Wesley.
According to another aspect of the present invention, there is provided a method and system for efficiently rendering a 3D object acquired by a scanner.
The system comprises (a) an acquisition scanner, (b) a network over which the acquired 2D images, representing the cross-sections of the bag, are transmitted to (c) a computer, or workstation, where the images are displayed/measured, as depicted in Fig. 1.
The images are acquired by the scanner (a) while the luggage moves into it, and then they are transferred to the computer (c) through the network connection (b). The computer (c) updates the 3D image for the user's inspection.
The algorithm projects a newly acquired slice, and then composites it on top of the previous image.
According to one embodiment of the invention, the method comprises the following steps:
Step 1 - The newly acquired slice is projected using a specific classification. Possibly the classification can color-code different materials in the bag. See Fig. 7.
Step 2 - The image is shifted. See Fig. 8.
Step 3 - The projected slice is composited on top of the shifted image.
The result is the correct image of the entire volume acquired. See Fig. 9.
According to the invention, given a volume and a volume-rendered image of such volume, a new image of the volume is generated which includes the newly acquired slices. Fig. 10 illustrates this concept.

Let it be assumed that I is the Volume Rendered image of the Volume acquired so far, and S the slice - or slices - just acquired. The algorithm is described as follows:
1. Select a view matrix in which the newly acquired slices are not occluded (i.e. behind any previously acquired slice). See "Computer Graphics: Principles and Practice", Foley, van Dam, Hughes, Addison Wesley.
2. Compute the vector v that represents the direction of the projection of the slices in the image.
3. For each image acquired by the scanner, or in general for each set of slices for which the image has to be updated:
a. Project the slice S using the viewing matrix and a pre-defined classification. The projection can be accomplished with any volume-rendering technique, including, but not restricted to:

i. Shear Warp
ii. Ray Casting
iii. Texture Mapping.
The alpha (opacity) channel has to be retained after the projection.
b. Shift the image I by the x and y components of the vector v.
c. Composite the image generated in (a) into (b). The technique is the well-known alpha blending using the back-to-front accumulation method described in "Introduction to Volume Rendering" (Hewlett-Packard Professional Books), Barthold Lichtenbelt, Randy Crane, Shaz Naqvi.
It is noted that for the Maximum Intensity Visualization the same method is applicable: the max() operator has to be used instead of the alpha blending in 3.c. It is also noted that, if the visualization requires shading, a minor modification is necessary in order to shade the image properly, but the logic of the algorithm is the same.
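The shift-and-composite update of steps a-c can be sketched as follows. This is an assumed illustration, not the patent's code: it assumes premultiplied RGBA images, an integer (dx, dy) shift, and uses np.roll for the shift (which wraps at the border, a simplification).

```python
import numpy as np

def composite_over(front, back):
    """Back-to-front alpha blending: 'front over back' (premultiplied alpha)."""
    a_f = front[..., 3:4]
    return front + (1.0 - a_f) * back

def update_image(image, slice_rgba, shift_xy):
    """Steps b + c: shift the accumulated image I, then composite the
    newly projected slice over it (the new slice is never occluded)."""
    dx, dy = shift_xy
    shifted = np.roll(image, (dy, dx), axis=(0, 1))
    return composite_over(slice_rgba, shifted)

def update_image_mip(image, slice_proj, shift_xy):
    """Maximum-intensity variant: the max() operator replaces the blending."""
    dx, dy = shift_xy
    return np.maximum(slice_proj, np.roll(image, (dy, dx), axis=(0, 1)))
```

Each update touches only the 2D image and the newly projected slice, which is exactly the 2D-post-processing advantage claimed earlier.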
According to the present invention, there is provided an ability to generate an image of the acquired dataset very efficiently. Since the algorithm just keeps updating the same image, it can possibly be used to show the images as they are acquired by the scanner.
The present invention will be further understood by the additional descriptions A, B and C attached hereto.
While the present invention has been described with reference to specific embodiments, the description is illustrative of the invention and is not to be construed as limiting the invention. Various modifications may occur to those skilled in the art without departing from the true spirit and scope of the invention as defined by the appended claims.

IAP PrS Segmentation Architecture

Introduction

Imaging Application Platform (IAP) is a well-established platform product specifically targeted at medical imaging. IAP supports a wide set of functionality including database, hardcopy, DICOM services, image processing, and reconstruction. The design is based on a client/server architecture and each class of functionality is implemented in a separate server. This paper will focus on the image processing server (further referred to as the processing server or prserver) and in particular on the segmentation functionality.
Segmentation, or feature extraction, is an important feature for any medical imaging application. When a radiologist or a physician looks at an image, he/she will mentally isolate the structures relevant to the diagnosis. If a structure has to be measured and/or visualized by the computer, the radiologist or physician has to identify such structure on the original images using the software, and this process is called segmentation. For the purpose of this document, segmentation is a process in which the user (radiologist, technician, or physician) identifies which part of one image, or a set of images, belongs to a specific structure.
The scope of this white paper is to describe the tools available in the IAP processing server that can automate and facilitate the segmentation process. They are presented in terms of how they operate and how they can be used in combination with the visualization and measurement functionality. The combination of these functionalities allows the application to build a very effective system from the user's perspective, in which the classification and measurements are carried out with a simple click. This is referred to as Point and Click Classification (PCC).
Several segmentation tools have been published. The majority of them are designed to segment a particular structure in a specific image modality. In the IAP processing server we implemented algorithms which are proven and have a large applicability in clinical practice. The tools have been chosen in order to cover a large set of clinical requirements.
Since we recognize that we cannot provide the best solution for all segmentation needs, our architecture is designed to be extensible. If the application requires a specific segmentation algorithm, it is possible to extend the functionality supported by the processing server through a DLL or a shared library.
This white paper assumes that the reader is familiar with the IAP processing server architecture and has minimal experience with the IAP 3D visualization functionality. The reader can also refer to the "IAP PrS Image Processing" White Paper and the "PrS Rendering Architecture" White Paper.

Cedara Software Corp. Page 1
Glossary

Volume Rendering: A technique used to visualize three-dimensional sampled data that does not require any geometrical intermediate structures.

Surface Rendering: A technique used to visualize three-dimensional surfaces represented by either polygons or voxels that have been previously extracted from a sampled dataset.

Interpolation: A set of techniques used to generate missing data between known samples.

Voxel: A three-dimensional discrete sample.

Shape Interpolation: An interpolation technique for binary objects that allows users to smoothly connect arbitrary contours.

Multiplanar or Curved Reformatting: Arbitrary cross sections of a three-dimensional sampled dataset.

Binary Object (Bitvol): Structure which stores which voxels, in a slice stack, satisfy a specific property (for example, the voxels belonging to an anatomical structure).

ROI: Region of interest. Irregular region which includes only the voxels that have to be processed. It is very often represented as a bitvol.

Segmentation: Process which leads to the identification of a set of voxels in an image or set of images which satisfy a specific property.
The segmentation process can vary considerably from application to application. This is usually due to the level of automation and the workflow. This is related to how the application uses the tools rather than the tools themselves. The IAP processing server doesn't force any workflow. A general approach could be to automate the process as much as possible and allow the user to review and correct the segmentation.
The goal then is to minimize the user intervention rather than make the segmentation entirely automatic.
Overview
The IAP processing server supports both binary tools, which have been proven through the years as reliable, as well as advanced tools, with very sophisticated functionality.

The binary tools operate on a binary object; they do not use the original density from the images. These tools include binary region growing and extrusion.

The advanced tools operate on a gray level image. They typically allow a higher level of automation. These tools are based on gray level region growing.

Figure 1.0 shows a schematic diagram of how these tools operate all together.
Figure 1.0: Taxonomy of the segmentation tools supported by the IAP processing server.
The scope of each tool in Figure 1.0 is as follows:
  • Shape Interpolation: Reconstruct a binary object by interpolating an anisotropic stack of 2D ROIs. This functionality is implemented in the Recon object.
  • Extrusion: Generate a binary object by extruding a 2D shape in one direction. This functionality is implemented in the Ext3v object.
  • Thresholding: Generate a binary object by selecting all the voxels in the slice stack within a range of densities. This functionality is implemented in the Thr3 object.
  • Binary Region Growing: Connectivity is evaluated on the binary image. This functionality is implemented in the Seed3 object.
  • Gray Level Region Growing: Connectivity is evaluated on the gray level image, with no thresholding necessary before the segmentation process. This functionality is implemented in the Seg3 object.
The IAP processing server architecture allows these objects to be connected in any possible way. This is a very powerful feature since the segmentation is usually accomplished in several steps; Figure 1.1 shows an example.

Figure 1.1: Region growing after a normal threshold can be used to isolate an object very efficiently. Image A is the result of the threshold with the bone window in the CT dataset. The bed is removed with a single click of the mouse. The resultant binary object is used as a mask for the Volume Renderer.
Figure 1.2 shows the part of the pipeline which implements the segmentation in Figure 1.1. The Ss object is the input slice stack, and contains the original images. The Thr3 object is the object that performs the thresholding, and the Seed3 object performs the region growing on a point specified by the user.
Ss -> Thr3 -> Seed3

Figure 1.2: The pipeline used for the generation of the binary object in Figure 1.1.
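The Ss -> Thr3 -> Seed3 pipeline can be sketched as two chained functions. The object names come from the paper; the NumPy implementation, the density window, and the seed coordinates are illustrative assumptions.

```python
from collections import deque

import numpy as np

def thr3(stack, lo, hi):
    """Thresholding (Thr3): binary object of all voxels inside [lo, hi]."""
    return (stack >= lo) & (stack <= hi)

def seed3(bitvol, seed):
    """Binary region growing (Seed3): keep only the voxels 6-connected to
    the seed point, discarding everything else (e.g. the bed in Figure 1.1)."""
    out = np.zeros_like(bitvol)
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        if (0 <= z < bitvol.shape[0] and 0 <= y < bitvol.shape[1]
                and 0 <= x < bitvol.shape[2]
                and bitvol[z, y, x] and not out[z, y, x]):
            out[z, y, x] = True
            q.extend([(z + 1, y, x), (z - 1, y, x), (z, y + 1, x),
                      (z, y - 1, x), (z, y, x + 1), (z, y, x - 1)])
    return out

# Usage mirroring Figure 1.1: threshold with a bone window, then one click
# (the seed point) keeps only the structure connected to the skull.
# mask = seed3(thr3(ct_stack, 300, 3000), skull_point)
```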
The IAP processing server also supports several features for manipulating binary objects directly:

  • Erosion
  • Dilation
  • Intersection
  • Union
  • Difference
  • Complement

For example, as we'll see in the next section, it is necessary to dilate a binary object before using it as a clipping region for Volume Rendering.
For example, in the pipeline in Figure 1.3, the binary object has to be dilated before the connection to the Volume Rendering pipeline. This can be accomplished by simply adding a new Dv object at the end of the pipeline, as shown in Figure 1.3.
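A minimal sketch of the dilation step follows. The Dv object's exact kernel is not specified in the paper, so a one-voxel 6-connected (face-neighbor) dilation is assumed; border handling via np.roll (wrap-around) is a further simplification.

```python
import numpy as np

def dilate6(bitvol):
    """One step of binary dilation with a face-neighbor (6-connected) kernel.
    Note: np.roll wraps at the borders, which this sketch ignores."""
    out = bitvol.copy()
    for axis in range(bitvol.ndim):
        for shift in (1, -1):
            out |= np.roll(bitvol, shift, axis=axis)
    return out

# e.g. grown = dilate6(region_growing_result) before using it as a
# clipping region for the Volume Renderer, as in Figure 1.3.
```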
Figure 1.3: Pipeline in Figure 1.2 with the Dv object added, which will perform dilation on the result of the region growing.
The Binary Tools

The tools presented in this section implement the well-known techniques that have been used for several years in the medical market. Some of them, like Seed3, extend the standard functionality in order to minimize the user intervention.
Thresholding (Thr3)

Thresholding is one of the most basic tools and is often used as a starting point in order to perform more complicated operations, as shown in Figure 1.1. The Thr3 object reconstructs a binary object by selecting all the voxels in a specific density range in the slice stack. If the slice stack is anisotropic, the application can choose to generate the binary object using cubic or nearest neighbor interpolation.
Thr3 also supports an automatic dilation in the X and the Y direction.
This is useful in situations, like the one in Figure 1.3, where the binary object has to be used in the Volume Rendering pipeline.
Extrusion (Ext3v)

Extrusion projects a 2D shape along any direction. This feature is very powerful when it is used in conjunction with 3D visualization. In fact, it allows irrelevant structures to be eliminated very quickly and naturally, as shown in Figure 1.4.
Figure 1.4: Extrusion is a mechanism which works very well in conjunction with 3D visualization. The user draws a ROI which defines the region of the dataset in which he is interested. The data outside the region is removed.
Figures 1.4.A and 1.4.B show the application from the user's perspective.
The user outlines the relevant anatomy and only that area will be rendered. Figure 1.4.C shows the binary object that has been generated through extrusion. This object has been used to restrict the area for the volume renderer, and so eliminate unwanted structures.
Shape Interpolator (Recon)

The shape interpolator reconstructs a 3D binary object from a stack of 2D ROIs. The stack of 2D ROIs can be unequally spaced and present branching, as shown in Figure 1.5. The Recon object supports nearest neighbor and cubic interpolation kernels, which are guaranteed to generate smooth surfaces (see Figure 1.6). This functionality is used when the user manually draws some ROIs on the original slices or retouches the ROIs generated through a threshold.
Figure 1.6: The shape interpolation can be accomplished with cubic interpolation (A) or nearest neighbor interpolation (B).
Binary Connectivity (Seed3)
Connectivity is a well proven technique used to quickly and efficiently isolate a structure in a binary object. The user, or the application, identifies a few points belonging to the structure of interest, called seed points. All the voxels that are connected to the seed points are extracted, typically removing the background. This process is also referred to as "region growing".
Region growing is very often used to 'clean' an object that has been obtained by thresholding. Figure 1.1 shows an example of a CT dataset where the bed has been removed simply by selecting a point on the skull.
Figure 1.5: The shape interpolation process reconstructs a 3D binary object from a set of 2D ROIs, even if they are not equally spaced and include branching.
This functionality is very effective when combined with the Coordinate Query functionality of the prserver volume renderer. Coordinate Query allows the application to identify the 3D location in the stack when the user clicks on the rendered image. By combining these two tools, the entire operation of segmentation and clean-up can be done entirely in 3D, as shown in Figure 1.7. See the "PrS 3D Rendering Architecture" White Paper for more details on the Coordinate Query.
The Seed3 object implements a six-neighbor connectivity. It also supports the following advanced functionality in order to facilitate the identification of the structure:
1. Tolerance: When a seed point is not actually on the object, Seed3 will move it to the nearest voxel belonging to the binary object if the distance is less than the tolerance specified by the application. This functionality allows the application to compensate for rounding errors and imprecision from the user.
2. Filling: This functionality removes all the holes from the segmented object. It is sometimes used for volume measurements.
3. Small links: When a bitvol is generated using a noisy dataset, several parts of it could be connected by narrow structures. The Seed3 object allows the "smallest bridge" through which the region growing can grow to be specified. This functionality allows the system to be insensitive to noise. Figure 1.8 shows how this feature can extract the brain in an MR dataset.
Figure 1.7: Peripheral angio dataset. The Volume Rendering visualization of the vasculature also includes other unrelated structures (A). By just clicking on the vessel, the user can eliminate the background irrelevant structures (B).
Figure 1.8: MR dataset of the brain. The seed point is set in the brain. In (A) the region growing fails in extracting the brain since there are small connections from the brain to the skin. In (B) the brain is extracted because the small connections are not followed.
4. Disarticulation: This is the separation of different anatomical structures that are connected. The application can specify two sets of seed points, one for the object to be kept and one for the object to be removed. Seed3 will perform erosion until these two sets of points are no longer connected, and then perform a conditional dilation of the same amount as the erosion. This operation is computationally intensive. It works well if the bitvol has a well-defined structure, i.e. the regions to be separated do not have holes inside them, and narrow bridges link them. On the other hand, if the regions are thin and their thickness is comparable to the thickness of the bridges, then the result may not be optimal. Figure 1.9 shows how this feature can be applied to select half of the hip in a CT dataset.
Figure 1.9: In the binary volume in (A) the user sets one seed point to select the part to include (green) and one seed point to select the part to remove (red). The system identifies the part of the two structures with minimal connection and separates the structures there. (B) shows the result.
Gray Level Connectivity

The concept of connectivity introduced for the binary images can be extended to the gray-level images. The gray level connectivity between two voxels measures the "level of confidence" with which these voxels belong to the same structure. This definition leads to an algorithm which enables a more automated method for segmentation and reduces user intervention. Gray level connectivity tends to work very well when the structure under investigation has a narrow range of densities with respect to the entire dynamic range of the images.
The Basic Algorithm

The algorithm takes a slice stack as input and a set of seed points. For each voxel in the stack it calculates the "level of confidence" with which this voxel belongs to the structure identified by the seeds. Voxels far from the seeds or with a different density than the seeds have a low confidence value, whereas voxels close to the seed points and with similar density have high confidence values. Note that no thresholds are required for this process. From the mathematical point of view the "confidence level" is defined as the connectivity from a specific voxel to a seed point. The definition of connectivity from a voxel to a seed point according to Rosenfeld is

C(seed, voxel) = max over P(seed, voxel) of [ min over z in P(seed, voxel) of sigma(z) ]
where P(seed, voxel) is any possible path from the seed point to the voxel, and sigma(.) is a function that assigns a value between 0 and 1 to each element in the stack. In our application sigma(.) is defined as follows:

sigma(voxel) = 1 - | density(voxel) - density(seed) |
The connectivity is computed as:
C(seed, voxel) = 1 - min over P(seed, voxel) of [ max over z in P(seed, voxel) of | density(z) - density(seed) | ]
In simple terms the connectivity of a voxel to a seed point is obtained by:

1. Considering all the paths from the seed point to the voxel.
2. Labeling each path with the maximum difference between the seed point's density and the density of each voxel in the path.
3. Selecting the path with the minimum label value.
4. Setting the connectivity as 1.0 minus the label value.

For multiple seeds the algorithm in step 2 uses the average density of the seed points.
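As a sketch, the four steps above can be implemented without enumerating paths explicitly, using a Dijkstra-style best-first propagation of the path label (plain Python on a 2D slice with densities scaled to [0, 1]; the function name and data layout are illustrative, not the Seg3 API):

```python
import heapq

def connectivity_map(image, seeds):
    """Rosenfeld-style gray-level connectivity to a set of seeds.

    image : 2D list of densities scaled to [0, 1].
    seeds : list of (row, col) seed points.
    Returns a map of 1 - (min over paths of the max density difference
    along the path), computed by best-first propagation.
    """
    rows, cols = len(image), len(image[0])
    # For multiple seeds, the reference density is the average seed density.
    ref = sum(image[r][c] for r, c in seeds) / len(seeds)
    # cost[r][c] = smallest achievable path label (max density difference).
    cost = [[float("inf")] * cols for _ in range(rows)]
    heap = []
    for r, c in seeds:
        cost[r][c] = abs(image[r][c] - ref)
        heapq.heappush(heap, (cost[r][c], r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # The label of a path is the max difference along it.
                nd = max(d, abs(image[nr][nc] - ref))
                if nd < cost[nr][nc]:
                    cost[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return [[1.0 - cost[r][c] for c in range(cols)] for r in range(rows)]

# A bright structure (0.9) separated from another bright blob (0.8)
# by a dark gap (0.1): the gap caps the connectivity of the far blob.
img = [
    [0.9, 0.9, 0.1, 0.8],
    [0.9, 0.9, 0.1, 0.8],
    [0.1, 0.1, 0.1, 0.1],
]
cmap = connectivity_map(img, seeds=[(0, 0)])
```

Note that the far blob gets low connectivity even though its density is close to the seed's, because every path to it must cross the dark gap.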
The algorithm computes the connectivity values for each voxel in the stack and produces an output slice stack which is called the "connectivity map". Figure 1.10 shows a 2D example using an MR image and its connectivity map. The area in which the seed has been placed has connectivity values higher than the rest of the image, and so it appears brighter.
Figure 1.10: Image A shows an MR slice and the place where the seed point has been set. Image B shows the connectivity map.

Figure 1.11 shows a 3D example in which the connectivity map is rendered using MIP. In this example the dataset is an MR acquisition of the head, and several seed points have been set in the brain, which appears to be the brightest region.
Figure 1.11: MIP image of a connectivity map. In this example several seed points have been set in the brain of this MR dataset. The MIP image shows that the brain is the brightest area.
The connectivity map is thresholded at different values in order to extract the area of interest as a bitvol. Note that in this case the user needs to control only one threshold value. The connectivity map always has to be thresholded from the highest value (which represents the seed points) down to a lower one defined by the user. The user, by increasing the threshold, removes irrelevant structures and refines the anatomical structure where the seed has been planted. From the user perspective this method is quite natural and effective: the user visually browses the possible solutions interactively. Figure 1.12 shows an example of user interaction; the connectivity map shown in figure 1.11 is thresholded at increasing values until the brain is extracted.
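A minimal sketch of this interaction (illustrative Python, not the actual Thr3 object): extract the bitvol at a given threshold, where raising the threshold can only shrink the selection and lowering it can only grow the selection:

```python
def threshold_bitvol(cmap, t):
    # Voxels whose connectivity ("level of confidence") is at least t.
    return [[1 if v >= t else 0 for v in row] for row in cmap]

def volume(bitvol):
    # Number of selected voxels.
    return sum(sum(row) for row in bitvol)

# A hand-made connectivity map; in the real pipeline this comes from Seg3.
cmap = [[1.0, 1.0, 0.2],
        [1.0, 0.6, 0.2]]

tight = threshold_bitvol(cmap, 0.9)  # high threshold: only the seeded core
loose = threshold_bitvol(cmap, 0.1)  # low threshold: everything included
```

Browsing the solutions is just sweeping `t`, which is why a single control is enough.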
In figure 1.12 the binary object computed by thresholding the connectivity map is applied as a mask for the volume renderer. As mentioned in the previous section, a small dilation is necessary before the connection to the volume rendering pipeline. Figure 1.13 shows the complete pipeline.
(Figure 1.13: the complete pipeline — Seg3, Thr3, Vol, Cproj.)

Contrast Table

In order to optimize the segmentation in terms of accuracy and speed, the Seg3 object can use a contrast table which enhances the contrast between the anatomical structure under investigation and the rest of the anatomy.
The region growing process will operate on the image once it has been remapped with the contrast table. The connectivity will be calculated as follows:
Figure 1.12: The connectivity map shown in figure 1.11 is thresholded and the binary object is used to extract the anatomical feature from the original dataset. This process is done interactively.
C(seed, voxel) = 1 - min over P(seed, voxel) of [ max over z in P(seed, voxel) of | contrast_table(density(z)) - contrast_table(density(seed)) | ]
The application can take advantage of this functionality in several ways:

- Reducing the noise in the image.
- Increasing the accuracy of the segmentation by eliminating densities that are not part of the anatomy under investigation. For example, in a CTA dataset the liver doesn't contain intensities as high as the bone. Hence those densities can be remapped to zero (background) and excluded from the segmentation.
- Limiting user mistakes: if the user sets a seed point in a region which is remapped to a low value (as defined by the application), the seed point will be ignored. For example, if the user, with the intent to segment the liver in a CTA dataset, sets a seed point on the bone, it will not be considered during the region growing process.
The application is not forced to use the contrast table; when it is not used, the system will operate on the original density values. For example, the brain in figure 1.12 was extracted without a contrast table.
The application can expose this functionality directly to the user or, if appropriate, use the rendering settings used to view the images. In order to segment structures with high density, the window level that the user sets in the 2D view can be used as a contrast table.
The opacity curve used to render a specific tissue can be used as a remapping table. The user, in order to visualize the tissues properly, has to set the opacity to 100% for all the densities in the tissues, and then lower values for densities which also partially contain other tissues. So the opacity curve implicitly maximizes the contrast between the tissue and the rest of the anatomy.
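As an illustration of the remapping step (hypothetical density values and table, not the IAP API): a contrast table is just a lookup applied before the connectivity computation, and a seed falling on a density remapped to zero can be rejected:

```python
def apply_contrast_table(image, table):
    # Remap every density through the lookup table before computing
    # connectivity; densities mapped to zero are treated as background.
    return [[table[v] for v in row] for row in image]

def seed_valid(image, table, seed):
    # A seed planted on a density that remaps to zero is ignored.
    r, c = seed
    return table[image[r][c]] > 0

# Hypothetical 8-level CTA-like slice: 2 = soft tissue, 7 = bone.
image = [[2, 2, 7],
         [2, 2, 7]]
table = [0, 0, 5, 5, 5, 0, 0, 0]  # push bone (and air) to background
remapped = apply_contrast_table(image, table)
```

After remapping, the soft tissue is uniformly bright while the bone is indistinguishable from background, so the region growing cannot leak into it.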
The Extended Algorithm

The basic algorithm is a very powerful tool, as shown in figure 1.12. In order to extend its functionality, the Seg3 object implements two extensions:
1. Distance path. In some situations the structure which the user is trying to extract is connected with something else. For example, for the treatment planning of the AVM shown in figure 1.14, the feeding artery and the draining vein have to be segmented from the nodule of the AVM. The density of these three structures is the same (since it is the same blood which flows in all of them) and they are connected.
Figure 1.14: MR dataset of the head region showing an AVM. Image A is the MIP of the dataset; image B is the binary segmentation of the dataset. Binary region growing is not able to segment the three anatomical structures (veins, artery, AVM) required for the treatment planning.
In order to facilitate the segmentation of the structures, Seg3 can reduce the connectivity value of any voxel proportionally to the distance along the path from the seed point. Seeding the vein, as shown in figure 1.15, will cause voxels with the same density, but in the nodule of the AVM, to have lower connectivity values, and hence exclude them. Note that the distance is measured along the vessel, since the densities outside the vessel's range will be remapped to zero by the contrast table and not considered. The user browsing through the possible solutions will visually see the segmented area following the vein, as shown in figure 1.15.
Figure 1.15: Vein segmented at different threshold values. The user changing the threshold can visually follow the vein. In order to generate these images the pipeline in figure 1.13 has been used. Distance path is usually used in conjunction with the contrast table.
Figure 1.16 shows the example analyzed by Dr. Aldrige. In this case the functionality was used not just for segmentation, but for increasing the understanding of the pathology, following several vessels and seeing how they interact.

Figure 1.16: Example 1.14 analyzed by Dr. Aldrige. Dr. Aldrige used the distance path functionality to follow the vessels involved in the aneurysm and analyze their interaction.
As we mentioned in the section "Binary Connectivity", disarticulation can address a similar situation. However, disarticulation is mainly designed for bone structures and doesn't allow any level of control. Distance path is instead designed for vessels and allows the user to have fine control on the region segmented.
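A simplified sketch of the distance-path idea (illustrative Python; the real Seg3 object folds the penalty into the path cost itself): reduce each voxel's connectivity in proportion to its geodesic distance from the seed, measured only through voxels left non-zero by the contrast table:

```python
from collections import deque

def distance_penalized(conn, mask, seed, k):
    """Reduce connectivity by k per step of geodesic distance from the
    seed, walking only through voxels inside `mask` (densities outside
    the vessel's range are assumed already remapped to zero)."""
    rows, cols = len(conn), len(conn[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[seed[0]][seed[1]] = 0
    q = deque([seed])
    while q:  # plain BFS: geodesic distance in voxel steps
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and mask[nr][nc]
                    and dist[nr][nc] == float("inf")):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return [[max(0.0, conn[r][c] - k * dist[r][c])
             if dist[r][c] != float("inf") else 0.0
             for c in range(cols)]
            for r in range(rows)]

# A straight 1-row "vessel" with uniform connectivity 1.0: the farther
# a voxel is from the seed, the lower its penalized connectivity.
row = distance_penalized([[1.0] * 5], [[1] * 5], (0, 0), k=0.1)[0]
```

Sweeping the threshold then makes the selection "crawl" along the vessel away from the seed, which is the browsing behaviour shown in figure 1.15.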
2. Growth along an axis. In some protocols, like peripheral angiography, the vessel will follow mainly one axis of the dataset. The application can use this information to facilitate the segmentation process and force the region growing process to follow the vessel along the main axis, and so select the main lumen instead of the small branches. Figure 1.17 shows an example of this functionality.
Figure 1.17: Segmentation of a vessel in an MR peripheral angiography. Seg3 allows associating weights to each one of the axes; each weight represents an incremental reduction of the connectivity value, forcing the region growing to follow that axis.
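The axis weighting can be sketched as a per-step penalty in the same best-first propagation (illustrative Python showing only the penalty term, without the density component):

```python
import heapq

def axis_weighted_costs(shape, weights, seed):
    """Cumulative connectivity reduction when each step along axis i
    costs weights[i]. A low weight on the vessel's main axis and a
    higher weight on the others funnels the growth along the main
    lumen rather than into side branches."""
    rows, cols = shape
    wy, wx = weights  # penalty per step across rows / across columns
    cost = [[float("inf")] * cols for _ in range(rows)]
    cost[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed[0], seed[1])]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r][c]:
            continue
        for dr, dc, w in ((1, 0, wy), (-1, 0, wy), (0, 1, wx), (0, -1, wx)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + w < cost[nr][nc]:
                cost[nr][nc] = d + w
                heapq.heappush(heap, (d + w, nr, nc))
    return cost

# Free movement along the columns (weight 0.0), 0.3 penalty per row step.
cost = axis_weighted_costs((3, 4), (0.3, 0.0), seed=(1, 0))
```

Voxels along the favoured axis keep full connectivity; each sideways step eats into it, so side branches drop out of the solution first.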
Embedding the Knowledge

The benefit of the algorithm goes beyond the fact that the application doesn't have to set a priori thresholds. The application can embed the knowledge of the structure that the user is segmenting in several ways:
1. As presented in the previous section, the contrast table is the simplest and most effective way for the application to guide the region growing.
2. The number of densities expected in the object can be used to guide the region growing. Note that the process requires only the number of densities, not a specification of which densities are included. The threshold in the connectivity map identifies the number of connectivity values included in the solution, and hence the number of densities, as defined by the C(seed, voxel) formula. Note that when the distance path or the growth along an axis is used, voxels with the same contrast value can have different connectivity according to their distance to the seed point.
3. The volume size (as number of voxels) of the object can be used to guide the region growing. The volume size of the object can be simply measured by querying the histogram of the connectivity map and adding all the values from the threshold to the maximum value.
4. The relative position of the seed points can be used to guide the application in forcing the region growing process to follow a particular axis. For example, in the dataset in figure 1.17 the user will set several seed points along the Y axis. Evaluating the displacement of the points in the XZ plane, the application can estimate how much the vessel is following the Y axis, and so how much the region growing has to be bound to it.
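The volume estimate of point 3 can be sketched directly from the histogram (illustrative Python, not the IAP API):

```python
from collections import Counter

def volume_at_threshold(cmap, t):
    # Sum the connectivity-map histogram from the threshold up to the
    # maximum value: the result is the object volume in voxels.
    hist = Counter(v for row in cmap for v in row)
    return sum(n for v, n in hist.items() if v >= t)

# A hand-made connectivity map standing in for the Seg3 output.
cmap = [[1.0, 1.0, 0.4],
        [1.0, 0.7, 0.2]]
vol = volume_at_threshold(cmap, 0.7)
```

Because the histogram is computed once, the volume for every candidate threshold is available without re-running the segmentation.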
The information gathered with the previous 4 points can be used by the application for two different purposes:

1. Optimize for performance. The time required by the Seg3 object is proportional to the number of voxels selected. Avoiding the inclusion of unwanted structures will speed up the process. For example, in the protocol used for the dataset in figure 1.12, the volume of the brain cannot be more than 30% of the entire volume, since the whole head is in the field of view. So the first solutions might not even be included in the connectivity map, since the region growing would have been stopped before.

2. Identify the best solution, the one that most likely is what the user is looking for. This solution can be proposed as the default.

More specifically, the previous information can be used for these two purposes in the following way.
Contrast Table
- Optimize for performance / Identify the best solution: Setting to zero the densities which are guaranteed not to belong to the structure to segment will improve performance and also the quality of the segmentation, by reducing the number of possible solutions.

Number of Densities
- Optimize for performance: The Seg3 object accepts as an input the number of densities to include in the solution. It will select the closest densities after the contrast table has been applied. Once these densities have been included, the process will stop.
- Identify the best solution: The threshold set in the connectivity map is actually the number of densities to be included in the solution after the contrast table has been applied.

Volume Size
- Optimize for performance: The Seg3 object accepts as an input the number of voxels to include.
- Identify the best solution: This value can be used to limit the threshold in the connectivity map. Querying the histogram of the connectivity map, the application can estimate the volume of the object segmented for each threshold.

Relative position of the input seed points
- Optimize for performance: Constraining the region growing process will indirectly reduce the number of voxels to include. Using this functionality will increase the cost associated with the evaluation of each voxel.
- Identify the best solution: Not applicable.
i The ap~~lications is expected to use some conservative values for the volume size and number of densities to stop the region grovring process, and the realistic value to select the default solution.
The ability to embed the knowledge of the object under investigation makes gray level region growing well suited for being protocol-driven.
For each protocol the application can define a group of presets which target the relevant anatomical structures.
Binary versus Gray Level Region Growing

The basic algorithm as presented in the previous section can be proven to be equivalent to a binary region growing where the thresholds are known in advance. So this process doesn't have to be validated for accuracy, since the binary region growing is already in use right now.
Avoiding a priori knowledge of the threshold values has a major advantage for the application:

1. The number of solutions that the user has to review is limited and pre-calculated. Requiring the user to set only one value, for the selection of the solution, means that the user has to evaluate (at worst) 256 solutions for an 8 bit dataset, while using the binary region growing the user would have to evaluate 256*256 = 65536 solutions, since all the combinations of the low and high threshold have to be potentially analyzed.
2. Finding the best threshold is not natural from the user's perspective.
Figure 1.18 shows a CTA of the abdominal region in which the Aorta has been segmented. To obtain this result the user has seeded the Aorta with the settings shown in figure 1.18.A.
Figure 1.18: Image A shows the Aorta extracted from the dataset shown in image B. In this case only one seed point was used.
In order to obtain the same result with the binary region growing, the user has to manually identify the best thresholds for the Aorta, which are shown in figure 1.19, and then seed it. Looking at figure 1.19.A, it is not clear that these are the best settings for the segmentation, and so they can be easily overlooked.
Figure 1.19: Threshold settings necessary to extract the Aorta as in figure 1.18. The image A appears with several holes, and it is not clear whether the Aorta is still connected with the bone.
The threshold set by the user can be dictated by the volume of the object rather than the density values.
Our experience shows that the quality of the result achievable with this functionality is not achievable with normal thresholding. Even in situations in which the thresholds are known in advance, it is preferable to use this information as a contrast table, and avoid binary segmentation.
Advanced Usage

The gray level region growing reduces the time to perform the segmentation from the user perspective. It guides the user in the selection of the best threshold for the structure under investigation.
In the previous section we have been using the connectivity map for the extraction of the binary object using the pipeline in figure 1.13. In some situations it could be valuable to use the connectivity map to enhance the visualization. The connectivity map tends to have bright and uniform values in the seeded structure and darker values elsewhere. This characteristic can be exploited in the MIP visualization to enhance the vessels in MR and CTA datasets. Figure 1.20 shows an example of this application: image 1.20.A is the MIP of an MR dataset, while 1.20.B is the MIP of the connectivity map. Figure 1.21 shows the MIP of the dataset in which the connectivity map and the original dataset have been averaged, compared with the MIP of the original dataset in the upper left corner. In this case it is visible that the contribution of the connectivity helps to suppress the background values, enhancing the vessels.
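A toy sketch of this enhancement (illustrative Python on a hand-made one-slice stack): average the dataset with its connectivity map, then take the MIP:

```python
def mip(stack):
    # Maximum Intensity Projection along the slice axis of a stack.
    slices, rows, cols = len(stack), len(stack[0]), len(stack[0][0])
    return [[max(stack[s][r][c] for s in range(slices))
             for c in range(cols)] for r in range(rows)]

def average(dataset, cmap):
    # Voxel-wise average of the original densities and the connectivity
    # map: the map is dark in the background, so the average suppresses
    # background while preserving vessel detail.
    return [[[0.5 * (dataset[s][r][c] + cmap[s][r][c])
              for c in range(len(dataset[0][0]))]
             for r in range(len(dataset[0]))]
            for s in range(len(dataset))]

# One 2x2 slice: a vessel voxel (0.8) next to bright background (0.7).
dataset = [[[0.8, 0.7],
            [0.2, 0.1]]]
cmap    = [[[1.0, 0.0],      # connectivity high only on the seeded vessel
            [0.0, 0.0]]]
enhanced = mip(average(dataset, cmap))
plain    = mip(dataset)
```

In the enhanced MIP the vessel is boosted while the equally bright background is halved, which is the effect visible in figure 1.21.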
Figure 1.20: Image A is the MIP of an MR peripheral dataset. Image B shows the connectivity map of the same region, where the main vessel has been seeded.

Figure 1.21: MIP of the dataset obtained averaging the connectivity map and the original dataset shown in figure 1.20. In the upper left corner the MIP of the original dataset is superimposed. It is visible that the averaging helps in suppressing the background density while preserving the details of the vessel.
In some situations it is not necessary to have the user directly set the seeds for the identification of the anatomical structures. The seed points can be identified as the result of a threshold on the images under investigation. Note that the threshold is necessary to identify only some points in the structure, not the entire structure, so the application can use very conservative values. For example, in a CTA a very high threshold can be used to generate some seed points on the bone. In an MRA a very high threshold can be used to generate seed points on the vessels. The seed points in figures 1.20 and 1.21 have been generated using this method. Figure 1.22 shows an example of this technique.
Seg3 supports this technique since it is designed to accept a bitvol as an input for the identification of the seed points. The bitvol can be generated by a threshold, and edited automatically or manually.
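A sketch of the automatic seed generation (illustrative Python with hypothetical density values): a conservative high threshold yields a few points guaranteed to be inside the structure:

```python
def seeds_from_threshold(image, t):
    # A very conservative (high) threshold marks a few points that are
    # certain to lie inside the structure; they become the seed bitvol.
    return [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row) if v >= t]

# Hypothetical CTA-like slice: the bone voxels are the brightest.
image = [[100, 900, 300],
         [200, 950, 250]]
seeds = seeds_from_threshold(image, 800)
```

Only a handful of interior points are produced; the region growing then recovers the full structure from them, so the threshold never needs to capture the whole object.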
Figure 1.22: Image A shows the seed points generated in a CTA dataset of the neck region for the identification of the bone. Image B shows the bone segmented from the same dataset. In this example the user intervention is minimized to only the selection of the best threshold.

Visualization

Visualization is the process that usually follows the segmentation. It is used for the validation of the segmentation, as well as for correction through the tools presented in the previous section.
The IAP processing server supports two rendering engines that can be used for this task: a very sophisticated Volume Renderer and a Solid Renderer. Please refer to the "PrS 3D Rendering Architecture" White Paper for a detailed description of these two engines. This section will focus mainly on how these engines deal with binary objects.
Volume Rendering

Volume Rendering allows the direct visualization of the densities in a dataset, using opacity and colour tables, which are usually referred to as classification.
The IAP processing server extends the definition of classification. It allows defining several regions (obtained by segmentation) and associating a colour and opacity to each one of them. We refer to this as local classification, since it allows specifying colour and opacity based on the voxel's density and location. It is described in detail in the "Local Classification" section of the "PrS 3D Rendering Architecture" White Paper.
Local classification is necessary in situations in which several anatomical structures share the same densities, a situation extremely common in medical imaging. For example, in figure 1.18 the density of the Aorta appears also in the bone due to the partial volume artifact.
So the goal of an application using Volume Rendering as a primary rendering tool is to classify the dataset, not necessarily to segment the data. The goal is to allow the user to point at an anatomical structure on the 3D image and have the system classify it properly. This is the target of the "Point and Click Classification (PCC)" developed in Cedara's applications, which is based on the tools and techniques presented in this white paper.
As we'll describe in this section, segmentation and classification are tied together.
Segmentation as a Mask
The IAP volume renderer uses a binary object to define the area for each classification. When an application classifies an anatomy that shares densities with other anatomical structures, it has to remove the ambiguities on the shared densities by defining a region (binary object) which includes only the densities of the anatomy.
The binary object has to loosely contain all the relevant (visible) densities of the anatomy; it doesn't have to define the precise boundary of the anatomy. It is supposed to mask out the shared densities which do not belong to the anatomy. The opacity function allows the Volume Renderer to visualize the fine details in the dataset. The section "Local Classification" of the "PrS 3D Rendering Architecture" White Paper describes this concept as well.
Figure 2.0 shows an example of this important concept. 2.0.A is the rendering of the entire dataset, where the densities of the brain are shared with the skin and other structures. Although the dataset in 2.0.A has a very detailed brain, it is not visible, since the skin is on top of it. Using the shape interpolation the user can define the mask 2.0.B which loosely contains the brain; this mask removes the ambiguities of which are the densities of the brain and allows the proper visualization of the brain, 2.0.C.
Figure 2.0: MR dataset in which the brain has been extracted. The dataset A has been masked with the binary object B to obtain the visualization of the brain C. In this case Shape Interpolation was used to generate the mask.
The benefits of this approach are two:

1. The definition of the mask is typically time-effective; even in the case of figure 2.0, where the mask is created manually, it takes about 1-2 minutes for a trained user.

2. The mask doesn't have to precisely define the object; small errors or imprecision are tolerated. In figure 2.0 the user doesn't have to outline the brain in fine detail, but rather to outline where it is present on a few slices.
Opacity and Segmentation

The segmentation settings which are used for the definition of the binary object (mask) are related to the opacity used for the visualization. For example, in figure 2.0, if the user lowers the opacity enough he will visualize the mask 2.0.B instead of the brain 2.0.C. This happens when the segmentation is based on some criteria and the visualization on different ones.
Figure 2.1 shows an example in which the user clicks on the skull in order to remove it from the bed in the background. The application uses a region growing based on the dataset thresholded by the opacity, and the result is shown in figure 2.1.B (the pipeline in figure 1.3 was used). The mask generated contains the skull of the dataset, only the bone densities connected to the seed point. If the user lowers the opacity he will eventually see the mask itself, 2.1.C.
Figure 2.1: In order to identify the object selected by the user, the application can threshold the dataset, based on the opacity, and apply a binary region growing, as shown in image B. This method will generate a mask C which is dependent on the opacity used for the threshold.

In general, if a mask has been generated using a given threshold, it can only be used with a classification in which the densities outside that threshold are set to zero.
There is a very simple method to get around this limitation if necessary. The mask can be regenerated each time the opacity is changed. This will have a behavior more natural from the user's perspective, as shown in figure 2.2.
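A toy sketch of this regeneration (illustrative Python; a plain threshold stands in for the full region growing): derive the threshold from the current opacity curve and rebuild the mask whenever the curve changes:

```python
def visible_threshold(opacity):
    # Lowest density whose opacity is non-zero; everything below it is
    # invisible and should be excluded from the mask.
    for d, o in enumerate(opacity):
        if o > 0:
            return d
    return len(opacity)

def mask_for_opacity(image, opacity):
    # Recompute the binary mask from the current opacity curve: the
    # "regenerate on every opacity change" behaviour of pipeline 2.3.
    t = visible_threshold(opacity)
    return [[1 if v >= t else 0 for v in row] for row in image]

image = [[0, 3, 5],
         [1, 4, 6]]
opaque_high = [0, 0, 0, 0, 1, 1, 1]  # only densities >= 4 visible
mask_a = mask_for_opacity(image, opaque_high)
opaque_low = [0, 0, 1, 1, 1, 1, 1]   # opacity lowered to density 2
mask_b = mask_for_opacity(image, opaque_low)
```

Recomputing keeps the mask consistent with what is visible, at the price of extra work on each opacity change, as noted for pipeline 2.3 below.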
Figure 2.2: The opacity mask can be recomputed at each opacity change. Image A shows the original opacity settings; image B shows the result of the thresholding and seeding of the bone structure. Once the opacity is lowered from A, the mask is recomputed, and the result is shown in image C.
The pipeline used in 2.2 is shown in figure 2.3; the difference with the pipeline used in 1.3 is that the connection between the pvx opacity object and the Thr3 object is kept after the mask is generated.
Figure 2.3: Keeping the connection of the Thr3 object with the opacity pvx allows regeneration of the mask on the fly.
Pipeline 2.3 will resolve the limitation imposed by the threshold used for the region growing, but it will trigger a, possibly expensive, computation for each opacity change. It will hence limit the interactivity of this operation. The performance of rotation and colour changes will remain unchanged.
Normal Replacement

As explained in the "PrS 3D Rendering Architecture" White Paper, the shading is computed using the normal for each voxel. The normal is approximated using the central difference or Sobel operators, which utilize the neighbor densities of the voxel. Once the dataset is clipped with a binary object, the neighborhood of the voxels on the surface changes, since the voxels outside the binary object are not utilized during the projection. So the normal of these voxels has to be recomputed.
Figure 2.4 shows why this operation is necessary. When the bitvol clips in a homogeneous region, like in 2.4.A, the normals have several directions, and so the cut surface will look uneven with several dark points. Replacing the normal will make the surface look flat, as expected by the user.
The replacement of the normal is necessary when the application uses the binary object to cut the anatomy, as in figure 2.4, not when it uses it to extract the anatomy as in figures 2.0 or 2.1.
Since the same binary object can be used for both purposes at the same time, the IAP renderer will replace the normal all the time. In situations in which the binary object is used to extract some feature, the application can simply dilate the binary mask to avoid the normal replacement. In this situation the mask is based on a threshold or gray level region growing, and the dilation will guarantee that the boundary of the mask falls on transparent voxels and is hence invisible. A dilation of 2 or 3 voxels is suggested.
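The suggested dilation can be sketched as an iterated 4-connected growth (illustrative 2D Python; the IAP operates on 3D bitvols):

```python
def dilate(bitvol, steps):
    """Dilate a 2D binary object by `steps` voxels (4-connected).
    Growing the extraction mask by 2-3 voxels pushes its boundary onto
    transparent voxels, so the replaced normals are never visible."""
    rows, cols = len(bitvol), len(bitvol[0])
    cur = [row[:] for row in bitvol]
    for _ in range(steps):
        nxt = [row[:] for row in cur]
        for r in range(rows):
            for c in range(cols):
                if cur[r][c]:
                    # Mark the 4-neighbourhood of every set voxel.
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols:
                            nxt[nr][nc] = 1
        cur = nxt
    return cur

mask = [[0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
grown = dilate(mask, 2)
```

Each pass grows the object by one voxel in every axis direction, so two passes reach every voxel within Manhattan distance 2 of the original mask.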
Note that in the situation in which a binary object is used for extracting the anatomy and cutting at the same time, it is generated as the intersection of binary objects. The binary object used for the extraction has to be dilated before the intersection. In this way the quality of the final render is guaranteed while allowing the user to edit the object. Figure 2.5 shows an example of this scenario.
Figure 2.5: Image A shows the skull, which has been extracted as described in 2.1. This object is cut using extrusion, as shown in 2.4. The bitvol for the extraction is dilated, while the bitvol extruded is not. Image B shows the final binary object superimposed on the image A. The cutting area is exactly as defined by the user, while the mask is dilated to guarantee the image quality.
For unshaded volume rendering the dilation is not necessary, and it will not affect the final quality.
Multi Classifications

As presented in the "PrS 3D Rendering Architecture" White Paper, the IAP processing server allows the definition of several classifications in the same dataset, each one associated with its own binary object.
The binary objects can be overlapping, and so several classifications can be defined in the same location. This represents the fact that in the dataset one voxel can contain several anatomical structures, and it is in the nature of the data. This is also the same reason for which these structures share densities. The next section will analyze this situation in detail, since it is relevant for the measurements.
The IAP processing server supports two policies for the overlapping classifications, and the application can extend them with a customized extension.
Surface Rendering

The Surface Rendering engine renders directly the binary object. The IAP processing server supports the visualization of several objects, each one with its own rotation matrix. Other features include texture mapping of the original gray level, and depth shading. Please refer to the "PrS 3D Rendering Architecture" White Paper for a full description of the functionality.
Measurements and Segmentation

One of the most important functions in any medical imaging application is to quantify the abnormality of the anatomy. This information is used to decide on the treatment to apply and to quantify the progress of the treatment delivered.
The IAP processing server supports a large number of 2D and 3D measurements. In this white paper we'll describe how the measurements can be used in conjunction with segmentation.
The measurement model has to follow the visualization model in order to be consistent with it and measure what the user is actually seeing: binary measurements for Surface Rendering, or gray level measurements for Volume Rendering.
In this white paper we'll focus on surface and volume measurements, since they are the most commonly used. For a complete list of the measurements supported, please refer to the Meas3 man page.
Definition of the Measurements

During the sampling process the object is not necessarily aligned with the main axes. This causes some voxels to be partially occluded; in other terms, the volume sampled by such a voxel is only partially occupied by the object being sampled. This is known as the "Partial Volume Artifact", and it causes the object to span across voxels with several different densities.
Figure 2.0 shows this situation graphically. Object A is next to object B. dA is the density of a voxel 100% filled with object A, which we'll also consider homogeneous. dB is the density of a voxel completely filled with object B. We'll also assume in this example that dA < dB and that the value of the density of the background is zero.
Figure 2.0 - In this example there are two objects, A and B, which have been sampled along the grid shown in the picture.
The voxels included by the object A can be divided in three different regions:
- Red Region: voxels completely covered by object A. They have density dA.
- Yellow Region: voxels partially covered by object A and the background.
- Blue Region: voxels covered by objects A and B.
Since in the scanning process the density of a voxel is the average density of the materials in the volume covered by the voxel, we can conclude that the yellow densities have values less than dA, and the blue range greater than dA. The picture also highlights the green area: voxels partially occupied by material B. The range of densities of the green voxels overlaps with the red, yellow and blue. Graphically, the distribution of the densities is shown in figure 2.1.

Figure 2.1 - The yellow region has voxels partially occluded by object A and the background, hence the density will be a weighted sum of dA and the background density, which is zero. Similarly, the blue region has voxels with densities between dA and dB. The green region has voxels partially occluded by object B and the background, hence they have the full range of densities.
Depending on the model used by the application (binary or gray level), the volume and the surface of the object A can be computed according to the following rules:
Volume:
- Gray Level: for each voxel in the dataset, the volume covered by the object A has to be added.
- Binary: the number of voxels of the binary object representing the object A are counted.

Surface:
- Gray Level: Volume Rendering doesn't define any surface for the object, so this measurement is not applicable.
- Binary: the voxels at the boundary of the binary object representing the object A are counted.
Gray Level

Let us assume that with the previous segmentation tools we can identify all the voxels in which object A lays, even partially. Figure 2.2 shows the object and also the histogram of the voxels in this area.
Figure 2.2 - The outlined area represents the voxels identified by the segmentation process. The histogram of the voxels in this area is shown on the left. The difference between this histogram and the one in figure 2.1 is that the voxels in the green region are not present.
The difference between the histograms in figures 2.2 and 2.1 is that the voxels of the green area are removed. As described in the section "Segmentation as a Mask", this corresponds to removing the densities which are shared across the objects and do not belong to the object segmented.
Note that inside the mask the density of each voxel represents the occlusion (percentage) of the object A in the volume of the voxel, regardless of its location.
To correctly visualize and measure the volume of the object A we have to define, for each density, the percentage of the volume occupied. Since the density is the average of the objects present in the voxel, the estimate is straightforward:

1. For the voxels in the yellow region the occlusion is simply density/dA, since the density of the background is zero.
2. For the voxels in the blue region the occlusion for a density is (dB - density)/(dB - dA).
Setting the opacity according to these criteria guarantees good quality of the volume rendered image. This opacity curve is shown in figure 2.3.
Figure 2.3 - Opacity curve for the voxels in the segmented region. The opacity for each density represents the amount of object A in the region covered by the voxel.
So in order to measure the volume, the application can request the histogram of the density values in the segmented area and weigh each density with the opacity:

(*) Volume = sum over all densities d of histogram(d) * opacity(d)

As explained in the sections "Opacity and Segmentation" and "Gradient Replacement", the application should dilate the bitvol if it is generated through a threshold and region growing. This operation will include semi-transparent voxels, and so it will not affect the measurement of the volume as defined in (*).

It is not really possible to measure the surface of the object, since its borders are not known. However, looking at picture 2.2, it is clear that it can be estimated with the yellow and blue areas, since these are the voxels in which the border of the object lays.
The reader can find more information and a more detailed mathematical description in "Volume Rendering", R. A. Drebin et al., Computer Graphics, August 1988.
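As an illustration, the rule (*) and the two occlusion estimates above can be sketched numerically. This is only a sketch of the example's two-object model, not the IAP API; the function names, and the assumptions of a zero background density and homogeneous objects, come from the example above.

```python
import numpy as np

def occlusion(d, d_a, d_b):
    """Estimated fraction of object A in a voxel of density d.
    Assumes background density 0, homogeneous objects, d_a < d_b."""
    d = np.asarray(d, dtype=float)
    occ = np.zeros_like(d)
    yellow = (d > 0) & (d <= d_a)              # A mixed with background
    blue = (d > d_a) & (d <= d_b)              # A mixed with B
    occ[yellow] = d[yellow] / d_a
    occ[blue] = (d_b - d[blue]) / (d_b - d_a)
    return occ

def volume_estimate(segmented_densities, d_a, d_b, voxel_volume=1.0):
    """Volume = sum over densities of histogram(d) * opacity(d), as in (*)."""
    values, counts = np.unique(np.asarray(segmented_densities, dtype=float),
                               return_counts=True)
    return voxel_volume * float(np.sum(counts * occlusion(values, d_a, d_b)))
```

For example, with dA = 100 and dB = 200, the densities [50, 100, 100, 150] contribute 0.5 + 1 + 1 + 0.5 = 3 voxel volumes.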
Binary

To segment an object the application typically sets two thresholds; depending on those, some parts of the yellow and blue regions can be included or excluded by the segmentation. Figure 2.4 shows an example of this situation. The area of the histogram between the two thresholds is the volume of the object created.
Figure 2.4 - Histogram of the densities with the two segmentation thresholds.
The surface of the object can be simply estimated as the number of voxels on the boundary of the binary object.
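This boundary count can be sketched in a few lines. The following is an illustrative pure-numpy implementation (not the IAP/Meas3 API) that marks a voxel as surface when at least one of its six face neighbours lies outside the object:

```python
import numpy as np

def boundary_voxel_count(bitvol):
    """Estimate the surface of a binary object as the number of voxels
    with at least one 6-connected neighbour outside the object."""
    b = np.asarray(bitvol, dtype=bool)
    padded = np.pad(b, 1, constant_values=False)
    interior = padded[1:-1, 1:-1, 1:-1].copy()
    # a voxel is interior only if all six face neighbours are set
    for axis in range(3):
        interior &= np.roll(padded, 1, axis)[1:-1, 1:-1, 1:-1]
        interior &= np.roll(padded, -1, axis)[1:-1, 1:-1, 1:-1]
    return int(np.count_nonzero(b & ~interior))
```

For a solid 3x3x3 cube every voxel except the centre touches the outside, so the count is 26.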
IAP Object Model

The example described in the previous section explains how the application can measure volume and surface on the binary and gray level objects. In the real case scenario there are several objects involved and overlapping, and they usually don't have constant densities. However, the same principle is still applicable to obtain an estimate of the measurements.

The IAP processing server, with the Meas3 object, supports all the functionality required to perform the measurements described in this section. Meas3 computes the histogram, average value, max, min and standard deviation of the stack. If a binary object is connected to it, the measurements will be limited to the voxels included.
Meas3 also computes the volume and surface of binary objects; it supports two different policies for the surface measurements:

1. Edges: measure the perimeters of each plane in the binary object.
2. Voin: count the voxels in the bitvol which have at least one neighbour not belonging to the bitvol (i.e., the voxels on the surface of the bitvol).
Detailed Description

"PrS 3D Rendering Architecture - Part 1"

PrS 3D Rendering Architecture White Paper

Vision

Imaging Application Platform (IAP) is a well-established platform product specifically targeted at medical imaging. Its goal is to accelerate the development of applications for medical and biological applications. IAP has been used since 1991 to build applications ranging from review stations to advanced post-processing workstations. IAP supports a wide set of functionality including database, hardcopy, DICOM services, image processing, and reconstruction. The design is based on a client/server architecture, and each class of functionality is implemented in a separate server. All the servers are based on a data reference pipeline model; an application instantiates objects and connects them together to obtain a live entity that responds to direct or indirect stimuli. This paper will focus on the image processing server (further referred to as the processing server or PrS) and, in particular, on the three-dimensional (3D) rendering architecture.
Two different rendering engines are at the core of 3D in IAP: a well proven and established solid renderer, and an advanced volume renderer, called Multi Mode Renderer (MMR). These two renderers together support a very large set of clinical applications. The integration between these two technologies is very tight. Data structures can be exchanged between the two renderers, making it possible to share functionality such as reconstruction, visualization and measurement. A set of tools is provided that allows interactive manipulation of volumes: variation of opacity and color, cut surfaces, camera, light positions and shading parameters. The motivation for this architecture is that our clinical exposure has led to the observation that there are many different rendering techniques available, each of which is optimal for a different visualization task.
It is rare that a clinical task does not benefit from combining several of these rendering techniques in a specific protocol. IAP also extends the benefits of the 3D architecture with its infrastructure by making additional functionality available to the MMR: extremely flexible adaptation to various memory scenarios, support for multi-threading, and asynchronous behavior.
All this comes together in a state of the art platform product that can handle not only best-case trade-show demonstrations but also real-life clinical scenarios easily and efficiently. The Cedara 3D platform technology is powerful, robust, and well-balanced, and it has no rivals on the market in terms of usage in the field, clinical insight, and breadth of functionality.
Glossary

Volume Rendering - A technique used to visualize 3D sampled data that does not require any geometrical intermediate structure.
Surface Rendering - A technique used to visualize 3D surfaces, represented by either polygons or voxels, that have been previously extracted from a sampled dataset.

Interpolation - A set of techniques used to generate missing data between known samples.

Voxel - A 3D discrete sample.

Shape Interpolation - An interpolation technique for binary objects that allows users to smoothly connect arbitrary contours.
Multiplanar or Curved Reformatting - Arbitrary cross-sections of a 3D sampled dataset.

MIP - Maximum Intensity Projection. A visualization technique that projects the maximum value along the viewing direction. Typically used for visualizing angiographic data.

ROI - Region of Interest. An irregular region which includes only the voxels that have to be processed.
Binary and Gray Level Duality

For many years a controversy has raged about which technology is better for the visualization of biological data: volume rendering or surface rendering. Each technique has advantages and drawbacks. Which technique is best for a specific rendering task is a choice that is deliberately deferred to application designers and clinicians. In fact, the design of the processing server makes it possible to combine the best of both techniques.
The processing server provides a unifying framework where data structures used by these two technologies can be easily shared and exchanged. In fact, it is designed for visual data processing where the primary sources of data are sampled image datasets. For this reason, all our data structures are voxel-based.
The two most important are binary solids and gray level slice stacks. Several objects in our visualization pipelines accept both types of data, providing a high level of flexibility and interoperability. For example, a binary solid can be directly visualized or used as a mask for the volume rendering of a gray level slice stack. Conversely, a gray level slice stack can be rendered directly, and also used to texture map the rendering of a solid object or to provide gradient information for gray level gradient shading.
A gray level slice stack is generally composed of a set of parallel cross-sectional images acquired by a scanner, and the images can be arbitrarily spaced and offset relative to each other. Several pixel types are supported, from 8-bit unsigned to 16-bit signed with a floating point scaling factor. Planar, arbitrary, and curved reformats (with user-defined thickness) are available for slice stacks and binary solids. Slice stacks can also be interpolated to generate isotropic volumes, or resampled at a different resolution. They can be volume rendered with a variety of compositing modes, such as Maximum Intensity Projection (MIP), Shaded Volume Rendering, and Unshaded Volume Rendering. Multiple stacks can be registered and rendered together, each with a different compositing mode.
To generate binary solids, a wide range of segmentation operations is provided: simple thresholding, geometrical ROIs, seeding, morphological operations, etc. Several of these operations are available in both 2D and 3D.
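As a sketch of the simplest of these operations, double thresholding followed by one morphological erosion step might look like this (a pure-numpy illustration under assumed names, not the IAP segmentation API):

```python
import numpy as np

def threshold_solid(stack, lo, hi):
    """Binary solid from a gray level slice stack via two thresholds."""
    return (np.asarray(stack) >= lo) & (np.asarray(stack) <= hi)

def erode6(bitvol):
    """One 3D morphological erosion step with a 6-connected kernel,
    a simple instance of the cleanup operations mentioned above."""
    padded = np.pad(np.asarray(bitvol, dtype=bool), 1, constant_values=False)
    out = padded[1:-1, 1:-1, 1:-1].copy()
    for axis in range(3):
        out &= np.roll(padded, 1, axis)[1:-1, 1:-1, 1:-1]
        out &= np.roll(padded, -1, axis)[1:-1, 1:-1, 1:-1]
    return out
```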
Binary solids can be reconstructed from a slice stack, and shape interpolation can be applied during the reconstruction phase. When binary solids have been reconstructed, logic operations (e.g., intersection) are available, together with disarticulation and cleanup functionality. Texture mapping and arbitrary visibility filters are also available for binary solids.
Clinical Scenarios
The following list is not meant to be exhaustive, but it does capture the most significant clinical requirements for a complete rendering engine. We want to underline the requirements of a real-life 3D rendering system for medical imaging, because day-to-day clinical situations are normally quite different from the best-case approach normally shown in demonstrations at trade shows.
The Cedara platform solution has no rivals in the market in terms of usage in the field, clinical insight and breadth of functionality.
Tumors

Tumors disturb the surrounding vascular structure and, in some cases, do not have well defined boundaries. Consequently, a set of rendering options needs to be available for them which can be mixed in a single image, including surface rendering (allowing volume calculations and providing semi-transparent surfaces at various levels of translucency, representation of segmentation confidence, etc.), volume rendering, and vasculature rendering techniques such as MIP. An image in which multiple rendering modes are used should still allow for the full range of measurement capabilities. For example, it should be possible to make the skin and skull translucent to allow a tumor to be seen, while still allowing measurements to be made along the skin surface.
Figure 1 - Volume rendered tumors. In (a), the skin is semi-transparent and allows the user to see the tumor and its relationship with the vasculature. In (b), the location of the tumor is shown relative to the brain and vasculature.
Display of Correlated Data from Different Modalities

There is a need to display and explore data from multiple acquisitions in a single image (also called multi-channel data). Some of the possibilities include: pre- and post-operative data, metabolic PET data with higher resolution MRI anatomic information, pre- and post-contrast studies, MRI and MRA, etc.
The renderer is able to fuse them during the ray traversal, with a real 3D fusion, not just a 2D overlay of the images.
Figure 2 - Renderings of different modalities. In (a), the rendering of an ultrasound kidney dataset. The red shows the visualization of the power mode; the gray shows the visualization of the B mode. Data provided by Sonoline Elegra. In (b), the volume rendering of a liver study. The dataset was acquired with a Hawkeye scanner. The low resolution CT shows the anatomy, while the SPECT data highlights the hot spots. Data provided by Rambam Hospital, Israel. In (c), the MRA provides the details of the vasculature, in red, while the MRI provides the details of the anatomy. (In this image, only the brain has been rendered.)
Dental Package

A dental package requires orthogonal, oblique, and curved reformatting, plus accurate measurement capabilities. In addition, our experience suggests that 3D rendering should be part of a dental package.
In 3D, surface rendering with cut surfaces corresponding to the orthogonal, oblique and curved dental reformats is required. In addition, the ability to label the surface of a 3D object with lines corresponding to the intersection of the object surface with the reformat planes would be useful. Since a dental package is typically used to provide information on where and how to insert dental implants, it should be possible to display geometric implant models in dental images and to obtain sizing and drilling trajectory information through extremely accurate measurements and surgical simulation. The discussion above on prosthesis design applies here also.
Large Size Dataset

Modern scanners are able to acquire a large amount of data; for example, a normal study from a spiral CT scan can easily be several hundred megabytes.
The system must be able to handle such a study without any performance penalty and without slowing down the workflow of the radiologist. The processing server can accomplish this since it directly manages the buffers and optimizes, or completely avoids, swapping these buffers to disk. For more information on this functionality, please refer to the Memory Management section in the PrS White Paper.
Since volume rendering requires large data structures, Memory Management is extremely relevant in these scenarios. Figure 4 shows the rendering of the CT dataset of the Visible Human Project. The processing server allows you to rotate this dataset without any swapping to disk after the pre-processing has been completed.
Figure _'B - Application ~f a curvilinear reformat for a dental pacl~ag~m Pr~~ 3D Rendering Architecture Wiaite Paper ~iga.~re 4 - Vola~rne rendering of the visible ha~rnan project.
Since the processing server performs the interpolation before this rendering, the rendered dataset has a size of 512x512x1024, at 12 bits per pixel. Including gradient information, the dataset size is 1 Gigabyte. Since the processing server performs some compression of the data and optimizes the memory buffers, there is no swapping during rotation, even on a 1 Gigabyte system.
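The sizes quoted above can be checked with simple arithmetic (storing each 12-bit sample in 2 bytes is an assumption):

```python
# 512 x 512 x 1024 voxels at 12 bits per pixel, stored as 2 bytes each
voxels = 512 * 512 * 1024
density_bytes = voxels * 2
print(density_bytes // 2**20)  # prints 512 (MiB of density data)
# adding per-voxel gradient information roughly doubles this,
# which is how the dataset approaches 1 Gigabyte
```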
Volume Rendering Basics

Volume rendering is a flexible technique for visualizing sampled image data (e.g., CT, MRI, Ultrasound, Nuclear Medicine). The key benefit of this technique is the ability to display the sampled data directly, without using a geometrical representation and without the need for segmentation. This makes all the sampled data available during the visualization and, by using a variable opacity transfer function, allows inner structures to appear semi-transparent.
The process of generating an image can be described intuitively using the ray-casting idea. Casting a ray from the observer through the volume generates each pixel in the final image. Samples have to be interpolated along the ray during the traversal. Each sample is classified, and the image generation process is a numerical approximation of the volume rendering integral.
Figure 5 - Conceptual illustration of volume rendering: rays are cast from the viewer through the volume to form the image, with a light source for shading.

In volume rendering, a fundamental role is played by the classification.
The classification defines the color and opacity of each voxel in the dataset. The opacity defines "how much" of the voxel is visible, by associating a value from 0 (fully transparent) to 1 (fully opaque). Using a continuous range of values avoids aliasing, because it is not a binary threshold. The color allows the user to distinguish between the densities (that represent different tissues in a CT study, for example) in the 3D image. Figure 6 shows three opacity settings for the same CT dataset.
A classification in which each voxel depends only on the density is called global, while a classification in which each voxel depends on its position and density is called local. A global classification function works pretty well with CT data, since each tissue is characterized by a range of Hounsfield units. In MR, the global classification function has very limited applicability, as shown in Figure 7. To handle MR data properly, a local classification is necessary.
Figure 6 - Different opacity settings for a CT dataset. The first row shows the volume rendered image, while the second row shows the histogram plus the opacity curve. Different colors have been used to differentiate the soft tissue and the bone in the 3D view.
Figure 7 - Different opacity settings for an MR dataset. In MR, the underlying physics is not compatible with the global opacity transfer function; it is not possible to select different tissues just by changing it. As will be explained in the next section, the processing server overcomes this problem by allowing the application to use a local classification.
Optimizations

The IAP renderer also takes advantage of optimization techniques that leverage several view-independent steps of the rendering. Specifically, some of the view-independent processing optimizations implemented are:
- interpolation;
- gradient generation; and
- background suppression.
Naturally, all these optimizations are achieved at the cost of the initial computation and the memory necessary to store the pre-calculated information and, therefore, can be selectively disabled depending on the available configuration. The application developer also has several controllable configurations available that allow the developer to gracefully decrease the amount of information cached, depending on the system resources available.
Rendering Modes

As we saw, the general model that the IAP volume renderer uses to generate a projection is to composite all the voxels that lie "behind" a pixel in the image plane. The compositing process differs for every rendering mode; the most important are illustrated here.
In the compositing process, the following variables are involved:

- Op(d): opacity for density d. The opacity values range from 0.0 (completely transparent) to 1.0 (fully opaque).
- Color(d): color for density d.
- V(x,y,z): density in the volume at the location x, y, z.
- I(u,v): intensity of the output image at pixel location u, v.
- (x,y,z) = SampleLocation(Ray, i): this notation represents the computation of the i-th sample point along the ray. This can be accomplished with nearest neighbor or trilinear interpolation.
- Ray = ComputeRay(u,v): this notation represents the computation of the ray passing through the pixel u, v in the image plane. Typically, this involves the definition of the sampling step and the direction of the ray.
- Normal(x,y,z): normal of the voxel at location x, y, z. Typically, this is approximated with a central difference operator.
- L: light vector.
- °: represents the dot product.

Figure 8 shows a graphical representation of these values.
Figure 8 - Values involved in the ray casting process: the normal vector, the light vector L, and the i-th sample point SampleLocation(Ray, i).
Maximum Intensity Projection (MIP)

This is the pseudo-code for the MIP:

    for every pixel u,v in I {
        ray = ComputeRay(u,v)
        for every sample point i along ray {
            (x,y,z) = SampleLocation(ray, i)
            if( I(u,v) < V(x,y,z) )
                I(u,v) = V(x,y,z)
        }
    }

Density Volume Rendering (DVR)

This is the pseudo code for DVR (referred to as "Multicolor" in the IAP man pages):
    for every pixel u,v in I {
        ray = ComputeRay(u,v)
        ray_opacity = 0.0
        for every sample point i along ray {
            (x,y,z) = SampleLocation(ray, i)
            alpha = ( 1.0 - ray_opacity ) * Op(V(x,y,z))
            I(u,v) += Color(V(x,y,z)) * alpha
            ray_opacity += alpha
        }
    }

Shaded Volume Rendering

This is the pseudo code for Shaded Volume Rendering (referred to as "Shaded Multicolor" in the IAP man pages):
    for every pixel u,v in I {
        ray = ComputeRay(u,v)
        ray_opacity = 0.0
        for every sample point i along ray {
            (x,y,z) = SampleLocation(ray, i)
            alpha = ( 1.0 - ray_opacity ) * Op(V(x,y,z))
            shade = Normal(x,y,z) ° L
            I(u,v) += Color(V(x,y,z)) * alpha * shade
            ray_opacity += alpha
        }
    }

Figure 9 shows the same dataset rendered with these three rendering modes.
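The pseudo code above translates almost directly into executable form. Below is a sketch of the unshaded DVR loop for a single ray (nearest-neighbour sampling and the argument names are simplifying assumptions, not the IAP interface):

```python
import numpy as np

def composite_ray(volume, origin, step, n_samples, op, color):
    """Front-to-back compositing along one ray (unshaded DVR).
    volume: 3D array of densities; op/color: functions of density."""
    intensity, ray_opacity = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    step = np.asarray(step, dtype=float)
    for _ in range(n_samples):
        x, y, z = np.round(pos).astype(int)  # nearest-neighbour sampling
        if (0 <= x < volume.shape[0] and 0 <= y < volume.shape[1]
                and 0 <= z < volume.shape[2]):
            d = volume[x, y, z]
            alpha = (1.0 - ray_opacity) * op(d)
            intensity += color(d) * alpha
            ray_opacity += alpha
        pos += step
    return intensity
```

Note how each sample's contribution is attenuated by the opacity already accumulated along the ray, exactly as in the pseudo code.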
Figure 9 - Rendering modes: MIP, Unshaded, Shaded.

Object Model
For a generic description of the IAP object model, please refer to the PrS White Paper. In this section, we illustrate in detail the objects involved in the volume rendering pipeline.
In Figure 10, you can see the IAP volume rendering pipeline. The original slices are kept in a number of raster objects (Ras). Those Ras objects are collected in a single slice stack (Ss) and then passed to the interpolator object (Sinterp). Here, the developer has the choice of several interpolation techniques (i.e., cubic, linear, nearest neighbor). The output of the Sinterp then goes into the view independent preprocessor (Vol) and then finally to the projector object (Cproj).
From that point on, the pipeline becomes purely two-dimensional. If Cproj is used in gray scale mode (no color table applied during the projection, as in Figure 9), the application can apply window/level to the output image in V2. If Cproj is in color mode, V2 will ignore the window/level setting. The application can add window/level in color mode using a dynamic extension of the Filter2 object, as reported in Appendix C.
One of the keys to correct interpretation of a volume dataset is motion. The amount of data in today's clinical datasets is so high that it is very difficult to perceive all the details in a static picture. Historically, this has been done using a pre-recorded cine-loop, but now it can be done by interactively changing the view position. Apart from the raw speed of the algorithm, it is also convenient to enhance the "perceived speed" of a volume rendering algorithm by using a progressive refinement update. Progressive refinement can be implemented in several ways in IAP, but the easiest and most common way to do it for volume rendering is to set up a second downsampled data-stream. The two branches, each processing the input data at a different resolution, are rendered in alternation.
Figure 11 shows how the pipeline has to be modified in order to achieve progressive refinement in IAP. Two volume rendering pipelines (Sinterp, Volume, Cproj) are set in parallel, fed with the same stack, and output in the same window. One of these pipelines, called High Speed, uses a downsampled dataset (the downsampling is performed in Sinterp). This allows rendering at a very high rate (more than 10 frames per second) even on a general purpose PC. The second pipeline is called High Quality. It renders the original dataset, usually interpolated with a linear or cubic kernel.
Figure 10 - Pipeline used for volume rendering.

Figure 11 - High speed/high quality pipelines.
High Speed and High Quality are joined at the progressive refinement object (Pref). The High Quality pipeline is always interruptible to guarantee application responsiveness, while the High Speed pipeline is normally in atomic mode (non-interruptible) and executes at a higher priority. The application developer has control of all these options, and can alter the settings depending on the requirements. For example, if a cine-loop has to be generated, the High Speed pipeline is disconnected, because the intermediate renderings are not used in this scenario. For rendering modes that require opacity and color control, a set of Pvx objects specifying a pixel value transformation can be connected to Vol.
The transformation is specified independently from the pixel type, and can be a simplified linear ramp or full piece-wise linear. Using the full piece-wise linear specification, the application writer can implement several different types of opacity and color editors specific to each medical imaging modality. Another common addition to the volume rendering pipeline is a cut plane object, Lima, or a Binary object. These objects specify the region of the stack that has to be projected, and can be connected either to Sinterp or Vol.
The processing server can also render a multi-channel dataset together. As discussed in "Clinical Scenarios" on page 4, this feature is very important in modalities such as MR and ultrasound, which can acquire multiple datasets. The renderer will interleave the datasets during the projection, which guarantees that it is a real 3D fusion, rather than a 2D overlay. Examples of this functionality are in Figure 2 on page 5. Several stacks are rendered together by simply connecting several Volume objects to the same Cproj object, as shown in Figure 12. As in the case of one Volume object, the developer can use a High Speed pipeline and a High Quality pipeline to enhance the interactivity.
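The per-sample interleaving that makes this a true 3D fusion can be sketched as follows (an illustrative simplification: all channels use the unshaded mode, and the channel ordering at each sample is arbitrary):

```python
def fuse_sample(samples, ray_opacity, ops, colors):
    """Composite one sample depth from several channels at once, so the
    channels mix in 3D instead of being overlaid after projection."""
    contribution = 0.0
    for d, op, color in zip(samples, ops, colors):
        alpha = (1.0 - ray_opacity) * op(d)
        contribution += color(d) * alpha
        ray_opacity += alpha
    return contribution, ray_opacity
```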
Advanced Features

As we have seen in "Clinical Scenarios" on page 4, a renderer needs to have a much higher level of sophistication to be effective in clinical practice. The processing server provides functionality which is beyond "standard" volume rendering. The most relevant features are presented here.
Clipping Tools

The user in 3D visualization is usually interested in the relationships between the anatomical structures. In order to investigate those relationships, it is necessary that the application be allowed to clip part of the dataset to reveal the internal structure. Our experience shows that there are two kinds of clipping:
- Permanent - part of the dataset (normally irrelevant structures) is permanently removed.
- Temporary - part of the dataset (normally relevant anatomy) is temporarily removed to reveal internal structures.
To accommodate these requirements, and in order to maximize performance, the processing server implements two kinds of clipping:

- Clipping at pre-processing time. This feature implements "permanent" clipping. The voxels clipped in this stage are not included in the data structure used by the renderer, and are not even computed, if possible.
- Clipping at rendering time. This feature implements "temporary" clipping. The voxels are only skipped during the rendering process, and are kept in the rendering data structure.
Using clipping at pre-processing time optimizes rendering performance, but any change to the clipping region will require a (potentially) expensive pre-processing, which usually prevents using this feature interactively. Clipping at rendering time instead allows the application to interactively clip the dataset: since the clipping is performed by skipping some voxels during the projection, changing this region is fully interactive.

Figure 12 - Rendering of a multi-channel dataset.
Usually the application generates two types of clipping regions:
• Geometrical clipping regions (e.g., generated by a bounding box or oblique plane).
• Irregular clipping regions (e.g., generated by outlining an anatomical region).
The processing server supports these two kinds of clipping both at pre-processing time and rendering time.
Geometrical clipping:
• Preprocessing: The object that performs the pre-processing, Vol, accepts a bounding box for clipping.
• Rendering: The render object, Cproj, accepts a bounding box as input. In the case of a multi-channel dataset, each volume can be clipped independently. Using the Wf object, it is also possible to clip each volume with an oblique plane or a box rotated with respect to the orthogonal axes.

Irregular clipping:
• Preprocessing: The Vol object accepts a bitvol as input. Vol will preprocess only the voxels included in the bitvol.
• Rendering: Any binary volume can be used as a clipping region. The processing server also allows interactively translating this region.
Irregular clipping at rendering time is very useful in several situations, particularly during MIP rendering. Normally several anatomical structures overlap in the MIP image, so interactively moving a clipping region (e.g., cylinder, oblique slab, or sphere) can clarify the ambiguities in the image. See Figure 13 for an example.
Another very common way to clip the dataset is based on the density. For example, on a CT scan the densities below the Hounsfield value of water are background and hence have no diagnostic information. Even in this case, the processing server supports clipping at pre-processing time (using the Vol object) and at rendering time (simply changing the opacity).
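As an illustration of the two approaches, the sketch below (in Python, with invented helper names; it is not the Vol/Cproj API) clips a CT dataset by density, permanently via a keep-mask and temporarily via the opacity table:

```python
import numpy as np

WATER_HU = 0       # Hounsfield value of water; lower densities are background
HU_MIN = -1024     # density mapped to the first entry of the opacity table

def clip_preprocess(volume_hu):
    """'Permanent' density clipping: flag the sub-water voxels so they are
    never entered into the renderer's data structure."""
    return volume_hu >= WATER_HU

def clip_by_opacity(opacity_table):
    """'Temporary' density clipping at rendering time: zero the opacity of
    background densities so they are simply skipped during projection."""
    table = opacity_table.copy()
    table[: WATER_HU - HU_MIN] = 0.0   # entries are indexed by (HU - HU_MIN)
    return table

# toy 12-bit opacity table covering [-1024, 3071] HU
opacity = np.linspace(0.0, 1.0, 4096)
clipped = clip_by_opacity(opacity)
```

Undoing the second form only requires restoring the table, which is why it stays interactive; undoing the first requires re-running the pre-processing.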

Figure 13 - Interactive clipping on the MIP view.
The user can interactively move the semi-sphere that clips the dataset and identify the region of the AVM. Usually the clipping region is also overlaid in the orthogonal MPR views.
Local Classification
The global classification is a very simple model and has limited applicability. As shown in Figure 7, the MR dataset does not allow the clinician to visualize any specific tissue or organ. In the CTA study, it is not possible to distinguish between contrast agent and bone. Both cases significantly reduce the ability to understand the anatomy. In order to overcome these problems, it is necessary to use the local classification.
By definition, the local classification is a function that assigns color and opacity to each voxel in the dataset based on its density and location. In reality, the color is associated with the voxel depending on the anatomical structure to which it belongs. So it is possible to split the dataset into several regions (representing the anatomical structures) and assign a colormap and opacity to each one of them.
In other terms, the local classification is implemented with a set of classifications, each one applied to a different region of the dataset. For this reason it is very often referred to as Multiple Classification.
The processing server supports Multiple Classification in a very easy and efficient manner. The application can create several binary objects (the regions) and apply to each one of them a different classification. Multiple Classification is directly supported by the MclVol object, which allows applying several classifications to the output of the Vol object. Figure 14 shows the case in which two classifications are defined in the same stack.
Figure 14 - Pipeline to use two classifications in the same volume.
Each classification (Cl) is defined in the area of the bitvol object associated with it. The Wf object has been added to the pipeline to allow interactively translating the area of the classification to obtain, for example, features as shown in Figure 13.
Since all the classifications share the same data structure (produced by Vol), the amount of memory required is independent of the number of classifications. This design also allows you to add/remove/move a classification with minimal pre-processing.
The MclVol object allows controlling the classification in the areas in which one or more regions overlap. This is implemented through a set of policies that can be extended with run-time extensions. This feature is a powerful tool for an application which knows in advance what each classification represents.
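As a sketch of the idea (not the MclVol API; the overlap policy here is simply "the last listed region wins"), local classification can be modeled as per-region transfer functions applied over a global default:

```python
import numpy as np

def classify_local(density, regions, default):
    """Local (multiple) classification sketch: a global classification is
    applied first, then each region's classification overrides it inside
    the region's bitvol. Overlaps resolve to the last listed region."""
    rgba = default(density)
    for mask, classify in regions:
        rgba[mask] = classify(density[mask])
    return rgba

# toy 1D "dataset": contrast agent and bone share the same density range
density = np.array([100.0, 100.0, 300.0, 300.0])
aorta = np.array([True, False, True, False])   # hypothetical contrast bitvol

gray = lambda d: np.stack([d / 400] * 4, axis=-1)                   # default
red = lambda d: np.stack([d / 400, 0 * d, 0 * d, d / 400], axis=-1)  # vessel

rgba = classify_local(density, [(aorta, red)], gray)
```

Because the per-voxel gray values come from one shared array, adding or moving a region only touches the region masks, mirroring the memory behavior described above.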
Figure 15 shows how this pipeline can produce the correct result on a CTA study. The aorta is filled with a contrast agent that has the same densities as the bone. Defining one bitvol (shown in Figure 15a) which delineates the area of the aorta allows the application to apply the proper classification to the stack.
Note that the bitvol does not have to exactly match the structure of interest, but rather loosely contain it or exclude the obstructing anatomies. The direct volume rendering process, by using an appropriate opacity transfer function, will allow for the visualization of all the details contained in the bitvols.
This approach can be applied to a variety of different modalities. In Figure 1 on page 4, it is applied to an MR study. The main benefit of our approach is that the creation of the bitvols is very time effective. Please refer to the "IAP PrS Segmentation Architecture" White Paper for a complete list of the functionalities.
Figure 15 - Local classification applied to a CTA study.
In (a), a bitvol is used to define the region of the aorta. In (b), the CTA of the abdominal aorta is classified using local classification.
The processing server functionality for the creation/manipulation of bitvols includes:
• Shape Interpolation - Allows the clinician to reconstruct a smooth binary object from a set of 2D ROIs. It is very effective for extracting complex organs that have an irregular shape, like the brain. Figure 16 shows how this functionality can be applied.
• Extrusion - Allows the clinician to extrude a 2D ROI along any direction. It is very useful for eliminating structures that obscure relevant parts of the anatomy, and is often used in ultrasound applications.
• Seeding - 3D region growing in the binary object. This functionality very naturally eliminates noise or irrelevant structures around the area of interest.
• Disarticulation - A kind of region growing which allows the clinician to separate objects which are loosely connected and represent different anatomical structures.
• Dilation/Erosion - Each binary object can be dilated or eroded in 3D with an arbitrary number of pixels for each axis.
• Union, Intersection and Complement of the binary objects.
• Gray Level region growing - Allows the clinician to segment the object without requiring the knowledge of thresholds.
The processing server does not limit the manipulation of the binary object to these tools. If the application has the knowledge of the region to define its own segmentation (e.g., using an anatomical atlas it can delineate the brain and directly generate the bitvol used in Figure 16), it is straightforward to introduce this bitvol in the rendering pipeline.
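The Seeding tool above can be pictured as a plain 6-connected region growing in the binary volume; the function below is an illustration, not the processing server implementation:

```python
from collections import deque
import numpy as np

def seed_grow(bitvol, seed):
    """Keep only the 6-connected component of `bitvol` containing `seed`,
    discarding disconnected noise -- a sketch of the Seeding tool."""
    out = np.zeros_like(bitvol, dtype=bool)
    if not bitvol[seed]:
        return out
    out[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < bitvol.shape[i] for i in range(3)) \
                    and bitvol[n] and not out[n]:
                out[n] = True
                queue.append(n)
    return out

vol = np.zeros((4, 4, 4), dtype=bool)
vol[0, 0, 0:2] = True      # structure of interest
vol[3, 3, 3] = True        # disconnected "noise" voxel
grown = seed_grow(vol, (0, 0, 0))
```

Disarticulation can be seen as the same traversal with a stricter connectivity rule, so loosely connected objects end up in different components.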
Figure 16 - Example of shape interpolation.
To extract a complex organ, the user roughly outlines the brain contours on a few slices and then shape interpolation reconstruction generates the bitvol. The outlining of the brain does not have to be precise, but just include the region of interest. The volume rendering process, with a proper opacity table, will extract the details of the brain.
The functionality supported by MclVol can also be used to arbitrarily cut the dataset and texture map the original gray level on the cutting surface. One of the main benefits of this feature is that the gray level from the MPRs can be embedded in the 3D visualization. Figure 17 shows how this can be done in two different ways. In Figure 17a, the MPR planes are visualized with the original gray level information. This can be used to correlate the MPRs shown by the application in the dataset. In Figure 17b, a cut into the dataset shows the depth of the fracture on the bone.
Figure 18 shows another example of this feature. In this case, a spherical classification is used to reveal the bone structures in a CT dataset (Figure 18a) or the meningioma in an MR dataset (Figure 18b).

Figure 17 - Example of embedding gray level from the MPRs.
The functionality of MclVol allows you to embed the information from the MPRs in the visualization. This can be used for showing the location of the MPRs in space or for investigating the depth of some structure.
Figure 18 - Example of clipping achievable with MclVol.
In (a), there are two classifications in the CT dataset: the skin, which is applied to the whole dataset, and the bone, which is applied to the sphere.
In (b), there are four different classifications.
To minimize memory consumption, MclVol supports multiple downstream connections so that several Cproj objects can share the same data structure.
It is possible to associate a group of classifications to a specific Cproj, so each Cproj object can display a different set of objects. Note that in order to minimize memory consumption, Volume keeps only one copy of the processed dataset.
Thus, if some Cproj objects connected to MclVol are using Shaded Volume Rendering, while others are using MIP or Unshaded Volume Rendering, the performance will be severely affected, since Volume will remove the data structure of the gradient when the MIP or Unshaded Volume Rendering is scheduled for computation, and will recompute it when the Shaded Volume Rendering is scheduled. In this scenario, we suggest you use two Vols and two MclVols, one for Shaded and one for Unshaded Volume Rendering, and connect the Cproj to the MclVol depending on the rendering mode used.

There is a class of applications that requires anatomical data to be rendered with synthetic objects, usually defined by polygons. Typically, applications oriented toward planning (i.e., radiotherapy or surgery) require this feature.
The processing server supports this functionality in a very flexible way. The application can provide to the renderer images with an associated Z buffer that has to be embedded in the scene. The Z buffer stores, for each pixel in the image, the distance from the polygon to the image plane. The Z buffer is widely used in computer graphics and supported by several libraries. The application has the freedom to choose the 3D library and, if necessary, write its own.
Figure 1 shows an example of this functionality. The cone, which has been rendered using OpenGL, is embedded in the dataset. Notice that some voxels in front of the cone are semi-transparent, while other voxels behind the cone are not rendered at all. This functionality doesn't require any pre-processing, so changing the geometry can be fully interactive.
The Z buffer is a very simple and effective way to embed geometry in the volume, but imposes some limitations. For each pixel in the image there must be only one polygon projected onto it. This restricts the application to fully opaque polygons or non-overlapping semi-transparent polygons.
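A minimal sketch of how a polygon Z buffer can be merged into front-to-back ray casting (illustrative names and a single ray; the real renderer works on full images):

```python
def cast_ray_with_zbuffer(samples, opacities, colors, z_poly, poly_color):
    """Front-to-back compositing of one ray; when the ray reaches the depth
    stored in the polygon Z buffer, the polygon color is composited and the
    ray terminates, so voxels behind the geometry are never rendered."""
    color, alpha = 0.0, 0.0
    for z, op, col in zip(samples, opacities, colors):
        if z >= z_poly:                        # hit the embedded geometry
            color += (1.0 - alpha) * poly_color
            return color, 1.0                  # opaque polygon ends the ray
        color += (1.0 - alpha) * op * col      # voxel in front of the polygon
        alpha += (1.0 - alpha) * op
    return color, alpha

# semi-transparent voxels in front of the polygon, opaque ones behind it
zs = [0.2, 0.4, 0.6, 0.8]
ops = [0.3, 0.3, 1.0, 1.0]
cols = [1.0, 1.0, 0.0, 0.0]
c, a = cast_ray_with_zbuffer(zs, ops, cols, z_poly=0.5, poly_color=0.5)
```

Because the merge happens during the projection, no pre-processing of the volume is needed when the geometry moves, matching the interactivity noted above.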
Figure 1 - Geometry embedded in the volumetric dataset.
Opacity Modulation
One technique widely used in volume rendering to enhance the quality of the image is changing the opacity of each voxel depending on the magnitude of the gradient at the voxel's location. This technique, called opacity modulation, allows the user to enhance the transitions between the anatomical structures (characterized by strong gradient magnitude) and suppress homogeneous regions in the dataset. It greatly improves the effect of translucency, because when the homogeneous regions are rendered with low opacity, they tend to become dark and suppress details. Using opacity modulation, the contribution of these regions can be completely suppressed.
The processing server supports opacity modulation in a very flexible manner. A modulation table, which defines a multiplication factor for the opacity of each voxel depending on its gradient, is defined for each range of densities. In Figure 2, for example, the modulation has been applied only to the soft tissue densities in the CTA dataset.
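A sketch of such a per-density-range modulation table, assuming a simple functional form for the modulation factor (all names and values here are illustrative):

```python
import numpy as np

def modulated_opacity(density, grad_mag, opacity_of, mod_table):
    """Opacity modulation sketch: the base opacity from the transfer function
    is multiplied by a gradient-dependent factor, but only for the density
    ranges that have a modulation table entry."""
    op = opacity_of(density)
    for (lo, hi), factor_of in mod_table:
        sel = (density >= lo) & (density < hi)
        op[sel] *= factor_of(grad_mag[sel])
    return op

density = np.array([50.0, 50.0, 900.0])    # soft tissue, soft tissue, bone
grad_mag = np.array([0.0, 10.0, 0.0])      # homogeneous vs. border voxel

base = lambda d: np.full_like(d, 0.4)
# suppress homogeneous soft tissue, keep strong-gradient borders
soft_mod = lambda g: np.clip(g / 10.0, 0.0, 1.0)

op = modulated_opacity(density, grad_mag, base, [((0.0, 200.0), soft_mod)])
```

The bone voxel is outside the modulated range, so its opacity is untouched; only the homogeneous soft-tissue voxel is suppressed.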

Figure 2 - Opacity modulation used to enhance translucency.
Suppressing the contribution of a homogeneous region, which usually appears dark with low opacity, allows the user to visualize the inner details of the data.
The effect of the modulation is also evident in Figure 3, which shows two slices of the dataset with the same settings used in Figure 2. The bone structure is the same in both images, while in Figure 3b, only the border of the soft tissue (characterized by strong gradient magnitude) is visible, while the inner part (characterized by low magnitude) has been removed.
Figure 3 - Cross-section of the dataset using the settings from Figure 2.
As mentioned before, opacity modulation can also enhance details in the dataset. In Figure 4, the rendered vasculature is enhanced by increasing the opacity at the border of the vessels, characterized by high gradient magnitude. As you can see, the vessel in Figure 4b appears more solid and better defined.
Figure 4 - Opacity modulation used to enhance vasculature visualization.
One more application of opacity modulation is to minimize the effect of "color bleeding" due to the partial volume artifact in the CTA dataset. In this case, the opacity modulation is applied to the densities of the contrast agent and the strong gradient magnitude is suppressed. Typically the user tunes this value for each specific dataset. Figure 5 shows an example of a CTA of the neck region.
Figure 5 - Opacity modulation used to suppress "color bleeding".
In (b), the opacity modulation is used to suppress the high gradient magnitude associated with the contrast agent densities.
In this situation, opacity modulation represents a good tradeoff between user intervention and image quality. To use the local classification, the user has to segment the carotid from the dataset, while in this case, the user has only to change a slider to interactively adjust for the best image quality. Note that when using this technique, it is not possible to measure the volume of the carotid, since the system does not have any information about the object. Measurement requires the segmentation of the anatomical structure, and hence multiple classification. Please refer to the "IAP PrS Segmentation Architecture" White Paper for a detailed description of the measurements and their relationship with the local classification.
The processing server allows the application to set a modulation table for each density range of each Volume object rendered in the scene. Currently, opacity modulation is supported when the input rasters are 8 bit or 12 bit.
Mixing of Rendering Modes
The application usually chooses the rendering method based on the feature that the user is looking for in the dataset. Typically, MIP is used to show the vasculature on an MR dataset, while Unshaded or Shaded Volume Rendering is more appropriate to show the anatomy (i.e., the brain or a tumor). In some situations, both these features have to be visualized together, hence different rendering modes have to be mixed.
The processing server provides this functionality because we have seen some clinical benefit. For example, Figure 6 shows a CTA dataset in which two regions have been classified: the carotid and the bone structure. In Figure 6a, both regions are rendered with Shaded Volume Rendering. The user can appreciate the shape of the anatomy but cannot see the calcification inside the carotid. In Figure 6b, the carotid is rendered using MIP while the bone uses Shaded Volume Rendering. In this image, the calcification is easily visible.
The processing server merges the two regions during the rendering, not as a 2D overlay. This guarantees that the relative positions of the anatomical structures are correct.
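One way to picture this depth-correct merge, for a single ray whose samples carry a region label (a simplification for illustration, not the processing server's algorithm):

```python
def mixed_ray(samples):
    """Mix rendering modes along one ray: 'mip' samples are reduced with a
    maximum, 'shaded' samples are composited front-to-back, and the MIP value
    is weighted by the transparency accumulated in front of its voxel, so the
    relative depth of the structures stays correct (a 3D merge, not a 2D
    overlay). `samples` is a front-to-back list of (mode, value, opacity)."""
    color, alpha = 0.0, 0.0
    mip_val, mip_weight = 0.0, 1.0
    for mode, value, op in samples:
        if mode == "mip":
            if value > mip_val:
                mip_val = value
                mip_weight = 1.0 - alpha   # transparency in front of this voxel
        else:  # shaded compositing
            color += (1.0 - alpha) * op * value
            alpha += (1.0 - alpha) * op
    return color + mip_weight * mip_val

# a shaded bone sample in front of two MIP carotid samples
samples = [("shaded", 1.0, 0.5), ("mip", 0.8, 0.0), ("mip", 0.6, 0.0)]
pixel = mixed_ray(samples)
```

Had the MIP contribution been pasted on as a 2D overlay instead, it would ignore the opacity of the structures in front of it.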
Figure 6 - CTA study showing mixed rendering.
In (a), the carotid and bone are both rendered with Shaded Volume Rendering. In (b), the carotid is rendered with MIP and the bone with Shaded Volume Rendering.
Another situation in which this functionality can be used is with a multi-channel dataset. In Figure 7, an NM dataset and a CT dataset are rendered together. In this example the user is looking for the "hot spots" in the NM dataset and their relationship with the surrounding anatomy. The hot spots are by their nature visualized using MIP rendering, while the anatomy in the CT dataset can be properly rendered using Shaded Volume Rendering. Figure 7b shows the mixed rendering mode: the hot spots are clearly visible in the body. Figure 7a depicts both datasets rendered using Shaded mode; in this case the location of the hot spots in the body is not as clear.
Currently, the processing server allows the fusion of only MIP with Shaded or Unshaded Volume Rendering.
Figure 7 - Different modalities and rendering modes.
In (a), an NM dataset and a CT dataset are both rendered using Shaded Volume Rendering. In (b), the NM dataset is rendered using MIP and the CT dataset is rendered using Shaded Volume Rendering. Data provided by Rambam Hospital, Israel.
Coordinate Query
In the application, there is often the need to correlate the 3D images with the 2D MPRs. For example, Figure 8 shows that the user can see the stenosis in the MIP view, but needs the MPR to estimate the degree of stenosis.

Since the voxels can be semi-transparent in volume rendering, selecting a point in the 3D view does not uniquely determine a location inside the stack; rather, it determines a list of voxels which contribute to the color of the selected pixel.
Several algorithms can be used to select the voxel in this list to be considered as the "selected point". For example, the first non-transparent voxel can be selected, or the first opaque voxel can be used instead. Our experience shows that the algorithm described in "A method for specifying 3D interested regions on Volume Rendered Images and its evaluation for Virtual Endoscopy System", Toyofumi Saito, Kensaku Mori, Yasuhito Suenaga, Jun-ichi Hasegawa, Jun-ichiro Toriwaki, and Kazuhiro Katada, CARS 2000, San Francisco, works very well. The algorithm selects the voxel that contributes most to the color of the selected pixel. It is fully automatic (no threshold requested) and it works in a very intuitive way.
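The selection rule of that algorithm can be sketched as follows, assuming the coordinate query returns the front-to-back list of (voxel index, opacity) pairs for the picked pixel:

```python
def select_voxel(voxels):
    """Sketch of the max-contribution selection rule: among the voxels
    projected onto the picked pixel, choose the one whose share of the
    pixel's color (its opacity times the transparency accumulated in front
    of it) is largest."""
    best, best_contrib, transparency = None, -1.0, 1.0
    for index, op in voxels:
        contrib = transparency * op        # this voxel's share of the pixel
        if contrib > best_contrib:
            best, best_contrib = index, contrib
        transparency *= (1.0 - op)
    return best

# semi-transparent skin, then a nearly opaque vessel wall, then hidden voxels
ray = [(10, 0.2), (11, 0.7), (12, 0.9)]
picked = select_voxel(ray)
```

No threshold is involved: the rule simply prefers the voxel the user most likely perceives at that pixel.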
This functionality can also be used to create a 3D marker. When a specific point is selected in stack space, the processing server can track its position across rotations, and the application can query the list of voxels that contribute to that pixel. The application can then verify if the point is the one visible.
Since the marker is rendered as a 2D overlay, the thickness of the line does not increase when the image is zoomed. See Figure 9.
Figure 8 - Localization of an anatomical point using a MIP view.
The user clicks on the stenosis visible on the MIP view (note the position of the pointer). The application then moves the MPR planes to that location to allow the user to estimate the degree of stenosis.

Figure 9 - Example of a 3D marker.
A specific point at the base of the orbit is tracked during the interactive rotation of the dataset. In (a), the point is visible to the user and hence the marker is rendered and drawn in red. In (b), the visibility of the point is occluded by bony structures, hence the marker is drawn in blue.
The processing server returns all of the voxels projected onto the selected pixel to the application. The following values are returned to the application:
• The density of the voxel: V(x,y,z)
• The opacity of the voxel: op(V(x,y,z))
• If color mode is used, the color of the voxel
• The accumulated opacity: ray opacity
• The accumulated color, if color mode is used, or the accumulated density: I(x,y)
During the coordinate query, nearest neighbor interpolation is used. The application can then scan this list to determine the selected voxel using its own algorithm.
Note that the IAP processing server also returns the voxels that do not contribute to the final pixel. This is done on purpose so that the application can, for example, determine the size/thickness of the object (e.g., vessel or bone) on which the user has clicked.

Perspective Projection
In medical imaging, the term "3D image" very often refers to an image generated using parallel projection. The word parallel indicates that the projector rays are all parallel.
This technique creates an impression of three-dimensionality in the image but does not simulate human vision accurately. In human vision, the rays are not parallel but instead converge on a point, the eye of the viewer (more specifically, the retina). This rendering geometry is very often referred to as perspective projection. Perspective projection allows the generation of more realistic images than parallel projection. Figure 10 and Figure 11 illustrate the difference between these two projection methods.
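The difference can be sketched in a few lines, assuming an image plane at a fixed z and illustrative pixel coordinates:

```python
import numpy as np

def parallel_ray(px, py, view_dir):
    """Parallel projection: every pixel casts a ray with the same direction."""
    origin = np.array([px, py, 0.0])
    return origin, np.asarray(view_dir, dtype=float)

def perspective_ray(px, py, eye, image_z=1.0):
    """Perspective projection: rays diverge from a single eye point through
    each pixel of the image plane."""
    eye = np.asarray(eye, dtype=float)
    direction = np.array([px, py, image_z]) - eye
    return eye, direction / np.linalg.norm(direction)

o1, d1 = parallel_ray(0, 0, (0, 0, 1))
o2, d2 = parallel_ray(5, 5, (0, 0, 1))
e1, p1 = perspective_ray(0, 0, eye=(0, 0, -1))
e2, p2 = perspective_ray(5, 5, eye=(0, 0, -1))
# parallel rays share a direction; perspective rays share an origin instead
```

Because the perspective directions fan out from the eye, the sampling density, and hence the magnification, varies with depth, which is what breaks size comparisons in perspective images.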
Figure 10 - Parallel and Perspective Projections.
Figure 11 - Parallel projection and Perspective projection schemes.
There is an important implication of using perspective projection in medical imaging. In perspective projection, different parts of the object are magnified by different factors: parts close to the eye look bigger than objects further away. This implies that on a perspective image, it is not possible to compare object sizes or distances. An example of this is shown in Figure 12.
Figure 12 - Perspective magnifies parts of the dataset by different factors.
In (a), the image is rendered with parallel projection. The yellow marker shows that the two vessels do not have the same size. In (b), the image is rendered with perspective projection. The red marker shows that the same two vessels appear to have the same size.

Although not suitable for measurement, perspective projection is useful in medical imaging for several reasons. It can simulate the view of the endoscope and the radiographic acquisition.
The geometry of the acquisition of radiography is, by its nature, a perspective projection. Using a CT dataset, it is theoretically possible to reconstruct the radiograph of a patient from any position. This process is referred to as DRR (Digitally Reconstructed Radiography) and is used in Radiotherapy Planning. One of the technical difficulties of DRR is that the x-ray used in the CT scanner has a different energy (and hence different characteristics) compared to that used in x-ray radiography. The IAP processing server allows correction of those differences using the opacity map as an x-ray absorption curve for each specific voxel. Figure 13 shows two examples of DRR.
Figure 13 - DRRs of a CT dataset.
To correct for the different x-ray absorptions between CT x-ray and radiographic x-ray, the opacity curve has been approximated with an exponential function. The function is designed to highlight the bone voxels in the dataset, which in radiography absorb more than in CT.
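A toy version of such a DRR pixel, using Beer-Lambert attenuation with a hypothetical bone-emphasizing absorption curve (the curve and values are illustrative, not the product's):

```python
import numpy as np

def drr_pixel(hu_samples, absorption_of, step=1.0):
    """DRR sketch: a simulated radiograph pixel is the exponential attenuation
    of an x-ray along the ray through the CT volume. `absorption_of` maps the
    CT density to an absorption coefficient -- the role the opacity curve
    plays above, correcting for the different x-ray energies."""
    mu = absorption_of(np.asarray(hu_samples, dtype=float))
    return float(np.exp(-step * mu.sum()))     # Beer-Lambert attenuation

# hypothetical exponential-style curve emphasizing bone densities
absorption = lambda hu: np.where(hu > 300, 0.02 * np.exp(hu / 1000.0), 0.001)

soft_ray = [40.0, 60.0, 50.0]      # ray through soft tissue only
bone_ray = [40.0, 1200.0, 50.0]    # ray crossing bone
i_soft = drr_pixel(soft_ray, absorption)
i_bone = drr_pixel(bone_ray, absorption)
# the bone ray is attenuated more, so it appears darker on the simulated film
```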
The perspective projection can also simulate the acquisition from an endoscope. This allows a fly-through of the dataset to perform, for example, virtual colonoscopy. Figure 14 shows two images generated from inside the CTA dataset.
Figure 14 - Perspective allows "fly-through" animations.
These images are frames from the CTA chest dataset. The white objects are calcifications in the aorta. In (a), the aneurysm is shown from inside the aorta. In (b), the branch to the iliac arteries is shown. Refer to the IAP Image Gallery for the full animation.
Since perspective projection involves different optimizations than parallel projection, it has been implemented as a separate object, Pproj, which supports the same input and output connections as Cproj. The pipelines shown in this white paper can be used in perspective mode by simply switching Cproj with Pproj (although currently not all the functionality available in Cproj is available in Pproj).
Because of the sampling scheme implemented in the Perspective renderer, it is possible to have a great level of detail even when the images are magnified several times.
Figure 15 - Perspective renderer allows a high level of detail.
The perspective rendering engine is released as a separate DLL on Win32. The application can replace this DLL with its own implementation, as long as it is compliant with the same interface and uses the same data structure.

Claims

What is claimed is:
1. A system for measuring and rendering a 3D object acquired by a scanner, the system comprising: (i) an acquisition scanner; (ii) a network over which the acquired 2D images, representing the cross-sections of the bag, are transmitted to a computer; and (iii) the computer, or workstation, where they are displayed/measured.
CA002365323A 2001-12-14 2001-12-14 Method of measuring 3d object and rendering 3d object acquired by a scanner Abandoned CA2365323A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002365323A CA2365323A1 (en) 2001-12-14 2001-12-14 Method of measuring 3d object and rendering 3d object acquired by a scanner

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA002365323A CA2365323A1 (en) 2001-12-14 2001-12-14 Method of measuring 3d object and rendering 3d object acquired by a scanner

Publications (1)

Publication Number Publication Date
CA2365323A1 true CA2365323A1 (en) 2003-06-14

Family

ID=4170870

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002365323A Abandoned CA2365323A1 (en) 2001-12-14 2001-12-14 Method of measuring 3d object and rendering 3d object acquired by a scanner

Country Status (1)

Country Link
CA (1) CA2365323A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005002190B4 (en) * 2005-01-17 2007-04-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Scanner and method for operating a scanner
DE102009052315A1 (en) * 2009-11-02 2011-05-12 Siemens Aktiengesellschaft A method for highlighting local features in anatomical volume representations of vascular structures and computer system for performing this method
CN111247560A (en) * 2017-09-06 2020-06-05 福沃科技有限公司 Method for maintaining perceptual constancy of objects in an image

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005002190B4 (en) * 2005-01-17 2007-04-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Scanner and method for operating a scanner
US7469834B2 (en) 2005-01-17 2008-12-30 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Scanner and method for operating a scanner
DE102009052315A1 (en) * 2009-11-02 2011-05-12 Siemens Aktiengesellschaft A method for highlighting local features in anatomical volume representations of vascular structures and computer system for performing this method
US8417008B2 (en) 2009-11-02 2013-04-09 Siemens Aktiengesellschaft Method for highlighting local characteristics in anatomical volume renderings of vessel structures and computer system for carrying out this method
DE102009052315B4 (en) 2009-11-02 2019-03-07 Siemens Healthcare Gmbh A method for highlighting local features in anatomical volume representations of vascular structures and computer system for performing this method
CN111247560A (en) * 2017-09-06 2020-06-05 福沃科技有限公司 Method for maintaining perceptual constancy of objects in an image
CN111247560B (en) * 2017-09-06 2023-10-27 福沃科技有限公司 Method for preserving perceptual constancy of objects in an image

Similar Documents

Publication Publication Date Title
US6480732B1 (en) Medical image processing device for producing a composite image of the three-dimensional images
Stytz et al. Three-dimensional medical imaging: algorithms and computer systems
US7529396B2 (en) Method, computer program product, and apparatus for designating region of interest
US4835688A (en) Three-dimensional image processing apparatus
US6181348B1 (en) Method for selective volume visualization via texture mapping
EP3493161B1 (en) Transfer function determination in medical imaging
US7924279B2 (en) Protocol-based volume visualization
Preim et al. 3D visualization of vasculature: an overview
US20050143654A1 (en) Systems and methods for segmented volume rendering using a programmable graphics pipeline
Burns et al. Feature emphasis and contextual cutaways for multimodal medical visualization.
CA2973449C (en) Method, device and system for simulating shadow images
JP5295562B2 (en) Flexible 3D rotational angiography-computed tomography fusion method
US10580181B2 (en) Method and system for generating color medical image based on combined color table
US20100246957A1 (en) Enhanced coronary viewing
JP7423338B2 (en) Image processing device and image processing method
EP1945102B1 (en) Image processing system and method for silhouette rendering and display of images during interventional procedures
CN111932665A (en) Hepatic vessel three-dimensional reconstruction and visualization method based on vessel tubular model
EP3828836B1 (en) Method and data processing system for providing a two-dimensional unfolded image of at least one tubular structure
Kerr et al. Volume rendering of visible human data for an anatomical virtual environment
Pavone et al. From maximum intensity projection to volume rendering
EP3933848A1 (en) Vrds 4d medical image processing method and product
CA2365045A1 (en) Method for the detection of guns and ammunition in x-ray scans of containers for security assurance
CA2365323A1 (en) Method of measuring 3d object and rendering 3d object acquired by a scanner
CA2365062A1 (en) Fast review of scanned baggage, and visualization and extraction of 3d objects of interest from the scanned baggage 3d dataset
US20230342957A1 (en) Volume rendering apparatus and method

Legal Events

Date Code Title Description
FZDE Dead