CA2365045A1 - Method for the detection of guns and ammunition in x-ray scans of containers for security assurance - Google Patents

Method for the detection of guns and ammunition in x-ray scans of containers for security assurance

Info

Publication number
CA2365045A1
Authority
CA
Canada
Prior art keywords
rendering
dataset
prs
segmentation
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002365045A
Other languages
French (fr)
Inventor
Arun Menawat
Levant Tinaz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cedara Software Corp
Original Assignee
Cedara Software Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cedara Software Corp filed Critical Cedara Software Corp
Priority to CA002365045A priority Critical patent/CA2365045A1/en
Publication of CA2365045A1 publication Critical patent/CA2365045A1/en
Abandoned legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G01V 5/20
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10116: X-ray image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30112: Baggage; Luggage; Suitcase

Abstract

A system for detecting guns and ammunition in X-ray scans of containers for security assurance is disclosed. The system comprises a security scanner computer workstation for use in conjunction with an X-ray based scanner, and a shape detection software which detects the specific shapes and sizes.

Description

Method for the detection of guns and ammunition in X-ray scans of containers for security assurance

Field of the Invention

The present invention relates to a method and system for detecting guns and ammunition in X-ray scans of containers for security assurance.
Summary and Advantages of the Invention

X-ray based 2D and 3D security imaging scanners, such as those used at airports, have requirements to detect guns. The problem of detecting a gun is complicated by the fact that it can have many forms and may be disassembled into several components. Some or all of the constituent components may not be easily recognizable as components of a gun, and the components could be in different items of luggage.
According to the present invention, the above problems of detecting a gun can be reduced to the detection of the barrel of a gun and the bullets. The detection of the barrel is further reduced to that of finding a hollow cylinder within a predetermined range of diameters corresponding to currently available light ammunition calibers. It is possible that the gun or components thereof could be in one baggage item and the ammunition in another. Therefore, the present invention also describes a method of detecting the bullets, which is reduced to finding a metallic cylinder with the same diameter range as described for the barrel and a predetermined length range.
A further understanding of the other features, aspects, and advantages of the present invention will be realized by reference to the following description, appended claims, and accompanying drawings.
Detailed Description of the Preferred Embodiment(s)

According to one aspect of the present invention, there is provided a system and method (algorithm) of detecting guns and ammunition in X-ray scans of containers for security assurance.
According to one embodiment of the invention, a security scanner computer workstation is provided for use in conjunction with an X-ray based scanner, either 2D or 3D CT based, having shape detection software which detects the specific shapes and sizes described above. Well-known existing shape detection algorithms, such as Hough transforms, and software are used with auto-separation and labeling of suspect images and automatic noise elimination to compensate for the effects of operator fatigue. The resultant detected threats and other baggage contents are displayed on the computer screen in 3D views using volume rendering graphics techniques.
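The circle Hough transform mentioned above can be sketched in a few lines. This is a minimal 2D sketch and not the patented implementation: a barrel or bullet cross-section appears in a slice as a circular edge, and each edge pixel votes for the centres of circles of a caliber-range radius that could pass through it. The image size, angular sampling, and vote count below are illustrative assumptions.

```python
import math

def hough_circles(edge_points, radius, width, height, votes_needed):
    """Accumulate circle-Hough votes for one candidate radius; peaks in the
    accumulator are likely circle centres (e.g. a bore seen in cross-section)."""
    acc = {}
    for (x, y) in edge_points:
        for deg in range(0, 360, 2):               # sample candidate angles
            t = math.radians(deg)
            cx = round(x - radius * math.cos(t))   # centre this point votes for
            cy = round(y - radius * math.sin(t))
            if 0 <= cx < width and 0 <= cy < height:
                acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
    return [c for c, v in acc.items() if v >= votes_needed]

# Synthetic slice: edge points on a circle of radius 5 centred at (20, 20).
pts = [(20 + round(5 * math.cos(math.radians(a))),
        20 + round(5 * math.sin(math.radians(a)))) for a in range(0, 360, 10)]
centres = hough_circles(pts, 5, 40, 40, votes_needed=20)
```

In practice one would sweep the radius over the caliber range and look for the concentric inner and outer walls of the hollow cylinder.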
The present invention includes software shape detection algorithms tuned to detect the presence of hollow cylinders (a bullet is a hollow metallic cylinder containing explosive chemicals).
The present invention will be further understood by the additional descriptions A, B and C attached hereto.
While the present invention has been described with reference to specific embodiments, the description is illustrative of the invention and is not to be construed as limiting the invention. Various modifications may occur to those skilled in the art without departing from the true spirit and scope of the invention as defined by the appended claims.

Additional Description A
"IAP PrS Segmentation Architecture"
IAP PrS Segmentation Architecture

Introduction

Imaging Application Platform (IAP) is a well-established platform product specifically targeted at medical imaging. IAP supports a wide set of functionality including database, hardcopy, DICOM services, image processing, and reconstruction. The design is based on a client/server architecture and each class of functionality is implemented in a separate server. This paper will focus on the image processing server (further referred to as the processing server or prserver) and in particular on the segmentation functionality.
Segmentation, or feature extraction, is an important feature for any medical imaging application. When a radiologist or a physician looks at an image, he/she will mentally isolate the structure relevant to the diagnosis. If the structure has to be measured and/or visualized by the computer, the radiologist or physician has to identify such structure on the original images using the software, and this process is called segmentation. For the purpose of this document, segmentation is a process in which the user (radiologist, technician, or physician) identifies which part of one image, or a set of images, belongs to a specific structure.
The scope of this white paper is to describe the tools available in the IAP processing server that can automate and facilitate the segmentation process. They are presented in terms of how they operate and how they can be used in combination with the visualization and measurement functionality. The combination of these functionalities allows the application to build a very effective system from the user's perspective, in which the classification and measurements are carried out with a simple click. This is referred to as Point and Click Classification (PCC).
Several segmentation tools have been published. The majority of them are designed to segment a particular structure in a specific image modality. In the IAP processing server we implemented algorithms which are proven and have large applicability in clinical practice. The tools have been chosen in order to cover a large set of clinical requirements.
Since we recognize that we cannot provide the best solution for all segmentation needs, our architecture is designed to be extensible. If the application requires a specific segmentation algorithm, it is possible to extend the functionality supported by the processing server through a DLL or a shared library.
This white paper assumes that the reader is familiar with the IAP processing server architecture and has minimal experience with the IAP 3D visualization functionality. The reader can also refer to the "IAP PrS Image Processing" White Paper and the "PrS Rendering Architecture" White Paper.

Cedara Software Corp.
Glossary

Volume Rendering: A technique used to visualize three-dimensional sampled data that does not require any geometrical intermediate structures.
Surface Rendering: A technique used to visualize three-dimensional surfaces represented by either polygons or voxels that have been previously extracted from a sampled dataset.
Interpolation: A set of techniques used to generate missing data between known samples.
Voxel: A three-dimensional discrete sample.
Shape Interpolation: An interpolation technique for binary objects that allows users to smoothly connect arbitrary contours.
Multiplanar or Curved Reformatting: Arbitrary cross sections of a three-dimensional sampled dataset.
Binary Object (Bitvol): Structure which stores which voxels, in a slice stack, satisfy a specific property (for example the voxels belonging to an anatomical structure).
ROI: Region of interest; an irregular region which includes only the voxels that have to be processed. It is very often represented as a bitvol.

Segmentation: Process which leads to the identification of a set of voxels in an image, or set of images, which satisfy a specific property.
Segmentation Tools

The segmentation process can vary considerably from application to application. This is usually due to the level of automation and the workflow. This is related to how the application uses the tools rather than the tools themselves. The IAP processing server doesn't force any workflow. A general approach could be to automate the process as much as possible and allow the user to review and correct the segmentation. The goal then is to minimize the user intervention rather than make the segmentation entirely automatic.
Overview

The IAP processing server supports both binary tools, which have been proven through the years as reliable, as well as advanced tools with very sophisticated functionality.
The binary tools operate on a binary object; they do not use the original density from the images. These tools include binary region growing and extrusion.
The advanced tools operate on a gray level image. They typically allow a higher level of automation. These tools are based on gray level region growing.
Figure 1.0 shows a schematic diagram of how these tools operate all together.
Figure 1.0: Taxonomy of the segmentation tools supported by the IAP processing server (shape interpolation, extrusion, thresholding, binary region growing, and gray level region growing).
The scope of each tool in Figure 1.0 is as follows:
• Shape Interpolation: Reconstructs a binary object by interpolating an anisotropic stack of 2D ROIs. This functionality is implemented in the Recon object.
• Extrusion: Generates a binary object by extruding in one direction. This functionality is implemented in the ExtBv object.
• Thresholding: Generates a binary object by selecting all the voxels in the slice stack within a range of densities. This functionality is implemented in the Thr3 object.
• Binary Region Growing: Connectivity is evaluated on the binary image. This functionality is implemented in the Seed3 object.
• Gray Level Region Growing: Connectivity is evaluated on the gray level image, with no thresholding necessary before the segmentation process. This functionality is implemented in the Seg3 object.
The IAP processing server architecture allows these objects to be connected in any possible way. This is a very powerful feature since the segmentation is usually accomplished in several steps. For example, the slice stack can be thresholded and then the structure isolated using a region growing. Figure 1.1 was generated using this technique.


Figure 1.1: Region growing after a normal threshold can be used to isolate an object very efficiently. Image A is the result of the threshold with the bone window in the CT dataset. The bed is removed with a single click of the mouse. The resulting binary object is used as a mask for the Volume Renderer.
Figure 1.2 shows the part of the pipeline which implements the segmentation in Figure 1.1. The Ss object is the input slice stack, and contains the original images. The Thr3 object is the object that performs the thresholding, and the Seed3 object performs the region growing on a point specified by the user.
Ss → Thr3 → Seed3

Figure 1.2: The pipeline used for the generation of the binary object in Figure 1.1.
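The Ss, Thr3, and Seed3 steps can be mimicked in a few lines of Python. This is an illustrative sketch over a dictionary-based slice stack, not the actual prserver objects:

```python
from collections import deque

def threshold(stack, lo, hi):
    """Thr3-like step: binary object of the voxels whose density is in [lo, hi]."""
    return {v for v, d in stack.items() if lo <= d <= hi}

def region_grow(binary, seed):
    """Seed3-like step: the 6-connected component of the binary object
    containing the seed voxel."""
    if seed not in binary:
        return set()
    grown, queue = {seed}, deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for n in ((x+1,y,z), (x-1,y,z), (x,y+1,z), (x,y-1,z), (x,y,z+1), (x,y,z-1)):
            if n in binary and n not in grown:
                grown.add(n)
                queue.append(n)
    return grown

# Toy stack: a "skull" blob at x = 0..1 and a disconnected "bed" blob at x = 5.
stack = {(0,0,0): 200, (1,0,0): 210, (5,0,0): 205, (3,0,0): 40}
bone = threshold(stack, 100, 255)      # both blobs survive the bone window
skull = region_grow(bone, (0,0,0))     # one click keeps only the skull blob
```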
The IAP processing server also supports several features for manipulating binary objects directly:
• Erosion
• Dilation
• Intersection
• Union
• Difference
• Complement

For example, as we'll see in the next section, it is necessary to dilate a binary object before using it as a clipping region for Volume Rendering.
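Since a bitvol is just a set of voxels, these manipulations reduce to set algebra plus morphology. A minimal sketch, where the set-of-tuples representation is an assumption for illustration:

```python
NEIGHBOURS_6 = ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))

def dilate(bitvol):
    """One 6-neighbour dilation step: grow the object by one voxel layer."""
    out = set(bitvol)
    for (x, y, z) in bitvol:
        for dx, dy, dz in NEIGHBOURS_6:
            out.add((x+dx, y+dy, z+dz))
    return out

def erode(bitvol):
    """One 6-neighbour erosion step: keep voxels whose neighbours are all set."""
    return {(x, y, z) for (x, y, z) in bitvol
            if all((x+dx, y+dy, z+dz) in bitvol for dx, dy, dz in NEIGHBOURS_6)}

# Union, difference, intersection and complement are the plain set operations.
a = {(0, 0, 0)}
grown = dilate(a)       # centre voxel plus its 6 face neighbours
shrunk = erode(grown)   # erosion undoes the single dilation here
```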
For example, in the pipeline in Figure 1.3, the binary object has to be dilated before the connection to the Volume Rendering pipeline. This can be accomplished by simply adding a new Bv object at the end of the pipeline, as shown in Figure 1.3.
Figure 1.3: Pipeline in Figure 1.2 with the Bv object added, which will perform a dilation on the result of the region growing.
Binary Tools

The tools presented in this section implement the well-known techniques that have been used for several years in the medical market. Some of them, like Seed3, extend the standard functionality in order to minimize the user intervention.
Thresholding (Thr3)

Thresholding is one of the most basic tools and is often used as a starting point in order to perform more complicated operations, as shown in Figure 1.1. The Thr3 object reconstructs a binary object by selecting all the voxels in a specific density range in the slice stack. If the slice stack is not isotropic, the application can choose to generate the binary object using cubic or nearest neighbor interpolation.
Thr3 also supports an automatic dilation in the X and the Y direction. This is useful in situations, like the one in Figure 1.3, where the binary object has to be used in the Volume Rendering pipeline.
Extrusion (ExtBv)

Extrusion projects a 2D shape along any direction. This feature is very powerful when it is used in conjunction with 3D visualization. In fact, it allows irrelevant structures to be eliminated very quickly and naturally, as shown in Figure 1.4.

Figure 1.4: Extrusion is a mechanism which works very well in conjunction with 3D visualization. The user draws an ROI which defines the region of the dataset in which he is interested. The data outside the region is removed.
Figures 1.4.A and 1.4.B show the application from the user's perspective. The user outlines the relevant anatomy and only that area will be rendered. Figure 1.4.C shows the binary object that has been generated through extrusion. This object has been used to restrict the area for the volume renderer, and so eliminate unwanted structures.
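The extrusion itself is the simplest of the tools: sweep the 2D ROI along an axis. A toy sketch, where the set representation is an assumption:

```python
def extrude(roi, depth):
    """ExtBv-like step: sweep a 2D ROI along Z to form a binary volume."""
    return {(x, y, z) for (x, y) in roi for z in range(depth)}

# A 2x2 ROI drawn by the user, extruded through a 5-slice stack; voxels
# outside this volume would be masked away before volume rendering.
roi = {(0, 0), (0, 1), (1, 0), (1, 1)}
mask = extrude(roi, 5)
```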
Shape Interpolator (Recon)

The shape interpolator reconstructs a 3D binary object from a stack of 2D ROIs. The stack of 2D ROIs can be unequally spaced and present branching, as shown in Figure 1.5. The Recon object supports nearest neighbor and cubic interpolation kernels, which are guaranteed to generate smooth surfaces (see Figure 1.6). This functionality is used when the user manually draws some ROIs on the original slices or retouches the ROIs generated through a threshold.
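One classical way to build such a shape interpolator is shape-based interpolation in the style of Raya and Udupa: convert each ROI to a signed distance field, interpolate the fields, and re-threshold at zero. Whether Recon works this way internally is an assumption; the sketch below reconstructs a middle slice from two square ROIs:

```python
def signed_distance(mask, grid):
    """Chebyshev signed distance to the mask boundary (negative inside)."""
    comp = [p for p in grid if p not in mask]
    dist = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    return {p: (-min(dist(p, q) for q in comp) if p in mask
                else min(dist(p, q) for q in mask)) for p in grid}

def interpolate_slice(roi_a, roi_b, grid):
    """Average the two signed distance fields and re-threshold at zero."""
    da, db = signed_distance(roi_a, grid), signed_distance(roi_b, grid)
    return {p for p in grid if da[p] + db[p] <= 0}

# Two square ROIs on non-adjacent slices; the reconstructed middle slice
# comes out as an intermediate-sized square.
grid = [(x, y) for x in range(-6, 7) for y in range(-6, 7)]
roi_a = {p for p in grid if max(abs(p[0]), abs(p[1])) <= 3}
roi_b = {p for p in grid if max(abs(p[0]), abs(p[1])) <= 1}
middle = interpolate_slice(roi_a, roi_b, grid)
```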

Figure 1.6: The shape interpolation can be accomplished with cubic interpolation (A) or nearest neighbor interpolation (B).
Binary Connectivity (Seed3)

Connectivity is a well-proven technique used to quickly and efficiently isolate a structure in a binary object. The user, or the application, identifies a few points belonging to the structure of interest, called seed points. All the voxels that are connected to the seed points are extracted, typically removing the background. This process is also referred to as "region growing".
Region growing is very often used to 'clean' an object that has been obtained by thresholding. Figure 1.1 shows an example of a CT dataset where the bed has been removed simply by selecting a point on the skull.
Figure 1.5: The shape interpolation process reconstructs a 3D binary object from a set of 2D ROIs, even if they are not equally spaced and have branching.

This functionality is very effective when combined with the Coordinate Query functionality of the prserver volume renderer. Coordinate Query allows the application to identify the 3D location in the stack when the user clicks on the rendered image. By combining these two tools, the entire operation of segmentation and clean up can be done entirely in 3D, as shown in Figure 1.7. See the "PrS 3D Rendering Architecture" White Paper for more details on the Coordinate Query.
Figure 1.7: MR peripheral angio dataset. The Volume Rendering visualization of the vasculature also includes other unrelated structures (A). By just clicking on the vessel, the user can eliminate the irrelevant background structures (B).
The Seed3 object implements a six-neighbor connectivity. It also supports the following advanced functionality in order to facilitate the identification of the structure:
1. Tolerance: when a seed point is not actually on the object, Seed3 will move it to the nearest voxel belonging to the binary object, if the distance is less than the tolerance specified by the application. This functionality allows the application to compensate for rounding errors and imprecision from the user.
2. Filling: This functionality removes all the holes from the segmented object. It is sometimes used for volume measurements.
3. Small links: When a bitvol is generated from a noisy dataset, several parts of it could be connected by narrow structures. The Seed3 object allows the smallest bridge through which the region growing can grow to be specified. This functionality allows the system to be insensitive to noise. Figure 1.8 shows how this feature can extract the brain in an MR dataset.

Figure 1.8: MR dataset of the brain. The seed point is set in the brain. In (A) the region growing fails to extract the brain since there are small connections from the brain to the skin. In (B) the brain is extracted because the small connections are not followed.
4. Disarticulation: This is the separation of different anatomical structures that are connected. The application can specify two sets of seed points, one for the object to be kept and one for the object to be removed. Seed3 will perform erosion until these two sets of points are no longer connected, and then perform a conditional dilation of the same amount as the erosion. This operation is computationally intensive. It works well if the bitvol has a well-defined structure, i.e. the regions to be separated do not have holes inside them, and narrow bridges link them. On the other hand, if the regions are thin and their thickness is comparable to the thickness of the bridges, then the result may not be optimal. Figure 1.9 shows how this feature can be applied to select half of the hip in a CT dataset.
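The tolerance behaviour in point 1 above can be sketched as a nearest-voxel snap. This is an illustrative sketch, not the Seed3 implementation:

```python
def snap_seed(seed, bitvol, tolerance):
    """Seed3-style tolerance: if the seed misses the object, move it to the
    nearest object voxel, but only if that voxel is within the tolerance."""
    if seed in bitvol:
        return seed
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(seed, v)) ** 0.5
    nearest = min(bitvol, key=dist)
    return nearest if dist(nearest) <= tolerance else None

bitvol = {(10, 10, 10), (11, 10, 10)}
snapped = snap_seed((12, 10, 10), bitvol, tolerance=2.0)  # off by one voxel
ignored = snap_seed((30, 30, 30), bitvol, tolerance=2.0)  # too far: rejected
```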
Figure 1.9: In the binary volume in (A) the user sets one seed point to select the part to include (green) and one seed point to select the part to remove (red). The system identifies the part of the two structures with minimal connection and separates the structures there. (B) shows the result.
Gray Level Connectivity

The concept of connectivity introduced for binary images can be extended to gray-level images. The gray level connectivity between two voxels measures the "level of confidence" with which these voxels belong to the same structure. This definition leads to algorithms which enable a more automated method for segmentation and reduce user intervention. Gray level connectivity tends to work very well when the structure under investigation has a narrow range of densities with respect to the entire dynamic range of the images.
The Basic Algorithm

The algorithm takes a slice stack and a set of seed points as input. For each voxel in the stack it calculates the "level of confidence" with which this voxel belongs to the structure identified by the seeds. Voxels far from the seeds or with a different density than the seeds have a low confidence value, whereas voxels close to the seed points and with similar density have high confidence values. Note that no thresholds are required for this process. From the mathematical point of view, the "confidence level" is defined as the connectivity from a specific voxel to a seed point. The definition of connectivity from a voxel to a seed point according to Rosenfeld is:

C(seed, voxel) = Max over all paths P(seed, voxel) of [ Min over z in P of mu(z) ]
where P(seed, voxel) is any possible path from the seed point to the voxel, and mu(.) is a function that assigns a value between 0 and 1 to each element in the stack. In our application mu(.) is defined as follows:

mu(voxel) = 1 - | density(voxel) - density(seed) |
The connectivity is then computed as:

C(seed, voxel) = 1 - Min over all paths P(seed, voxel) of [ Max over z in P of | density(z) - density(seed) | ]

In simple terms, the connectivity of a voxel to a seed point is obtained by:
1. Considering all the paths from the seed point to the voxel.
Cedars Software Corp. Page 12 ~3 IAP PrS Segmentation Architecture 2. Labeling each path with the maximum difference between the seed point's density and the density of each voxel in the path.
3. Selecting the path with the minimum label value.
4. Setting the connectivity as 1.0 minus the label value.
For multiple seeds, the algorithm in step 2 uses the average density of the seed points.
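Steps 1 to 4 need not enumerate paths explicitly: because the path label is a minimax cost, a Dijkstra-style search computes the whole connectivity map efficiently. Below is a sketch over a dictionary stack with densities normalized to [0, 1]; the data representation is an assumption, not the prserver code:

```python
import heapq

def connectivity_map(stack, seeds):
    """Gray level connectivity: for each voxel, 1 minus the minimal (over all
    paths from the seeds) maximum density deviation along the path."""
    ref = sum(stack[s] for s in seeds) / len(seeds)     # average seed density
    cost = {s: abs(stack[s] - ref) for s in seeds}
    heap = [(c, v) for v, c in cost.items()]
    heapq.heapify(heap)
    while heap:
        c, v = heapq.heappop(heap)
        if c > cost.get(v, float("inf")):
            continue                                    # stale heap entry
        x, y, z = v
        for n in ((x+1,y,z), (x-1,y,z), (x,y+1,z), (x,y-1,z), (x,y,z+1), (x,y,z-1)):
            if n in stack:
                nc = max(c, abs(stack[n] - ref))        # minimax path label
                if nc < cost.get(n, float("inf")):
                    cost[n] = nc
                    heapq.heappush(heap, (nc, n))
    return {v: 1.0 - c for v, c in cost.items()}

# A 1D "vessel" whose middle voxel is brighter: every path to the far end
# must cross it, so connectivity drops beyond that voxel.
stack = {(x, 0, 0): d for x, d in enumerate([0.5, 0.5, 0.9, 0.5])}
conn = connectivity_map(stack, seeds=[(0, 0, 0)])
```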
The algorithm computes the connectivity values for each voxel in the stack and produces an output slice stack which is called the "connectivity map". Figure 1.10 shows a 2D example using an MR image and its connectivity map. The area in which the seed has been placed has connectivity values higher than the rest of the image, and so it appears brighter.
Figure 1.10: Image A shows an MR slice and the place where the seed point has been set. Image B shows the connectivity map.
Figure 1.11 shows a 3D example in which the connectivity map is rendered using MIP. In this example the dataset is an MR acquisition of the head, and several seed points have been set in the brain, which appears to be the brightest region.
Figure 1.11: MIP image of a connectivity map. In this example several seed points have been set in the brain of this MR dataset. The MIP image shows that the brain is the brightest area.
The connectivity map is thresholded at different values in order to extract the area of interest as a bitvol. Note that in this case the user needs only to control one threshold value. The connectivity map always has to be thresholded from the highest value (which represents the seed points) down to a lower one defined by the user. By increasing the threshold the user removes irrelevant structures and refines the anatomical structure where the seed has been planted. From the user's perspective this method is quite natural and effective: the user visually browses the possible solutions interactively. Figure 1.12 shows an example of user interaction; the connectivity map shown in Figure 1.11 is thresholded at increasing values until the brain is extracted.
Figure 1.12: The connectivity map shown in Figure 1.11 is thresholded and the binary object is used to extract the anatomical feature from the original dataset. This process is done interactively.
In Figure 1.12 the binary object computed by thresholding the connectivity map is applied as a mask for the volume renderer. As mentioned in the previous section, a small dilation is necessary before the connection to the volume rendering pipeline. Figure 1.13 shows the complete pipeline.
Ss → Seg3 → Thr3 → Bv → Bvf
Ss → Vol → Cproj (with the Bvf output clipping the Vol object)

Figure 1.13: Pipeline used for the generation of the images in Figure 1.12.
The connectivity map is thresholded interactively by changing the threshold of the Thr3 object. The Bvf object is used to avoid the pre-processing in the Vol object. Please refer to the "PrS 3D Rendering Architecture" White Paper for a detailed explanation of the possible usage of the clipping tools in the Volume Rendering pipeline.
Contrast Table

In order to optimize the segmentation in terms of accuracy and speed, the Seg3 object can use a contrast table which enhances the contrast between the anatomical structure under investigation and the rest of the anatomy. The region growing process will operate on the image once it has been remapped with the contrast table. The connectivity will be calculated as follows:
C(seed, voxel) = 1 - Min over all paths P(seed, voxel) of [ Max over z in P of | contrast_table(density(z)) - contrast_table(density(seed)) | ]
The application can take advantage of this functionality in several ways:
• Reducing the noise in the image.
• Increasing the accuracy of the segmentation by eliminating densities that are not part of the anatomy under investigation. For example, in a CTA dataset the liver doesn't contain intensities as high as the bone, hence those densities can be remapped to zero (background) and excluded from the segmentation.
• Limiting user mistakes: if the user sets a seed point in a region which is remapped to a low value (as defined by the application), the seed point will be ignored. For example, if the user, intending to segment the liver in a CTA dataset, sets a seed point on the bone, it will not be considered during the region growing process.
The application is not forced to use the contrast table; when it is not used, the system will operate on the original density values. For example, the brain in Figure 1.12 was extracted without a contrast table.
The application can expose this functionality directly to the user or, if appropriate, use the rendering settings used to view the images:

• In order to segment structures with high density, the window level that the user sets in the 2D view can be used as the contrast table.
• The opacity curve used to render a specific tissue can be used as a remapping table. In order to visualize the tissues properly, the user has to set the opacity to 100% for all the densities in the tissue and to lower values for densities which are partially shared with other tissues. So the opacity curve implicitly maximizes the contrast between the tissue and the rest of the anatomy.
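The contrast-table mechanics can be sketched as a remapping pre-pass, with densities mapped to zero treated as background and seeds falling on background ignored. The names and the bone-suppressing table below are illustrative assumptions, not the Seg3 API:

```python
def apply_contrast_table(stack, table):
    """Remap every density through the contrast table; densities mapped to 0
    become background and are removed from the stack."""
    remapped = {v: table(d) for v, d in stack.items()}
    return {v: d for v, d in remapped.items() if d > 0}

def valid_seeds(stack, seeds):
    """Seeds falling on background (remapped to zero) are ignored."""
    return [s for s in seeds if s in stack]

# Hypothetical CTA-style table: suppress bone-range densities, keep soft tissue.
table = lambda d: 0 if d > 300 else d
stack = {(0,0,0): 80, (1,0,0): 90, (2,0,0): 1200}      # last voxel is "bone"
soft = apply_contrast_table(stack, table)
seeds = valid_seeds(soft, [(0,0,0), (2,0,0)])          # the bone seed is dropped
```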
The Extended Algorithm

The basic algorithm is a very powerful tool, as shown in Figure 1.12. In order to extend its functionality, the Seg3 object implements two extensions:
1. Distance path. In some situations the structure which the user is trying to extract is connected to something else. For example, for the treatment planning of the AVM shown in Figure 1.14, the feeding artery and the draining vein have to be segmented from the nodule of the AVM. The density of these three structures is the same (since it is the same blood which flows in all of them) and they are connected.


Figure 1.14: MR dataset of the head region showing an AVM. Image A is the MIP of the dataset; image B is the binary segmentation of the dataset. Binary region growing is not able to segment the three anatomical structures (veins, artery, AVM) required for the treatment planning.
In order to facilitate the segmentation of the structures, Seg3 can reduce the connectivity value of any voxel proportionally to the distance on the path from the seed point. Seeding the vein, as shown in Figure 1.15, will cause voxels with the same density, but in the nodule of the AVM, to have a lower connectivity value, and hence exclude them. Note that the distance is measured along the vessel, since the densities outside the vessel's range will be remapped to zero by the contrast table and not considered. The user, browsing through the possible solutions, will visually see the segmented area following the vein as shown in Figure 1.15.
Figure 1.15: Vein segmented at different threshold values. By changing the threshold the user can visually follow the vein. In order to generate these images the pipeline in Figure 1.13 has been used. Distance path is usually used in conjunction with the contrast table.
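One way to realize the distance-path behaviour is to add a constant beta to the minimax path label at every step, so connectivity decays with the distance travelled along the vessel. The exact penalty formula is not given in the paper; the per-step term below is an illustrative assumption:

```python
import heapq

def connectivity_with_distance(stack, seed, beta):
    """Distance-path variant: the path label is the minimax density
    deviation, as before, plus beta per step travelled, so connectivity
    falls off with distance from the seed along the vessel."""
    cost = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        c, v = heapq.heappop(heap)
        if c > cost.get(v, float("inf")):
            continue
        x, y, z = v
        for n in ((x+1,y,z), (x-1,y,z), (x,y+1,z), (x,y-1,z), (x,y,z+1), (x,y,z-1)):
            if n in stack:
                nc = max(c, abs(stack[n] - stack[seed])) + beta
                if nc < cost.get(n, float("inf")):
                    cost[n] = nc
                    heapq.heappush(heap, (nc, n))
    return {v: 1.0 - c for v, c in cost.items()}

# A uniform-density vessel: connectivity now decays along its length, so a
# threshold sweep follows the vessel away from the seed step by step.
vessel = {(x, 0, 0): 0.5 for x in range(5)}
conn = connectivity_with_distance(vessel, seed=(0, 0, 0), beta=0.1)
```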
Figure 1.16 shows the example analyzed by Dr. Eldrige. In this case the functionality was used not just for segmentation but for increasing the understanding of the pathology by following several vessels and seeing how they interact.
As we mentioned in the section "Binary Connectivity", disarticulation can address a similar situation. However, disarticulation is mainly designed for bone structures and doesn't allow any level of control. Distance path is instead designed for vessels and allows the user to have fine control on the segmented region.

2. Growth along an axis. In some protocols, like peripheral angiography, the vessel will mainly follow one axis of the dataset. The application can use this information to facilitate the segmentation process and force the region growing process to follow the vessel along the main axis, and so select the main lumen instead of the small branches. Figure 1.17 shows an example of this functionality.
Figure 1.16: The example of Figure 1.14 analyzed by Dr. Eldrige. Dr. Eldrige used the distance path functionality to follow the vessels involved in the aneurysm and analyze their interaction.

Figure 1.17: Segmentation of a vessel in an MR peripheral angiography. Seg3 allows a weight to be associated with each of the axes; each weight represents an incremental reduction of the connectivity value for the region growing to follow that axis.
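The axis weighting can be sketched with a per-step penalty that depends on the axis of the move. The scheme below, with pure per-axis step costs and the density-deviation term omitted for brevity, is an illustrative assumption:

```python
import heapq

def axis_weighted_growth(stack, seed, weights):
    """Growth along an axis: each step is penalised by the weight of the axis
    it moves along; a zero weight leaves the main axis free while positive
    weights progressively reduce connectivity along side branches."""
    steps = [((1, 0, 0), weights[0]), ((-1, 0, 0), weights[0]),
             ((0, 1, 0), weights[1]), ((0, -1, 0), weights[1]),
             ((0, 0, 1), weights[2]), ((0, 0, -1), weights[2])]
    cost, heap = {seed: 0.0}, [(0.0, seed)]
    while heap:
        c, v = heapq.heappop(heap)
        if c > cost.get(v, float("inf")):
            continue
        for (dx, dy, dz), w in steps:
            n = (v[0] + dx, v[1] + dy, v[2] + dz)
            if n in stack:
                nc = c + w
                if nc < cost.get(n, float("inf")):
                    cost[n] = nc
                    heapq.heappush(heap, (nc, n))
    return {v: 1.0 - c for v, c in cost.items()}

# A cross of voxels: free growth along Y, penalised growth along X, so the
# main lumen (Y) keeps high connectivity while the side branch (X) fades.
cross = {(x, 0, 0): 0.5 for x in range(-3, 4)}
cross.update({(0, y, 0): 0.5 for y in range(-3, 4)})
conn = axis_weighted_growth(cross, (0, 0, 0), weights=(0.2, 0.0, 0.0))
```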
Embedding the Knowledge

The benefit of the algorithm goes beyond the fact that the application doesn't have to set an a priori threshold. The application can embed the knowledge of the structure that the user is segmenting in several ways:
1. As presented in the previous section, the contrast table is the simplest and most effective way for the application to guide the region growing.
2. The number of densities expected in the object can be used to guide the region growing. Note that the process requires only the number of densities, not the specific densities included. The threshold in the connectivity map identifies the number of connectivity values included in the solution, and hence the number of densities, as defined by the C(seed, voxel) formula. Note that when the distance path or the growth along an axis is used, voxels with the same contrast value can have different connectivity according to their distance to the seed point.
3. The volume size (as number of voxels) of the object can be used to guide the region growing. The volume size of the object can be simply measured by querying the histogram of the connectivity map and adding all the values from the threshold to the maximum value.
4. The relative position of the seed points can be used to guide the application in forcing the region growing process to follow a particular axis. For example, in the dataset in figure 1.17 the user will set several seed points along the Y axis. By evaluating the displacement of the points in the ZX plane, the application can estimate how closely the vessel follows the Y axis and hence how tightly the region growing has to be bound to it.
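The volume query in point 3 can be sketched in a few lines. This is a hypothetical NumPy fragment; the integer-valued connectivity map and the threshold convention are assumptions:

```python
import numpy as np

def volume_above_threshold(connectivity_map, threshold):
    """Estimate object volume (in voxels) for a given connectivity threshold.

    Equivalent to querying the connectivity-map histogram and summing all
    bins from `threshold` up to the maximum connectivity value.
    """
    hist = np.bincount(connectivity_map.ravel())
    return int(hist[threshold:].sum())
```

The application can evaluate this for every candidate threshold and stop the region growing once a conservative volume bound is exceeded.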
The information gathered with the previous 4 points can be used by the application for two different purposes:
Optimize for performance. The time required by the Seg3 object is proportional to the number of voxels selected. Avoiding the inclusion of unwanted structures will speed up the process. For example, in the protocol used for the dataset in figure 1.12 the volume of the brain cannot be more than 30% of the entire volume, since the whole head is in the field of view. So the first two solutions might not even be included in the connectivity map, since the region growing would have been stopped earlier.
Identify the best solution, the one that is most likely what the user is looking for. This solution can be proposed as the default.
More specifically, the previous information can be used for these two purposes in the following way:
Contrast Table:
- Optimize for performance: Setting to zero the densities which are guaranteed not to belong to the object to segment will improve performance.
- Identify best solution: It also improves the quality of the segmentation by reducing the number of possible solutions.

Number of Densities:
- Optimize for performance: The Seg3 object accepts as an input the number of densities to include in the solution. It will select the closest densities after the contrast table has been applied; once these densities have been included, the process will stop.
- Identify best solution: The threshold set in the connectivity map is actually the number of densities to be included in the solution after the contrast table has been applied.

Volume Size:
- Optimize for performance: The Seg3 object accepts as an input the number of voxels to include.
- Identify best solution: This value can be used to limit the threshold in the connectivity map. Querying the histogram of the connectivity map, the application can estimate the volume of the object segmented for each threshold.

Relative position of the seed points:
- Optimize for performance: Constraining the region growing process will reduce the number of voxels to include. Using this functionality will increase the cost associated with the evaluation of each voxel.
- Identify best solution: Not applicable.
The application is expected to use conservative values for the volume size and number of densities to stop the region growing process, and realistic values to select the default solution.
The ability to embed the knowledge of the object under investigation makes gray level region growing well suited for being protocol-driven.
For each protocol the application can define a group of presets which target the relevant anatomical structures.
Binary versus Gray Level Region Growing

The basic algorithm as presented in the previous section can be proven to be equivalent to a binary region growing where the thresholds are known in advance. So this process does not have to be validated for accuracy, since binary region growing is already in use today.
Avoiding a priori knowledge of the threshold values has major advantages for the application:
1. The number of solutions that the user has to review is limited and pre-calculated. Requiring the user to set only one value for the selection of the solution means that the user has to evaluate (at worst) 256 solutions for an 8-bit dataset, while with binary region growing the user would have to evaluate 256 x 256 = 65536 solutions, since all the combinations of the low and high thresholds potentially have to be analyzed.
2. Finding the best threshold is not natural from the user's perspective.
Figure 1.18.B shows a CTA of the abdominal region in which the Aorta has been segmented. To obtain this result the user seeded the Aorta with the settings shown in figure 1.18.A.
Figure 1.18: Image A shows the Aorta extracted from the dataset shown in image B. In this case only one seed point was used.
In order to obtain the same result with binary region growing, the user has to manually identify the best thresholds for the Aorta, which are shown in figure 1.19, and then seed it. Looking at figure 1.19.A it is not clear that these are the best settings for the segmentation, so they can easily be overlooked.

Figure 1.19: Threshold settings necessary to extract the Aorta as in figure 1.18. Image A appears with several holes and it is not clear whether the Aorta is still connected to the bone.
3. The threshold set by the user can be dictated by the volume of the object rather than the density values.
Our experience shows that the quality of the result achievable with this functionality is not achievable with normal thresholding. Even in situations in which the thresholds are known in advance, it is preferable to use this information as a contrast table and avoid binary segmentation.
Advanced Usage

Gray level region growing reduces the time needed to perform the segmentation from the user's perspective. It guides the user in the selection of the best threshold for the structure under investigation.
In the previous section we have been using the connectivity map for the extraction of the binary object using the pipeline in figure 1.13. In some situations it can be valuable to use the connectivity map to enhance the visualization. The connectivity map tends to have bright and uniform values in the seeded structure and darker values elsewhere. This characteristic can be exploited in the MIP visualization to enhance the vessels in MRA and CTA datasets. Figure 1.20 shows an example of this application: image 1.20.A is the MIP of an MRA dataset, image 1.20.B is the MIP of the connectivity map. Figure 1.21 shows the MIP of the dataset in which the connectivity map and the original dataset have been averaged, compared with the MIP of the original dataset in the upper left corner. In this case it is visible that the contribution of the connectivity map helps to suppress the background values, enhancing the vessels.
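The averaging technique described above can be sketched as follows; a hypothetical NumPy illustration (the function names and the 50/50 blend are assumptions, not the IAP pipeline):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection along the viewing axis."""
    return volume.max(axis=axis)

def enhanced_mip(volume, connectivity_map, axis=0):
    """MIP of the voxel-wise average of the dataset and its connectivity map.

    The connectivity map is bright and uniform inside the seeded vessel
    and dark elsewhere, so averaging suppresses the background while
    preserving the vessel detail carried by the original densities.
    """
    blended = (volume.astype(np.float32) +
               connectivity_map.astype(np.float32)) / 2.0
    return mip(blended, axis=axis)
```

In the blended projection the vessel-to-background ratio is much higher than in the plain MIP, which is the suppression effect visible in figure 1.21.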
Figure 1.20: Image A is the MIP of an MRA peripheral dataset. Image B shows the connectivity map of the same region when the main vessel has been seeded.

Figure 1.21: MIP of the dataset obtained by averaging the connectivity map and the original dataset shown in figure 1.20. In the upper left corner the MIP of the original dataset is superimposed. It is visible that the averaging helps in suppressing the background density while preserving the details of the vessel.
In some situations it is not necessary to have the user set the seeds directly for the identification of the anatomical structures. The seed points can be identified as a result of a threshold on the images under investigation. Note the threshold is necessary to identify only some points in the structure, not the entire structure, so the application can use very conservative values. For example, in a CTA a very high threshold can be used to generate some seed points on the bone. In an MRA a very high threshold can be used to generate seed points on the vessels. The seed points in figures 1.20 and 1.21 were generated using this method. Figure 1.22 shows an example of this technique.
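A sketch of this automatic seeding, assuming a simple NumPy representation (the function name and the return convention are hypothetical):

```python
import numpy as np

def seeds_from_threshold(volume, conservative_threshold):
    """Generate a seed bitvol by thresholding (illustrative sketch).

    A deliberately high threshold marks only a few voxels that are
    certainly inside the target structure (e.g. bone in CTA, vessels in
    MRA); the region growing then recovers the full structure from them.
    """
    bitvol = volume >= conservative_threshold
    return bitvol, np.argwhere(bitvol)  # mask plus explicit seed coordinates
```

The boolean mask plays the role of the bitvol that, as noted below, Seg3 is designed to accept as its seed input.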

Figure 1.22: Image A shows the seed points generated in a CTA dataset of the neck region for the identification of the bone. Image B shows the bone segmented from the same dataset. In this example the user intervention is minimized to only the selection of the best threshold.
Seg3 supports this technique since it is designed to accept a bitvol as an input for the identification of the seed points. The bitvol can be generated by a threshold, and edited automatically or manually.

Visualization and Segmentation

Visualization is the process that usually follows segmentation. It is used for the validation of the segmentation as well as for correction through the tools presented in the previous section.
The IAP processing server supports two rendering engines that can be used for this task: a very sophisticated Volume Renderer and a Solid Renderer. Please refer to the "PrS 3D Rendering Architecture" White Paper for a detailed description of these two engines. This section will focus mainly on how these engines deal with binary objects.
Volume Rendering

Volume Rendering allows the direct visualization of the densities in a dataset, using opacity and colour tables, which are usually referred to as a classification.
The IAP processing server extends the definition of classification. It allows defining several regions (obtained by segmentation) and associating a colour and opacity with each one of them. We refer to this as local classification, since it allows specifying colour and opacity based on a voxel's density and location. It is described in detail in the "Local Classification" section of the "PrS 3D Rendering Architecture" White Paper.
Local classification is necessary in situations in which several anatomical structures share the same densities, a situation extremely common in medical imaging. For example, in figure 1.18 the density of the Aorta appears also in the bone due to the partial volume artifact.
So the goal of an application using Volume Rendering as its primary rendering tool is to classify the dataset, not necessarily to segment the data. The goal is to allow the user to point at an anatomical structure in the 3D image and have the system classify it properly. This is the target of the "Point and Click Classification (PCC)" developed in Cedara's applications, which is based on the tools and techniques presented in this white paper.
As we will describe in this section, segmentation and classification are tightly coupled.
Segmentation as a Mask

The IAP volume renderer uses a binary object to define the area for each classification. When an application classifies an anatomy that shares densities with other anatomical structures, it has to remove the ambiguities on the shared densities by defining a region (binary object) which includes only the densities of the anatomy.
The binary object has to loosely contain all the relevant (visible) densities of the anatomy; it does not have to define the precise boundary of the anatomy. It is supposed to mask out the shared densities which do not belong to the anatomy. The opacity function allows the Volume Renderer to visualize the fine details in the dataset. The section "Local Classification" of the "PrS 3D Rendering Architecture" White Paper describes this concept as well.
Figure 2.0 shows an example of this important concept. 2.0.A is the rendering of the entire dataset, where the densities of the brain are shared with the skin and other structures. Although the dataset in 2.0.A has a very detailed brain, it is not visible since the skin is on top of it. Using shape interpolation the user can define the mask 2.0.B, which loosely contains the brain; this mask removes the ambiguity of which densities belong to the brain and allows the proper visualization of the brain, 2.0.C.
Figure 2.0: MR dataset in which the brain has been extracted. The dataset A has been masked with the binary object B to obtain the visualization of the brain C. In this case Shape Interpolation was used to generate the mask.
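The mask-plus-opacity idea can be sketched as follows. This is a hypothetical NumPy fragment, not the IAP local-classification API; the lookup-table representation of the opacity function is an assumption:

```python
import numpy as np

def apply_mask_classification(volume, mask, opacity_lut):
    """Opacity assignment with a loose binary mask (illustrative sketch).

    Inside the mask the per-density opacity LUT is applied, so the fine
    detail of the anatomy survives; outside the mask every voxel is made
    fully transparent, which removes the structures that share the same
    densities.
    """
    opacity = opacity_lut[volume]   # density -> opacity lookup
    opacity[~mask] = 0.0            # mask out everything else
    return opacity
```

Because the mask only needs to separate the anatomy from the structures sharing its densities, it can be loose: the opacity LUT, not the mask boundary, carries the fine detail.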

The benefits of this approach are twofold:
1. The definition of the mask is typically time-effective; even in the case of figure 2.0, where the mask is created manually, it takes a trained user about 1-2 minutes.
2. The mask does not have to precisely define the object; small errors or imprecision are tolerated. In figure 2.0 the user does not have to outline the brain in fine detail, but rather to outline where it is present on a few slices.
Opacity and Segmentation

The segmentation settings which are used for the definition of the binary object (mask) are related to the opacity used for the visualization. For example, in figure 2.0, if the user lowers the opacity enough, the mask 2.0.B will be visualized instead of the brain 2.0.C.
This happens when the segmentation is based on some criteria and the visualization on different ones.
Figure 2.1 shows an example in which the user clicks on the skull in order to separate it from the bed in the background. The application uses a region growing based on the dataset thresholded by the opacity, and the result is shown in figure 2.1.B (the pipeline in figure 1.3 was used). The mask generated contains the skull of the dataset, only the bone densities connected to the seed point. If the user lowers the opacity he will eventually see the mask itself, 2.1.C.
Figure 2.1: In order to identify the object selected by the user, the application can threshold the dataset based on the opacity and apply a binary region growing, as shown in image B. This method will generate a mask C which is dependent on the opacity used for the threshold.
In general, if a mask has been generated using a threshold (t1, t2), it can be used with a classification in which densities outside (t1, t2) have to be set to zero.
There is a very simple method to get around this limitation if necessary.
The mask can be regenerated each time the opacity is changed. This gives a behavior that is more natural from the user's perspective, as shown in figure 2.2.
Figure 2.2: The opacity mask can be recomputed at each opacity change. Image A shows the original opacity settings; image B shows the result of the thresholding and seeding of the bone structure. Once the opacity is lowered from B the mask is recomputed, and the result is shown in image C.
The pipeline used in 2.2 is shown in figure 2.3. The difference from the pipeline used in 1.3 is that the connection between the pvx opacity object and the Thr3 object is kept after the mask is generated.
[Pipeline diagram: Thr3, Seed3, Bv, Bvf, Ss, Vol and Cproj objects connected]
Figure 2.3: Keeping the connection of the Thr3 object with the opacity pvx will allow regeneration of the mask on the fly.
Pipeline 2.3 will resolve the limitation imposed by the threshold used for the region growing, but it will trigger a possibly expensive computation for each opacity change. It will hence limit the interactivity of this operation. The performance of rotation and colour changes will remain unchanged.
Normal Replacement

As explained in the "PrS 3D Rendering Architecture" White Paper, the shading is computed using the normal for each voxel. The normal is approximated using the central difference or Sobel operators, which utilize the neighboring densities of the voxel. Once the dataset is clipped with a binary object, the neighborhood of the voxels on the surface changes, since the voxels outside the binary object are not utilized during the projection. So the normals of these voxels have to be recomputed.
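The central-difference approximation mentioned above can be sketched for a single voxel (a minimal illustration; boundary handling and the Sobel variant are omitted):

```python
import numpy as np

def central_difference_normal(volume, z, y, x):
    """Approximate the surface normal at a voxel by central differences.

    The density gradient points across the surface; its normalized value
    is commonly used as the shading normal.
    """
    g = np.array([
        float(volume[z + 1, y, x]) - float(volume[z - 1, y, x]),
        float(volume[z, y + 1, x]) - float(volume[z, y - 1, x]),
        float(volume[z, y, x + 1]) - float(volume[z, y, x - 1]),
    ])
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g  # zero in homogeneous regions
```

In a homogeneous region the gradient is zero, which is exactly why a cut surface looks uneven there and the normal has to be replaced with the binary object's normal.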
Figure 2.4 shows why this operation is necessary. When the bitvol clips in a homogeneous region, as in 2.4.A, the normals have several directions, so the cut surface will look uneven with several dark points. Replacing the normal will make the surface look flat, as expected by the user.


Figure 2.4: Image A shows the cut surface if the normal replacement does not take place. The surface looks uneven with some dark points since the normal can be zero in the homogeneous region. When the normal is replaced with the binary object's normal, Image B, the surface looks flat as expected by the user.
The replacement of the normal is necessary when the application uses the binary object to cut the anatomy, as in figure 2.4, not when it uses it to extract the anatomy, as in figure 2.0 or 2.1.
Since the same binary object can be used for both purposes at the same time, the IAP renderer will replace the normal all the time. In situations in which the binary object is used to extract some feature, the application can simply dilate the binary mask to avoid the effect of the normal replacement. In this situation the mask is based on a threshold or gray level region growing, and the dilation will guarantee that the boundary of the mask lies on transparent voxels and is hence invisible. A dilation of 2 or 3 voxels is suggested.
Note that in the situation in which a binary object is used for extracting the anatomy and cutting at the same time, it is generated as an intersection of binary objects. The binary object used for the extraction has to be dilated before the intersection. In this way the quality of the final rendering is guaranteed while allowing the user to edit the object. Figure 2.5 shows an example of this scenario.
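This dilate-then-intersect rule can be sketched with a NumPy-only 6-connected dilation (an illustrative stand-in for the IAP bitvol operations; note `np.roll` wraps at the array border, so the object is assumed not to touch it):

```python
import numpy as np

def dilate(bitvol, iterations=1):
    """6-connected binary dilation (NumPy-only, illustrative)."""
    out = bitvol.copy()
    for _ in range(iterations):
        grown = out.copy()
        for axis in range(out.ndim):
            grown |= np.roll(out, 1, axis)
            grown |= np.roll(out, -1, axis)
        out = grown
    return out

def combine_extraction_and_cut(extraction_bitvol, cut_bitvol, dilation=2):
    """Combine an extraction mask with a cut region (illustrative sketch).

    The extraction bitvol is dilated first so its boundary falls on
    transparent voxels (hiding the normal replacement); the cut bitvol is
    intersected undilated so the cut stays exactly where the user put it.
    """
    return dilate(extraction_bitvol, dilation) & cut_bitvol
```

Dilating only the extraction mask preserves image quality, while the undilated cut keeps the user-defined cutting area exact, as in figure 2.5.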

Figure 2.5: Image A shows the skull which has been extracted as described in 2.1. This object is cut using extrusion, as shown in 1.4. The bitvol for the extraction is dilated, while the extruded bitvol is not. Image B shows the final binary object superimposed on image A. The cutting area is exactly as defined by the user, while the mask is dilated to guarantee the image quality.
For unshaded volume rendering the dilation is not necessary and it will not affect the final quality.
Multi Classifications

As presented in the "PrS 3D Rendering Architecture" White Paper, the IAP processing server allows the definition of several classifications in the same dataset, each one associated with its own binary object.
The binary objects can be overlapping, so several classifications can be defined in the same location. This represents the fact that in the dataset one voxel can contain several anatomical structures; it is in the nature of the data. This is also the same reason for which these structures share densities. The next section will analyze this situation in detail, since it is relevant for the measurements.
The IAP processing server supports two policies for the overlapping classifications, and the application can extend these with a customized extension.
Surface Rendering

The Surface Rendering engine renders the binary object directly. The IAP processing server supports the visualization of several objects, each one with its own rotation matrix. Other features include texture mapping of the original gray levels, and depth shading. Please refer to the "PrS 3D Rendering Architecture" White Paper for a full description of the functionality.
Measurements and Segmentation

One of the most important functions in any medical imaging application is to quantify the abnormality of the anatomy. This information is used to decide on the treatment to apply and to quantify the progress of the treatment delivered.
The IAP processing server supports a large number of 2D and 3D
measurements. In this white paper we'll describe how the measurements can be used in conjunction with segmentation.
The measurement model has to follow the visualization model in order to be consistent with it and measure what the user is actually seeing, binary measurements for Surface Rendering or gray level measurements for Volume Rendering.
In this white paper we will focus on surface and volume measurements since they are the most commonly used. For a complete list of supported measurements please refer to the Meas3 man page.
Definition of the Measurements

During the sampling process the object is not necessarily aligned with the main axes. This causes some voxels to be partially occluded; in other terms, the volume sampled by such a voxel is only partially occupied by the object being sampled. This is known as the "Partial Volume Artifact", and it causes the object to span across voxels with several different densities.
Figure 3.0 shows this situation graphically. Object A is next to object B. dA is the density of a voxel 100% filled with object A, which we will also consider homogeneous. dB is the density of a voxel completely filled with object B. We will also assume in this example that dA < dB and that the density of the background is zero.
Figure 3.0: In this example there are two objects, A and B, which have been sampled along the grid shown in the picture.
The voxels included by object A can be divided into three different regions:
- Red Region: Voxels completely covered by object A. They have density dA.

- Yellow Region: Voxels partially covered by object A and the background.

- Blue Region: Voxels covered by both objects A and B.
Since in the scanning process the density of a voxel is the average density of the materials in the volume covered by the voxel, we can conclude that the yellow densities have values less than dA, and the blue range greater than dA. In the picture the green area is also highlighted, consisting of voxels partially occluded by material B. The range of densities of the green voxels overlaps with the red, yellow and blue ones. Graphically, the distribution of the densities is shown in figure 2.1.
hence they can have ,the full range of densities.
Depending on the model used by the application (binary or gray level) the volume and the surface of the object A can be computed according to the following rules:
Gray Level Binary For each voxel The number of is the voxel Volume ~taset the volumeof the binary object covered by the representing object the object A has to be added.A are counted.

Surface Volume Rendering The vosel at the doesn't define bound of the an bin Cedara Software Corp. Page 33 IAP PrS Segmentation Architecture surface for the object, object representing so this measurement is voxel A are counted.
not applicable.
Gray Level

Let us assume that with the previous segmentation tools we can identify all the voxels in which object A lies, even partially. Figure 2.2 shows the object and also the histogram of the voxels in this area.
Figure 2.2: The outlined area represents the voxels identified by the segmentation process. The histogram of the voxels in this area is shown on the left. The difference between this histogram and the one in figure 2.1 is that the voxels in the green region are not present.
The difference between the histograms in figures 2.2 and 2.1 is that the voxels of the green area are removed. As described in the section "Segmentation as a Mask", this corresponds to removing the densities which are shared across the objects and do not belong to the object segmented.
Note that inside the mask, the density of each voxel represents the occlusion (percentage) of object A in the volume of the voxel, regardless of its location.
To correctly visualize and measure the volume of object A we have to define, for each density, the percentage of the volume occupied. Since the density is the average of the objects present in the voxels, the estimate is straightforward:

1. For the voxels in the yellow region the occlusion is simply density/dA, since the density of the background is zero.

2. For the voxels in the blue region the occlusion for a given density is (dB - density)/(dB - dA).
Setting the opacity according to these criteria guarantees good quality of the volume rendered image. This opacity curve is shown in figure 2.3.
Figure 2.3: Opacity curve for the voxels in the segmented region. The opacity for each density represents the amount of object A in the region covered by the voxel.
So in order to measure the volume, the application can request the histogram of the density values in the segmented area and weight each density with the opacity:
(*)  volume = sum over densities d of histogram(d) * opacity(d)

As explained in the sections "Opacity and Segmentation" and "Normal Replacement", the application should dilate the bitvol if it is generated through a threshold and region growing. This operation will include semi-transparent voxels, so it will not affect the measurement of the volume as defined in (*).

It is not really possible to measure the surface of the object, since its borders are not known. However, looking at picture 2.2, it is clear that it can be estimated with the yellow and blue areas, since these are the voxels in which the border of the object lies.

The reader can find more information and a more detailed mathematical description in "Volume Rendering", R. A. Drebin et al., Computer Graphics, August 1988.
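Formula (*) together with the yellow/blue occlusion rules can be sketched directly; a hypothetical NumPy fragment where dA and dB are the pure densities of objects A and B:

```python
import numpy as np

def partial_volume_opacity(densities, d_a, d_b):
    """Piecewise opacity encoding the fraction of object A per density."""
    d = densities.astype(np.float64)
    op = np.where(d <= d_a,
                  d / d_a,                    # yellow: A mixed with background
                  (d_b - d) / (d_b - d_a))    # blue: A mixed with object B
    return np.clip(op, 0.0, 1.0)

def object_volume(masked_voxels, d_a, d_b):
    """Volume of object A inside the mask: sum of histogram(d) * opacity(d)."""
    densities, counts = np.unique(masked_voxels, return_counts=True)
    return float(np.sum(counts * partial_volume_opacity(densities, d_a, d_b)))
```

Because the dilated mask only adds semi-transparent voxels, their opacity weight is near zero and the volume estimate is unaffected, as noted above.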
Binary

To segment an object, the application typically sets two thresholds; depending on those, some part of the yellow and blue regions can be included in or excluded from the segmentation. Figure 2.4 shows an example of this situation. The area of the histogram between the two thresholds is the volume of the object created.
Figure 2.4: When the application sets two thresholds, it will include some voxels in the blue and yellow regions. The picture on the left shows the voxels included in the segmentation, while the figure on the right shows the histogram and the two thresholds; the area covered by the histogram between the two thresholds (in green) represents the volume of the segmented object.
The surface of the object can be simply estimated as the number of voxels on the boundary of the binary object.
IAP Object Model

The example described in the previous section explains how the application can measure volume and surface on binary and gray level objects. In the real-case scenario there are several objects involved and overlapping, and they usually do not have constant densities. However, the same principle is still applicable to obtain an estimate of the measurements.
The IAP processing server, with the Meas3 object, supports all the functionality required to perform the measurements described in this section. Meas3 computes the histogram, average value, max, min and standard deviation of the stack. If a binary object is connected to it, the measurements will be limited to the area included.
Meas3 also computes the volume and surface of binary objects. It supports two different policies for the surface measurements:

1. Edges: measure the perimeters of each plane in the binary object.

2. Coin: count the voxels in the bitvol which have at least one neighbour not belonging to the bitvol (i.e. the voxels on the surface of the bitvol).
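The "Coin" policy can be sketched with NumPy (an illustration of the counting rule, not the Meas3 implementation):

```python
import numpy as np

def surface_voxels_coin(bitvol):
    """Count surface voxels in the spirit of the "Coin" policy.

    A voxel is on the surface if it belongs to the bitvol and at least
    one of its 6 face-neighbours does not; voxels on the array border
    count as surface, since they face the outside.
    """
    padded = np.pad(bitvol, 1, mode="constant", constant_values=False)
    interior = padded.copy()
    for axis in range(padded.ndim):
        interior &= np.roll(padded, 1, axis) & np.roll(padded, -1, axis)
    # Surface = member voxels that are not fully surrounded.
    return int((padded & ~interior).sum())
```

For a solid 3x3x3 cube every voxel except the center touches the outside, so the policy reports 26 surface voxels.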
Additional Description B
"PrS 3D Rendering Architecture - Part 1"

PrS 3D Rendering Architecture White Paper

Introduction

Imaging Application Platform (IAP) is a well-established platform product specifically targeted at medical imaging. Its goal is to accelerate the development of applications for medical and biological applications. IAP has been used since 1991 to build applications ranging from review stations to advanced post-processing workstations. IAP supports a wide set of functionality including database, hardcopy, DICOM services, image processing, and reconstruction. The design is based on a client/server architecture and each class of functionality is implemented in a separate server. All the servers are based on a data reference pipeline model; an application instantiates objects and connects them together to obtain a live entity that responds to direct or indirect stimuli. This paper will focus on the image processing server (further referred to as the processing server or PrS) and, in particular, on the three-dimensional (3D) rendering architecture.
Two different rendering engines are at the core of 3D in IAP: a well-proven and established solid renderer, and an advanced volume renderer, called Multi Mode Renderer (MMR). These two renderers together support a very large set of clinical applications. The integration between these two technologies is very tight. Data structures can be exchanged between the two renderers, making it possible to share functionality such as reconstruction, visualization and measurement. A set of tools is provided that allows interactive manipulation of volumes, variation of opacity and color, cut surfaces, camera, light positions and shading parameters. The motivation for this architecture is that our clinical exposure has led to the observation that there are many different rendering techniques available, each of which is optimal for a different visualization task.
It is rare that a clinical task does not benefit from combining several of these rendering techniques in a specific protocol. IAP also extends the benefits of the 3D architecture with its infrastructure by making additional functionality available to the MMR: extremely flexible adaptation to various memory scenarios, support for multi-threading, and asynchronous behavior.
All this comes together in a state-of-the-art platform product that can handle not only best-case trade-show demonstrations but also real-life clinical scenarios easily and efficiently. The Cedara 3D platform technology is powerful, robust, and well-balanced, and it has no rivals on the market in terms of usage in the field, clinical insight, and breadth of functionality.
Glossary

Volume Rendering: A technique used to visualize 3D sampled data that does not require any intermediate geometrical structure.
Surface Rendering: A technique used to visualize 3D surfaces, represented by either polygons or voxels, that have been previously extracted from a sampled dataset.

Interpolation: A set of techniques used to generate missing data between known samples.

Voxel: A 3D discrete sample.

Shape Interpolation: An interpolation technique for binary objects that allows users to smoothly connect arbitrary contours.

Multiplanar or Curved Reformatting: Arbitrary cross-sections of a 3D sampled dataset.

MIP: Maximum Intensity Projection. A visualization technique that projects the maximum value along the viewing direction. Typically used for visualizing angiographic data.

ROI: Region of Interest. An irregular region which includes only the voxels that have to be processed.

Application Scenarios

Binary and Gray Level Functionality

For many years a controversy has raged about which technology is better for the visualization of biological data: volume rendering or surface rendering. Each technique has advantages and drawbacks. Which technique is best for a specific rendering task is a choice that is deliberately deferred to application designers and clinicians. In fact, the design of the processing server makes it possible to combine the best of both techniques.
The processing server provides a unifying framework where data structures used by these two technologies can be easily shared and exchanged. In fact, IAP is designed for visual data processing where the primary sources of data are sampled image datasets. For this reason, all our data structures are voxel-based. The two most important are binary solids and gray level slice stacks. Several objects in our visualization pipelines accept both types of data, providing a high level of flexibility and interoperability. For example, a binary solid can be directly visualized or used as a mask for the volume rendering of a gray level slice stack. Conversely, a gray level slice stack can be rendered directly and also used to texture map the rendering of a solid object or to provide gradient information for gray level gradient shading.
A gray level slice stack is generally composed of a set of parallel cross-sectional images acquired by a scanner, and the slices can be arbitrarily spaced and offset relative to each other. Several pixel types are supported, from 8-bit unsigned to 16-bit signed with a floating point scaling factor. Planar, arbitrary, and curved reformats (with user-defined thickness) are available for slice stacks and binary solids. Slice stacks can also be interpolated to generate isotropic volumes or resampled at a different resolution. They can be volume rendered with a variety of compositing modes, such as Maximum Intensity Projection (MIP), Shaded Volume Rendering, and Unshaded Volume Rendering. Multiple stacks can be registered and rendered together, each with a different compositing mode.
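The interpolation of a slice stack to an isotropic volume can be sketched in a few lines. The following is a minimal pure-Python illustration of linear interpolation along the slice axis, not the IAP Sinterp API; the function and parameter names are our own:

```python
def interpolate_isotropic(slices, slice_spacing, pixel_spacing):
    """Linearly interpolate a stack of parallel slices along z so that
    the output slice spacing matches the in-plane pixel spacing.
    slices: list of 2D images (lists of rows)."""
    n_out = int(round((len(slices) - 1) * slice_spacing / pixel_spacing)) + 1
    out = []
    for k in range(n_out):
        z = k * pixel_spacing / slice_spacing   # position in slice units
        i = min(int(z), len(slices) - 2)        # lower neighboring slice
        t = z - i                               # fraction between slice i and i+1
        a, b = slices[i], slices[i + 1]
        out.append([[(1 - t) * a[r][c] + t * b[r][c]
                     for c in range(len(a[0]))] for r in range(len(a))])
    return out
```

With two single-pixel slices 2 mm apart and 1 mm pixels, the output contains three slices, the middle one halfway between its neighbors.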
To generate binary solids, a wide range of segmentation operations are provided: simple thresholding, geometrical ROIs, seeding, morphological operations, etc. Several of these operations are available in both 2D and 3D.
Binary solids can be reconstructed from a slice stack, and shape interpolation can be applied during the reconstruction phase. Once binary solids have been reconstructed, logic operations (e.g., intersection) are available together with disarticulation and cleanup functionality. Texture mapping and arbitrary visibility filters are also available for binary solids.
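The logic operations on binary solids reduce to set operations once a solid is modeled as a set of voxel coordinates. The following sketch illustrates the concept only; it is not the IAP binary-solid data structure, and all names are ours:

```python
# A binary solid modeled as a set of (x, y, z) voxel coordinates.

def union(a, b):
    return a | b

def intersection(a, b):
    return a & b

def complement(solid, grid):
    """Complement relative to the full voxel grid."""
    return grid - solid

def dilate(solid, steps=1):
    """6-connected dilation: grow the solid by one voxel shell per step."""
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    out = set(solid)
    for _ in range(steps):
        out |= {(x + dx, y + dy, z + dz)
                for (x, y, z) in out for (dx, dy, dz) in neighbors}
    return out
```

A single voxel dilated once becomes a 7-voxel cross, which is the behavior clinicians rely on when cleaning up a segmentation.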
Clinical Scenarios

The following list is not meant to be exhaustive, but it does capture the most significant clinical requirements for a complete rendering engine. We want to underline the requirements of a real-life 3D rendering system for medical imaging because day-to-day clinical situations are normally quite different from the best-case approach normally shown in demonstrations at trade shows.
The Cedara platform solution has no rivals in the market in terms of usage in the field, clinical insight and breadth of functionality.
Tumors

Tumors disturb the surrounding vascular structure and, in some cases, do not have well-defined boundaries. Consequently, a set of rendering options needs to be available for them which can be mixed in a single image, including surface rendering (allowing volume calculations and providing semi-transparent surfaces at various levels of translucency, representation of segmentation confidence, etc.), volume rendering, and vasculature rendering techniques such as MIP. An image in which multiple rendering modes are used should still allow for the full range of measurement capabilities. For example, it should be possible to make the skin and skull translucent to allow a tumor to be seen, while still allowing measurements to be made along the skin surface.

Figure 1 - Volume rendered tumors.
In (a), the skin is semi-transparent and allows the user to see the tumor and its relationship with the vasculature. In (b), the location of the tumor is shown relative to the brain and vasculature.
Display of Correlated Data from Different Modalities

There is a need to display and explore data from multiple acquisitions in a single image (also called multi-channel data). Some of the possibilities include: pre- and post-operative data, metabolic PET data with higher resolution MR anatomic information, pre- and post-contrast studies, MR and MRA, etc.
The renderer is able to fuse them during the ray traversal, with a real 3D fusion, not just a 2D overlay of the images.
Figure 2 - Renderings of different modalities.
In (a), the rendering of an ultrasound kidney dataset. The red shows the visualization of the power mode; the gray shows the visualization of the B mode. Data provided by Sonoline Elegra. In (b), the volume rendering of a liver study. The dataset was acquired with a Hawkeye scanner. The low resolution CT shows the anatomy while the SPECT data highlights the hot spots. Data provided by Rambam Hospital, Israel. In (c), the MRA provides the details of the vasculature in red, while the MRI provides the details of the anatomy. (In this image, only the brain has been rendered.)
Dental Package

A dental package requires orthogonal, oblique, and curved reformatting plus accurate measurement capabilities. In addition, our experience suggests that 3D rendering should be part of a dental package.
In 3D, surface rendering with cut surfaces corresponding to the orthogonal, oblique, and curved dental reformats is required. In addition, the ability to label the surface of a 3D object with lines corresponding to the intersection of the object surface with the reformat planes would be useful. Since a dental package is typically used to provide information on where and how to insert dental implants, it should be possible to display geometric implant models in dental 3D images and to obtain sizing and drilling trajectory information through extremely accurate measurements and surgical simulation. The discussion above on prosthesis design applies here also.
Large Size Dataset

Modern scanners are able to acquire a large amount of data; for example, a normal spiral CT scan can easily be several hundred megabytes. The system must be able to handle such a study without any performance penalty and without slowing down the workflow of the radiologist. The processing server can accomplish this since it directly manages the buffers and optimizes, or completely avoids, swapping these buffers to disk. For more information on this functionality, please refer to the Memory Management section in the PrS White Paper.
Since volume rendering requires large datasets, Memory Management is extremely relevant in these scenarios. Figure 4 shows the rendering of the CT dataset of the Visible Human Project. The processing server allows you to rotate this dataset without any swapping on disk after the pre-processing has been completed.
Figure 3 - Application of a curvilinear reformat for a dental package.

Figure 4 - Volume rendering of the Visible Human Project.

Since the processing server performs the interpolation before the rendering, the rendered dataset has a size of 512x512x1024, 12 bits per pixel. Including gradient information, the dataset size is 1 Gigabyte. Since the processing server performs some compression of the data and optimizes the memory buffers, there is no swapping during rotation, even on a 1 Gigabyte system.
Volume Rendering

The Basics

Volume rendering is a flexible technique for visualizing sampled image data (e.g., CT, MR, Ultrasound, Nuclear Medicine). The key benefit of this technique is the ability to display the sampled data directly, without using a geometrical representation and without the need for segmentation. This makes all the sampled data available during the visualization and, by using a variable opacity transfer function, allows inner structures to appear semi-transparent.
The process of generating an image can be described intuitively using the ray-casting idea. Casting a ray from the observer through the volume generates each pixel in the final image. Samples have to be interpolated along the ray during the traversal. Each sample is classified, and the image generation process is a numerical approximation of the volume rendering integral.
Figure 5 - Conceptual illustration of volume rendering.
In volume rendering, a fundamental role is played by the classification. The classification defines the color and opacity of each voxel in the dataset. The opacity defines "how much" of the voxel is visible by associating a value from 0 (fully transparent) to 1 (fully opaque). Using a continuous range of values avoids aliasing because it is not a binary threshold. The color allows the user to distinguish between the densities (that represent different tissues in a CT study, for example) in the 3D image. Figure 6 shows three opacity settings for the same CT dataset.
A classification in which each voxel depends only on the density is called global, while a classification in which each voxel depends on its position and density is called local. A global classification function works quite well with CT data since each tissue is characterized by a range of Hounsfield units. In MRI, the global classification function has very limited applicability, as shown in Figure 7. To handle MR data properly, a local classification is necessary.

Figure 6 - Different opacity settings for a CT dataset.
The first row shows the volume rendered image, while the second row shows the histogram plus the opacity curve. Different colors have been used to differentiate the soft tissue and the bone in the 3D view.
Figure 7 - Different opacity settings for an MR dataset.
In MR, the underlying physics is not compatible with the global opacity transfer function; it is not possible to select different tissues just by changing it. As will be explained in the next section, the processing server overcomes this problem by allowing the application to use a local classification.
The IAP renderer also takes advantage of optimization techniques that leverage several view-independent steps of the rendering. Specifically, some of the view-independent processing optimizations implemented are:
- interpolation;
- gradient generation; and
- background suppression.
Naturally, all these optimizations are achieved at the cost of the initial computation and memory necessary to store this pre-calculated information and, therefore, can be selectively disabled depending on the available configuration. The application developer also has several controllable configurations available that allow the developer to gracefully decrease the amount of information cached depending on the system resources available.
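Gradient generation is view-independent because the central-difference normal at each voxel depends only on the data, so it can be computed once and reused for every view at the cost of extra memory. A pure-Python sketch of this trade-off (the names are ours, not IAP's):

```python
def central_gradient(vol, x, y, z):
    """Central-difference gradient at an interior voxel of a 3D list vol[x][y][z]."""
    gx = (vol[x + 1][y][z] - vol[x - 1][y][z]) / 2.0
    gy = (vol[x][y + 1][z] - vol[x][y - 1][z]) / 2.0
    gz = (vol[x][y][z + 1] - vol[x][y][z - 1]) / 2.0
    return (gx, gy, gz)

def precompute_gradients(vol):
    """View-independent pre-processing: cache the gradient of every interior
    voxel, trading memory for per-view rendering speed."""
    nx, ny, nz = len(vol), len(vol[0]), len(vol[0][0])
    return {(x, y, z): central_gradient(vol, x, y, z)
            for x in range(1, nx - 1)
            for y in range(1, ny - 1)
            for z in range(1, nz - 1)}
```

On a volume whose density increases linearly along x, the cached gradient points along x with unit magnitude, as expected from the central-difference operator.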
Rendering Modes

As we saw, the general model that the IAP volume renderer uses to generate a projection is to composite all the voxels that lie "behind" a pixel in the image plane. The compositing process differs for every rendering mode, and the most important are illustrated here.
In the compositing process, the following variables are involved:
- op(d): opacity for density d. The opacity values range from 0.0 (completely transparent) to 1.0 (fully opaque).
- color(d): color for density d.
- V(x,y,z): density in the volume at the location x, y, z.
- I(u,v): intensity of the output image at pixel location u, v.
- (x,y,z) = SampleLocation(Ray, i): the computation of the i-th sample point along the ray. This can be accomplished with nearest neighbor or trilinear interpolation.
- Ray = ComputeRay(u,v): the computation of the ray passing through the pixel u, v in the image plane. Typically, this involves the definition of the sampling step and the direction of the ray.
- Normal(x,y,z): normal of the voxel at location x, y, z. Typically, this is approximated with a central difference operator.
- L: light vector.
- o: represents the dot product.

Figure 8 shows a graphical representation of these values.

Figure 8 - Values involved in the ray casting process.

Maximum Intensity Projection (MIP)

This is the pseudo-code for MIP:
for every pixel u,v in I {
    ray = ComputeRay(u,v)
    for every sample point i along ray {
        (x,y,z) = SampleLocation(ray, i)
        if( I(u,v) < V(x,y,z) )
            I(u,v) = V(x,y,z)
    }
}
Density Volume Rendering (DVR)

This is the pseudo-code for DVR (referred to as "Multicolor" in the IAP man pages):

for every pixel u,v in I {
    ray = ComputeRay(u,v)
    ray_opacity = 0.0
    for every sample point i along ray {
        (x,y,z) = SampleLocation(ray, i)
        alpha = ( 1.0 - ray_opacity ) * op(V(x,y,z))
        I(u,v) += color(V(x,y,z)) * alpha
        ray_opacity += alpha
    }
}
Shaded Volume Rendering

This is the pseudo-code for Shaded Volume Rendering (referred to as "Shaded Multicolor" in the IAP man pages):

for every pixel u,v in I {
    ray = ComputeRay(u,v)
    ray_opacity = 0.0
    for every sample point i along ray {
        (x,y,z) = SampleLocation(ray, i)
        alpha = ( 1.0 - ray_opacity ) * op(V(x,y,z))
        shade = Normal(x,y,z) o L
        I(u,v) += color(V(x,y,z)) * alpha * shade
        ray_opacity += alpha
    }
}

Figure 9 shows the same dataset rendered with these three rendering modes.
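The three pseudo-code fragments above share the same per-ray loop. Restricted to a single ray, they can be written as small runnable functions (an illustrative Python sketch, with op and color passed as plain callables rather than the IAP pixel value transformations):

```python
def mip(samples):
    """Maximum Intensity Projection along one ray."""
    return max(samples)

def dvr(samples, op, color):
    """Unshaded (density) volume rendering: front-to-back compositing."""
    intensity, ray_opacity = 0.0, 0.0
    for d in samples:
        alpha = (1.0 - ray_opacity) * op(d)
        intensity += color(d) * alpha
        ray_opacity += alpha
    return intensity

def shaded(samples, normals, light, op, color):
    """Shaded volume rendering: each contribution is scaled by Normal o L."""
    intensity, ray_opacity = 0.0, 0.0
    for d, n in zip(samples, normals):
        alpha = (1.0 - ray_opacity) * op(d)
        shade = sum(ni * li for ni, li in zip(n, light))  # dot product
        intensity += color(d) * alpha * shade
        ray_opacity += alpha
    return intensity
```

Note how the accumulated ray opacity attenuates every later sample: with a constant opacity of 0.5, two identical samples contribute 0.5 and 0.25 respectively.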
Figure 9 - Rendering modes.
Object Model

For a generic description of the IAP object model, please refer to the PrS White Paper. In this section, we illustrate in detail the objects involved in the volume rendering pipeline.
In Figure 10, you can see the IAP volume rendering pipeline. The original slices are kept in a number of raster objects (Ras). Those Ras objects are collected in a single slice stack (Ss) and then passed to the interpolator object (Sinterp). Here, the developer has the choice of several interpolation techniques (i.e., cubic, linear, nearest neighbor). The output of the Sinterp then goes into the view-independent preprocessor (Vol) and then finally to the projector object (Cproj). From that point on, the pipeline becomes purely two-dimensional. If Cproj is used in gray scale mode (no color table applied during the projection, as in Figure 9), the application can apply window/level to the output image in V2. If Cproj is in color mode, V2 will ignore the window/level setting. The application can add window/level in color mode using a dynamic extension of the Filter2 object, as reported in Appendix C.
Figure 10 - Pipeline used for volume rendering.

One of the keys to correct interpretation of a volume dataset is motion. The amount of data in today's clinical datasets is so high that it is very difficult to perceive all the details in a static picture. Historically, this has been done using a pre-recorded cine-loop, but now it can be done by interactively changing the view position. Apart from the raw speed of the algorithm, it is also convenient to enhance the "perceived speed" of a volume rendering algorithm by using a progressive refinement update. Progressive refinement can be implemented in several ways in IAP, but the easiest and most common way to do it for volume rendering is to set up a second downsampled data-stream. The two branches, each processing the input data at different resolutions, are rendered in alternation.
Figure 11 shows how the pipeline has to be modified in order to achieve progressive refinement in IAP. Two volume rendering pipelines (Sinterp, Volume, Cproj) are set in parallel, are fed with the same stack, and output to the same window. One of these pipelines, called High Speed, uses a downsampled dataset (the downsampling is performed in Sinterp). This allows rendering at a very high rate (more than 10 frames per second) even on a general purpose PC.
The second pipeline is called High Quality. It renders the original dataset, usually interpolated with a linear or cubic kernel. High Speed and High Quality are joined at the progressive refinement object (Pref). The High Quality pipeline is always interruptible to guarantee application responsiveness, while the High Speed pipeline is normally in atomic mode (non-interruptible) and executes at a higher priority. The application developer has control of all these options, and can alter the settings depending on the requirements. For example, if a cine-loop has to be generated, the High Speed pipeline is disconnected because the intermediate renderings are not used in this scenario. For rendering modes that require opacity and color control, a set of Pvx objects specifying a pixel value transformation can be connected to Vol.
The transformation is specified independently from the pixel type and can be a simplified linear ramp or a full piece-wise linear specification. Using the full piece-wise linear specification, the application writer can implement several different types of opacity and color editors specific to each medical imaging modality. Another common addition to the volume rendering pipeline is a cut plane object (a Geom or Binary object). These objects specify the region of the stack that has to be projected and can be connected either to Sinterp or Vol.
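The two-branch progressive refinement scheme can be caricatured in a few lines: the High Speed branch renders a stride-downsampled stack first, and the High Quality branch follows. This is a conceptual sketch only; render stands for any projection callable, and none of these names belong to the IAP API:

```python
def downsample(stack, f=2):
    """Nearest-neighbor downsampling by stride f in all three axes,
    playing the role Sinterp plays for the High Speed branch."""
    return [[row[::f] for row in sl[::f]] for sl in stack[::f]]

def progressive_render(stack, view, render):
    """Yield a fast low-resolution preview first, then the full-quality image."""
    yield render(downsample(stack), view)  # High Speed: atomic, higher priority
    yield render(stack, view)              # High Quality: interruptible
```

During interaction only the first frame of each generator would typically be consumed; the second is completed when the view stops changing, mirroring the Pref behavior described above.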
The processing server can also render a multi-channel dataset together. As discussed in "Clinical Scenarios" on page 1, this feature is very important in modalities such as MR and ultrasound, which can acquire multiple datasets. The renderer will interleave the datasets during the projection, which guarantees that it is a real 3D fusion, rather than a 2D overlay. Examples of this functionality are in Figure 2 on page 5. Several stacks are rendered together simply by connecting several Volume objects to the same Cproj object, as shown in Figure 12.
As in the case of one Volume object, the developer can use a High Speed pipeline and a High Quality pipeline to enhance interactivity.
Figure 11 - High Speed/High Quality pipelines.

Advanced Features

As we have seen in "Clinical Scenarios" on page 4, a renderer needs a much higher level of sophistication to be effective in clinical practice. The processing server provides functionality which goes beyond "standard" volume rendering. The most relevant features are presented here.
Clipping Tools

The user in 3D visualization is usually interested in the relationships between anatomical structures. In order to investigate those relationships, it is necessary that the application be allowed to clip part of the dataset to reveal the internal structure. Our experience shows that there are two kinds of clipping:

- Permanent: part of the dataset (normally irrelevant structures) is permanently removed.
- Temporary: part of the dataset (normally relevant anatomy) is temporarily removed to reveal internal structures.

To accommodate these requirements, and in order to maximize performance, the processing server implements two kinds of clipping:
- Clipping at pre-processing time. This feature implements "permanent" clipping. The voxels clipped in this stage are not included in the data structure used by the renderer, and are not even computed, if possible.
- Clipping at rendering time. This feature implements "temporary" clipping. The voxels are only skipped during the rendering process and are kept in the rendering data structure.
Using clipping at pre-processing time optimizes rendering performance, but any change to the clipping region will require a (potentially) expensive pre-processing step, which usually prevents using this feature interactively. Clipping at rendering time instead allows the application to interactively clip the dataset: since the clipping is performed by skipping some voxels during the projection, changing this region is fully interactive.

Figure 12 - Rendering of a multi-channel dataset.
Usually the application generates two types of clipping regions:
- Geometrical clipping regions (e.g., generated by a bounding box or oblique plane).
- Irregular clipping regions (e.g., generated by outlining an anatomical region).
The processing server supports these two kinds of clipping both at pre-processing time and rendering time.
The following summarizes how each kind of clipping region is supported at pre-processing time and at rendering time:

Geometrical clipping
- Preprocessing: The object that performs the pre-processing, Vol, accepts a bounding box for clipping.
- Rendering: The render object, Cproj, accepts a bounding box as input. In the case of a multi-channel dataset, each volume can be clipped independently. It is also possible to clip each volume with an oblique plane or a box rotated with respect to the orthogonal axes.

Irregular clipping
- Preprocessing: The Vol object accepts a bitvol as input. Vol will preprocess only the voxels included in the bitvol.
- Rendering: Any binary volume can be used as a clipping region. The processing server also allows the application to interactively translate this region.

Interactive clipping at rendering time is very useful in several situations, particularly in MIP rendering. Normally several anatomical structures overlap in the MIP image, so interactively moving a clipping region (e.g., cylinder, oblique slab, or sphere) can clarify the ambiguities in the image. See Figure 13 for an example.
Another very common way to clip the dataset is based on density. For example, on a CT scan, the densities below the Hounsfield value of water are background and hence carry no diagnostic information. Even in this case, the processing server supports the clipping at pre-processing time (using the Vol object) and at rendering time (simply changing the opacity).
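Rendering-time density clipping amounts to a constraint on the opacity transfer function: everything at or below the water density contributes nothing. A minimal sketch (0 HU for water is the standard Hounsfield convention; the helper name is ours):

```python
WATER_HU = 0  # Hounsfield value of water; lower densities are background on CT

def clip_below_water(op):
    """Wrap an opacity transfer function so that densities at or below
    water are rendered fully transparent (rendering-time clipping)."""
    return lambda hu: 0.0 if hu <= WATER_HU else op(hu)
```

Because only the opacity lookup changes, toggling this kind of clipping never forces a new pre-processing pass.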

Figure 13 - Interactive clipping on the MIP view.

The user can interactively move the semi-sphere that clips the dataset and identify the region of the AVM. Usually the clipping region is also overlaid in the orthogonal MIP views.
Local Classification

The global classification is a very simple model and has limited applicability. As shown in Figure 7, the MR dataset does not allow the clinician to visualize any specific tissue or organ. In the CTA study, it is not possible to distinguish between contrast agent and bone. Both cases significantly reduce the ability to understand the anatomy. In order to overcome these problems, it is necessary to use the local classification.
By definition, the local classification is a function that assigns color and opacity to each voxel in the dataset based on its density and location. In practice, the color is associated with the voxel depending on the anatomical structure to which it belongs. So it is possible to split the dataset into several regions (representing the anatomical structures) and assign a colormap and opacity to each one of them. In other terms, the local classification is implemented with a set of classifications, each one applied to a different region of the dataset. For this reason it is very often referred to as Multiple Classification.

The processing server supports Multiple Classification in a very easy and efficient manner. The application can create several binary objects (the regions) and apply a different classification to each one of them. Multiple Classification is directly supported by the MclVol object, which allows applying several classifications to the output of the Vol object. Figure 14 shows the case in which two classifications are defined in the same stack.
Figure 14 - Pipeline to use two classifications in the same volume.
Each classification (Cl) is defined in the area of the bitvol (Bv) object associated with it. The Bvf object has been added to the pipeline to allow the application to interactively translate the area of the classification to obtain, for example, features as shown in Figure 13.
Since all the classifications share the same data structure (produced by Vol), the amount of memory required is independent of the number of classifications. This design also allows you to add, remove, or move a classification with minimal pre-processing.
The MclVol object allows controlling the classification in areas where two or more regions overlap. This is implemented through a set of policies that can be extended with run-time extensions. This feature is a powerful tool for an application which knows in advance what each classification represents.
Figure 15 shows how this pipeline can produce the correct result on a CTA study. The aorta is filled with a contrast agent that has the same densities as the bone. Defining one bitvol (shown in Figure 15a) which defines the area of the aorta allows the application to apply the proper classification to the stack. Note that the bitvol does not have to exactly match the structure of interest, but rather loosely contain it or exclude the obstructing anatomies. The direct volume rendering process, by using an appropriate opacity transfer function, will allow for the visualization of all the details contained in the bitvols.

This approach can be applied to a variety of different modalities. In Figure 1 on page 4, it is applied to an MR study. The main benefit of our approach is that the creation of the bitvols is very time effective. Please refer to the "IAP PrS Segmentation Architecture" White Paper for a complete list of the functionalities.

Figure 15 - Local classification applied to a CTA study.
In (a), a bitvol is used to define the region of the aorta. In (b), a CTA of the abdominal aorta is classified using local classification.
The processing server functionality for the creation/manipulation of bitvols includes:
- Shape Interpolation: Allows the clinician to reconstruct a smooth binary object from a set of 2D ROIs. It is very effective for extracting complex organs that have an irregular shape, like the brain. Figure 16 shows how this functionality can be applied.
- Extrusion: Allows the clinician to extrude a 2D ROI along any direction. It is very useful for eliminating structures that obscure relevant parts of the anatomy, and is often used in ultrasound applications.
- Seeding: 3D region growing in the binary object. This functionality very naturally eliminates noise or irrelevant structures around the area of interest.
- Disarticulation: A kind of region growing which allows the clinician to separate objects which are loosely connected and represent different anatomical structures.
- Dilation/Erosion: Each binary object can be dilated or eroded in 3D with an arbitrary number of pixels for each axis.
- Union, intersection, and complement of binary objects.
- Gray level region growing: Allows the clinician to segment the object without requiring knowledge of thresholds.
The processing server does not limit the manipulation of the binary object to these tools. If the application has the knowledge of the region to define its own segmentation (e.g., using an anatomical atlas it can delineate the brain and directly generate the bitvol used in Figure 16), it is straightforward to introduce this bitvol into the rendering pipeline.
Figure 16 - Example of shape interpolation.
To extract a complex organ, the user roughly outlines the brain contours on a few slices and then shape interpolation reconstruction generates the bitvol. The outlining of the brain does not have to be precise, but just include the region of interest. The volume rendering process, with a proper opacity table, will extract the details of the brain.
The functionality supported by MclVol can also be used to arbitrarily cut the dataset and texture map the original gray level on the cutting surface. One of the main benefits of this feature is that the gray level from the MPRs can be embedded in the 3D visualization. Figure 17 shows how this can be done in two different ways. In Figure 17a, the MPR planes are visualized with the original gray level information. This can be used to correlate the MPR shown by the application with the dataset. In Figure 17b, a cut into the dataset shows the depth of the fracture on the bone.
Figure 18 shows another example of this feature. In this case, a spherical classification is used to reveal the bone structures in a CT dataset (Figure 18a) or the meningioma in an MR dataset (Figure 18b).

Figure 17 - Example of embedding gray level from the MPRs.

The functionality of MclVol allows you to embed the information from the MPRs in the visualization. This can be used for showing the location of the MPRs in space or for investigating the depth of some structure.
Figure 18 - Example of clipping achievable with MclVol.

In (a), there are two classifications in the CT dataset: the skin, which is applied to the whole dataset, and the bone, which is applied to the sphere. In (b), there are four different classifications.

To minimize memory consumption, MclVol supports multiple downstream connections so that several Cproj objects can share the same data structure. It is possible to associate a group of classifications to a specific Cproj, so each Cproj object can display a different set of objects. Note that in order to minimize the memory consumption, Volume keeps only one copy of the processed dataset.
Thus, if some Cproj objects connected to MclVol are using Shaded Volume Rendering while others are using MIP or Unshaded Volume Rendering, performance will be severely affected, since Volume will remove the data structure of the gradient when the MIP or Unshaded Volume Rendering is scheduled for computation, and it will recompute it when the Shaded Volume Rendering is scheduled. In this scenario, we suggest you use two Vols and two MclVols, one for Shaded and one for Unshaded Volume Rendering, and connect the Cproj to the MclVol depending on the rendering mode used.

Additional Description C
"PrS 3D Rendering Architecture - Part 2"

Open MMR

There is a class of applications that requires anatomical data to be rendered with synthetic objects, usually defined by polygons. Typically, applications oriented toward planning (i.e., radiotherapy or surgery) require this feature.
The processing server supports this functionality in a very flexible way. The application can provide to the renderer images with an associated Z buffer that has to be embedded in the scene. The Z buffer stores, for each pixel in the image, the distance from the polygon to the image plane. The Z buffer is widely used in computer graphics and supported by several libraries. The application has the freedom to choose the 3D library and, if necessary, write its own.
Figure 1 shows an example of this functionality. The cone, which has been rendered using OpenGL, is embedded in the dataset. Notice that some voxels in front of the cone are semi-transparent, while other voxels behind the cone are not rendered at all. This functionality doesn't require any pre-processing, so changing the geometry can be fully interactive.
The Z buffer is a very simple and effective way to embed geometry in the volume, but it imposes some limitations. For each pixel in the image there must be only one polygon projected onto it. This restricts the application to fully opaque polygons or non-overlapping semi-transparent polygons.
Figure 1 - Geometry embedded in the volumetric dataset.
Opacity Modulation
One technique widely used in volume rendering to enhance the quality of the image is changing the opacity of each voxel depending on the magnitude of the gradient at the voxel's location. This technique, called opacity modulation, allows the user to enhance the transitions between anatomical structures (characterized by strong gradient magnitude) and suppress homogeneous regions in the dataset. It greatly improves the effect of translucency because the homogeneous regions, when rendered with low opacity, tend to become dark and suppress details. Using opacity modulation, the contribution of these regions can be completely suppressed.
The processing server supports opacity modulation in a very flexible manner. A modulation table, which defines a multiplication factor for the opacity of each voxel depending on its gradient, is defined for each range of densities. In Figure 2, for example, the modulation has been applied only to the soft tissue densities in the CTA dataset.
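A minimal sketch of such a per-density-range table lookup follows. The function name, the table layout, and the normalized gradient range are assumptions for illustration; the actual server interface differs.

```python
import numpy as np

def modulate_opacity(density, grad_mag, base_alpha, ranges):
    """Gradient-based opacity modulation (illustrative sketch).

    ranges: list of (lo, hi, table) where table maps a normalized
    gradient magnitude in [0, 1] to a multiplication factor for the
    voxel opacity; densities outside every range are left untouched.
    """
    for lo, hi, table in ranges:
        if lo <= density <= hi:
            # Look up the factor for this gradient magnitude.
            idx = int(np.clip(grad_mag, 0.0, 1.0) * (len(table) - 1))
            return base_alpha * table[idx]
    return base_alpha
```

A ramp table such as `[0.0, 0.5, 1.0]` applied to the soft-tissue density range suppresses homogeneous (low-gradient) voxels entirely while leaving tissue borders at full opacity, which is the effect shown in Figure 2.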

Figure 2 - Opacity modulation used to enhance translucency.
Suppressing the contribution of a homogeneous region, which usually appears dark with low opacity, allows the user to visualize the inner details of the data.
The effect of the modulation is also evident in Figure 3, which shows two slices of the dataset with the same settings used in Figure 2. The bone structure is the same in both images, while in Figure 3b only the border of the soft tissue (characterized by strong gradient magnitude) is visible; the inner part (characterized by low magnitude) has been removed.
Figure 3 - Cross-section of the dataset using the settings from Figure 2.
As mentioned before, opacity modulation can also enhance details in the dataset. In Figure 4, the rendered vasculature is enhanced by increasing the opacity at the border of the vessels, characterized by high gradient magnitude.
As you can see, the vessel in Figure 4b appears more solid and better defined.
Figure 4 - Opacity modulation used to enhance vasculature visualization.
One more application of opacity modulation is to minimize the effect of "color bleeding" due to the partial volume artifact in the CTA dataset. In this case, the opacity modulation is applied to the densities of the contrast agent and the strong gradient magnitude is suppressed. Typically, the user tunes this value for each specific dataset. Figure 5 shows an example of a CTA of the neck region.
Figure 5 - Opacity modulation used to suppress "color bleeding".
In (b), the opacity modulation is used to suppress the high gradient magnitude associated with the contrast agent densities.
In this situation, opacity modulation represents a good tradeoff between user intervention and image quality. To use the local classification, the user has to segment the carotid from the dataset; in this case, the user has only to change a slider to interactively adjust for the best image quality. Note that when using this technique, it is not possible to measure the volume of the carotid, since the system does not have any information about the object. Measurement requires the segmentation of the anatomical structure, and hence multiple classification. Please refer to the "IAP PrS Segmentation Architecture" White Paper for a detailed description of the measurements and their relationship with the local classification.
The processing server allows the application to set a modulation table for each density range of each Volume object rendered in the scene. Currently, opacity modulation is supported when the input rasters are 8 bit or 12 bit.
Mixing of Rendering Modes
The application usually chooses the rendering method based on the feature that the user is looking for in the dataset. Typically, MIP is used to show the vasculature in an MR dataset, while Unshaded or Shaded Volume Rendering is more appropriate to show the anatomy (like brain or tumor). In some situations, both of these features have to be visualized together, hence different rendering modes have to be mixed.
The processing server provides this functionality because we have seen some clinical benefit. For example, Figure 6 shows a CTA dataset in which two regions have been classified: the carotid and the bone structure. In Figure 6a, both regions are rendered with Shaded Volume Rendering. The user can appreciate the shape of the anatomy but cannot see the calcification inside the carotid. In Figure 6b, the carotid is rendered using MIP while the bone is rendered using Shaded Volume Rendering. In this image, the calcification is easily visible.
The processing server merges the two regions during the rendering, not as a 2D overlay. This guarantees that the relative positions of the anatomical structures are correct.
Figure 6 - CTA study showing mixed rendering.
In (a), the carotid and bone are both rendered with Shaded Volume Rendering. In (b), the carotid is rendered with MIP and the bone with Shaded Volume Rendering.
Another situation in which this functionality can be used is with a multi-channel dataset. In Figure 7, an NM and a CT dataset are rendered together. In this example, the user is looking for the "hot spots" in the NM dataset and their relationship with the surrounding anatomy. The hot spots are by their nature visualized using MIP rendering, while the anatomy in the CT dataset can be properly rendered using Shaded Volume Rendering. Figure 7b shows the mixed rendering mode; the hot spots are clearly visible in the body. Figure 7a depicts both datasets rendered using Shaded mode; in this case, the location of the "hot spots" in the body is not as clear.
Currently, the processing server allows the fusion of only MIP with Shaded or Unshaded Volume Rendering.
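Conceptually, depth-correct fusion of MIP with compositing along a single ray can be pictured as re-inserting the brightest MIP-classified sample at its depth among the volume-rendered samples. This is an illustrative approximation, not the server's actual fusion algorithm; the sample layout is assumed.

```python
def fuse_mip_with_vr(samples):
    """Fuse MIP and volume-rendered samples along one ray (sketch).

    samples: list of (depth, value, alpha, mode) tuples, where mode is
    'MIP' or 'VR'. The brightest MIP sample is treated as opaque and
    composited at its own depth with the VR samples, so the relative
    positions of the structures are preserved.
    """
    mip = max((s for s in samples if s[3] == 'MIP'),
              key=lambda s: s[1], default=None)
    stream = sorted([s for s in samples if s[3] == 'VR']
                    + ([mip] if mip else []), key=lambda s: s[0])
    out, trans = 0.0, 1.0
    for depth, value, alpha, mode in stream:
        a = 1.0 if mode == 'MIP' else alpha
        out += trans * a * value       # front-to-back accumulation
        trans *= (1.0 - a)
        if trans <= 0.01:              # early ray termination
            break
    return out
```

Because the MIP sample is merged at its true depth rather than pasted on as a 2D overlay, structures in front of it correctly attenuate it, which is the property the text attributes to the server's fusion.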
Figure 7 - Different modalities and rendering modes.
In (a), an NM dataset and a CT dataset are both rendered using Shaded Volume Rendering. In (b), the NM dataset is rendered using MIP and the CT dataset is rendered using Shaded Volume Rendering. Data provided by Rambam Hospital, Israel.
Coordinate Query
In the application, there is often the need to correlate the 3D images with the 2D MPRs. For example, Figure 8 shows that the user can see the stenosis in the MIP view but needs the MPR to estimate the degree of stenosis.

Since a voxel can be semi-transparent in volume rendering, selecting a point in the 3D view does not uniquely determine a location inside the stack; rather, it determines a list of voxels which contribute to the color of the selected pixel.
Several algorithms can be used to select the voxel in this list to be considered as the "selected point". For example, the first non-transparent voxel can be selected, or the most opaque voxel can be used instead. Our experience shows that the algorithm described in "A method for specifying 3D interested regions on Volume Rendered Images and its evaluation for Virtual Endoscopy System", Toyofumi SAITO, Kensaku MORI, Yasuhito SUENAGA, Jun-ichi HASEGAWA, Jun-ichiro TORIWAKI, and Kazuhiro KATADA, CARS2000 San Francisco, works very well. The algorithm selects the voxel that contributes most to the color of the selected pixel. It is fully automatic (no threshold requested) and it works in a very intuitive way. "Error! Reference source not found." shows an implementation of this algorithm.
This functionality can also be used to create a 3D marker. When a specific point is selected in stack space, the processing server can track its position across rotations, and the application can query the list of voxels that contribute to that pixel. The application can then verify whether the point is the one visible. Since the marker is rendered as a 2D overlay, the thickness of the line does not increase when the image is zoomed. See Figure 9.
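The idea of the Saito et al. selection rule (pick the voxel that contributes most to the pixel color) can be sketched as follows. The data layout is hypothetical: each entry is a voxel identifier with its opacity, in front-to-back order, and a voxel's contribution is its opacity weighted by the transparency accumulated in front of it.

```python
def select_voxel(contributions):
    """Return the id of the voxel contributing most to the pixel.

    contributions: list of (voxel_id, alpha) in front-to-back order.
    No threshold is needed, matching the fully automatic behavior
    described in the text.
    """
    best_id, best_w, trans = None, -1.0, 1.0
    for voxel_id, alpha in contributions:
        w = trans * alpha          # this voxel's share of the pixel
        if w > best_w:
            best_id, best_w = voxel_id, w
        trans *= (1.0 - alpha)
    return best_id
```

A nearly transparent voxel in front does not win even though it is hit first; the dominant structure behind it does, which is why the rule feels intuitive to the user.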
Figure 8 - Localization of an anatomical point using a MIP view.
The user clicks on the stenosis visible on the MIP view (note the position of the pointer). The application then moves the MPR planes to that location to allow the user to estimate the degree of stenosis.

Figure 9 - Example of a 3D marker.
A specific point at the base of the orbit is tracked during the interactive rotation of the dataset. In (a), the point is visible to the user and hence the marker is rendered and drawn in red. In (b), the visibility of the point is occluded by bony structures; hence the marker is drawn in blue.
The processing server returns to the application all of the voxels projected onto the selected pixel. The following values are returned (for details on these parameters, see "Error! Reference source not found." on page Error! Bookmark not defined.):
- The density of the voxel: V(x,y,z)
- The opacity of the voxel: α[V(x,y,z)]
- If color mode is used, the color of the voxel
- The accumulated opacity (ray opacity)
- The accumulated color, if color mode is used, or the accumulated density: I(x,y)

During coordinate query, nearest neighbor interpolation is used. The application can then scan this list to determine the selected voxel using its own algorithm. Note that the IAP processing server also returns the voxels that do not contribute to the final pixel. This is done on purpose so that the application can, for example, determine the size/thickness of the object (e.g., vessel or bone) on which the user has clicked.
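As an example of scanning the returned list, the thickness of the clicked object could be estimated from the full voxel list, including the non-contributing voxels. The function, the (depth, opacity) layout, and the threshold are assumptions for illustration.

```python
def object_thickness(voxels, threshold=0.05):
    """Estimate the thickness of the clicked object from a
    coordinate-query voxel list (hypothetical (depth, opacity) pairs
    in front-to-back order). The object is taken as the first
    contiguous run of voxels whose opacity exceeds the threshold.
    """
    start = end = None
    for depth, alpha in voxels:
        if alpha > threshold:
            if start is None:
                start = depth
            end = depth
        elif start is not None:
            break   # the run ended: stop at the object's back face
    return 0.0 if start is None else end - start
```

This is only possible because the server deliberately returns voxels behind the accumulated-opacity cutoff as well.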
" . .,.. , ..,., '::....

Perspective Projection
In medical imaging, the term "3D image" very often refers to an image generated using parallel projection. The word parallel indicates that the projector rays (the rays described in the pseudo code of "Error! Reference source not found." on page Error! Bookmark not defined.) are all parallel.
This technique creates an impression of three-dimensionality in the image but does not simulate human vision accurately. In human vision, the rays are not parallel but instead converge on a point, the eye of the viewer (more specifically, the retina). This rendering geometry is very often referred to as perspective projection. Perspective projection allows the generation of more realistic images than parallel projection. Figure 10 and Figure 11 illustrate the difference between these two projection methods.
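The difference between the two ray geometries can be sketched as follows; the coordinate conventions and function shape are assumptions for illustration, not part of the renderer's interface.

```python
import numpy as np

def make_ray(u, v, mode, focal=None):
    """Generate the ray for image-plane pixel (u, v).

    In parallel projection every ray has the same direction; in
    perspective projection all rays pass through the focal point
    (the eye of the viewer).
    """
    origin = np.array([u, v, 0.0])
    if mode == 'parallel':
        direction = np.array([0.0, 0.0, 1.0])
    else:  # perspective: ray from the focal point through the pixel
        direction = origin - np.asarray(focal, dtype=float)
        direction /= np.linalg.norm(direction)
    return origin, direction
```

Because the perspective directions diverge with distance from the focal point, parts of the dataset close to the eye cover more pixels than parts further away, which is exactly the non-uniform magnification discussed below.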
Figure 10 - Parallel and Perspective Projections.
Figure 11 - Parallel projection and perspective projection schemes.
There is an important implication of using perspective projection in medical imaging. In perspective projection, different parts of the object are magnified with different factors: parts close to the eye look bigger than objects further away. This implies that on a perspective image, it is not possible to compare object sizes or distances. An example of this is shown in Figure 12.
Figure 12 - Perspective magnifies parts of the dataset with different factors.
In (a), the image is rendered with parallel projection. The yellow marker shows that the two vessels do not have the same size. In (b), the image is rendered with perspective projection. The red marker shows that the same two vessels appear to have the same size.
Although not suitable for measurement, perspective projection is useful in medical imaging for several reasons: it can simulate the view of an endoscope and the geometry of radiographic acquisition.
The geometry of radiographic acquisition is, by its nature, a perspective projection. Using a CT dataset, it is theoretically possible to reconstruct the radiograph of a patient from any position. This process is referred to as DRR (Digitally Reconstructed Radiography) and is used in Radiotherapy Planning. One of the technical difficulties of DRR is that the x-ray used in the CT scanner has a different energy (and hence different characteristics) compared to that used in x-ray radiography. The IAP processing server allows correction of those differences by using the opacity map as an x-ray absorption curve for each specific voxel. Figure 13 shows two examples of DRR.
Figure 13 - DRRs of a CT dataset.
To correct for the different x-ray absorptions between CT x-ray and radiographic x-ray, the opacity curve has been approximated by an exponential function. The function is designed to highlight the bone voxels in the dataset, which absorb more in radiography than in CT.
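A minimal sketch of a DRR ray integral with an exponential absorption curve follows. The function shapes and constants are illustrative assumptions, not the IAP opacity map; the point is only that an exponential mapping from CT density to absorption emphasizes bone.

```python
import numpy as np

def drr_ray(densities, step, absorption):
    """Transmitted x-ray fraction along one DRR ray.

    absorption(d) maps a CT density to an absorption coefficient;
    attenuation follows the Beer-Lambert law.
    """
    mu = np.array([absorption(d) for d in densities])
    return np.exp(-step * mu.sum())

# Hypothetical exponential absorption curve favouring bone densities.
absorb = lambda d: 0.001 * np.exp(d / 500.0)
```

Rays through bone-density samples are attenuated far more than rays through soft tissue, so bone dominates the reconstructed radiograph, as in Figure 13.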
Perspective projection can also simulate the acquisition from an endoscope. This allows a fly-through of the dataset to perform, for example, virtual colonoscopy. Figure 14 shows two images generated from inside the CTA dataset.
Figure 14 - Perspective allows "fly-through" animations.
These images are frames from the CTA chest dataset. The white objects are calcification in the aorta. In (a), the aneurysm is shown from inside the aorta. In (b), the branch to the iliac arteries is shown. Refer to the IAP Image Gallery for the full animation.
Since perspective projection involves different optimizations than parallel projection, it has been implemented as a separate object, Pproj, which supports the same input and output connections as Cproj. The pipeline shown in this white paper can be used in perspective mode by simply switching Cproj with Pproj (although currently not all the functionality available in Cproj is available in Pproj).
Because of the sampling scheme implemented in the Perspective renderer, it is possible to have a great level of detail even when the images are minified several times.
Figure 15 - Perspective renderer allows a high level of detail.
The perspective rendering engine is released as a separate DLL on Win32. The application can replace this DLL with its own implementation as long as it is compliant with the same interface and uses the same data structures.

Claims

What is claimed is:
1. A system for detecting guns and ammunition in X-ray scans of containers for security assurance, the system comprises a security scanner computer workstation for use in conjunction with an X-ray based scanner, and a shape detection software which detects the specific shapes and sizes.
CA002365045A 2001-12-14 2001-12-14 Method for the detection of guns and ammunition in x-ray scans of containers for security assurance Abandoned CA2365045A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002365045A CA2365045A1 (en) 2001-12-14 2001-12-14 Method for the detection of guns and ammunition in x-ray scans of containers for security assurance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA002365045A CA2365045A1 (en) 2001-12-14 2001-12-14 Method for the detection of guns and ammunition in x-ray scans of containers for security assurance

Publications (1)

Publication Number Publication Date
CA2365045A1 true CA2365045A1 (en) 2003-06-14

Family

ID=4170840

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002365045A Abandoned CA2365045A1 (en) 2001-12-14 2001-12-14 Method for the detection of guns and ammunition in x-ray scans of containers for security assurance

Country Status (1)

Country Link
CA (1) CA2365045A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10175381B2 (en) 2003-04-25 2019-01-08 Rapiscan Systems, Inc. X-ray scanners having source points with less than a predefined variation in brightness
US11796711B2 (en) 2003-04-25 2023-10-24 Rapiscan Systems, Inc. Modular CT scanning system
US8837669B2 (en) 2003-04-25 2014-09-16 Rapiscan Systems, Inc. X-ray scanning system
US9020095B2 (en) 2003-04-25 2015-04-28 Rapiscan Systems, Inc. X-ray scanners
US10901112B2 (en) 2003-04-25 2021-01-26 Rapiscan Systems, Inc. X-ray scanning system with stationary x-ray sources
US9113839B2 2015-08-25 Rapiscan Systems, Inc. X-ray inspection system and method
US9442082B2 (en) 2003-04-25 2016-09-13 Rapiscan Systems, Inc. X-ray inspection system and method
US9618648B2 (en) 2003-04-25 2017-04-11 Rapiscan Systems, Inc. X-ray scanners
US10591424B2 (en) 2003-04-25 2020-03-17 Rapiscan Systems, Inc. X-ray tomographic inspection systems for the identification of specific target items
US9675306B2 (en) 2003-04-25 2017-06-13 Rapiscan Systems, Inc. X-ray scanning system
US9638646B2 (en) 2005-12-16 2017-05-02 Rapiscan Systems, Inc. X-ray scanners and X-ray sources therefor
US10295483B2 (en) 2005-12-16 2019-05-21 Rapiscan Systems, Inc. Data collection, processing and storage systems for X-ray tomographic images
US9048061B2 (en) 2005-12-16 2015-06-02 Rapiscan Systems, Inc. X-ray scanners and X-ray sources therefor
US10976271B2 (en) 2005-12-16 2021-04-13 Rapiscan Systems, Inc. Stationary tomographic X-ray imaging systems for automatically sorting objects based on generated tomographic images
GB2501022A (en) * 2009-05-26 2013-10-09 Rapiscan Systems Inc Detecting firearms in tomographic X-ray images
GB2501022B (en) * 2009-05-26 2014-02-12 Rapiscan Systems Inc X-ray tomographic inspection systems for the identification of specific target items
CN110261923A (en) * 2018-08-02 2019-09-20 浙江大华技术股份有限公司 A kind of contraband detecting method and device
CN110261923B (en) * 2018-08-02 2024-04-26 浙江大华技术股份有限公司 Contraband detection method and device

Similar Documents

Publication Publication Date Title
US11666298B2 (en) Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
CN109801254B (en) Transfer function determination in medical imaging
US7924279B2 (en) Protocol-based volume visualization
US20070276214A1 (en) Systems and Methods for Automated Segmentation, Visualization and Analysis of Medical Images
US20100246957A1 (en) Enhanced coronary viewing
EP1945102B1 (en) Image processing system and method for silhouette rendering and display of images during interventional procedures
Zhang et al. Dynamic real-time 4D cardiac MDCT image display using GPU-accelerated volume rendering
Chen et al. Sketch-based Volumetric Seeded Region Growing.
CA2365045A1 (en) Method for the detection of guns and ammunition in x-ray scans of containers for security assurance
US10692267B1 (en) Volume rendering animations
WO2009049852A2 (en) Method, computer program and workstation for removing undesirable objects from a digital medical image
Klein et al. Visual computing for medical diagnosis and treatment
EP3989172A1 (en) Method for use in generating a computer-based visualization of 3d medical image data
CA2365062A1 (en) Fast review of scanned baggage, and visualization and extraction of 3d objects of interest from the scanned baggage 3d dataset
WO2006067714A2 (en) Transparency change of view-obscuring objects
Bartz et al. Visualization and exploration of segmented anatomic structures
Beyer Gpu-based multi-volume rendering of complex data in neuroscience and neurosurgery
Tory et al. Visualization of time-varying MRI data for MS lesion analysis
Zheng et al. Visibility guided multimodal volume visualization
CA2365043A1 (en) Method for the transmission and storage of x-ray scans of containers for security assurance
Chernoglazov Tools for visualising mars spectral ct datasets
EP1923838A1 (en) Method of fusing digital images
Ropinski et al. Interactive importance-driven visualization techniques for medical volume data
US20230342957A1 (en) Volume rendering apparatus and method
Jung Feature-Driven Volume Visualization of Medical Imaging Data

Legal Events

Date Code Title Description
FZDE Discontinued