WO2006067714A2 - Transparency change of view-obscuring objects - Google Patents

Transparency change of view-obscuring objects

Info

Publication number
WO2006067714A2
Authority
WO
WIPO (PCT)
Prior art keywords
objects
view
transparency
mio
certain object
Application number
PCT/IB2005/054282
Other languages
French (fr)
Other versions
WO2006067714A3 (en)
Inventor
Kees Visser
Hubrecht L. T. De Bliek
Juergen Weese
Gundolf Kiefer
Marc Busch
Helko Lehmann
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Application filed by Koninklijke Philips Electronics N.V. and Philips Intellectual Property & Standards GmbH
Publication of WO2006067714A2
Publication of WO2006067714A3

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/08: Volume rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/62: Semi-transparency


Abstract

It is an object of the invention to provide a more convenient system 40 for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects, some of the objects being view-obscuring objects, the system being capable of identifying the view-obscuring objects and adjusting their transparency to obtain a better view of the certain object comprising the interesting anatomical features. To achieve this object, the invention provides a system comprising segmenting means 41 for segmenting a multidimensional image data set into the plurality of objects, first selecting means 42 for selecting the certain object from the plurality of objects, second selecting means 43 and 44 for selecting a viewing angle from the range of viewing angles, identifying means 45 for identifying a view-obscuring object that obscures the view of the certain object when the certain object is viewed from the viewing angle selected, and transparency adjustment means 45 for changing the transparency of the view-obscuring object identified.

Description

Viewing-angle dependent image visualization
This invention relates to a system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects.
The invention further relates to a system for generating a sequence of images from a multidimensional data set, the sequence displaying the certain object from progressing viewing angles, the system comprising the system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects.
The invention yet further relates to an image acquisition device comprising the system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects. The invention yet further relates to an image workstation comprising the system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects.
The invention yet further relates to a method for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects. The invention yet further relates to a computer program product designed to perform the method for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects.
The invention yet further relates to an information carrier comprising the computer program for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects.
An implementation of such a system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects exists in a number of commercially available software packages. For example, Algotec's Provision system (http://www.algotec.com/web/products/provision.htm) features a wide range of capabilities, from 2D viewing through Multi-Planar Reformatting to advanced 3D processing and Volume Rendering. One of the applications included with this system, called Angiography, provides tools for removal of obscuring anatomy. The 3D Multi-Tissue package, with its segmentation tools for simultaneous reconstruction and manipulation of multiple tissues, features virtual cutting devices to allow for quick exposure of structures of interest and adjustable transparency levels to provide a clear look inside. However, the exposure of interesting structures and the removal of obscuring anatomy have to be done manually for every viewing angle by selecting the relevant parts of the obscuring anatomy and changing their transparency. This is very inconvenient for a clinician, such as a medical doctor, who wants to see the interesting tissue from a variety of viewing angles with a minimal requirement for interaction.
Recently a conference paper entitled "Importance driven volume rendering" by Ivan Viola, Armin Kanitsar, and Meister Eduard Gröller, published in IEEE Visualization 2004, October 10-15, Austin, Texas, USA, hereinafter referred to as reference 1, introduced "importance driven volume rendering as a novel technique for automatic context display of volume data". The authors assign a new parameter, the so-called object importance measure, to every object identified in the segmentation of the volume data and propose a few methods for evaluating the so-called sparseness as a function of the viewing angle and of the object importance. The adjustment of sparseness encompasses all computer graphics techniques that enhance the view of an object hidden behind another object. If at a given viewing angle a less important object obscures the view of a more important object, the less important object having a lower importance measure than the more important object, the less important object is rendered with higher sparseness than the more important object. This enables a clinician to see many interesting details of the more important object through the structures of the less important object. In the description of the present invention the term transparency is used in lieu of sparseness.
However, an additional challenge for the clinician is to clearly see the details of the more important object even when the less important object does not overlay it and yet still obscures its view. This occurs when the pixels of the rendered image of the more important object have intensities, colors, or texture similar to those of the neighboring pixels of the rendered image of the less important object. In this case it may be practically impossible to distinguish between the two objects or to see the details of the more important object in its border zone.
It is an object of the invention to provide a system for visualizing a certain object comprising the interesting anatomical features from a range of viewing angles, in a scene comprising a plurality of objects, some of which are view-obscuring objects, which system is suitable for enhancing the image clarity and for making the details of the more important object visible, while preserving as much context information as possible. To achieve this object, the invention provides a system comprising segmenting means for segmenting a multidimensional image data set into the plurality of objects, first selecting means for selecting the certain object from the plurality of objects, second selecting means for selecting a viewing angle from the range of viewing angles, identifying means for identifying a view-obscuring object when the certain object is viewed from the viewing angle selected, and transparency adjustment means for changing the transparency of the view-obscuring object identified. The view-obscuring object is an object from the plurality of objects that obscures the view of the certain object. An object obscures the view of the certain object if it blocks the view of at least a part of the certain object when the certain object is viewed from the viewing angle selected. Moreover, an object obscures the view of the certain object if at least part of it appears in the border zone of the certain object when the certain object is viewed from the viewing angle selected. By adjusting the transparency of the rendered image of the view-obscuring object in the zone close to the border of the certain object, the visibility of the border of the certain object and of the structures of interest in the border zone of the certain object can be significantly improved. It is also important that all image enhancements are done automatically, with no need for user interaction. The transparency adjustment may involve changing colors, changing intensity contributions of pixels of the rendered objects, or changing other characteristics of the image as described in reference 1 or as known in the art.
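As an illustration of how the identifying means might work, the following sketch marks the pixels at which a rendered view-obscuring object either lies in front of the certain object or merely appears in its border zone. It is not the patent's implementation: the per-pixel Z-buffer convention (infinite depth where an object is absent), the border-zone width, and all names are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def view_obscuring_pixels(mio_z: np.ndarray, lio_z: np.ndarray,
                          border: int = 10) -> np.ndarray:
    """Boolean map of LIO pixels that obscure the view of the MIO."""
    mio_mask = np.isfinite(mio_z)                      # where the MIO is rendered
    lio_mask = np.isfinite(lio_z)                      # where the LIO is rendered
    # Case 1: the LIO blocks the MIO, i.e. it lies in front of it.
    blocking = mio_mask & lio_mask & (lio_z < mio_z)
    # Case 2: the LIO appears in a zone adjacent to the MIO border.
    border_zone = binary_dilation(mio_mask, iterations=border) & ~mio_mask
    return blocking | (border_zone & lio_mask)
```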
In one embodiment of the invention the transparency adjustment means is characterized in that the transparency of a part of the view-obscuring object depends on the closeness of the part of the view-obscuring object to the certain object. In practice it is usually beneficial to make the closer parts of the view-obscuring object in the rendered image more transparent than the less close parts.
In a further embodiment of the invention the closeness is defined by a sequence of surroundings of the certain object. Starting with the set of pixels of the certain object in the rendered image, one can construct a bigger set of pixels that includes the pixels of the certain object. This set defines the closest surrounding. Continuing this process one can construct the second closest surrounding, the third closest surrounding, and so on. In yet a further embodiment of the invention the closeness is defined by a distance function. Here the closeness is defined as the distance of a pixel from the image of the certain object. An example of a distance function is the Euclidean distance function; other distance functions can also be used. In yet a further embodiment of the invention the system comprises a resolution adjustment means arranged for adjusting the resolution of a collection of objects. This feature is especially useful for real-time rendering of large multidimensional data sets at a preserved image quality of the certain object comprising the structures of interest. The less important objects, comprising structures of lesser clinical importance, can be rendered at a lower resolution for faster rendering.
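As a concrete illustration of the distance-function embodiment, the sketch below derives a per-pixel transparency from the Euclidean distance to the rendered image of the certain object. The linear falloff and the width parameter d_max are invented for this example; the patent leaves the mapping from closeness to transparency open.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def closeness_transparency(mio_mask: np.ndarray, d_max: float = 20.0) -> np.ndarray:
    """Per-pixel transparency (1.0 = fully transparent) from closeness to the MIO."""
    # Euclidean distance of every pixel to the nearest pixel of the MIO image.
    distance = distance_transform_edt(~mio_mask)
    # Closer parts of the view-obscuring object become more transparent;
    # beyond d_max the transparency is left unchanged at zero.
    return np.clip(1.0 - distance / d_max, 0.0, 1.0)
```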
The system for generating a sequence of images from a multidimensional data set, the sequence displaying the certain object from progressing viewing angles, comprises the system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects as mentioned in the opening paragraphs.
The image acquisition device according to the invention comprises the system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects as mentioned in the opening paragraphs.
The image workstation according to the invention comprises the system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects as mentioned in the opening paragraphs.
The method according to the invention for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects is characterized in that the method comprises a step of segmenting a multidimensional image data set into the plurality of objects, a step of selecting a certain object from the plurality of objects, a step of selecting a viewing angle from the range of viewing angles, a step of identifying a view-obscuring object which obscures the view of the certain object when the certain object is viewed from the viewing angle selected, and a step of adjusting the transparency of the view-obscuring object identified, as mentioned in the opening paragraphs.
In yet a further embodiment of the invention the step of selecting the certain object from the plurality of objects is based upon a pre-selected property of the certain object. In this embodiment the selection of the certain object can be done by the system employing the method of the invention and can be based on a user-pre-selected property such as the presence of a bifurcation point of a blood vessel or an opacity threshold. The computer program product according to the invention performs the method for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects as mentioned in the opening paragraphs.
The information carrier according to the invention comprises the computer program product for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects as mentioned in the opening paragraphs.
These and other aspects of the invention will become apparent from and elucidated with reference to the embodiments described hereinafter as illustrated by the following Figures:
Figure 1a shows a scene comprising a plurality of objects at a viewing angle; Figure 1b shows the certain object present in the scene shown in Figure 1a; Figure 1c shows the view-obscuring object present in the scene shown in Figure 1a;
Figure 2a shows the scene shown in Figure 1a, where the view-obscuring object has its transparency adjusted;
Figure 2b shows the scene shown in Figure 1a where the view-obscuring object is transparent; Figure 2c shows the scene shown in Figure 1a where part of the view-obscuring object has its transparency adjusted;
Figure 3 shows the scene shown in Figure 1a where part of the view-obscuring object has its transparency adjusted;
Figure 4 shows a block diagram of a system for visualizing the certain object according to the invention;
Figure 5 shows an exemplary algorithm for the identification and for the transparency adjustment of the view-obscuring object;
Figure 6 shows a block diagram of a method for visualizing the certain object according to the invention; Figure 7 shows a block diagram of an image acquisition device comprising the system for visualizing the certain object from a range of viewing angles in a scene comprising a plurality of objects according to the invention.
Figures 1a, 1b and 1c illustrate a scene comprising a plurality of objects at a fixed viewing angle. Figure 1a comprises all objects present in the scene. A clinician will recognize the abdominal aortic aneurysm 11, i.e. an abnormal ballooning 11 of the abdominal portion of the aorta 12, combined with the spine 13 and hips 14. The certain object comprising the structure of interest to be visualized is the abdominal aorta. This certain object, hereinafter referred to as the MIO (Most Interesting object), is shown in Figure 1b, where all other objects of the scene of Figure 1a have been removed. All other objects comprised in the scene that are not a part of the MIO are hereinafter referred to as the LIO (Less Interesting object) and are shown in Figure 1c. The particular scene shown in Figure 1a, comprising the MIO and the LIO, will be used in the following paragraphs to illustrate the embodiments of the present invention.
The view-obscuring object is an object from the plurality of objects present in the scene that obscures the view of the MIO. An object obscures the view of the MIO if it blocks the view of at least a part of the MIO when the MIO is viewed from the selected viewing angle. Moreover, an object obscures the view of the MIO if at least part of it appears in the zone adjacent to the border of the MIO when the MIO is viewed from the selected viewing angle. By adjusting the transparency of the rendered image of the view-obscuring object in the zone close to the border of the MIO, the visibility of the border of the MIO and of the MIO structures of interest near the border of the MIO can be significantly improved. It is important to realize that a LIO does not always obscure the view of a MIO. The factors determining whether or not the LIO obscures the view of the MIO comprise the shape of the MIO and the shape of the LIO, their location and orientation with respect to each other, the viewing angle, and human visual perception.
In some cases, the LIO can be treated as a single object. This is the case in our example, as illustrated in Figures 1a-1c. Alternatively, the LIO can be treated as a collection of objects identified in a segmentation process. Similarly, the MIO can be treated as a single object or as a collection of objects identified in the process of image segmentation.
Figures 2a, 2b and 2c illustrate how to improve the visibility of the MIO as known in the prior art. These figures display the scene shown in Figure 1a. In Figure 2a the view-obscuring LIO has its transparency adjusted to 75%. The transparency level can be a user-entered parameter or a parameter associated with each object identified in the segmentation process. Once an object is classified as the LIO and the system determines that it blocks the view of the MIO, the transparency of the LIO can be adjusted to the preset transparency level associated with the LIO. The advantage of such a representation of the scene is that the MIO is shown in the context of the scene in which it occurs. By increasing the transparency of the LIO, more details of the MIO are made visible. Nevertheless, the presence of the view-obscuring object, even if it is made partially transparent, may still prevent the clinician from clearly seeing all the important details of the MIO. This problem can be solved by making the LIO 100% transparent, i.e. by removing it from the scene altogether, as shown in Figure 2b. Unfortunately, in this case the MIO is taken out of its context and thus it is difficult to understand the location and orientation of the MIO. Another approach known from the prior art is shown in Figure 2c. Here only the part of the LIO blocking the view of the MIO is made transparent. As a result, the whole MIO is visible and can be examined by a clinician. Note that when the neighboring pixels of the MIO and the LIO, that is the MIO pixels close to the LIO image and the LIO pixels close to the MIO image, have similar greyscale values, the distinction between these pixels, and hence between the MIO and the LIO, becomes less clear and thus more difficult to see. In order to improve visibility in the border zone of the rendered image of the MIO, the transparency of the LIO parts can be made dependent on how close these parts are to the image of the MIO, as shown in Figure 3. The parts of the LIO that are blocking the view of the MIO are made 100% transparent. In addition, the parts of the LIO that are closer to the MIO image have higher transparency than the parts that are less close. In this way the clinician is able to clearly see the details of the MIO while staying fully aware of how the MIO is located and oriented with respect to the surrounding structures.
The skilled person will understand that there are many ways to define the relation of closeness between a pixel and an image such as the image of the MIO. Some of these ways may employ a distance function. Others may involve the use of a sequence of masks, each mask defining a surrounding of the MIO, constructed using, for example, dilation operators applied to the binary mask of the MIO. This construction is discussed at length in the following paragraphs. The present invention is not limited to any particular definition of this closeness relation.
The present invention is not constrained to any specific image rendering technique. For example, the Iso-surface Projection algorithm or the Maximum Intensity Projection algorithm can be used. In the Iso-surface Projection the rays are terminated when they hit the iso-surface of interest. The iso-surface is defined as a level set of the intensity function, i.e. as the set of all voxels having the same intensity. This method is used in rendering the images of the present invention. In the Maximum Intensity Projection each pixel is set to the maximum value along its ray. More information on image rendering can be found in Barthold Lichtenbelt, Randy Crane, and Shaz Naqvi, Introduction to Volume Rendering (Hewlett-Packard Professional Books), Prentice Hall, Bk&CD-Rom edition (1998). The transparency adjustment as used in the present invention is to be understood as any technique used to improve the visibility of an object occluded or overlaid by another object. Thus, the term "transparency adjustment" can be used, for example, in its literal meaning as modulating the opacity, or as changing the color saturation, or as changing the screen-door transparency described in reference 1. The images illustrating the present invention are adjusted by adjusting the opacity of the view-obscuring objects. Although the images used for illustrating this invention are greyscale images, the techniques described in this invention are applicable to both greyscale and color images.
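For illustration, the two projections named above can be sketched as follows for the simplified case of rays cast along the first volume axis. Rendering from an arbitrary viewing angle would additionally require resampling the volume or casting rays through the rotated data, which is omitted here; the miss marker of -1 is an assumption for the example.

```python
import numpy as np

def maximum_intensity_projection(volume: np.ndarray) -> np.ndarray:
    # Each pixel is set to the maximum value along its ray.
    return volume.max(axis=0)

def iso_surface_projection(volume: np.ndarray, iso: float) -> np.ndarray:
    # Terminate each ray at the first voxel reaching the iso level and return
    # the resulting depth map (a Z-buffer); -1 marks rays that never hit.
    hit = volume >= iso
    depth = np.argmax(hit, axis=0)       # index of the first True along a ray
    depth[~hit.any(axis=0)] = -1
    return depth
```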
In another embodiment of the invention, a subsequent adjustment of the resolution of the LIO, or of selected objects comprised in the LIO, can be used to speed up image rendering. In this embodiment it is proposed that the MIO be processed at full resolution, while the multidimensional data of the LIO are first down-sampled to a reduced resolution, for example by a factor of 2 in each dimension, before rendering. In the simplest implementation of down-sampling, one can remove every other voxel in each of the 3 dimensions of a 3D image. This reduces the size of the image, i.e. the number of voxels representing the 3D image, eightfold. The down-sampling factor can be a user-determined parameter or, preferably, an object-specific parameter associated with each object identified in the segmentation process. This embodiment leads to an improved rendering speed without compromising the image quality of the diagnostically relevant objects, because the diagnostically relevant objects, i.e. the MIO, are processed at full resolution without down-sampling the corresponding 3D image data set. This improved rendering speed is especially important for movies generated by rendering a data set from different viewpoints, as discussed in the following paragraph. The speed-up results from the reduced resolution, and hence the reduced data set size, of the LIO, and from the possibility of skipping many blocks of the MIO, namely those that contain only background voxels. This approach is also beneficial for rendering on graphics accelerators, as the reduction of the size of the down-sampled LIO reduces the onboard memory consumption.
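The simplest down-sampling described above reduces to a slicing operation; the sketch below is illustrative. For a 512 x 512 x 512 volume and a factor of 2 it yields 256 x 256 x 256 voxels, the eightfold reduction mentioned in the text.

```python
import numpy as np

def downsample(volume: np.ndarray, factor: int = 2) -> np.ndarray:
    # Keep every factor-th voxel along each of the three dimensions.
    return volume[::factor, ::factor, ::factor]
```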
For a better understanding of the images, a clinician sometimes wants to see certain structures of interest from many sides. Therefore, in such cases it is desirable to generate a movie from multidimensional data by displaying the MIO from different viewing angles to see different projections of the MIO. It is possible that at some viewing angles the MIO is not obscured by the LIO. In this case both the MIO and the LIO can be displayed with no need for transparency adjustment of the LIO. As the viewing angle changes, some fragments of the LIO can approach the MIO until they overlay the MIO, obscuring the view of the latter. This problem can be dealt with in various ways by adjusting the transparency of the view-obscuring object, for example by adjusting the transparency of the LIO locally, only in the areas adjacent to or blocking the view of the MIO, as proposed in the present invention. Alternatively, one can modulate the transparency of the whole LIO in order to avoid interference between the image of the LIO and the image of the MIO, as well as to secure a smooth transition between the progressing scenes of the sequence. To this end one can define a sequence of zones around the image of the MIO. As soon as the image of the LIO overlaps with the outermost zone, its transparency is increased to a preset level. When the image of the LIO overlaps with the second outermost zone, its transparency is further increased to a second predefined level, and so on, until the LIO enters the innermost zone and becomes 100% transparent. The zones can be defined using a distance function or in any other way.
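The zone-based modulation for image sequences might be sketched as follows. The zones are defined here with a Euclidean distance function, one of the possibilities mentioned above; the zone radii and the per-zone transparency presets are invented for the example.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def lio_transparency_for_frame(mio_mask: np.ndarray, lio_mask: np.ndarray,
                               zone_radii=(30.0, 15.0, 0.0),
                               levels=(0.4, 0.7, 1.0)) -> float:
    """Transparency preset for the whole LIO in the current frame."""
    distance = distance_transform_edt(~mio_mask)
    transparency = 0.0
    # Walk the zones from outermost to innermost; the deepest zone the LIO
    # reaches determines the preset level (radius 0.0 means overlap).
    for radius, level in zip(zone_radii, levels):
        if np.any(lio_mask & (distance <= radius)):
            transparency = level
    return transparency
```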
Figure 4 shows a block diagram of a system 40 for visualizing the MIO according to the invention. The system 40 takes in the volume data 401 obtained by an image acquisition device. This data is fed to the segmentation engine 41. In the preferred embodiment the segmentation engine is a storage element for storing the segmented image and retrieving the binary volumes 402 of the objects of the actual volume data. Alternatively, it can be a segmentation engine implementing any of the known segmentation methods, possibly an interactive method, for calculating the binary volume 402 of every object identified in the volume data.
Once all the objects are identified, the user must provide an input 403 for defining the MIO. This can be either a single object obtained in the segmentation process or a group of objects together forming the structure of interest. The selection of the MIO components can be done in many ways. For example, it can be done using a list of objects identified by the segmentation process. All other objects automatically become components of the LIO. It is also possible to have an option for identifying the LIO, with all remaining objects automatically becoming the components of the MIO. The choice of the method may depend on, for example, the number of objects comprised in the MIO versus the number of objects comprised in the LIO.
The MIO selection engine 42 uses the user input and the results of the segmentation to create the MIO binary volume data 405 and the LIO binary volume data 406, by applying a simple threshold criterion, for example. Then the user must provide the viewing angle 404. This viewing angle 404, along with the original volume data 401 and the binary volumes of the MIO 405 and of the LIO 406, are now the input data used by the render engines 43 and 44 to select the viewing angle from the range of viewing angles and to calculate the 2D images of the LIO and the MIO and their Z-buffers. The render engine 43 calculates the 2D image of the MIO and the MIO Z-buffer 407. The render engine 44 calculates the 2D image of the LIO and the LIO Z-buffer 408. Preferably, the render engines employ the Iso-surface Projection algorithm. In some embodiments the render engine 44 may allow for down-sampling of the LIO data for faster processing of the LIO image at a lower resolution. In an alternative embodiment of the system, these two engines can be replaced by one render engine, which can employ a scheduling algorithm to render both 2D images and their respective Z-buffers. The Z-buffer comprises the z coordinates, also referred to as the depth coordinates, of the corresponding pixels. It is typically used to ensure that an object in the foreground will be shown over the objects behind it. A concise description of the Z-buffer algorithm can be found in the article "Z-buffer algorithm" in HyperGraph, a project of the ACM SIGGRAPH Education Committee, the Hypermedia and Visualization Laboratory, Georgia State University, and the National Science Foundation (USE-8954402), (DUE-9255489), (DUE-9752398), (DUE-9751419), G. Scott Owen, Project Director, available at http://www.siggraph.org/education/materials/HyperGraph/scanline/visibility/zbuffer.htm.
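The Z-buffer test described above amounts to a per-fragment depth comparison; a minimal sketch (illustrative, not the cited article's code) is given below. Initializing the buffer to infinity guarantees that the first fragment along every ray is accepted.

```python
import numpy as np

def zbuffer_write(color: np.ndarray, zbuffer: np.ndarray,
                  x: int, y: int, z: float, fragment_color: float) -> None:
    # A fragment replaces the stored pixel only if it is closer to the viewer.
    if z < zbuffer[y, x]:
        zbuffer[y, x] = z
        color[y, x] = fragment_color

# Usage: h, w = 480, 640
# zbuffer = np.full((h, w), np.inf); color = np.zeros((h, w))
```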
The LIO identification and transparency adjustment engine 45 uses the MIO image and its Z-buffer 407 for determining whether the LIO obscures the view of the MIO, i.e. for identifying the view-obscuring objects, and for calculating the adjusted 2D image of the LIO 409, so that the MIO is better visible when displayed together with the LIO. Figure 5 shows an algorithm for the LIO transparency adjustment used to obtain the image shown in Figure 3. This algorithm is written in a pseudocode modeled on the C language. It uses masks to define the sequence of surroundings of the MIO used to determine the closeness of the LIO pixels to the MIO in the rendered 2D images. The inner mask is the binary map of the rendered 2D image of the MIO. The dilated masks can be obtained using dilations of this inner mask. The use of dilation operators is explained in the article "Dilation" in Hypermedia Image Processing Reference, by Bob Fisher, Simon Perkins, Ashley Walker, and Erik Wolfart, available at http://www.cee.hw.ac.uk/hipr/html/dilate.html. The exact definition of the dilation, the number of dilated masks and the transparency adjustment corresponding to each mask can be parameters of the system, which can be defined by the system user or preset to default values. The image shown in Figure 3 was constructed using 3 masks: the inner mask, the dilated mask, and the extra-dilated mask. The dilated mask was obtained by applying five iterations of 8-connected dilation to the inner mask. Bob Fisher, Simon Perkins, Ashley Walker, and Erik Wolfart describe the concept of pixel connectivity in the article "Pixel Connectivity", Hypermedia Image Processing Reference, available at http://www.cee.hw.ac.uk/hipr/html/connect.html. The extra-dilated mask was obtained by applying five iterations of 8-connected dilation to the dilated mask. The transparency assigned to the pixels within the inner mask is 100%; the transparency assigned to the pixels outside the inner mask but inside the dilated mask is 80%; the transparency assigned to the pixels outside the dilated mask but inside the extra-dilated mask is 70%; the transparency assigned to the pixels outside the extra-dilated mask is 0%. The algorithm works as follows: every pixel of the 2D image of the LIO overlapping the MIO, as defined by the inner mask, i.e. by the MIO binary map, is made 100% transparent; next, every pixel of the LIO outside the inner mask but inside the dilated mask is made 80% transparent; next, every pixel of the LIO outside the dilated mask but inside the extra-dilated mask is made 70% transparent; finally, the pixels of the LIO outside the extra-dilated mask are 0% transparent, which is equivalent to saying that their transparency is not adjusted. In this algorithm it does not matter whether the LIO overlays the MIO or not: as long as a pixel of the LIO belongs to the dilated or to the extra-dilated mask, the transparency of this LIO pixel is adjusted. The skilled person will understand that there are many ways to define the relation of closeness between a pixel and an object, such as the image of the MIO. Some of these ways may employ a distance function. Other ways may involve the use of an increasing sequence of masks, where each subsequent mask contains the preceding mask, as in the case of the algorithm presented in Figure 5. The present invention is not limited to any particular definition of this closeness relation.
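Since Figure 5 itself is not reproduced here, the following is a best-effort reconstruction of the described mask-based adjustment, written in Python rather than the C-like pseudocode of the figure. The mask construction and the 100%/80%/70%/0% levels follow the text above; the function and variable names are invented.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def adjust_lio_transparency(mio_mask: np.ndarray) -> np.ndarray:
    """Per-pixel transparency map for the LIO from the MIO binary map."""
    eight = np.ones((3, 3), dtype=bool)    # 8-connected structuring element
    inner = mio_mask                       # binary map of the rendered MIO
    dilated = binary_dilation(inner, structure=eight, iterations=5)
    extra = binary_dilation(dilated, structure=eight, iterations=5)

    transparency = np.zeros(mio_mask.shape)  # default 0%: not adjusted
    transparency[extra] = 0.70    # outside the dilated, inside the extra-dilated mask
    transparency[dilated] = 0.80  # outside the inner, inside the dilated mask
    transparency[inner] = 1.00    # overlapping the MIO
    return transparency
```

Because each mask contains the preceding one, the assignments from outermost to innermost leave every LIO pixel with the level of the innermost mask containing it, whether or not the LIO actually overlays the MIO.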
In the final processing step, the 2D image of the LIO with adjusted transparency is up-sampled, if necessary, and fused with the rendered 2D image of the MIO by the image fusion engine 46, which produces the final 2D image 410 of the MIO and of the LIO, with adjusted transparency of the LIO. There are other methods for implementing the transparency adjustment of the view-obscuring object; alternatively, one can use blending of the intensities of the MIO and the LIO (a simple blending sketch is given below). The choice of the particular method for adjusting the transparency of the LIO and for fusing the adjusted images of the LIO and of the MIO presented in this description serves the purpose of explaining the working of the system and does not limit the scope of the claims.

The skilled person will understand that two or more engines of the system for image visualization shown in Figure 4 can be combined into one engine if required. Also, it is conceivable to split one engine into a plurality of engines, each performing a subtask of the task of the corresponding engine. For example, the down-sampling performed by the render engine 44 can be delegated to a separate engine. In some embodiments certain engines may not be required and can be absent from the system. For example, in the case of making the LIO 100% transparent there is no need for the transparency adjustment of the LIO and the fusion of the fully transparent LIO with the MIO; in this case it is fully sufficient to render and display the MIO.

The steps of the method employed by the system shown in Figure 4 are presented in Figure 6. This method comprises step 61 of segmenting the volume data into objects, step 62 of selecting the MIO from the plurality of objects identified in the segmentation step 61, step 63 of selecting the viewing angle from a range of viewing angles, step 64 of rendering the 2D images and the Z-buffers of the LIO and of the MIO, step 65 of identifying the obscuring parts and adjusting the transparency of the LIO, and step 66 of fusing the MIO and the transparency-adjusted LIO into one image. Step 63 may involve, for example, selecting a single viewing angle for displaying one scene, or selecting a rotation axis plus a range of viewing angles plus a viewing angle increment plus the speed of the rotation for displaying a movie-like sequence of scenes from varying viewing angles. Step 65 involves determining whether or not the LIO obscures the view of the MIO. In particular, if the LIO comprises a plurality of objects, step 65 may involve determining which individual objects of the LIO obscure the view of the MIO. The other steps correspond to the engines of the system for image visualization shown in Figure 4 and were discussed in previous paragraphs. As in the case of the engines of the system, in some embodiments two or more steps can be combined, for example steps 65 and 66, or can be omitted, for example steps 65 and 66.
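The fusion step could, for example, be implemented as a simple per-pixel blend. The sketch below assumes greyscale 2D renderings with boolean coverage masks and the per-pixel LIO transparency map from the previous step; it is an illustration, not the implementation of the image fusion engine 46.

```python
import numpy as np

def fuse(mio_image: np.ndarray, mio_mask: np.ndarray,
         lio_image: np.ndarray, lio_mask: np.ndarray,
         lio_transparency: np.ndarray) -> np.ndarray:
    out = np.zeros_like(mio_image, dtype=float)
    # Attenuate the LIO by its transparency (1.0 makes it vanish entirely).
    out[lio_mask] = (1.0 - lio_transparency[lio_mask]) * lio_image[lio_mask]
    # Draw the MIO on top wherever it was rendered.
    out[mio_mask] = mio_image[mio_mask]
    return out
```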
In some embodiments the step of selecting the MIO can be based upon a property of the objects defined in the segmentation step. This pre-selected property can be decided by the user. For example, one can select the MIO by intensity thresholding; the transparency of the high-density structures that obscure it, such as bones, can then be adjusted. Alternatively, one can designate all blood vessels as the MIO, or just the blood vessels exhibiting the aneurysm or some other pathological or anatomical feature.

Figure 7 shows a block diagram of an image acquisition device 70 comprising the system 71 for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects according to the invention. Such an image acquisition device 70 further comprises an image acquisition apparatus 72. This image acquisition apparatus can transfer the acquired data to the image visualization system for visualization of the clinically interesting images. Alternatively, the image acquisition apparatus can comprise a data storage unit or an image preprocessing unit. The data acquired during a scan procedure can be stored in this storage unit. Optionally, the acquired data can be preprocessed, for example segmented, and the preprocessed data can then be transferred to the image visualization system. When all the data required for visualization of the clinically interesting images is ready for transmission to the system 71, this data can be sent to the image visualization system.
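The property-based selection of the MIO described above could, for instance, be realized by thresholding a per-object statistic. The sketch below picks from a segmentation label map every object whose mean voxel intensity exceeds a user-chosen threshold; the function name and the choice of the mean as the statistic are illustrative assumptions.

```python
import numpy as np

def select_objects_by_intensity(volume, labels, threshold):
    """Return the labels of all segmented objects whose mean voxel
    intensity exceeds the threshold (e.g. to single out high-density
    structures such as bone).

    volume: 3D intensity array; labels: 3D integer label map produced
    by the segmentation step, with 0 denoting background.
    """
    selected = []
    for obj in np.unique(labels):
        if obj == 0:
            continue                      # skip the background label
        if volume[labels == obj].mean() > threshold:
            selected.append(int(obj))
    return selected
```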
The order of steps in the described embodiments of the method of the current invention is not mandatory. A person skilled in the art may change the order of the steps, or perform steps concurrently using threading models, multi-processor systems or multiple processes, without departing from the concept intended by the current invention.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the system claims enumerating several means, several of these means can be embodied by one and the same item of hardware or of computer-readable software. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims

1. A system for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects, the system comprising:
- segmenting means (41) for segmenting a multidimensional image data set into the plurality of objects;
- first selecting means (42) for selecting the certain object from the plurality of objects;
- second selecting means (43; 44) for selecting a viewing angle from the range of viewing angles;
- identifying means (45) for identifying a view-obscuring object when the certain object is viewed from the viewing angle selected;
- transparency adjustment means (45) for changing the transparency of the view-obscuring object identified.
2. A system as claimed in claim 1, wherein the transparency adjustment means is characterized in that the transparency of a part of the view-obscuring object depends on the closeness of the part of the view-obscuring object to the certain object.
3. A system as claimed in claim 2, wherein the closeness is defined by a sequence of surroundings of the certain object.
4. A system as claimed in claim 2, wherein the closeness is defined by a distance function.
5. A system as claimed in claim 1, further comprising:
- resolution adjustment means (44; 46) for adjusting the resolution of a collection of objects.
6. A system for generating a sequence of images from a multidimensional data set, the sequence displaying the certain object from progressing viewing angles, the system comprising the system according to any one of the claims 1 to 5.
7. An image acquisition device (70) comprising the system (71) according to any one of the claims 1 to 6.
8. An image workstation comprising the system according to any one of the claims 1 to 6.
9. A method for visualizing a certain object from a range of viewing angles in a scene comprising a plurality of objects, the method comprising the steps of:
- segmenting a multidimensional image data set into the plurality of objects (61);
- selecting the certain object from the plurality of objects (62);
- selecting a viewing angle from the range of viewing angles (63);
- identifying a view-obscuring object, which obscures the view of the certain object when the certain object is viewed from the viewing angle selected;
- adjusting the transparency of the view-obscuring object identified (65).
10. A method as claimed in claim 9, wherein the selecting of the certain object from the plurality of objects is based upon a pre-selected property of the certain object.
11. A method as claimed in claim 9 or claim 10, wherein the adjusting the transparency step is characterized in that the transparency of a part of the view-obscuring object depends on the closeness of the part of the view-obscuring object to the certain object.
12. A method as claimed in claim 9 or claim 10, wherein the closeness is defined by a distance function.
13. A method as claimed in claim 9 or claim 10, further comprising the step (64; 66) of adjusting the resolution of a collection of objects.

14. A computer program product designed to perform the method as claimed in any one of the claims 9 to 13.
15. An information carrier comprising the computer program as claimed in claim 14.
PCT/IB2005/054282 2004-12-20 2005-12-16 Transparency change of view-obscuring objects WO2006067714A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP04106703.4 2004-12-20
EP04106703 2004-12-20

Publications (2)

Publication Number Publication Date
WO2006067714A2 true WO2006067714A2 (en) 2006-06-29
WO2006067714A3 WO2006067714A3 (en) 2006-08-31

Family

ID=36297351

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/054282 WO2006067714A2 (en) 2004-12-20 2005-12-16 Transparency change of view-obscuring objects

Country Status (1)

Country Link
WO (1) WO2006067714A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014180944A1 (en) * 2013-05-10 2014-11-13 Koninklijke Philips N.V. 3d modeled visualisation of a patient interface device fitted to a patient's face
TWI610270B (en) * 2008-09-25 2018-01-01 皇家飛利浦電子股份有限公司 Three dimensional image data processing
CN111080807A (en) * 2019-12-24 2020-04-28 北京法之运科技有限公司 Method for adjusting model transparency
US11006091B2 (en) 2018-11-27 2021-05-11 At&T Intellectual Property I, L.P. Opportunistic volumetric video editing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CSEBFALVI B ET AL: "Fast opacity control in rendering of volumetric CT data", WSCG'97, Fifth International Conference in Central Europe on Computer Graphics and Visualization '97, in cooperation with IFIP Working Group 5.10 on Computer Graphics and Virtual Worlds, Univ. West Bohemia, Plzen, Czech Republic, vol. 1, 1997, pages 79-87, XP002381684, ISBN: 80-7082-306-2 *
GIBSON S F F: "Using distance maps for accurate surface representation in sampled volumes", 1998 Symposium on Volume Visualization, Research Triangle Park, NC, 19-20 October 1998, ACM, New York, NY, 1998, pages 23-30, 163, XP002155117, ISBN: 1-58113-105-4 *
VIOLA I, KANITSAR A, GROLLER M E: "Importance-Driven Volume Rendering", IEEE Visualization 2004, Austin, TX, USA, 10-15 October 2004, IEEE, Piscataway, NJ, USA, pages 139-145, XP010903114, ISBN: 0-7803-8788-0, cited in the application *
LAMAR E ET AL: "Multiresolution techniques for interactive texture-based volume visualization", Visualization '99 Proceedings, San Francisco, CA, USA, 24-29 October 1999, IEEE, Piscataway, NJ, USA, pages 355-543, XP010365019, ISBN: 0-7803-5897-X *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI610270B (en) * 2008-09-25 2018-01-01 皇家飛利浦電子股份有限公司 Three dimensional image data processing
US10043304B2 (en) 2008-09-25 2018-08-07 Koninklijke Philips N.V. Three dimensional image data processing
WO2014180944A1 (en) * 2013-05-10 2014-11-13 Koninklijke Philips N.V. 3d modeled visualisation of a patient interface device fitted to a patient's face
US9811948B2 (en) 2013-05-10 2017-11-07 Koninklijke Philips N.V. 3D modeled visualisation of a patient interface device fitted to a patient's face
US11006091B2 (en) 2018-11-27 2021-05-11 At&T Intellectual Property I, L.P. Opportunistic volumetric video editing
US11431953B2 (en) 2018-11-27 2022-08-30 At&T Intellectual Property I, L.P. Opportunistic volumetric video editing
CN111080807A (en) * 2019-12-24 2020-04-28 北京法之运科技有限公司 Method for adjusting model transparency

Also Published As

Publication number Publication date
WO2006067714A3 (en) 2006-08-31

Similar Documents

Publication Publication Date Title
Kalkofen et al. Comprehensible visualization for augmented reality
US7889900B2 (en) Medical image viewing protocols
Bruckner et al. Enhancing depth-perception with flexible volumetric halos
Viola et al. Importance-driven feature enhancement in volume visualization
Viola et al. Importance-driven volume rendering
Kalkofen et al. Interactive focus and context visualization for augmented reality
US7924279B2 (en) Protocol-based volume visualization
US7889194B2 (en) System and method for in-context MPR visualization using virtual incision volume visualization
CN113436303A (en) Method of rendering a volume and embedding a surface in the volume
Haubner et al. Virtual reality in medicine-computer graphics and interaction techniques
Chen et al. Sketch-based Volumetric Seeded Region Growing.
WO2006067714A2 (en) Transparency change of view-obscuring objects
Englmeier et al. Hybrid rendering of multidimensional image data
Ylä-Jääski et al. Fast direct display of volume data for medical diagnosis
Debarba et al. Anatomic hepatectomy planning through mobile display visualization and interaction
Turlington et al. New techniques for efficient sliding thin-slab volume visualization
CA2365045A1 (en) Method for the detection of guns and ammunition in x-ray scans of containers for security assurance
Tory et al. Visualization of time-varying MRI data for MS lesion analysis
Bruckner et al. Illustrative focus+context approaches in interactive volume visualization
Corcoran et al. Perceptual enhancement of two-level volume rendering
Beyer Gpu-based multi-volume rendering of complex data in neuroscience and neurosurgery
Tam et al. Volume rendering of abdominal aortic aneurysms
Ropinski et al. Interactive importance-driven visualization techniques for medical volume data
Hermosilla et al. Uncertainty Visualization of Brain Fibers.
CA2365062A1 (en) Fast review of scanned baggage, and visualization and extraction of 3d objects of interest from the scanned baggage 3d dataset

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05850079

Country of ref document: EP

Kind code of ref document: A2