WO2008063081A2 - Method for visualising a three-dimensional image of a part of a human body - Google Patents

Method for visualising a three-dimensional image of a part of a human body

Info

Publication number
WO2008063081A2
WO2008063081A2 (PCT/NO2007/000413)
Authority
WO
WIPO (PCT)
Prior art keywords
volume
voxel
screen
viewing direction
maximum
Prior art date
Application number
PCT/NO2007/000413
Other languages
English (en)
Other versions
WO2008063081A3 (fr)
Inventor
Andreas Abildgaard
Original Assignee
Oslo Universitetssykehus Hf
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oslo Universitetssykehus Hf filed Critical Oslo Universitetssykehus Hf
Publication of WO2008063081A2 publication Critical patent/WO2008063081A2/fr
Publication of WO2008063081A3 publication Critical patent/WO2008063081A3/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08 Volume rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical

Definitions

  • the present invention relates to a method for visualising a set of voxel data on a screen, the screen being capable of providing a three-dimensional image (3D), the set of voxel data representing a three dimensional image of at least a part of a human body.
  • the invention also relates to a corresponding visualisation system, comprising a screen and a connected processing means.
  • the invention further relates to a computer program product for implementing the method on a visualisation system with a processing unit.
  • Medical imaging systems such as X-ray computerized axial tomography (CT) and magnetic resonance imaging are capable of producing exact cross-sectional image data that express the physical properties related to electron and nuclear density, respectively, of the human body. Reconstruction of three-dimensional (3D) images using collections of parallel two- dimensional images representing cross- sectional data has been applied in the medical field for some time.
  • Various rendering techniques are applied for three-dimensional display of medical images.
  • One such technique is surface rendering.
  • a limitation of this technique is that it does not adequately visualize the tissue within solid organs; it is optimised for visualisation of surfaces and boundaries.
  • Instead of overlaying surfaces using a complex model of three-dimensional data, volume rendering relies on the assumption that three-dimensional objects are composed of basic volumetric building blocks, so-called "voxels".
  • the voxels are three-dimensional analogs to the more familiar two-dimensional pixels (abbreviation for picture element) used for normal screens, and similarly a voxel has a spatial coordinate and associated voxel values like signal intensity, transparency, assigned colour, and so forth.
  • By using a set of voxel data it is possible by volume rendering to provide an image of the body part of the patient under examination by various volume rendering techniques such as image ordering (also called ray casting), object ordering (also called compositing or splatting), and domain rendering (by e.g. Fourier transformation).
  • Associated projection types for obtaining an image on a two- dimensional screen include maximum intensity projection (MIP), minimum intensity projection (MIN-IP), iso-surface projection, and transparency projections.
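As a simple illustration of these projection types (a sketch of this editor's own, not taken from the patent), the maximum and minimum intensity projections of a voxel volume stored as a NumPy array can be computed by reducing the array along the axis parallel to the viewing direction; the array shape and the synthetic CT-like values are assumptions made for the example only.

```python
# Illustrative sketch only (not from the patent): maximum and minimum intensity
# projections of a voxel volume, reduced along the axis parallel to the viewing direction.
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: keep the brightest voxel along the viewing axis."""
    return volume.max(axis=axis)

def min_ip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Minimum intensity projection: keep the darkest voxel along the viewing axis."""
    return volume.min(axis=axis)

# Hypothetical 64 x 128 x 128 voxel volume (depth, row, column) of CT-like values.
rng = np.random.default_rng(0)
volume = rng.normal(loc=-700.0, scale=50.0, size=(64, 128, 128))
print(mip(volume).shape)     # (128, 128) image for the screen
print(min_ip(volume).shape)  # (128, 128)
```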
  • volume rendering techniques have to be specially designed and tailored to the specific type of tissue and/or the searched abnormality in the tissue.
  • appropriate volume rendering settings or projection types for detection of pulmonary nodules in CT images or analysis of fractures in skeletal CT images may differ significantly.
  • In recent years, 3D displays for visualisation of radiological images have become available.
  • Some of the 3D display devices provide a true three-dimensional visualisation based on rapid cycling between multiple spatially different oriented views, see for instance WO 2007/119064 and WO 2005/112474 to Setred AS.
  • These display devices are "true" 3D displays, unlike the stereoscopic displays that have existed for several decades.
  • the stereoscopic displays require the use of special goggles ("glasses") with polarisation or special colours, whereas these modern autostereoscopic 3D display devices function without such requirements.
  • these improved 3D displays provide true 3D visualisation where the view changes e.g. when the observer moves his head horizontally.
  • 3D modelling of radiological image data sets for 2D display is typically applied for vascular structures such as arteries and veins, skeletal structures, the biliary tree and nerve paths in the central nervous system.
  • These anatomical structures are suited for modelling on a 2D screen because they represent long, often branching structures that interconnect and have defined surfaces, surrounded by zones that are not of interest and are shown only as "empty space” between the structures of interest.
  • the well defined borders between the structures of interest and the surrounding space are fundamental for the common techniques used for visualisation of 3D models on 2D screens. The most important depth cues in the models are:
  • an improved method for visualising a three-dimensional image of a human body part would be advantageous, and in particular a more efficient and/or reliable method that would improve the detection rate for pathology would be advantageous.
  • the invention preferably seeks to mitigate, alleviate or eliminate one or more of the above mentioned disadvantages singly or in any combination.
  • This object and several other objects are obtained in a first aspect of the invention by providing a method for visualising a set of voxel data on a screen, the screen being capable of providing a three-dimensional image (3D), the set of voxel data representing a three dimensional image of at least a part of a human body in a first volume (1V), the method comprising:
  • - providing a set of voxel data comprising spatial information for each voxel point, and, for each voxel point, a corresponding set of voxel values,
    - selecting a second volume (2V) from the first volume (1V), the second volume being a sub-volume of the first volume,
    - applying, on the sub-set of voxel data within the second volume (2V), a substantially continuous and spatially non-uniform modulation function (MF) on one or more type(s) of voxel values along a viewing direction, the applied modulation function (MF) being a direct function of the spatial coordinate along the viewing direction, and
    - generating an image on the three-dimensional (3D) screen along the viewing direction from the sub-set of modulated voxel data within the second volume (2V).
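The following is a minimal sketch of these steps, assuming the first volume (1V) is held as a NumPy array indexed (z, y, x) with z parallel to the viewing direction; the slab boundaries, the Gaussian shape of the modulation function and the use of a MIP for the final image generation are illustrative choices, not requirements of the wording above.

```python
# Minimal sketch of the claimed steps under the stated assumptions.
import numpy as np

def select_second_volume(first_volume: np.ndarray, z_start: int, z_stop: int) -> np.ndarray:
    """Select the second volume (2V) as a sub-range of the first volume (1V) along z."""
    return first_volume[z_start:z_stop]

def modulation_function(num_slices: int, sigma_fraction: float = 0.25) -> np.ndarray:
    """Substantially continuous, spatially non-uniform weight that is a direct
    function of the coordinate along the viewing direction (a Gaussian bell)."""
    z = np.arange(num_slices)
    centre = (num_slices - 1) / 2.0
    sigma = sigma_fraction * num_slices
    return np.exp(-0.5 * ((z - centre) / sigma) ** 2)

def generate_image(second_volume: np.ndarray) -> np.ndarray:
    """Modulate the signal intensity along z, then render along the viewing direction (MIP)."""
    weights = modulation_function(second_volume.shape[0])
    modulated = second_volume * weights[:, None, None]
    return modulated.max(axis=0)

# Hypothetical first volume: 120 slices of 128 x 128 voxels.
first_volume = np.random.default_rng(1).random((120, 128, 128))
image = generate_image(select_second_volume(first_volume, 40, 80))
print(image.shape)  # (128, 128)
```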
  • the invention is particularly, but not exclusively, advantageous for obtaining a relatively simple and effective method for searching for abnormalities or visualizing the anatomy in a human body part in a quite intuitive manner due to the non-uniform modulation of the second volume.
  • the present invention also provides an improved detection and identification of abnormal tissue structures and/or forms, such as pulmonary nodules, and various pathological changes in solid tissues, skeleton and vessels.
  • the improved depth perception and/or lowering of the amount of over-projection provided by the present invention may result in a better ability to detect focal pathology and other abnormalities than was hitherto possible with rendering methods within medical imaging on three-dimensional (3D) screens.
  • diagnostic procedures may be more efficient with the present invention as the non-uniform modulation within the second volume provides the user with an improved over-view of the part of the tissue being examined.
  • use of the current invention provides the user with visual clues to the position of the centre of the second volume relative to the first or total volume being studied, thereby improving the perception of the location of focal pathology along the viewing direction and also the accuracy of 3D navigation within the first volume.
  • the nonlinear modulation of signal intensity along the viewing direction may be used to highlight a central zone in the second volume (2V), thus providing visual clues to the specific location of the second volume within the first volume (IV). In the following this will also be denoted a so-called "focus zone”. This focus zone may be made narrow, and thus provide a detailed anatomical depiction of a localised part of the second volume (2V). This may be advantageous for analysis of pathology observed in the data.
  • the second volume (2V) may also be termed a "subvolume" within the context of the present invention. Particularly, the second volume (2V) may be relatively narrow, i.e. the second volume may extend over only a short distance along the viewing direction.
  • the second volume (2V) may be termed a "slab”.
  • the first volume (1V) may also be termed a "total volume" in the context of the present invention, though the first volume (1V) may still be a subset of an even larger volume.
  • the anatomical volume that shall be displayed is a subvolume of the total anatomical volume that has been covered by the preoperative radiological investigation.
  • the depicted subvolume can be made larger (along the viewing direction) than with the commonly employed techniques (MIP, volume rendering) for tissue rendering, because the signal modulation prevents the signal overflow that otherwise will tend to occur with large subvolumes.
  • the first volume (1V), or parts thereof, may be displayed on the screen together with the image generated from the second volume (2V) in order to improve e.g. navigation within the image data.
  • the second volume (2V) may be displaced, either rotationally or translationally, by a viewer along the viewing direction.
  • the viewer may initiate or directly control a displacement of the second volume.
  • the viewer may perform a rotation of the second volume (2V) around an axis positioned in a central position in the second volume (2V), or alternatively around an axis positioned in a central position relative to the three-dimensional (3D) screen seen by the viewer.
  • the second volume (2V) may be displaced by a viewer in a substantially continuous manner along the viewing direction.
  • the size of the second volume (2V) may be substantially unchanged during displacement by the viewer.
  • the so-called focus zone mentioned above is advantageous for this navigation, because it provides a visual clue to the position of the rotational axis when the second volume (2V) is rotated with respect to the first volume (1V).
  • Another advantage provided by this embodiment may be the possibility to visualise a large volume without signal overflow.
  • the size of the second volume (2V) that can be meaningfully visualised can be in the order of 2 to 3 times the size of a meaningful MIP subvolume.
  • the parts of the second volume that have e.g. a lower signal than the focus zone will be displayed less distinctly, but they still aid the viewer in the appreciation of the anatomy and of the location of these parts of the second volume (2V) relative to the focus zone.
  • the type of voxel value being modulated may be selected from: signal intensity, transparency, and colour value.
  • the non-uniform modulation function (MF) may have a maximum point or a minimum point along the viewing direction. Possibly, the maximum point or the minimum point can define a central position of the second volume if the modulation function is symmetric around such a central position. More particularly, the maximum point or the minimum point may not be positioned at an end or a front portion of the second volume; preferably the maximum point or the minimum point may be positioned within a central portion constituting 20%, 30%, 40% or 50% of the second volume (2V).
  • the non-uniform modulation function (MF) may have a substantially symmetric shape around the maximum point or the minimum point, because the present inventor has found that this provides the best "focus" zone. More preferably, the non-uniform modulation function (MF) has a maximum point along the viewing direction, and the modulation function is constantly decaying away from the maximum point.
  • the maximum/minimum point may be an absolute maximum/minimum.
  • the performed modulation may be a so-called "soft" modulation, meaning that the modulation is relatively slowly changing the relevant voxel value within the second volume (2V). More specifically, the ratio of the derivative of the modulation function (MF) to the modulation function (MF) may be maximum 5%, maximum 10%, maximum 20%, maximum 30%, maximum 40%, or maximum 50%. Alternatively or additionally, the second derivative of the modulation function may be constantly or monotonously increasing/decreasing. Some preferred shapes of the non-uniform modulation function may be shapes chosen from: a bell shape, a Gaussian shape, or a parabolic shape. Other similar mathematical functions may readily be found to be suitable for use within the context of the present invention.
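As an illustration of the "soft" modulation criterion (a sketch with assumed Gaussian parameters on a normalised coordinate, not taken from the patent), the ratio of the derivative of a bell-shaped modulation function to the function itself can be evaluated numerically and compared against a chosen bound:

```python
# Sketch of the "soft" modulation check under assumed example values.
import numpy as np

def gaussian_mf(z: np.ndarray, centre: float, sigma: float) -> np.ndarray:
    return np.exp(-0.5 * ((z - centre) / sigma) ** 2)

z = np.linspace(0.0, 1.0, 41)          # normalised coordinate along the viewing direction
mf = gaussian_mf(z, centre=0.5, sigma=0.2)
dmf_dz = np.gradient(mf, z)            # numerical derivative of MF
ratio = np.abs(dmf_dz / mf)            # |MF'(z)| / MF(z), per unit of the normalised coordinate

# Compare against a chosen bound (e.g. 50% per voxel step after rescaling z to voxel units).
print(f"max |MF'/MF| = {ratio.max():.2f}")
```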
  • the modulation function (MF) may be adapted to perform a local averaging process within the second volume by smoothing the one or more voxel value(s) within the second volume. This is particularly beneficial for imaging of solid or dense tissue.
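A minimal sketch of such a local averaging step, assuming SciPy is available and using a small Gaussian kernel as the smoothing operator (the kernel width is an arbitrary example value):

```python
# Local averaging of voxel values within the second volume (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

second_volume = np.random.default_rng(2).random((40, 128, 128))
smoothed = gaussian_filter(second_volume, sigma=1.0)  # smooth voxel values locally
print(smoothed.shape)  # unchanged shape, locally averaged values
```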
  • the set of voxel data may comprise a plurality of slices, where each slice represents a cross-sectional portion of the part of the human body being examined.
  • the second volume (2V) may then comprise at least 10 slices, preferably at least 20 slices, or even more preferably at least 30 slices.
  • the second volume (2V) may comprise a maximum of 50 slices, preferably a maximum of 80 slices, or even more preferably a maximum of 100 slices.
  • a slice may represent a cross-sectional portion with a thickness of approximately 2 mm, preferably a thickness of approximately 1 mm, or even more preferably a thickness of approximately 0.5 mm.
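In terms of slices, selecting the second volume then amounts to picking a contiguous block of slices around a chosen centre; the helper below is a hypothetical illustration using the preferred value of 30 slices quoted above, not code from the patent.

```python
# Hypothetical slab selection from a stack of cross-sectional slices.
import numpy as np

def slab_from_slices(slices: np.ndarray, centre_index: int, num_slices: int = 30) -> np.ndarray:
    """Return num_slices contiguous slices centred on centre_index, clipped to the stack."""
    half = num_slices // 2
    start = max(0, centre_index - half)
    stop = min(slices.shape[0], start + num_slices)
    return slices[start:stop]

stack = np.zeros((300, 64, 64), dtype=np.int16)   # e.g. 300 CT slices, 1 mm apart
slab = slab_from_slices(stack, centre_index=150)  # 30 slices, i.e. roughly a 30 mm thick slab
print(slab.shape)  # (30, 64, 64)
```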
  • the part of the human body being examined may contain substantial amounts of solid tissue. More specifically, solid tissue may be operationally defined as tissue where every or nearly every voxel point should be studied in order to examine the solid tissue appropriately.
  • the present invention in particular provides an efficient tool for visualising solid tissue due to the improved depth perception and/or the lowering of the amount of over-projection.
  • the part of the human body may therefore be an organ parenchyma selected from: liver, pancreas, spleen, and kidney.
  • the part of the human body may be selected from: the musculoskeletal system, the central nervous system, the cardiovascular system, urinary tract, lymphatic system, and bile ducts.
  • the image may be generated on the screen by a volume-rendering method e.g. image ordering (also called ray casting), object ordering (also called compositing or splatting), or domain rendering (by e.g. Fourier transformation). Also methods like maximum intensity projection (MIP), minimum intensity projection (MIN-IP), iso-surface projection, and transparency projections can be applied.
  • the three-dimensional (3D) screen may also be capable of providing a two-dimensional image (2D) if needed.
  • Appropriate types of 3D screens include, but are not limited to, stereoscopic display devices, auto- stereoscopic display devices, holographic display devices, volumetric display devices.
  • Three-dimensional imaging (3D) may particularly advantageously be applied within the visualising of substantially solid tissue, because the present invention provides an improved perception of depth and/or lowering of the amount of over-projection in the image.
  • a 3D screen provides direct appreciation of signal positions along the depth direction in the voxel dataset. Hence there is less need for visual clues compared to the case where 2D screens are used for 3D dataset visualisation.
  • a specific feature of a 3D display, as compared to 2D displays, is the ability to provide meaningful visualisation of selected small volumes from anatomical structures that are more continuous and solid, such as tissues and organs. This feature may also make it possible to navigate within the voxel dataset obtained from solid tissues.
  • the three-dimensional image (3D) screen may be an autostereoscopic display device and comprises a switchable aperture with horizontal slits positioned in front of a two-dimensional (2D) screen.
  • the three-dimensional image (3D) screen is an autostereoscopic display device and comprises at least a first and a second switchable aperture positioned in front of a two-dimensional (2D) screen.
  • For more details on this type of 3D screen, the reader is referred to WO 2007/119064.
  • This may provide a kind of background image for the image generated from the second volume (2V), which is particularly advantageous for navigation within a large and/or complex image data set, especially with solid or dense tissue.
  • the set of voxel data may be provided by an imaging technique such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), ultrasound scanning, rotational angiography, positron emission tomography computed tomography (PET-CT), positron emission tomography magnetic resonance (PET-MR), and tomosynthesis. Also other combinations of these imaging modalities may be applied within the context of the present invention.
  • Although the present invention is aimed at medical imaging, it is contemplated that the present invention may additionally be exploited within other technical fields where computer-aided reconstruction and design are applied. Such fields include, but are not limited to, architecture, geology, mechanical designing, and so forth.
  • the part of the human body represented by the set of voxel data in the first volume could instead be a building part, a geological body or object, a mechanical body or object, and so forth.
  • the invention relates to a visualisation system for visualising a set of voxel data on a screen, the screen being capable of providing a three-dimensional image (3D), the set of voxel data representing a three dimensional image of at least a part of a human body in a first volume (1V), the system comprising:
  • - recording means for storing a set of voxel data comprising spatial information for each voxel point, and, for each voxel point, a corresponding set of voxel values
  • - selecting means for selecting a second volume (2V) from the first volume (1V), the second volume being a sub-volume of the first volume
  • - processing means for applying, on the sub-set of voxel data within the second volume (2V), a substantially continuous and spatially non-uniform modulation function (MF) on one or more type(s) of voxel values along a viewing direction, the applied modulation function (MF) being a direct function of the spatial coordinate along the viewing direction, and
    - image generating means capable of generating an image on the three-dimensional (3D) screen along the viewing direction from the sub-set of modulated voxel data within the second volume (2V).
  • the invention, in a third aspect, relates to a computer program product being adapted to enable a computer system comprising at least one computer having data storage means associated therewith to control a visualisation system according to the second aspect of the invention.
  • This aspect of the invention is particularly, but not exclusively, advantageous in that the present invention may be implemented by a computer program product enabling a computer system or similar processing means to perform the operations of the first aspect of the invention.
  • some known visualisation systems may be changed to operate according to the present invention by installing a computer program product on a computer system controlling the said visualisation system.
  • Such a computer program product may be provided on any kind of computer readable medium, e.g. magnetically or optically based medium, or through a computer based network, e.g. the Internet.
  • the first, second and third aspect of the present invention may each be combined with any of the other aspects.
  • Figure 1 is a schematic drawing of a patient being inserted into a scanning device according to the present invention
  • FIG. 2 is a schematic illustration of a scanning device for obtaining computed tomography (CT) images by attenuation of X-rays from an X-ray source that rotates around the patient,
  • Figure 3 is a schematic illustration of how a sequence of so-called slices is combined into an image on a screen
  • Figure 4 is a 3D reconstruction of cross-sectional CT images of the lungs in a patient made by a maximum intensity projection (MIP),
  • Figure 5 is a schematic drawing of a method for visualising a set of voxel data on a screen according to the present invention
  • Figure 6 is an example of a modulation function for performing a substantially continuous and spatially non-uniform modulation according to the present invention.
  • Figure 7 is also a cross-sectional CT MIP image, similar to Figure 4, of the lungs in a patient but the modulation function of Figure 6 is applied according to the present invention.
  • a "voxel" is a unit cube with a unit vector along the X-axis, a unit vector along the Y-axis, and a unit vector along the Z-axis.
  • a "part of the human body” is an anatomical region of interest.
  • the part of the human body is represented as a plurality of voxels having voxel signal intensities (e.g. CT values) within a specified intensity range and included in a spatial region specified for the said part of the human body.
  • FIG. 1 is a schematic drawing of a patient 50 being inserted into or positioned in a scanning device 100 according to the present invention.
  • a part 51, e.g. the head as shown, of the patient 50 is inserted into the scanning device 100 for medical imaging, though the entire patient could also be inserted in a scanning device if designed accordingly.
  • the scanning device 100 could be any type of medical imaging device such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), ultrasound scanning, and rotational angiography, but in the following the teaching of the present invention will be demonstrated with respect to X-ray attenuated computed tomography (CT) for illustrative purposes.
  • FIG. 2 is a schematic illustration of a scanning device 100 for obtaining computed tomography (CT) from a centrally positioned patient 50.
  • the CT scanning device 100 comprises an X-ray source 110 and a two-dimensional X-ray detector 120 for measuring the attenuation of the X-ray, indicated by X, through the patient 50 at the shown peripheral position of the source 110 and the detector 120.
  • the X-ray source 110 and a two-dimensional X-ray detector 120 are rotationally displaceable around the patient 50 as indicated by the two curved arrows to a desirable peripheral position V.
  • Figure 3 is a schematic illustration of how a sequence of so-called slices S1, S2, S3, and S4 is combined into an image 300 on a screen 200 for CT imaging of lungs.
  • the slices S1, S2, S3, and S4 are typically two-dimensional images acquired by a scanning device 100 as shown in Figures 1 and 2. Each slice S1, S2, S3, or S4 thereby represents a cross-sectional set of voxel data, i.e. spatial coordinates with corresponding voxel (CT) values.
  • the thickness of the slices may be from 0.1 mm to 2 mm. For illustrative purposes only four slices are shown in Figure 3, but any number of slices, which the scanning device 100 can record, may be used in the context of the present invention.
  • the set of slices S1, S2, S3, and S4 constitutes a set of voxel data.
  • the set of voxel data represents a three dimensional image of at least a part of a human body, i.e. the lungs, in a first volume (1V).
  • By volume rendering, a projection of the set of voxel data along the viewing direction 310 can be generated and shown as an image 300 on the three-dimensional (3D) screen 200.
  • An autostereoscopic 3D display device can be implemented by synchronising a high frame rate screen for displaying a two dimensional image with a fast switching shutter. If each frame on the screen is synchronised with a corresponding slit, and the images and slits are run at sufficient speeds to avoid flicker, typically 50 Hz or above, then a 3D image can be created.
  • When the high frame rate screen is viewed through one open slit of the shutter, each eye sees a different part of the screen, and hence each eye sees a different part of an image displayed on the screen.
  • One image is displayed on the screen whilst the corresponding slit is open.
  • When a second slit is open, another corresponding frame is displayed.
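The synchronisation described above can be sketched conceptually as a loop that opens one slit while the matching view frame is shown, cycling through all slits fast enough to avoid flicker; the `display_frame` and `open_slit` hooks and the numbers are assumptions for illustration, so this is a sketch and not the Setred implementation.

```python
# Conceptual sketch only: cycle through shutter slits synchronously with frames.
import time

REFRESH_HZ = 60     # full cycles per second; 50 Hz or above to avoid flicker
NUM_SLITS = 8       # number of aperture slits / pre-rendered views

def display_frame(index: int) -> None:
    """Placeholder for the driver of the high frame rate 2D screen."""

def open_slit(index: int) -> None:
    """Placeholder for the driver of the switchable aperture (shutter)."""

def run_one_cycle() -> None:
    frame_period = 1.0 / (REFRESH_HZ * NUM_SLITS)   # one frame per slit per cycle
    for slit in range(NUM_SLITS):
        open_slit(slit)        # open exactly one slit...
        display_frame(slit)    # ...while the corresponding frame is on the 2D screen
        time.sleep(frame_period)

run_one_cycle()
```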
  • the arrow 310 indicates a viewing direction of the image 300.
  • the volume rendering is performed along the arrow 310 by e.g. image ordering, object ordering or domain rendering or any other suitable volume rendering.
  • a transfer function can be applied on the set of voxel data so as to define colour and opacity and other relevant parameters for the imaging process.
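A minimal sketch of such a transfer function, assuming voxel values in Hounsfield units and simple linear ramps for opacity and grey level (the breakpoints are example values, not from the patent):

```python
# Illustrative transfer function mapping voxel values to opacity and grey level.
import numpy as np

def transfer_function(hu: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (opacity, grey) in [0, 1] for each voxel value."""
    opacity = np.clip((hu + 600.0) / 1000.0, 0.0, 1.0)   # transparent below -600 HU, opaque above 400 HU
    grey = np.clip((hu + 1000.0) / 2000.0, 0.0, 1.0)     # grey level over a [-1000, 1000] HU window
    return opacity, grey

opacity, grey = transfer_function(np.array([-1000.0, -200.0, 50.0, 700.0]))
print(opacity, grey)
```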
  • Figure 4 is a 3D reconstruction of cross-sectional CT images of the lungs in a patient made by a maximum intensity projection (MIP), as seen on a two-dimensional screen because a three-dimensional view cannot easily be represented on paper.
  • the reconstruction is made from 20 axial CT images with a distance of 1 mm.
  • the voxel value with the maximum intensity along a direction parallel to the viewing direction 310 is set as the sample value of the image 300 for that direction.
  • the MIP technique is simple and requires no depth cues.
  • MIP is typically used for visualising long, thin objects such as blood vessels or for visualising a sub-volume of a lung tissue.
  • Figure 5 is a schematic drawing of a method for visualising a set of voxel data on a screen 200, the set of voxel data representing a three dimensional image of at least a part of a human body 50 in a first volume 1V.
  • a set of voxel data comprising spatial information for each voxel point is provided, and, for each voxel point, a corresponding set of voxel values.
  • a second volume 2V is selected from the first volume 1V, the second volume thereby being a sub-volume of the first volume as shown in Figure 5.
  • Then, on the sub-set of voxel data within the second volume 2V, there is applied a substantially continuous and spatially non-uniform modulation function MF on one or more type(s) of voxel values, e.g. voxel intensity or transparency, along a viewing direction 310.
  • the applied modulation function MF is a direct function of the spatial coordinate along the viewing direction 310.
  • Finally, there is generated an image 300 on the three-dimensional (3D) screen 200 along the viewing direction 310 from the sub-set of modulated voxel data within the second volume 2V, which can be seen and examined by the viewer 400.
  • the second volume 2V can be displaced by a viewer 400 along the viewing direction 310.
  • the viewer may use a computer interaction device such as a mouse or keypad for displacing the second volume 2V.
  • the second volume can thereby be displaced by a viewer 400 in a substantially continuous manner along the viewing direction 310.
  • the size of the second volume 2V is substantially unchanged. It should be mentioned that the viewing direction can also be changed by a viewer 400.
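A small sketch of this interaction, in which hypothetical viewer input events (e.g. mouse-wheel steps) move the centre of the second volume along the viewing direction while its extent in slices stays unchanged; after each displacement the image would be regenerated as in the earlier sketch:

```python
# Sketch of the interactive displacement of the second volume (illustrative only).
def displace_second_volume(centre_index: int, step: int, num_slices: int, total_slices: int) -> int:
    """Move the slab centre by `step` slices, keeping the whole slab inside the first volume."""
    half = num_slices // 2
    return max(half, min(total_slices - half - 1, centre_index + step))

centre = 150
for wheel_step in (+5, +5, -3):   # e.g. mouse-wheel events from the viewer
    centre = displace_second_volume(centre, wheel_step, num_slices=30, total_slices=300)
    print("second volume now centred on slice", centre)
    # ...regenerate the image from the displaced second volume here...
```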
  • Figure 6 is an example of a modulation function MF for performing a substantially continuous and spatially non-uniform modulation according to the present invention.
  • the horizontal axis is a spatial coordinate designated X which is parallel to the viewing direction 310 in question.
  • the vertical axis indicates the relative reduction of the voxel value being modulated, thus 100% relative reduction corresponds to complete reduction, whereas 0% relative reduction corresponds to no reduction of the voxel value.
  • Various mathematical functions are applicable within the teaching of the present invention. However, tests performed by the inventor indicate that bell-shaped functions, either inverted or normal, work well for modulation of the voxel values within the second volume 2V.
  • the modulation function MF is smooth and gradually changing within the second volume 2V. While the set of voxel data is inherently discrete, and therefore discontinuous, due to the limited resolution and/or sampling of the imaging technique applied, the modulation function can be substantially continuous and modify the voxel values at the discrete spatial positions where the modulation function MF is applied.
  • Figure 7 is also a cross-sectional CT MIP image similar to Figure 4 (thus also displayed on a two-dimensional screen for illustrative purposes) of the lungs of a patient but additionally the modulation function MF of Figure 6 is applied according to the present invention.
  • the present invention in particular provides a relative enhancement of the centre of the second volume 2V while preserving the visualising of depth in the image 300.
  • the improved image quality enables better identification of nodules in the lungs.
  • two nodules indicated by arrows Al and A2 are clearly visible, whereas these nodules are hardly visible in Figure 4 which is a conventional CT MIP image.
  • the present invention provides 1) more true positive findings of lesions, 2) a comparable number of false positive findings of lesions, and 3) a substantially lower number of false negative findings of lesions in the lungs, as compared to the conventional MIP methods.
  • Table 1 only represents a preliminary test, but more substantiated tests have supported the results of Table 1; Andreas Abildgaard et al., Improved visualisation of artificial pulmonary nodules with a new subvolume rendering technique, 57th Nordic Conference of Radiology and 18th Nordic Conference of Radiography, Malmö, Sweden, 9-12 May, 2007.
  • this can be achieved by either looking at separate, thin images generated from the voxel data that depict the anatomically relevant structures in the slice, or by looking at 3D models of the vessels or bile ducts.
  • the invention can be implemented in any suitable form including hardware, software, firmware or any combination of these.
  • the invention or some features of the invention can be implemented as computer software running on one or more data processors and/or digital signal processors.
  • the elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit, or may be physically and functionally distributed between different units and processors.

Abstract

The invention relates to a method for visualising a set of voxel data on a three-dimensional screen. The method comprises providing a set of voxel data comprising spatial information for each voxel point and, for each voxel point, a corresponding set of voxel values. It also comprises selecting a second volume (2V) from a first volume (1V), the second volume being a sub-volume of the first. Then, on the sub-set of voxel data within the second volume (2V), a substantially continuous and spatially non-uniform modulation function (MF) is applied to a voxel value along a viewing direction, the applied modulation function (MF) being a direct function of the spatial coordinate along the viewing direction. Finally, an image is generated on the three-dimensional screen along the viewing direction from the sub-set of modulated voxel data within the second volume (2V). This method is relatively simple and effective for searching for abnormalities or visualising the anatomy of a part of a human body in a quite intuitive manner by means of the non-uniform modulation of the second volume. According to preliminary tests performed by the inventor, the invention also provides improved detection and identification of abnormal tissue structures and/or forms, such as pulmonary nodules, and various pathological changes in solid tissues, the skeleton and vessels.
PCT/NO2007/000413 2006-11-23 2007-11-22 Method for visualising a three-dimensional image of a part of a human body WO2008063081A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DKPA200601542 2006-11-23
DKPA200601542 2006-11-23

Publications (2)

Publication Number Publication Date
WO2008063081A2 true WO2008063081A2 (fr) 2008-05-29
WO2008063081A3 WO2008063081A3 (fr) 2009-03-05

Family

ID=37846223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NO2007/000413 WO2008063081A2 (fr) 2006-11-23 2007-11-22 Procédé de visualisation d'une image tridimensionnelle d'une partie de corps humain

Country Status (1)

Country Link
WO (1) WO2008063081A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016161198A1 (fr) * 2015-04-02 2016-10-06 Hedronx Inc. Génération de modèle tridimensionnel virtuel sur la base de modèles d'hexaèdre virtuel
KR102104889B1 (ko) * 2019-09-30 2020-04-27 이명학 가상 입체면 모델에 기초한 3차원 모델 데이터 생성 구현 방법 및 시스템

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5986662A (en) * 1996-10-16 1999-11-16 Vital Images, Inc. Advanced diagnostic viewer employing automated protocol selection for volume-rendered imaging

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5986662A (en) * 1996-10-16 1999-11-16 Vital Images, Inc. Advanced diagnostic viewer employing automated protocol selection for volume-rendered imaging

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
JIANLONG ZHOU ET AL: "Focal region-guided feature-based volume rendering" 3D DATA PROCESSING VISUALIZATION AND TRANSMISSION, 2002. PROCEEDINGS. FIRST INTERNATIONAL SYMPOSIUM ON JUNE 19-21, 2002, PISCATAWAY, NJ, USA,IEEE, 19 June 2002 (2002-06-19), pages 87-90, XP010596643 ISBN: 0-7695-1521-4 *
MING-YUEN CHAN ET AL: "MIP-Guided Vascular Image Visualization with Multi-Dimensional Transfer Function" ADVANCES IN COMPUTER GRAPHICS LECTURE NOTES IN COMPUTER SCIENCE;;LNCS, SPRINGER, BERLIN, DE, vol. 4035, 1 January 2006 (2006-01-01), pages 372-384, XP019041351 ISBN: 978-3-540-35638-7 *
REITINGER ET AL.: "User-Centric Transfer Function Specification in Augmented Reality" JOURNAL OF WSCG, vol. 12, no. 1-3, 2 February 2004 (2004-02-02), XP002425790 *
TAPPENBECK A. , PREIM B. AND DICKEN V.: "DISTANCE-BASED TRANSFER FUNCTION DESIGN: SPECIFICATION METHODS AND APPLICATIONS" SIMULATION UND VISUALISIERUNG 2006 (SIMVIS 2006), 2 March 2006 (2006-03-02), XP002511517 SCS-Verlag Magdeburg, Germany *
TURLINGTON J Z ET AL: "New Techniques for Efficient Sliding Thin-Slab Volume Visualization" IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 8, August 2001 (2001-08), pages 823-835, XP011036127 ISSN: 0278-0062 *
YEN S Y ET AL INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS ASSOCIATION FOR COMPUTING MACHINERY: "FAST SLIDING THIN SLAB VOLUME VISUALIZATION" PROCEEDINGS OF THE 1996 SYMPOSIUM ON VOLUME VISUALIZATION. SAN FRANCISCO, OCT. 28 - 29, 1996, PROCEEDINGS OF THE SYMPOSIUM ON VOLUME VISUALIZATION, NEW YORK, IEEE/ACM, US, 28 October 1996 (1996-10-28), pages 79-86, XP000724432 ISBN: 0-89791-865-7 *
ZHOU J., DÖRING A., TÖNNIES K.D.: "Distance Transfer Function Based Rendering" TECHNICAL REPORT - INSTITUTE FOR SIMULATION AND GRAPHICS, UNIVERSITY OF MAGDEBURG, GERMANY, February 2004 (2004-02), XP002425789 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016161198A1 (fr) * 2015-04-02 2016-10-06 Hedronx Inc. Génération de modèle tridimensionnel virtuel sur la base de modèles d'hexaèdre virtuel
US9646411B2 (en) 2015-04-02 2017-05-09 Hedronx Inc. Virtual three-dimensional model generation based on virtual hexahedron models
KR20170134592A (ko) * 2015-04-02 2017-12-06 헤드론엑스 인코포레이티드 가상 육면체 모델에 기초한 가상 3차원 모델 생성
US9928666B2 (en) 2015-04-02 2018-03-27 Hedronx Inc. Virtual three-dimensional model generation based on virtual hexahedron models
KR102070945B1 (ko) 2015-04-02 2020-01-29 이명학 가상 육면체 모델에 기초한 가상 3차원 모델 생성
KR102104889B1 (ko) * 2019-09-30 2020-04-27 이명학 가상 입체면 모델에 기초한 3차원 모델 데이터 생성 구현 방법 및 시스템

Also Published As

Publication number Publication date
WO2008063081A3 (fr) 2009-03-05

Similar Documents

Publication Publication Date Title
CN1864634B (zh) 扩大对象区域的二维图像的显示范围的方法
JP4421016B2 (ja) 医用画像処理装置
CN1864633B (zh) 扩大对象区域的立体图像的显示范围的方法
JP6211764B2 (ja) 画像処理システム及び方法
KR102109588B1 (ko) 유방 조직 이미지를 프로세싱하고, 디스플레잉하고, 네비게이팅하기 위한 방법
JP6058286B2 (ja) 医用画像診断装置、医用画像処理装置及び方法
Lindseth et al. Multimodal image fusion in ultrasound-based neuronavigation: improving overview and interpretation by integrating preoperative MRI with intraoperative 3D ultrasound
US20080167551A1 (en) Feature emphasis and contextual cutaways for image visualization
US20050119550A1 (en) System and methods for screening a luminal organ ("lumen viewer")
Goldwasser et al. Techniques for the rapid display and manipulation of 3-D biomedical data
JP2020506452A (ja) Hmdsに基づく医学画像形成装置
EP3561768B1 (fr) Visualisation de fissures pulmonaires en imagerie médicale
EP1743302A1 (fr) Systeme et methode pour creer une vue panoramique d'une image volumetrique
JP6430149B2 (ja) 医用画像処理装置
WO2013012042A1 (fr) Système, dispositif et procédé de traitement d'image, et dispositif de diagnostic par imagerie médicale
Wieczorek et al. GPU-accelerated rendering for medical augmented reality in minimally-invasive procedures.
US20140015836A1 (en) System and method for generating and displaying a 2d projection from a 3d or 4d dataset
Stadie et al. Mono-stereo-autostereo: the evolution of 3-dimensional neurosurgical planning
van Beurden et al. Stereoscopic displays in medical domains: a review of perception and performance effects
Vogt Real-Time Augmented Reality for Image-Guided Interventions
WO2008063081A2 (fr) Procédé de visualisation d'une image tridimensionnelle d'une partie de corps humain
Fellner et al. Stereoscopic volume rendering of medical imaging data for the general public
Rahman et al. A framework to visualize 3d breast tumor using x-ray vision technique in mobile augmented reality
Cui et al. Anatomy visualizations using stereopsis: current methodologies in developing stereoscopic virtual models in anatomical education
Abhari et al. Use of a mixed-reality system to improve the planning of brain tumour resections: preliminary results

Legal Events

Date Code Title Description
NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07860900

Country of ref document: EP

Kind code of ref document: A2