EP1259940A2 - Method and system for selecting and displaying a portion of an image of a body - Google Patents

Method and system for selecting and displaying a portion of an image of a body

Info

Publication number
EP1259940A2
EP1259940A2 (application EP01908094A)
Authority
EP
European Patent Office
Prior art keywords
pointer
viewpoint
stylus
target point
image volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01908094A
Other languages
German (de)
French (fr)
Inventor
Eyal Zadicario
Roni Yagel
David Freundlich
Yoav Modan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insightec-Image Guided Treatment Ltd
Original Assignee
Insightec-Image Guided Treatment Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insightec-Image Guided Treatment Ltd filed Critical Insightec-Image Guided Treatment Ltd
Publication of EP1259940A2

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/41 - Medical


Abstract

A method and system for displaying an image volume of a body such as the body of a surgical patient. The system of the present invention includes a memory for storing the image volume, a monitor for displaying the image volume, a pointer for selecting portions of the image volume to display with reference to the body, a mechanism for determining the position and orientation of the pointer, and a processor for overall control. The image volume is registered with the body. A portion, such as a slice, of the image volume is defined by a viewpoint that is in turn defined by positioning and orienting the pointer relative to the body. For example, the viewpoint may be defined by a target point on the body and by the orientation of the pointer. The tip of the pointer is pointed at the target point, or is placed in contact with the target point, and the pointer is pivoted. A slice of the image volume is displayed in accordance with this viewpoint. Registration of the image volume with the body is facilitated by using the pointer to sample the coordinates of points on the surface of the body. Alternatively, a suitably equipped pointer is used to sample the coordinates of points within the body, for example by ultrasound echolocation. Preferably, the pointer also is used for overall processor control.

Description

METHOD AND SYSTEM FOR SELECTING AND DISPLAYING A PORTION OF AN IMAGE OF A BODY
FIELD AND BACKGROUND OF THE INVENTION
The present invention relates to interactive displays and, more particularly, to a
method and system for displaying a portion of a previously obtained image volume of
the interior of a body with reference to the body itself.

Following the discovery of X-rays at the end of the Nineteenth Century, images of the interiors of patients have been used by physicians as diagnostic tools. In particular, such images have been used to guide surgical procedures. Before the development of digital imaging modalities such as CAT scans and MRI scans, surgeons typically brought X-ray photographs of patients into the operating room, hung the photographs on light boxes and used the photographs as guides in surgery. More recently, it has become possible to acquire 3D digital image volumes of patients. Slices of these image volumes commonly are used to guide surgical procedures, but in a manner derived from the old days of X-ray film: many successive slices of the image volume, taken in a predetermined orientation relative to the image volume, are posted on a chart in the operating room, and the surgeon performs surgery on the patient with
reference to these thumbnail images.
It is known to navigate a medical instrument, such as a catheter, an endoscope
or a biopsy needle, through the body of a patient, with reference to a 3D digital image
volume. For example, Acker et al., in US 5,558,091, attach a fiducial marker to the
body of a patient and acquire a 3D digital image volume of a portion of the patient
together with the fiducial marker. The digital image volume is registered to the
patient with reference to the fiducial marker. The medical instrument is provided with a magnetic field sensor that is used to measure the position and orientation of the
medical instrument as the medical instrument is navigated through the body of the
patient. The fiducial marker includes a similar magnetic field sensor so that an icon representing the medical instrument can be displayed, superposed on a display of the
3D digital image, to represent the true position and orientation of the medical
instrument within the body of the patient. In particular, a slice of the 3D digital image may be extracted for display from the 3D digital image in any convenient (vertical, horizontal, oblique) orientation relative to the (typically parallelepipedal) 3D data volume.
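Extracting a slice in an arbitrary orientation from a voxel volume, as described above, can be sketched in code. The following Python sketch is illustrative only and not part of the patent disclosure; the function name, the axis conventions, and the use of SciPy's trilinear resampling are all this sketch's assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, center, normal, size=64, spacing=1.0):
    """Sample an arbitrarily oriented slice from a 3D voxel volume.

    volume  -- 3D numpy array of voxels (z, y, x index order assumed)
    center  -- 3-vector, voxel coordinates of the slice center
    normal  -- 3-vector, slice normal (the display axis)
    Returns a (size, size) array sampled by trilinear interpolation.
    """
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    # Build two in-plane unit vectors orthogonal to the normal.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n @ helper) > 0.9:          # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    # Grid of sample points expressed in volume (voxel) coordinates.
    ij = (np.arange(size) - size / 2) * spacing
    gi, gj = np.meshgrid(ij, ij, indexing="ij")
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * gi + v[:, None, None] * gj)
    # order=1 selects trilinear interpolation of the voxels.
    return map_coordinates(volume, pts, order=1, mode="nearest")
```

Because the slice plane generally does not coincide with the voxel grid, each displayed pixel is interpolated from neighboring voxels, which is what `order=1` provides here.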
In many surgical procedures, notably orthopedic procedures such as total hip replacement and total knee replacement, in which the surgeon must cut through the patient's bones at a precise angle, it would be useful to have a similarly flexible method of displaying slices of 3D digital image volumes, rather than forcing the surgeon to visualize the surgical target in three dimensions based on slices of the digital image volume taken in a predetermined, fixed orientation. There is thus a widely recognized need for, and it would be highly advantageous to have, a method and system for interactively selecting and displaying slices of a 3D digital image volume of the body of a patient with reference to the patient's body itself.
SUMMARY OF THE INVENTION
According to the present invention there is provided a method of displaying an
image volume of a body having a surface, including the steps of: (a) registering the
image volume with the body; (b) selecting a viewpoint with reference to the body; and
(c) displaying a first portion of the image volume with reference to the viewpoint. According to the present invention there is provided a system for selecting a
portion of an image volume of a body and for displaying the selected portion,
including: (a) a pointer for selecting a viewpoint with reference to the body; (b) a
memory for storing the image volume; (c) a processor for retrieving the portion of the image volume from the memory, with reference to the viewpoint; and (d) a monitor
for displaying the retrieved portion.
According to the present invention there is provided a six-degree-of-freedom
input device, including: (a) a handle; (b) a stylus, operationally connected to the handle; and (c) a mechanism for providing signals representative of a location and an orientation of an element of the device selected from the group consisting of the
handle and the stylus.
The primary application of the present invention is to the interactive display of slices of a 3D image volume of the body of a surgical patient, and the present invention is described herein with reference to this primary application. Nevertheless, it is to be understood that the scope of the present invention includes the interactive display of any body, animate or inanimate, for which a 3D image volume has been acquired.
As defined herein, a "viewpoint" is an abstract geometric entity, in the space occupied by and surrounding the body, that defines the portion of the image volume
that is to be displayed. The use in this manner of the term "viewpoint" is patterned after the use of this term to describe the relationship of a camera to an object that the
camera is photographing. In the case of a camera, the viewpoint may include as many
as seven coordinates: the three spatial coordinates of the camera relative to the object;
pitch, yaw and roll coordinates of the camera; and a magnification coordinate. The "viewpoints" of the present invention generally include a target point that may be
inside the body, on the surface of the body or outside the body. The "viewpoints" of
the present invention may also include orientations with respect to the body and planes
that intersect the body.
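The seven-coordinate camera analogy can be made concrete with a small data structure. A minimal Python sketch; the field names and defaults are this sketch's choices, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """Up to seven camera-style viewpoint coordinates, per the text:
    three spatial coordinates, pitch/yaw/roll, and a magnification."""
    x: float = 0.0          # spatial position relative to the object
    y: float = 0.0
    z: float = 0.0
    pitch: float = 0.0      # orientation angles
    yaw: float = 0.0
    roll: float = 0.0
    magnification: float = 1.0
```

A given viewpoint style would use only a subset of these fields; a target-point style, for instance, needs only the spatial coordinates plus a display orientation.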
The method of the present invention begins by registering the image volume
with the body of the patient. According to one preferred embodiment of the present invention, four or more fiducial points are matched to corresponding reference points in the image volume. According to another preferred embodiment of the present
invention, coordinates of a feature of the body are sampled and matched to a representation of that feature in the image volume. The feature may be the surface of the body, or alternatively a feature internal to the body. In the case of the internal feature being the surface of a skeletal bone, the coordinates of the surface of the skeletal bone are sampled using an echolocation procedure such as ultrasound.
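Matching four or more fiducial points to reference points amounts to solving for the rigid transform that best maps the measured body points onto the image points. One standard way to do this, sketched here in Python, is the Kabsch/Procrustes least-squares solution; the patent does not specify this particular algorithm:

```python
import numpy as np

def rigid_register(body_pts, image_pts):
    """Least-squares rigid transform mapping body_pts onto image_pts.

    body_pts, image_pts -- (N, 3) arrays of matched points, N >= 3,
                           not all collinear
    Returns (R, t) such that image_pts ~= body_pts @ R.T + t.
    """
    P = np.asarray(body_pts, float)
    Q = np.asarray(image_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

The surface-sampling variant described next leads to the same kind of solve, except that correspondences must first be estimated (e.g. by iterative closest-point methods) rather than given by labeled fiducials.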
The image volume having been registered to the body of the patient, a viewpoint, for defining the portion of the image volume to be displayed, is selected. The process of viewpoint selection includes two phases. In the first phase of viewpoint selection, the viewpoint style is selected. The term "viewpoint style" is used herein to refer to the qualitative aspects of the viewpoint. These qualitative aspects include, but are not necessarily limited to, the type of coordinates included in
the viewpoint. In the second phase of viewpoint selection, the quantitative aspects of
the viewpoint, including specific values of the coordinates, are selected.
One preferred viewpoint of the present invention includes a target point on the
body surface adjacent to the portion of the body that is included in the image volume,
and a display orientation. One or more slices of the image volume are displayed with reference to the target point and the display orientation. Preferably, the selections of
the target point and of the display orientation are effected using a hand-held pointer
that includes a handle to which is rigidly attached a stylus. Most preferably, the stylus is perpendicular to the handle. The tip of the stylus is pointed at the target point and
the handle is pivoted to indicate the display orientation. Alternatively, the tip of the
stylus is placed in contact with the target point and the stylus is pivoted about the
target point, using the handle, to indicate the display orientation. This hand-held pointer constitutes an invention in its own right: a six-degree-of-freedom input/control device. The image volume is stored in a memory, and slices of the image volume are retrieved from the memory and displayed on a monitor, under the control of a processor. The pointer includes a mechanism for providing signals representative both of the location of the tip of the stylus and of the spatial orientation of the pointer itself. These signals are sent to the processor, which retrieves and displays the slices of the image volume in accordance with these signals. A preferred electromagnetic
embodiment of this mechanism includes coils rigidly mounted in the handle of the pointer. A preferred optical embodiment of this mechanism includes light emitting
diodes rigidly mounted on the handle of the pointer.
The display orientation of the slices of the image volume usually coincides
with the spatial orientation of the stylus, with the planes of the slices being
perpendicular to the long axis of the stylus; but this is not obligatory. To allow for the pointer of the present invention having a different spatial orientation than the display
orientation, the spatial orientation of the pointer is referred to herein as the "pivot
orientation". Successively deeper or shallower slices of the image volume, relative to the direction in which the pointer points, are displayed on the monitor at a fixed
display orientation. Alternatively, a single depth is selected, the pointer is pivoted
about the target point, and successive slices at that depth but at different orientations
are displayed on the monitor.
Further preferred embodiments of the present invention include other
viewpoint styles, also defined with reference to the body using the hand-held pointer.
According to one preferred embodiment of the present invention, the pointer functions as a virtual camera, with the tip of the stylus defining the three spatial coordinates of the virtual camera and with the pivot orientation of the pointer defining the pitch and
yaw coordinates. According to a second preferred embodiment of the present invention, the stylus defines an axis that runs through the tip of the stylus, and pointing the stylus at the body defines, as a viewpoint, a point along the axis within the body. The display consists of three mutually perpendicular planes of the image volume whose common intersection point is the viewpoint. According to a third preferred embodiment of the present invention, both the stylus and the handle define respective axes. The display again consists of three mutually perpendicular slices of the image volume: a first slice in a plane that includes the stylus axis and that is perpendicular to the body axis, a second slice in a plane parallel to the plane defined
by the two axes, and a third slice that is perpendicular to the first two slices.
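The three mutually perpendicular slice planes derived from the stylus and handle axes can be sketched as an orthonormal-frame construction. Illustrative Python only; the pairing of each normal with a named slice is this sketch's reading of the text:

```python
import numpy as np

def slice_frame(stylus_axis, handle_axis):
    """Three mutually perpendicular unit vectors from two device axes.

    n1 is normal to the plane spanned by both axes, n2 lies in that
    plane orthogonal to the stylus axis, and n3 is the stylus axis
    itself; together they define three orthogonal slice planes.
    """
    s = np.asarray(stylus_axis, float)
    s /= np.linalg.norm(s)
    h = np.asarray(handle_axis, float)
    n1 = np.cross(s, h)                 # normal to the plane of both axes
    n1 /= np.linalg.norm(n1)
    n2 = np.cross(n1, s)                # in-plane, orthogonal to the stylus
    n3 = s
    return n1, n2, n3
```

The construction fails only when the two axes are parallel, which the perpendicular stylus/handle geometry of the preferred pointer rules out.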
Most preferably, the pointer functions in two modes, a control mode and a viewpoint definition mode. In control mode, the pointer is used to control the
processor with reference to a cursor displayed on the monitor, much in the manner of
a conventional computer mouse. For this purpose, the pointer sends control signals to
the processor. These control signals include, inter alia, signals that define the desired viewpoint style. In viewpoint definition mode, signals indicative of the spatial
position and orientation of the pointer are transmitted to the processor, as discussed
above in the context of the first preferred viewpoint style, to complete the selection of
the desired viewpoint. In both modes, the signals are sent from the pointer to the
processor via a wireless datalink.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is herein described, by way of example only, with reference to
the accompanying drawings, wherein: FIG. 1 is an illustration of a system of the present invention;
FIG. 2 illustrates the selection of slices of the image volume for display;
FIG. 3 shows two preferred embodiments of the pointer of FIG. 1;
FIGS. 4 and 5 illustrate two other methods of selecting slices of the image
volume for display.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention is of a method and system for interactively displaying
portions of an image volume of a body. Specifically, the present invention can be used to display slices of an image volume of the body of a patient immediately prior to
surgery or during surgery.
The principles and operation of interactive image display according to the
present invention may be better understood with reference to the drawings and the
accompanying description. Referring now to the drawings, Figure 1 illustrates a system of the present
invention, used to select and display slices of a 3D image volume 14 stored in a
memory 18. Image volume 14 is a three-dimensional parallelepipedal array of voxels
corresponding to a portion of a body 10 of a surgical patient. The volume of space
imaged in image volume 14 is shown as a dashed box 14' in Figure 1. Note that part
of box 14' extends past body 10, so that part of the surface 12 of body 10 is included
in the imaged volume. A processor 16 retrieves selected slices of image volume 14
from memory 18 and displays these slices on a monitor 20.
The slices to be displayed are selected using a pointer 26. One end of pointer
26 is a pointed tip 28 which is pointed at a target point 30 on surface 12 of body 10, or
which, alternatively, is placed in contact with target point 30. Figure 2 is an enlargement of a portion of Figure 1 that shows how pointer 26 is used to select slices
of image volume 14 for display. In Figures 2A and 2B, pointer 26 is shown with tip
28 in contact with target point 30. In Figure 2C, pointer 26 is shown pointing at target point 30. Pointer 26 defines an axis 34 that intersects tip 28. Pointer 26 is pivoted
about tip 28 to define the orientation of the slices of image volume 14 that are
displayed on monitor 20. Specifically, the slices that are displayed on monitor 20 are
perpendicular to axis 34. Three such slices, 40a, 40b and 40c, are indicated in Figures
2A and 2C by dashed lines. As these slices generally are not parallel to the planes of
the voxel array that constitute image volume 14, processor 16 computes the pixels of
slices 40 by interpolating the voxels of image volume 14. The depth, along the
extension of axis 34 into body 10, of the slice that is displayed, is controlled by two
buttons 36 and 38: pushing button 36 moves the display depth to a deeper slice and
pushing button 38 moves the display depth to a shallower slice. Alternatively, pivoting pointer 26 about target point 30, as shown in Figure 2B, changes the
orientation (the tilt) of the displayed slice at constant depth. In Figure 2B, when
pointer 26 is oriented at angle αd from vertical, slice 40d is displayed, and when
pointer 26 is at angle αe from vertical, slice 40e is displayed. As discussed below,
signals representative of the spatial orientation of pointer 26, and so of the orientations
of slices 40, as well as signals generated using buttons 36 and 38, are relayed to
processor 16 by a datalink that is represented symbolically in Figure 1 by an arrow 32.
Most preferably, datalink 32 is a wireless datalink.
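The depth control just described, with buttons 36 and 38 stepping the displayed slice deeper or shallower along the extension of axis 34, can be sketched as follows. A hypothetical Python sketch; the button numbering follows the text, but the function names and step size are this sketch's assumptions:

```python
import numpy as np

def slice_center(target, axis, depth):
    """Center of the displayed slice: `depth` units along the axis
    extension into the body, measured from the target point."""
    a = np.asarray(axis, float)
    return np.asarray(target, float) + depth * a / np.linalg.norm(a)

def step_depth(depth, button, step=1.0):
    """Button 36 steps to a deeper slice, button 38 to a shallower one."""
    return depth + step if button == 36 else depth - step
```

Pivoting the pointer at constant depth, by contrast, changes the orientation of the slice plane while this center point stays on the same sphere about the target.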
The lateral positioning of the displayed slice on the screen of monitor 20 is
determined by target point 30. Specifically, the point on the slice, at which the
extension of axis 34 into body 10 intersects the slice, is positioned in the center of the screen. Rotating pointer 26 about axis 34 rotates the displayed slice about the center point on the screen.
It should be noted that the selection, using pointer 26, of target point 30 and of the display orientation of the slice that is displayed on the screen of monitor 20 are concurrent, and are described separately above only for clarity of exposition.
Processor 16 immediately updates both the lateral positioning and the display
orientation of the slice displayed on the screen of monitor 20 in response to the
position and orientation of pointer 26, as pointer 26 is translated and pivoted.
As noted above, the orientation of pointer 26 is referred to herein as the "pivot orientation". A suitable mechanism is provided to indicate the pivot orientation of
pointer 26 to processor 16. One such mechanism, an electromagnetic mechanism
similar to the mechanism taught by Ben-Haim et al. in WO 96/05768, which
publication is incorporated herein by reference for all purposes as if fully set forth herein, is partially depicted in Figure 1. A transmitter 22 drives AC currents in three
transmitting antennas 24 to generate low-frequency electromagnetic fields. These
fields induce the flow of corresponding AC currents in three mutually perpendicular
sensing coils (not shown) in pointer 26. Signals representative of these currents are
relayed to processor 16 by datalink 32. Processor 16 infers, from these signals, the
location of tip 28 (hence, the location of target point 30) and the pivot orientation of
pointer 26. Preferably, the location of tip 28 is expressed in Cartesian coordinates and
the pivot orientation of pointer 26 is expressed as Euler angles, in a suitable
coordinate frame. In the present context, the Euler angles are considered to be examples of "coordinates" of the pivot orientation of pointer 26.
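The inference of location and orientation from the coil signals is hardware-specific, but the use of Euler angles as orientation coordinates can be sketched as follows (an illustrative sketch assuming a Z-Y-X yaw-pitch-roll convention, which the text above does not specify; all names are hypothetical):

```python
import math

def euler_to_matrix(yaw, pitch, roll):
    """Rotation matrix for Z-Y-X (yaw-pitch-roll) Euler angles, in radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]

def pivot_axis(yaw, pitch, roll, axis=(0.0, 0.0, 1.0)):
    """Express a unit vector fixed in the pointer frame (here taken, by
    assumption, to be the pointer axis) in body-frame coordinates."""
    R = euler_to_matrix(yaw, pitch, roll)
    return tuple(sum(R[i][j] * axis[j] for j in range(3)) for i in range(3))
```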
Figures 3A and 3B are side and top views, respectively, of a preferred
embodiment of pointer 26. This embodiment of pointer 26 includes a lozenge-shaped handle 50 that defines a handle axis 35 and from which emerges a stylus 48 that
includes tip 28 and that defines axis 34. To distinguish axis 34 from axis 35 in the
context of this embodiment of pointer 26, axis 34 is referred to herein as the "stylus
axis". Stylus 48 is rigidly attached to handle 50, with stylus axis 34 substantially
perpendicular to handle axis 35. Handle 50 includes buttons 36 and 38 as described
above, and three other manual input interfaces: a trackball 46 and two more buttons 42
and 44. Buttons 42 and 44 are used to toggle pointer 26 between viewpoint definition
mode, in which the position and orientation of pointer 26, along with buttons 36 and
38, define the viewpoint, and control mode, in which trackball 46 and buttons 36 and
38 are used to control the display on monitor 20 in the manner of a conventional
mouse. (It should be noted that, as described below, there are alternative preferred
viewpoint styles in which trackball 46 participates in the definition of the viewpoint.) In control mode, trackball 46 is used to move a cursor on the screen of monitor
20 to point at graphical user interface tools such as icons, sliders and menu selections
on the screen of monitor 20, and buttons 42 and 44 are used to select and/or activate
these graphical user interface tools, for the purpose of controlling processor 16 and
modifying the appearance of the displayed portion of image volume 14. For example,
the magnification of the displayed slices may be adjusted using a slider, as may the default 45-degree orientations of the three slices of the "3D textured plane" display.
Appropriate circuitry is included in handle 50 to sense the forces applied to buttons
36, 38, 42 and 44 and to trackball 46 and to relay signals representative of these forces
to processor 16 via datalink 32. When pointer 26 is toggled from viewpoint definition mode to control mode, the viewpoint coordinates of the displayed portion of image volume 14 are frozen until pointer 26 is toggled back to viewpoint definition mode.
As an alternative to trackball 46, a force sensor such as the TrackPoint™ sensor used in IBM laptop computers may be used. Such a force sensor is sensitive to
forces in the ±x (right-left), ±y (forward-back) and -z (down) directions, and so
replaces both trackball 46 and one of buttons 36 and 38. The remaining button 36 or
38 is used to toggle the force sensor between interpreting a force in the z-direction as a
-z force and interpreting a force in the z-direction as a +z force.
As a further subalternative, the force sensor is mounted on a rotary dialer knob such as the dialer knob used in the ABA2020 VoicePod™ record/playback device
produced by Altec Lansing of Milford PA. Twisting the force sensor clockwise
represents a +z force. Twisting the force sensor counterclockwise represents a -z
force. Under this subalternative, buttons 36 and 38 are not needed. In Figure 3B, handle 50 is partially cut away to show three mutually
perpendicular sensor coils 52 that are used to sense the fields generated by antennas
24. Coils 52 are rigidly attached to handle 50, so that a simple rigid translation and
rigid rotation of the Cartesian coordinates and the Euler angles defined by coils 52
transforms these coordinates and angles to the Cartesian coordinates of tip 28 and the
Euler angles of stylus 48. Alternatively, a sufficiently miniaturized magnetic field
sensor is embedded inside stylus 48 or is rigidly attached to stylus 48, near tip 28.
The location of sensor coils 52 in handle 50 is illustrative, rather than
limitative. Appropriate field sensors may be positioned anywhere on or in pointer 26. For example, the NOGA™ Cardiac Mechanics and Hemodynamic Mapping System, produced by Biosense Webster (a subsidiary of Johnson and Johnson of New Brunswick NJ), includes such a sensor, mounted within a catheter, that is small
enough to be mounted similarly in stylus 48 near tip 28.
Figure 3C is a front view of an alternative preferred embodiment of pointer 26.
Instead of coils 52, this embodiment of pointer 26 includes four infrared light emitting
diodes 54, for use in conjunction with an optical tracking system, such as the Optotrak system available from Northern Digital Inc. of Toronto, Ontario, Canada.
Several methods may be used to register image volume 14 to body 10. In one
registration method, a number of fiducial points on surface 12 are matched to
corresponding reference points in image volume 14. For this purpose, tip 28 is placed in contact with the fiducial points and the coordinates of the fiducial points are
measured. At least four such fiducial points are needed, with the accuracy of the
match being improved by using more than four fiducial points. Anatomical features
of surface 12 may be used as fiducial points. Alternatively, small blocks of a material that has a high contrast in the imaging modality used to create image volume 14 are
attached to surface 12 prior to the acquisition of image volume 14. For example, if
image volume 14 is a CAT scan, then the material of the blocks is radio-opaque.
These blocks then serve as the fiducial points, and their representations in image
volume 14 serve as the reference points.
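Matching the reference points to the fiducial points amounts to a least-squares rigid registration. One standard way to compute it is the Kabsch/Horn method, sketched below (an illustrative sketch; the text above does not name a particular algorithm, and the function name is hypothetical):

```python
import numpy as np

def rigid_register(reference_pts, fiducial_pts):
    """Least-squares rigid transform (R, t) carrying the reference points
    (in the image volume) onto the matched fiducial points (on the body),
    via the Kabsch/Horn method.  Both inputs are (N, 3) arrays of matched
    points; at least four pairs are used in the registration above."""
    P = np.asarray(reference_pts, float)
    Q = np.asarray(fiducial_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Using more than the minimum number of fiducial points simply adds rows to `P` and `Q`, which is what improves the accuracy of the least-squares match.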
A second registration method is based on fitting a mathematical surface to a
portion of a representation, in image volume 14, of a feature of body 10. The most
convenient feature of body 10 to use for this purpose is surface 12. A set of points,
typically between 40 and 60 points, on surface 12 are digitized by touching tip 28 to
these points and measuring the coordinates of these points. Then image volume 14 is
registered to body 10 by applying to image volume 14 the rigid translation and the rigid rotation that leads to the best fit, in a least squares sense, of the mathematical
surface to the digitized points. Because the points on surface 12 that are to be sampled for this purpose are not known a priori, the portion of the representation of
surface 12 in image volume 14 to which the mathematical surface is fitted generally is
more extensive than the portion of surface 12 that is actually sampled by tip 28.
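The least-squares surface fit can be illustrated in a deliberately simplified setting: fitting a sphere to digitized points. In the method described above the fitted surface is instead extracted from image volume 14 and the free parameters are the rigid translation and rotation, but the sketch below (illustrative; names hypothetical) shows the least-squares machinery in its simplest form:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares fit of a sphere to digitized points, solving
    |p|^2 = 2 c . p + (r^2 - |c|^2) for the center c and radius r."""
    P = np.asarray(points, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = float(np.sqrt(x[3] + center @ center))
    return center, radius
```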
Some parts of body 10, such as a leg, are too symmetrical for the second
registration method to obtain a unique match of the sampled points of surface 12 to the mathematical surface. In such a case, an interior feature such as the surface of a
skeletal bone is used for registration. The mathematical surface is fitted to a portion
of the representation of the surface of the bone in image volume 14. Coordinates of
points on the surface of the real bone are sampled by an echolocation method such as
ultrasound. Preferably, for this purpose, and as illustrated in Figure 3B, distal end 58
of stylus 48 includes a piezoelectric sensor 56 that serves as both an ultrasound transmitter and an ultrasound receiver. Sensor 56 is connected to appropriate
ultrasound transmission and reception circuitry (not shown) in handle 50 in the
conventional manner. As before, image volume 14 is registered to body 10 by
applying to image volume 14 the rigid translation and the rigid rotation that leads to
the best fit, in a least squares sense, of the mathematical surface to the sampled points.
As noted above, the viewpoint style described above that is based on surface
target point 30 is only one of many possible viewpoint styles, each of which defines a different display mode. A second viewpoint style is defined by using the preferred
embodiment of pointer 26, as illustrated in Figure 3, as a virtual camera. Stylus 48 is
pointed at body 10, and the displayed image is provided in accordance with what
would be photographed by a camera pointed similarly at body 10. The spatial
coordinates of the virtual camera are the spatial coordinates of tip 28. Pitch is defined as rotation in the plane of axes 34 and 35. Yaw is defined as rotation about handle axis
35. Roll is defined as rotation about stylus axis 34. In one preferred display mode, surfaces of internal organs of body 10, as represented in image volume 14, are contoured and shaded by methods that are well known in the art, and are displayed as a three-dimensional rendition of these organs as they would be photographed
by a camera having a position and orientation similar to that of pointer 26. In another
preferred display mode, used to display a CT image volume 14, a projection of image
volume 14 parallel to stylus axis 34 is displayed, to simulate the output of an x-ray
camera having a position and orientation similar to that of pointer 26.
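The simulated x-ray output can be sketched as a parallel projection that sums attenuation values along each ray (an illustrative simplification that projects along a grid-aligned axis; for an arbitrary stylus orientation the volume would first be resampled so the viewing axis is grid-aligned; names are hypothetical):

```python
import numpy as np

def xray_projection(volume, axis=2):
    """Parallel projection of a CT voxel array along one grid axis: summing
    the attenuation values along each ray simulates an x-ray image.  The
    result is normalized to [0, 1] for display."""
    proj = np.asarray(volume, float).sum(axis=axis)
    lo, hi = proj.min(), proj.max()
    return (proj - lo) / (hi - lo) if hi > lo else np.zeros_like(proj)
```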
A third viewpoint style is defined as illustrated in Figure 4. Here, a target
point 30' is defined to be at a distance d from tip 28 along stylus axis 34. Pointer 26 is
held with target point 30' inside imaged volume 14'. The display consists of three orthogonal planes 60, 62 and 64, of the voxels of image volume 14, that all intersect at
target point 30'.
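The third viewpoint style can be sketched as follows (illustrative only; names are hypothetical, and the voxel array is assumed indexed x, y, z): the target point is placed a distance d from the tip along the stylus axis, and the three orthogonal voxel planes through it are returned:

```python
import numpy as np

def orthogonal_planes(volume, tip, direction, d):
    """Place the target point a distance d from `tip` along the stylus axis
    `direction`, then return the three orthogonal voxel planes of `volume`
    that intersect at that point (plus the target point itself)."""
    u = np.asarray(direction, float)
    u = u / np.linalg.norm(u)
    target = np.asarray(tip, float) + d * u
    # Round to the nearest voxel and clamp to the array bounds.
    i, j, k = np.clip(np.round(target).astype(int), 0, np.array(volume.shape) - 1)
    return volume[i, :, :], volume[:, j, :], volume[:, :, k], target
```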
A fourth viewpoint style is defined as illustrated in Figure 5. This viewpoint
style includes three viewpoint planes 66, 68 and 70. Viewpoint plane 66 is
perpendicular to handle axis 35 and includes stylus axis 34. Viewpoint plane 68 is the
plane defined by axes 34 and 35. Viewpoint plane 70 is perpendicular to viewpoint
planes 66 and 68 and includes handle axis 35. When pointer 26 is pointed at body 10,
three slices 72, 74 and 76 of imaged volume 14' are defined with the help of
viewpoint planes 66, 68 and 70. Image slice 72 is the intersection of viewpoint plane
66 with imaged volume 14'. Image slice 74 is parallel to viewpoint plane 68. Image
slice 76 is parallel to viewpoint plane 70. The lateral position of image slice 74 within imaged volume 14' is controlled using trackball 46. The depth of image slice 76
within imaged volume 14' is controlled using buttons 36 and 38. For clarity, imaged
volume 14' is not shown explicitly in Figure 5, and image slices 72, 74 and 76 are
shown truncated at surface 12 of body 10.
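The geometry of the three viewpoint planes can be sketched by computing their unit normals from the stylus and handle axes (an illustrative sketch; names are hypothetical, and the two axes are assumed perpendicular, as in the rigidly attached embodiment):

```python
import numpy as np

def viewpoint_plane_normals(stylus_axis, handle_axis):
    """Unit normals of the three viewpoint planes: plane 66 is perpendicular
    to the handle axis (and contains the stylus axis), plane 68 is spanned
    by both axes, and plane 70 is perpendicular to the other two planes and
    contains the handle axis."""
    s = np.asarray(stylus_axis, float)
    s = s / np.linalg.norm(s)
    h = np.asarray(handle_axis, float)
    h = h / np.linalg.norm(h)
    n66 = h                           # normal of plane 66
    n68 = np.cross(s, h)              # normal of the plane of both axes
    n68 = n68 / np.linalg.norm(n68)
    n70 = np.cross(h, n68)            # perpendicular to both h and n68
    n70 = n70 / np.linalg.norm(n70)
    return n66, n68, n70
```

With perpendicular axes, the normal of plane 70 coincides with the stylus axis, which is consistent with buttons 36 and 38 moving image slice 76 along that axis.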
As noted above, the description of the mechanism, for sensing the position and
orientation of pointer 26, as being located in handle 50, is illustrative rather than
limitative. In the variant of pointer 26 that uses a field sensor mounted in stylus 48,
that sensor defines the axes that define the viewpoint styles. In such a pointer 26,
stylus 48 need not be rigidly attached to handle 50.
While the invention has been described with respect to a limited number of
embodiments, it will be appreciated that many variations, modifications and other
applications of the invention may be made.

Claims

WHAT IS CLAIMED IS:
1. A method of displaying an image volume of a body having a surface,
comprising the steps of:
(a) registering the image volume with the body;
(b) selecting a viewpoint with reference to the body; and
(c) displaying a first portion of the image volume with reference to said
viewpoint.
2. The method of claim 1, wherein said viewpoint includes: (i) a target point; and
(ii) a first display orientation.
3. The method of claim 2, wherein said target point is on the surface of the body.
4. The method of claim 3, further comprising the step of:
(d) providing a pointer having a tip, said selecting of said target point then
being effected by positioning said tip in contact with said target point.
5. The method of claim 4, wherein said selecting of said first display
orientation is effected by pivoting said pointer about said target point while said tip is
in contact with said target point.
6. The method of claim 5, further comprising the step of:
(e) determining a pivot orientation of said pointer relative to the body, said
first display orientation then being determined by said pivot
orientation.
7. The method of claim 6, wherein said first display orientation is substantially identical to said pivot orientation.
8. The method of claim 3, further comprising the step of:
(d) providing a pointer, said selecting of said target point then being effected by pointing said pointer at said target point.
9. The method of claim 8, wherein said selecting of said first display orientation is effected by pivoting said pointer.
10. The method of claim 9, further comprising the step of:
(e) determining a pivot orientation of said pointer relative to the body, said first display orientation then being determined by said pivot
orientation.
11. The method of claim 10, wherein said first display orientation is
substantially identical to said pivot orientation.
12. The method of claim 3, further comprising the step of:
(d) displaying a second portion of the image volume with reference to said
target point and said first display orientation.
13. The method of claim 3, further comprising the steps of:
(d) replacing said first display orientation of said viewpoint with a second display orientation; and
(e) displaying a second portion of the image volume with reference to said target point and said second display orientation.
14. The method of claim 2, wherein said target point is outside the body, the method further comprising the step of:
(d) providing a pointer having a tip, said selecting of said viewpoint then being effected by positioning said tip at said target point and by pivoting said pointer in accordance with said first display orientation.
15. The method of claim 1, wherein said viewpoint includes a target point
within the body, the method further comprising the step of:
(d) providing a pointer that defines an axis and that has a tip on said axis, said selecting of said viewpoint then being effected by pointing
said pointer at the body so that said target point is on said axis at a
predetermined distance from said tip.
16. The method of claim 1, wherein said viewpoint includes a first plane
that intersects the body, the method further comprising the step of:
(d) providing a pointer that includes a stylus and a handle, said stylus
defining a stylus axis, said handle defining a handle axis, said selecting of said viewpoint then being effected by pointing said pointer at the
body so that said stylus axis intersects the body, said first plane then being perpendicular to said handle axis and including said stylus axis.
17. The method of claim 16, wherein said viewpoint further includes a
second plane that includes both said axes.
18. The method of claim 17, wherein said stylus includes a tip on said stylus axis, and wherein said viewpoint further includes a third plane that is perpendicular to said first and second planes and that intersects said stylus axis at a fixed distance from said tip along said stylus axis.
19. The method of claim 1, wherein said registering of the image volume with the body is effected by steps including:
(i) providing the image volume with at least four reference points;
(ii) selecting a like number of fiducial points on the surface of the body;
and (iii) matching each said reference point to a respective said fiducial point.
20. The method of claim 1, wherein said registering of the image volume with the
body is effected by steps including:
(i) obtaining, from the image volume, a representation of at least a first
part of a feature of the body; (ii) sampling coordinates of at least a second part of said feature of the
body; and
(iii) matching said sampled coordinates to said representation.
21. The method of claim 20, wherein said feature is the surface of the body.
22. The method of claim 20, wherein said feature is an interior feature of the body.
23. The method of claim 22, wherein said sampling of said coordinates is effected by steps including echolocation of said interior feature.
24. The method of claim 1, further comprising the steps of:
(d) providing a plurality of viewpoint styles; and
(e) selecting one of said viewpoint styles; said viewpoint then being selected in accordance with said selected viewpoint style.
25. A system for selecting a portion of an image volume of a body and for
displaying the selected portion, comprising: (a) a pointer for selecting a viewpoint with reference to the body;
(b) a memory for storing the image volume;
(c) a processor for retrieving the portion of the image volume from the
memory, with reference to said viewpoint; and
(d) a monitor for displaying the retrieved portion.
26. The system of claim 25, wherein said pointer includes a mechanism for providing, to said processor, signals representative of said viewpoint.
27. The system of claim 26, wherein said mechanism is an optical mechanism.
28. The system of claim 26, wherein said mechanism is an electromagnetic mechanism.
29. The system of claim 26, wherein said mechanism includes a datalink to
said processor.
30. The system of claim 29, wherein said datalink is wireless.
31. The system of claim 25, wherein said viewpoint includes a target point,
and wherein said pointer includes:
(i) a handle; and
(ii) a stylus, operationally connected to said handle, for pointing at said
target point to select said target point.
32. The system of claim 31, wherein said stylus includes a tip for placing
adjacent to said target point to select said target point.
33. The system of claim 31, wherein said stylus is rigidly attached to said
handle.
34. The system of claim 33, wherein said stylus is substantially
perpendicular to said handle.
35. The system of claim 31, wherein said stylus includes a piezoelectric sensor at a distal end thereof.
36. The system of claim 25, wherein said pointer includes a mechanism for providing, to said processor, signals for controlling said processor.
37. The system of claim 36, wherein said mechanism includes a datalink to
said processor.
38. The system of claim 37, wherein said datalink is wireless.
39. The system of claim 36, wherein said processor is operative to interpret
at least one of said signals as a signal for selecting a viewpoint style, said processor
then retrieving said portion of the image volume from the memory with reference to
said selected viewpoint style.
40. A six-degree-of-freedom input device, comprising:
(a) a handle;
(b) a stylus, operationally connected to said handle; and
(c) a mechanism for providing signals representative of a location and an orientation of an element of the device selected from the group
consisting of said handle and said stylus.
41. The device of claim 40, wherein said stylus is rigidly attached to said handle.
42. The device of claim 41, wherein said stylus is substantially perpendicular to said handle.
43. The device of claim 40, wherein said mechanism is electromagnetic.
44. The device of claim 43, wherein said mechanism includes a plurality of coils rigidly mounted in said element.
45. The device of claim 40, wherein said mechanism is optical.
46. The device of claim 45, wherein said mechanism includes a plurality of
light emitting diodes rigidly mounted on said element.
47. The device of claim 40, wherein said stylus includes a piezoelectric
sensor at a distal end thereof.
EP01908094A 2000-02-29 2001-02-27 Method and system for selecting and displaying a portion of an image of a body Withdrawn EP1259940A2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US515624 1983-07-20
US51562400A 2000-02-29 2000-02-29
PCT/IL2001/000183 WO2001065490A2 (en) 2000-02-29 2001-02-27 Method and system for selecting and displaying a portion of an image of a body

Publications (1)

Publication Number Publication Date
EP1259940A2 true EP1259940A2 (en) 2002-11-27

Family

ID=24052105

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01908094A Withdrawn EP1259940A2 (en) 2000-02-29 2001-02-27 Method and system for selecting and displaying a portion of an image of a body

Country Status (3)

Country Link
EP (1) EP1259940A2 (en)
AU (1) AU2001235951A1 (en)
WO (1) WO2001065490A2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8256430B2 (en) 2001-06-15 2012-09-04 Monteris Medical, Inc. Hyperthermia treatment and probe therefor
DE10313829B4 (en) * 2003-03-21 2005-06-09 Aesculap Ag & Co. Kg Method and device for selecting an image section from an operating area
CN104602638B (en) 2012-06-27 2017-12-19 曼特瑞斯医药有限责任公司 System for influenceing to treat tissue
US10675113B2 (en) 2014-03-18 2020-06-09 Monteris Medical Corporation Automated therapy of a three-dimensional tissue region
US9486170B2 (en) 2014-03-18 2016-11-08 Monteris Medical Corporation Image-guided therapy of a tissue
US20150265353A1 (en) 2014-03-18 2015-09-24 Monteris Medical Corporation Image-guided therapy of a tissue
US10327830B2 (en) 2015-04-01 2019-06-25 Monteris Medical Corporation Cryotherapy, thermal therapy, temperature modulation therapy, and probe apparatus therefor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3664336B2 (en) * 1996-06-25 2005-06-22 株式会社日立メディコ Method and apparatus for setting viewpoint position and gaze direction in 3D image construction method
US6369812B1 (en) * 1997-11-26 2002-04-09 Philips Medical Systems, (Cleveland), Inc. Inter-active viewing system for generating virtual endoscopy studies of medical diagnostic data with a continuous sequence of spherical panoramic views and viewing the studies over networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0165490A3 *

Also Published As

Publication number Publication date
WO2001065490A2 (en) 2001-09-07
WO2001065490A3 (en) 2002-03-28
AU2001235951A1 (en) 2001-09-12


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020830

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17Q First examination report despatched

Effective date: 20030401

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20040907