WO1999000052A1 - Method and apparatus for volumetric image navigation - Google Patents
Method and apparatus for volumetric image navigation
- Publication number
- WO1999000052A1 (PCT/US1998/013391; US9813391W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- virtual image
- data
- image data
- displaying
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/06—Devices, other than using radiation, for detecting or locating foreign bodies ; determining position of probes within or on the body of the patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/06—Devices, other than using radiation, for detecting or locating foreign bodies ; determining position of probes within or on the body of the patient
- A61B5/061—Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2068—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2072—Reference field transducer attached to an instrument or patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/367—Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/378—Surgical systems with images on a monitor during operation using ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4245—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
Definitions
- This invention pertains generally to systems and methods for generating images of three dimensional objects for navigation purposes, and more particularly to systems and methods for generating such images in medical and surgical applications.
- Precise imaging of portions of the anatomy is an increasingly important technique in the medical and surgical fields.
- techniques have been developed for performing surgical procedures within the body through small incisions with minimal invasion. These procedures generally require the surgeon to operate on portions of the anatomy that are not directly visible, or can be seen only with difficulty.
- some parts of the body contain extremely complex or small structures and it is necessary to enhance the visibility of these structures to enable the surgeon to perform more delicate procedures.
- planning such procedures requires the evaluation of the location and orientation of these structures within the body in order to determine the optimal surgical trajectory.
- New diagnostic techniques have been developed in recent years to obtain images of internal anatomical structures. These techniques offer great advantages in comparison with the traditional X-ray methods. Newer techniques include microimpulse radar (MIR) , computer tomography (CT) scans, magnetic resonance imaging (MRI) , positron emission tomography (PET), ultrasound (US) scans, and a variety of other techniques. Each of these methods has advantages and drawbacks in comparison with other techniques.
- MIR microimpulse radar
- CT computer tomography
- MRI magnetic resonance imaging
- PET positron emission tomography
- US ultrasound
- ultrasound scanning in contrast, is a relatively rapid procedure; however it is limited in its accuracy and signal-to-noise ratio.
- the imaging problem is especially acute in the field of neurosurgery, which involves performing delicate surgical procedures inside the skull of the patient.
- the above techniques have improved the surgeon's ability to locate precisely various anatomical features from images of structures within the skull.
- This has only limited usefulness in the operating room setting, since it is necessary to match what the surgeon sees on the 2D image with the actual 3D patient on the operating table.
- the neurosurgeon is still compelled to rely to a considerable extent on his or her knowledge of human anatomy.
- the stereotactic technique was developed many years ago to address this problem.
- a frame of reference is attached to the patient's head which provides reference points for the diagnostic images.
- the device further includes guides for channeling the surgical tool along a desired trajectory to the target lesion within the brain.
- This method is cumbersome and has the drawback that the surgeon cannot actually see the structures through which the trajectory is passing. There is always the risk of damage to obstacles in the path of the incision, such as portions of the vascular or ventricular system.
- the surgeon is in the position much like that of a captain piloting a vessel traveling in heavy fog through waters that have many hazards, such as shoals, reefs, outcroppings of rocks, icebergs, etc.
- This patent describes a system for providing images along the line of sight of the surgeon in a dynamic realtime fashion.
- the images that are displayed are resliced images from a three-dimensional data reconstruction which are sections or slices orthogonal to the line of sight, taken at various positions along this line specified by the user.
- the sectional planes that are used to define the virtual images may constitute various slices through the body chosen by the surgeon.
- These images may be superimposed on actual images obtained by an image recording device directed along the line of sight such as a video camera attached to the surgeon's head, and the composite images may be displayed.
- the present invention provides an improved system and method for displaying 3D images of anatomical structures in real time during surgery to enable the surgeon to navigate through these structures during the performance of surgical procedures.
- This system is also useful in planning of surgical procedures.
- the system includes a computer with a display and input devices such as a keyboard and mouse.
- the system also includes a position tracking system that is connected both to the computer and also to the surgical probes or other instruments that are used by the surgeon.
- the position tracking system provides continual real time data to the computer indicating the location and orientation of the surgical instrument in use.
- the computer further includes a memory containing patient data produced by imaging scans, such as CT or MRI scans, from which 2-dimensional and 3-dimensional images of the anatomical structure may be generated. Means are provided for registration of these images with respect to the patient.
- the computer memory is further provided with programs that control the generation of these anatomical images.
- These programs include software for segmentation of the scan images to identify various types of structures and tissues, as well as the reconstruction of 2D and 3D images from the scan data.
- This software allows these images to be displayed with various magnifications and orientations, and with various sectional views produced by slice planes in various locations and orientations, all controlled by the surgeon.
- This image-generating software has the important feature that it produces 3D images that are perspective views of the anatomical structures, with user-controlled means for varying the viewing orientation and location, and also varying the displayed transparency or opacity of various types of tissues, structures, and surfaces in the viewed region of interest. This enables the user to effectively "see through" surfaces and structures in the line of sight of the image to reveal other structures that would otherwise be hidden in that particular view.
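As an illustration of how such a perspective "see-through" view can be produced, the sketch below performs front-to-back alpha compositing along a single viewing ray through a labelled voxel volume. The label array, colour table and opacity table are hypothetical stand-ins for the segmented scan data; the patent does not specify this particular rendering algorithm.

```python
import numpy as np

def composite_ray(labels, colors, opacities, origin, direction, step=1.0, n_steps=256):
    """Front-to-back alpha compositing of one viewing ray through a labelled voxel volume.

    labels    : (X, Y, Z) integer array of tissue labels (hypothetical segmentation output)
    colors    : dict mapping label -> RGB triple with components in [0, 1]
    opacities : dict mapping label -> per-sample opacity in [0, 1] (user adjustable)
    origin    : ray start point in voxel coordinates (e.g. the instrument tip)
    direction : viewing direction (e.g. the instrument's longitudinal axis)
    """
    color, alpha = np.zeros(3), 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= labels.shape):
            break                                   # ray has left the volume
        lab = int(labels[tuple(idx)])
        a = opacities.get(lab, 0.0)                 # low opacity -> structure is "seen through"
        c = np.asarray(colors.get(lab, (0.0, 0.0, 0.0)), dtype=float)
        color += (1.0 - alpha) * a * c              # front-to-back "over" operator
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                            # early exit once the ray is opaque
            break
        pos += step * d
    return color, alpha
```

Casting one such ray per display pixel, with the ray origin at the instrument tip and the ray directions spread over the viewing frustum, yields the perspective image; lowering a tissue's opacity lets the rays pass through it and reveal the structures behind.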
- the images are generated from the viewpoint of the surgical probe or instrument that is in use, looking from the tip of the instrument along its longitudinal axis.
- an invasive surgical instrument such as a scalpel or forceps
- the display provides a three dimensional perspective view of anatomical structures from a viewpoint inside the body.
- These images are all generated in real time "on the fly".
- the position tracking system continually provides data to the computer indicating the location and orientation of the instrument, and the displayed image is continually updated to show the structures toward which the instrument is pointing.
- the system provides means for integrating these images with those generated from the scan data.
- the software enables the user to overlay the "actual images” generated by these instruments with the "virtual images” generated from the scan data.
- a second object of this invention is to provide a system and method for generating such an image with user-controlled means for varying the location and orientation of the viewpoint corresponding to the image.
- Another object of this invention is to provide a system and method for generating such an image with user-controlled means for varying the opacity of structures and surfaces in the viewed region of interest, so that the displayed image shows structures and features that would be otherwise hidden in a normal view.
- Yet another object of this invention is to provide a system and method for generating such an image with a viewpoint located at the tip of the instrument being used by the surgeon in the direction along the longitudinal axis of the instrument.
- Figure 1 is a schematic perspective drawing of the apparatus of the present invention in operating room use during the performance of neurosurgical procedures.
- Figure 2 is a schematic block diagram of the computer system and optical tracking system of the present invention.
- Figure 3 is a schematic block diagram of the navigation protocol using pre-operative data that is followed in carrying out the method of the present invention.
- Figure 4 is a schematic block diagram of the navigation protocol using ultrasound intra-operative data that is followed in carrying out the method of the present invention.
- Figure 5 is a schematic block diagram of the endoscopic protocol that is followed in carrying out the method of the present invention.
- Figure 6 is a schematic flow chart of the pre-operative computer program that implements the pre-operative protocol of the present invention.
- Figure 7 is a schematic flow chart of the intra-operative ultrasound computer program that implements the ultrasound protocol of the present invention.
- Figure 8 is a schematic flow chart of the intra-operative endoscope computer program that implements the endoscope protocol of the present invention.
- Figure 9 is a drawing of a display generated according to the present invention, showing axial, coronal, and sagittal views of a head, together with a three-dimensional perspective view of the head taken from an exterior viewpoint.
- Figure 10 is a drawing of a display generated according to the present invention, showing sectional axial, coronal, and sagittal views of a head, together with a three-dimensional perspective view of the head taken from an interior viewpoint.
- Figure 11a is a drawing of a plastic model of a human skull and a surgical probe that has been used to demonstrate the present invention.
- Figure 11b is another drawing of the model skull of Figure 11a, with the top of the skull removed to show model internal structures for demonstration purposes.
- Figure 12 is a simplified reproduction of two displays produced by the present invention for the model skull shown in Figures 11a, 11b.
- Figure 13 is a simplified reproduction of two further displays of the invention for the skull in Figures 11a, 11b.
- Figure 14 is a reproduction of a composite display produced by the present invention for an actual human head.
- Figure 1 shows the apparatus of the invention as used in performing or planning a neurosurgery operation.
- the patient's head (112) has a tumor or lesion (117), which is the target object of the operation.
- Fiducial markers (113), (114) are attached to the head to enable registration of images generated by previously obtained scan data according to techniques familiar to persons of ordinary skill in the relevant art.
- a surgical probe or instrument (109) held by the surgeon is directed toward the tissues of interest.
- a computer (101) is connected to user input devices including a keyboard (103) and mouse (104), and a video display device (102) which is preferably a color monitor.
- the display device (102) is located such that it can be easily viewed by the surgeon during an operation, and the user input devices (103) and (104) are placed within easy reach to facilitate use during the surgery.
- the apparatus further includes a position tracking system, which is preferably an optical tracking system (hereafter "OTS") having a sensing unit (105) mounted overhead in view of the operating table scene, and at least two light emitting diodes (LED's) (110), (111) mounted on the surgical instrument (109). These LED's preferably emit continuous streams of pulsed infrared signals which are sensed by a plurality of infrared detectors (106), (107), (108) mounted in the sensing unit (105) in view of the surgical instrument (109) .
- OTS optical tracking system
- the instrument (109) and the sensing unit (105) are both connected to the computer (101), which controls the timing and synchronization of the pulse emissions by the LED's and the recording and processing of the infrared signals received by the detectors (106) - (108) .
- the OTS further includes software for processing these signals to generate data indicating the location and orientation of the instrument (109) .
- the OTS generates the position detecting data on a real time continuous basis, so that as the surgical instrument (109) is moved, its position and orientation are continually tracked and recorded by the sensing unit (105) in the computer (101) .
- the OTS may be preferably of the type known as the "FlashPoint 3-D Optical Localizer", which is commercially available from Image Guided Technologies of Boulder, Colorado, similar to the systems described in U.S.
- the invention is not limited to this particular OTS, and other position tracking systems, such as sonic position detecting systems, may also be utilized.
- the surgical instrument (109) is elongated in shape, having a longitudinal axis and tip (115) pointing toward the tissues of interest.
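For illustration only: if the two LEDs are assumed to lie on the instrument's longitudinal axis and the distance from the forward LED to the tip has been calibrated, the tracked marker positions give the tip location and viewing direction directly. These assumptions and names are not taken from the patent.

```python
import numpy as np

def probe_pose_from_leds(led_rear, led_front, tip_offset):
    """Estimate the probe tip position and viewing direction from two tracked markers.

    Simplifying assumptions: both LEDs lie on the instrument's longitudinal axis, and the
    distance from the front LED to the tip ('tip_offset') has been calibrated beforehand.
    All coordinates are in the tracker (OTS) frame.
    """
    led_rear = np.asarray(led_rear, dtype=float)
    led_front = np.asarray(led_front, dtype=float)
    axis = led_front - led_rear
    axis /= np.linalg.norm(axis)           # unit vector along the instrument
    tip = led_front + tip_offset * axis    # extrapolate from the front LED to the tip
    return tip, axis                       # viewpoint and view direction for rendering
```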
- the instrument may be an endoscope having a conical field of view (116) that is indicated by dotted lines in Figure 1.
- the instrument shown in the Figure is held at a position external to the patient's head. If an incision (118) has been made into the skull, the instrument may be inserted through the incision; this alternative position is shown by dotted lines in Figure 1. In both positions the instrument is held so that there is an unobstructed line of sight between the LED's (110), (111) and the sensing unit (105).
- the instrument may include a laser targeting system (not shown in the drawings) to illuminate and highlight the region under examination.
- Figure 2 shows a schematic block diagram of the computer system connected to the position tracking system.
- the computer (101) includes a central processing unit (CPU) (201) communicative with a memory (202), the video display (102), keyboard and mouse (103), (104), optical detectors (106) - (108), and the LED's mounted on the surgical instrument (109) .
- the computer memory contains software means for operating and controlling the position tracking system.
- the OTS components (105) - (109) may be connected to and controlled by a separate computer or controller which is connected to the computer (101) and provides continual data indicating the position and orientation of the surgical instrument (109) .
- Figure 3 is a schematic block diagram of the protocol for handling pre-operative data ("pre-op protocol") to generate images during surgery according to the present invention. It is assumed that three-dimensional image data of the patient's head have been previously obtained from one or more of the techniques that are known to persons of ordinary skill in the medical imaging arts. Preferably these data are acquired from CT, MIR and/or MRI scan techniques to provide images with improved accuracy and detail, compared to ultrasound scan data for example. The scan data are loaded and stored (301) into the computer memory (202) through additional input means such as disk drives or tape drives, not shown in the drawings.
- pre-op protocol pre-operative data
- the patient data is registered (302) according to one of the generally known techniques.
- This procedure may be either a three-dimensional registration of the entire data set, or a slice-by-slice sequence of two-dimensional registrations.
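One common way to perform such a rigid registration is the paired-point least-squares (Kabsch/Horn) solution using the fiducial markers, sketched below; the patent only refers to generally known techniques, so this is merely one illustrative possibility.

```python
import numpy as np

def rigid_register(fiducials_image, fiducials_patient):
    """Least-squares rigid transform (rotation R, translation t) mapping image-space
    fiducial positions onto the corresponding patient-space positions measured with a
    tracked pointer.  Classic paired-point (Kabsch/Horn) solution, shown only as an
    illustrative choice of registration algorithm.
    """
    P = np.asarray(fiducials_image, dtype=float)    # (N, 3) points in scan coordinates
    Q = np.asarray(fiducials_patient, dtype=float)  # (N, 3) corresponding patient points
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t                                     # x_patient ~= R @ x_image + t
```

Once R and t are known, any point in the scan volume can be mapped into the tracker's patient coordinates (and vice versa), which is what allows the probe pose reported by the OTS to select views of the scan data.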
- the image is reconstructed (303) in memory, using volumetric or surface rendering to produce an array of 3-dimensional voxel data.
- Segmentation (304) is then carried out on these data to distinguish various anatomical features, such as different types of material in the head (bone, brain tissue, vascular and ventricular structures, etc.) and the location of surfaces, using one or more of known segmentation techniques.
- the segmentation process includes assigning different display colors to different types of structures to facilitate their identification and distinction in a color video display.
- the vascular system may be displayed in red, the ventricular system may be shown in blue, bones may be colored brown, and so on.
- these assignments may be varied by the user by means of the keyboard (103) or mouse (104) .
- the display opacities may be varied by the user by means of the keyboard (103), mouse (104), or other input device (such as a voice-activated device) to further facilitate their identification and distinction of hidden or obstructed features in the video display.
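A minimal sketch of this idea, assuming CT data in Hounsfield units and simple threshold-based classification (only one of the known segmentation techniques the text alludes to); the label names, thresholds, colours and opacities below are illustrative, not taken from the patent.

```python
import numpy as np

# Illustrative tissue labels and Hounsfield-unit thresholds.
AIR, SOFT_TISSUE, BONE = 0, 1, 2

DISPLAY_TABLE = {                          # label -> (RGB colour, opacity), user adjustable
    AIR:         ((0.0, 0.0, 0.0), 0.0),
    SOFT_TISSUE: ((0.8, 0.5, 0.4), 0.2),   # low opacity: can be "seen through"
    BONE:        ((0.6, 0.4, 0.2), 1.0),
}

def segment_ct(volume_hu):
    """Classify each voxel of a CT volume (in Hounsfield units) into coarse tissue types."""
    labels = np.full(volume_hu.shape, AIR, dtype=np.uint8)
    labels[volume_hu > -300] = SOFT_TISSUE
    labels[volume_hu > 300] = BONE
    return labels

def set_opacity(label, value):
    """Interactive adjustment (keyboard, mouse, or voice input) of a tissue's display opacity."""
    colour, _ = DISPLAY_TABLE[label]
    DISPLAY_TABLE[label] = (colour, float(np.clip(value, 0.0, 1.0)))
```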
- segmentation (309) can be done for each 2-dimensional image sample, and the 3-dimensional data are then reconstructed (310) from the segmented data slices. This alternative protocol is shown by dotted lines in the Figure.
- the next phase of the pre-op protocol is to determine the location and orientation of the view vector (305) to define the image to be displayed.
- This view vector is obtained by querying the OTS to ascertain the current location and orientation of the surgical instrument (109) .
- the three-dimensional scan data is then manipulated (306) to position and orient the resulting three-dimensional perspective view and to define cutting planes and reference markers in the displayed image indicating and clarifying this view.
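The manipulation of the volume into the probe's perspective amounts to building a virtual camera from the tracked pose. A sketch of one way to do this, using a conventional look-at construction (the up-vector convention is an assumption, not something the patent specifies):

```python
import numpy as np

def view_matrix(tip, axis, up_hint=(0.0, 0.0, 1.0)):
    """Build a 4x4 world-to-camera matrix for a perspective view looking from the
    instrument tip along its longitudinal axis.  'up_hint' is an arbitrary convention
    used only to fix the roll of the virtual camera.
    """
    eye = np.asarray(tip, dtype=float)
    forward = np.asarray(axis, dtype=float)
    forward /= np.linalg.norm(forward)
    up_hint = np.asarray(up_hint, dtype=float)
    right = np.cross(forward, up_hint)
    if np.linalg.norm(right) < 1e-6:           # axis happens to be parallel to the hint
        right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = right, up, -forward   # camera looks down -z
    M[:3, 3] = -M[:3, :3] @ eye                          # translate the eye to the origin
    return M
```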
- the manipulated three-dimensional perspective image is then displayed (307) on the video display (102) .
- other two-dimensional images such as 2D sectional views for any cutting planes, are preferably also displayed along with the 3D perspective display for purposes of elucidation.
- the pre-op protocol is a continuing loop process in which the OTS is repeatedly queried (308) for changes in the location of the view vector corresponding to changes in the position and orientation of the surgical instrument (109) .
- the displayed images are continually being updated during the surgical procedure, and the resulting displays are constantly refreshed in real time.
- the image data are also stored or buffered and made available for further use (311) according to subsequent protocols.
- the surgical instrument (109) may include an ultrasound transducer located at the tip (115), which itself scans and detects ultrasound imaging data when placed in contact with the patient's head.
- Figure 4 is a schematic block diagram showing the intra-operative (“intra-op”) ultrasound (“US”) protocol for handling the US image data during surgery.
- the ultrasound transducer is a phased focusing array which generates data from a planar fan-shaped sector of the anatomical region of interest, where the central axis of the transducer lies in the plane of the scan sector which, in this context, is collinear with the longitudinal axis of the surgical instrument (109) .
- US scan data is collected and stored (401) for a cone-shaped volume in the region of interest. This cone defines the "field of view" of the transducer scan.
- the location and orientation of the transducer is tracked and determined (402) by the OTS, and the US data is used to reconstruct (403) three-dimensional intra-op image data for the region of interest.
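A sketch of how tracked 2D ultrasound slices can be compounded into a 3D intra-operative volume. The calibration matrix mapping image pixels to patient space, and the simple averaging scheme, are assumptions for illustration; the patent does not detail the reconstruction step.

```python
import numpy as np

def compound_slice(volume, counts, slice_img, pixel_to_world, world_to_voxel):
    """Insert one tracked 2D ultrasound slice into a 3D voxel volume (freehand compounding).

    slice_img      : (H, W) scan-converted ultrasound image
    pixel_to_world : 4x4 matrix mapping homogeneous pixel coords (u, v, 0, 1) to patient
                     space, obtained from the OTS pose of the transducer plus a calibration
    world_to_voxel : 4x4 matrix mapping patient space into the reconstruction grid
    Each voxel accumulates the sum and count of samples; the final value is their average.
    """
    H, W = slice_img.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.zeros(u.size), np.ones(u.size)])   # (4, N)
    vox = (world_to_voxel @ pixel_to_world @ pix)[:3]
    idx = np.round(vox).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    idx, vals = idx[:, ok], slice_img.ravel()[ok]
    np.add.at(volume, (idx[0], idx[1], idx[2]), vals)
    np.add.at(counts, (idx[0], idx[1], idx[2]), 1)
    return volume, counts     # reconstructed image: volume / np.maximum(counts, 1)
```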
- This data is manipulated (404) in a way analogous to the manipulation (306) of the pre-op data, and then used to generate three-dimensional images (405), together with any desired corresponding two-dimensional images of the ultrasound data.
- These intra-op images are fused (406) with the pre-op images generated by the pre-op protocol (311), and the composite images are further displayed.
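The fusion step can be as simple as a weighted blend of the co-registered intra-operative image and the rendered virtual image, as in the sketch below; the blending weight and the assumption that both images are already rendered in the same view are illustrative only.

```python
import numpy as np

def fuse_images(virtual_rgb, actual_rgb, blend=0.5):
    """Blend a rendered 'virtual' image (from the scan data) with an 'actual' intra-operative
    image (ultrasound, endoscope, or microscope) already brought into the same view.
    'blend' is a user-controlled weight: 0 shows only the actual image, 1 only the virtual one.
    Both inputs are float RGB arrays of identical shape with values in [0, 1]."""
    blend = float(np.clip(blend, 0.0, 1.0))
    return blend * np.asarray(virtual_rgb, float) + (1.0 - blend) * np.asarray(actual_rgb, float)
```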
- the OTS is continually strobed (407), and the ultrasound images are constantly refreshed.
- Figure 5 is a schematic block diagram of the intra-op protocol in which an endoscope is placed at the tip (115) of the surgical instrument (109).
- This protocol is also applicable for procedures utilizing a surgical microscope in place of the endoscope.
- Image data is acquired (501), using a CCD camera or other known technique, representing a 2-dimensional image in a plane orthogonal to the line of sight of the endoscope or microscope, which in this context is the longitudinal axis of the surgical instrument (109).
- the location and orientation of the instrument is tracked and determined (502) by the OTS, and analog-to-digital (“A/D”) conversion (503) is carried out on the data.
- A/D analog-to-digital
- the location of the viewpoint is determined (504) from the OTS data, and the endoscope or microscope image data is manipulated (505) to generate the desired image (506) for display.
- These intra-op images are fused (508) with the pre-op images generated by the pre-op protocol (311), and the composite images are further displayed.
- the OTS is continually strobed (507), and the endoscope images are constantly refreshed.
- Figure 6 is a schematic block diagram of a flow chart for a program that implements the pre-op protocol.
- the program starts (601) by causing the computer to receive and load (602) previously obtained scan data for the patient, such as MRI or CT data.
- the computer further reads data from the OTS (603) to register the scanned patient data (604) .
- the scanned data is used to reconstruct image data (605) in three dimensions, and segmentation (606) is carried out on this reconstruction.
- segmentation is carried out on 2D slices (615), and these segmented slices are then reconstructed into the full 3D image data.
- the program next reads input data from the keyboard (103) or mouse (104) to enable the user to select a field of view for image displays (607) .
- the image data is then manipulated and transformed (608) to generate the requested view, along with any selected reference markers, material opacities, colors, and other options presented to the user by the program.
- the user may request a 3D display of the entire head, together with a superimposed cone showing the field of view for an endoscope, microscope, ultrasound transducer, or other viewing device being used during the surgery.
- the resulting manipulated image is then displayed (609) preferably in color on the video display (102) .
- the computer next reads the OTS data (610) and determines (611) whether the surgical instrument has moved.
- If so, program control returns to the selection of a new field of view (607) and the successive operations (608) - (610) shown in Figure 6. If the position of the instrument has not changed, the displayed image is stored (612), refreshing any previously stored display image. The program further looks for requests from the user (613) whether to discontinue operation, and if there are no such requests, the operations (611) and (612) are repeated. Thus the computer remains in a loop of operations until the user requests termination (614).
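The loop just described can be summarized in the following skeleton. The 'ots', 'renderer' and 'user' objects are hypothetical interfaces standing in for the tracking system, the image-generation software and the keyboard/mouse input; the numbered comments refer to the operations of Figure 6.

```python
def preop_display_loop(ots, renderer, user):
    """Skeleton of the pre-operative display loop of Figure 6 (hypothetical interfaces)."""
    last_pose, image = None, None
    while not user.requested_termination():          # checks (613)/(614)
        pose = ots.read_pose()                       # read OTS data (610); e.g. a tuple
        if pose != last_pose:                        # has the instrument moved? (611)
            view = user.selected_field_of_view()     # field-of-view selection (607)
            image = renderer.render(pose, view)      # manipulate and transform the data (608)
            renderer.show(image)                     # display the image (609)
            last_pose = pose
        elif image is not None:
            renderer.store(image)                    # refresh the stored display image (612)
```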
- Figure 7 is a schematic block diagram of a flow chart for a program that implements the ultrasound intra-op protocol.
- the program starts (701) by causing the computer to receive and load the data from a US transducer at the tip (115) of the surgical instrument (109) .
- the data is normally produced using polar or spherical coordinates to specify locations in the region of interest, and the program preferably converts (703) this data to Cartesian coordinates.
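A nearest-neighbour sketch of that scan conversion for one planar fan of samples; a production system would interpolate, and the argument names and grid layout are assumptions.

```python
import numpy as np

def polar_to_cartesian(r, theta, values, grid_size, r_max):
    """Scan-convert ultrasound samples given as (range r, beam angle theta) within the
    scan plane onto a regular Cartesian grid by nearest-neighbour binning."""
    r = np.asarray(r, dtype=float)
    theta = np.asarray(theta, dtype=float)
    values = np.asarray(values, dtype=float)
    x = r * np.sin(theta)                       # lateral offset from the transducer axis
    y = r * np.cos(theta)                       # depth along the transducer axis
    img = np.zeros((grid_size, grid_size))
    i = np.round((y / r_max) * (grid_size - 1)).astype(int)
    j = np.round(((x / r_max) * 0.5 + 0.5) * (grid_size - 1)).astype(int)
    ok = (i >= 0) & (i < grid_size) & (j >= 0) & (j < grid_size)
    img[i[ok], j[ok]] = values[ok]
    return img
```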
- OTS data is read (704) to determine the position and orientation of the surgical instrument (109) , and US data from the aggregation of aligned data slices is utilized to reconstruct 3D image data (705) representing the US scan data.
- This image data is manipulated and transformed (706) by the program in a manner similar to the manipulation (608) of the pre-op data, and the resulting image is displayed (707).
- the OTS is queried (709) to determine whether the surgical instrument has moved (713), and if so a new US display image is constructed.
- the program queries the user (716) whether to carry out another US scan of the region of interest. If so, program control returns to the operation (702) in Figure 7 and fresh US data is obtained by the US transducer. If another scan is not requested (716) , the program returns to operation (705) and a new 3D image is reconstructed from the present US scan data.
- If the OTS query (709) determines that the surgical instrument has not moved since the last query, the US image is fused (710) with the pre-op image obtained by the program shown in Figure 6, and the combined image is displayed (711).
- the OTS is again queried (712) to determine (713) whether the surgical instrument has moved. If so, the program returns to the new scan user query (716) . Otherwise the program further looks for requests from the user (714) whether to discontinue operation, and if there are no such requests, the operation (713) is repeated.
- the computer remains in a loop of operations until the user requests termination (715) , similarly to the pre-op program of Figure 6.
- the endoscope/microscope intra-op protocol is implemented preferably by the endoscope intra-op program having a flow chart shown in schematic block diagram form in Figure 8.
- the program causes the computer to receive and load image data from the endoscope (802) .
- This data is digitized (803) and preferably displayed (804) on the video display (102) .
- the OTS is queried (805) to receive information determining the location and orientation of the endoscope (806) .
- the pre-op data obtained by the pre-op program illustrated in Figure 6 is retrieved (807), and utilized to reconstruct a 3-dimensional virtual image (808) from the viewpoint of the endoscope.
- This image is displayed (809), in a manner similar to the 3D display of images by the pre-op program illustrated in Figure 6.
- This image is fused (810) with the endoscope image displayed in operation (804), and the combined image is also displayed (811) .
- the OTS is then strobed (812) to determine (813) whether the endoscope has moved since the last query, and if so, program control returns to the operation (802) which refreshes the image data received by the endoscope. Otherwise the program further looks for requests from the user (814) whether to discontinue operation, and if there are no such requests, the operation (813) is repeated.
- the computer remains in a loop of operations until the user requests termination (815), similarly to the pre-op and intra-op programs of Figures 6 and 7.
- the foregoing program modules may be designed independently, and they can be configured also to run independently.
- the pre-op program may be completed, followed by running of either or both of the intra-op programs.
- these programs operate in parallel during surgery so that the pre-op data images and intra-op data images are all continually refreshed as the operation proceeds.
- Known methods for parallel execution of programs may be utilized to accomplish this result.
- the above programs are carried out preferably on a computer (101) that is adapted for computer graphics applications. Suitable computers for these programs are commercially available from Silicon Graphics, Inc. of Mountain View, California. Graphics software modules for most of the individual image processing operations in the above programs are also available from Silicon Graphics, Inc. as well as other sources.
- Figure 9 shows a highly simplified sketch of a three-dimensional image display (901) obtained by the above system with the surgical probe (109) of Figure 1 in the position illustrated, pointing toward the target lesion or tumor (117) inside the patient's head (112).
- the edge of the display (901) is shown by the border (900) .
- the display (901) is a perspective view from the tip (115) of the probe (109) . This display is continuously refreshed, so that as the probe (109) is moved the displayed image (901) immediately changes. It will be noted that, although the probe (109) is shown entirely outside the patient's head, the display (901) shows internal anatomical structures such as the brain and the target lesion (117).
- the display characteristics can be adjusted in real time to emphasize or de-emphasize the internal structures. These structures may be distinguished by displays with different colors for different types of material. Also, the display opacity of the skin, skull, and brain tissue may be reduced to provide or emphasize further structural details regarding the target lesion (117) .
- the display (901) effectively equips the surgeon with "X-ray eyes" to look at hidden structures through obstructing surfaces and objects. With this display, the entire internal structure of the head may be examined and studied to plan a surgical trajectory before any incision is made. Furthermore, if the surgical instrument (109) is a scalpel, the display (901) allows the surgeon to see any structures immediately behind a surface prior to the first incision.
- Figure 9 shows also the conventional axial (902), coronal (903) and sagittal (904) 2D displays for purposes of further clarification and elucidation of the region under examination.
- the field of view (116) is also indicated in the display (901) by the quasi-circular image (905) indicating the intersection of the conical field of view (116) with the surface of the skin viewed by the endoscope (109) .
- This conical field of view is also superimposed, for completeness, in the 2D displays (902) - (904) .
- displays are also presented showing the actual image seen by the endoscope in the field of view (905) , and the 3D perspective image for the same region in the field of view (905) ; these auxiliary displays are not shown in the drawings.
- Similar auxiliary displays are preferably included when the instrument (109) is an ultrasound transducer.
- the endoscope may be inserted to provide an internal view of the target anatomy.
- Figure 10 shows a highly simplified sketch of a three-dimensional image display (1001) obtained by the above system with the endoscope (109) of Figure 1 in the alternative position shown by the dotted lines, pointing toward the target lesion or tumor (117).
- the edge of the display (1001) is shown by the border (1000) .
- the display (1001) has been manipulated to provide a three-dimensional sectional view with a cutting plane passing through the tip (115) of the endoscope (109) and orthogonal to its axis.
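Such a sectional view can be obtained by suppressing all voxels on the near side of a plane that passes through the tip and has the instrument axis as its normal, as in the sketch below (voxel-unit coordinates are assumed purely for simplicity).

```python
import numpy as np

def cutaway_mask(shape, tip_voxel, axis):
    """Boolean mask that is True for voxels lying beyond a cutting plane through the
    instrument tip and orthogonal to its axis; voxels on the near side of the plane are
    suppressed, exposing the sectional view."""
    grids = np.indices(shape)                              # (3, X, Y, Z) index grids
    coords = np.stack([grids[0], grids[1], grids[2]], axis=-1).astype(float)
    n = np.asarray(axis, dtype=float)
    n /= np.linalg.norm(n)
    # signed distance of each voxel centre from the plane through the tip
    d = (coords - np.asarray(tip_voxel, dtype=float)) @ n
    return d >= 0.0                                        # keep the half-space ahead of the tip
```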
- The endoscope field of view (905) is indicated in the display, and in a preferred embodiment auxiliary displays are also presented showing the actual image seen by the endoscope in the field of view (905), and the 3D perspective image for the same region in the field of view (905); these auxiliary displays are also not shown in Figure 10.
- This Figure further preferably includes also the conventional axial (1002), coronal (1003) and sagittal (1004) 2D displays for purposes of further clarification and elucidation.
- Figures 11a, 11b, 12 and 13 illustrate further the three-dimensional displays that are produced by a preferred embodiment of the present invention. Referring to Figures 11a, 11b, a plastic model of a skull has been fabricated having a base portion (1102) and a removable top portion (1101).
- These Figures show the model skull (1101), (1102) resting on a stand (1106).
- Figure 11a also shows a pointer (1104) with LED's (1105) connected to an OTS (not shown in the drawing) that has been used to generate displays according to the invention.
- a plurality of holes (1103) in the top portion (1101) are provided, which allow the pointer (1104) to be extended into the interior of the skull.
- Figure 11b shows the skull with the top portion (1101) removed.
- a plastic model of internal structures (1107) is fabricated inside the skull; these internal structures are easily recognizable geometric solids, as illustrated in the Figure.
- Figure 12 is a composite of two displays (1201), (1202) of the skull with the pointer (1104) directed toward the skull from a top center external location, similar to the location and orientation of the pointer shown in Figure 1.
- the display (1201) is a three- dimensional perspective view from this pointer location.
- the display (1202) is the same view, but with the display opacity of the skull material reduced. This reduced opacity makes the internal structure (1107) clearly visible, as shown in the Figure.
- the system enables the surgeon to vary this opacity in real time to adjust the image so that both the skull structure and the internal structure are visible in the display in various proportions.
- the surface contour lines shown in the display (1201) are produced by the finite size of the rendering layers or voxels. These contour lines may be reduced by smoothing the data, or by reducing the sizes of the voxels or layers.
- Figure 13 is a composite of two further displays with the pointer (1104) moved to extend through one of the openings (1103).
- Display (1302) is the view from the tip of the pointer inside the skull.
- Display (1301) is a view of the entire structure from outside the skull along the pointer axis; in other words, display (1302) is substantially a magnification of part of display (1301) .
- Display (1301) shows the skull with a portion cut away by a cutting plane through the tip of the pointer, perpendicular to the pointer axis. Both of these displays clearly illustrate the perspective nature of the three-dimensional displays generated by the present invention.
- Figure 14 is a simplified composite of displays generated by the system for an actual human head.
- Display (1401) is a perspective view of the entire head with a cutaway portion defined by orthogonal cutting planes as shown. The edge of the display in Figure 14 is shown by the border (1400) . This display also shows the field of view of an endoscope pointing toward the head along the intersection line of the two cutting planes, with the tip of the endoscope at the apex of the cone.
- Display (1402) shows the two-dimensional sectional view produced by the vertical cutting plane
- display (1403) shows the corresponding sectional view produced by the horizontal cutting plane.
- the images in displays (1402) and (1403) are also transformed (rotated and magnified) and superimposed on the three-dimensional image in display (1401) .
- Display (1404) is the actual image seen by the endoscope.
- Display (1405) is a virtual perspective view of the endoscope image reconstructed from scan data by volume rendering in accordance with the present invention.
- Display (1406) is a virtual perspective view of the image from the endoscope viewpoint with a narrower field of view, reconstructed from scan data by surface rendering in accordance with the present invention. This display (1406) would be used with a surgical probe in planning a surgical trajectory.
- Display (1407) is a magnification of (1406) (i.e. with a narrower field of view) showing the virtual image that would be seen through a microscope.
- display (1408) is a segmented three-dimensional perspective view of the entire head from the scan data utilizing surface rendering
- display (1409) is the same view with volume rendering.
- Figure 14 illustrates the rich variety and versatility of the displays that are possible with the present system. All of these displays are presented to the surgeon in real time, simultaneously, and can be varied on line.
- this invention provides improved means for navigating through the anatomy during actual surgical procedures.
- the system enables the surgeon to select and adjust the display with the same tool that is being utilized to perform the procedure, without requiring extra manual operations. Since the displays are provided immediately in real time, the imaging does not require any interruption of the procedure.
- the virtual images provided by this system are continuously correlated with the images that are obtained through conventional means.
- the invention is not limited in its application to neurosurgery, or any other kind of surgery or medical diagnostic applications.
- systems implementing the invention can be implemented for actual nautical or aviation navigation utilizing information from satellites to obtain the "pre-op" scan data.
- the pointing device can be implemented by the vessel or aircraft itself, and the video display could be replaced by special imaging goggles or helmets.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Veterinary Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Physics & Mathematics (AREA)
- Biophysics (AREA)
- Human Computer Interaction (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Robotics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
A surgical navigation system has a computer (101) with a memory (202) and a display (102) connected to a surgical instrument or pointer (109) and a position tracking system (105, 110, 111), so that the location and orientation of the pointer (109) are tracked in real time, and conveyed to the computer (101). The computer memory (202) is loaded with data from an MRI, CT, or other volumetric scan of a patient, and this data is utilized to dynamically display 3-dimensional perspective images in real time of the patient's anatomy from the viewpoint of the pointer (109).
Description
Method and Apparatus for Volumetric Image Navigation
Technical Field
This invention pertains generally to systems and methods for generating images of three dimensional objects for navigation purposes, and more particularly to systems and methods for generating such images in medical and surgical applications.
Background Art
Precise imaging of portions of the anatomy is an increasingly important technique in the medical and surgical fields. In order to lessen the trauma to a patient caused by invasive surgery, techniques have been developed for performing surgical procedures within the body through small incisions with minimal invasion. These procedures generally require the surgeon to operate on portions of the anatomy that are not directly visible, or can be seen only with difficulty. Furthermore, some parts of the body contain extremely complex or small structures and it is necessary to enhance the visibility of these structures to enable the surgeon to perform more delicate procedures. In addition, planning such procedures requires the evaluation of the location and orientation of these structures within the body in order to determine the optimal surgical trajectory.
New diagnostic techniques have been developed in recent years to obtain images of internal anatomical structures. These techniques offer great advantages in comparison with the traditional X-ray methods. Newer techniques include microimpulse radar (MIR) , computer tomography (CT) scans, magnetic resonance imaging (MRI) , positron emission tomography (PET), ultrasound (US) scans, and a variety of other techniques. Each of these methods has advantages and
drawbacks in comparison with other techniques. For example, the MRI technique is useful for generating three-dimensional images, but it is only practical for certain types of tissue, while CT scans are useful for generating images of other anatomical structures. Ultrasound scanning, in contrast, is a relatively rapid procedure; however it is limited in its accuracy and signal-to-noise ratio.
The imaging problem is especially acute in the field of neurosurgery, which involves performing delicate surgical procedures inside the skull of the patient. The above techniques have improved the surgeon's ability to locate precisely various anatomical features from images of structures within the skull. However this has only limited usefulness in the operating room setting, since it is necessary to match what the surgeon sees on the 2D image with the actual 3D patient on the operating table. The neurosurgeon is still compelled to rely to a considerable extent on his or her knowledge of human anatomy.
The stereotactic technique was developed many years ago to address this problem. In stereotactic surgery, a frame of reference is attached to the patient's head which provides reference points for the diagnostic images. The device further includes guides for channeling the surgical tool along a desired trajectory to the target lesion within the brain. This method is cumbersome and has the drawback that the surgeon cannot actually see the structures through which the trajectory is passing. There is always the risk of damage to obstacles in the path of the incision, such as portions of the vascular or ventricular system. In essence, with previous neurosurgical techniques the surgeon is in the position much like that of a captain piloting a vessel traveling in heavy fog through waters that have many hazards, such as shoals, reefs, outcroppings of rocks, icebergs, etc. Even though the captain may have a very good map of these hazards, nevertheless there is the constant
problem of keeping track of the precise location of the vessel on the map. In the same way, the neurosurgeon having an accurate image scan showing the structures within the brain must still be able to precisely locate where the actual surgical trajectory lies on the image in order to navigate successfully to the target location. In the operating room setting, it is further necessary that this correlation can be carried out without interfering with the numerous other activities that must be performed by the surgeon.
The navigation problem has been addressed in United States Patent No. 5,383,454, issued January 24, 1995 (Bucholz). This patent describes a system for indicating the position of a surgical probe within a head on an image of the head. The system utilizes a stereotactic frame to provide reference points, and to provide means for measuring the position of the probe tip relative to these reference points. This information is converted into an image by means of a computer. United States Patent No. 5,230,623, issued July 27, 1993 (Guthrie), discloses an operating pointer whose position can be detected and read out on a computer and associated graphics display. The pointer can also be used as a "3D mouse" to enable the surgeon to control the operation of the computer without releasing the pointer. United States Patent No. 5,617,857, issued April 8, 1997 (Chader et al.) sets forth an imaging system and method for interactively tracking the position of a medical instrument by means of a position-detecting system. The pointer includes small light-emitting diodes (LED), and a stationary array of radiation sensors is provided for detecting pulses emitted by these LED's and utilizing this information to ascertain dynamically the position of the pointer. Reference is made also to United States Patent No. 5,622,170, issued April 22, 1997 (Schulz), which describes a
similar system connected to a computer display for displaying the position of an invasive surgical probe relative to a model image of the object being probed (such as a brain) . United States Patent No. 5,531,227, issued July 2, 1996
(Schneider) explicitly addresses the problem recognized in many other references that it is desirable to provide a real time display of a surgical probe as it navigates through the brain. This patent describes a system for providing images along the line of sight of the surgeon in a dynamic realtime fashion. In this system the images that are displayed are resliced images from a three-dimensional data reconstruction which are sections or slices orthogonal to the line of sight, taken at various positions along this line specified by the user. Thus, while the viewpoint for the line of sight is always external to the body, the sectional planes that are used to define the virtual images may constitute various slices through the body chosen by the surgeon. These images may be superimposed on actual images obtained by an image recording device directed along the line of sight such as a video camera attached to the surgeon's head, and the composite images may be displayed.
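For concreteness, a reslicing of that kind can be sketched as follows, extracting a plane orthogonal to the line of sight at a chosen distance along it; nearest-neighbour sampling and voxel-unit coordinates are simplifying assumptions, and the sketch is meant only to illustrate the prior-art idea described above.

```python
import numpy as np

def reslice(volume, eye, sight_dir, distance, size=256, spacing=1.0):
    """Extract a slice orthogonal to a line of sight, taken at a user-chosen 'distance'
    along that line, by nearest-neighbour sampling of the volume."""
    d = np.asarray(sight_dir, dtype=float)
    d /= np.linalg.norm(d)
    u = np.cross(d, [0.0, 0.0, 1.0])              # two in-plane axes orthogonal to the line
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(d, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    centre = np.asarray(eye, dtype=float) + distance * d
    half = (np.arange(size) - size / 2.0) * spacing
    uu, vv = np.meshgrid(half, half)
    pts = centre + uu[..., None] * u + vv[..., None] * v        # (size, size, 3) sample points
    idx = np.round(pts).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=-1)
    out = np.zeros((size, size), dtype=volume.dtype)
    out[ok] = volume[idx[ok][:, 0], idx[ok][:, 1], idx[ok][:, 2]]
    return out
```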
The systems described above attempt to address the navigation problem in various ways, and they all have the common drawback of requiring a certain level of abstract visualization by the surgeon during an operating room procedure. When the surgeon is proceeding through the brain toward a target tumor or lesion, it is desirable to be fully aware of all of the structures around the surgical trajectory. With previous systems the displays that are presented do not provide all of this information in a single convenient real-time display, and they require the viewer to piece together and re-orient the displayed information to obtain a mental picture of the surrounding structures. These are serious practical disadvantages in an operating
room setting. What is absent from previous systems is a 3D display that shows, in a real-time view, the various structures looking ahead from the surgical probe along a line of sight into the brain in three and two dimensions, including structures hidden by other features.
Disclosure of Invention
The present invention provides an improved system and method for displaying 3D images of anatomical structures in real time during surgery to enable the surgeon to navigate through these structures during the performance of surgical procedures. This system is also useful in planning of surgical procedures. The system includes a computer with a display and input devices such as a keyboard and mouse. The system also includes a position tracking system that is connected both to the computer and also to the surgical probes or other instruments that are used by the surgeon. The position tracking system provides continual real time data to the computer indicating the location and orientation of the surgical instrument in use. The computer further includes a memory containing patient data produced by imaging scans, such as CT or MRI scans, from which 2-dimensional and 3-dimensional images of the anatomical structure may be generated. Means are provided for registration of these images with respect to the patient. The computer memory is further provided with programs that control the generation of these anatomical images. These programs include software for segmentation of the scan images to identify various types of structures and tissues, as well as the reconstruction of 2D and 3D images from the scan data. This software allows these images to be displayed with various magnifications and orientations, and with various sectional views produced by slice planes in various locations and orientations, all controlled by the surgeon.
This image-generating software has the important feature that it produces 3D images that are perspective views of the anatomical structures, with user-controlled means for varying the viewing orientation and location, and also varying the displayed transparency or opacity of various types of tissues, structures, and surfaces in the viewed region of interest. This enables the user to effectively "see through" surfaces and structures in the line of sight of the image to reveal other structures that would otherwise be hidden in that particular view.
Further, the images are generated from the viewpoint of the surgical probe or instrument that is in use, looking from the tip of the instrument along its longitudinal axis. Thus, when an invasive surgical instrument such as a scalpel or forceps is inserted into an incision in the body, the display provides a three dimensional perspective view of anatomical structures from a viewpoint inside the body. These images are all generated in real time "on the fly". Thus, as the instrument is moved or rotated, the position tracking system continually provides data to the computer indicating the location and orientation of the instrument, and the displayed image is continually updated to show the structures toward which the instrument is pointing.
In addition, for probes or instruments being used that are capable themselves of generating images, such as ultrasound probes, endoscopes, or surgical microscopes, the system provides means for integrating these images with those generated from the scan data. The software enables the user to overlay the "actual images" generated by these instruments with the "virtual images" generated from the scan data.
It is an object of this invention to provide a system and method for generating an image in three dimensional perspective of anatomical structures encountered by a surgeon during the performance of surgical procedures.
A second object of this invention is to provide a system and method for generating such an image with user-controlled means for varying the location and orientation of the viewpoint corresponding to the image. Another object of this invention is to provide a system and method for generating such an image with user-controlled means for varying the opacity of structures and surfaces in the viewed region of interest, so that the displayed image shows structures and features that would be otherwise hidden in a normal view.
Yet another object of this invention is to provide a system and method for generating such an image with a viewpoint located at the tip of the instrument being used by the surgeon in the direction along the longitudinal axis of the instrument.
Still another object of this invention is to provide a system and method for generating such an image in real time, such that the displayed image continually corresponds to the position of the instrument being used by the surgeon. Yet a further object of this invention is to provide a system and method for comparing and combining such an image with the image produced by an image-generating instrument being used by the surgeon.
These and other objects, advantages, characteristics and features of the invention may be better understood by examining the following drawings together with the detailed description of the preferred embodiments.
Brief Description of Drawings Figure 1 is a schematic perspective drawing of the apparatus of the present invention in operating room use during the performance of neurosurgical procedures.
Figure 2 is a schematic block diagram of the computer system and optical tracking system of the present invention. Figure 3 is a schematic block diagram of the navigation
protocol using pre-operative data that is followed in carrying out the method of the present invention.
Figure 4 is a schematic block diagram of the navigation protocol using ultrasound intra-operative data that is followed in carrying out the method of the present invention.
Figure 5 is a schematic block diagram of the endoscopic protocol that is followed in carrying out the method of the present invention. Figure 6 is a schematic flow chart of the pre-operative computer program that implements the pre-operative protocol of the present invention.
Figure 7 is a schematic flow chart of the intra-operative ultrasound computer program that implements the ultrasound protocol of the present invention.
Figure 8 is a schematic flow chart of the intra-operative endoscope computer program that implements the endoscope protocol of the present invention.
Figure 9 is a drawing of a display generated according to the present invention, showing axial, coronal, and sagittal views of a head, together with a three-dimensional perspective view of the head taken from an exterior viewpoint.
Figure 10 is a drawing of a display generated according to the present invention, showing sectional axial, coronal, and sagittal views of a head, together with a three-dimensional perspective view of the head taken from an interior viewpoint.
Figure 11a is a drawing of a plastic model of a human skull and a surgical probe that has been used to demonstrate the present invention.
Figure 11b is another drawing of the model skull of Figure 11a, with the top of the skull removed to show model internal structures for demonstration purposes. Figure 12 is a simplified reproduction of two displays
produced by the present invention for the model skull shown in Figures 11a, 11b.
Figure 13 is a simplified reproduction of two further displays of the invention for the skull in Figures 11a, 11b. Figure 14 is a reproduction of a composite display produced by the present invention for an actual human head.
Best Mode for Carrying Out the Invention
Figure 1 shows the apparatus of the invention as used in performing or planning a neurosurgery operation. In this drawing the patient's head (112) has a tumor or lesion (117), which is the target object of the operation. Fiducial markers (113), (114) are attached to the head to enable registration of images generated by previously obtained scan data according to techniques familiar to persons of ordinary skill in the relevant art. A surgical probe or instrument (109) held by the surgeon is directed toward the tissues of interest. A computer (101) is connected to user input devices including a keyboard (103) and mouse (104), and a video display device (102) which is preferably a color monitor. The display device (102) is located such that it can be easily viewed by the surgeon during an operation, and the user input devices (103) and (104) are placed within easy reach to facilitate use during the surgery. The apparatus further includes a position tracking system, which is preferably an optical tracking system (hereafter "OTS") having a sensing unit (105) mounted overhead in view of the operating table scene, and at least two light emitting diodes (LED's) (110), (111) mounted on the surgical instrument (109). These LED's preferably emit continuous streams of pulsed infrared signals which are sensed by a plurality of infrared detectors (106), (107), (108) mounted in the sensing unit (105) in view of the surgical instrument (109). The instrument (109) and the sensing unit (105) are both connected to the computer (101),
which controls the timing and synchronization of the pulse emissions by the LED's and the recording and processing of the infrared signals received by the detectors (106) - (108). The OTS further includes software for processing these signals to generate data indicating the location and orientation of the instrument (109). The OTS generates the position detecting data on a real time continuous basis, so that as the surgical instrument (109) is moved, its position and orientation are continually tracked and recorded by the sensing unit (105) in the computer (101). The OTS may preferably be of the type known as the "FlashPoint 3-D Optical Localizer", which is commercially available from Image Guided Technologies of Boulder, Colorado, similar to the systems described in U.S. Patent Nos. 5,617,857 (Chader et al.) and 5,622,170 (Schulz) discussed previously. However, the invention is not limited to this particular OTS, and other position tracking systems, such as sonic position detecting systems, may also be utilized.
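The patent does not disclose the localizer's programming interface, so the sketch below only illustrates the geometry: estimating the tip position and pointing direction from two tracked LED coordinates, with an assumed offset from the forward LED to the physical tip. The marker layout and units are invented for the example.

```python
import numpy as np

def instrument_pose(led_front, led_back, tip_offset):
    """Estimate the instrument tip position and pointing direction from two
    tracked marker positions.  A real optical localizer reports marker
    coordinates through its own vendor interface; 'tip_offset' is the assumed
    distance from the forward LED to the physical tip."""
    p_front = np.asarray(led_front, dtype=float)
    p_back = np.asarray(led_back, dtype=float)
    axis = p_front - p_back
    axis /= np.linalg.norm(axis)              # unit vector along the instrument
    tip = p_front + tip_offset * axis         # extrapolate from the forward LED to the tip
    return tip, axis

# Two LED positions as they might be reported by the tracker (millimetres):
tip, axis = instrument_pose((105.0, 42.0, 230.0), (105.0, 42.0, 310.0), tip_offset=60.0)
print(tip, axis)
```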
As illustrated in Figure 1, the surgical instrument (109) is elongated in shape, having a longitudinal axis and tip (115) pointing toward the tissues of interest. The instrument may be an endoscope having a conical field of view (116) that is indicated by dotted lines in Figure 1. The instrument shown in the Figure is held at a position external to the patient's head. If an incision (118) has been made into the skull, the instrument may be inserted through the incision; this alternative position is shown by dotted lines in Figure 1. In both positions the instrument is held so that there is an unobstructed line of sight between the LED's (110), (111) and the sensing unit (105). In endoscopic and other optical viewing applications, the instrument may include a laser targeting system (not shown in the drawings) to illuminate and highlight the region under examination. Figure 2 shows a schematic block diagram of the
computer system connected to the position tracking system. The computer (101) includes a central processing unit (CPU) (201) communicative with a memory (202), the video display (102), keyboard and mouse (103), (104), optical detectors (106) - (108), and the LED's mounted on the surgical instrument (109). The computer memory contains software means for operating and controlling the position tracking system. In an alternative preferred embodiment, the OTS components (105) - (109) may be connected to and controlled by a separate computer or controller which is connected to the computer (101) and provides continual data indicating the position and orientation of the surgical instrument (109).
The above apparatus is operated to carry out surgical protocols that are illustrated schematically in Figures 3 - 5. Figure 3 is a schematic block diagram of the protocol for handling pre-operative data ("pre-op protocol") to generate images during surgery according to the present invention. It is assumed that three-dimensional image data of the patient's head have been previously obtained from one or more of the techniques that are known to persons of ordinary skill in the medical imaging arts. Preferably these data are acquired from CT and/or MRI scan techniques to provide images with improved accuracy and detail, compared to ultrasound scan data for example. The scan data are loaded and stored (301) into the computer memory (202) through additional input means such as disk drives or tape drives, not shown in the drawings.
The patient data is registered (302) according to one of the generally known techniques. This procedure may be either a three-dimensional registration of the entire data set, or a slice-by-slice sequence of two-dimensional registrations. Following the three-dimensional registration, the image is reconstructed (303) in memory, using volumetric or surface rendering to produce an array of
3-dimensional voxel data. Segmentation (304) is then carried out on these data to distinguish various anatomical features, such as different types of material in the head (bone, brain tissue, vascular and ventricular structures, etc.) and the location of surfaces, using one or more known segmentation techniques. Preferably the segmentation process includes assigning different display colors to different types of structures to facilitate their identification and distinction in a color video display. For example, the vascular system may be displayed in red, the ventricular system may be shown in blue, bones may be colored brown, and so on. In a preferred embodiment these assignments may be varied by the user by means of the keyboard (103) or mouse (104). Also in a preferred embodiment the display opacities may be varied by the user by means of the keyboard (103), mouse (104), or other input device (such as a voice-activated device) to further facilitate the identification and distinction of hidden or obstructed features in the video display. In an alternative protocol in which 2-dimensional registration is carried out, segmentation (309) can be done for each 2-dimensional image sample, and the 3-dimensional data are then reconstructed (310) from the segmented data slices. This alternative protocol is shown by dotted lines in the Figure.
Referring still to Figure 3, the next phase of the pre-op protocol is to determine the location and orientation of the view vector (305) to define the image to be displayed. This view vector is obtained by querying the OTS to ascertain the current location and orientation of the surgical instrument (109). With this information, the three-dimensional scan data is then manipulated (306) to position and orient the resulting three-dimensional perspective view and to define cutting planes and reference markers in the displayed image indicating and clarifying this view. The manipulated three-dimensional perspective
image is then displayed (307) on the video display (102). In addition, other two-dimensional images, such as 2D sectional views for any cutting planes, are preferably also displayed along with the 3D perspective display for purposes of elucidation.
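As one hypothetical illustration of the cutting-plane manipulation (306), voxels on the near side of a user-defined plane can be masked out before rendering so the perspective view exposes the interior beyond the plane; the plane parameters and voxel spacing below are invented for the example.

```python
import numpy as np

def apply_cutting_plane(volume, plane_point, plane_normal, spacing=(1.0, 1.0, 1.0)):
    """Zero out (hide) all voxels on the near side of a cutting plane.
    Voxel indices are converted to physical coordinates with an assumed
    spacing; real scan data would carry its own spacing and origin."""
    zi, yi, xi = np.indices(volume.shape)
    coords = np.stack([xi * spacing[0], yi * spacing[1], zi * spacing[2]], axis=-1)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    signed_dist = (coords - np.asarray(plane_point, dtype=float)) @ n
    clipped = volume.copy()
    clipped[signed_dist < 0] = 0               # voxels in front of the plane are removed
    return clipped

# A 32^3 test volume cut by a plane through its centre, normal along +x:
vol = np.ones((32, 32, 32), dtype=np.uint8)
print(apply_cutting_plane(vol, plane_point=(16, 16, 16), plane_normal=(1, 0, 0)).sum())
```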
Finally, the pre-op protocol is a continuing loop process in which the OTS is repeatedly queried (308) for changes in the location of the view vector corresponding to changes in the position and orientation of the surgical instrument (109). Thus the displayed images are continually being updated during the surgical procedure, and the resulting displays are constantly refreshed in real time. The image data are also stored or buffered and made available for further use (311) according to subsequent protocols.
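The registration step (302) of this protocol is commonly realized as a point-based rigid alignment of the fiducial markers; the least-squares solution sketched below is one such method and is not mandated by the patent. The fiducial coordinates used in the demonstration are invented.

```python
import numpy as np

def register_fiducials(scan_points, patient_points):
    """Rigid (rotation + translation) registration that maps fiducial
    coordinates measured in the scan data onto the same fiducials touched
    with the tracked pointer in the operating room."""
    P = np.asarray(scan_points, dtype=float)
    Q = np.asarray(patient_points, dtype=float)
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # proper rotation (no reflection)
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Three fiducials in scan coordinates and their tracked positions on the patient:
scan = [(0, 0, 0), (50, 0, 0), (0, 50, 0)]
patient = [(10, 5, 0), (10, 5, 50), (10, 55, 0)]
R, t = register_fiducials(scan, patient)
print(np.round(R @ np.array(scan[1]) + t, 3))   # should land on patient[1]
```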
The surgical instrument (109) may include an ultrasound transducer located at the tip (115), which itself scans and detects ultrasound imaging data when placed in contact with the patient's head. Figure 4 is a schematic block diagram showing the intra-operative ("intra-op") ultrasound ("US") protocol for handling the US image data during surgery. Typically the ultrasound transducer is a phased focusing array which generates data from a planar fan-shaped sector of the anatomical region of interest, where the central axis of the transducer lies in the plane of the scan sector which, in this context, is collinear with the longitudinal axis of the surgical instrument (109). By rotating the instrument and transducer about this axis, US scan data is collected and stored (401) for a cone-shaped volume in the region of interest. This cone defines the "field of view" of the transducer scan.
The location and orientation of the transducer is tracked and determined (402) by the OTS, and the US data is used to reconstruct (403) three-dimensional intra-op image data for the region of interest. This data is manipulated
(404) in a way analogous to the manipulation (306) of the pre-op data, and then used to generate three-dimensional images (405), together with any desired corresponding two-dimensional images of the ultrasound data. These intra-op images are fused (406) with the pre-op images generated by the pre-op protocol (311), and the composite images are further displayed. Finally, the OTS is continually strobed (407), and the ultrasound images are constantly refreshed.
Figure 5 is a schematic block diagram of the intra-op protocol in which an endoscope is placed at the tip (115) of the surgical instrument (109). This protocol is also applicable for procedures utilizing a surgical microscope in place of the endoscope. Image data is acquired (501), using a CCD camera or other known technique, representing a 2-dimensional image in a plane orthogonal to the line of sight of the endoscope or microscope, which in this context is the longitudinal axis of the surgical instrument (109). The location and orientation of the instrument is tracked and determined (502) by the OTS, and analog-to-digital ("A/D") conversion (503) is carried out on the data. The location of the viewpoint is determined (504) from the OTS data, and the endoscope or microscope image data is manipulated (505) to generate the desired image (506) for display. These intra-op images are fused (508) with the pre-op images generated by the pre-op protocol (311), and the composite images are further displayed. Finally, the OTS is continually strobed (507), and the displayed images are constantly refreshed.
The foregoing protocols are implemented by program modules stored in the memory (202) of the computer (101).
Figure 6 is a schematic block diagram of a flow chart for a program that implements the pre-op protocol. The program starts (601) by causing the computer to receive and load (602) previously obtained scan data for the patient, such as MRI or CT data. The computer further reads data from the
OTS (603) to register the scanned patient data (604). For 3D volumetric rendering, the scanned data is used to reconstruct image data (605) in three dimensions, and segmentation (606) is carried out on this reconstruction. In an alternative embodiment, shown by dotted lines in the Figure, segmentation is carried out on 2D slices (615), and these segmented slices are then reconstructed into the full 3D image data.
The program next reads input data from the keyboard (103) or mouse (104) to enable the user to select a field of view for image displays (607). The image data is then manipulated and transformed (608) to generate the requested view, along with any selected reference markers, material opacities, colors, and other options presented to the user by the program. In addition, the user may request a 3D display of the entire head, together with a superimposed cone showing the field of view for an endoscope, microscope, ultrasound transducer, or other viewing device being used during the surgery. The resulting manipulated image is then displayed (609) preferably in color on the video display (102). The computer next reads the OTS data (610) and determines (611) whether the surgical instrument has moved. If so, program control returns to the selection of a new field of view (607) and the successive operations (608) - (610) shown in Figure 6. If the position of the instrument has not changed, the displayed image is stored (612), refreshing any previously stored display image. The program further looks for requests from the user (613) whether to discontinue operation, and if there are no such requests, the operations (611) and (612) are repeated. Thus the computer remains in a loop of operations until the user requests termination (614).
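A compact sketch of this refresh loop of Figure 6 is shown below; `tracker`, `renderer`, and `should_stop` are placeholder callables standing in for the OTS query, the manipulation/display steps, and the user's termination request, and are not names taken from the patent.

```python
import numpy as np

def preop_display_loop(tracker, renderer, should_stop, tolerance=0.5):
    """Skeleton of the refresh loop of Figure 6: poll the tracker, regenerate
    the view only when the instrument has moved, otherwise keep the stored
    display image."""
    last_pose = None
    stored_image = None
    while not should_stop():
        tip, axis = tracker()                         # read OTS data
        pose = np.concatenate([tip, axis])
        moved = last_pose is None or np.linalg.norm(pose - last_pose) > tolerance
        if moved:
            stored_image = renderer(tip, axis)        # manipulate data, display the new view
            last_pose = pose
        # otherwise the previously stored image remains the displayed/buffered one
    return stored_image

# Tiny demonstration with canned tracker readings and a dummy renderer:
poses = iter([((0, 0, 0), (0, 0, 1)), ((5, 0, 0), (0, 0, 1))])
stops = iter([False, False, True])
print(preop_display_loop(lambda: next(poses), lambda t, a: f"view from {t}", lambda: next(stops)))
```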
Figure 7 is a schematic block diagram of a flow chart for a program that implements the ultrasound intra-op protocol. The program starts (701) by causing the computer
to receive and load (702) the data from a US transducer at the tip (115) of the surgical instrument (109). Such data is normally produced using polar or spherical coordinates to specify locations in the region of interest, and the program converts (703) this data preferably to Cartesian coordinates. Next, OTS data is read (704) to determine the position and orientation of the surgical instrument (109), and US data from the aggregation of aligned data slices is utilized to reconstruct 3D image data (705) representing the US scan data. This image data is manipulated and transformed (706) by the program in a manner similar to the manipulation (608) of the pre-op data, and the resulting image is displayed (707).
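The coordinate conversion (703) might look like the following for a single sample; the angle conventions are assumptions, since the actual transducer geometry is not specified in the patent.

```python
import numpy as np

def us_sample_to_cartesian(r, theta, phi):
    """Convert one ultrasound sample from spherical coordinates (range r,
    in-plane angle theta, rotation angle phi of the scan plane about the
    instrument axis) to Cartesian coordinates in the transducer frame."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.array([x, y, z])

# A sample 40 mm from the tip, 10 degrees off-axis, in the scan plane at phi = 30 degrees:
print(us_sample_to_cartesian(40.0, np.radians(10.0), np.radians(30.0)))
```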
Similarly to the pre-op program shown in Figure 6, the OTS is queried (709) to determine whether the surgical instrument has moved (713), and if so a new US display image is constructed. In a preferred embodiment, the program queries the user (716) whether to carry out another US scan of the region of interest. If so, program control returns to the operation (702) in Figure 7 and fresh US data is obtained by the US transducer. If another scan is not requested (716), the program returns to operation (705) and a new 3D image is reconstructed from the present US scan data. If the OTS query (709) determines that the surgical instrument has not moved since the last query, the US image is fused (710) with the pre-op image obtained by the program shown in Figure 6, and the combined image is displayed (711). The OTS is again queried (712) to determine (713) whether the surgical instrument has moved. If so, the program returns to the new scan user query (716). Otherwise the program further looks for requests from the user (714) whether to discontinue operation, and if there are no such requests, the operation (713) is repeated. Thus the computer remains in a loop of operations until the user
requests termination (715), similarly to the pre-op program of Figure 6.
The endoscope/microscope intra-op protocol is preferably implemented by the endoscope intra-op program having a flow chart shown in schematic block diagram form in Figure 8. Upon starting (801), the program causes the computer to receive and load image data from the endoscope (802). This data is digitized (803) and preferably displayed (804) on the video display (102). The OTS is queried (805) to receive information determining the location and orientation of the endoscope (806). Using this information, the pre-op data obtained by the pre-op program illustrated in Figure 6 is retrieved (807), and utilized to reconstruct a 3-dimensional virtual image (808) from the viewpoint of the endoscope. This image is displayed (809), in a manner similar to the 3D display of images by the pre-op program illustrated in Figure 6. This image is fused (810) with the endoscope image displayed in operation (804), and the combined image is also displayed (811). The OTS is then strobed (812) to determine (813) whether the endoscope has moved since the last query, and if so, program control returns to the operation (802) which refreshes the image data received by the endoscope. Otherwise the program further looks for requests from the user (814) whether to discontinue operation, and if there are no such requests, the operation (813) is repeated. Thus the computer remains in a loop of operations until the user requests termination (815), similarly to the pre-op and intra-op programs of Figures 6 and 7.
The foregoing program modules may be designed independently, and they can also be configured to run independently. Thus, the pre-op program may be completed, followed by running of either or both of the intra-op programs. Preferably, however, these programs operate in parallel during surgery so that the pre-op data images and
intra-op data images are all continually refreshed as the operation proceeds. Known methods for parallel execution of programs may be utilized to accomplish this result.
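One way to obtain this parallel operation is to run each protocol loop on its own thread, as sketched below; the patent does not prescribe a particular concurrency mechanism, and the three loop functions are placeholders for the programs of Figures 6 - 8. In practice each loop would also need to guard any shared display buffers with a lock.

```python
import threading

def run_protocols_in_parallel(preop_loop, ultrasound_loop, endoscope_loop):
    """Run the pre-op and intra-op refresh loops concurrently so that every
    display stays current during the procedure."""
    threads = [threading.Thread(target=loop, daemon=True)
               for loop in (preop_loop, ultrasound_loop, endoscope_loop)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```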
The above programs are carried out preferably on a computer (101) that is adapted for computer graphics applications. Suitable computers for these programs are commercially available from Silicon Graphics, Inc. of Mountain View, California. Graphics software modules for most of the individual image processing operations in the above programs are also available from Silicon Graphics, Inc. as well as other sources.
Referring now to Figure 9, the drawing shows a highly simplified sketch of a three-dimensional image display (901) obtained by the above system with the surgical probe (109) of Figure 1 in the position illustrated, pointing toward the target lesion or tumor (117) inside the patient's head (112). The edge of the display (901) is shown by the border (900). The display (901) is a perspective view from the tip (115) of the probe (109). This display is continuously refreshed, so that as the probe (109) is moved the displayed image (901) immediately changes. It will be noted that, although the probe (109) is shown entirely outside the patient's head, the display (901) shows internal anatomical structures such as the brain and the target lesion (117). With the present system, the display characteristics can be adjusted in real time to emphasize or de-emphasize the internal structures. These structures may be distinguished by displays with different colors for different types of material. Also, the display opacity of the skin, skull, and brain tissue may be reduced to provide or emphasize further structural details regarding the target lesion (117). In short, the display (901) effectively equips the surgeon with "X-ray eyes" to look at hidden structures through obstructing surfaces and objects. With this display, the entire internal structure of the head may be examined and
studied to plan a surgical trajectory before any incision is made. Furthermore, if the surgical instrument (109) is a scalpel, the display (901) allows the surgeon to see any structures immediately behind a surface prior to the first incision. Figure 9 also shows the conventional axial (902), coronal (903) and sagittal (904) 2D displays for purposes of further clarification and elucidation of the region under examination.
When the surgical instrument (109) is an endoscope or US transducer, the field of view (116) is also indicated in the display (901) by the quasi-circular image (905) indicating the intersection of the conical field of view (116) with the surface of the skin viewed by the endoscope (109). This conical field of view is also superimposed, for completeness, in the 2D displays (902) - (904). In a preferred embodiment, displays are also presented showing the actual image seen by the endoscope in the field of view (905), and the 3D perspective image for the same region in the field of view (905); these auxiliary displays are not shown in the drawings. Similar auxiliary displays are preferably included when the instrument (109) is an ultrasound transducer.
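For a plane perpendicular to the instrument axis, the outline of such a conical field of view can be generated as sketched below; intersecting the cone with the actual skin surface would require the segmented surface itself, so the perpendicular plane and the numeric parameters are simplifying assumptions for illustration only.

```python
import numpy as np

def fov_circle(tip, axis, half_angle_deg, depth, n_points=64):
    """Points on the circle where a conical field of view (apex at the
    instrument tip, opening half-angle 'half_angle_deg') crosses a plane
    perpendicular to the instrument axis at distance 'depth'."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    # two unit vectors spanning the plane perpendicular to the axis
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    radius = depth * np.tan(np.radians(half_angle_deg))
    center = np.asarray(tip, dtype=float) + depth * axis
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return center + radius * (np.cos(angles)[:, None] * u + np.sin(angles)[:, None] * v)

print(fov_circle(tip=(0, 0, 0), axis=(0, 0, 1), half_angle_deg=15.0, depth=50.0)[:2])
```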
After an incision (118) has been made in the patient's head, the endoscope may be inserted to provide an internal view of the target anatomy. Referring now to Figure 10, the drawing shows a highly simplified sketch of a three-dimensional image display (1001) obtained by the above system with the endoscope (109) of Figure 1 in the alternative position shown by the dotted lines, pointing toward the target lesion or tumor (117). The edge of the display (1001) is shown by the border (1000). The display (1001) has been manipulated to provide a three-dimensional sectional view with a cutting plane passing through the tip (115) of the endoscope (109) and orthogonal to its axis. Again, the endoscope field of view (905) is indicated in the
display, and in a preferred embodiment auxiliary displays are also presented showing the actual image seen by the endoscope in the field of view (905), and the 3D perspective image for the same region in the field of view (905); these auxiliary displays are also not shown in Figure 10. This Figure preferably also includes the conventional axial (1002), coronal (1003) and sagittal (1004) 2D displays for purposes of further clarification and elucidation.
Figures 11a, 11b, 12 and 13 further illustrate the three-dimensional displays that are produced by a preferred embodiment of the present invention. Referring to Figures 11a, 11b, a plastic model of a skull has been fabricated having a base portion (1102) and a removable top portion (1101). These Figures show the model skull (1101), (1102) resting on a stand (1106). Figure 11a also shows a pointer (1104) with LED's (1105) connected to an OTS (not shown in the drawing) that has been used to generate displays according to the invention. A plurality of holes (1103) in the top portion (1101) are provided, which allow the pointer (1104) to be extended into the interior of the skull. Figure 11b shows the skull with the top portion (1101) removed. A plastic model of internal structures (1107) is fabricated inside the skull; these internal structures are easily recognizable geometric solids, as illustrated in the Figure.
The skull of Figures 11a, 11b has been scanned to generate "pre-op" image data, which has been utilized to produce the displays shown in Figures 12, 13. The edges of the displays in Figures 12 and 13 are shown by the borders (1200), (1300) respectively. Figure 12 is a composite of two displays (1201), (1202) of the skull with the pointer (1104) directed toward the skull from a top center external location, similar to the location and orientation of the pointer shown in Figure 1. The display (1201) is a three-dimensional perspective view from this pointer location.
The display (1202) is the same view, but with the display opacity of the skull material reduced. This reduced opacity makes the internal structure (1107) clearly visible, as shown in the Figure. During actual use, the system enables the surgeon to vary this opacity in real time to adjust the image so that both the skull structure and the internal structure are visible in the display in various proportions.
It will be noted that the surface contour lines shown in the display (1201) are produced by the finite size of the rendering layers or voxels. These contour lines may be reduced by smoothing the data, or by reducing the sizes of the voxels or layers.
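A smoothing pass of the kind mentioned above could be, for example, a separable box filter over the volume data, as sketched below; a production system would more likely use Gaussian smoothing or finer voxel spacing, and the wrap-around boundary handling here is acceptable only for illustration.

```python
import numpy as np

def smooth_volume(volume, passes=1):
    """Reduce voxel-size contouring by box-filter smoothing of the volume.
    Each pass averages every voxel with its two neighbours along each axis
    (boundaries wrap, which is fine for a demonstration)."""
    v = np.asarray(volume, dtype=float)
    for _ in range(passes):
        for axis in range(v.ndim):
            v = (np.roll(v, 1, axis) + v + np.roll(v, -1, axis)) / 3.0
    return v

# A single bright voxel spreads into its neighbourhood after smoothing:
vol = np.zeros((8, 8, 8))
vol[4, 4, 4] = 1.0
print(smooth_volume(vol, passes=2).max())
```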
Figure 13 is a composite of two further displays with the pointer (1104) moved to extend through one of the openings (1103). Display (1302) is the view from the tip of the pointer inside the skull. Display (1301) is a view of the entire structure from outside the skull along the pointer axis; in other words, display (1302) is substantially a magnification of part of display (1301). Display (1301) shows the skull with a portion cut away by a cutting plane through the tip of the pointer, perpendicular to the pointer axis. Both of these displays clearly illustrate the perspective nature of the three-dimensional displays generated by the present invention.
Finally, Figure 14 is a simplified composite of displays generated by the system for an actual human head. Display (1401) is a perspective view of the entire head with a cutaway portion defined by orthogonal cutting planes as shown. The edge of the display in Figure 14 is shown by the border (1400). This display also shows the field of view of an endoscope pointing toward the head along the intersection line of the two cutting planes, with the tip of the endoscope at the apex of the cone. Display (1402) shows the two-dimensional sectional view produced by the vertical cutting plane, and display (1403) shows the corresponding
sectional view produced by the horizontal cutting plane. Furthermore, the images in displays (1402) and (1403) are also transformed (rotated and magnified) and superimposed on the three-dimensional image in display (1401). Both of these displays also indicate the intersection of the cutting planes with the conical field of view. Display (1404) is the actual image seen by the endoscope. Display (1405) is a virtual perspective view of the endoscope image reconstructed from scan data by volume rendering in accordance with the present invention. Display (1406) is a virtual perspective view of the image from the endoscope viewpoint with a narrower field of view, reconstructed from scan data by surface rendering in accordance with the present invention. This display (1406) would be used with a surgical probe in planning a surgical trajectory. Display (1407) is a magnification of (1406) (i.e. with a narrower field of view) showing the virtual image that would be seen through a microscope. Finally, display (1408) is a segmented three-dimensional perspective view of the entire head from the scan data utilizing surface rendering, and display (1409) is the same view with volume rendering. Figure 14 illustrates the rich variety and versatility of the displays that are possible with the present system. All of these displays are presented to the surgeon in real time, simultaneously, and can be varied on line.
It is apparent from the foregoing description that this invention provides improved means for navigating through the anatomy during actual surgical procedures. The system enables the surgeon to select and adjust the display with the same tool that is being utilized to perform the procedure, without requiring extra manual operations. Since the displays are provided immediately in real time, the imaging does not require any interruption of the procedure. In addition, the virtual images provided by this system are
continuously correlated with the images that are obtained through conventional means.
It will be further appreciated by persons of ordinary skill in the art that the invention is not limited in its application to neurosurgery, or any other kind of surgery or medical diagnostic applications. For example, systems embodying the invention can be adapted for nautical or aviation navigation, utilizing information from satellites to obtain the "pre-op" scan data. The pointing device can be implemented by the vessel or aircraft itself, and the video display could be replaced by special imaging goggles or helmets.
The foregoing description of the preferred embodiments of the invention has been presented solely for purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The spirit and scope of the invention are to be defined by reference to the following claims, along with their full scope of equivalents.
Claims
1. A method for generating an image of a three-dimensional object, said method comprising the steps of: acquiring volumetric first scan data for the object; utilizing said first scan data to reconstruct first virtual image data representing structural information in said first scan data; selecting a viewpoint for displaying an image of said object based on said first virtual image data; manipulating said first virtual image data to generate a first three-dimensional perspective image of said object from said viewpoint; and displaying said first three-dimensional perspective image.
2. The method recited in claim 1, wherein the step of utilizing said first scan data to reconstruct first virtual image data representing structural information in said first scan data includes the step of segmenting said first virtual image data to distinguish selected features of said object.
3. The method recited in claim 1, wherein the step of utilizing said first scan data to reconstruct first virtual image data representing structural information in said first scan data includes the step of registration of said first virtual image data in relation to said object to determine the location of features of said object represented in said first virtual image data.
4. The method recited in claim 1, further comprising, following the step of displaying said first three-dimensional perspective image, repeating any desired number of times the steps of: selecting another viewpoint for displaying an image of said object based on said first virtual image data; manipulating said first virtual image data to generate a first three-dimensional perspective image of said object from said other viewpoint; and displaying said first three-dimensional perspective image.
5. The method recited in claim 1, further comprising the steps of: acquiring volumetric second scan data for the object; utilizing said second scan data to reconstruct second virtual image data representing structural information in said second scan data; determining the viewpoint for displaying an image of said object based on said second virtual image data to coincide with said viewpoint selected for displaying an image of said object based on said virtual image data; manipulating said second virtual image data to generate a second three-dimensional perspective image of said object from said viewpoint; and displaying said second three-dimensional perspective image.
6. The method recited in claim 5, further comprising the step of fusing said second three-dimensional perspective image and said first three-dimensional perspective image to display a combined image.
7. The method recited in claim 1, further comprising the steps of: acquiring second scan data for the object; utilizing said second scan data to reconstruct second virtual image data representing structural information in said second scan data; determining the viewpoint for displaying an image of said object based on said second virtual image data to coincide with said viewpoint selected for displaying an image of said object based on said virtual image data; manipulating said second virtual image data to generate a second image of said object from said viewpoint; and displaying said second image.
8. The method recited in claim 7, further comprising the step of fusing said second image and said first three-dimensional perspective image to display a combined image.
9. Apparatus for generating an image of a three-dimensional object, comprising: a computer having a memory; display means communicative with said computer; input means communicative with said computer; pointer means communicative with said computer, said pointer means being movable by the user; and position tracking means communicative with said computer and said pointer means, such that said position tracking means detects the position and orientation of said pointer means continually and communicates said position and orientation to said computer; wherein said computer memory contains volumetric first scan data for the object, and further contains a program which causes said computer to perform the steps of: utilizing said first scan data to reconstruct first virtual image data representing structural information in said first scan data; determining a viewpoint for displaying an image of said object based on said first virtual image data to be the position and orientation of said pointer means detected by said position tracking means; manipulating said first virtual image data to generate a first three-dimensional perspective image of said object from said viewpoint; and displaying said first three-dimensional perspective image.
10. Apparatus as recited in claim 9, wherein the step of utilizing said first scan data to reconstruct first virtual image data representing structural information in said first scan data includes the step of segmenting said first virtual image data to distinguish selected features of said object.
11. Apparatus as recited in claim 9, wherein the step of utilizing said first scan data to reconstruct first virtual image data representing structural information in said first scan data includes the step of registration of said first virtual image data in relation to said object to determine the location of features of said object represented in said first virtual image data.
12. Apparatus as recited in claim 9, wherein said program causes said computer, following the step of displaying said first three-dimensional perspective image, to perform and repeat any desired number of times the further steps of: selecting another viewpoint for displaying an image of said object based on said first virtual image data; manipulating said first virtual image data to generate a first three-dimensional perspective image of said object from said other viewpoint; and displaying said first three-dimensional perspective image.
13. Apparatus as recited in claim 9, wherein said program performs the further steps of: acquiring volumetric second scan data for the object; utilizing said second scan data to reconstruct second virtual image data representing structural information in said second scan data; determining the viewpoint for displaying an image of said object based on said second virtual image data to coincide with said viewpoint selected for displaying an image of said object based on said virtual image data; manipulating said second virtual image data to generate a second three-dimensional perspective image of said object from said viewpoint; and displaying said second three-dimensional perspective image.
14. Apparatus as recited in claim 13, wherein said program performs the further step of fusing said second three-dimensional perspective image and said first three-dimensional perspective image to display a combined image.
15. Apparatus as recited in claim 9, wherein said program performs the further steps of: acquiring second scan data for the object; utilizing said second scan data to reconstruct second virtual image data representing structural information in said second scan data; determining the viewpoint for displaying an image of said object based on said second virtual image data to coincide with said viewpoint selected for displaying an image of said object based on said virtual image data; manipulating said second virtual image data to generate a second image of said object from said viewpoint; and displaying said second image.
16. Apparatus as recited in claim 15, wherein said program performs the further step of fusing said second image and said first three-dimensional perspective image to display a combined image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98931672A EP0999785A4 (en) | 1997-06-27 | 1998-06-26 | Method and apparatus for volumetric image navigation |
JP50581699A JP2002510230A (en) | 1997-06-27 | 1998-06-26 | Stereoscopic image navigation method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US88428997A | 1997-06-27 | 1997-06-27 | |
US08/884,289 | 1997-06-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1999000052A1 true WO1999000052A1 (en) | 1999-01-07 |
Family
ID=25384329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1998/013391 WO1999000052A1 (en) | 1997-06-27 | 1998-06-26 | Method and apparatus for volumetric image navigation |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP0999785A4 (en) |
JP (1) | JP2002510230A (en) |
WO (1) | WO1999000052A1 (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001020552A1 (en) * | 1999-09-16 | 2001-03-22 | Mayo Foundation For Medical Education And Research | Method for rendering medical images in real-time |
JP2001133696A (en) * | 1999-11-02 | 2001-05-18 | Olympus Optical Co Ltd | Microscope device for surgery |
WO2001037748A2 (en) | 1999-11-29 | 2001-05-31 | Cbyon, Inc. | Method and apparatus for transforming view orientations in image-guided surgery |
EP1114621A2 (en) * | 1999-12-02 | 2001-07-11 | Philips Corporate Intellectual Property GmbH | Apparatus for displaying images |
WO2001062173A2 (en) | 2000-02-25 | 2001-08-30 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatuses for maintaining a trajectory in sterotaxi for tracking a target inside a body |
FR2807549A1 (en) * | 2000-04-06 | 2001-10-12 | Ge Med Sys Global Tech Co Llc | Method and equipment for processing scanner and radiological images, comprises development of three dimensional image from scanner and production of sections along a proposed direction |
JP2001293006A (en) * | 2000-04-11 | 2001-10-23 | Olympus Optical Co Ltd | Surgical navigation apparatus |
EP1154380A1 (en) * | 2000-05-11 | 2001-11-14 | MTT Medical Technology Transfer AG | A method of simulating a fly through voxel volumes |
WO2002019936A2 (en) | 2000-09-07 | 2002-03-14 | Cbyon, Inc. | Virtual fluoroscopic system and method |
WO2002024051A2 (en) | 2000-09-23 | 2002-03-28 | The Board Of Trustees Of The Leland Stanford Junior University | Endoscopic targeting method and system |
JP2002119507A (en) * | 2000-10-17 | 2002-04-23 | Toshiba Corp | Medical device and medical image collecting and displaying method |
WO2002043009A2 (en) * | 2000-10-23 | 2002-05-30 | Siemens Corporate Research, Inc. | Volume-rendering, generation and display of cut-views |
JP2002263053A (en) * | 2001-03-06 | 2002-09-17 | Olympus Optical Co Ltd | Medical image display device and method |
JP2002541949A (en) * | 1999-04-20 | 2002-12-10 | ジンテーズ アクチエンゲゼルシャフト クール | Apparatus for percutaneous acquisition of 3D coordinates on the surface of a human or animal organ |
JP2003079637A (en) * | 2001-09-13 | 2003-03-18 | Hitachi Medical Corp | Operation navigating system |
EP1391181A1 (en) * | 2002-08-19 | 2004-02-25 | Surgical Navigation Technologies, Inc. | Apparatus for virtual endoscopy |
DE10252837A1 (en) * | 2002-11-13 | 2004-06-03 | Carl Zeiss | Medical tissue examination system, e.g. for use by a surgeon in identifying a tumor and its extent, wherein measurements from a tissue qualification system are combined with microscope images to mark a tissue area |
JP2004533863A (en) * | 2001-02-13 | 2004-11-11 | メディガイド リミテッド | Medical imaging and navigation system |
WO2005020148A1 (en) * | 2003-08-21 | 2005-03-03 | Philips Intellectual Property & Standards Gmbh | Device and method for combined display of angiograms and current x-ray images |
US6907281B2 (en) | 2000-09-07 | 2005-06-14 | Ge Medical Systems | Fast mapping of volumetric density data onto a two-dimensional screen |
WO2006067676A2 (en) * | 2004-12-20 | 2006-06-29 | Koninklijke Philips Electronics N.V. | Visualization of a tracked interventional device |
AU784936B2 (en) * | 1999-12-10 | 2006-08-03 | Iscience Corporation | Treatment of ocular disease |
DE102005051405A1 (en) * | 2005-10-27 | 2007-05-03 | Forschungszentrum Rossendorf E.V. | measuring sensor |
US7697973B2 (en) | 1999-05-18 | 2010-04-13 | MediGuide, Ltd. | Medical imaging and navigation system |
US8401620B2 (en) | 2006-10-16 | 2013-03-19 | Perfint Healthcare Private Limited | Needle positioning apparatus and method |
WO2013061225A1 (en) * | 2011-10-26 | 2013-05-02 | Koninklijke Philips Electronics N.V. | Endoscopic registration of vessel tree images |
US8613748B2 (en) | 2010-11-10 | 2013-12-24 | Perfint Healthcare Private Limited | Apparatus and method for stabilizing a needle |
US8886288B2 (en) | 2009-06-16 | 2014-11-11 | MRI Interventions, Inc. | MRI-guided devices and MRI-guided interventional systems that can track and generate dynamic visualizations of the devices in near real time |
US8989842B2 (en) | 2007-05-16 | 2015-03-24 | General Electric Company | System and method to register a tracking system with intracardiac echocardiography (ICE) imaging system |
US9259290B2 (en) | 2009-06-08 | 2016-02-16 | MRI Interventions, Inc. | MRI-guided surgical systems with proximity alerts |
CN105979879A (en) * | 2014-01-24 | 2016-09-28 | 皇家飞利浦有限公司 | Virtual image with optical shape sensing device perspective |
US9572519B2 (en) | 1999-05-18 | 2017-02-21 | Mediguide Ltd. | Method and apparatus for invasive device tracking using organ timing signal generated from MPS sensors |
EP3385039A1 (en) * | 2009-03-31 | 2018-10-10 | Intuitive Surgical Operations Inc. | Synthetic representation of a surgical robot |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6741883B2 (en) * | 2002-02-28 | 2004-05-25 | Houston Stereotactic Concepts, Inc. | Audible feedback from positional guidance systems |
US7477763B2 (en) * | 2002-06-18 | 2009-01-13 | Boston Scientific Scimed, Inc. | Computer generated representation of the imaging pattern of an imaging device |
JP4533638B2 (en) * | 2004-01-30 | 2010-09-01 | オリンパス株式会社 | Virtual image display system |
JP5525727B2 (en) * | 2005-05-23 | 2014-06-18 | ザ ペン ステイト リサーチ ファンデーション | 3D-CT registration with guidance method based on 3D-2D pose estimation and application to raw bronchoscopy |
US7787699B2 (en) * | 2005-08-17 | 2010-08-31 | General Electric Company | Real-time integration and recording of surgical image data |
JP4829673B2 (en) * | 2006-05-10 | 2011-12-07 | 川澄化学工業株式会社 | Head model |
US7925066B2 (en) * | 2006-09-13 | 2011-04-12 | Nexstim Oy | Method and apparatus for correcting an error in the co-registration of coordinate systems used to represent objects displayed during navigated brain stimulation |
US8052598B2 (en) * | 2006-10-12 | 2011-11-08 | General Electric Company | Systems and methods for calibrating an endoscope |
JP5273945B2 (en) * | 2007-05-17 | 2013-08-28 | 株式会社日立メディコ | Reference image display method and ultrasonic apparatus in ultrasonic therapy and the like |
JP2009273521A (en) * | 2008-05-12 | 2009-11-26 | Niigata Univ | Navigation system for arthroscopical surgery |
KR101089116B1 (en) * | 2008-10-21 | 2011-12-02 | 주식회사 휴먼스캔 | Patient Position Monitoring Device |
JP5551960B2 (en) * | 2009-09-30 | 2014-07-16 | 富士フイルム株式会社 | Diagnosis support system, diagnosis support program, and diagnosis support method |
KR101759534B1 (en) | 2009-10-30 | 2017-07-19 | 더 존스 홉킨스 유니버시티 | Visual tracking and annotation of clinically important anatomical landmarks for surgical interventions |
JP4728456B1 (en) * | 2010-02-22 | 2011-07-20 | オリンパスメディカルシステムズ株式会社 | Medical equipment |
JP5421828B2 (en) * | 2010-03-17 | 2014-02-19 | 富士フイルム株式会社 | Endoscope observation support system, endoscope observation support device, operation method thereof, and program |
JP6054089B2 (en) * | 2011-08-19 | 2016-12-27 | 東芝メディカルシステムズ株式会社 | Ultrasonic diagnostic apparatus, medical image processing apparatus, and medical image processing program |
US9947091B2 (en) * | 2015-11-16 | 2018-04-17 | Biosense Webster (Israel) Ltd. | Locally applied transparency for a CT image |
WO2019045144A1 (en) | 2017-08-31 | 2019-03-07 | (주)레벨소프트 | Medical image processing apparatus and medical image processing method which are for medical navigation device |
KR102084251B1 (en) * | 2017-08-31 | 2020-03-03 | (주)레벨소프트 | Medical Image Processing Apparatus and Medical Image Processing Method for Surgical Navigator |
EP3861530A1 (en) * | 2018-10-04 | 2021-08-11 | Intuitive Surgical Operations, Inc. | Graphical user interface for defining an anatomical boundary |
JP7494196B2 (en) * | 2019-02-12 | 2024-06-03 | インテュイティブ サージカル オペレーションズ, インコーポレイテッド | SYSTEM AND METHOD FOR FACILITATING OPTIMIZATION OF IMAGING DEVICE VIEWPOINT DURING A SURGERY SESSION OF A COMPUTER-ASSISTED SURGERY SYSTEM - Patent application |
KR102208577B1 (en) * | 2020-02-26 | 2021-01-27 | (주)레벨소프트 | Medical Image Processing Apparatus and Medical Image Processing Method for Surgical Navigator |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5622170A (en) * | 1990-10-19 | 1997-04-22 | Image Guided Technologies, Inc. | Apparatus for determining the position and orientation of an invasive portion of a probe inside a three-dimensional body |
US5800352A (en) * | 1994-09-15 | 1998-09-01 | Visualization Technology, Inc. | Registration system for use with position tracking and imaging system for use in medical applications |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4722056A (en) * | 1986-02-18 | 1988-01-26 | Trustees Of Dartmouth College | Reference display systems for superimposing a tomagraphic image onto the focal plane of an operating microscope |
CA2260688A1 (en) * | 1989-11-21 | 1991-05-21 | I.S.G. Technologies, Inc. | Probe-correlated viewing of anatomical image data |
US5261404A (en) * | 1991-07-08 | 1993-11-16 | Mick Peter R | Three-dimensional mammal anatomy imaging system and method |
US5776050A (en) * | 1995-07-24 | 1998-07-07 | Medical Media Systems | Anatomical visualization system |
-
1998
- 1998-06-26 JP JP50581699A patent/JP2002510230A/en not_active Ceased
- 1998-06-26 EP EP98931672A patent/EP0999785A4/en not_active Withdrawn
- 1998-06-26 WO PCT/US1998/013391 patent/WO1999000052A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5622170A (en) * | 1990-10-19 | 1997-04-22 | Image Guided Technologies, Inc. | Apparatus for determining the position and orientation of an invasive portion of a probe inside a three-dimensional body |
US5800352A (en) * | 1994-09-15 | 1998-09-01 | Visualization Technology, Inc. | Registration system for use with position tracking and imaging system for use in medical applications |
Non-Patent Citations (1)
Title |
---|
See also references of EP0999785A4 * |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6556695B1 (en) | 1999-02-05 | 2003-04-29 | Mayo Foundation For Medical Education And Research | Method for producing high resolution real-time images, of structure and function during medical procedures |
JP2002541949A (en) * | 1999-04-20 | 2002-12-10 | ジンテーズ アクチエンゲゼルシャフト クール | Apparatus for percutaneous acquisition of 3D coordinates on the surface of a human or animal organ |
JP4636696B2 (en) * | 1999-04-20 | 2011-02-23 | アーオー テクノロジー アクチエンゲゼルシャフト | Device for percutaneous acquisition of 3D coordinates on the surface of a human or animal organ |
US10251712B2 (en) | 1999-05-18 | 2019-04-09 | Mediguide Ltd. | Method and apparatus for invasive device tracking using organ timing signal generated from MPS sensors |
US9956049B2 (en) | 1999-05-18 | 2018-05-01 | Mediguide Ltd. | Method and apparatus for invasive device tracking using organ timing signal generated from MPS sensors |
US9572519B2 (en) | 1999-05-18 | 2017-02-21 | Mediguide Ltd. | Method and apparatus for invasive device tracking using organ timing signal generated from MPS sensors |
US7697973B2 (en) | 1999-05-18 | 2010-04-13 | MediGuide, Ltd. | Medical imaging and navigation system |
WO2001020552A1 (en) * | 1999-09-16 | 2001-03-22 | Mayo Foundation For Medical Education And Research | Method for rendering medical images in real-time |
JP2001133696A (en) * | 1999-11-02 | 2001-05-18 | Olympus Optical Co Ltd | Microscope device for surgery |
JP4633210B2 (en) * | 1999-11-02 | 2011-02-16 | オリンパス株式会社 | Surgical microscope equipment |
WO2001037748A3 (en) * | 1999-11-29 | 2002-02-14 | Cbyon Inc | Method and apparatus for transforming view orientations in image-guided surgery |
WO2001037748A2 (en) | 1999-11-29 | 2001-05-31 | Cbyon, Inc. | Method and apparatus for transforming view orientations in image-guided surgery |
EP1114621A3 (en) * | 1999-12-02 | 2002-02-13 | Philips Corporate Intellectual Property GmbH | Apparatus for displaying images |
JP2001190529A (en) * | 1999-12-02 | 2001-07-17 | Koninkl Philips Electronics Nv | Device for reproducing slice image |
EP1114621A2 (en) * | 1999-12-02 | 2001-07-11 | Philips Corporate Intellectual Property GmbH | Apparatus for displaying images |
AU784936B2 (en) * | 1999-12-10 | 2006-08-03 | Iscience Corporation | Treatment of ocular disease |
WO2001062173A3 (en) * | 2000-02-25 | 2002-04-11 | Univ Leland Stanford Junior | Methods and apparatuses for maintaining a trajectory in sterotaxi for tracking a target inside a body |
WO2001062173A2 (en) | 2000-02-25 | 2001-08-30 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatuses for maintaining a trajectory in sterotaxi for tracking a target inside a body |
FR2807549A1 (en) * | 2000-04-06 | 2001-10-12 | Ge Med Sys Global Tech Co Llc | Method and equipment for processing scanner and radiological images, comprises development of three dimensional image from scanner and production of sections along a proposed direction |
JP2001293006A (en) * | 2000-04-11 | 2001-10-23 | Olympus Optical Co Ltd | Surgical navigation apparatus |
EP1154380A1 (en) * | 2000-05-11 | 2001-11-14 | MTT Medical Technology Transfer AG | A method of simulating a fly through voxel volumes |
WO2002019936A3 (en) * | 2000-09-07 | 2002-08-01 | Cbyon Inc | Virtual fluoroscopic system and method |
WO2002019936A2 (en) | 2000-09-07 | 2002-03-14 | Cbyon, Inc. | Virtual fluoroscopic system and method |
US6907281B2 (en) | 2000-09-07 | 2005-06-14 | Ge Medical Systems | Fast mapping of volumetric density data onto a two-dimensional screen |
WO2002024051A2 (en) | 2000-09-23 | 2002-03-28 | The Board Of Trustees Of The Leland Stanford Junior University | Endoscopic targeting method and system |
JP2002119507A (en) * | 2000-10-17 | 2002-04-23 | Toshiba Corp | Medical device and medical image collecting and displaying method |
WO2002043009A3 (en) * | 2000-10-23 | 2002-08-29 | Siemens Corp Res Inc | Volume-rendering, generation and display of cut-views |
WO2002043009A2 (en) * | 2000-10-23 | 2002-05-30 | Siemens Corporate Research, Inc. | Volume-rendering, generation and display of cut-views |
US6573891B1 (en) | 2000-10-23 | 2003-06-03 | Siemens Corporate Research, Inc. | Method for accelerating the generation and display of volume-rendered cut-away-views of three-dimensional images |
JP2004533863A (en) * | 2001-02-13 | 2004-11-11 | メディガイド リミテッド | Medical imaging and navigation system |
JP2002263053A (en) * | 2001-03-06 | 2002-09-17 | Olympus Optical Co Ltd | Medical image display device and method |
JP2003079637A (en) * | 2001-09-13 | 2003-03-18 | Hitachi Medical Corp | Operation navigating system |
EP1391181A1 (en) * | 2002-08-19 | 2004-02-25 | Surgical Navigation Technologies, Inc. | Apparatus for virtual endoscopy |
EP1913893A3 (en) * | 2002-08-19 | 2008-07-23 | Surgical Navigation Technologies, Inc. | Apparatus for virtual endoscopy |
DE10252837B4 (en) * | 2002-11-13 | 2005-03-24 | Carl Zeiss | Examination system and examination procedure |
DE10252837A1 (en) * | 2002-11-13 | 2004-06-03 | Carl Zeiss | Medical tissue examination system, e.g. for use by a surgeon in identifying a tumor and its extent, wherein measurements from a tissue qualification system are combined with microscope images to mark a tissue area |
US7477764B2 (en) | 2002-11-13 | 2009-01-13 | Carl Zeiss Surgical Gmbh | Examination system and examination method |
WO2005020148A1 (en) * | 2003-08-21 | 2005-03-03 | Philips Intellectual Property & Standards Gmbh | Device and method for combined display of angiograms and current x-ray images |
WO2006067676A2 (en) * | 2004-12-20 | 2006-06-29 | Koninklijke Philips Electronics N.V. | Visualization of a tracked interventional device |
WO2006067676A3 (en) * | 2004-12-20 | 2007-04-05 | Koninkl Philips Electronics Nv | Visualization of a tracked interventional device |
DE102005051405B4 (en) * | 2005-10-27 | 2007-08-23 | Forschungszentrum Dresden - Rossendorf E.V. | measuring sensor |
DE102005051405A1 (en) * | 2005-10-27 | 2007-05-03 | Forschungszentrum Rossendorf E.V. | measuring sensor |
US8401620B2 (en) | 2006-10-16 | 2013-03-19 | Perfint Healthcare Private Limited | Needle positioning apparatus and method |
US8774901B2 (en) | 2006-10-16 | 2014-07-08 | Perfint Healthcare Private Limited | Needle positioning apparatus and method |
US8989842B2 (en) | 2007-05-16 | 2015-03-24 | General Electric Company | System and method to register a tracking system with intracardiac echocardiography (ICE) imaging system |
EP3613547A1 (en) * | 2009-03-31 | 2020-02-26 | Intuitive Surgical Operations Inc. | Synthetic representation of a surgical robot |
EP3385039A1 (en) * | 2009-03-31 | 2018-10-10 | Intuitive Surgical Operations Inc. | Synthetic representation of a surgical robot |
US9259290B2 (en) | 2009-06-08 | 2016-02-16 | MRI Interventions, Inc. | MRI-guided surgical systems with proximity alerts |
US9439735B2 (en) | 2009-06-08 | 2016-09-13 | MRI Interventions, Inc. | MRI-guided interventional systems that can track and generate dynamic visualizations of flexible intrabody devices in near real time |
US8886288B2 (en) | 2009-06-16 | 2014-11-11 | MRI Interventions, Inc. | MRI-guided devices and MRI-guided interventional systems that can track and generate dynamic visualizations of the devices in near real time |
US8613748B2 (en) | 2010-11-10 | 2013-12-24 | Perfint Healthcare Private Limited | Apparatus and method for stabilizing a needle |
US10453174B2 (en) | 2011-10-26 | 2019-10-22 | Koninklijke Philips N.V. | Endoscopic registration of vessel tree images |
WO2013061225A1 (en) * | 2011-10-26 | 2013-05-02 | Koninklijke Philips Electronics N.V. | Endoscopic registration of vessel tree images |
CN105979879A (en) * | 2014-01-24 | 2016-09-28 | 皇家飞利浦有限公司 | Virtual image with optical shape sensing device perspective |
CN105979879B (en) * | 2014-01-24 | 2023-01-17 | 皇家飞利浦有限公司 | Virtual images with optical shape sensing device perspective |
Also Published As
Publication number | Publication date |
---|---|
EP0999785A1 (en) | 2000-05-17 |
JP2002510230A (en) | 2002-04-02 |
EP0999785A4 (en) | 2007-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6591130B2 (en) | Method of image-enhanced endoscopy at a patient site | |
WO1999000052A1 (en) | Method and apparatus for volumetric image navigation | |
US11464575B2 (en) | Systems, methods, apparatuses, and computer-readable media for image guided surgery | |
US11464578B2 (en) | Systems, methods, apparatuses, and computer-readable media for image management in image-guided medical procedures | |
US10398513B2 (en) | Systems, methods, apparatuses, and computer-readable media for image management in image-guided medical procedures | |
US6019724A (en) | Method for ultrasound guidance during clinical procedures | |
US6850794B2 (en) | Endoscopic targeting method and system | |
EP0908146B1 (en) | Real-time image-guided placement of anchor devices | |
EP1103229B1 (en) | System and method for use with imaging devices to facilitate planning of interventional procedures | |
US20070225553A1 (en) | Systems and Methods for Intraoperative Targeting | |
US20080243142A1 (en) | Videotactic and audiotactic assisted surgical methods and procedures | |
WO1996025881A1 (en) | Method for ultrasound guidance during clinical procedures | |
WO1998038908A1 (en) | Imaging device and method | |
CN109833092A (en) | Internal navigation system and method | |
Adams et al. | An optical navigator for brain surgery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1998931672 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1998931672 Country of ref document: EP |