EP1011424A1 - Imaging device and method - Google Patents

Imaging device and method

Info

Publication number
EP1011424A1
Authority
EP
European Patent Office
Prior art keywords
image
lead
follow
dimensional
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP98907726A
Other languages
German (de)
English (en)
Inventor
M. Bret Schneider
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Schneider Medical Tech Inc
Original Assignee
Schneider Medical Tech Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Schneider Medical Tech Inc filed Critical Schneider Medical Tech Inc
Publication of EP1011424A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/34 Trocars; Puncturing needles
    • A61B2017/348 Means for supporting the trocar against the body or retaining the trocar inside the body
    • A61B2017/3482 Means for supporting the trocar against the body or retaining the trocar inside the body inside
    • A61B2017/3484 Anchoring means, e.g. spreading-out umbrella-like structure
    • A61B2017/3488 Fixation to inner organ or inner body tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2055 Optical tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/363 Use of fiducial points
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37 Surgical systems with images on a monitor during operation
    • A61B2090/378 Surgical systems with images on a monitor during operation using ultrasound
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3904 Markers, e.g. radio-opaque or breast lesions markers specially adapted for marking specified tissue
    • A61B2090/3908 Soft tissue, e.g. breast tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3904 Markers, e.g. radio-opaque or breast lesions markers specially adapted for marking specified tissue
    • A61B2090/3916 Bone tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3983 Reference marker arrangements for use with image guided surgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3991 Markers, e.g. radio-opaque or breast lesions markers having specific anchoring means to fixate the marker to the tissue, e.g. hooks
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/50 Supports for surgical instruments, e.g. articulated arms
    • A61B2090/502 Headgear, e.g. helmet, spectacles

Definitions

  • This invention relates generally to imaging devices and methods and, in particular, to medical imaging devices and methods.
  • A non-surgical or pre-surgical medical evaluation of a patient frequently requires the difficult task of evaluating imaging from several different modalities along with a physical examination. This requires mental integration of numerous data sets from the separate imaging modalities, which are seen only at separate times by the physician.
  • Image-guided surgical systems currently available are vulnerable to line-of-sight obstruction and consequent registration failure. Additionally, the arbitrary orientation of displayed images contributes to confusion and consequent morbidity.
  • A number of imaging techniques are commonly used today to gather two-, three- and four-dimensional data.
  • These techniques include ultrasound, computerized X-ray tomography (CT), magnetic resonance imaging (MRI), electric potential tomography (EPT), positron emission tomography (PET), brain electrical activity mapping (BEAM), magnetic resonance angiography (MRA), single photon emission computed tomography (SPECT), magnetoelectroencephalography (MEG), arterial contrast injection angiography, digital subtraction angiography and fluoroscopy.
  • Each technique has attributes that make it more or less useful for creating certain kinds of images, for imaging a particular part of the patient's body, and for demonstrating certain types of features or processes.
  • Ultrasound images may be generated in real time using a relatively small probe. The image generated, however, lacks the accuracy and three-dimensional detail provided by other imaging techniques.
  • The timing of the signals received by each microphone provides probe position information to the computer.
  • Information regarding probe position for each marker registers the probe with the MRI and/or CT image in the computer's memory.
  • The probe can thereafter be inserted into the patient's brain, and sonic signals from the probe allow its position to be tracked as it moves within the patient's brain.
  • The surgeon can use information of the probe's position to place other medical instruments at desired locations in the patient's brain. Since the probe is spatially located with respect to the operating table, one requirement of this system is that the patient's head be kept in the same position with respect to the operating table as well. Movement of the patient's head would require a recalibration of the sonic probe with the markers.
  • The system does not function at an interactive rate, and hence, the system cannot transform images to reflect the changing point of view of an individual working on the patient. Because the system is dependent upon cumbersome equipment such as laser rangefinders which measure distance to a target, it cannot perform three-dimensional image transformations guided by ordinary intensity images.
  • Another known system uses a position-sensing articulated arm integrated with a three-dimensional image processing system, such as a CT scan device, to provide three-dimensional information about a patient's skull and brain.
  • Metallic markers are placed on the patient's scalp prior to the CT scan.
  • A computer develops a three-dimensional image of the patient's skull (including the markers) by taking a series of "slices" or planar images at progressive locations, as is common for CT imaging, then interpolating between the slices to build the three-dimensional image.
  • The articulated arm can be calibrated by correlating the marker locations with the spatial position of the arm.
  • The system includes an "intramicroscope" through which computer-generated slices of a three-dimensionally reconstructed tumor correlated in location and scale to the surgical trajectory can be seen together with the intramicroscope's magnified view of underlying tissue. Registration of the images is not accomplished by image analysis, however. Furthermore, there is no mention of any means by which a surgeon's instantaneous point of view is followed by appropriate changes in the tomographic display. This method is also dependent upon a stereotactic frame, and any movement of the patient's head would presumably disable the method.
  • Suetens, P., et al. (in Kelly, P.J., et al. "Computers in Stereotactic Neurosurgery,” pp. 252-253 (Blackwell Scientific Publications 1992)) describe the use of a head mounted display with magnetic head trackers that changes the view of a computerized image of a brain with respect to the user's head movements.
  • The system does not, however, provide any means by which information acquired in real time during a surgical procedure can be correlated with previously acquired imaging data.
  • Roberts, D.W., et al., "Computer Image Display During Frameless Stereotactic Surgery,” (in Kelly P.J., et al. "Computers in Stereotactic Neurosurgery,” pp. 313-319 (Blackwell Scientific Publications 1992)) describe a system that registers pre-procedure images from CT, MRI and angiographic sources to the actual location of the patient in an operating room through the use of an ultrasonic rangefinder, an array of ultrasonic microphones positioned over the patient, and a plurality of fiducial markers attached to the patient. Ultrasonic "spark gaps" are attached to a surgical microscope so that the position of the surgical microscope with respect to the patient can be determined. Stored MRI, CT and/or angiographic images corresponding to the microscope's focal plane may be displayed.
  • It would be desirable to have an imaging system capable of displaying single modality or multimodality imaging data, in multiple dimensions, in its proper size, rotation, orientation, and position, registered to the instantaneous point of view of a physician examining a patient or performing a procedure on a patient. Furthermore, it would be desirable to do so without the expense, discomfort, and burden of affixing a stereotactic frame to the patient in order to accomplish these goals. Still further, it would be desirable to reduce the vulnerability of the registration process to line-of-sight obstructions. It would also be desirable to utilize such technology for non-medical procedures such as the repair of a device contained within a sealed chassis.
  • This invention provides methods and apparatuses for obtaining and displaying in real time an image of an object obtained by one modality such that the image corresponds to a line of view established by another modality.
  • In one aspect, the method comprises the following steps: obtaining a follow image library of the object via a first imaging modality; providing a lead image library obtained via a second imaging modality; referencing the lead image library to the follow image library; obtaining a lead image of the object in real time via the second imaging modality along a lead view; comparing the real time lead image to lead images in the lead image library via digital image analysis to identify a follow image line of view corresponding to the lead view; transforming the identified follow image to correspond to the scale, rotation and position of the lead image; and displaying the transformed follow image, the comparing, transforming and displaying steps being performed substantially simultaneously with the step of obtaining the lead image in real time.
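  • The sequence of steps recited above can be summarized in a short sketch, reproduced below for illustration only; all names (capture_lead_image, match_view, and so on) are hypothetical placeholders and do not come from the patent.

```python
# Illustrative outline only: the helpers passed in (capture, matching,
# transformation, slicing, display) stand in for the modality-specific
# components described in the text; none of these names are from the patent.
def realtime_imaging_loop(capture_lead_image, lead_library, follow_volume,
                          match_view, transform, slice_at, display, depth_control):
    """Repeatedly: grab a real-time lead image, identify the matching line of
    view in the lead library, transform the follow image to that view, slice
    it at the requested depth, and display the result."""
    while True:
        lead = capture_lead_image()                 # real-time lead image
        view = match_view(lead, lead_library)       # digital image analysis
        follow = transform(follow_volume, view)     # scale, rotate, position
        display(slice_at(follow, depth_control()))  # show slice at chosen depth
```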
  • In another aspect, the invention provides a method for displaying an image slice of an object comprising the steps of: obtaining a three-dimensional follow image of the object; obtaining a real time lead image of the object; transforming the three-dimensional follow image to correspond to the lead image; automatically determining a desired depth within the object for observation; generating an image of the object from the transformed three-dimensional follow image at the desired depth; and displaying the generated image of the object.
  • The generated image of the object may be a two-dimensional image slice.
  • In a further aspect, the invention provides a method for displaying an image of an object comprising the steps of: obtaining three-dimensional follow image data of the object; obtaining a real time lead image of the object utilizing a stereo camera; transforming the three-dimensional follow image to correspond to the lead image; and displaying at least a portion of the transformed three-dimensional follow image of the object.
  • Fig. 1 is a block diagram showing a preferred embodiment of the imaging device of this invention.
  • Fig. 2 is a flow chart illustrating a preferred embodiment of the method of this invention.
  • Fig. 3 is a flow chart illustrating an alternative embodiment of the method of this invention.
  • Fig. 4 shows an embodiment of the invention.
  • Figs. 5A and 5B show an alternative embodiment of a head mounted display/head mounted camera apparatus.
  • Fig. 6 shows several methods for determining in real time the depth of slice that may be extracted from a follow image prior to display.
  • Fig. 7 shows the use of multiple fiducials implanted upon a mobile body part.
  • Fig. 8 shows a fiducial gun which may be utilized to rapidly and efficiently implant or attach fiducial markers.
  • Fig. 9 shows an endoscope including dual localizer cameras for acquiring lead images.
  • Fig. 10 shows a fiducial marker with an elongated staff that penetrates surgical drapes.
  • Fig. 11 shows a flowchart of an embodiment in which the acquisition of a follow image is controlled in real time.
  • Fig. 12 illustrates fiducials for positioning a catheter for angioplasty of a blood vessel.
  • "Image" means the data that represents the spatial layout of anatomical or functional features of a patient, which may or may not be actually represented in visible, graphical form.
  • Image data sitting in a computer memory, as well as an image appearing on a computer screen, will be referred to as an image or images.
  • Non-limiting examples of images include an MRI image, an angiography image, and the like.
  • In the case of a series of images, an "image" refers to one particular "frame" in the series that is appropriate for processing at that time.
  • The "image" may also refer to an appropriately re-sliced image of a three-dimensional image reconstruction, rather than one of the originally acquired two-dimensional files from which the reconstructions may have been obtained.
  • "Image" is also used to mean any portion of an image that has been selected, such as a fiducial marker, subobject, or knowledge representation.
  • "Image" is also intended to encompass any spatially registered data, including the receipt of infrared signals in a constant or time-encoded fashion by a CCD matrix. Stereo image pairs of the same object are herein referred to in the singular as "image."
  • Imaging modality means the method or mechanism by which an image is obtained, e.g., MRI, CT, video, ultrasound, etc.
  • Lead View means the line of view toward the object at any given time. Typically the lead view is the line of view through which the physician, at any given time, wishes to view the procedure. In the case where a see-through head-mounted display and head-mounted camera are utilized, this should be the instantaneous line of view of the physician.
  • Lead image is an image obtained through the same modality as the lead view.
  • For example, if the lead view is the physician's view of the surface of the patient, the lead image could be a corresponding video image of the surface of the patient.
  • Lead images may also be obtained by any real time intraoperative modality, including fluoroscopy, endoscopy, microscopy, ultrasound, or infrared optical localizers. Lead images may be stereo image pairs.
  • A follow image is an image which is to be transformed and possibly sliced to the specifications of the lead view and slice depth control.
  • A properly sliced and transformed follow image will usually be in a plane parallel with that of the lead image and, consequently, orthogonal to the lead view, although other slice contours could be used.
  • A properly transformed follow image will be at the same angle of view as the lead image, but at a depth to be separately determined.
  • Composite image is the image that results from the combination of properly registered lead and follow images from two or more sources, each source representing a different modality.
  • Fiducial marker means a feature, set of features, image structure, or subobject present in lead or follow images that can be used for image analysis, matching, coordinate interreferencing or registration of the images and creation of a composite image.
  • Feature extraction means a method of identification of image components which are important to the image analysis being conducted. These may include boundaries, angles, area, center of mass, central moments, circularity, rectangularity and regional gray-scale intensities in the image being analyzed.
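  • As an illustration only, a few of the listed features (area, center of mass, and a circularity measure) can be computed for a binary region as in the following NumPy sketch; this is a generic example, not the patent's implementation.

```python
import numpy as np

def region_features(mask):
    """Compute simple features of a binary region (mask: 2-D boolean array,
    True = object pixels). Illustrative only."""
    ys, xs = np.nonzero(mask)
    area = xs.size                                   # number of object pixels
    center_of_mass = (ys.mean(), xs.mean())          # (row, col) centroid
    # Perimeter estimate: object pixels that have at least one background
    # 4-neighbour are treated as boundary pixels.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = np.count_nonzero(mask & ~interior)
    # 4*pi*A/P^2 is roughly 1 for a compact, disc-like region.
    circularity = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return area, center_of_mass, circularity
```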
  • Segmentation is the method of dividing an image into areas which have some physical significance in terms of the original scene that the image attempts to portray. For example, segmentation may include the demarcation of a distinct anatomical structure, such as an external auditory meatus, although it may not be actually identified as such until classification.
  • Feature extraction is one method by which an image can be segmented.
  • Classification means a step in the imaging method of the invention in which an object is identified as being of a certain type, based on its features.
  • For example, a certain segmented object in an image might be identified by a computer as being an external auditory meatus if it falls within predetermined criteria for size, shape, pixel density, and location relative to other segmented objects.
  • In this invention, classification is extended to include the angle, or Cartesian location, from which the object is viewed (the "line of view"), for example, an external auditory meatus viewed from 30° North and 2° West of a designated origin.
  • A wide variety of classification techniques are known, including statistical techniques (see, e.g., Davies, E.R., "Machine Vision: Theory, Algorithms, Practicalities").
  • Transformation means processing an image such that it is translated (moved in a translational fashion), rotated (in two or three dimensions), scaled, sheared, warped, placed in perspective or otherwise altered according to specified criteria. See Burger, P., "Interactive Computer Graphics," pp. 173-186 (Addison-Wesley 1989).
  • Registration means the alignment process by which two images of like or corresponding geometries and of the same set of objects are positioned coincident with each other so that corresponding points of the imaged scene appear in the same position on the registered images. Description of Preferred Embodiments. For convenience, preferred embodiments of the invention are discussed in the context of medical applications, such as brain surgery or other invasive surgeries.
  • The invention is also applicable to other uses, including but not limited to medical examinations, analysis of ancient and often fragile artifacts, airplane luggage, chemical compositions (in the case of nuclear magnetic resonance spectral analysis), the repair of closed pieces of machinery through small access ways, and the like.
  • FIG. 1 is a block diagram of an imaging system 2 for displaying an image of an object 10 according to a preferred embodiment of this invention.
  • A lead library 12 and a follow library 14 of images of the object 10, obtained by two different modalities, communicate with a processing means 16.
  • The imaging modality of either library could be a CT scan, an MRI scan, a sonogram, an angiogram, video or any other imaging technique known in the art.
  • Each library contains image data relating to the object.
  • At least one of the imaging devices is a device that can view and construct an image of the interior of object 10.
  • The images are stored within the libraries in an organized and retrievable manner.
  • The libraries may be any suitable means of storing retrievable image data, such as, for example, electronic memory (RAM, ROM, etc.), magnetic memory (magnetic disks or tape), or optical memory (CD-ROM, WORM, etc.).
  • The processing means 16 interreferences corresponding images in image libraries 12 and 14 to provide a map or table relating images or data in one library to images or data in the other. A preferred interreferencing method is described in detail below.
  • Processing means 16 may be a stand-alone computer such as an SGI Onyx symmetric multiprocessing system workstation with the SGI RealityEngine graphics subsystem (available from Silicon Graphics, Inc.) and suitable software. Additionally, processing means 16 may be an image processor specially designed for this particular application.
  • A lead imager 18 is provided to obtain an image of object 10 along a chosen perspective or line of view. For example, if object 10 is a patient in an operating room, lead imager 18 may be a video camera that obtains video images of the patient along the line of sight of the attending physician, such as a head-mounted video camera. Preferably, the lead imager is a camera or camera array mounted on the head of the user along his or her line of eyesight.
  • Lead imager 18 sends its lead image to processing means 16, which interreferences the lead image with a corresponding follow image from follow image library 14 and transforms that image to correspond to the lead image.
  • The depth at which the follow image is sliced may be controlled by a depth control 24 (such as a mouse, joystick, knob, or other means) used to identify the depth at which the follow image slice should be taken.
  • The follow image (or, alternatively, a composite image combining the lead image from lead imager 18 and the corresponding transformed follow image from library 14) may be displayed on display 20.
  • Display 20 may be part of processing means 16 or it may be an independent display.
  • Object 10 has at least one fiducial marker 22.
  • The fiducial marker is either an inherent feature of object 10 (such as a particular bone structure within a patient's body) or a natural or artificial subobject attached to or otherwise associated with object 10.
  • The system and method of this invention use one or more fiducial markers to interreference the lead and follow images or to interreference lead images acquired in real time with lead images or data in the lead image library, as discussed in more detail below.
  • Figure 2 is a flowchart showing an embodiment of this invention. In the flowchart, steps are divided into those accomplished before the start of the surgical procedure, and those that are accomplished in real time, i.e., during the procedure.
  • In this example, the object of interest is a body or a specific part of the body, such as a patient's head; the follow image modality is an MRI scan of the patient's head and the lead image modality is a video image of the surface of the patient's head. It should be understood, however, that the invention could be used in a variety of environments and applications.
  • The lead and follow images are interreferenced prior to the surgical procedure to gather information for use in real time during the surgical procedure.
  • Interreferencing of the lead and follow images gathered in this pre-procedure stage is preferably performed by establishing common physical coordinates between the patient and the video camera and between the patient and the MRI device.
  • The first step of this preferred method (indicated generally at block 30 of Figure 2) therefore is to mount the patient's head immovably to a holder such as a stereotactic frame.
  • An MRI scan of the patient's head and stereotactic frame is taken, and the three-dimensional data (including coordinate data relating to the patient's head and the stereotactic frame) are processed in a conventional manner and stored in memory, such as in a follow image library, as shown in block 34.
  • The pre-procedure lead video images of the patient's head are preferably obtained via a camera that automatically obtains digital images at precise locations.
  • Robotic devices built to move instruments automatically between precise stereotactic locations have been described by Young, R.F., et al., "Robot-aided Surgery," and Benabid, A.L., et al., "Computer-driven Robot for Stereotactic Neurosurgery" (in Kelly, P.J., et al., "Computers in Stereotactic Neurosurgery," pp. 320-329, 330-342 (Blackwell Scientific Publications 1992)).
  • Such devices may be used to move a camera to appropriate lead view angles for the acquisition of the lead library.
  • The video camera may be moved about the head in three planes, obtaining an image every 2 mm. Each image is stored in a lead image library along with information about the line of view or trajectory from which the image was taken.
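  • A minimal sketch of how such a lead image library might be organized is given below; the capture routine and the angular sampling grid are illustrative assumptions, not details taken from the patent (which describes acquiring an image roughly every 2 mm along three planes).

```python
import numpy as np

def build_lead_library(capture_at, azimuths, elevations):
    """Acquire a lead image at each (azimuth, elevation) trajectory and store
    it keyed by its line of view. `capture_at` is assumed to position the
    camera (e.g., via a robotic mount) and return a 2-D image array."""
    library = {}
    for az in azimuths:
        for el in elevations:
            library[(az, el)] = capture_at(az, el)
    return library

# Hypothetical usage: sample views every 5 degrees over a hemisphere.
# library = build_lead_library(capture_at,
#                              azimuths=np.arange(0, 360, 5),
#                              elevations=np.arange(0, 90, 5))
```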
  • The stereotactic frame may be removed from the patient's head after all these images have been obtained.
  • The lead (video) and follow (MRI) image data now share a common coordinate system.
  • Identification of a line of view showing a portion of a stored video image is equivalent to identification of the corresponding line of view in the stored MRI image.
  • Information interreferencing the stored lead and follow images is itself stored for use for real time imaging during the surgical procedure.
  • The video lead images are digitally analyzed to identify predefined fiducial markers.
  • The digital representation of each lead image stored in the lead image library is segmented, or broken down into subobjects.
  • Segmentation can be achieved by any suitable means known in the art, such as by feature extraction, thresholding, edge detection, Hough transforms, region growing, run- length connectivity analysis, boundary analysis, template matching, etc.
  • A preferred embodiment of this invention utilizes a Canny edge detection technique, as described in R. Lewis, "Practical Digital Image Processing" (Ellis Horwood, Ltd., 1990).
  • The result of the segmentation process is the division of the video image into subobjects which have defined boundaries, shapes, and positions within the overall image.
  • The Canny edge detection segmenting technique can be modified depending on whether the image is in two or three dimensions. In this example the image is, of course, a two-dimensional video image.
  • Segmentation approaches can be adapted for use with either two-dimensional or three-dimensional images, although most written literature concerns two-dimensional image segmentation.
  • One method by which a two-dimensional approach can be adapted for the segmentation of a three-dimensional object is to run the two-dimensional segmentation program on each two-dimensional slice of the series that represents the three-dimensional structure. Subsequent interpolation of each corresponding part of the slices will result in a three-dimensional image containing three-dimensional segmented objects.
  • The least computationally intensive method of segmentation is the use of thresholding. Pixels above and below a designated value are separated, usually by changing the pixels to a binary state representative of the side of the threshold on which that pixel falls. Using thresholding and related edge detection methods that are well known in the art, and using visually distinctive fiducials, a desired area of the image is separated from other areas. If extracted outlines are discontinuous, simple "linking" algorithms, as are known in the art, may be used to connect closely situated pixels.
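  • A minimal sketch of this thresholding idea, assuming NumPy and SciPy are available, might look as follows; the grouping of adjacent foreground pixels by scipy.ndimage.label stands in for the simple linking of closely situated pixels described above.

```python
import numpy as np
from scipy import ndimage

def threshold_segment(image, level):
    """Separate pixels above and below a designated intensity: pixels at or
    above `level` become foreground (True), the rest background (False).
    Adjacent foreground pixels are then grouped into labelled subobjects."""
    binary = image >= level
    labels, n_objects = ndimage.label(binary)   # 4-connected grouping
    return labels, n_objects

# Hypothetical usage on an 8-bit grayscale frame:
# labels, n = threshold_segment(frame, level=200)
```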
  • The lead and follow images are processed in similar manners, for example by thresholding, so that they can be matched quickly and efficiently.
  • Thresholding may be effectively accomplished prior to any processing by the computer by simply setting up uniform lighting conditions and setting the input sensitivity or output level of the video camera to a selected level, such that only pixels of a certain intensity remain visible. Hence, only the relevant fiducial shapes will reach the processor.
  • Image analysis as described herein can be accomplished in real time (i.e., at an interactive rate) even using hardware not specially designed for image analysis.
  • Segmentation of a unified three-dimensional file is preferable to performing a segmentation on a series of two-dimensional images and then combining them, since the three-dimensional file provides more points of reference when making a statistic-based segmentation decision.
  • Fuzzy logic techniques may also be used, such as those described by Rosenfeld, A., "The Fuzzy geometry of image subsets,” (in Bezdek, J.C, et al., “Fuzzy Models for Pattern Recognition,” pp. 340-346 (IEEE Press 1991)).
  • The final part of this image analysis step is to classify the subobjects.
  • Classification is accomplished by means well known in the art. A wide variety of image classification methods are described in a robust literature, including those based on statistical, fuzzy, relational, and feature- based models. Using a feature-based model, feature extraction is performed on a segmented or unsegmented image. If there is a match between the qualities of the features and those qualities previously assigned in the class definition, the object is classified as being of that type. Class type can describe distinct anatomic structures, and in the case of this invention, distinct anatomic structures as they appear from distinct points of view.
  • The features of each segmented area of an image are compared with a list of feature criteria that describe a fiducial marker.
  • The fiducial marker is preferably a unique and identifiable feature or set of features on the object, such as surface shapes caused by particular bone or cartilage structures within the patient's body.
  • For example, the system could use an eyeball as a fiducial marker by describing it as a roughly spherical object having a diameter within a certain range of diameters and a pixel intensity within a certain range of intensities.
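  • A feature-based check of this kind could be expressed as in the sketch below; all numeric ranges are illustrative placeholders, not values taken from the patent, and a real classifier would also test location relative to other subobjects.

```python
def is_eyeball(diameter_mm, circularity, mean_intensity):
    """Classify a segmented subobject as an eyeball if its measured features
    fall within predefined ranges (all thresholds here are hypothetical)."""
    return (20.0 <= diameter_mm <= 28.0 and     # plausible eyeball size range
            circularity >= 0.85 and             # close to circular in the image
            60 <= mean_intensity <= 180)        # expected pixel-intensity band
```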
  • Other potential fiducial markers are the nose, the brow, the pinnae and the external auditory meatus.
  • The fiducial marker can also be added to the object prior to imaging solely for the purpose of providing a unique marker, such as a marker on the scalp.
  • Such a marker would typically be selected to be visible in each imaging modality used. For example, copper sulfate capsules are visible both to MRI and to a video camera.
  • The stereotactic frame used in the pre-procedure steps may be left attached to the head. In any case, if an object can be automatically recognized, it can be used as a fiducial marker.
  • The segmentation, feature extraction and classification steps utilized by this invention may be performed with custom software. Suitable analysis of two-dimensional images may be done with commercially available software such as Global Lab Image, with processing guided by a macro script.
  • Once these pre-procedure steps are complete, the system is ready for use in real time imaging (i.e., images obtained at an interactive rate) during a medical procedure.
  • Real time lead images of the patient's head along the physician's line of sight are obtained through a digital video camera mounted on the physician's head, as in block 38 of Figure 2.
  • Individual video images are obtained via a framegrabber.
  • Each video image is correlated in real time (i.e., at an interactive rate) with a corresponding image in the lead image library, preferably using the digital image analysis techniques discussed above.
  • The lead image is segmented, and the subobjects in the segmented lead image are classified to identify one or more fiducial markers.
  • Each fiducial marker in the real time lead image is matched in position, orientation and size with a corresponding fiducial marker in the lead image library and, thus, to a corresponding position, orientation and size in the follow image library via the interreferencing information.
  • The follow image is subsequently translated, rotated in three dimensions, and scaled to match the specifications of the selected lead view.
  • The process of translating and/or rotating and/or scaling the images to match each other is known as transformation.
  • The follow image may be stored, manipulated or displayed as a density matrix of points, or it may be converted to a segmented vector-based image by means well known in the art prior to being stored, manipulated or displayed. Because the follow image in this example is three-dimensional, this matching step yields a three-dimensional volume, only the "surface" of which would ordinarily be visible.
  • The next step in the method is therefore to select the desired depth of the slice one wishes to view.
  • The depth of slice may be selected via a mouse, knob, joystick or other control mechanism. The transformed follow image is then sliced to the designated depth by means known in the art, such as described in Russ, J.C., "The Image Processing Handbook," pp. 393-400 (CRC Press 1992); Burger, P., et al., "Interactive Computer Graphics," pp. 195-235 (Addison-Wesley 1989).
  • One common slicing algorithm involves designating a plane of slice in the three-dimensional image and instructing the computer to ignore or to make transparent any data located between the viewer and that plane. Because images are generally represented in memory as arrays, and because the location of each element in the array is mathematically related to the physical space that it represents, a plane of cut can be designated by mathematically identifying those elements of the array that are divided by the plane. The resulting image is a two-dimensional object sliced at the designated plane.
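  • For the simple case where the plane of cut is parallel to one of the array axes, identifying the divided elements reduces to indexing the volume, as in the sketch below (an axis-aligned example only; oblique planes would require resampling, and the array layout and spacing are assumptions).

```python
import numpy as np

def axial_slice(volume, depth_mm, spacing_mm):
    """Return the 2-D plane of a 3-D follow image at the requested depth.
    Data between the viewer and that plane is simply not rendered.
    `volume` is assumed indexed [depth, row, col]; `spacing_mm` is the
    physical distance between consecutive depth planes."""
    index = int(round(depth_mm / spacing_mm))
    index = max(0, min(index, volume.shape[0] - 1))   # clamp to the volume
    return volume[index]

# Hypothetical usage: a 100-plane volume with 1.5 mm spacing, cut at 30 mm.
vol = np.zeros((100, 256, 256))
plane = axial_slice(vol, depth_mm=30.0, spacing_mm=1.5)   # shape (256, 256)
```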
  • Follow images may be displayed according to "perspective rendering" techniques, as are known in the art of computer graphics, so as to most accurately and naturally emulate a given point of view.
  • The graphics functions of the system can employ "three-dimensional texture mapping" functions such as those available with the SGI RealityEngine and with the Sun Microsystems Freedom Series graphics subsystems.
  • The SGI RealityEngine hardware/software graphics platform supports a function called "3-D texture" which enables volumes to be stored in "texture memory."
  • Texel values are defined in a three-dimensional coordinate system, and two-dimensional slices are extracted from this volume by defining a plane intersecting the volume.
  • The three-dimensional follow image information of this invention may be stored as a texture in texture memory of the RealityEngine and slices obtained as discussed above.
  • Alternatively, the three-dimensional data set is held, transformed and sliced in main memory, including in frame buffers and z-buffers, such as those found on the Sun Microsystems Freedom Series graphics subsystems.
  • The system can display the sliced follow image alone, or as a composite image together with a corresponding lead image, such as by digital addition of the two images. Additionally, the transformed and sliced follow image can be projected onto a see-through display mounted in front of the physician's eyes so that it is effectively combined with the physician's direct view of the patient. Alternatively, the composite lead and follow images can be displayed on a screen adjacent the patient. The displayed images remain on the screen while a new updated lead image is obtained, and the process starts again.
  • The imaging system performs the steps of obtaining the lead image and displaying the corresponding follow or composite image substantially in real time (in other words, at an interactive rate).
  • The time lag between obtaining the lead image and displaying the follow or composite image is short enough that the displayed image tracks changes of the lead view substantially in real time.
  • New images will be processed and displayed at a frequency that enables the physician to receive a steady stream of visual feedback reflecting the movement of the physician, the patient, medical instruments, etc.
  • In a first alternative embodiment, interreferencing of the images in the lead and follow libraries in the pre-procedure portion of the imaging method is done solely by digital image analysis techniques.
  • Each digitized lead image (for example, a video image) is segmented, and the subobjects are classified to identify fiducial markers.
  • Fiducial markers in the follow images (e.g., surface views of MRI images) are identified in the same manner.
  • A map or table interreferencing the lead and follow images is created by transforming the follow image fiducial markers to match the corresponding lead image fiducial markers.
  • The interreferencing information is stored for use during the real time imaging process.
  • Pattern matching techniques may also be used to match the images without identifying specific fiducial markers.
  • The method of the first alternative embodiment may then be used to display appropriate slices of the follow images that correspond to lead images obtained in real time.
  • Real time video images of a patient obtained by a video camera mounted on a physician's head can be correlated with lead images in the lead image library via the digital image analysis techniques described above with respect to a preferred embodiment.
  • The stored interreferencing information can then be used to identify the follow image corresponding to the real time lead image.
  • The follow image is transformed to match the size, location and orientation of the lead image.
  • The three-dimensional follow image is also sliced to a depth selected via the depth control.
  • The transformed and sliced follow image is then displayed alone or as a composite image together with the real time video image. The process repeats when a subsequent real time video image is obtained.
  • In another alternative embodiment, the follow images are not sliced in real time. Rather, this embodiment generates a follow image library of pre-sliced follow images obtained on a variety of planes and indexed to multiple lead image lines of view and slice depths. The appropriate follow image slice is retrieved from the follow image library when a given line of view and slice depth is called for by the analysis of the real time lead image. While this embodiment requires greater imaging device memory, it requires less real time processing by the device.
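  • The lookup enabled by this memory-for-speed trade might resemble the sketch below, where slices are keyed by a quantized line of view and depth; the key structure and step sizes are assumptions made for illustration only.

```python
def nearest_preslice(library, view, depth, view_step=5.0, depth_step=2.0):
    """Fetch a pre-computed follow-image slice from a library keyed by a
    quantised (azimuth, elevation, depth) tuple. The quantisation steps are
    hypothetical; finer steps mean more stored slices but closer matches."""
    az, el = view
    key = (round(az / view_step) * view_step,
           round(el / view_step) * view_step,
           round(depth / depth_step) * depth_step)
    return library.get(key)   # None if no slice was pre-computed for this key
```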
  • A third alternative embodiment, shown in Figure 3, omits the steps of obtaining lead images and interreferencing the lead images with the follow images during the pre-procedure part of the method.
  • Instead, the lead image obtained in real time by the lead imager can be interreferenced directly with the follow images, without the benefit of a preexisting table or map correlating earlier-obtained lead images with follow images, by performing the segmentation and classification steps between the lead image and the follow images in real time or by using other image or pattern matching techniques (such as those described in Haralick, R.M., et al., "Computer and Robot Vision," vol. 2, pp. 289-377 (Addison-Wesley 1993); Siy, P., et al., "Fuzzy Logic for Handwritten Numeral Character Recognition," in Bezdek, J.C., et al., "Fuzzy Models for Pattern Recognition" (IEEE Press 1991)).
  • This third alternative method increases the real time load on the system processor, which could result in a slower display refresh time, i.e., the time between successively displayed images.
  • The slower display refresh time might be acceptable for certain procedures, however.
  • One advantage of this approach is that it eliminates some of the time spent in the pre-procedure stage.
  • The follow images can also be obtained in real time and related to the lead images in real time. This approach would be useful for surgical procedures that alter the patient in some way, thereby making any images obtained prior to the procedure inaccurate.
  • The methods shown in Figures 2 and 3 can be practiced using relational data about multiple fiducial markers on the object.
  • Orientation and size information regarding the lead and follow images can be determined via triangulation by determining the relative positions of the multiple fiducial markers as seen from a particular line of view.
  • Image analysis techniques can also be used to track the movement of the camera or the head rather than its position directly.
  • Pattern matching techniques as described in Davies, Haralick, and Siy may be used for either pre-procedure or real time matching of corresponding images.
  • The following is an example of the first preferred embodiment, in which the imaging system and method are used to generate and display an image of a patient's head.
  • The two images are: (1) the surgeon's view (produced by a digital video camera mounted on the surgeon's head and pointed at the surface of the patient's head) as the lead image and (2) a three-dimensional CT image of the patient's head as the follow image.
  • The images are obtained in the pre-procedure stage by a processing computer via a frame-grabber (for the video lead image library) and as a pre-created file including line of view information (for the CT follow image library), and are placed in two separate memory buffers or image libraries.
  • The lead images and follow images are preferably obtained while the patient wears a stereotactic head frame.
  • Numerous video images are taken from a variety of perspectives around the head. Each image is stored in the lead image library along with the line of view, or trajectory, along which that image was obtained. The stereotactic frame is then removed.
  • The images in the lead image library are interreferenced with images in the follow image library by correlating the lines of view derived in the image obtaining steps. This interreferencing information is used later in the real time portion of the imaging process.
  • The imaging system may then be used to obtain and display real time images of the patient.
  • The real time lead image is obtained via a head-mounted video camera that tracks the physician's line of sight.
  • Each real time lead video image is captured by a frame grabber and analyzed to identify predetermined fiducial markers according to the following process.
  • The real time lead images are segmented via the Canny edge detection technique (Lewis, R., "Practical Digital Image Processing," pp. 211-217 (Ellis Horwood Limited 1990)), which identifies the boundaries between different structures that appear in an image.
  • The fiducial marker for this example is the eye orbit of the patient's skull, which has been enhanced by drawing a circumferential ring with a marker pen. The orbital rims can be seen as bony ridges on the surface of the face with a video camera.
  • The computer might be told, for example, that a left eye orbit is a roughly circular segmented object with a size between the threshold numbers of 0 and 75, which occurs on the left side of the video images.
  • Once they have been segmented, the orbits appear as ellipses.
  • When the face is viewed from directly in front, the ellipses representing the orbits will, at least when considered as a pair, most closely approximate circles.
  • As the head rotates in one direction, the axis ratio of the ellipses (the ratio of the major axis, the long axis of an ellipse, to the minor axis, the short axis) decreases, and the angle in degrees between the major axis and the x axis is approximately 90°.
  • As the head rotates in the other direction, the axis ratio of the ellipses also decreases accordingly, but the ellipse angle is now approximately 0°.
  • Any given view can be determined, or classified, as being along a certain line of view.
  • Left and right views will not be confused because of the spatial relationship between the two ellipses and other fiducials (one orbit is to the left of the other relative to some other (third) fiducial).
  • A computer program can be "taught" that ellipses of given shapes and orientations correspond to the head at a specific orientation.
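  • One generic way to obtain the axis ratio and ellipse angle used in this example is from the second-order statistics of the segmented orbit pixels, as sketched below; this particular fitting method is an assumption for illustration and is not prescribed by the patent.

```python
import numpy as np

def ellipse_params(mask):
    """Approximate a segmented region (mask: 2-D boolean array) by an ellipse.
    Returns (axis_ratio, angle_deg): the ratio of minor to major axis and the
    angle between the major axis and the image x axis."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    cov = np.cov(np.vstack([x, y]))              # 2x2 covariance of the pixels
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    evals = np.clip(evals, 0.0, None)
    minor, major = np.sqrt(evals)                # proportional to the axis lengths
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis angle
    return minor / major, angle
```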
  • The derived orientation of the real time lead image is compared to the stored information regarding the pre-procedure lead images to identify the pre-procedure lead image that corresponds to the physician's line of view. Because of the earlier interreferencing of the lead and follow images, identification of the lead image line of view will provide the correct follow image line of view. If the real time line of view does not correspond exactly with any of the stored lead image lines of view, the system will interpolate to approximate the correct line of view.
  • After determination of the correct line of view, the follow image must be translated, rotated and scaled to match the real time image. As with the line of view, these transformation steps are performed by comparing the location, orientation and size of the fiducial marker (in this example, the orbit) of the real time video image with the same parameters of the fiducial marker in the corresponding lead library image, and applying them to the follow image in combination with a predesignated scaling factor which relates the size of the images in the lead and follow libraries.
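  • Read as a 2-D sketch, the comparison just described amounts to assembling a similarity transform from the two fiducial measurements; the dictionary keys and the matrix composition below are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def similarity_transform(lead_fid, library_fid, follow_scale=1.0):
    """Build a 2-D homogeneous matrix that maps library/follow coordinates
    onto the real-time lead image. Each fiducial is a dict with 'center'
    (x, y), 'angle' (degrees) and 'size' (e.g., major-axis length);
    `follow_scale` is the predesignated lead-to-follow scaling factor."""
    scale = lead_fid["size"] / library_fid["size"] * follow_scale
    theta = np.radians(lead_fid["angle"] - library_fid["angle"])
    c, s = np.cos(theta), np.sin(theta)
    rot_scale = scale * np.array([[c, -s], [s, c]])
    # Translate so the library fiducial centre lands on the lead fiducial centre.
    t = np.asarray(lead_fid["center"]) - rot_scale @ np.asarray(library_fid["center"])
    m = np.eye(3)
    m[:2, :2], m[:2, 2] = rot_scale, t
    return m   # apply to homogeneous image coordinates [x, y, 1]
```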
  • Any standard or arbitrarily selected position, orientation (e.g., axial, coronal, sagittal) and scale may also be viewed.
  • After any transformation of the follow image, the follow image must be sliced at the appropriate depth.
  • The depth can be selected by use of an input mechanism associated with the system, such as a mouse, knob, joystick, switch on a hand-held probe, or keyboard.
  • The resulting follow image slice is then displayed on a head-mounted, see-through display worn by the physician, such as the displays marketed by RPI Advanced Technology Group (San Francisco, CA) and by Virtual Reality, Inc. (Pleasantville, NY).
  • The process repeats either on demand or automatically as new real time lead images are obtained by the video camera.
  • Stereoscopic displays can be a useful way of displaying follow images or composite images to give a three-dimensional appearance to the flat displays of CRT's and head-mounted displays. Stereoscopic displays can also improve the effectiveness of the invention by giving appropriate depth cues to a surgeon.
  • The head-mounted display/camera apparatus may include surgical loupes for magnification and/or lights for improved illumination of the surgical field.
  • A head-mounted camera may be fixed very close to the user's non-dominant eye; the parallax between the user's natural ocular view and the synthetic view displayed on the see-through head-mounted display creates an approximation of the correct three-dimensional view of the image.
  • Alternatively, alternating polarized light filters, such as those in the Stereoscopic Display Kits by Tektronix Display Products (Beaverton, OR), are used between the user's eyes and a stereoscopic display.
  • The stereoscopic system displays artificially parallaxed image pairs which provide a synthetic three-dimensional view.
  • Such stereoscopic views are produced and displayed by means well known in the art and may be displayed on any display device, including a conventional CRT or a see-through head-mounted display.
  • This method provides the user, such as a surgeon, with a very precise illusion of seeing the exact three-dimensional location of a specific structure within a patient's body.
  • Such a method not only provides increased realism to the images provided by the invention, but also helps make image guided surgical procedures more accurate, safe and effective.
  • The speed and efficiency of the hardware used with this invention may be improved by the use of specialized subsystems, leaving the full power of the host system available for miscellaneous tasks such as communicating between the subsystems.
  • Specialized machine vision subsystems, such as the MaxVideo 200 and Max860 systems (Datacube, Inc., Danvers, MA) or the Cognex 4400 image processing board (Cognex Corp., Needham, MA), may be used together with the Onyx.
  • These subsystems are designed to take over from their host system computationally intensive tasks such as real-time edge detection, extraction of shapes, segmentation and image classification.
  • MaxVideo 200 and Max860 subsystems reside on VME busses of an Onyx with a RealityEngine, with all subsystems under control of the Onyx.
  • Alternatively, the MaxVideo 200 and Max860 subsystems are under the control of a SPARC LXE (Themis Computer, Pleasanton, CA), all residing on VME busses of an Onyx with a RealityEngine.
  • MaxVideo 200 and Max860 subsystems reside on a SPARC 20 workstation with a Freedom Series 3300 graphic subsystem (Sun Microsystems, Mountain View, CA), which has z-buffers and tri-linear MIP texture mapping features.
  • MaxVideo 200 and Max860 subsystems reside on a SPARC 20 workstation with an SX graphics subsystem (Sun Microsystems, Mountain View, CA).
  • The MaxVideo 200 subsystem performs integer-based image processing, filtering, image segmentation, geometric operations and feature extraction, and image classification (lead image derived transformation instructions) and evaluation tasks, communicating its computational output, directly or indirectly, to the graphics subsystem.
  • The Max860 subsystem may be used to perform similar functions which require floating point calculations, if desired.
  • A variety of operating systems can be used, depending upon what hardware configuration is selected. These operating systems include IRIX (Silicon Graphics, Inc., Mountain View, CA), SunOS/Solaris (Sun Microsystems, Inc.), and VxWorks (Wind River Systems, Inc., Alameda, CA).
  • The invention can be used as part of the vision system of remote-controlled machines (such as remote-controlled military vehicles) and autonomous robots (such as surgical robots).
  • Follow image views or composite views generated according to the method of this invention may be used for guidance through an area that is obscured to the view of the naked eye or video camera but known by some other means. For example, if the exterior of a building is visible, and a CAD-type model of that building is also available, a military device can target any room within that building based upon the exterior view.
  • Appropriate follow images or composite views may be used directly in the autonomous vision systems of robots by means well known in the robotics art or may be used by a remote or local human operator. Modifications are possible without departing from the scope of this invention.
  • the imaging modalities could be angiography (done preoperatively) and fluoroscopy (done in real time and used as either a lead or follow image), so that the location of a medical instrument inserted into a patient's body can be tracked in real time.
  • Fluoroscopy may be used as a method of real-time intraoperative lead image acquisition.
  • Fiducial markers on the surface of the patient, on a catheter tip, and/or affixed within deep tissue may be identified by the computer system as long as they are visible to a fluoroscopic camera. Examples of this kind of fiducial marker placement are shown in Figure 12.
  • Figure 12 illustrates fiducials for positioning a catheter for angioplasty of a blood vessel.
  • a catheter 450 is shown within the lumen of blood vessel 452, which is shown with plaque.
  • a fiducial 454 is shown at the tip of the catheter so that the fiducials 454 and 456 may be utilized to delineate balloon 458 for performing angioplasty.
  • a fiducial 460 is shown attached to the exterior of blood vessel 452 with barbs.
  • a fiducial 462 is shown attached to the exterior surface of skin 464. Fiducials 460 and 462 may be attached in any number of ways, including barbs, small sutures, adhesive material, and the like. The fiducials allow the 3-dimensional position of anatomical structures to be derived in real time.
  • the positional information may be used to dictate instructions for follow image acquisition or transformation as further described in this document, and the placement of instrument effigies in the appropriate location.
  • fiducial markers on a catheter tip and fiducial markers on or in a patient's body adjacent to the blood vessel in need of treatment may both be tracked using fluoroscopic lead image acquisition, with appropriately acquired or transformed and edited 3-dimensional contrast CT data obtained from this 2-dimensional data.
  • if an endoscope tip has a fluoroscopically trackable fiducial marker, the desired view of the tip of the scope may be obtained accordingly, and corresponding follow image views may be displayed by the system.
  • Fiducial markers for image guided surgery are commercially available from several sources including bone-implantable fiducials made by ACT Medical (Newton, MA), and adhesive skin markers made by E-Z-EM (Westbury, NY).
  • Figure 10 shows a fiducial marker with an elongated staff that penetrates surgical drapes and may be detected by a localizer camera.
  • a fiducial 350 includes a fiducial array 353 at one end and barbs 354 at the other.
  • the fiducial marker is shown penetrating a surgical drape 356 and skin 358, and being affixed to an internal organ 360 such as the liver. As shown, the fiducial marker is affixed to the internal organ by extensible and retractable tenaculum barbs, but other mechanisms may be utilized.
  • Fiducial markers such as fiducial marker 350, whether adhered to the skin's surface, implanted within bone, or implanted in deep tissue, may have elongated shafts between the affixed base and the surface visible to the localizer camera. Such a shaft allows the marker to pass through surgical drapes and even layers of tissue. Consequently, a long-shafted fiducial marker can transcend any physical obstructions lying between an organ that needs to be tracked and a surface visible to the localizer cameras. In one example, fiducial markers are adhered to the patient's scalp in a conventional manner. However, the long shaft penetrates through the surgical drapes, which may be taped about the shaft for more sterile coverage.
  • a fiducial marker may be placed through a patient's skin into an internal organ such as a liver or prostate. Subsequently, the patient may be MRI or CT scanned. Despite the fiducial being anchored in the liver, and moving about with any internal shifting of that organ, the long shaft that penetrates to the surface is visible to the lead image localizer cameras.
  • the anterior commissure and posterior commissure of the brain might be visible on both MRI and CT.
  • those common points of reference allow two entirely separate image coordinate systems to be related to one another.
  • the "follow image” could be a composite of data obtained by several modalities, previously registered by established means (Kelly, p. 209-225), or a series of separate follow images sequentially registered to each other, or to the lead image by methods herein described. In this way, a surface video camera could be correlated with the CT via the MR coordinate link.
  • a surgical instrument may be tracked using the techniques of this invention and displayed along with the lead and/or follow images.
  • images of an instrument may be obtained using a video camera or fluoroscopy. If the dimensions of the instrument are known, the image of the instrument may be related to three-dimensional space and displayed with respect to the lead and/or follow images of the patient, even if part of the instrument actually cannot be viewed by the video camera or fluoroscope. This is possible because, like fiducial body features, instruments generally have unique appearances which are characteristic of the points from which they are viewed. While tracking instruments, a real-time imaging modality could be used as either a lead or a follow image.
  • instrument tracking tasks are preferably performed independent of a patient tracking system, such as by a separate computer or separate processor running in parallel with the computer or processor tracking the patient. Both computers may, of course, derive their input from the same video lead images, and their displays are preferably composited into a single unified display.
  • instruments may be tracked by electromagnetic, sonic or mechanical motion detector systems known in the art. Some such methods are discussed by Kelly, P.J., et al., "Computers in Stereotactic Neurosurgery," pp. 353-354 (Blackwell Scientific Publications, 1992). Such instruments may bear additional features such as knobs, buttons or switches for controlling image acquisition or display.
  • the goal of the image guidance in such a case might be a task such as automatic robotic positioning.
  • a user's line of sight may be used to cue the activation of stepper motors that, for example, move a robotic arm or robotic camera to a specific position or orientation.
  • a localizer camera or camera pair may be placed on or within an instrument such as the head of a surgical microscope, or upon an endoscope as shown in Figure 9.
  • Figure 9 shows an endoscope including dual localizer cameras for acquiring lead images.
  • An endoscope 300 includes an endoscope shaft 302 and localizer cameras 304 secured to the shaft.
  • the fiducial markers identified by the system in order to localize the instrument need not be within the optical field of view of such a microscope or endoscope.
  • the lead image is processed by the computer system and need not be the same as the view that is provided to the eyes of the user.
  • Endoscope-mounted cameras, like head-mounted cameras, are freely movable in three dimensions without motoric or mechanical assistance, and require the active effort of the user in order to maintain line-of-sight.
  • Automated slice-depth selection may be accomplished by making the distance between the user's head (or lead image acquisition device) and the target directly correlated with the z-value (or equivalent) based slice selection of a three-dimensional image. In such a manner, the computed selection of the coordinates to be rendered transparent, or otherwise graphically ignored in the display, and those that are to be rendered opaque or otherwise visible, may be a function of how far the user's perspective is from a given target.
  • this may be accomplished with a variety of distance-measuring devices that are known in the art. Such devices run a wide gamut from mechanical arms with potentiometers or optical encoders in their joints, to sound or electromagnetic energy based technologies. Any technology capable of quickly and accurately determining the precise distance between a visual target and a predetermined point may be used for this purpose.
  • a commercially available range finder, for example a laser range finder, may be used: a pulsed signal is emitted from a source that may be located on a surgeon's head mounted display, or on the optical head of an operating microscope.
  • a displacement meter such as that manufactured by Keyence Corp. (Woodcliff Lake, NJ) may be used for precisely ascertaining the distance to a visual target.
  • a laser beam emitter, for example, sits near, and in a precise orientation with respect to, a CCD camera array, both facing the target.
  • the precise location on the CCD array that is activated in response to the laser striking the target surface can be computed to reveal the precise distance to the target.
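  • As a rough illustration of that triangulation relationship, the sketch below (Python, with purely illustrative baseline, focal length and pixel-pitch values, not those of any particular displacement meter) converts the activated pixel's offset from the principal point into a distance by similar triangles:

```python
# Minimal sketch of the laser-triangulation relationship: the further the
# target, the closer the imaged laser spot sits to the principal point.
# Baseline, focal length and pixel pitch are illustrative values only.

def triangulated_distance(pixel_index, baseline_m=0.05, focal_m=0.016,
                          pixel_pitch_m=7e-6, center_index=512):
    """Distance to the laser spot from where its image lands on the CCD row."""
    offset_m = abs(pixel_index - center_index) * pixel_pitch_m  # spot offset on the sensor
    if offset_m == 0:
        return float("inf")  # spot at the principal point -> target effectively at infinity
    # Similar triangles: distance / baseline = focal / offset
    return baseline_m * focal_m / offset_m

# A spot imaged 40 pixels from the principal point:
print(round(triangulated_distance(552), 3), "m")   # ~2.857 m with these numbers
```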
  • Image processing techniques may also be used to determine distance to target. For example, images captured by CCD cameras are often automatically focused using a variety of image processing algorithms that, for example, seek to minimize the width of strong boundary lines between objects in the image. By measuring the degree of such processing that must be carried out to bring the image into focus, one has a measure of distance. Depth of slice to be displayed may also be determined by means such as the maximal depth of the tip of a tracked instrument, once it is beneath the surface of the patient's skin.
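  • A minimal sketch of that focus-based approach is given below; the capture of frames is left to the surrounding system, and the focal length and the sharpness metric (mean squared gradient) are illustrative assumptions rather than the method of any particular camera:

```python
import numpy as np

def sharpness(img):
    """Mean squared gradient -- larger when edges in the image are in focus."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

def distance_from_focus(frames_by_lens_position, focal_m=0.016):
    """frames_by_lens_position: {lens-to-sensor distance (m): 2-D image array}.

    Picks the lens position giving the sharpest image and converts it to an
    object distance with the thin-lens equation 1/f = 1/u + 1/v.
    """
    v_best = max(frames_by_lens_position,
                 key=lambda v: sharpness(frames_by_lens_position[v]))
    return focal_m * v_best / (v_best - focal_m)
```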
  • once the distance between, for example, the head of an operating microscope and a specific exposed anatomical structure is measured, one may register that particular distance to a given slice depth in the follow image set. For example, the distance between the head of an operating microscope and the surface of a patient's scalp at the occiput may be registered with respect to that same point in an MRI data set. As an operation proceeds, and the patient's skull and brain are incised, deeper surfaces will be exposed, and these deeper levels are reflected as longer distances by the range finding device. The measured longer distance is reflected, in turn, by a correspondingly deeper selected slice depth for the image overlay to be displayed.
  • the slice depth/distance relationship may be recalibrated at any time, such as when the resting distance between the optical head of the microscope and the patient is changed.
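  • The distance-to-slice-depth coupling and its recalibration might be organized as in the following sketch, where the reference distance, slice spacing and slice indices are illustrative assumptions only:

```python
class SliceDepthSelector:
    """Couples a measured distance (e.g. microscope head to exposed surface)
    to a follow-image slice index; the numbers used below are illustrative."""

    def __init__(self, reference_distance_m, reference_slice, slice_spacing_m=0.001):
        self.ref_dist = reference_distance_m   # registered distance to a known surface
        self.ref_slice = reference_slice       # follow-image slice registered to that surface
        self.spacing = slice_spacing_m         # thickness of one follow-image slice

    def slice_for(self, measured_distance_m):
        extra_depth = measured_distance_m - self.ref_dist   # how much deeper the exposed surface is
        return self.ref_slice + int(round(extra_depth / self.spacing))

    def recalibrate(self, new_reference_distance_m, current_slice):
        """Call when the resting distance between microscope head and patient changes."""
        self.ref_dist = new_reference_distance_m
        self.ref_slice = current_slice

selector = SliceDepthSelector(reference_distance_m=0.30, reference_slice=12)
print(selector.slice_for(0.315))   # 15 mm deeper excavation -> slice 27
```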
  • a second range finding or position assessing device may be used to locate a known marker, and hence the new position of the optical head, and provide this information to the processing means in order to ascertain a corrected distance to target with which to choose an appropriate slice depth.
  • this system may dictate the size, position, rotation, slice depth, etc., of a follow image before or during its acquisition.
  • Such an embodiment is of particular value in the case of follow images being acquired in real time.
  • the transformation instructions provided by the machine vision subsystem herein described are relayed to the computerized image acquisition controls of an MRI or CT machine, rather than to a graphics manipulation subsystem.
  • Computerized image acquisition controls are a standard part of modern medical imaging equipment, including CT and MRI machines, such as those produced by GE Medical Systems, and by Siemens. These computerized image acquisition controls typically use manual entry, such as by a keyboard or mouse, to select the scale, rotation, translatory position, and slice depth of images to be acquired, typically using a "localizer" image as a reference.
  • the keyboard and mouse are bypassed, and the scale, rotation, translatory position, and slice depth instruction set are automatically communicated to the computerized image acquisition controls by the machine vision subsystem.
  • This instruction set is essentially the same as the transformation instruction set described in previous embodiments, as both simply reflect the rotation, translatory position, and/or scale of the lead image, plus the slice depth instruction from the slice depth control. In this manner, the follow image may simply be acquired in real time in its desired translation, rotation, scale and/or slice, rather than being transformed into that desired form from a previous form.
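  • The sketch below illustrates, in schematic form only, how such an instruction set might be assembled from the lead-image-derived pose and the slice depth control and handed to the acquisition controls; the field names and the send_to_scanner hook are hypothetical, since actual CT/MRI control interfaces are vendor specific:

```python
def build_acquisition_instructions(lead_pose, slice_depth_mm):
    """lead_pose: pose parameters derived from the lead image by the machine
    vision subsystem (rotation, translation, scale); field names are hypothetical."""
    return {
        "plane_rotation_deg": lead_pose["rotation_deg"],
        "plane_offset_mm": lead_pose["translation_mm"],
        "field_of_view_scale": lead_pose["scale"],
        "slice_depth_mm": slice_depth_mm,
    }

def send_to_scanner(instructions):
    # Stand-in for the vendor-specific computerized image acquisition controls.
    print("acquire:", instructions)

send_to_scanner(build_acquisition_instructions(
    {"rotation_deg": (0.0, 12.5, -3.0),
     "translation_mm": (4.0, -2.0, 0.0),
     "scale": 1.0},
    slice_depth_mm=35.0))
```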
  • Embodiments in which data derived from lead image localization is used to control the parameters of a follow image acquisition are particularly useful for follow image acquisition apparatuses such as "open MRI” machines and "intraoperative CT” machines. Such follow image acquisition devices allow efficient access to the patient while scans are in process.
  • Figure 11 outlines the process by which lead images can be used to dictate the orientation parameters of a concurrent MRI or CT scanning process.
  • Figure 11 shows a flowchart of an embodiment in which the acquisition of a follow image is controlled in real time.
  • at a step 400, the relative 3-dimensional location and orientation of objects is ascertained.
  • the system may then calculate the scan acquisition parameters at a step
  • the scan acquisition parameters may include such information as the depth of a slice that is desired to be acquired as the follow image.
  • the system translates the scan acquisition parameters into instructions for directing the real time scanning device to acquire the desired follow image.
  • the follow image may be displayed at a step 406.
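  • One way to picture the Figure 11 flow as a loop is sketched below; only steps 400 and 406 carry reference numerals in the text, and every callable here is a hypothetical placeholder for the localization, parameter calculation, scanner interface and display stages described above:

```python
def run_realtime_follow_acquisition(localize, plan_scan, acquire, display, n_frames=1):
    """Ties together the stages of Figure 11; each argument is a callable
    supplied by the surrounding system (all are hypothetical placeholders)."""
    for _ in range(n_frames):
        pose = localize()               # step 400: relative 3-D location and orientation
        params = plan_scan(pose)        # calculate scan acquisition parameters (e.g. slice depth)
        follow_image = acquire(params)  # translate parameters into scanner instructions and scan
        display(follow_image)           # the acquired follow image is displayed (step 406)

# Trivial stand-ins, just to show the data flow:
run_realtime_follow_acquisition(
    localize=lambda: {"depth_mm": 35.0},
    plan_scan=lambda pose: {"slice_depth_mm": pose["depth_mm"]},
    acquire=lambda p: f"slice acquired at {p['slice_depth_mm']} mm",
    display=print,
)
```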
  • the characteristics of an object as seen within a lead image by a machine vision subsystem of this invention may show a number of parameters including, but not limited to, scale, rotation, and translatory position.
  • slice depth may be designated by both manual and automatic means. Nonetheless, it is not essential that all of these instructions be applied to the follow image in order to gain the benefits of this invention.
  • the scale of the image shown as an overlay to the optical view of the microscope may be at a magnification factor of the lead image, or at a size completely unrelated to that of the lead image.
  • intraoperatively acquired images may be used to control the automated intraoperative editing of follow images that have been previously acquired by another modality.
  • Such real time images may be lead images, or may be independently acquired.
  • changes between an image taken by an endoscope from a designated location, and another image taken at a later intraoperative time are digitally ascertained, and these changes are then mapped onto the previously acquired follow image as an edit.
  • a previously acquired follow image can be modified to simulate one created in real time.
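  • As a schematic sketch only (assuming the endoscope views and the follow image slice have already been co-registered, and using an arbitrary intensity threshold), the change-mapping edit might look like this:

```python
import numpy as np

def edit_follow_image(baseline_view, current_view, follow_slice, change_threshold=30):
    """All three inputs are 2-D arrays of the same shape, already co-registered.

    Pixels that changed by more than the threshold between the two intraoperative
    views are blanked out of the previously acquired follow-image slice."""
    changed = np.abs(current_view.astype(int) - baseline_view.astype(int)) > change_threshold
    edited = follow_slice.copy()
    edited[changed] = 0     # e.g. tissue removed since the follow image was acquired
    return edited
```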
  • real time ultrasound may be mapped onto, for example, a previously acquired MR image.
  • the temporal changes in the lead image view, as provided by a head mounted camera or by an operating microscope optical head, may be used to modify a previously acquired follow image.
  • many techniques in the emerging field of computer-based tissue modeling are expected to be used within the methods herein described. For example, the manner in which tissue deforms in response to a probe pushing against it may be modeled in terms of tissue elasticity and other parameters, so as to make intraoperative follow image edits as accurate as possible.
  • the viewpoint of the lead image acquisition device should very closely approximate the actual point of view of the user. Ideally these two views would be identical.
  • a beam splitter is an optical device, well known in the art, which essentially takes light that enters from one side and divides it equally in two different directions. Hence, anyone looking at either of the two beam splitter outputs would see the same thing. By placing a beam splitter on a see-through head mounted display, for example, one may channel the same view that the user sees with his eyes to a CCD camera or other lead image acquisition device.
  • the hardware for the system may consist of specialized subsystem boards within the chassis of an IBM-compatible PC, such as a dual Pentium Pro system (Intel Corp., Santa Clara, CA).
  • the hard drive and memory should each be at least large enough to hold all data sets being processed, all programs being run, and all correlation information relating lead images to follow image transformation, acquisition, and/or slicing instructions.
  • an Octree (Octree Corporation, Cupertino, CA) 3D graphics board (PCI) and a Cognex Corporation (Mountain View, CA) 5600 machine vision board (ISA), both in a PC.
  • Head-mounted displays, see-through or non-see-through, such as those made by Kaiser Electro-optics (Carlsbad, CA), or by Virtual I/O (Seattle, WA) are well suited to the display purpose, as are digital CCD endoscopes as are known in the industry and operating microscopes such as those made by Carl Zeiss, Inc. (Thornwood, NY).
  • Translucent volume renderings can show multiple depths of a given follow image at once.
  • Volume rendering is a method of medical image display that is well known in the art, and may be accomplished by processing techniques such as ray casting. Volume renderings can be done using Octree hardware and/or software, as well as other commercially available volume rendering hardware and/or software, and may be done in conjunction with slicing. Because volume rendering shows 3 dimensions of data before the eye at once, it can reduce the degree of precision necessary for selecting a slice depth for display.
  • Figures 5A and 5B show an alternative embodiment of a head mounted display/head mounted camera apparatus.
  • an immersive (non-see-through) head mounted display is worn in the "semi-immersive" position (i.e., high enough that the user can see an unobstructed view when looking in the lower margin of his visual field, but sees the display when looking in the upper margin of his visual field).
  • This allows a surgeon, for example, to work on an operation with the unobstructed view to which he is accustomed, while giving the surgeon the ability to see the pertinent computer data by simply glancing upward slightly.
  • the follow image, or surface/follow image composite display may be offset slightly in the vertical plane from its actual location, so as to provide a view corresponding to that which is seen through the unobstructed path below.
  • stereo CCD cameras 110 and 112 are shown at the level of the user's eyes, not the level of the display. Also shown in Figures 5A and 5B is the use of a stereo camera pair for acquiring lead images.
  • the lead image used is actually a stereo image pair; the difference in perspective between the two images helps to discern spatial differences that are more difficult to discern with a single image, by means well known in the art.
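  • The depth cue a stereo pair adds can be written down directly; the sketch below uses the standard similar-triangles relation Z = f·B/d with illustrative baseline, focal length and pixel-pitch values, not those of any particular camera pair:

```python
# Minimal sketch of stereo depth from disparity: a point seen at different
# horizontal positions in the left and right images lies at depth Z = f*B/d.
# All camera parameters below are illustrative assumptions.

def depth_from_disparity(x_left_px, x_right_px, baseline_m=0.065,
                         focal_m=0.008, pixel_pitch_m=6e-6):
    disparity_m = (x_left_px - x_right_px) * pixel_pitch_m
    if disparity_m <= 0:
        raise ValueError("point must have positive disparity (in front of the cameras)")
    return focal_m * baseline_m / disparity_m

# A fiducial imaged 20 pixels apart in the two views:
print(round(depth_from_disparity(640, 620), 3), "m")   # ~4.333 m with these numbers
```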
  • the lead image acquisition and analysis subsystem may be a commercially available optical tracking system such as the field matrix CCD-based Polaris system by Northern Digital, Inc. (Waterloo, Ontario, Canada), or the linear CCD-based tracking systems by Image Guided Technologies (Boulder, Colorado).
  • the localizer camera used takes the form of a stereo camera pair so as to improve the accuracy of localization.
  • Left and right CCD matrices may be placed lateral to each eye, so as to maximally emulate the user's visual perspective without obstructing the user's eyesight; an exemplary embodiment is shown in Figures 5A and 5B.
  • Display means may also include "virtual retinal display” technologies that are known in the art.
  • One advantage of placing localizers along the line of sight of the user's eyes is that the machine vision subsystem's view of the fiducials is not likely to be obstructed without also obstructing the eyesight of the user.
  • Another advantage is the ease with which this enables a user-centric display to be rendered.
  • Surface images may be displayed alone or as part of a surface/depth composite on head mounted display 105, and may be derived from stereo images from stereo lead cameras 110 and 112, or may be derived from a third, centrally located camera 125.
  • Camera 125 provides a monocular image that approximates the view provided by the two eyes of a user.
  • Figure 6 shows several methods for determining in real time the depth of slice that may be extracted from a follow image prior to display.
  • slice depth of the follow image may be cued by the position of the probe in space.
  • the maximum depth of the probe (surface 190) within a patient's body, as computed in real time, may be used as the criterion for slice depth selection.
  • Figure 6 also shows the use of a light emitter (in this case a laser)/detector pair 140.
  • the time required for a pulse emitted by laser 150 to return to the adjacent detector 145 is a direct function of distance to target surface 190.
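  • That relation is simply the round-trip time-of-flight equation, as the short sketch below illustrates:

```python
# Minimal sketch of the time-of-flight relation used by the emitter/detector
# pair 140: the pulse travels to the target and back, so the one-way distance
# is half the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds):
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A pulse returning after 2 nanoseconds corresponds to ~0.30 m to the target.
print(round(distance_from_round_trip(2e-9), 3), "m")
```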
  • if the laser is aimed at the bottom of a surgeon's excavation of a body part (or at any structure of interest, for that matter), that structure will be exposed in the subsequently sliced follow image. In this manner, depth of excavation can be tracked in real time, and accordingly reflected in the manner in which the follow image is displayed.
  • FIG. 6 shows CCD camera 160 which is aimed at target surface 190, for example the bottom of a surgical excavation. Automatic focusing algorithms required to bring the image of the target into maximum sharpness can be used to calculate the distance to target, when supplied with optical characteristics of the lens.
  • a depth finding device 170 is shown in which two laser beams project toward target surface 190.
  • Laser tube 172 sits fixed in angle with respect to the device, while laser tube 174 is adjustable with respect to the device. Adjustments of the trajectory of laser tube 174 may be accomplished by means known in the art such as threaded knob 178, and the trajectory may be monitored by a variety of means known in the art, including position trackers such as linear potentiometer 176.
  • the relative positions of the beam emissions are known by a computer that monitors device 170 via cable 180. If device 170 faces a target surface 190 (e.g., the bottom of a surgical excavation, or a point of interest), and laser tube 174 is adjusted so as to make the beams from laser tubes 174 and 172 converge into a single point, that point of convergence corresponds to a specific, precise distance from device 170. In this manner, the precise depth of slice required of a follow image may be dictated by the depth finding device 170 in real time.
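  • Under the simplifying assumption that the fixed beam is perpendicular to the baseline between the two tubes, the convergence distance follows from the adjustable tube's tilt angle, as sketched below (the baseline and angle values are illustrative only):

```python
import math

# Minimal sketch of the convergence geometry of a two-beam depth finder:
# a fixed beam perpendicular to the baseline and an adjustable beam tilted
# inward by `tilt_deg` meet at distance = baseline / tan(tilt angle).

def convergence_distance(baseline_m, tilt_deg):
    return baseline_m / math.tan(math.radians(tilt_deg))

# Tubes 40 mm apart with the adjustable beam tilted 10 degrees inward:
print(round(convergence_distance(0.040, 10.0), 3), "m")   # ~0.227 m
```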
  • Devices such as 130, 140, 160, and 170 may be mounted at a fixed location by a positionable arm 181 above the surgical field prior to surgery, and registered to that location by means known in the art. Fixation may occur, for example, via a clamp 182.
  • depth measurement devices may be tracked in real time by means described herein, as well as other means known in the art, and their output may be interpreted by the computer to adjust for their real-time spatial location.
  • Figure 7 shows the use of multiple fiducials implanted upon a mobile body part (in this case intestine 205 within abdomen 200).
  • fiducials 210, 215, and 220 in place, even mobile body parts can be tracked in real time as they move about.
  • the three dimensional attitude of that mobile structure may be reconstructed, for example by performing online modification of the follow image.
  • Means for accomplishing this are known in the art, including those techniques used to modify cartoon images on a computer screen in accordance with the motions of a live actor. Such techniques are accordingly readily adaptable to the monitoring of mobile medical structures.
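  • One standard way to recover such a rigid motion from three or more tracked fiducials is a least-squares (Kabsch/Procrustes) fit, sketched below with illustrative coordinates; the resulting rotation and translation could then drive the online modification of the follow image:

```python
import numpy as np

def rigid_transform(reference_pts, current_pts):
    """Least-squares rotation R and translation t mapping reference points onto current points."""
    ref = np.asarray(reference_pts, float)
    cur = np.asarray(current_pts, float)
    ref_c, cur_c = ref - ref.mean(axis=0), cur - cur.mean(axis=0)
    U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cur.mean(axis=0) - R @ ref.mean(axis=0)
    return R, t

# Fiducials (e.g. 210, 215, 220) in the follow-image frame, and as localized now (mm):
ref = [(0.0, 0.0, 0.0), (30.0, 0.0, 0.0), (0.0, 20.0, 0.0)]
cur = [(5.0, 2.0, 0.0), (35.0, 2.0, 0.0), (5.0, 22.0, 0.0)]
R, t = rigid_transform(ref, cur)
print(np.round(R, 3))    # identity rotation for this pure translation
print(np.round(t, 1))    # [5. 2. 0.]
```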
  • fiducial 220 is asymmetric in shape. The polarity of an asymmetrical fiducial marker can help to provide 3D-orientation information from a single fiducial marker.
  • Figure 8 shows fiducial gun 250, which can be used for rapidly and efficiently implanting or attaching fiducial markers 260 and 290 onto surfaces.
  • the fiducial markers 260 and 290 may be held in place by means known in the art including retention prongs 270.
  • the device operates in a manner similar to surgical staple and clamp guns that are known in the art.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Robotics (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention relates to a method and apparatus for obtaining and displaying in real time an image of an object acquired by one modality such that the image corresponds to a line of sight established by another modality. In a preferred embodiment, the method comprises: obtaining a follow image bank of the object by a first imaging modality (34); obtaining a lead image bank by the second imaging modality (32); referencing the lead image bank to the follow image bank (36); obtaining a lead image of the object in real time by the second imaging modality along a lead view (38); comparing the real-time lead image to lead images in the lead image bank by digital image analysis in order to identify a follow image line of sight; transforming the identified follow image so that it corresponds to the scale, rotation and position of the lead image (40, 42); and displaying the transformed follow image (46), the comparing, transforming and displaying steps being carried out substantially simultaneously with the step of obtaining the lead image in real time.
EP98907726A 1997-03-03 1998-02-26 Dispositif et procede de formation d'images Withdrawn EP1011424A1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US39751 1979-05-17
US46809 1987-05-07
US3975197P 1997-03-03 1997-03-03
US4680997P 1997-05-02 1997-05-02
PCT/US1998/004390 WO1998038908A1 (fr) 1997-03-03 1998-02-26 Dispositif et procede de formation d'images

Publications (1)

Publication Number Publication Date
EP1011424A1 true EP1011424A1 (fr) 2000-06-28

Family

ID=26716418

Family Applications (1)

Application Number Title Priority Date Filing Date
EP98907726A Withdrawn EP1011424A1 (fr) 1997-03-03 1998-02-26 Dispositif et procede de formation d'images

Country Status (2)

Country Link
EP (1) EP1011424A1 (fr)
WO (1) WO1998038908A1 (fr)

Families Citing this family (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2652928B1 (fr) 1989-10-05 1994-07-29 Diadix Sa Systeme interactif d'intervention locale a l'interieur d'une zone d'une structure non homogene.
US5592939A (en) 1995-06-14 1997-01-14 Martinelli; Michael A. Method and system for navigating a catheter probe
US6226548B1 (en) 1997-09-24 2001-05-01 Surgical Navigation Technologies, Inc. Percutaneous registration apparatus and method for use in computer-assisted surgical navigation
US6021343A (en) 1997-11-20 2000-02-01 Surgical Navigation Technologies Image guided awl/tap/screwdriver
EP2289423A1 (fr) * 1998-05-14 2011-03-02 David N. Krag Système pour l'encadrement d'un tissu
DE19829224B4 (de) * 1998-06-30 2005-12-15 Brainlab Ag Verfahren zur Lokalisation von Behandlungszielen im Bereich weicher Körperteile
US6482182B1 (en) 1998-09-03 2002-11-19 Surgical Navigation Technologies, Inc. Anchoring system for a brain lead
DE19846687C2 (de) * 1998-10-09 2001-07-26 Auer Dorothee Chirurgische Hilfsvorrichtung zur Verwendung beim Ausführen von medizinischen Eingriffen und Verfahren zum Erzeugen eines Bildes im Rahmen von medizinischen Eingriffen
US6279579B1 (en) 1998-10-23 2001-08-28 Varian Medical Systems, Inc. Method and system for positioning patients for medical treatment procedures
US6973202B2 (en) 1998-10-23 2005-12-06 Varian Medical Systems Technologies, Inc. Single-camera tracking of an object
US6621889B1 (en) 1998-10-23 2003-09-16 Varian Medical Systems, Inc. Method and system for predictive physiological gating of radiation therapy
US6937696B1 (en) 1998-10-23 2005-08-30 Varian Medical Systems Technologies, Inc. Method and system for predictive physiological gating
US6980679B2 (en) 1998-10-23 2005-12-27 Varian Medical System Technologies, Inc. Method and system for monitoring breathing activity of a subject
DE19914455B4 (de) * 1999-03-30 2005-07-14 Siemens Ag Verfahren zur Bestimmung der Bewegung eines Organs oder Therapiegebiets eines Patienten sowie hierfür geeignetes System
US11331150B2 (en) 1999-10-28 2022-05-17 Medtronic Navigation, Inc. Method and apparatus for surgical navigation
US8644907B2 (en) 1999-10-28 2014-02-04 Medtronic Navigaton, Inc. Method and apparatus for surgical navigation
DE10000937B4 (de) * 2000-01-12 2006-02-23 Brainlab Ag Intraoperative Navigationsaktualisierung
US6725080B2 (en) 2000-03-01 2004-04-20 Surgical Navigation Technologies, Inc. Multiple cannula image guided tool for image guided procedures
AU2001240413A1 (en) * 2000-04-10 2001-10-23 2C3D S.A. Medical device for positioning data on intraoperative images
EP1153572B1 (fr) * 2000-05-09 2002-08-07 BrainLAB AG Méthode d'enregistrement des donées d'image d'un patient résultants d'une méthode de navigation, pour des opérations chirurgicales avec des rayons X
US6891518B2 (en) * 2000-10-05 2005-05-10 Siemens Corporate Research, Inc. Augmented reality visualization device
US20020082498A1 (en) * 2000-10-05 2002-06-27 Siemens Corporate Research, Inc. Intra-operative image-guided neurosurgery with augmented reality visualization
AU2002215822A1 (en) 2000-10-23 2002-05-06 Deutsches Krebsforschungszentrum Stiftung Des Offentlichen Rechts Method, device and navigation aid for navigation during medical interventions
AU2002239705A1 (en) 2000-10-27 2002-05-21 The Johns Hopkins University System and method of integrating live video into a contextual background
US6919867B2 (en) * 2001-03-29 2005-07-19 Siemens Corporate Research, Inc. Method and apparatus for augmented reality visualization
US6636757B1 (en) 2001-06-04 2003-10-21 Surgical Navigation Technologies, Inc. Method and apparatus for electromagnetic navigation of a surgical probe near a metal object
US20020193685A1 (en) 2001-06-08 2002-12-19 Calypso Medical, Inc. Guided Radiation Therapy System
US6947786B2 (en) 2002-02-28 2005-09-20 Surgical Navigation Technologies, Inc. Method and apparatus for perspective inversion
DE10210650B4 (de) * 2002-03-11 2005-04-28 Siemens Ag Verfahren zur dreidimensionalen Darstellung eines Untersuchungsbereichs eines Patienten in Form eines 3D-Rekonstruktionsbilds und medizinische Untersuchungs- und/oder Behandlungseinrichtung
US6990368B2 (en) 2002-04-04 2006-01-24 Surgical Navigation Technologies, Inc. Method and apparatus for virtual digital subtraction angiography
US7998062B2 (en) 2004-03-29 2011-08-16 Superdimension, Ltd. Endoscope structures and techniques for navigating to a target in branched structure
US7720522B2 (en) 2003-02-25 2010-05-18 Medtronic, Inc. Fiducial marker devices, tools, and methods
US7620444B2 (en) 2002-10-05 2009-11-17 General Electric Company Systems and methods for improving usability of images for medical applications
US9248003B2 (en) 2002-12-30 2016-02-02 Varian Medical Systems, Inc. Receiver used in marker localization sensing system and tunable to marker frequency
US8355773B2 (en) 2003-01-21 2013-01-15 Aesculap Ag Recording localization device tool positional parameters
US7660623B2 (en) 2003-01-30 2010-02-09 Medtronic Navigation, Inc. Six degree of freedom alignment display for medical procedures
FR2854318B1 (fr) * 2003-05-02 2010-10-22 Perception Raisonnement Action Determination de la position d'un element anatomique
DE602004022432D1 (de) 2003-09-15 2009-09-17 Super Dimension Ltd System aus zubehör zur verwendung mit bronchoskopen
EP2316328B1 (fr) 2003-09-15 2012-05-09 Super Dimension Ltd. Dispositif de fixation à enroulement pour utilisation avec des bronchoscopes
US9623208B2 (en) 2004-01-12 2017-04-18 Varian Medical Systems, Inc. Instruments with location markers and methods for tracking instruments through anatomical passageways
DE102004003381B4 (de) 2004-01-22 2007-02-01 Siemens Ag Verfahren zur Bestimmung der Lage einer Schicht in einem Untersuchungsgebiet, in welcher Schicht eine Schichtbildaufnahme erfolgen soll
US8764725B2 (en) 2004-02-09 2014-07-01 Covidien Lp Directional anchoring mechanism, method and applications thereof
WO2006002396A2 (fr) 2004-06-24 2006-01-05 Calypso Medical Technologies, Inc. Systemes et methodes permettant de traiter le poumon d'un patient par intervention chirurgicale ou radiotherapie guidee
US9586059B2 (en) 2004-07-23 2017-03-07 Varian Medical Systems, Inc. User interface for guided radiation therapy
US8340742B2 (en) 2004-07-23 2012-12-25 Varian Medical Systems, Inc. Integrated radiation therapy systems and methods for treating a target in a patient
US8437449B2 (en) 2004-07-23 2013-05-07 Varian Medical Systems, Inc. Dynamic/adaptive treatment planning for radiation therapy
DE102004046430A1 (de) * 2004-09-24 2006-04-06 Siemens Ag System zur visuellen Situations-bedingten Echtzeit-basierten Unterstützung eines Chirurgen und Echtzeit-basierter Dokumentation und Archivierung der vom Chirurgen während der Operation visuell wahrgenommenen Unterstützungs-basierten Eindrücke
US7833221B2 (en) * 2004-10-22 2010-11-16 Ethicon Endo-Surgery, Inc. System and method for treatment of tissue using the tissue as a fiducial
US9119541B2 (en) 2005-08-30 2015-09-01 Varian Medical Systems, Inc. Eyewear for patient prompting
EP1926520B1 (fr) 2005-09-19 2015-11-11 Varian Medical Systems, Inc. Appareils et procedes permettant d'implanter des objets, tels que des marqueurs d'implantation bronchoscopique, dans les poumons de patients
US9168102B2 (en) 2006-01-18 2015-10-27 Medtronic Navigation, Inc. Method and apparatus for providing a container to a sterile environment
US8150497B2 (en) 2006-09-08 2012-04-03 Medtronic, Inc. System for navigating a planned procedure within a body
US8150498B2 (en) 2006-09-08 2012-04-03 Medtronic, Inc. System for identification of anatomical landmarks
US8160677B2 (en) 2006-09-08 2012-04-17 Medtronic, Inc. Method for identification of anatomical landmarks
US8160676B2 (en) 2006-09-08 2012-04-17 Medtronic, Inc. Method for planning a surgical procedure
US8660635B2 (en) 2006-09-29 2014-02-25 Medtronic, Inc. Method and apparatus for optimizing a computer assisted surgical procedure
EP2036494A3 (fr) * 2007-05-07 2009-04-15 Olympus Medical Systems Corp. Système de guidage médical
JP5335201B2 (ja) * 2007-05-08 2013-11-06 キヤノン株式会社 画像診断装置
FR2917598B1 (fr) * 2007-06-19 2010-04-02 Medtech Plateforme robotisee multi-applicative pour la neurochirurgie et procede de recalage
US8905920B2 (en) 2007-09-27 2014-12-09 Covidien Lp Bronchoscope adapter and method
WO2009122273A2 (fr) 2008-04-03 2009-10-08 Superdimension, Ltd. Système et procédé de détection d'interférence magnétique
WO2009147671A1 (fr) 2008-06-03 2009-12-10 Superdimension Ltd. Procédé d'alignement basé sur des caractéristiques
EP2293720B1 (fr) 2008-06-05 2021-02-24 Varian Medical Systems, Inc. Compensation de mouvements pour imagerie médicale et systèmes et procédés associés
US8218847B2 (en) 2008-06-06 2012-07-10 Superdimension, Ltd. Hybrid registration method
US8932207B2 (en) 2008-07-10 2015-01-13 Covidien Lp Integrated multi-functional endoscopic tool
US10667727B2 (en) 2008-09-05 2020-06-02 Varian Medical Systems, Inc. Systems and methods for determining a state of a patient
WO2010067267A1 (fr) * 2008-12-09 2010-06-17 Philips Intellectual Property & Standards Gmbh Caméra sans fil montée sur la tête et unité d'affichage
DE102009005707A1 (de) * 2009-01-22 2010-07-29 Pohlig Gmbh Geführte Sonografie
US8611984B2 (en) 2009-04-08 2013-12-17 Covidien Lp Locatable catheter
US20130093866A1 (en) * 2010-03-18 2013-04-18 Rigshospitalet Optical motion tracking of an object
EP2372999A1 (fr) * 2010-03-23 2011-10-05 Eliyahu Mashiah Tragbares System zum Filmen, Aufzeichnen und Kommunizieren während einer Sportaktivität
US10582834B2 (en) 2010-06-15 2020-03-10 Covidien Lp Locatable expandable working channel and method
FR2963693B1 (fr) 2010-08-04 2013-05-03 Medtech Procede d'acquisition automatise et assiste de surfaces anatomiques
EP2621578B1 (fr) 2010-10-01 2023-11-29 Varian Medical Systems, Inc. Cathéter d'administration d'un implant, par exemple, implantation bronchoscopique d'un marqueur dans un poumon
US9509924B2 (en) 2011-06-10 2016-11-29 Flir Systems, Inc. Wearable apparatus with integrated infrared imaging module
FR2983059B1 (fr) 2011-11-30 2014-11-28 Medtech Procede assiste par robotique de positionnement d'instrument chirurgical par rapport au corps d'un patient et dispositif de mise en oeuvre.
WO2013184220A2 (fr) * 2012-03-19 2013-12-12 Flir Systems, Inc. Appareil portable à module d'imagerie infrarouge intégré
EP2732788A1 (fr) * 2012-11-19 2014-05-21 Metronor AS Système destiné à permettre le placement de précision d'un implant chez un patient subissant une intervention chirurgicale
JP2016538014A (ja) 2013-10-11 2016-12-08 ソナケアー メディカル,エルエルシー 超音波手術を実施するためのシステム及び方法
US10043284B2 (en) 2014-05-07 2018-08-07 Varian Medical Systems, Inc. Systems and methods for real-time tumor tracking
US9919165B2 (en) 2014-05-07 2018-03-20 Varian Medical Systems, Inc. Systems and methods for fiducial to plan association
US10952593B2 (en) 2014-06-10 2021-03-23 Covidien Lp Bronchoscope adapter
IL236003A (en) 2014-11-30 2016-02-29 Ben-Yishai Rani Model and method for registering a model
US10426555B2 (en) 2015-06-03 2019-10-01 Covidien Lp Medical instrument with sensor for use in a system and method for electromagnetic navigation
US9962134B2 (en) 2015-10-28 2018-05-08 Medtronic Navigation, Inc. Apparatus and method for maintaining image quality while minimizing X-ray dosage of a patient
US10478254B2 (en) 2016-05-16 2019-11-19 Covidien Lp System and method to access lung tissue
US10792106B2 (en) 2016-10-28 2020-10-06 Covidien Lp System for calibrating an electromagnetic navigation system
US10751126B2 (en) 2016-10-28 2020-08-25 Covidien Lp System and method for generating a map for electromagnetic navigation
US10638952B2 (en) 2016-10-28 2020-05-05 Covidien Lp Methods, systems, and computer-readable media for calibrating an electromagnetic navigation system
US10722311B2 (en) 2016-10-28 2020-07-28 Covidien Lp System and method for identifying a location and/or an orientation of an electromagnetic sensor based on a map
US10446931B2 (en) 2016-10-28 2019-10-15 Covidien Lp Electromagnetic navigation antenna assembly and electromagnetic navigation system including the same
US10517505B2 (en) 2016-10-28 2019-12-31 Covidien Lp Systems, methods, and computer-readable media for optimizing an electromagnetic navigation system
US10418705B2 (en) 2016-10-28 2019-09-17 Covidien Lp Electromagnetic navigation antenna assembly and electromagnetic navigation system including the same
US10615500B2 (en) 2016-10-28 2020-04-07 Covidien Lp System and method for designing electromagnetic navigation antenna assemblies
US11660145B2 (en) 2017-08-11 2023-05-30 Mobius Imaging Llc Method and apparatus for attaching a reference marker to a patient
RU2659897C1 (ru) * 2017-10-05 2018-07-04 Общество с ограниченной ответственностью "ЭргоПродакшн" Модуль фотовидеофиксации
US11219489B2 (en) 2017-10-31 2022-01-11 Covidien Lp Devices and systems for providing sensors in parallel with medical tools
EP3977949A1 (fr) * 2020-10-01 2022-04-06 Globus Medical, Inc. Systèmes et procédés pour fixer un réseau de navigation
WO2022153329A1 (fr) * 2021-01-12 2022-07-21 Cloudphysician Healthcare Pvt Ltd Procédé et système d'analyse de données affichées sur des respirateurs

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4343037A (en) * 1979-06-15 1982-08-03 Redifon Simulation Limited Visual display systems of the computer generated image type
US5493595A (en) * 1982-02-24 1996-02-20 Schoolman Scientific Corp. Stereoscopically displayed three dimensional medical imaging
US5531227A (en) * 1994-01-28 1996-07-02 Schneider Medical Technologies, Inc. Imaging device and method
US5531520A (en) * 1994-09-01 1996-07-02 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets including anatomical body data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9838908A1 *

Also Published As

Publication number Publication date
WO1998038908A1 (fr) 1998-09-11

Similar Documents

Publication Publication Date Title
EP1011424A1 (fr) Dispositif et procede de formation d'images
EP0741540B1 (fr) Procede et dispositif d'imagerie
US5531227A (en) Imaging device and method
Colchester et al. Development and preliminary evaluation of VISLAN, a surgical planning and guidance system using intra-operative video imaging
EP3720334B1 (fr) Système et procédé d'assistance à la visualisation durant une procédure
US6690960B2 (en) Video-based surgical targeting system
Bichlmeier et al. The virtual mirror: a new interaction paradigm for augmented reality environments
CN109996511A (zh) 用于引导进程的系统
US6529758B2 (en) Method and apparatus for volumetric image navigation
JP2022507622A (ja) 拡張現実ディスプレイでの光学コードの使用
EP3420413A1 (fr) Procédé et système d'affichage d'images holographiques dans un objet réel
CA2892554A1 (fr) Systeme et procede de validation dynamique et de correction d'enregistrement pour une navigation chirurgicale
Philip et al. Stereo augmented reality in the surgical microscope
US10188468B2 (en) Focused based depth map acquisition
Maurer Jr et al. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom
JPH07136175A (ja) 実時間医用装置および方法
Colchester et al. Craniotomy simulation and guidance using a stereo video based tracking system (VISLAN)
Bichlmeier et al. Evaluation of the virtual mirror as a navigational aid for augmented reality driven minimally invasive procedures
US20230120638A1 (en) Augmented reality soft tissue biopsy and surgery system
US11941765B2 (en) Representation apparatus for displaying a graphical representation of an augmented reality
Adams et al. An optical navigator for brain surgery
Shahidi et al. Volumetric image guidance via a stereotactic endoscope
Vogt et al. Augmented reality system for MR-guided interventions: Phantom studies and first animal test
Zamorano et al. Computer-assisted surgical planning and automation of laser delivery systems
CA3233118A1 (fr) Balayage, ciblage et visualisation anatomiques

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19990910

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20050901