WO2020234409A1 - Intraoperative imaging-based surgical navigation - Google Patents
Intraoperative imaging-based surgical navigation
- Publication number
- WO2020234409A1 (PCT/EP2020/064177)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lung
- medical instrument
- deflated state
- interventional
- controller
Classifications
- A61B34/10: Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B6/12: Arrangements for detecting or locating foreign bodies
- A61B6/032: Transmission computed tomography [CT]
- A61B6/037: Emission tomography
- A61B6/4085: Cone-beams
- A61B6/487: Diagnostic techniques involving generating temporal series of image data involving fluoroscopy
- A61B1/313: Endoscopes for introducing through surgical openings, e.g. laparoscopes
- A61B90/37: Surgical systems with images on a monitor during operation
- A61B2017/00694: Means correcting for movement of or for synchronisation with the body
- A61B2017/00809: Lung operations
- A61B2034/101: Computer-aided simulation of surgical operations
- A61B2034/105: Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107: Visualisation of planned trajectories or target regions
- A61B2034/2051: Electromagnetic tracking systems
- A61B2034/2061: Tracking using shape-sensors, e.g. fiber shape sensors with Bragg gratings
- A61B2034/2065: Tracking using image or pattern recognition
- A61B2090/364: Correlation of different images or relation of image positions in respect to the body
- A61B2090/365: Augmented reality, i.e. correlating a live optical image with another image
- A61B2090/372: Details of monitor hardware
- A61B2090/502: Headgear, e.g. helmet, spectacles
- G06T7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
- G06T2207/10081: Computed x-ray tomography [CT]
- G06T2207/30061: Lung
Definitions
- A known mechanism provides a deformable registration algorithm that calculates a deformation matrix between cone-beam CT images of inflated and deflated lungs in phantom and animal models.
- According to an aspect of the present disclosure, a controller for assisting navigation in an interventional procedure includes a memory that stores instructions, and a processor that executes the instructions.
- When executed by the processor, the instructions cause the controller to implement a process that includes registering coordinate systems of an interventional medical instrument and an intra-operative image of a lung, and calculating a deformation between the lung in the deflated state and the lung in the inflated state.
- The process implemented when the processor executes the instructions also includes applying the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state, and generating an image of the modified three-dimensional model of the lung in the deflated state.
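As a rough illustration of the model-deformation step described above, the sketch below warps the vertices of an inflated-lung surface model with a dense displacement field. This is a minimal sketch under assumed conventions; the function and variable names (deform_model, displacement_zyx, spacing_zyx) are illustrative and do not come from this disclosure.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_model(vertices_zyx, displacement_zyx, spacing_zyx):
    """Warp model vertices (N, 3), given in mm and (z, y, x) order, by a
    dense displacement field of shape (3, Z, Y, X) holding offsets in mm,
    e.g. the output of a deformable registration between the inflated-lung
    and deflated-lung images."""
    spacing = np.asarray(spacing_zyx, dtype=float)
    # Convert vertex positions from mm to fractional voxel indices.
    idx = (vertices_zyx / spacing).T  # shape (3, N)
    # Linearly interpolate each displacement component at the vertices.
    offsets = np.stack(
        [map_coordinates(displacement_zyx[c], idx, order=1) for c in range(3)],
        axis=1)  # shape (N, 3), offsets in mm
    return vertices_zyx + offsets
```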
- According to another aspect of the present disclosure, a controller for assisting navigation in an interventional procedure includes a memory that stores instructions, and a processor that executes the instructions.
- When executed by the processor, the instructions cause the controller to implement a process that includes segmenting a cone-beam computed tomography image of a lung in a deflated state during the interventional procedure to obtain a three-dimensional original model of the lung in the deflated state.
- The process implemented when the processor executes the instructions also includes registering coordinate systems of an interventional medical instrument to the three-dimensional original model of the lung in the deflated state, and generating an image of the original three-dimensional model of the lung in the deflated state.
- According to yet another aspect of the present disclosure, a system for assisting navigation in an interventional procedure includes a cone-beam computed tomography imaging apparatus and a computer.
- The cone-beam computed tomography imaging apparatus generates a cone-beam computed tomography image of a lung in a deflated state.
- The computer includes a controller with a memory that stores instructions and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to implement a process that includes registering coordinate systems of an interventional medical instrument and an intraoperative image of the lung and calculating a deformation between the lung in the deflated state and the lung in the inflated state.
- The process also includes applying the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state, and generating an image of the modified three-dimensional model of the lung in the deflated state.
- FIG. 1 illustrates a known VATS implementation for lung resection.
- FIG. 2A illustrates a method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- FIG. 2B illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- FIG. 3 illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- FIG. 4 illustrates a system for intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- FIG. 5 illustrates a general computer system, on which a method of intraoperative imaging-based surgical navigation can be implemented, in accordance with another representative embodiment.
- Mechanisms for intraoperative imaging-based surgical navigation are useful in mitigating the challenges of VATS or RATS (robotic-assisted thoracoscopic surgery).
- The mechanisms described herein are placed in the context of the surgical setup and workflow, so that components and procedural steps are sequenced to enable productive use of time and resources.
- The embodiments of intraoperative imaging-based surgical navigation described below each typically involve intraoperative cone-beam CT or fluoroscopy image-based registration methods, along with image-based tracking.
- FIG. 2A illustrates a method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- The method starts with imaging at S210.
- The imaging at S210 may be computed tomography and/or positron emission tomography-computed tomography performed prior to surgery.
- The imaging at S210 may be performed at a different time and place, and under the supervision and control of different personnel, than the imaging using the equipment in the system 400 of FIG. 4 described later.
- The imaging at S210 involves imaging the lung in the inflated state using computed tomography prior to an interventional procedure. That is, the imaging at S210 may result in the lung in the inflated state being imaged using computed tomography.
- The imaging at S210 may optionally involve imaging the lung in a partially inflated state using computed tomography, where the partially inflated state may be different than the intraoperative state.
- The method of FIG. 2A continues with pre-operative segmentation at S220.
- An algorithm is used to perform the segmentation and is applied to the images obtained in S210 to produce a three-dimensional (3D) model of the anatomical features (e.g., airways, vessels, fissures, tumor, and/or lymph nodes) of the lung that was imaged at S210.
- Segmentation is a representation of the surface of structures of the anatomical features (e.g., airways, vessels, fissures, tumor, and/or lymph nodes) and consists, for example, of a set of points in three-dimensional (3D) coordinates on the surfaces of the lung, and triangular plane segments defined by connecting neighboring groups of three points, such that the entire structure is covered by a mesh of non-intersecting triangular planes.
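One non-authoritative way to produce the triangulated surface just described is isosurface extraction from a binary segmentation mask, for example with scikit-image's marching cubes, as sketched below; the function name and parameter values are assumptions for illustration.

```python
import numpy as np
from skimage import measure

def surface_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Extract a surface from a binary segmentation volume as a set of 3D
    points (verts) and non-intersecting triangles (faces, index triples)."""
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    return verts, faces
```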
- A three-dimensional model of the lung is obtained by segmenting imagery obtained from cone-beam computed tomography imaging of the lung in the inflated state (as an alternative to S210), imagery obtained from computed tomography imaging of the lung in the inflated state (as in S210), or both.
- At S230, the method of FIG. 2A includes imaging of an inflated lung.
- The image of the inflated lung at S230 may be considered intraoperative and may be taken using computed tomography or cone-beam computed tomography.
- The imaging of the inflated lung at S230 may be with a cone-beam CT imaging apparatus. That is, at the beginning of surgery a cone-beam CT scan of the patient may be completed when the lung is still inflated.
- The cone-beam CT image of the inflated lung can be used subsequently to register the pre-operative image(s) obtained at S210 to the intra-operative state and make any alignment adjustments to the patient’s positioning.
- The imaging at S210 and the imaging at S230 may both be via cone-beam CT even when performed in different places and/or at different times and/or with different cone-beam CT imaging apparatuses.
- The 3D model created at S220 may be updated after the images from S210 and S230 are registered.
- CT images from S210 may be registered with cone-beam CT images from S230, and then the 3D model created at S220 is updated prior to the process described next for S240.
- A rigid transformation may be applied after S230 to the 3D model from S220 to account only for differences in patient position.
- A deformable transformation may be applied after S230 to the 3D model from S220 to account for non-rigid changes in the lung.
- At S240, a scope is inserted into the patient and the lung is deflated.
- The scope may be a thoracoscope.
- The scope is inserted into the chest cavity and may be, but is not necessarily, inserted into the lung.
- The lung is deflated after the scope is inserted at S240.
- At S250, the method of FIG. 2A includes imaging of the deflated lung.
- The imaging of the deflated lung at S250 may be via cone-beam CT and may be performed with the same cone-beam CT imaging apparatus used in S230.
- Alternatively, the imaging at S250 may be visible light imaging.
- The imaging at S250 may involve the scope as the interventional medical instrument, such that the interventional medical instrument is imaged using cone-beam computed tomography during the interventional procedure.
- The imaging of the deflated lung at S250 may also be via one or more X-rays, which may involve the scope as the interventional medical instrument, such that the interventional medical instrument is imaged using X-ray during the interventional procedure.
- At S260, the method of FIG. 2A includes registering the scope to the deflated lung.
- The registration at S260 may involve aligning the coordinate system of the scope inserted into the chest cavity while the lung is deflated and the coordinate system of an intra-operative image of the lung. Registration of coordinate systems may be performed in a variety of ways, such as by identifying and aligning landmarks present in both coordinate systems.
- The intra-operative image of the lung may be the one taken at S230 and may be an intra-operative image taken using computed tomography or cone-beam computed tomography.
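As one hedged sketch of the landmark-based alignment mentioned above, paired landmarks observed in the two coordinate systems can be fit with a least-squares rigid transform (the Kabsch/Procrustes method). The inputs scope_pts and image_pts are hypothetical (N, 3) arrays, not names from this disclosure.

```python
import numpy as np

def rigid_register(scope_pts, image_pts):
    """Least-squares rotation R and translation t such that
    image_pts[i] ~ R @ scope_pts[i] + t for paired landmarks."""
    mu_s, mu_i = scope_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (scope_pts - mu_s).T @ (image_pts - mu_i)   # 3x3 cross-covariance
    U, _S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_i - R @ mu_s
    return R, t
```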
- A deformation matrix may be calculated between the cone-beam CT image of the inflated lung from S230 and the cone-beam CT image of the deflated lung from S250, and applied to the 3D model of the anatomical features which is created at S220 and already updated at or after S230. This results in a new 3D model of the anatomical features in the deflated state.
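The disclosure does not tie the deformation calculation to any particular algorithm; as one possibility, a demons deformable registration between the two cone-beam CT volumes could produce the deformation, sketched below with SimpleITK under illustrative parameter values.

```python
import SimpleITK as sitk

def inflated_to_deflated_transform(deflated_cbct, inflated_cbct):
    """Estimate a dense deformation mapping the inflated-lung cone-beam CT
    onto the deflated-lung cone-beam CT (both sitk.Image volumes)."""
    demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)   # illustrative values only
    demons.SetStandardDeviations(2.0)   # Gaussian smoothing of the field
    field = demons.Execute(
        sitk.Cast(deflated_cbct, sitk.sitkFloat32),
        sitk.Cast(inflated_cbct, sitk.sitkFloat32))
    return sitk.DisplacementFieldTransform(
        sitk.Cast(field, sitk.sitkVectorFloat64))
```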
- The 3D model can be used for anatomical reference and guidance during surgery and may exist as its own feature.
- The deformation matrix may be a comprehensive, all-encompassing deformation matrix which maps from CT to cone-beam CT of the inflated lung to cone-beam CT of the deflated lung.
- Alternatively, the deformation matrix may be applied in two steps, where one deformation matrix is applied for CT to the cone-beam CT of the inflated lung (e.g., after S230 as described above), and then a second deformation matrix is applied for the transformation from the cone-beam CT of the inflated lung to the cone-beam CT of the deflated lung, as in the composition sketch below.
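Treating each deformation as a callable that maps points, the two-step alternative reduces to function composition, as in the minimal sketch below; T_ct_to_inflated and T_inflated_to_deflated are assumed names for the two transforms.

```python
def compose(T_ct_to_inflated, T_inflated_to_deflated):
    """Single transform equivalent to applying the two deformations in sequence."""
    return lambda points: T_inflated_to_deflated(T_ct_to_inflated(points))

# Model points segmented in pre-operative CT coordinates land in the
# deflated-lung coordinate system in one call:
#   deflated_pts = compose(T1, T2)(ct_pts)
```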
- At S270, the method of FIG. 2A includes tracking movement of the scope relative to the lung surface. That is, movement of an interventional medical instrument such as a scope may be tracked visually via the tissue surface of the lung; this may be with a visible light camera (such as the traditional thoracoscope), by hyperspectral imaging, or by near-infrared (NIR) fluorescence imaging, to name a few.
- An interventional medical instrument may alternatively be tracked with external tracking technologies such as electromagnetic tracking using sensors, optical shape sensing (OSS), and other forms of tracking technologies for tracking interventional medical instruments.
- Applying a surface-feature tracking algorithm or different tracking methods to the scope allows the system used to implement S270 to move the 3D model on a monitor as the scope is moved with respect to the lung, as sketched below.
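A concrete but hypothetical realization of such surface-feature tracking is pyramidal Lucas-Kanade optical flow between consecutive scope frames, sketched below with OpenCV; the window size and other parameters are illustrative.

```python
import cv2
import numpy as np

def track_surface_features(prev_frame, next_frame, prev_pts):
    """Track lung-surface feature points between consecutive scope frames.

    prev_pts: (N, 1, 2) float32 corners, e.g. from
    cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                            qualityLevel=0.01, minDistance=7).
    Returns the point pairs that were tracked successfully."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return prev_pts[ok], next_pts[ok]
```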
- A 3D model may be presented on a headset screen or glasses, such as by using augmented reality.
- A 3D model may also be presented as a hologram.
- At S280, the method of FIG. 2A includes augmenting the scope view.
- The 3D model can be overlaid on the scope video feed. Augmenting can be performed at S280 in other ways, such as by highlighting a tumor in the view of the scope by brightness or color, by visually warning of anatomical features that should be avoided, and in other ways that are supplemental to the teachings herein.
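A minimal sketch of the overlay form of augmentation, assuming the 3D model has already been rendered into the scope's camera view as a hypothetical image rendered_model, is a simple alpha blend onto the live frame:

```python
import cv2

def augment_frame(scope_frame, rendered_model, alpha=0.4):
    """Blend a rendering of the 3D model over the live scope frame."""
    return cv2.addWeighted(rendered_model, alpha, scope_frame, 1.0 - alpha, 0)
```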
- At S290, the method of FIG. 2A includes resecting a tumor in the lung, based on intraoperative surgical navigation enabled by the preceding features, functions and/or steps of FIG. 2A.
- The tumor can be resected at S290 using the augmented view of the thoracoscope, and the augmented view helps ensure the surgeon knows where the tumor is located within the lung.
- In FIG. 2A, S270, S280 and S290 are shown partially overlapping in the vertical direction on the page. This reflects that the tracking at S270 may be performed continually before and during the augmenting of the scope view at S280, and both may be performed continually during the resecting of the tumor at S290.
- Although the methods described herein are generally shown as a series of discrete steps performed separately in sequence, some steps in the methods may be performed continually while other steps in the methods are also performed.
- In the embodiments above, the thoracoscope is a white light scope which only “sees” visible colors.
- However, other methods may be employed for tracking movement of the interventional medical device at S270, such as if there are not enough features in the white light view. That is, movement of an interventional medical instrument may also be tracked at S270 based on light emitted in a frequency band outside of a visible region.
- Other methods include use of a hyperspectral camera/scope, a near-infrared or infrared scope, or a fluorescence scope, each of which “sees” the tissue at wavelengths outside of the visible region.
- Alternative (non-optical) imaging modalities such as endoscopic ultrasound may also be incorporated as part of the augmented reality view.
- FIG. 2B illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- The method of FIG. 2B overlaps at the beginning with the method of FIG. 2A, and descriptions of the overlapping parts are not detailed since they may be the same as in FIG. 2A.
- The method again starts with imaging at S210, continues with pre-operative segmentation at S220, and includes imaging of an inflated lung at S230.
- The method of FIG. 2B also includes inserting a scope into the patient and deflating the lung at S240, imaging the deflated lung at S250, and registering the scope to the deflated lung at S260.
- At S285, the deflated lung is imaged again so that additional images are acquired during the interventional procedure, and the 3D model(s) of the deflated lung are updated based on the additional images as movement of the scope is tracked relative to the lung surface at S270 and the scope view is augmented at S280.
- The method in FIG. 2B concludes again with resecting a tumor in the lung, based on intraoperative surgical navigation enabled by the preceding features, functions and/or steps of FIG. 2B.
- The re-imaging of the deflated lung and updating of the 3D model(s) of the deflated lung at S285 may be performed selectively and dynamically intraoperatively in order to improve the intraoperative surgical navigation.
- For example, additional cone-beam CT images or fluoroscopy images can be acquired during surgery to update the 3D models of the deflated lung at S285.
- A sparse 3D reconstruction of the scene can be achieved by taking two or more projections, identifying common image features between those projections, and then reconstructing the 3D positions of only those features.
- Alternatively, a 3D reconstruction of the scene can be achieved by using an image analysis algorithm to automatically find anatomical landmarks to use in registering the image.
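The sparse reconstruction described above can be sketched as classical two-view triangulation; P1 and P2 below are assumed known 3x4 projection matrices for the two X-ray projections, and the matched feature coordinates are hypothetical inputs.

```python
import cv2
import numpy as np

def triangulate_features(P1, P2, pts1, pts2):
    """pts1, pts2: (2, N) float32 matched feature coordinates in the two
    projections. Returns (N, 3) reconstructed 3D positions."""
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous (4, N)
    return (pts4d[:3] / pts4d[3]).T
```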
- In FIG. 2B, the interventional medical instrument is imaged using multiple X-ray projections during the interventional procedure, since the interventional medical instrument is inserted into the patient at S240 and imaging of the deflated lung is performed both at S250 and again at S285.
- In the embodiments of FIGs. 2A and 2B, one intraoperative cone-beam CT scan of the lung in the inflated state and one intraoperative cone-beam CT scan of the lung in the deflated state may be obtained and used.
- Alternatively, a single cone-beam CT scan of the lung in the deflated state can be used.
- In that case, a deformable registration between the cone-beam CT image of the lung in the deflated state and the pre-operative CT image may be required. Workflow is simplified by using only one intraoperative scan, such as the single cone-beam CT scan of the lung in the deflated state, in these alternative embodiments.
- Anatomical structures of interest may have already been segmented from the pre-operative CT image. If so, then these segmentations can be used to guide the registration at S260.
- Otherwise, a second set of segmentations is performed on the cone-beam CT image(s) from S250 and used in the registration at S260.
- In either case, an advantage of simplicity is obtained with a tradeoff of potential loss of accuracy.
- If contrast images are acquired, the additional information provided by the contrast images may be used to update the deflated models at S285 and improve the registration accuracy.
- For example, additional information in contrast images may include vessels shown more clearly.
- FIG. 3 illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- The method of FIG. 3 starts with inserting a scope in the patient and deflating a lung in the patient. That is, in the embodiment of FIG. 3, pre-operative imaging may not be required. Instead, the patient may be prepared for surgery immediately before the surgery and the thoracoscope inserted through a port.
- At S310, the method of FIG. 3 continues with imaging the deflated lung. That is, the lung is collapsed, and a cone-beam CT image is acquired of the deflated lung at S310.
- The imaging at S310 also involves the scope as the interventional medical instrument, such that the interventional medical instrument is imaged using cone-beam computed tomography during the interventional procedure while the lung is in the deflated state.
- Next, the method of FIG. 3 includes segmenting the image(s) of the deflated lung taken at S310.
- Important structures of the lung anatomy, like the vessels, airways, and tumor, are segmented directly from the cone-beam CT image of the deflated lung.
- A 3D model of the anatomy in the deflated state is generated showing the segmentation.
- The scope is then registered to the deflated lung in the method of FIG. 3. Similar to the embodiments of FIGs. 2A and 2B, a pose of the thoracoscope is registered to the 3D model.
- Movement of the scope is tracked relative to the lung surface in the method of FIG. 3. Similar again to the embodiments of FIGs. 2A and 2B, the pose of the thoracoscope can be tracked with respect to the lung surface or otherwise, such as with electromagnetic sensors.
- The tracking at S370 may be based on light emitted in a frequency band within a visible region or based on light emitted in a frequency band outside of a visible region.
- The scope view is augmented in the method of FIG. 3. Similar once again to the embodiments of FIGs. 2A and 2B, the thoracoscope view can be augmented with the 3D model information.
- The method of FIG. 3 again concludes with resecting a tumor in the lung, based on the intraoperative surgical navigation enabled by the preceding features, functions and/or steps of FIG. 3.
- An advantage of the workflow of the embodiment in FIG. 3 is that deformable registration between images from CT and from cone-beam CT is not required, and a cone-beam CT image of the inflated lung is not needed either.
- Another hybrid embodiment is applicable when segmentation of a cone-beam CT image of a deflated lung is difficult.
- In the hybrid embodiment, an intraprocedural cone-beam CT image of an inflated lung is obtained, but again without a pre-operative CT imaging process.
- The segmentation is performed on the cone-beam CT image of the inflated lung, and registration is performed to map the cone-beam CT image segmentation model to the model based on the intraoperative cone-beam CT image of the deflated lung.
- FIG. 4 illustrates a system for intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
- The system 400 of FIG. 4 includes a first medical imaging system 410, a computer 420, a display 425, and a tracked device 450.
- An example of the first medical imaging system 410 is a cone-beam computed tomography imaging system.
- A cone-beam computed tomography imaging system provides three-dimensional (3D) imaging with X-rays in the shape of a cone.
- A cone-beam computed tomography imaging system differs from a computed tomography imaging system.
- A computed tomography imaging system generates X-ray beams in the shape of a rotating fan to capture slices of a limited thickness.
- A cone-beam computed tomography imaging system instead generates the X-ray beams in the shape of a cone.
- Additionally, a patient does not have to advance or move at all in cone-beam CT, whereas a patient advances during a CT procedure.
- The difference between cone-beam CT and CT is not necessarily a matter of simply flipping a switch to activate different modes using the same system; rather, cone-beam CT and CT as described herein may involve imaging by entirely different systems.
- In FIG. 4, the first medical imaging system 410 is typically the cone-beam CT system, and performs imaging such as at S230, S250, S285, and S310.
- The computer 420 may include a controller described herein.
- A controller described herein may include a combination of a memory that stores instructions and a processor that executes the instructions in order to implement the processes described herein.
- A controller may be housed within or linked to a workstation such as the computer 420 or another assembly of one or more computing devices, a display/monitor, and one or more input devices (e.g., a keyboard, joysticks and mouse) in the form of a standalone computing system, a client computer of a server system, a desktop or a tablet.
- The descriptive label for the term “controller” herein facilitates a distinction between controllers as described herein without specifying or implying any additional limitation to the term “controller”.
- The term “controller” broadly encompasses all structural configurations, as understood in the art of the present disclosure and as exemplarily described in the present disclosure, of an application specific main board or an application specific integrated circuit for controlling an application of various principles as described in the present disclosure.
- The structural configuration of the controller may include, but is not limited to, processor(s), computer-usable/computer-readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s) and port(s).
- Although FIG. 4 shows components networked together, two such components may be integrated into a single system.
- For example, the computer 420 may be integrated with the display 425 and/or with the first medical imaging system 410. That is, in some embodiments, functionality attributed to the computer 420 may be implemented by (e.g., performed by) a system that includes the first medical imaging system 410.
- The networked components shown in FIG. 4 may also be spatially distributed, such as by being distributed in different rooms or different buildings, in which case the networked components may be connected via data connections.
- The computer 420 in FIG. 4 may include some or all elements and functionality of the general computer system described below with respect to FIG. 5.
- The computer 420 may include a controller for registering a scope to a cone-beam CT image of a deflated lung, for tracking a scope, and/or for augmenting a view of a scope.
- A process executed by such a controller may include receiving a three-dimensional model of anatomy of a lung that is the subject of an interventional procedure.
- The display 425 may be used to display the three-dimensional models, the cone-beam CT images obtained at S230, S250, S285 and S310, the scope views and the augmented scope views, and other imagery and views described herein.
- Imagery obtained during a medical intervention may be, for example, imagery of an inflated lung, imagery of a deflated lung, and imagery or positional information of the tracked device 450.
- Imagery that may be displayed on a display 425 includes imagery obtained during a medical intervention, imagery of a 3D model of a lung generated based on segmentation, and other visual information described herein.
- The term “display” should be interpreted to include a class of features such as a “display device” or “display unit”, and these terms encompass an output device, or a user interface, adapted for displaying images and/or data.
- A display may output visual, audio, and/or tactile data.
- Examples of a display include, but are not limited to: a computer monitor, a television screen, a touch screen, tactile electronic display, Braille screen, Cathode ray tube (CRT), Storage tube, Bistable display, Electronic paper, Vector display, Flat panel display, Vacuum fluorescent display (VF), Light-emitting diode (LED) displays, Electroluminescent display (ELD), Plasma display panels (PDP), Liquid crystal display (LCD), Organic light-emitting diode displays (OLED), a projector, and Head-mounted display.
- Movement of the tracked device 450 may be tracked using white light, infrared or near-infrared light, electromagnetism, or other tracking technologies such as optical shape sensing. That is, movement of an interventional medical instrument as a tracked device 450 may be tracked either based on light emitted in a frequency band within a visible region or based on light emitted in a frequency band outside of a visible region.
- The tracking of the tracked device 450 results in positions and/or poses of the tracked device 450 being sent to the computer 420.
- The computer 420 processes the positions of the tracked device 450 and the medical images from the first medical imaging system 410 to, for example, perform the registration at S260 and the tracking at S270, and to control the augmentation at S280.
- Using the system 400, a full surgical workflow incurs a minimal number of steps and components to set up, while also providing ease and consistency in performing each step.
- The methods described herein can thus be efficiently executed.
- Several tradeoffs are possible in implementing the intraoperative imaging-based surgical navigation described herein. For example, reliability may be traded with simplicity, so that achieving a reliable workflow via, for example, more user input, can be offset with a simpler workflow via, for example, less user involvement.
- FIG. 5 illustrates a general computer system, on which a method of intraoperative imaging-based surgical navigation can be implemented, in accordance with another representative embodiment.
- The computer system 500 can include a set of instructions that can be executed to cause the computer system 500 to perform any one or more of the methods or computer-based functions disclosed herein.
- The computer system 500 may operate as a standalone device or may be connected, for example, using a network 501, to other computer systems or peripheral devices.
- The computer system 500 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment.
- The computer system 500 can also be implemented as or incorporated into various devices, such as the first medical imaging system 410, the computer 420, a second medical imaging system in the embodiment of FIG. 4 (not shown), a stationary computer, a mobile computer, a personal computer (PC), a laptop computer, a tablet computer, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- The computer system 500 can be incorporated as or in a device that in turn is in an integrated system that includes additional devices.
- The computer system 500 can be implemented using electronic devices that provide voice, video or data communication.
- The computer system 500 includes a processor 510.
- The term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
- A processor for a computer system 500 is tangible and non-transitory. As used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time.
- A processor is an article of manufacture and/or a machine component.
- A processor for a computer system 500 is configured to execute software instructions to perform functions as described in the various embodiments herein.
- A processor for a computer system 500 may be a general-purpose processor or may be part of an application specific integrated circuit (ASIC).
- A processor for a computer system 500 may also be a microprocessor, a microcomputer, a processor chip, a controller, a microcontroller, a digital signal processor (DSP), a state machine, or a programmable logic device.
- A processor for a computer system 500 may also be a logical circuit, including a programmable gate array (PGA) such as a field programmable gate array (FPGA), or another type of circuit that includes discrete gate and/or transistor logic.
- A processor for a computer system 500 may be a central processing unit (CPU), a graphics processing unit (GPU), or both. Additionally, any processor described herein may include multiple processors, parallel processors, or both. Multiple processors may be included in, or coupled to, a single device or multiple devices.
- A “processor” as used herein encompasses an electronic component which is able to execute a program or machine executable instruction.
- References to a computing device comprising “a processor” should be interpreted as possibly containing more than one processor or processing core.
- The processor may, for instance, be a multi-core processor.
- A processor may also refer to a collection of processors within a single computer system or distributed amongst multiple computer systems.
- The term computing device should also be interpreted to possibly refer to a collection or network of computing devices each including a processor or processors. Many programs have instructions performed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
- The computer system 500 may include a main memory 520 and a static memory 530, where memories in the computer system 500 may communicate with each other via a bus 508.
- Memories described herein are tangible storage mediums that can store data and executable instructions and are non-transitory during the time instructions are stored therein.
- Again, as used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period.
- The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time.
- A memory described herein is an article of manufacture and/or machine component.
- Memories described herein are computer-readable mediums from which data and executable instructions can be read by a computer.
- Memories as described herein may be random access memory (RAM), read only memory (ROM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, tape, compact disk read only memory (CD-ROM), digital versatile disk (DVD), floppy disk, Blu-ray disk, or any other form of storage medium known in the art.
- Memories may be volatile or non-volatile, secure and/or encrypted, unsecure and/or unencrypted.
- “Memory” is an example of a computer-readable storage medium.
- Computer memory is any memory which is directly accessible to a processor. Examples of computer memory include, but are not limited to, RAM memory, registers, and register files. References to “computer memory” or “memory” should be interpreted as possibly being multiple memories. The memory may, for instance, be multiple memories within the same computer system. The memory may also be multiple memories distributed amongst multiple computer systems or computing devices.
- The computer system 500 may further include a video display unit 550, such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a flat panel display, a solid-state display, or a cathode ray tube (CRT). Additionally, the computer system 500 may include an input device 560, such as a keyboard/virtual keyboard or touch-sensitive input screen or speech input with speech recognition, and a cursor control device 570, such as a mouse or touch-sensitive input screen or pad. The computer system 500 can also include a disk drive unit 580, a signal generation device 590, such as a speaker or remote control, and a network interface device 540.
- The disk drive unit 580 may include a computer-readable medium 582 in which one or more sets of instructions 584, e.g. software, can be embedded. Sets of instructions 584 can be read from the computer-readable medium 582. Further, the instructions 584, when executed by a processor, can be used to perform one or more of the methods and processes as described herein. In an embodiment, the instructions 584 may reside completely, or at least partially, within the main memory 520, the static memory 530, and/or within the processor 510 during execution by the computer system 500.
- Dedicated hardware implementations, such as application-specific integrated circuits (ASICs), programmable logic arrays and other hardware components, can be constructed to implement one or more of the methods described herein.
- One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules. Accordingly, the present disclosure encompasses software, firmware, and hardware implementations. Nothing in the present application should be interpreted as being implemented or implementable solely with software and not hardware, such as a tangible non-transitory processor and/or memory.
- The methods described herein may be implemented using a hardware computer system that executes software programs. Further, in an exemplary, non-limiting embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Virtual computer system processing can be constructed to implement one or more of the methods or functionalities as described herein, and a processor described herein may be used to support a virtual processing environment.
- The present disclosure contemplates a computer-readable medium 582 that includes instructions 584 or receives and executes instructions 584 responsive to a propagated signal, so that a device connected to a network 501 can communicate voice, video or data over the network 501. Further, the instructions 584 may be transmitted or received over the network 501 via the network interface device 540.
- Intraoperative imaging-based surgical navigation enables a surgeon to navigate to a lung tumor without the use of physical markers, so that markers do not have to be placed well before or even immediately before a lung tumor is resected in a surgery.
- Intraoperative imaging-based surgical navigation can be used with minimally invasive surgery and result in more complete and more accurate removal of lung tumors, reduced requirements for follow-up surgeries and subsequent radiation/chemotherapy, or avoidance of recurrence.
- Intraoperative imaging-based surgical navigation can also help avoid removing healthy tissue, which helps avoid compromising lung function and/or prolonging recovery times.
- Avoiding or reducing the placement of physical markers by using intraoperative imaging-based surgical navigation can avoid additional complications and hospital/patient burden.
- The representative embodiments described above help alleviate some of the challenges described herein for lung tumor resections by providing improved 3D guidance in procedures involving resecting a tumor from the deflated lung.
- The representative embodiments described herein can be used to provide surgical workflows with intra-operative imaging, performed without use of physical markers being placed in the tumor.
- Although intraoperative imaging-based surgical navigation has been described with reference to several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of intraoperative imaging-based surgical navigation in its aspects. Although intraoperative imaging-based surgical navigation has been described with reference to particular means, materials and embodiments, intraoperative imaging-based surgical navigation is not intended to be limited to the particulars disclosed; rather, intraoperative imaging-based surgical navigation extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims.
Abstract
A controller (420) for assisting navigation in an interventional procedure includes a memory (520) that stores instructions, and a processor (510) that executes the instructions. When executed by the processor (510), the instructions cause the controller to implement a process that includes registering (S260) coordinate systems of an interventional medical instrument (450) and an intra-operative image of a lung and calculating (S260) a deformation between the lung in the deflated state and the lung in the inflated state. The process also includes applying (S260) the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state and generating (S280) an image of the modified three-dimensional model of the lung in the deflated state.
Description
INTRAOPERATIVE IMAGING-BASED SURGICAL NAVIGATION
BACKGROUND
[001] Lung cancer is a deadly form of cancer, with surgery being the preferred treatment for early-stage tumors. Historically, the most invasive form of surgery is open surgery, in which the chest is split open to expose a large portion of the lung. In open surgery, surgical tools such as scalpels are inserted through a large opening in the thorax and used to remove the tumor. Open surgical techniques allow physical access for palpation, i.e., sensing the tumors by touch.
[002] In the past, tumors were large enough to be sensed by touch, so surgeons could find them even though they were invisible to the eye. In recent years, both detection of embedded lung tumors and techniques for surgically removing (resecting) lung tumors have improved. In particular, recent years have seen the emergence of a minimally invasive technique for resecting lung tumors called video-assisted thoracoscopic surgery (VATS), as well as the growth of lung cancer screening programs, which tend to identify tumor nodules at an earlier stage, when they are smaller and more difficult to discern by touch.
[003] VATS was developed to provide a less invasive approach to lung tumor resection. In VATS, a small camera is inserted into the chest cavity through a small port (i.e., a small hole or incision) and the surgical instruments are inserted through the same port or other small ports. However, palpation to sense the tumors by touch is more difficult under VATS due to constrained access and the lack of haptic feedback, and the entire resection is done using the camera view. FIG. 1 illustrates a known VATS implementation for lung resection. In FIG. 1, a thoracoscope or a small camera stick is inserted through the rib cage of a patient P as one of the instruments. Vision that is otherwise occluded is restored via the thoracoscope or small camera. In FIG. 1, instrument #1 and instrument #2 are separately inserted into the patient P via two separate small incisions to perform the resection. In recent years, robotic surgery has emerged as a minimally invasive approach similar to and competitive with VATS.
[004] Nevertheless, three major challenges are still encountered in lung surgery today, regardless of the type of lung surgery being performed. First, the surgeon determines the location of the tumor based on a pre-operative CT scan acquired with the lung fully inflated, well before surgery; when the lung is collapsed during surgery, the three-dimensional (3D) orientation of the lung, and hence the location of the tumor, will not match the images from the pre-operative CT scan. Second, the lung is complex, with many blood vessels and airways which have to be carefully dissected and addressed before the tumor and any feeding airways or vessels are removed. Third, since small, non-palpable tumors are very hard to locate in the lung, especially with VATS or robotic surgery, extra healthy lung tissue may be removed to prevent the possibility of leaving tumor tissue behind, and this further compromises lung function.
[005] To overcome the challenges noted above, surgeons and research groups have investigated ways of improving the surgical workflow to better guide tumor resection. This can be done by implanting dyes or markers into the tumor or with better imaging techniques.
[006] While intra-operative imaging is rarely used today, in one known clinical trial, intra-operative cone-beam CT is used for a needle-guided insertion of a marker. The marker is placed in the center of the tumor and a string/wire comes out to the surface of the lung. The
thoracoscope is then inserted and the traditional VATS procedure is completed with the surgeon following the wire to the marker at the center of the tumor. Visible and fluorescent dyes can also be injected into the tumor to serve as a visual marker for the surgeon as the tissue is dissected. Another known mechanism provides a deformable registration algorithm that calculates a deformation matrix between cone-beam CT images of inflated and deflated lungs in phantom and animal models.
[007] Nevertheless, the benefits of minimally invasive surgery and earlier detection are not yet enough to precisely locate and resect lung tumors today with safe margins while sparing healthy tissue. Additional investigation now leads to the intraoperative imaging-based surgical navigation described herein.
SUMMARY
[008] According to an aspect of the present disclosure, a controller for assisting navigation in an interventional procedure includes a memory that stores instructions, and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to implement a process that includes registering coordinate systems of an interventional medical instrument and an intra-operative image of a lung; and calculating a deformation between the
lung in a deflated state and the lung in an inflated state. The process implemented when the processor executes the instructions also includes applying the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state; and generating an image of the modified three-dimensional model of the lung in the deflated state.
[009] According to another aspect of the present disclosure, a controller for assisting navigation in an interventional procedure includes a memory that stores instructions, and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to implement a process that includes segmenting a cone-beam computed tomography image of a lung in a deflated state during the interventional procedure to obtain a three-dimensional original model of the lung in the deflated state. The process implemented when the processor executes the instructions also includes registering coordinate systems of an interventional medical instrument to the three-dimensional original model of the lung in the deflated state; and generating an image of the original three-dimensional model of the lung in the deflated state.
[010] According to still another aspect of the present disclosure, a system for assisting navigation in an interventional procedure includes a cone-beam computed tomography imaging apparatus and a computer. The cone-beam computed tomography imaging apparatus generates a cone-beam computed tomography image of a lung in a deflated state. The computer includes a controller with a memory that stores instructions and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to implement a process that includes registering coordinate systems of an interventional medical instrument and an intraoperative image of the lung and calculating a deformation between the lung in the deflated state and the lung in an inflated state. The process also includes applying the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state; and generating an image of the modified three-dimensional model of the lung in the deflated state.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The example embodiments are best understood from the following detailed description
when read with the accompanying drawing figures. It is emphasized that the various features are not necessarily drawn to scale. In fact, the dimensions may be arbitrarily increased or decreased for clarity of discussion. Wherever applicable and practical, like reference numerals refer to like elements.
[012] FIG. 1 illustrates a known VATS implementation for lung resection.
[013] FIG. 2A illustrates a method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[014] FIG. 2B illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[015] FIG. 3 illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[016] FIG. 4 illustrates a system for intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[017] FIG. 5 illustrates a general computer system, on which a method of intraoperative imaging-based surgical navigation can be implemented, in accordance with another representative embodiment.
DETAILED DESCRIPTION
[018] In the following detailed description, for purposes of explanation and not limitation, representative embodiments disclosing specific details are set forth in order to provide a thorough understanding of an embodiment according to the present teachings. Descriptions of known systems, devices, materials, methods of operation and methods of manufacture may be omitted so as to avoid obscuring the description of the representative embodiments. Nonetheless, systems, devices, materials and methods that are within the purview of one of ordinary skill in the art are within the scope of the present teachings and may be used in accordance with the representative embodiments. It is to be understood that the terminology used herein is for purposes of describing particular embodiments only, and is not intended to be limiting. The defined terms are in addition to the technical and scientific meanings of the defined terms as commonly understood and accepted in the technical field of the present teachings.
[019] It will be understood that, although the terms first, second, third etc. may be used herein to describe various elements or components, these elements or components should not be limited by these terms. These terms are only used to distinguish one element or component from another element or component. Thus, a first element or component discussed below could be termed a second element or component without departing from the teachings of the inventive concept.
[020] The terminology used herein is for purposes of describing particular embodiments only and is not intended to be limiting. As used in the specification and appended claims, the singular forms of terms ‘a’, ‘an’ and ‘the’ are intended to include both singular and plural forms, unless the context clearly dictates otherwise. Additionally, the terms "comprises", and/or "comprising," and/or similar terms when used in this specification, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
[021] Unless otherwise noted, when an element or component is said to be “connected to”, “coupled to”, or “adjacent to” another element or component, it will be understood that the element or component can be directly connected or coupled to the other element or component, or intervening elements or components may be present. That is, these and similar terms encompass cases where one or more intermediate elements or components may be employed to connect two elements or components. However, when an element or component is said to be “directly connected” to another element or component, this encompasses only cases where the two elements or components are connected to each other without any intermediate or intervening elements or components.
[022] In view of the foregoing, the present disclosure, through one or more of its various aspects, embodiments and/or specific features or sub-components, is thus intended to bring out one or more of the advantages as specifically noted below. For purposes of explanation and not limitation, example embodiments disclosing specific details are set forth in order to provide a thorough understanding of an embodiment according to the present teachings. However, other embodiments consistent with the present disclosure that depart from specific details disclosed herein remain within the scope of the appended claims. Moreover, descriptions of well-known apparatuses and methods may be omitted so as to not obscure the description of the example
embodiments. Such methods and apparatuses are within the scope of the present disclosure.
[023] As described herein, mechanisms for intraoperative imaging-based surgical navigation are useful in mitigating the challenges of VATS or RATS (robot-assisted thoracoscopic surgery). The mechanisms described herein are placed in the context of the surgical setup and workflow, so that components and procedural steps are sequenced to enable productive use of time and resources. The embodiments of intraoperative imaging-based surgical navigation described below each typically involve intraoperative cone-beam CT or fluoroscopy image-based registration methods, along with image-based tracking.
[024] FIG. 2A illustrates a method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[025] In FIG. 2A, the method starts with imaging at S210. The imaging at S210 may be computed tomography and/or positron emission tomography-computed tomography performed prior to surgery. The imaging at S210 may be performed at a different time and place, and under the supervision and control of different personnel, than the imaging using the equipment in the system 400 of FIG. 4 described later. The imaging at S210 involves imaging the lung in the inflated state using computed tomography prior to an interventional procedure. The imaging at S210 may optionally also involve imaging the lung in a partially inflated state using computed tomography, where the partially inflated state may be different from the intraoperative state.
[026] Next, the method of FIG. 2A continues with pre-operative segmentation at S220. A segmentation algorithm is applied to the images obtained at S210 to produce a three-dimensional (3D) model of the anatomical features (e.g., airways, vessels, fissures, tumor, and/or lymph nodes) of the lung that was imaged at S210. A segmentation is a representation of the surfaces of the anatomical features (e.g., airways, vessels, fissures, tumor and/or lymph nodes) and consists, for example, of a set of points in three-dimensional (3D) coordinates on the surfaces of the lung, together with triangular plane segments defined by connecting neighboring groups of three points, such that the entire structure is covered by a mesh of non-intersecting triangular planes. A three-dimensional model of the lung is thus obtained by segmenting imagery obtained via cone-beam computed tomography imaging of the lung in the inflated state (in an alternative to S210) and/or imagery obtained via computed tomography imaging of the lung in the inflated state (as at S210).
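As an illustrative aside (not part of the disclosure), a surface mesh of this kind can be extracted from a binary segmentation mask with the marching cubes algorithm. The Python sketch below assumes a hypothetical binary mask and voxel spacing as inputs; the disclosure does not name a particular meshing algorithm.

    # A minimal sketch, assuming a binary segmentation mask of one
    # anatomical structure and known voxel spacing in mm.
    import numpy as np
    from skimage import measure

    def surface_model(binary_mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
        """Return (N, 3) vertices and (M, 3) triangular faces covering
        the structure with a mesh of non-intersecting triangles."""
        verts, faces, _normals, _values = measure.marching_cubes(
            binary_mask.astype(np.float32), level=0.5, spacing=spacing)
        return verts, faces

Running this once per structure (airways, vessels, fissures, tumor) yields exactly the point-and-triangle representation described above.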
[027] At S230, the method of FIG. 2A includes imaging of an inflated lung. The image of the inflated lung at S230 may be considered intraoperative and may be an intraoperative image taken using computed tomography or cone-beam computed tomography. The imaging of the inflated lung at S230 may be with a cone-beam CT imaging apparatus. That is, at the beginning of surgery a cone-beam CT scan of the patient may be completed when the lung is still inflated. The cone-beam CT image of the inflated lung can be used subsequently to register the pre-operative image(s) obtained at S210 to the intra-operative state and make any alignment adjustments to the patient’s positioning. To be clear, the imaging at S210 and the imaging at S230 may both be via cone-beam CT even when performed in different places and/or at different times and/or with different cone-beam CT imaging apparatuses.
[028] In one or more embodiments, the 3D model created at S220 may be updated after the images from S210 and S230 are registered. For example, CT images from S210 may be registered with cone-beam CT images from S230, and then the 3D model created at S220 is updated prior to the process described next for S240. For example, a rigid transformation may be applied after S230 to the 3D model from S220 to account only for differences in patient position. In another example, a deformable transformation may be applied after S230 to the 3D model from S220 to account for non-rigid changes in the lung.
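For the rigid case, updating the model amounts to mapping its vertices through the recovered transform. A minimal sketch follows, assuming a 4x4 homogeneous transform T produced by the CT to cone-beam CT registration; the function and its inputs are illustrative only.

    import numpy as np

    def apply_rigid(verts: np.ndarray, T: np.ndarray) -> np.ndarray:
        """Map (N, 3) model vertices through a 4x4 homogeneous transform."""
        homog = np.hstack([verts, np.ones((len(verts), 1))])  # (N, 4)
        return (homog @ T.T)[:, :3]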
[029] At S240, a scope is inserted into the patient and the lung is deflated. The scope may be a thoracoscope. The scope is inserted into the chest cavity and may be, but is not necessarily, inserted into the lung. The lung is deflated after the scope is inserted in S240.
[030] At S250, the method of FIG. 2A includes imaging of the deflated lung. The imaging of the deflated lung at S250 may be via cone-beam CT and may be performed with the same cone-beam CT imaging apparatus used in S230. Alternatively, the imaging at S250 may be visible light imaging. When the imaging at S250 is via cone-beam CT, the imaging at S250 may involve the scope as the interventional medical instrument, such that the interventional medical instrument is imaged using cone-beam computed tomography during the interventional procedure. In another alternative, the imaging of the deflated lung at S250 may be via one or more X-rays, which may involve the scope as the interventional medical instrument, such that
the interventional medical instrument is imaged using X-ray during the interventional procedure.
[031] At S260, the method of FIG. 2A includes registering the scope to the deflated lung. The registration at S260 may involve aligning the coordinate system of the scope inserted into the chest cavity while the lung is deflated and the coordinate system of an intra-operative image of the lung. Registration of coordinate systems may be performed in a variety of ways, such as by identifying and aligning landmarks present in both coordinate systems. The intra-operative image of the lung may be that taken at S230 and may be an intra-operative image taken using computed tomography or cone-beam computed tomography.
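One common way to align coordinate systems from matched landmarks is a least-squares rigid fit computed with a singular value decomposition. The disclosure does not prescribe an algorithm, so the following Python sketch is illustrative only; p and q are assumed (N, 3) arrays of corresponding landmark positions in the two coordinate systems.

    import numpy as np

    def register_landmarks(p: np.ndarray, q: np.ndarray):
        """Find rotation R (3x3) and translation t (3,) with R @ p_i + t ~= q_i."""
        p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
        H = (p - p_bar).T @ (q - q_bar)            # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = q_bar - R @ p_bar
        return R, t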
[032] A deformation matrix may be calculated between the cone-beam CT image of the inflated lung from S230 and the cone-beam CT image of the deflated lung from S250 and applied to the 3D model of the anatomical features which is created at S220 and already updated at or after S230. This results in a new 3D model of the anatomical features in the deflated state. The 3D model can be used for anatomical reference and guidance during surgery and may exist as its own feature. The deformation matrix may be a single comprehensive deformation matrix covering the chain from CT to cone-beam CT of the inflated lung to cone-beam CT of the deflated lung. Alternatively, the deformation may be applied in two steps, where one deformation matrix is applied for CT to the cone-beam CT of the inflated lung (e.g., after S230 as described above), and then a second deformation matrix is applied for the transformation from the cone-beam CT of the inflated lung to the cone-beam CT of the deflated lung.
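In implementations, the deformation recovered by deformable registration is often stored as a dense displacement field rather than a single matrix; applying it to the model then means sampling the field at each vertex. The sketch below assumes a per-axis displacement field on the image voxel grid, with vertex axis order matching the field axis order; both assumptions are illustrative, not details from the disclosure.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def deform_vertices(verts, disp_field, spacing=(1.0, 1.0, 1.0)):
        """verts: (N, 3) positions in mm; disp_field: (3, D0, D1, D2)
        per-axis displacements in mm on the image voxel grid."""
        idx = (verts / np.asarray(spacing)).T      # (3, N) voxel coordinates
        shift = np.stack(
            [map_coordinates(disp_field[a], idx, order=1) for a in range(3)],
            axis=1)                                # (N, 3) trilinear samples
        return verts + shift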
[033] At S270, the method of FIG. 2A includes tracking movement of the scope relative to the lung surface. That is, movement of an interventional medical instrument such as a scope may be tracked visually via the tissue surface of the lung; this may be with a visible light camera (such as the traditional thoracoscope), by hyperspectral imaging, or by near-infrared (NIR) fluorescence imaging, to name a few options. An interventional medical instrument may alternatively be tracked with external tracking technologies such as electromagnetic tracking using sensors, optical tracking using optical shape sensing (OSS), and other forms of tracking technologies for tracking interventional medical instruments. Applying a surface-feature tracking algorithm or other tracking methods to the scope allows the system used to implement S270 to move the 3D model on a monitor as the scope is moved with respect to the lung. In other embodiments, a 3D model may be presented on a headset screen or glasses, such as by using augmented reality.
In yet other embodiments, a 3D model may be presented as a hologram.
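To illustrate what a surface-feature tracking algorithm at S270 might look like, the Python sketch below matches ORB keypoints between consecutive grayscale scope frames with OpenCV; the matched pairs could then drive a pose update for the 3D model. Neither ORB nor OpenCV is named in the disclosure, so this pairing is an assumption for illustration.

    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def track_surface(prev_gray, curr_gray):
        """Return matched (previous, current) keypoint position pairs."""
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return []
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]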
[034] At S280, the method of FIG. 2A includes augmenting the scope view. For example, because the thoracoscope is registered to the cone-beam CT images at S260, and hence to the 3D model of the deflated anatomical features, the 3D model can be overlaid on the scope video feed. Augmenting can be performed at S280 in other ways, such as by highlighting a tumor in the view of the scope by brightness or color, by visually warning of anatomical features that should be avoided, and in other ways that are supplemental to the teachings herein.
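As one hedged illustration of the overlay step, the sketch below projects model vertices into the scope image with a pinhole camera model and blends them over the frame. The intrinsic matrix K, distortion coefficients dist, and model-to-camera pose (rvec, tvec) are assumed to come from the registration and tracking steps; these names do not appear in the disclosure.

    import cv2
    import numpy as np

    def augment_frame(frame, verts, rvec, tvec, K, dist, color=(0, 255, 0)):
        """Render projected model vertices translucently over a BGR frame."""
        pts, _ = cv2.projectPoints(verts.astype(np.float32), rvec, tvec, K, dist)
        overlay = frame.copy()
        h, w = frame.shape[:2]
        for u, v in pts.reshape(-1, 2).astype(int):
            if 0 <= u < w and 0 <= v < h:
                cv2.circle(overlay, (u, v), 1, color, -1)
        return cv2.addWeighted(overlay, 0.4, frame, 0.6, 0)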
[035] At S290, the method of FIG. 2A includes resecting a tumor in the lung, based on intraoperative surgical navigation enabled by the preceding features, functions and/or steps of FIG. 2A. The tumor can be resected at S290 using the augmented view of the thoracoscope and the augmented view helps ensure the surgeon knows where the tumor is located within the lung.
[036] In FIG. 2A, S270, S280 and S290 are shown partially overlapping in the vertical direction on the page. This reflects that the tracking at S270 may be performed continually before and during the augmenting of the scope view at S280, and both may be performed continually during the resecting of the tumor at S290. In other words, while methods described herein are generally shown as a series of discrete steps performed separately in sequence, some steps in the methods may be performed continually while other steps in the methods are also performed.
[037] The explanation for the tracking at S270 above assumes that the thoracoscope is a white light scope which only “sees” visible colors. However, other methods may be employed for tracking movement of the interventional medical device at S270, such as if there are not enough features in the white light view. That is, movement of an interventional medical instrument may also be tracked at S270 based on light emitted in a frequency band outside of a visible region. Other methods include use of a hyperspectral camera/scope, a near-infrared or infrared scope, or a fluorescence scope, each of which “sees” the tissue at wavelengths outside of the visible region. Alternative (non-optical) imaging modalities such as endoscopic ultrasound may also be incorporated as part of the augmented reality view.
[038] FIG. 2B illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[039] The method in FIG. 2B overlaps at the beginning with the method of FIG. 2A, and
descriptions of the overlapping parts are not detailed since they may be the same as in FIG. 2A. In FIG. 2B, the method again starts with imaging at S210, continues with pre-operative segmentation at S220, and includes imaging of an inflated lung at S230. The method of FIG. 2B also includes inserting a scope into the patient and deflating the lung at S240, imaging the deflated lung at S250, and registering the scope to the deflated lung at S260.
[040] However, in the method of FIG. 2B, the deflated lung is imaged again so that additional images are acquired during the interventional procedure, and the 3D model(s) of the deflated lung is updated at S285 based on the additional images as movement of the scope is tracked relative to the lung surface at S270 and the scope view is augmented at S280. The method in FIG. 2B concludes again with resecting a tumor in the lung, based on intraoperative surgical navigation enabled by the preceding features, functions and/or steps of FIG. 2B. However, the re-imaging of the deflated lung and updating of the 3D model(s) of the deflated lung at S285 may be performed selectively and dynamically intraoperatively in order to improve the intraoperative surgical navigation.
[041] In the embodiment of FIG. 2B, additional cone-beam CT images or fluoroscopy images can be acquired during surgery to update the 3D models of the deflated lung at S285. In the case of fluoroscopy, a sparse 3D reconstruction of the scene can be achieved by taking two or more projections, identifying common image features between those projections, and then reconstructing the 3D positions of only those features. Alternatively, a 3D reconstruction of the scene can be achieved by using an image analysis algorithm to automatically find anatomical landmarks to use in registering the image. As a result of the additional imaging at S285, the interventional medical instrument is imaged using multiple x-ray projections during the interventional procedure, since the interventional medical instrument is inserted into the patient at S240 and imaging of the deflated lung is performed both at S250 and again at S285.
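A minimal sketch of that sparse reconstruction is shown below: features matched between two calibrated X-ray projections are triangulated back to 3D. The 3x4 projection matrices P1 and P2 for the two C-arm poses are assumed known from calibration; the disclosure does not specify how the projections are calibrated.

    import cv2
    import numpy as np

    def triangulate(P1, P2, pts1, pts2):
        """pts1, pts2: (N, 2) matched feature positions; returns (N, 3)."""
        X = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(float),
                                  pts2.T.astype(float))  # (4, N) homogeneous
        return (X[:3] / X[3]).T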
[042] In the embodiments of FIGs. 2A and 2B, one intraoperative cone-beam CT scan of the lung in the inflated state and one intraoperative cone-beam CT scan of the lung in the deflated state may be obtained and used. In some embodiments that are alternative to those in FIGs. 2A and 2B, rather than having two intraoperative cone-beam CT scans, a single cone-beam CT scan of the lung in the deflated state can be used. In these embodiments, a deformable registration between the cone-beam CT image of the lung in the deflated state and the pre-operative CT
image may be required. Workflow is simplified by using only one intraoperative scan such as the single cone-beam CT scan of the lung in the deflated state in these alternative embodiments.
[043] In some embodiments, when performing the deformable registration, anatomical structures of interest may have already been segmented from the pre-operative CT image. If so, then these segmentations can be used to guide the registration at S260. In one embodiment, a second set of segmentations is performed on the cone-beam CT image(s) from S250 and used in the registration at S260. Alternatively, it is still possible to register the cone-beam CT image(s) from S230 and/or S250 with the pre-operative CT image(s) from S210 without the need for segmentation in either imaging step. In these alternative embodiments, an advantage of simplicity is obtained with a tradeoff of potential loss of accuracy.
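The segmentation-free alternative is typically an intensity-based deformable registration. The sketch below shows one way this could look using SimpleITK, with a B-spline free-form deformation optimized under mutual information; the file names, control-grid size, and optimizer settings are illustrative assumptions rather than details from the disclosure.

    import SimpleITK as sitk

    # Hypothetical inputs: the intra-operative cone-beam CT of the
    # deflated lung (fixed) and the pre-operative CT (moving).
    fixed = sitk.ReadImage("cbct_deflated.mha", sitk.sitkFloat32)
    moving = sitk.ReadImage("ct_preop.mha", sitk.sitkFloat32)

    # Free-form deformation parameterized by a coarse B-spline grid.
    tx = sitk.BSplineTransformInitializer(fixed,
                                          transformDomainMeshSize=[8, 8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                             numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(tx, inPlace=True)

    deform = reg.Execute(fixed, moving)  # transform from fixed to moving space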
[044] In other embodiments, if contrast fluoroscopy is used during the surgery separately from the imaging at S210, S230 and S250, the additional information provided by the contrast images may be used to update the deflated models at S285 and improve the registration accuracy. For example, contrast images may show vessels more clearly.
[045] FIG. 3 illustrates another method of intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[046] At S305, the method of FIG. 3 starts with inserting a scope in the patient and deflating a lung in the patient. That is, in the embodiment of FIG. 3, pre-operative imaging may not be required. Instead, the patient may be prepared for surgery immediately before the surgery and the thoracoscope inserted through a port.
[047] At S310, the method of FIG. 3 continues with imaging the deflated lung. That is, the lung is collapsed, and a cone-beam CT image is acquired of the deflated lung at S310. The imaging at S310 also involves the scope as the interventional medical instrument, such that the interventional medical instrument is imaged using cone-beam computed tomography during the interventional procedure while the lung is in the deflated state.
[048] At S321, the method of FIG. 3 includes segmenting the image(s) of the deflated lung taken at S310. Important structures of the lung anatomy, like the vessels, airways, and tumor, are segmented directly from the cone-beam CT image of the deflated lung. As a result, a 3D model of the anatomy in the deflated state is generated showing the segmentation.
[049] At S360, the scope is registered to the deflated lung in the method of FIG. 3. Similar to
the embodiments of FIGs. 2A and 2B, the pose of the thoracoscope is registered to the 3D model.
[050] At S370, movement of the scope is tracked relative to the lung surface in the method of FIG. 3. Similar again to the embodiments of FIGs. 2A and 2B, the pose of the thoracoscope can be tracked with respect to the lung surface or otherwise, such as with electromagnetic sensors. The tracking at S370 may be based on light emitted in a frequency band within a visible region or based on light emitted in a frequency band outside of a visible region.
[051] At S380, the scope view is augmented in the method of FIG. 3. Similar once again to the embodiments of FIGs. 2A and 2B, the thoracoscope view can be augmented with the 3D model information.
[052] The method of FIG. 3 again concludes with resecting a tumor in the lung, based on the intraoperative surgical navigation enabled by the preceding features, functions and/or steps of FIG. 3.
[053] An advantage of the workflow of the embodiment in FIG. 3 is that deformable registration between images from CT and from cone-beam CT is not required, and a cone-beam CT image of the inflated lung is not needed either.
[054] Another hybrid embodiment is applicable when segmentation of a cone-beam CT image of a deflated lung is difficult. In this hybrid approach, an intraprocedural cone-beam CT image of an inflated lung is obtained but again without a pre-operative CT imaging process. In this embodiment, the segmentation is performed on the cone-beam CT image of the inflated lung, and registration is performed to map the cone-beam CT image segmentation model to the model based on the intraoperative cone-beam CT image of the deflated lung.
[055] FIG. 4 illustrates a system for intraoperative imaging-based surgical navigation, in accordance with a representative embodiment.
[056] The system 400 of FIG. 4 includes a first medical imaging system 410, a computer 420, a display 425, and a tracked device 450.
[057] An example of the first medical imaging system 410 is a cone-beam computed tomography imaging system. A cone-beam computed tomography imaging system provides three-dimensional (3D) imaging with X-rays in the shape of a cone. A cone-beam computed tomography imaging system differs from a computed tomography imaging system. For example,
a computed tomography imaging system generates X-ray beams in the shape of a rotating fan to capture slices of a limited thickness, whereas a cone-beam computed tomography imaging system generates the X-ray beams in the shape of the cone. Also, a patient does not have to advance or move at all in the cone-beam CT, whereas a patient advances during a CT procedure. Thus, the difference between cone-beam CT and CT is not necessarily a matter of simply flipping a switch to activate different modes using the same system; rather, cone-beam CT and CT as described herein may involve imaging by entirely different systems. In FIG. 4, the first medical imaging system 410 is typically the cone-beam CT, and performs imaging such as at S230, S250, S285, and S310.
[058] The computer 420 may include a controller described herein. A controller described herein may include a combination of a memory that stores instructions and a processor that executes the instructions in order to implement processes described herein. A controller may be housed within or linked to a workstation such as the computer 420 or another assembly of one or more computing devices, a display/monitor, and one or more input devices (e.g., a keyboard, joysticks and mouse) in the form of a standalone computing system, a client computer of a server system, a desktop or a tablet. The descriptive label for the term “controller” herein facilitates a distinction between controllers as described herein without specifying or implying any additional limitation to the term “controller”. The term “controller” broadly encompasses all structural configurations, as understood in the art of the present disclosure and as exemplarily described in the present disclosure, of an application specific main board or an application specific integrated circuit for controlling an application of various principles as described in the present disclosure. The structural configuration of the controller may include, but is not limited to, processor(s), computer-usable/computer readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s) and port(s).
[059] Additionally, although FIG. 4 shows components networked together, two such components may be integrated into a single system. For example, the computer 420 may be integrated with the display 425 and/or with the first medical imaging system 410. That is, in some embodiments, functionality attributed to the computer 420 may be implemented by (e.g., performed by) a system that includes the first medical imaging system 410. On the other hand, the networked components shown in FIG. 4 may also be spatially distributed, such as in different rooms or different buildings, in which case the networked components may be connected via data connections. In still another embodiment, one or more of the components in FIG. 4 is not connected to the other components via a data connection, and instead is provided with input or output manually such as by a memory stick or other form of memory. In yet another embodiment, functionality described herein may be performed based on functionality of the elements in FIG. 4 but outside of the system shown in FIG. 4.
[060] The computer 420 in FIG. 4 may include some or all elements and functionality of the general computer system described below with respect to FIG. 5. For example, the computer 420 may include a controller for registering a scope to a cone-beam CT image of a deflated lung, for tracking a scope, and/or for augmenting a view of a scope. A process executed by a controller may include receiving a three-dimensional model of anatomy of a lung that is the subject of an interventional procedure.
[061] The display 425 may be used to display the three-dimensional models, the cone-beam CT images obtained at S230, S250, S285 and S310, the scope views and the augmented scope views, and other imagery and views described herein. Imagery obtained during a medical intervention may be, for example, imagery of an inflated lung, imagery of a deflated lung, and imagery or positional information of the tracked device 450. Imagery that may be displayed on a display 425 includes imagery obtained during a medical intervention, imagery of a 3D model of a lung generated based on segmentation, and other visual information described herein.
[062] As the term “display” is used herein, the term should be interpreted to include a class of features such as a “display device” or “display unit”, and these terms encompass an output device, or a user interface adapted for displaying images and/or data. A display may output visual, audio, and/or tactile data. Examples of a display include, but are not limited to: a computer monitor, a television screen, a touch screen, a tactile electronic display, a Braille screen, a cathode ray tube (CRT), a storage tube, a bistable display, electronic paper, a vector display, a flat panel display, a vacuum fluorescent display (VF), light-emitting diode (LED) displays, an electroluminescent display (ELD), plasma display panels (PDP), a liquid crystal display (LCD), organic light-emitting diode displays (OLED), a projector, and a head-mounted display.
[063] Movement of the tracked device 450 may be tracked using white light, infrared or near-infrared light, electromagnetism, or other tracking technologies such as optical shape sensing. That is, movement of an interventional medical instrument as a tracked device 450 may be tracked either based on light emitted in a frequency band within a visible region or based on light emitted in a frequency band outside of a visible region. The tracking of the tracked device 450 results in positions and/or pose of the tracked device 450 being sent to the computer 420. The computer 420 processes positions of the tracked device 450 and the medical images from the first medical imaging system 410 to, for example, perform the registration at S260 and the tracking at S270, and to control the augmentation at S280.
[064] In an embodiment implemented with features of FIGs. 2A, 2B, 3 and/or 4 described above, a full surgical workflow requires a minimal number of steps and components to set up, while also providing ease and consistency in performing each step. As a result, the methods described herein can be efficiently executed. Several tradeoffs are possible in implementing the intraoperative imaging-based surgical navigation described herein. For example, reliability may be traded against simplicity, so that achieving a reliable workflow via, for example, more user input can be offset with a simpler workflow via, for example, less user involvement.
[065] One such tradeoff is between tracking and deformable registration. In a favorable workflow, a surgeon can immediately locate anatomical features in the thoracoscope view in real time. But since features such as tumors are embedded in tissue and invisible, such information must be transferred from a different modality which resides in a different coordinate space. Furthermore, the tumor can undergo deformation between image acquisitions, placing this problem in the realm of deformable registration. Deformable registration may be replaced in some embodiments with tracking discrete anatomical features or surrogates thereof in real time.
[066] Another tradeoff described herein is between extrinsic tracking via markers and intrinsic tracking via image features. Insofar as marker placement can complicate workflow and potentially raise health risks, the efficient procedures described herein which do not require or use physical markers can be seen as a tradeoff with the use of such markers.
[067] One other tradeoff described herein is the tradeoff between simplicity and accuracy based on using imaging of an inflated lung. Whereas deformation compensation, tracking, and/or augmentation facilities are most beneficial in the deflated lung state, since the resection is performed when the lung is in the deflated state, workflows addressing the deflated lung exclusively are simplified. Therefore, while some embodiments favor this simplicity, sacrificing it by transformatively mapping anatomical features identified preoperatively, when the lung is inflated, may be appropriate in other embodiments.
[068] FIG. 5 illustrates a general computer system, on which a method of intraoperative imaging-based surgical navigation can be implemented, in accordance with another representative embodiment.
[069] The computer system 500 can include a set of instructions that can be executed to cause the computer system 500 to perform any one or more of the methods or computer-based functions disclosed herein. The computer system 500 may operate as a standalone device or may be connected, for example, using a network 501, to other computer systems or peripheral devices.
[070] In a networked deployment, the computer system 500 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 500 can also be implemented as or incorporated into various devices, such as the first medical imaging system 410, the computer 420, a second medical imaging system in the embodiment of FIG. 4 (not shown), a stationary computer, a mobile computer, a personal computer (PC), a laptop computer, a tablet computer, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. The computer system 500 can be incorporated as or in a device that in turn is in an integrated system that includes additional devices. In an embodiment, the computer system 500 can be implemented using electronic devices that provide voice, video or data communication. Further, while the computer system 500 is illustrated in the singular, the term "system" shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
[071] As illustrated in FIG. 5, the computer system 500 includes a processor 510. A processor for a computer system 500 is tangible and non-transitory. As used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time. A processor is an article of manufacture and/or a machine
component. A processor for a computer system 500 is configured to execute software instructions to perform functions as described in the various embodiments herein. A processor for a computer system 500 may be a general-purpose processor or may be part of an application specific integrated circuit (ASIC). A processor for a computer system 500 may also be a microprocessor, a microcomputer, a processor chip, a controller, a microcontroller, a digital signal processor (DSP), a state machine, or a programmable logic device. A processor for a computer system 500 may also be a logical circuit, including a programmable gate array (PGA) such as a field programmable gate array (FPGA), or another type of circuit that includes discrete gate and/or transistor logic. A processor for a computer system 500 may be a central processing unit (CPU), a graphics processing unit (GPU), or both. Additionally, any processor described herein may include multiple processors, parallel processors, or both. Multiple processors may be included in, or coupled to, a single device or multiple devices.
[072] A “processor” as used herein encompasses an electronic component which is able to execute a program or machine executable instruction. References to the computing device comprising “a processor” should be interpreted as possibly containing more than one processor or processing core. The processor may for instance be a multi-core processor. A processor may also refer to a collection of processors within a single computer system or distributed amongst multiple computer systems. The term computing device should also be interpreted to possibly refer to a collection or network of computing devices each including a processor or processors. Many programs have instructions performed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
[073] Moreover, the computer system 500 may include a main memory 520 and a static memory 530, where memories in the computer system 500 may communicate with each other via a bus 508. Memories described herein are tangible storage mediums that can store data and executable instructions and are non-transitory during the time instructions are stored therein. As used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time. A memory described herein is an article of manufacture and/or machine component. Memories described herein are computer-
readable mediums from which data and executable instructions can be read by a computer. Memories as described herein may be random access memory (RAM), read only memory (ROM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, tape, compact disk read only memory (CD-ROM), digital versatile disk (DVD), floppy disk, Blu-ray disk, or any other form of storage medium known in the art. Memories may be volatile or non-volatile, secure and/or encrypted, unsecure and/or unencrypted.
[074] “Memory” is an example of a computer-readable storage medium. Computer memory is any memory which is directly accessible to a processor. Examples of computer memory include, but are not limited to RAM memory, registers, and register files. References to “computer memory” or “memory” should be interpreted as possibly being multiple memories. The memory may for instance be multiple memories within the same computer system. The memory may also be multiple memories distributed amongst multiple computer systems or computing devices.
[075] As shown, the computer system 500 may further include a video display unit 550, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, or a cathode ray tube (CRT). Additionally, the computer system 500 may include an input device 560, such as a keyboard/virtual keyboard or touch-sensitive input screen or speech input with speech recognition, and a cursor control device 570, such as a mouse or touch-sensitive input screen or pad. The computer system 500 can also include a disk drive unit 580, a signal generation device 590, such as a speaker or remote control, and a network interface device 540.
[076] In an embodiment, as depicted in FIG. 5, the disk drive unit 580 may include a computer-readable medium 582 in which one or more sets of instructions 584, e.g. software, can be embedded. Sets of instructions 584 can be read from the computer-readable medium 582. Further, the instructions 584, when executed by a processor, can be used to perform one or more of the methods and processes as described herein. In an embodiment, the instructions 584 may reside completely, or at least partially, within the main memory 520, the static memory 530, and/or within the processor 510 during execution by the computer system 500.
[077] In an alternative embodiment, dedicated hardware implementations, such as application-specific integrated circuits (ASICs), programmable logic arrays and other hardware components,
can be constructed to implement one or more of the methods described herein. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules. Accordingly, the present disclosure encompasses software, firmware, and hardware implementations. Nothing in the present application should be interpreted as being implemented or implementable solely with software and not hardware such as a tangible non-transitory processor and/or memory.
[078] In accordance with various embodiments of the present disclosure, the methods described herein may be implemented using a hardware computer system that executes software programs. Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Virtual computer system processing can be constructed to implement one or more of the methods or functionalities as described herein, and a processor described herein may be used to support a virtual processing environment.
[079] The present disclosure contemplates a computer-readable medium 582 that includes instructions 584 or receives and executes instructions 584 responsive to a propagated signal, so that a device connected to a network 501 can communicate voice, video or data over the network 501. Further, the instructions 584 may be transmitted or received over the network 501 via the network interface device 540.
[080] Accordingly, intraoperative imaging-based surgical navigation enables a surgeon to navigate to a lung tumor without the use of physical markers, so that markers do not have to be placed well before or even immediately before a lung tumor is resected in a surgery. As a result, intraoperative imaging-based surgical navigation can be used with minimally invasive surgery and result in more complete and more accurate removal of lung tumors, reduced need for follow-up surgeries and subsequent radiation/chemotherapy, and avoidance of recurrence. Additionally, intraoperative imaging-based surgical navigation can help avoid removing healthy tissue, which helps avoid compromising lung function and/or prolonging recovery times. Moreover, avoiding or reducing the placement of physical markers by using intraoperative image-based surgical navigation can avoid additional complications and hospital/patient burden.
[081] The representative embodiments described above help alleviate some of the challenges
described herein for lung tumor resections by providing improved 3D guidance in procedures involving resecting a tumor from the deflated lung. The representative embodiments described herein can be used to provide surgical workflows with intra-operative imaging and performed without use of physical markers being placed in the tumor.
[082] Although intraoperative imaging-based surgical navigation has been described with reference to several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of intraoperative imaging-based surgical navigation in its aspects. Although intraoperative imaging-based surgical navigation has been described with reference to particular means, materials and embodiments, intraoperative imaging-based surgical navigation is not intended to be limited to the particulars disclosed; rather intraoperative imaging-based surgical navigation extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims.
[083] The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of the disclosure described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
[084] One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any
and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
[085] The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
[086] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to practice the concepts described in the present disclosure. As such, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents and shall not be restricted or limited by the foregoing detailed description.
Claims
1. A controller (420) for assisting navigation in an interventional procedure, comprising: a memory (520) that stores instructions, and
a processor (510) that executes the instructions,
wherein, when executed by the processor (510), the instructions cause the controller (420) to implement a process that includes:
registering (S260) coordinate systems of an interventional medical instrument (450) and an intra-operative image of a lung;
calculating (S260) a deformation between the lung in a deflated state (S250) and the lung in an inflated state (S230);
applying (S260) the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state; and
generating (S280) an image of the modified three-dimensional model of the lung in the deflated state.
2. The controller (420) of claim 1, wherein the process implemented when the processor (510) executes the instructions further comprises:
determining (S270) a location of the interventional medical instrument (450) in the deflated state; and
augmenting (S280) a view from the interventional medical instrument (450) at the location during the interventional procedure with features of the modified three-dimensional model of the lung in the deflated state.
3. The controller (420) of claim 2, wherein the process implemented when the processor (510) executes the instructions further comprises:
tracking (S270) movement of the interventional medical instrument (450) relative to the lung in the deflated state;
updating (S280) the view from the interventional medical instrument (450), based on the movement of the interventional medical instrument, with the features of the modified three-dimensional model of the lung in the deflated state.
4. The controller (420) of claim 3, wherein the movement of the interventional medical instrument (450) is tracked (S270) based on light emitted in a frequency band outside of a visible region.
5. The controller (420) of claim 3, further comprising:
acquiring (S285) additional images during the interventional procedure; and
updating (S285) the modified three-dimensional model of the lung in the deflated state based on the additional images.
6. The controller (420) of claim 1, wherein the interventional medical instrument (450) is imaged (S250) using cone-beam computed tomography during the interventional procedure while the lung is in the deflated state, and
the lung in the inflated state is imaged (S230) using cone-beam computed tomography during the interventional procedure.
7. The controller (420) of claim 1, wherein the interventional medical instrument (450) is imaged (S250) using cone-beam computed tomography during the interventional procedure while the lung is in the deflated state,
the lung in the inflated state is imaged (S210) using computed tomography prior to the interventional procedure to obtain a computed tomography image of the lung in the inflated state, and
the interventional medical instrument (450) imaged (S250) while the lung is in the deflated state is registered (S260) to the computed tomography image of the lung in the inflated state.
8. The controller (420) of claim 1, wherein the interventional medical instrument (450) is imaged using multiple X-ray projections during the interventional procedure while the lung is in the deflated state,
the lung in the inflated state is imaged (S210) using computed tomography prior to the interventional procedure to obtain a computed tomography image of the lung in the inflated state, and
the interventional medical instrument (450) imaged while the lung is in the deflated state is registered (S260) to the computed tomography image of the lung in the inflated state.
9. The controller (420) of claim 1, wherein the three-dimensional original model of the lung is obtained by segmenting (S220) at least one of imagery obtained based on cone-beam computed tomography imaging of the lung in the inflated state and imagery obtained based on computed tomography imaging of the lung in the inflated state.
10. A controller (420) for assisting navigation in an interventional procedure, comprising:
a memory (520) that stores instructions, and
a processor (510) that executes the instructions,
wherein, when executed by the processor (510), the instructions cause the controller (420) to implement a process that includes:
segmenting (S321) a cone-beam computed tomography image of a lung in a deflated state during the interventional procedure to obtain a three-dimensional original model of the lung in the deflated state;
registering (S360) a coordinate system of an interventional medical instrument (450) to the three-dimensional original model of the lung in the deflated state; and
generating (S380) an image of the original three-dimensional model of the lung in the deflated state.
11. The controller (420) of claim 10, wherein the process implemented when the processor (510) executes the instructions further comprises:
determining (S370) a location of the interventional medical instrument (450) in the deflated state; and
augmenting (S380) a view from the interventional medical instrument (450) at the location during the interventional procedure with features of the original three-dimensional model of the lung in the deflated state.
12. The controller (420) of claim 11, wherein the process implemented when the processor (510) executes the instructions further comprises:
tracking (S370) movement of the interventional medical instrument (450) relative to the lung in the deflated state; and
updating (S380) the view from the interventional medical instrument, based on the movement of the interventional medical instrument (450), with the features of the original three-dimensional model of the lung in the deflated state.
13. The controller (420) of claim 11, wherein the interventional procedure is performed without use of a physical marker to mark a location on the lung in the deflated state.
14. A system (400) for assisting navigation in an interventional procedure, comprising:
a cone-beam computed tomography imaging apparatus (410) that generates a cone-beam computed tomography image of a lung in a deflated state;
a computer (420) comprising a controller (420) with a memory (520) that stores instructions and a processor (510) that executes the instructions,
wherein, when executed by the processor (510), the instructions cause the controller (420) to implement a process that includes:
registering (S260) coordinate systems of an interventional medical instrument (450) and an intraoperative image of the lung;
calculating (S260) a deformation between the lung in the deflated state and the lung in an inflated state;
applying (S260) the deformation to a three-dimensional original model of the lung in the inflated state to generate a modified three-dimensional model of the lung in the deflated state; and
generating (S280) an image of the modified three-dimensional model of the lung in the deflated state.
15. The system (400) of claim 14, further comprising:
the interventional medical instrument (450), wherein the interventional medical instrument (450) comprises a thoracoscope, and
wherein the process implemented when the processor (510) executes the instructions further comprises determining (S270) a location of the thoracoscope in the deflated state and augmenting (S280) a view from the thoracoscope at the location during the interventional procedure with features of the modified three-dimensional model of the lung in the deflated state.
16. The system (400) of claim 15, wherein the process implemented when the processor (510) executes the instructions further comprises:
tracking (S270) movement of the interventional medical instrument (450) relative to the lung in the deflated state; and
updating (S280) the view from the interventional medical instrument, based on the movement of the interventional medical instrument, with the features of the modified three-dimensional model of the lung in the deflated state.
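By way of non-limiting illustration of the registering recited in claims 1, 10 and 14 (S260/S360), the coordinate systems of a tracked instrument and an intraoperative image may be aligned with a conventional point-based rigid registration such as the Arun/Kabsch singular-value-decomposition method. The Python sketch below assumes paired fiducial points are already available in both spaces; all names are hypothetical, and the claims do not prescribe this or any other particular registration technique.

```python
# Minimal sketch of point-based rigid registration (Arun/Kabsch method),
# assuming paired fiducial points observed both in instrument-tracking
# coordinates and in intraoperative-image coordinates. Illustrative only.
import numpy as np

def rigid_register(src, dst):
    """Return (R, t) minimizing ||R @ src_i + t - dst_i|| over paired (N, 3) points."""
    src_c = src - src.mean(axis=0)             # center both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)  # cross-covariance decomposition
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # repair an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying R and t to instrument-space coordinates expresses the instrument pose in the image coordinate system, which is the precondition for the deformation and augmentation steps of the claims.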
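The applying step of claims 1 and 14, in which the calculated deformation carries the inflated-state model into the deflated state, can be pictured as warping model vertices through a dense displacement field produced by a deformable registration of the inflated and deflated volumes (compare Uneri et al., cited in the search report below). The sketch below is one possible realization under that assumption; the field representation, axis ordering and all names are hypothetical.

```python
# Minimal sketch: warp the vertices of an inflated-lung surface model through a
# precomputed dense displacement field to obtain a deflated-state model.
# displacement_field has shape (nx, ny, nz, 3), in the same axis order and
# world units (mm) as the vertex coordinates. Illustrative only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def deform_model(vertices, displacement_field, spacing, origin):
    """Return warped copies of (N, 3) model vertices."""
    axes = [origin[d] + spacing[d] * np.arange(displacement_field.shape[d])
            for d in range(3)]
    warped = vertices.astype(float)
    for d in range(3):
        # sample the d-th displacement component at each vertex position;
        # vertices outside the imaged volume are left unmoved (fill_value=0)
        interp = RegularGridInterpolator(axes, displacement_field[..., d],
                                         bounds_error=False, fill_value=0.0)
        warped[:, d] += interp(vertices)
    return warped
```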
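For the augmenting of claims 2, 11 and 15, features of the deflated-state model can be projected into the thoracoscope video once the model is expressed in camera coordinates, e.g. via the rigid transform sketched above. The following sketch assumes a calibrated pinhole intrinsic matrix K and feature points in front of the camera (positive depth); the overlay style and all names are hypothetical rather than anything the claims require.

```python
# Minimal sketch: project model feature points, already expressed in camera
# coordinates, through pinhole intrinsics K and paint them into a video frame.
import numpy as np

def project_points(points_cam, K):
    """Map (N, 3) camera-space points (positive depth) to (N, 2) pixel coordinates."""
    uvw = points_cam @ K.T                     # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]            # perspective divide by depth

def augment_frame(frame, points_cam, K):
    """Draw a small green square at each projected feature in an (H, W, 3) frame."""
    h, w = frame.shape[:2]
    for u, v in project_points(points_cam, K).round().astype(int):
        if 0 <= u < w and 0 <= v < h:          # skip features outside the view
            frame[max(v - 2, 0):v + 3, max(u - 2, 0):u + 3] = (0, 255, 0)
    return frame
```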
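The segmenting of claims 9 and 10 (S220/S321) is likewise left open by the claims. A deliberately coarse stand-in, shown below, is Hounsfield-unit thresholding followed by selection of the largest air-filled component that does not touch the volume border; clinical pipelines use considerably more robust methods, and the threshold value and names here are assumptions.

```python
# Coarse lung-segmentation sketch for a CT/CBCT volume in Hounsfield units:
# threshold air-like voxels, discard components touching the volume border
# (air surrounding the patient), keep the largest remaining component.
import numpy as np
from scipy import ndimage

def segment_lung(volume_hu, threshold=-320):
    """Return a boolean lung mask over a 3-D HU volume."""
    air = volume_hu < threshold
    labels, n = ndimage.label(air)
    if n == 0:
        return air                              # no candidate voxels at all
    sizes = ndimage.sum(air, labels, index=np.arange(1, n + 1))
    for face in (labels[0], labels[-1], labels[:, 0], labels[:, -1],
                 labels[..., 0], labels[..., -1]):
        for lbl in np.unique(face):
            if lbl > 0:
                sizes[lbl - 1] = 0              # drop border-touching components
    if sizes.max() == 0:
        return np.zeros_like(air)
    return labels == (int(np.argmax(sizes)) + 1)
```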
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962851174P | 2019-05-22 | 2019-05-22 | |
US62/851,174 | 2019-05-22 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020234409A1 (en) | 2020-11-26 |
Family
ID=70802864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2020/064177 (WO2020234409A1) | Intraoperative imaging-based surgical navigation | 2019-05-22 | 2020-05-20 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020234409A1 (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140073907A1 (en) * | 2012-09-12 | 2014-03-13 | Convergent Life Sciences, Inc. | System and method for image guided medical procedures |
US20180161102A1 (en) * | 2014-10-30 | 2018-06-14 | Edda Technology, Inc. | Method and system for estimating a deflated lung shape for video assisted thoracic surgery in augmented and mixed reality |
WO2016178690A1 (en) * | 2015-05-07 | 2016-11-10 | Siemens Aktiengesellschaft | System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation |
Non-Patent Citations (1)
Title |
---|
UNERI ALI ET AL: "Deformable registration of the inflated and deflated lung in cone-beam CT-guided thoracic surgery: Initial investigation of a combined model- and image-driven approach", MEDICAL PHYSICS, AIP, MELVILLE, NY, US, vol. 40, no. 1, 18 December 2012 (2012-12-18), pages 17501-1 - 17501-8, XP012170938, ISSN: 0094-2405, [retrieved on 20121218], DOI: 10.1118/1.4767757 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Luo et al. | | Augmented reality navigation for liver resection with a stereoscopic laparoscope |
JP7527302B2 | | Dynamic Interventional 3D Model Deformation |
US20200405433A1 | | System and method for dynamic validation, correction of registration for surgical navigation |
US11564748B2 | | Registration of a surgical image acquisition device using contour signatures |
US10074176B2 | | Method, system and apparatus for displaying surgical engagement paths |
Bertolo et al. | | Systematic review of augmented reality in urological interventions: the evidences of an impact on surgical outcomes are yet to come |
US20240041558A1 | | Video-guided placement of surgical instrumentation |
US20170249737A1 | | Method, system and apparatus for quantitative surgical image registration |
Liu et al. | | Toward intraoperative image-guided transoral robotic surgery |
WO2023246521A1 | | Method, apparatus and electronic device for lesion localization based on mixed reality |
EP3398552A1 | | Medical image viewer control from surgeon's camera |
US12064280B2 | | System and method for identifying and marking a target in a fluoroscopic three-dimensional reconstruction |
US20210290309A1 | | Method, system and apparatus for surface rendering using medical imaging data |
Kumar et al. | | Stereoscopic visualization of laparoscope image using depth information from 3D model |
Alam et al. | | A review on extrinsic registration methods for medical images |
Yaniv et al. | | Applications of augmented reality in the operating room |
US20140275994A1 | | Real time image guidance system |
WO2020234409A1 (en) | 2020-11-26 | Intraoperative imaging-based surgical navigation |
US10102681B2 | | Method, system and apparatus for adjusting image data to compensate for modality-induced distortion |
Chen et al. | | Video-guided calibration of an augmented reality mobile C-arm |
Wang et al. | | Stereoscopic augmented reality for single camera endoscopy: a virtual study |
EP4494569A1 | | Medical support device, operation method of medical support device, operation program of medical support device, and medical support system |
Kersten-Oertel et al. | | Augmented Reality for Image-Guided Surgery |
JP2024131034A | | Medical support device, operation method and operation program for medical support device |
Mirota | | Video-based navigation with application to endoscopic skull base surgery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20727624; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20727624; Country of ref document: EP; Kind code of ref document: A1 |