WO2020182997A1 - Dynamic interventional three-dimensional model deformation - Google Patents

Dynamic interventional three-dimensional model deformation

Info

Publication number
WO2020182997A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional model
tracked device
pathways
tracked
processor
Application number
PCT/EP2020/056902
Other languages
French (fr)
Inventor
Torre Michelle BYDLON
Paul Thienphrapa
Alvin Chen
Original Assignee
Koninklijke Philips N.V.
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Priority to US17/438,990 (publication US20220156925A1)
Priority to EP20713213.5A (publication EP3939052A1)
Priority to CN202080021071.3A (publication CN113614844A)
Priority to JP2021553783A (publication JP2022523445A)
Publication of WO2020182997A1

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08 Volume rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2065 Tracking using image or pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G06T2207/10121 Fluoroscopy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung

Definitions

  • Optical shape sensing can be used to determine the shape of an optical fiber by using light along the optical fiber.
  • The optical fiber can be provided in or on an interventional medical device to determine the shape of the interventional medical device.
  • Information about the shape of the optical fiber can be used for localizing and navigating the interventional medical device during a surgical intervention.
  • Optical shape sensing is based on the principle that the wavelengths of reflected light differ under distinct circumstances. Distributed strain measurements in the optical fiber can therefore be used to determine the shape of the optical fiber using characteristic Rayleigh backscatter or controlled grating patterns. An illustrative sketch of this principle follows below.
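  • As a rough illustration of how strain-derived measurements can yield a shape, the sketch below integrates hypothetical distributed curvature samples along the arc length of a fiber to recover a planar curve. The curvature values and sample spacing are illustrative assumptions, not data from this disclosure.

```python
import numpy as np

def shape_from_curvature(curvatures, ds):
    """Integrate distributed curvature samples (1/m) along arc length
    to recover a planar fiber shape (x, y positions in meters)."""
    # Heading angle is the cumulative integral of curvature.
    theta = np.concatenate(([0.0], np.cumsum(curvatures) * ds))
    # Position is the cumulative integral of the unit tangent.
    x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
    y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
    return np.column_stack([x, y])

# Hypothetical strain-derived curvature profile: straight, then a gentle bend.
curv = np.concatenate([np.zeros(50), np.full(50, 2.0)])  # curvature in 1/m
fiber = shape_from_curvature(curv, ds=0.005)  # 5 mm between samples
print(fiber[-1])  # reconstructed tip position
```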
  • Interventional medical devices include guidewires, catheters, sheaths, and bronchoscopes, and optical shape sensing can be used to provide live positions and orientations of such interventional medical devices for guidance during minimally invasive procedures.
  • FIG. 1A illustrates a virtual image of an optical fiber 101 embedded in a guidewire 102.
  • The three-dimensional position of the guidewire 102 can then be registered to anatomical imaging modalities, such as X-ray or computed tomography (CT), to provide the anatomical context of the guidewire 102 and of a shape-sensed interventional device (e.g., a bronchoscope) in which the guidewire 102 is inserted or to which it is attached.
  • For example, a lung lesion can be biopsied using an endobronchial approach in which a bronchoscope is guided down the airways.
  • A camera at the end of the bronchoscope provides imagery of the airways, and an abnormal part of the airway can be biopsied using a small tool that is inserted via the working channel of the bronchoscope.
  • Approaches for lung biopsy face challenges including:
  • The endobronchial approach may be limited to lesions that are in or connected to the upper airways, since only the upper airways can fit the bronchoscope. Otherwise, visualization may be lost, resulting in interventional medical devices being blindly navigated within or outside of the airways to take random tissue samples. Additionally, when the lesion is not connected to an airway, the interventional medical device must puncture the airway wall and travel outside of the airway through the lung parenchyma to perform a transbronchial biopsy. A transbronchial biopsy may also be performed when the closest airway to the lesion is too small to navigate a tool.
  • To assist navigation, a pre-operative three-dimensional image of the lung is acquired and processed with algorithms to produce a three-dimensional model of the airways and lesion. Then, the physician can navigate using this three-dimensional model, with or without x-ray and/or a bronchoscope.
  • Such three-dimensional models are limited, however, to what the airways look like at the time of the pre-operative imaging.
  • Some navigation methods also use tracking technology, such as electromagnetic tracking.
  • Fig. 1B illustrates a comparison between an inflated view of a lung and a collapsed view of the lung. This is further complicated by the fact that the lung is typically moved and re-positioned throughout the surgical procedure.
  • FIG. 1C illustrates a three-dimensional model of the airways and tumor with a planned path to reach the tumor.
  • FIG. 1C also shows an overlay of a planned path for an interventional medical device, together with real-time fluoroscopy imagery of the anatomy.
  • The planned path 103 from the trachea is shown as a thin line from the top.
  • The current position 106 of the bronchoscope is also seen in the fluoroscopy image to the right of the planned path 103.
  • The tumor location 104 is shown at the end of the planned path 103.
  • The three-dimensional model in FIG. 1C is static, however, so no mechanism is provided to adjust the three-dimensional model.
  • As described below, dynamic interventional three-dimensional model deformation can be used to enhance localization and navigation for interventional medical devices.
  • A controller for assisting navigation in an interventional procedure includes a memory that stores instructions and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to implement a process.
  • The process includes obtaining a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure.
  • The process also includes determining, during the interventional procedure, whether a current position of a tracked device is outside of the pathways in the three-dimensional model. When the current position of the tracked device is outside of the pathways in the three-dimensional model, the process includes deforming the three-dimensional model to the current position of the tracked device.
  • A method for assisting navigation in an interventional procedure includes obtaining a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure. The method also includes determining, during the interventional procedure by a controller executing instructions with a processor, whether a current position of a tracked device is outside of the pathways in the three-dimensional model. When the current position of the tracked device is outside of the pathways in the three-dimensional model, the method also includes deforming the three-dimensional model to the current position of the tracked device.
  • A system for assisting navigation in an interventional procedure includes an imaging apparatus and a computer.
  • The imaging apparatus generates, prior to an interventional procedure, computed tomography images of a subject of an interventional procedure to be used in generating, prior to the interventional procedure, a three-dimensional model based on segmenting pathways with a plurality of branches in the subject of the interventional procedure.
  • The computer includes a memory that stores instructions and a processor that executes the instructions. When executed by the processor, the instructions cause the system to execute a process that includes obtaining the three-dimensional model generated prior to the interventional procedure.
  • The process executed when the processor executes the instructions also includes determining whether a current position of a tracked device is outside of the pathways in the three-dimensional model. When the current position of the tracked device is outside of the pathways in the three-dimensional model, the process includes deforming the three-dimensional model to the current position of the tracked device. A skeleton of this check-and-deform loop is sketched below.
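  • Read as pseudocode, the claimed process reduces to a short check-and-deform loop. The sketch below is a minimal illustration; `navigation_step` and the inside-test and deformation callables are hypothetical stand-ins for the components described above, not an implementation from the disclosure.

```python
import numpy as np

def navigation_step(model, position, is_inside_pathways, deform_model_to):
    """One iteration of the described process: check whether the tracked
    position lies outside the segmented pathways and, if so, deform."""
    if not is_inside_pathways(model, position):
        model = deform_model_to(model, position)
    return model

# Toy demo: the "model" is a set of vertices, the inside test is a radius
# check around the model centroid, and the deformation is a rigid shift.
model = np.zeros((4, 3))
pos = np.array([10.0, 0.0, 0.0])
inside = lambda m, p: np.linalg.norm(p - m.mean(axis=0)) < 5.0
deform = lambda m, p: m + (p - m.mean(axis=0))
model = navigation_step(model, pos, inside, deform)
```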
  • FIG. 1A illustrates a virtual image of an optical fiber inserted in the working channel of a bronchoscope.
  • FIG. 1B illustrates a comparison between an inflated view of a lung and a collapsed view of the lung.
  • FIG. 1C illustrates a three-dimensional model of the airways and tumor with a planned path to reach the tumor. It also shows an overlay of real-time fluoroscopy images of the anatomy during an interventional procedure, including the position of an interventional device inside the airways.
  • FIG. 2 illustrates a system for dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 3 illustrates a general computer system, on which a method of dynamic interventional three-dimensional model deformation can be implemented, in accordance with another representative embodiment.
  • FIG. 4 illustrates a method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 5A illustrates comparative views of a tracked device and a three-dimensional model of pathways with multiple branches before dynamic interventional three-dimensional model deformation is applied, in accordance with a representative embodiment.
  • FIG. 5B illustrates a progression for dynamic interventional three-dimensional model deformation using optical shape sensing, in accordance with a representative embodiment.
  • FIG. 6 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 7 illustrates a virtual bronchoscopic view of the airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 8 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 9 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 10 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 11 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 12 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 13 illustrates an intraluminal view of airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 14 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • FIG. 2 illustrates a system for dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • The system 200 of Fig. 2 includes a first medical imaging system 210, a computer 220, a display 225, a second medical imaging system 240, and a tracked device 250.
  • The system in Fig. 2 may be used for dynamic interventional three-dimensional model deformation, such as for medical interventions to repair, replace or adjust an organ such as a lung, or to otherwise intervene in the body of a subject.
  • An example of the first medical imaging system 210 is an X-ray system that includes an X-ray machine.
  • The first medical imaging system 210 may perform computed tomography scanning or cone-beam computed tomography scanning prior to a medical intervention that involves the three-dimensional model deformation described herein.
  • The three-dimensional model may be generated before or during the medical intervention based on the scanning by the first medical imaging system 210. Additionally, while the medical imaging by the first medical imaging system 210 may be performed prior to the medical intervention, this medical imaging may also be performed during the medical intervention.
  • The tracked device 250 may be tracked using optical shape sensing or any other tracking technology, such as electromagnetic sensing.
  • The tracked device 250 sends positional information to the computer 220.
  • The computer 220 processes the medical images from the first medical imaging system 210 to generate the three-dimensional model.
  • The computer 220 also processes data from the tracked device 250 to perform the deformation and adjustments to the three-dimensional model.
  • An example of the second medical imaging system 240 is an ultrasound apparatus used to obtain ultrasound images during a medical intervention that involves the three-dimensional model deformation described herein.
  • Another example of the second medical imaging system 240 is another X-ray system that includes an X-ray machine used to obtain fluoroscopic images during a medical intervention that involves the three-dimensional model deformation described herein.
  • The second medical imaging system 240 may be used to track a medical device that is otherwise untracked, such that the medical device becomes the tracked device 250. For example, some medical devices do not have optical shape sensing or electromagnetic tracking components disposed therein or thereon.
  • Locating such a medical device in the X-ray image, or using anatomical information from the ultrasound images, provides the positional information of the otherwise untracked medical device such that it becomes the tracked device 250.
  • Alternatively, a medical device may be a tracked device 250 with a tracking technology embedded therein or thereon (e.g., optical shape sensing), while also being tracked simultaneously with the medical imaging modality of the second medical imaging system 240.
  • The first medical imaging system 210 and the second medical imaging system 240 may both be present and used during the medical intervention, such as when the first medical imaging system 210 is the X-ray system used for fluoroscopic imaging during the medical intervention and the second medical imaging system 240 is the ultrasound system used for ultrasound imaging during the medical intervention.
  • Alternatively, the first medical imaging system 210 may be used during a certain type of medical intervention, while the second medical imaging system 240 is used only as an additional component, such as when X-ray or ultrasound is used as the tracking method.
  • The first medical imaging system 210 may be used to generate computed tomography images that serve as the basis for generating a three-dimensional model as described herein.
  • The computed tomography images are an example of three-dimensional anatomical images.
  • The second medical imaging system 240 may be used to perform imaging that is used to track a tracked device.
  • A tracked device may refer to an interventional medical device on or in which a sensor, optical shape sensing element or other tracking element is provided.
  • The imaging that is used to track a tracked device may be performed in real-time using the second medical imaging system 240 alone or using both the second medical imaging system 240 and the first medical imaging system 210.
  • The second medical imaging system 240 may be present and used during the interventional procedure without the first medical imaging system 210 or with the first medical imaging system 210.
  • The second medical imaging system 240 provides imaging data to the computer 220.
  • The computer 220 includes a display 225.
  • The display may be used to display the three-dimensional model based on the imaging performed by the first medical imaging system 210, along with imagery obtained during a medical intervention based on the imaging performed by the second medical imaging system 240.
  • Imagery obtained during a medical intervention may be, for example, imagery of an inflated lung that is deflated during the medical intervention.
  • While the term "display" is used herein, the term should be interpreted to include a class of features such as a "display device" or "display unit", and these terms encompass an output device or a user interface adapted for displaying images and/or data.
  • A display may output visual, audio, and/or tactile data.
  • Examples of a display include, but are not limited to: a computer monitor, a television screen, a touch screen, a tactile electronic display, a Braille screen, a cathode ray tube (CRT), a storage tube, a bistable display, electronic paper, a vector display, a flat panel display, a vacuum fluorescent display (VF), light-emitting diode (LED) displays, an electroluminescent display (ELD), a plasma display panel (PDP), a liquid crystal display (LCD), organic light-emitting diode (OLED) displays, a projector, and a head-mounted display.
  • Any of the elements in Fig. 2 may include a controller described herein.
  • A controller described herein may include a combination of a memory that stores instructions and a processor that executes the instructions in order to implement the processes described herein.
  • A controller may be housed within or linked to a workstation such as the computer 220 or another assembly of one or more computing devices, a display/monitor, and one or more input devices (e.g., a keyboard, joystick, and mouse) in the form of a standalone computing system, a client computer of a server system, a desktop, or a tablet.
  • The descriptive label for the term "controller" herein facilitates a distinction between controllers as described herein without specifying or implying any additional limitation to the term "controller".
  • The term "controller" broadly encompasses all structural configurations, as understood in the art of the present disclosure and as exemplarily described in the present disclosure, of an application specific main board or an application specific integrated circuit for controlling an application of various principles as described in the present disclosure.
  • The structural configuration of the controller may include, but is not limited to, processor(s), computer-usable/computer-readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s), and port(s).
  • Although Fig. 2 shows components networked together, two such components may be integrated into a single system.
  • For example, the computer 220 may be integrated with the first medical imaging system 210. That is, in embodiments, functionality attributed to the computer 220 may be implemented by (e.g., performed by) a system that includes the first medical imaging system 210.
  • The networked components shown in Fig. 2 may also be spatially distributed, such as by being distributed in different rooms or different buildings, in which case the networked components may be connected via data connections.
  • In other embodiments, one or more of the components in Fig. 2 is not connected to the other components via a data connection, and is instead provided with input or output manually, such as by a memory stick or another form of memory.
  • Additionally, functionality described herein may be performed based on functionality of the elements in Fig. 2 but outside of the system shown in Fig. 2.
  • Any of the first medical imaging system 210, the computer 220, and the second medical imaging system 240 in Fig. 2 may include some or all elements and functionality of the general computer system described below with respect to Fig. 3.
  • The computer 220 may include a controller for determining whether a current position of a tracked device is outside of the pathways in a three-dimensional model.
  • A process executed by a controller may include receiving a three-dimensional model of pathways with multiple branches in a subject of an interventional procedure, or receiving image data and generating, based on the image data, the three-dimensional model of pathways with the multiple branches in the subject of the interventional procedure.
  • The process implemented when a controller of the computer 220 executes instructions also includes determining whether a current position of a tracked device is outside of the pathways in the three-dimensional model and, when the current position of the tracked device is outside of the pathways in the three-dimensional model, deforming the three-dimensional model to the current position of the tracked device.
  • The same controller may also execute the functionality of generating a three-dimensional model based on segmenting pathways with a plurality of branches in a subject of an interventional procedure.
  • Alternatively, the controller that tracks positions of the tracked device may obtain a segmented three-dimensional model that is generated and segmented elsewhere, such as by the first medical imaging system 210 or by the computer 220 executing instructions to process medical imagery created by the first medical imaging system 210.
  • A process implemented by a controller as described herein may include obtaining a three-dimensional model generated prior to an interventional procedure, wherein the three-dimensional model was generated based on segmenting pathways with a plurality of branches in a subject of the interventional procedure.
  • The three-dimensional model does not, however, have to be generated "prior to" the interventional procedure.
  • In some embodiments, the models are generated or updated during the interventional procedure, and the subsequent processes are still performed.
  • That is, the medical imaging by the first medical imaging system 210 may be performed during the same medical intervention in which the interventional three-dimensional model deformation is performed.
  • FIG. 3 illustrates a general computer system, on which a method of dynamic interventional three-dimensional model deformation can be implemented, in accordance with another representative embodiment.
  • The computer system 300 can include a set of instructions that can be executed to cause the computer system 300 to perform any one or more of the methods or computer-based functions disclosed herein.
  • The computer system 300 may operate as a standalone device or may be connected, for example, using a network 301, to other computer systems or peripheral devices.
  • The computer system 300 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment.
  • The computer system 300 can also be implemented as or incorporated into various devices, such as the first medical imaging system 210, the computer 220, the second medical imaging system 240, a stationary computer, a mobile computer, a personal computer (PC), a laptop computer, a tablet computer, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • The computer system 300 can be incorporated as or in a device that in turn is in an integrated system that includes additional devices.
  • The computer system 300 can be implemented using electronic devices that provide voice, video or data communication. Further, while the computer system 300 is illustrated in the singular, the term "system" shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
  • The computer system 300 includes a processor 310.
  • A processor for a computer system 300 is tangible and non-transitory. As used herein, the term "non-transitory" is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period. The term "non-transitory" specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time.
  • A processor is an article of manufacture and/or a machine component.
  • A processor for a computer system 300 is configured to execute software instructions to perform functions as described in the various embodiments herein.
  • A processor for a computer system 300 may be a general-purpose processor or may be part of an application specific integrated circuit (ASIC).
  • A processor for a computer system 300 may also be a microprocessor, a microcomputer, a processor chip, a controller, a microcontroller, a digital signal processor (DSP), a state machine, or a programmable logic device.
  • A processor for a computer system 300 may also be a logical circuit, including a programmable gate array (PGA) such as a field programmable gate array (FPGA), or another type of circuit that includes discrete gate and/or transistor logic.
  • A processor for a computer system 300 may be a central processing unit (CPU), a graphics processing unit (GPU), or both. Additionally, any processor described herein may include multiple processors, parallel processors, or both. Multiple processors may be included in, or coupled to, a single device or multiple devices.
  • A "processor" as used herein encompasses an electronic component which is able to execute a program or machine executable instruction.
  • References to a computing device comprising "a processor" should be interpreted as possibly containing more than one processor or processing core.
  • The processor may, for instance, be a multi-core processor.
  • A processor may also refer to a collection of processors within a single computer system or distributed amongst multiple computer systems.
  • The term computing device should also be interpreted to possibly refer to a collection or network of computing devices each including a processor or processors. Many programs have instructions performed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
  • The computer system 300 may include a main memory 320 and a static memory 330, where the memories can communicate with each other via a bus 308.
  • Memories described herein are tangible storage mediums that can store data and executable instructions and are non-transitory during the time instructions are stored therein.
  • The term "non-transitory" is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period.
  • The term "non-transitory" specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time.
  • A memory described herein is an article of manufacture and/or machine component.
  • Memories described herein are computer-readable mediums from which data and executable instructions can be read by a computer.
  • Memories as described herein may be random access memory (RAM), read only memory (ROM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, tape, compact disk read only memory (CD-ROM), digital versatile disk (DVD), floppy disk, Blu-ray disk, or any other form of storage medium known in the art.
  • Memories may be volatile or non-volatile, secure and/or encrypted, unsecure and/or unencrypted.
  • "Memory" is an example of a computer-readable storage medium.
  • Computer memory is any memory which is directly accessible to a processor. Examples of computer memory include, but are not limited to, RAM memory, registers, and register files. References to "computer memory" or "memory" should be interpreted as possibly being multiple memories. The memory may, for instance, be multiple memories within the same computer system. The memory may also be multiple memories distributed amongst multiple computer systems or computing devices.
  • The computer system 300 may further include a video display unit 350, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, or a cathode ray tube (CRT).
  • The computer system 300 may include an input device 360, such as a keyboard/virtual keyboard or touch-sensitive input screen or speech input with speech recognition, and a cursor control device 370, such as a mouse or touch-sensitive input screen or pad.
  • The computer system 300 can also include a disk drive unit 380, a signal generation device 390, such as a speaker or remote control, and a network interface device 340.
  • The disk drive unit 380 may include a computer-readable medium 382 in which one or more sets of instructions 384, e.g., software, can be embedded. Sets of instructions 384 can be read from the computer-readable medium 382.
  • The instructions 384, when executed by a processor, can be used to perform one or more of the methods and processes as described herein.
  • The instructions 384 may reside completely, or at least partially, within the main memory 320, the static memory 330, and/or within the processor 310 during execution by the computer system 300.
  • Dedicated hardware implementations, such as application-specific integrated circuits (ASICs), programmable logic arrays, and other hardware components, can be constructed to implement one or more of the methods described herein.
  • One or more embodiments described herein may implement functions using two or more specific hardware implementations.
  • The methods described herein may be implemented using a hardware computer system that executes software programs. Further, in an exemplary, non-limiting embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Virtual computer system processing can be constructed to implement one or more of the methods or functionalities as described herein, and a processor described herein may be used to support a virtual processing environment.
  • The present disclosure contemplates a computer-readable medium 382 that includes instructions 384 or receives and executes instructions 384 responsive to a propagated signal, so that a device connected to a network 301 can communicate voice, video or data over the network 301. Further, the instructions 384 may be transmitted or received over the network 301 via the network interface device 340.
  • Fig. 4 illustrates a method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • The method in Fig. 4 starts at S405 by generating, capturing and storing medical images of a subject of an interventional procedure.
  • For example, pre-operative computed tomography (CT) or cone-beam computed tomography images of the airways are acquired.
  • Cone-beam computed tomography is a medical imaging technique that involves X-ray computed tomography in which the X-rays are divergent, forming a cone.
  • In most embodiments, the airways are segmented as described herein.
  • The computed tomography or cone-beam computed tomography imaging and segmenting may be performed prior to an interventional operation in most embodiments.
  • Three-dimensional images may be acquired using computed tomography, cone-beam computed tomography, magnetic resonance, or other imaging modalities that provide a three-dimensional representation of anatomy.
  • At S410, the method in Fig. 4 includes generating a three-dimensional model based on segmenting pathways with multiple (a plurality of) branches in the subject of the interventional procedure.
  • The three-dimensional model of the pathways may be generated based on the cone-beam computed tomography, or any other three-dimensional imaging modality.
  • The three-dimensional model is of airways in the example of lungs, but may be of blood vessels in alternative embodiments. Additionally, while S410 specifies that the three-dimensional model is created by segmenting, a three-dimensional model may also be created by other mechanisms, such as rendering.
  • Segmentation produces a representation of the surface of structures, such as the pathways and branches of lungs or the components of a heart, and consists, for example, of a set of points in three-dimensional (3-D) coordinates on the surface of the organ, together with triangular plane segments defined by connecting neighboring groups of three points, such that the entire structure is covered by a mesh of non-intersecting triangular planes. A sketch of such a mesh representation follows below.
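  • For concreteness, such a segmented surface can be stored as an array of 3-D points plus an index list of triangles. The minimal sketch below assumes only the mesh description above; the specific vertices are illustrative.

```python
import numpy as np

# A segmented surface: points in 3-D plus triangles indexing those points.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
triangles = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

def triangle_areas(vertices, triangles):
    """Area of each triangular plane segment in the mesh."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

print(triangle_areas(vertices, triangles))
```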
  • The method in Fig. 4 also includes generating a path from a starting point to a target point in the three-dimensional model.
  • A tracked device may then be registered to the three-dimensional model. Registration involves placing different elements or systems onto a common coordinate system. For example, the shape of a tracked interventional medical device can be registered to pre-operative lung images, which then leads to an ability to account for breathing motion, dislocation of anatomy based on interaction with the tracked interventional medical device (e.g., lung deformation), and other movement, as well as an ability to plan a lung navigation in the airways. Shape sensing can be used for real-time guidance in the presence of respiratory/cardiac motion, using the error deviation of the current path from the planned path to help the user time the navigation inside and outside (off-road) of the airways. One common way to compute such a registration is sketched below.
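  • Registration of tracked-device coordinates to the model's coordinate system is often computed as a rigid least-squares fit over paired points. The sketch below uses the standard Kabsch (Procrustes) algorithm as one plausible way to do this; the disclosure does not specify a particular registration method, and the demo points are fabricated for illustration.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch algorithm). src and dst are (N, 3) arrays of paired points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Demo: device-space samples and their model-space matches related by a
# known rotation about z plus a translation.
src = np.random.rand(10, 3)
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([5.0, 0.0, 0.0])
R, t = rigid_register(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```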
  • At S470, the method in Fig. 4 includes determining whether a current position of a tracked device is outside of the pathways in the three-dimensional model.
  • Tracked devices are tracked in three dimensions as the tracked devices are navigated in the airways.
  • The tracking may be performed using optical shape sensing, but in other embodiments the tracking may involve electromagnetic sensors, ultrasound-based sensors, or X-ray-based device recognition based on dynamic analysis of fluoroscopy images.
  • Herein, optical shape sensing is generally described as the tracking modality.
  • However, tracked devices described herein can alternatively be tracked with electromagnetic sensors, ultrasound-based sensors, X-ray-based device recognition from fluoroscopy images, etc. One simple form of the inside/outside check is sketched below.
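  • One plausible way to realize the inside/outside determination is to compare the device tip's distance from the nearest airway centerline sample against the local lumen radius. The centerline-plus-radius representation in this sketch is an assumption for illustration, not the representation prescribed by the disclosure.

```python
import numpy as np

def is_inside_lumen(tip, centerline, radii):
    """Rough lumen test: the tip counts as 'inside' if its distance to the
    nearest centerline sample is within that sample's lumen radius.
    centerline: (N, 3) points; radii: (N,) local lumen radii."""
    d = np.linalg.norm(centerline - tip, axis=1)
    i = np.argmin(d)
    return d[i] <= radii[i]

# Hypothetical straight airway segment of radius 4 mm along the z axis.
centerline = np.column_stack([np.zeros(50), np.zeros(50), np.linspace(0, 50, 50)])
radii = np.full(50, 4.0)
print(is_inside_lumen(np.array([1.0, 0.0, 25.0]), centerline, radii))  # True
print(is_inside_lumen(np.array([7.0, 0.0, 25.0]), centerline, radii))  # False
```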
  • At S480, the method in Fig. 4 includes deforming the three-dimensional model to the current position of the tracked device when the current position of the tracked device is outside of the pathways in the three-dimensional model.
  • The deformation at S480 results in adjusting the visualizations seen by a physician using a pre-operative three-dimensional model of the airways and real-time interventional medical device positions.
  • The deforming at S480 thus results from the tracking insofar as the current path of the tracked device may be compared to the airway lumen (open space), and if the current path is outside of the closest airways, the computed tomography/cone-beam computed tomography images and the three-dimensional model are deformed.
  • Alternatively, the method in Fig. 4 includes deforming the three-dimensional model to center the tracked device within a current path of the three-dimensional model when the current position of the tracked device is inside of the pathways.
  • The method may also include not deforming the three-dimensional model at all when the current position of the tracked device is inside of the pathways.
  • The method in Fig. 4 provides a physician the ability to locate an interventional medical device with respect to the actual anatomy.
  • The three-dimensional model may be based on the image acquisition as at S405.
  • The physician using this three-dimensional model for real-time guidance can obtain information that is otherwise missing about how the anatomy is changing in real-time.
  • For example, the airways can stretch or contract by several centimeters throughout one respiratory cycle and will shift when a stiff device like a bronchoscope is inserted.
  • The tracking of interventional medical devices allows real-time information about the position to be used to update the three-dimensional model from its pre-operative state.
  • The numbering of steps in Fig. 4 may partially or fully hold true for other embodiments described herein.
  • For example, the numbering of S680 in Fig. 6 may be taken to mean that the step at S680 can be performed in place of S480 in Fig. 4.
  • Similarly, the numbering of S660 in Fig. 6 may be taken to mean that the step can be performed before S470 but after S415, relative to the functionality described for Fig. 4.
  • In other words, the relative values of the last two numerals for steps described herein for various embodiments may be taken as a general or specific placement relative to the numbering of steps in Fig. 4.
  • Fig. 5A illustrates comparative views of a tracked device and a three-dimensional model of pathways with multiple branches before dynamic interventional three-dimensional model deformation is applied, in accordance with a representative embodiment.
  • In Fig. 5A, the three-dimensional models of the pathway structure(s) are based on the point in time at which the computed tomography imagery was acquired, prior to the segmentation used to generate the three-dimensional model based on the image.
  • Fig. 5A illustrates how a tracked interventional medical device can diverge from the three-dimensional models as the anatomy changes in real-time. This is illustrated in image #2, image #4 and image #6 in Fig. 5A. Therefore, if a physician is using the underlying three-dimensional model for real-time guidance, the physician is missing information about how the anatomy is changing in real-time. For example, the airways can stretch or contract several centimeters throughout one respiratory cycle and will shift when a stiff device like a bronchoscope is inserted.
  • Fig. 5A shows visually how the interventional medical device will appear with respect to a pre-operatively acquired model.
  • Tracked interventional medical devices provide real-time information about position and are used herein to update what the three-dimensional pre-operative model looks like to the physician. This may be especially useful when intra-operative fluoroscopy is not being used.
  • In Fig. 5A, the tracked device is navigated down an airway.
  • In image #1, image #3, image #5 and image #7, the tracked device is aligned with the airway.
  • In image #2, image #4 and image #6, the interventional medical device appears outside of the airway even though the tracked device is actually within the lumen of the airway. That is, in several phases of the respiratory cycle shown in Fig. 5A, the interventional medical device appears to be outside of the airway when in reality the interventional medical device is still within the lumen.
  • Fig. 5B illustrates a progression for dynamic interventional three-dimensional model deformation using optical shape sensing, in accordance with a representative embodiment.
  • In Fig. 5B, a three-dimensional segmented airway is deformed using optical shape sensing feedback about respiratory motion, cardiac motion and anatomical motion due to interaction with a tracked device (e.g., lung deformation).
  • The medical imagery underlying image #1 may be computed tomography images taken by the first medical imaging system 210 well before a medical intervention.
  • Image #1 shows the pre-operative segmented model of the airway without a tracked interventional medical device, and image #2 shows the pre-operative segmented model of the airway with an endobronchial interventional medical device navigating the airway.
  • The tracking is performed using optical shape sensing as described herein.
  • Deformation may involve shrinking, expanding, shifting or otherwise moving a three-dimensional model to fit the current location of a tracked device.
  • The tracked interventional medical device 501 is labelled in each of image #2, image #3 and image #4.
  • The example in Fig. 5B is described using tracked interventional medical devices that are tracked with optical shape sensing.
  • However, electromagnetic sensing could alternatively be used, either with a single sensor at the tip of a tracked device or multiple sensors along the length of the tracked device.
  • In that case, recording of the tip position could be performed continuously to track the path taken by the entire tracked device.
  • Primary elements of a method described herein may involve acquiring a three-dimensional scan of the patient's airways, either with pre-operative computed tomography or intra-operative cone-beam computed tomography.
  • The pre-operative computed tomography or intra-operative cone-beam computed tomography is typically performed at one phase of the respiratory cycle.
  • The airways and lesion are segmented from the computed tomography data to create a three-dimensional model of the airways.
  • The resultant three-dimensional model of the airways is the starting point in image #1 in Fig. 5B.
  • An interventional medical device can be navigated on top of the segmented model of the airway to a desired location and made to appear consistently within the airways if there is no cardiac or respiratory motion.
  • Examples of an endobronchial lung procedure include lesion or lymph node biopsy, tumor ablation, airway stenting, tumor resection and other forms of lung procedures. If there were no cardiac or respiratory motion the interventional medical device could be navigated on top of the segmented model of the airway and the interventional medical device would always appear to stay inside the airways. However, significant motion occurs due to cardiac and respiratory motion, as well as minor movements of the patient and inserted devices.
  • When motion occurs, the interventional medical device appears to go outside of the airways of the model, which can be misleading to a physician.
  • One benefit of optical shape sensing is that the positional information of the entire length of the interventional medical device is known at all times. This information can be used to update the model of the airway in real-time to provide a more realistic image of where the interventional medical devices are relative to the anatomy.
  • The upper airways are very rigid, and therefore the interventional medical device will most likely be inside the lumen of the airway.
  • In the smaller airways, the walls are quite thin, and the tip of an interventional medical device may easily poke outside the walls. Therefore, the deformation of the model should predominantly use the information of the interventional medical device location when in the upper airways.
  • FIG. 6. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • The process in Fig. 6 starts at S630 by registering the tracked device in the pathways to the three-dimensional model. Registration is a process which involves aligning different coordinate systems, or assigning coordinates of an existing coordinate system to a newly introduced element.
  • For example, a coordinate system of the three-dimensional model may be used to assign coordinates to a tracked device so that the tracked device can be tracked in the coordinate system of the three-dimensional model.
  • The tracked device may be tracked using optical shape sensing, and the three-dimensional model may be a pre-operative model, such as in all four images in Fig. 5B.
  • At S675, the process in Fig. 6 includes calculating an offset between an immediately previous position of the tracked device and the current position of the tracked device relative to the three-dimensional model. Insofar as positions of the tracked device may be calculated continuously and rapidly during an interventional procedure, an offset may be used to help plot a trajectory and path for the tracked device, and also to assist in the deforming as at S480 in Fig. 4 or at S680 as described below.
  • The calculating at S675 may be performed after a determination such as at S470 in Fig. 4, where a check is made as to whether a tracked device is inside the airway lumen of the mesh of the three-dimensional model.
  • At S680, the process in Fig. 6 includes transforming the three-dimensional model to the current position of the tracked device based on the offset. For example, at S680 the transformation may involve a deformation by adjusting the three-dimensional model so that the three-dimensional model includes one or more immediately previous positions of the tracked device and the current position of the tracked device.
  • The transforming at S680 may involve deforming the entire pre-operative three-dimensional model to the current position of the tracked device.
  • The transforming may be based on recording the last known position of the tracked device and the current position of the tracked device and calculating the offset as at S675, insofar as recording positions assists, for example, in identifying which branches the tracked interventional medical device has already traversed.
  • The system 200 may remember the history of the tracked device at all time points and positions, in order to facilitate the calculating at S675 and the transforming at S680. A sketch of this bookkeeping follows below.
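  • A minimal sketch of this bookkeeping, assuming the model is stored as a vertex array and using a simple rigid translation as a stand-in for whatever deformation the system actually applies:

```python
import numpy as np

class DeviceHistory:
    """Records tracked-device positions so the offset between the previous
    and current position can drive a model update."""
    def __init__(self):
        self.positions = []  # full history; useful when only the tip is tracked

    def update(self, position):
        self.positions.append(np.asarray(position, dtype=float))

    def offset(self):
        if len(self.positions) < 2:
            return np.zeros(3)
        return self.positions[-1] - self.positions[-2]

def transform_model(vertices, offset):
    # Simplest possible "deformation": translate all model vertices by the
    # measured offset; a real system could instead warp the model locally.
    return vertices + offset

history = DeviceHistory()
history.update([0.0, 0.0, 0.0])
history.update([0.0, 2.0, 1.0])
model_vertices = np.zeros((4, 3))
model_vertices = transform_model(model_vertices, history.offset())
print(model_vertices)
```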
  • The process in Fig. 6 may involve showing the pre-operative three-dimensional model in an undeformed state.
  • Alternatively, small deformation corrections may be applied to keep the interventional medical device in the center of the lumen.
  • The process in Fig. 6 also includes iteratively deforming each new branch of the three-dimensional model containing the tracked device each time the tracked device moves to a new branch.
  • The location of the tracked device in each airway at each time point can be iteratively used to locally deform the airway to fit the tracked device, so as to keep the tracked device at the center of the airway lumen. The location can be identified after each deformation, and each time the tracked interventional medical device is advanced further, the local deforming may be performed again. A simplified sketch of such a local correction follows below.
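  • The per-branch correction could be as simple as shifting only the branch containing the device so that the device sits on the local centerline. The sketch below is a hypothetical simplification of such a local deformation; the tube-as-offset-centerline geometry is fabricated for illustration.

```python
import numpy as np

def center_branch_on_device(branch_vertices, branch_centerline, device_points):
    """Locally deform a single branch: shift its vertices so the branch
    centerline is centered on the tracked device's measured positions."""
    shift = device_points.mean(axis=0) - branch_centerline.mean(axis=0)
    return branch_vertices + shift, branch_centerline + shift

# Hypothetical branch displaced 3 mm from where the device actually is.
centerline = np.column_stack([np.zeros(20), np.zeros(20), np.linspace(0, 20, 20)])
vertices = centerline + np.array([2.0, 0.0, 0.0])   # crude tube stand-in
device = centerline + np.array([3.0, 0.0, 0.0])     # measured device positions
vertices, centerline = center_branch_on_device(vertices, centerline, device)
```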
  • In this way, the deformation shown in image #4 of Fig. 5B may be obtained.
  • The tracked device may be tracked with optical shape sensing. When the tracked device is outside of the pathways, the pathways of the three-dimensional model are adjusted to the location of the tracked device.
  • An important part of the method in Fig. 6 is that the system 200 stores the history of multiple or even all time points and positions of the tracked device as the tracked device is moved throughout the airways. These time points and positions may be critical for tracked devices where only the tip is tracked, or with minimal tracking along the length of the tracked device. The history of positions can be used to deform the three-dimensional model along the entire device trajectory and not just at the tip of the tracked device.
  • The method may be performed while navigating an optical shape sensing device in the manner shown in image #2, image #3 and image #4 of Fig. 5B.
  • The optical shape sensing device is registered to the pre-operative three-dimensional model.
  • The pre-operative three-dimensional model is shown in an undeformed state, or alternatively only small deformation corrections are applied to keep the tracked device in the center of the lumen.
  • The pre-operative three-dimensional model is then deformed to the current position of the optical shape sensing device. Since positions of the optical shape sensing device are recorded on an ongoing basis, the offset between the last position and the current position can be calculated with respect to position in the pre-operative three-dimensional model of the airway in order to help identify which branches have already been traversed. In this embodiment, the system may remember the history of all time points and positions of the optical shape sensing device.
  • The pre-operative three-dimensional model is transformed to the new device location as a whole. Afterwards, as the optical shape sensing device is advanced in the airways, the location is recorded at each time point and then used to deform the airway to fit the optical shape sensing device, such as by keeping the optical shape sensing device at the center of the airway lumen.
  • The system stores the history of all device time points and positions as the interventional medical device is moved throughout the airways. These time points and positions may be important for devices where only the tip is tracked, or with minimal tracking along the length of the interventional medical device. The history of positions can be used to deform the model along the entire device trajectory and not just at the tip of the interventional medical device.
  • The embodiment of Fig. 6 is described using optical shape sensing devices.
  • However, electromagnetic sensing devices can alternatively be used, either with a single sensor at the tip or multiple sensors along the length of the interventional medical device.
  • In that case, recording of the tip position may take place continuously to track the path taken by the electromagnetic sensing device.
  • Fig. 7 illustrates a bronchoscopic view of the airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • In Fig. 7, a bronchoscopic view is shown as a separate image from the image of the three-dimensional model.
  • The tracked device is shown in the pathways of the three-dimensional model, along with the path taken by the tracked device.
  • The position of a tracked interventional medical device relative to the three-dimensional model of the anatomy can be determined.
  • The bronchoscopic view can be created based on the position of the tip of the tracked interventional medical device, such as when the tracked interventional medical device is tracked with optical shape sensing. It may also be shown from any other fixed position on the medical device of interest.
  • A virtual bronchoscopic view may be formed from a segmented three-dimensional model based on the computed tomography data.
  • The positional information of the tracked device can be used to determine the closest location of the tracked device to the planned path. By simply calculating the distance between the planned path and the current path and executing one or more error-minimization routines, the position of the interventional medical device tip along the planned path can be determined. The bronchoscopic view at this position can then be shown to the user and automatically updated as the tracked device is moved throughout the airways. One way to compute the closest location is sketched below.
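  • The error minimization described here can be realized by projecting the tip onto each segment of the planned-path polyline and keeping the closest projection. This sketch assumes the planned path is stored as ordered 3-D points; the example coordinates are illustrative.

```python
import numpy as np

def closest_point_on_path(tip, path):
    """Project tip onto each segment of a polyline path (M, 3) and return
    the closest projected point and its segment index."""
    best, best_d, best_i = None, np.inf, -1
    for i in range(len(path) - 1):
        a, b = path[i], path[i + 1]
        ab = b - a
        # Clamp the projection parameter so the point stays on the segment.
        t = np.clip(np.dot(tip - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        p = a + t * ab
        d = np.linalg.norm(tip - p)
        if d < best_d:
            best, best_d, best_i = p, d, i
    return best, best_i

path = np.array([[0, 0, 0], [0, 0, 10], [5, 0, 20]], dtype=float)
point, seg = closest_point_on_path(np.array([1.0, 0.0, 4.0]), path)
print(point, seg)  # approx [0, 0, 4] on segment 0
```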
  • The three-dimensional positional information of the tracked device relative to the three-dimensional model can be used to show a virtual bronchoscopic view of the insides of the airways.
  • The three-dimensional model may be updated dynamically, and the tip of the tracked device can be shown with the deformed three-dimensional model.
  • A path from the trachea to a target can be planned, and then the position of the tracked device can be compared to the planned path. Error minimization can be used to select the most appropriate location of the device tip along the planned path and to show the bronchoscopic view from this point in the three-dimensional model.
  • Accordingly, a virtual bronchoscopic view can be created based on positions of a fixed point of the tracked device, by determining the closest locations of a planned path to the positions of the fixed point of the tracked device and automatically updating the bronchoscopic view as the tracked device is moved through the three-dimensional model.
  • An orientation of the tracked device can be identified and then used to determine a viewpoint for the virtual bronchoscopic view based on the orientation of the tracked device. For example, if a user wants to show the bronchoscopic view in front of the tip of the tracked device and show where the device tip will be moving to next, the orientation can be used to ensure the user is not viewing areas to the side or behind the tip. Similarly, knowing when to show a side view may also provide advantages for a user, such as when the tracked device tip gets close to a branch in the three-dimensional model. Proximity to a branch may be determined from the position of the tracked device relative to the three-dimensional model. A sketch of such a view construction follows below.
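  • A virtual camera for such a view can be derived from the tip pose using a conventional look-at construction, sketched below as an assumption rather than the method of the disclosure; the tip position and tangent are illustrative values.

```python
import numpy as np

def bronchoscopic_view_matrix(tip, tangent, up=np.array([0.0, 0.0, 1.0])):
    """Build a look-at view matrix from the tracked tip position and its
    orientation (tangent), so the virtual view faces where the tip points."""
    f = tangent / np.linalg.norm(tangent)        # forward direction
    r = np.cross(f, up); r /= np.linalg.norm(r)  # camera right
    u = np.cross(r, f)                           # recomputed camera up
    view = np.eye(4)
    view[:3, :3] = np.stack([r, u, -f])          # world-to-camera rotation
    view[:3, 3] = -view[:3, :3] @ tip            # translate tip to origin
    return view

print(bronchoscopic_view_matrix(np.array([0.0, 0.0, 5.0]),
                                np.array([0.0, 1.0, 0.0])))
```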
• Fig. 8 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • the method in Fig. 8 starts at S810 by generating a three-dimensional model based on segmenting pathways with multiple (a plurality of) branches in a subject of an interventional procedure.
  • the method in Fig. 8 includes generating a path from a starting point to a target point in the three-dimensional model.
• the method in Fig. 8 includes registering the tracked device in the pathways to the three-dimensional model.
  • positions of the tracked device are tracked with respect to the three-dimensional model as a lung is deflating.
• the method in Fig. 8 includes deforming only a local branch of the pathways that contains the tracked device or two local branches of the pathways nearest to the tracked device based on the amount of movement of the tracked device over time. For example, the amount of movement may be used as the basis for calculating an offset, and then the three-dimensional model may be corrected based on the offset, as sketched below.
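A minimal sketch of the offset-based local correction follows, under the simplifying assumption that the local branch is rigidly translated by the tip's displacement over the observation window (function and variable names are hypothetical):

```python
import numpy as np

def correct_local_branch(branch_vertices, tip_history):
    """Translate the local branch by the tracked tip's recent displacement.

    branch_vertices : (V, 3) vertices of the branch containing the device.
    tip_history     : (T, 3) tracked tip positions over time, T >= 2.
    """
    # Offset derived from the amount of movement of the tip over time.
    offset = tip_history[-1] - tip_history[0]
    return branch_vertices + offset
```

In practice the deformation would typically be smoother than a rigid translation, but the offset computed from device movement is the driving quantity either way.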
• the method in Fig. 8 overlaps in many key aspects with the method of Fig. 4, but also diverges in several respects.
  • the method in Fig. 8 is particular to a medical intervention involving the lungs, and even involves tracking positions of the tracked device as the lung is deflating.
• in Fig. 8, the deforming is limited to only one or two local branches, though this is not prohibited at S480. Rather, as should be clear, features of various embodiments may be combined with one another.
  • the method of Fig. 8 involves registering a deflated lung model and an inflated lung model using interventional medical device tracking.
  • the optical shape sensing device (or another tracked device) is registered to the pre-operative three-dimensional model.
  • the optical shape sensing device (or other tracked device) is navigated to the location of the target lesion (endobronchially).
  • the navigating can be performed, for example, using the methods described above.
  • the history of all device time points and positions is recorded with respect to the pre-operative model in order to identify which branches have been traversed and which lobe of the lung the optical shape sensing device is in.
  • the lung is then deflated, and as the lung is deflated the position of the optical shape sensing device is tracked with respect to the pre-operative model.
  • the pre-operative model is then deformed to the current position of the optical shape sensing device.
  • the deforming could be local, such as deforming only the lobe containing the optical shape sensing device and the target lesion. Alternatively, in cases where the lesion sits between two lobes, both of the lobes could be deformed.
  • the new deformed three-dimensional model is presented to the user representing the deflated lung state.
• FIG. 9 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
• in Fig. 9, the process starts at S950 by continuously tracking the tracked device while the lung is maneuvered.
  • the process in Fig. 9 includes deforming only a local branch of the pathway that contains the tracked device or two local branches of the pathways nearest to the tracked device to correct for lung motion.
  • lung motion can be tracked during a surgical resection using the deformation described herein.
  • the embodiment of Fig. 9 involves registering the optical shape sensing device (or other tracking device) to the pre-operative model. The tracked device is navigated before the surgery to the location of the target lesion endobronchially.
  • the motion of the registered optical shape sensing device is tracked while the lung is maneuvered such as by being pulled, stretched or flipped by the surgeon.
  • the pre-operative model is deformed to the current position of the optical shape sensing device.
  • the deformation may be local such as by deforming only the lobe containing the optical shape sensing device and the target lesion. Alternatively, in cases where the lesion sits between two lobes, both of the lobes can be deformed.
  • FIGs. 6, 8 and 9 have some degree of similarity, but describe three different use cases.
  • the use case for FIG. 6 is moving an optical shape sensing tracked device from the trachea to a distal airway.
  • the use case for FIG. 8 is keeping the optical shape sensing tracked device in one airway and collapsing the lung.
  • the use case for FIG. 9 is keeping the optical shape sensing tracked device in one airway but physically moving the lung with a tool from the outside surface of the lung as is done in surgery.
• a three-dimensional image of the anatomy of interest can be acquired pre-operatively or intra-operatively, and the three-dimensional anatomical image is segmented to form a mesh of the anatomy of interest such as airways.
  • the three-dimensional mesh consists of a number of faces and vertices (i.e. points in a three-dimensional coordinate system).
  • the coordinate systems of the optical shape sensing device and the imaging modality are registered so the x, y, z positions of the optical shape sensing device are in the same coordinate system as the x, y, z positions of the airway mesh.
  • the position of the optical shape sensing device is continuously measured at this point in the workflow and the positional coordinates are sent to a computer processor.
  • the optical shape sensing device coordinates are stored. Tissue may be manipulated in many ways at this point, though no manipulation of tissue is specifically required here.
  • the distance between the optical shape sensing device coordinates at frame (n) and at frame (n-1) is calculated. If the optical shape sensing device consists of multiple positional points, then the distance is calculated between each respective point along the optical shape sensing device. This offset between frames is then stored.
  • the distance between the mesh point(i) and the optical shape sensing device points (j:j+N) is calculated. The optical shape sensing device point closest to the mesh point(i) is determined via a distance minimization calculation.
• the above implementation for the embodiment of FIG. 9 may also be applied to the embodiments of FIGs. 6 and 8, for device movement and a deflating lung, respectively.
  • the algorithms may be varied, such as by adding thresholds to distance calculations and offsets to exclude parts of the mesh from being deformed. For example, if the optical shape sensing device is in an airway on the right side of the lung, deformation of the left side of the lung may not be desired since information on how much that lung is moving may not be available. Additionally, it may be optimal to only deform the lobe of the lung in which the optical shape sensing device is present and not the other lobes of the same lung. Setting a threshold on the maximum amount of acceptable distance from the optical shape sensing device to the mesh point can restrict which mesh points are adjusted.
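The frame-to-frame offset, the per-vertex distance minimization, and the distance threshold described in the preceding bullets could be combined roughly as follows. This is an illustrative sketch with hypothetical names, not the specific algorithm of the disclosure:

```python
import numpy as np

def deform_mesh(mesh_points, device_prev, device_curr, max_dist=20.0):
    """Deform mesh vertices according to the device's frame-to-frame motion.

    mesh_points : (M, 3) vertices of the segmented airway mesh.
    device_prev : (P, 3) device positions at frame n-1.
    device_curr : (P, 3) device positions at frame n, same sample points.
    max_dist    : vertices farther than this from every device point are
                  left undeformed, restricting correction to the local mesh.
    """
    offsets = device_curr - device_prev          # offset between frames, per point
    deformed = mesh_points.copy()
    for i, vertex in enumerate(mesh_points):
        d = np.linalg.norm(device_curr - vertex, axis=1)
        j = int(np.argmin(d))                    # closest device point to vertex
        if d[j] <= max_dist:                     # threshold excludes distant mesh
            deformed[i] = vertex + offsets[j]
    return deformed
```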
  • the mesh is a three-dimensional volume of the anatomy.
  • a centerline may be drawn within the mesh.
• a determination may be made as to whether the optical shape sensing device is within the mesh volume, or otherwise how far off-center the device is within the mesh.
  • the optical shape sensing device position can be compared to the mesh centerlines. Using an error minimization technique, the optical shape sensing device position with respect to the best matching location against the centerlines can be computed.
  • a threshold may be implemented to determine if the optical shape sensing device is outside of the mesh volume. Fine adjustments may be made to the mesh to deform accordingly.
  • the best matching location will determine which branch, lobe, or lung the optical shape sensing device is currently sitting in. This information may be used to also restrict which parts of the mesh are deformed versus those that are left undeformed.
  • the distance from the centerline may also be used to transform the mesh.
• the transform (or shift) necessary to re-center the optical shape sensing device with respect to the mesh can be computed so as to allow the optical shape sensing device to always “appear” at the center of the mesh, as sketched below.
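One way to realize the centerline matching and re-centering shift described above is sketched here, assuming the centerlines are available as a sampled point set in the registered coordinate system (names are hypothetical):

```python
import numpy as np

def recenter_shift(device_tip, centerline_points):
    """Match the device tip to the nearest centerline point and compute the
    shift that would re-center the device with respect to the mesh.

    device_tip        : (3,) tracked device tip position.
    centerline_points : (C, 3) points sampled along the mesh centerlines.
    Returns the index of the best-matching centerline point and the
    translation that moves that point onto the device tip.
    """
    d = np.linalg.norm(centerline_points - device_tip, axis=1)
    best = int(np.argmin(d))             # error-minimizing centerline match
    shift = device_tip - centerline_points[best]
    return best, shift
```

The index of the best match identifies the branch, lobe, or lung the device is sitting in, while the shift can be applied to the mesh so the device always appears centered.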
• continuous saving of the optical shape sensing device position can be used to determine the current branch in which the optical shape sensing device is located.
  • the continuous saving of the device position may also be used for devices that do not use optical shape sensing or devices which only track one or a few select points. These coordinates may be saved for every frame of data to build a device track which can then be used in a similar manner to the implementations described above.
  • Fig. 10 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • the process starts at S1030 by registering the three-dimensional model to a two-dimensional X-ray image space.
  • the process in Fig. 10 continues by tracking the tracked device in two dimensions based on the X-ray imaging and identifying locations of the tracked device in a fluoroscopic image based on the X-ray imaging.
  • the process in Fig. 10 includes projecting the three-dimensional model to overlay the three-dimensional model onto the two-dimensional X-ray image space as the tracked device is navigated through the pathways under guidance of the X-ray based tracking.
• the method of Fig. 10 involves using X-ray based tracking of devices to identify the interventional medical device location in two dimensions and adjusting the three-dimensional orientation and position of the three-dimensional segmented model.
  • the method of Fig. 10 includes registering the three-dimensional segmented model to the two-dimensional x-ray image space such as by using fiducials, markers, or an isocenter.
• the three-dimensional segmented model is projected in two dimensions to overlay the model on the x-ray image.
  • the tracked device is navigated down the airways under two-dimensional fluoroscopic guidance, and an image frame from the fluoroscopy is analyzed with image processing techniques to automatically identify the interventional medical device.
  • the position of the tracked device can be extracted from the image and compared to the projection image of the three-dimensional model.
  • the position of the tracked device can be checked to see if the tracked device is within the projected airway.
• the distance between the interventional medical device position and a centerline of the projected airway can be calculated, with a threshold set to define whether the interventional medical device is within the airway or not, as sketched below.
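A compact sketch of this two-dimensional in-airway test, assuming the projected airway centerline is available as points in the X-ray image coordinates (hypothetical names; the threshold would be tuned per application):

```python
import numpy as np

def inside_projected_airway(device_xy, centerline_xy, radius_threshold):
    """Decide whether a device point lies within the projected airway.

    device_xy        : (2,) device position extracted from the X-ray frame.
    centerline_xy    : (C, 2) projected airway centerline points.
    radius_threshold : maximum device-to-centerline distance counted as
                       being inside the airway.
    """
    d = np.linalg.norm(centerline_xy - device_xy, axis=1)
    return float(d.min()) <= radius_threshold
```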
  • the method of Fig. 10 can be modified to apply to segmentation of vessels instead of airways.
  • the distance between the interventional medical device position and a centerline of the projected vessel can again be calculated.
  • the method of Fig. 10 can be adjusted so that the projected model of the airway can be deformed to fit the actual device position if the interventional medical device is found to be outside of the actual airway or vessel.
• An out-of-plane component of the motion in the two-dimensional X-ray image can be accounted for using a small fiducial, as this may be challenging using only three-dimensional projections.
  • the x-ray image may be acquired from alternative angles to occasionally adjust the out-of-plane information.
  • a tracked device 250 may be tracked with the assistance of a hub such as when the tracked device 250 is being registered using fluoroscopic imaging as in FIG. 10.
  • An alternative method for registering the tracked device to fluoroscopy is described as follows: A hub as described herein may be placed in the trachea. The hub may contain a path of a known pattern. As the tracked device 250 is guided through the hub, the trajectory of the tracked device 250 is examined for this special path. Once the special hub path is found in the device trajectory, it can be used to register the tracked device 250 to the hub.
• a fluoroscopic image can be used to register the hub with respect to fluoroscopy; the hub may contain radiopaque markers or other discernable features, which are conducive to localizing the hub with respect to the fluoroscopic image. This allows the hub and fluoroscopy to be registered to each other. By combining these two sub-registrations, a full registration between the tracked device 250 and fluoroscopy is achieved.
  • FIG. 11 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
• the process starts at S1149 by determining a position of the tip of the tracked device with respect to (relative to) the three-dimensional model.
  • the process continues by creating a virtual bronchoscopic view based on the position of the tip of the tracked device.
• the tracked device is moved along the planned path, and the process returns to S1149 and S1150 to update the position of the tip and the virtual bronchoscopic view.
  • the method of Fig. 11 may involve showing a bronchoscopic view of an interventional medical device tip using an optical shape sensing position.
  • the position of the interventional medical device is determined relative to the anatomical model, and the bronchoscopic view can be created based on the position of the interventional medical device tip.
• the positional information of the interventional medical device at positions of a fixed point of the tracked device can be used to determine the closest location on a planned path. By calculating the distance between the planned path and the current path and performing error minimization routines, the position of the interventional medical device tip along the planned path can be determined.
  • the bronchoscopic view at this position can then be shown to the user and automatically updated as the interventional medical device is moved throughout the airways.
• a method implemented by a controller may include creating a virtual bronchoscopic view based on each of multiple positions of a fixed point of the tracked device; determining closest locations of a planned path to the positions of the fixed point of the tracked device; and automatically updating the bronchoscopic view as the tracked device is moved through the three-dimensional model.
  • the fixed point of the tracked device may be the tip of the tracked device.
• an example of the bronchoscopic view of the airways at a point marked on a three-dimensional pre-operative model as acquired in Fig. 11 is shown in Fig. 7. Anatomical features are shown in Fig. 7 beyond just the lumen.
  • the method of Fig. 11 can provide additional imaging views of what is directly outside the airway walls to provide better guidance for when to exit the airways. For example, the imaging of what is outside the airway walls can assist an interventionist by providing visual guidance not to exit when there is a large blood vessel directly outside of the airway wall.
• Fig. 12 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
• in Fig. 12, the process starts at S1222 by labelling each branch of a path in the pathways as a progressive sequence through the three-dimensional model.
• the tracking system (e.g., optical shape sensing) is registered to the tracked device.
  • This registration may be inherent in the mechanical integration of the tracking to the tracked device, such as the tracking element being fixed to the distal tip of the tracked device.
• for shape sensing, a hub can be fixed to the tracked device in a predetermined manner, which then allows the shape sensing fiber to be registered to the tracked device.
  • position tracking in FIG. 12 is not limited to optical shape sensing, and other forms of position tracking such as sensors may also or alternatively be used.
  • the three-dimensional model is registered to the tracked device.
  • the process in Fig. 12 includes updating the three-dimensional model to live anatomy of the subject.
  • the process in Fig. 12 includes highlighting a planned path through the pathways in the three-dimensional model and alerting an interventionalist when the tracked device moves away from the planned path.
  • the process in Fig. 12 includes presenting labels on a display for branches of the pathways of the three-dimensional model that are proximate to the tracked device as the tracked device is navigated.
  • the process in Fig. 12 includes deforming each of multiple (a plurality of) branches of the pathways in the three-dimensional model based on a trajectory of the tracked device as the tracked device approaches each of the multiple (plurality of) branches.
  • the airways are first segmented and planned paths created as described above.
  • each branch in the model of the airway is distinctly labelled.
  • An intuitive labelling scheme may be hierarchical and/or based on clinical nomenclature. The net effect is that a planned path can be communicated in terms of a sequence of branches, in addition to a visually continuous course.
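As one possible illustration of such a labelling scheme (the labels below are hypothetical and not clinical nomenclature), the airway model can be stored as a labelled tree so that a planned path becomes a progressive sequence of branch labels:

```python
# Hypothetical hierarchical labels for airway branches; each key maps a
# branch to its child branches in the segmented model.
airway_tree = {
    "trachea":       ["main.L", "main.R"],
    "main.R":        ["lobar.R.upper", "lobar.R.middle", "lobar.R.lower"],
    "lobar.R.lower": ["seg.R.lower.6", "seg.R.lower.7"],
}

# A planned path communicated as a sequence of branches, in addition to a
# visually continuous course through the model.
planned_path_labels = ["trachea", "main.R", "lobar.R.lower", "seg.R.lower.6"]
```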
• the use of a bronchoscope in Fig. 12 may be based on an assumption that the bronchoscope has a shape sensing fiber or another position tracking mechanism embedded within so that the bronchoscope is fully tracked, or that a shape-sensed device is provided through the working channel of the bronchoscope, or that a shape-sensed device is provided through the working channel and a hub is used to track the non-shape-sensed bronchoscope.
  • a working hub may be used in the embodiment where optical shape sensing fibers are used but is not necessarily required.
• the model of the airway is registered to the shape-sensed device, bronchoscope image and/or fluoroscopy, which allows the system to transfer the branch labels from the computed tomography to the images later in the procedure.
  • Breathing motion as well as patient motion are tracked via optical shape sensing or another position tracking mechanism as long as the interventional medical device is in an airway. Since the upper airways are relatively rigid, the upper airways deform less due to physiological motion.
  • the model of the airway can be updated to the live anatomy by initializing the position and orientation of the model against the upper airways seen in a fluoroscopic image, then following the tracked breathing/patient motion to update continuously.
  • a small device that operates similarly to a dynamic hub may be placed in the trachea to provide a reference in the body regarding the location of the bronchoscope.
  • branch labels appear in the bronchoscope image as the bronchoscope approaches the branches, or in the virtual bronchoscope image if the shape-sensed device is navigated through the working channel and deep into the airways beyond the bronchoscope itself. This may indicate to the user that a navigation decision must soon be made, and which branch the user should guide the bronchoscope or optical shape sensing device (or other position tracking device) towards.
  • the system detects that the branch has been traversed, and which branch has been taken. This determination is possible because the computed tomography and segmented airways are registered to the shape-sensed bronchoscope or device. This particular segment of the model of the airway is deformed based on the interventional medical device trajectory. Note that the locations of branch labels update to the deformed model. Furthermore, deformation is anchored by upper airway locations which deform less and by locations of the branches taken.
  • Distal segments of the airway that the device has not yet reached may be rigidly coupled to the newly deformed airway segments as an approximation, in order to maintain a sensible visualization of the remaining airways to the user. If the correct branch is taken based on the desired planned path, the path is highlighted to indicate the successful branch taken. Otherwise an alert is provided. As navigation proceeds, airways unrelated to the current path may be progressively deemphasized visually while still remaining visible to provide overall context. Additionally, each traversed branch can be labelled in the fluoroscopic image, so that the user can see the paths/branches traversed in the live image.
  • the bronchoscope in Fig. 12 could be registered to the pre-operative segmentation model based only on the positions/orientations of the branches in the bronchoscope view.
  • bronchoscopy images can be analyzed in real time to estimate the path and pose of the bronchoscope. Branches seen in the bronchoscope images can then be used to refine the registration further. This provides a reasonable early estimate of the bronchoscope trajectory, particularly in the rigid upper airways. Then, as branches are taken, the registration is refined using the tracking and information of the branches and paths taken.
• the pre-operative three-dimensional models may remain static and the tracked interventional medical device may instead be deformed based on tracheal reference, paths traversed, branches taken, etc.
  • Fig. 13 illustrates an intraluminal view of airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
  • a radial endobronchial ultrasound (REBUS) catheter is navigated down the right main branch and into small peripheral airways.
• the ultrasound transducer produces a radial image of the airway in which the ultrasound transducer is sitting. When there is contact with the wall of the airway, the image appears white; when there is a branching airway, the image appears black because all signal is lost to the air in that location.
  • the REBUS catheter is tracked with ultrasound imaging information as a pseudo-tracking mechanism rather than with optical shape sensing.
  • a REBUS catheter is tracked with optical shape sensing along with the ultrasound imaging information.
  • Fig. 14. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
• in Fig. 14, the process starts at S1430 by registering an ultrasound probe and the three-dimensional model when a branch of the pathways is initially obtained based on the ultrasound images.
  • the process of Fig. 14 includes continuously acquiring ultrasound images of the subject as the tracked device is navigated on the pathways.
• the process of Fig. 14 includes virtually marking a current location of the ultrasound probe on the three-dimensional model, and continuously acquiring ultrasound images and registering each branch of the pathways to the three-dimensional model as the ultrasound probe is navigated. For example, the diameter of the airways may be compared to the three-dimensional model to provide a rough context of which branch the tracked device is in.
  • ultrasound itself is the tracking method.
  • an interventional tool such as an ultrasound catheter may also be tracked with another type of tracking technology such as optical shape sensing or electromagnetics.
  • radial endobronchial ultrasound is used to assess the airways and guide needle biopsies and to guide navigation and determine the location of the transducer with respect to the three-dimensional segmented model.
  • a method for using (R)EBUS guided and tracked navigation is described in part by Fig. 14.
  • the (R)EBUS probe is navigated in the left or right main branch, and ultrasound imaging is continuously acquired from the (R)EBUS probe.
• Ultrasound imaging as in Fig. 14 may assume good image quality and no air pocket between the transducer and the airway wall.
  • a first branch is shown in Fig. 13 as a dark gap.
• a virtual marker may be placed on the three-dimensional model to indicate the current location of the transducer. As the probe is navigated further into the airways, ultrasound images are continuously acquired.
  • each branch is registered to the pre-operative computed tomography or to the segmented three-dimensional model. Additionally, every time a branch is visualized the virtual marker position is updated on the three-dimensional model; and if no branch is within the ultrasound image the virtual marker position can be estimated with some indication as to the uncertainty of the location.
  • a method implemented by a controller may include automatically determining the position of a tip of an ultrasound probe that is used to acquire the ultrasound images by calculating a diameter and wall thickness of a pathway from the ultrasound images and comparing the diameter and wall thickness to the three-dimensional model.
  • the controller may also optimize the position of the tip of the ultrasound probe based on previous locations of the ultrasound probe.
  • the location and orientation of the transducer can be further refined by measuring the diameter of the branching airway and the orientation of the airway with respect to the transducer in the image. Keeping track of each branch that the transducer passes can provide a better assessment as to which branch and path the entire device has gone down. A virtual image of the entire (R)EBUS probe can then be shown on the three-dimensional segmented model.
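The diameter and wall-thickness comparison could be sketched as follows, assuming these quantities have already been estimated from the radial ultrasound image and that each model branch stores comparable measurements (all names hypothetical):

```python
def match_branch(measured_diameter, measured_wall, model_branches):
    """Pick the model branch whose geometry best matches the ultrasound.

    measured_diameter : airway diameter estimated from the radial image.
    measured_wall     : airway wall thickness estimated from the image.
    model_branches    : iterable of (label, diameter, wall_thickness)
                        tuples taken from the three-dimensional model.
    Returns the label of the branch minimizing the geometric mismatch.
    """
    error, label = min(
        (abs(d - measured_diameter) + abs(w - measured_wall), name)
        for name, d, w in model_branches
    )
    return label
```

Previous probe locations could be used to restrict `model_branches` to branches reachable from the last known position, implementing the optimization mentioned above.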
• a virtual three-dimensional ultrasound image or roadmap of the airways may be constructed based on the tracked (R)EBUS. This may be shown on the pre-operative segmentation model. Furthermore, the pre-operative model may be updated to match the virtual (R)EBUS roadmap, e.g. based on deformable correction techniques described earlier.
  • Intermittent x-ray images can also be used to update the location of the virtual marker or check virtual device positioning consistent with the embodiment in Fig. 14.
• tracking such as optical shape sensing may be incorporated with the ultrasound-based approach to reduce the uncertainty, particularly in branch locations where image quality is reduced.
  • dynamic interventional three-dimensional model deformation enables dynamic navigational correction during interventional procedures.
  • the dynamic interventional three-dimensional model deformation is applicable to multiple different imaging modes such as X-ray and ultrasound, and to multiple different organs of a human subject including the lungs as well as the vascular system.
  • Dynamic interventional three-dimensional model deformation may be added as functionality to an existing product such as a device tracking product.
• the primary application discussed herein, lung biopsy, is not an absolute requirement, as dynamic interventional three-dimensional model deformation may also be used for other pulmonology applications such as surgical excision or ablating tissue.
  • dynamic interventional three-dimensional model deformation may be applied to other fields such as vascular or gastrointestinal.
• dynamic interventional three-dimensional model deformation applies to both Rayleigh (enhanced and regular) and Fiber Bragg implementations of shape sensing fiber, as well as to both manual and robotic manipulation of such devices.
• dynamic interventional three-dimensional model deformation applies to both X-ray based tracking of devices and ultrasound tracking of devices, in addition to optical shape sensing.
• although dynamic interventional three-dimensional model deformation has been described with reference to several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of dynamic interventional three-dimensional model deformation in its aspects.
• although dynamic interventional three-dimensional model deformation has been described with reference to particular means, materials and embodiments, it is not intended to be limited to the particulars disclosed; rather, dynamic interventional three-dimensional model deformation extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims.
• one or more embodiments may be referred to herein by the term “invention” or “inventive concept” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept.
• although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown.
  • This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.

Abstract

A controller for assisting navigation in an interventional procedure includes a memory that stores instructions and a processor (310) that executes the instructions. When executed by the processor (310), the instructions cause the controller to implement a process that includes obtaining (S410) a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure. The process also includes determining (S470), during the interventional procedure, whether a current position of a tracked device (250) is outside of the pathways in the three-dimensional model. When the current position of the tracked device (250) is outside of the pathways in the three-dimensional model, the process includes deforming (S480) the three-dimensional model to the current position of the tracked device (250).

Description

DYNAMIC INTERVENTIONAL THREE-DIMENSIONAL MODEL DEFORMATION
BACKGROUND
[001] Optical shape sensing (OSS) can be used to determine the shape of an optical fiber by using light along the optical fiber. The optical fiber can be provided in or on an interventional medical device to determine the shape of the interventional medical device. Information of the shape of the optical fiber can be used for localizing and navigating the interventional medical device during surgical intervention. Optical shape sensing is based on the principle that the wavelengths of reflected light differ under distinct circumstances. Distributed strain
measurements in the optical fiber can therefore be used to determine the shape of the optical fiber using characteristic Rayleigh backscatter or controlled grating patterns.
[002] Examples of interventional medical devices include guidewires, catheters, sheaths, and bronchoscopes, and optical shape sensing can be used to provide live positions and orientations of such interventional medical devices for guidance during minimally invasive procedures.
[003] One example of using optical shape sensing with an interventional device is to embed the optical fiber in a guidewire and navigate the guidewire through a channel in the body of the interventional device. Fig. 1A illustrates a virtual image of an optical fiber 101 embedded in a guidewire 102. The three-dimensional position of the guidewire 102 can then be registered to anatomical imaging modalities, such as x-ray or computed tomography (CT), to provide the anatomical context of the guidewire 102 and a shape sensed interventional device (e.g., a bronchoscope) into which the guidewire 102 is inserted or to which it is attached.
[004] Separate from the above, a lung lesion can be biopsied using an endobronchial approach in which a bronchoscope is guided down the airways. A camera at the end of the bronchoscope provides imagery of the airways, and an abnormal part of the airway can be biopsied using a small tool that is inserted via the working channel of the bronchoscope. Approaches for lung biopsy face challenges including:
o the bronchoscope can only fit in large upper airways
o many lesions are not connected to any airway, large or small, and many others are in or connected to only a small airway which interventional tools cannot reach
o respiratory and cardiac motion results in large deformations in the lung
The endobronchial approach may be limited to lesions that are in or connected to the upper airways since only the upper airways can fit the bronchoscope. Otherwise, visualization may be lost, resulting in interventional medical devices being blindly navigated within or outside of the airways to take random tissue samples. Additionally, when the lesion is not connected to an airway, the interventional medical device must puncture the airway wall and travel outside of the airway through the lung parenchyma to perform a transbronchial biopsy. A transbronchial biopsy may also be performed when the closest airway to the lesion is too small to navigate a tool.
[005] Several methods have evolved to address these endobronchial challenges and provide better navigation to physicians. In some of these methods, a pre-operative three-dimensional image of the lung is acquired and processed with algorithms to produce a three-dimensional model of the airways and lesion. Then, the physician can navigate using this three-dimensional model, with or without x-ray and/or a bronchoscope. However, three-dimensional models are limited to what the airways look like at the time of the pre-operative imaging. In addition to the imaging, some navigation methods also use tracking technology, like electromagnetic
navigation, which tracks the three-dimensional position of certain locations of the interventional devices. However, other than optical shape sensing, most tracking techniques rely on locating the tip of the interventional medical device, and this is problematic due to respiratory and cardiac motion.
[006] In addition to the challenges with lung biopsies described above, three major challenges are encountered in lung tumor surgery, regardless of the type of lung tumor surgery being performed. First, the location of the tumor may be initially determined based on a pre-operative computed tomography (CT) scan performed with the lung inflated. During surgery the lung is collapsed and therefore the three-dimensional orientation of the lung, and location of the tumor, do not match the pre-operative images which are used for planning. Fig. 1B illustrates a comparison between an inflated view of a lung and a collapsed view of the lung. This is further complicated by the fact that the lung is typically moved and re-positioned throughout the surgical procedure. This motion can cause the surgeon to lose track of the tumor location with respect to the surface of the lung and orientation of the lobes. Second, the lung is very complex with many blood vessels and airways which have to be carefully dissected and dealt with before the tumor and any feeding airways or vessels are removed. Third, small, non-palpable tumors are very hard to locate in the lung, especially with video-assisted thoracoscopic surgery (VATS) or robotic surgery.
[007] Fig. 1C illustrates a three-dimensional model of the airways and tumor with a planned path to reach the tumor. FIG. 1C also shows an overlaying of a planned path for an
interventional procedure onto real-time fluoroscopy images of airways and a tumor. The planned path 103 from the trachea is shown as a thin line from the top. The current position 106 of the bronchoscope is also seen in the fluoroscopy image to the right of the planned path 103. The tumor location 104 is shown at the end of the planned path 103. However, the three-dimensional model in FIG. 1C is static. Therefore, there is no way provided to adjust the three-dimensional model.
[008] As described herein, dynamic interventional three-dimensional model deformation can be used to enhance localization and navigation for interventional medical devices.
SUMMARY
[009] According to a representative embodiment of the present disclosure, a controller for assisting navigation in an interventional procedure includes a memory that stores instructions and a processor that executes the instructions. When executed by the processor, the instructions cause the controller to implement a process. The process includes obtaining a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure. The process also includes determining, during the interventional procedure, whether a current position of a tracked device is outside of the pathways in the three-dimensional model. When the current position of the tracked device is outside of the pathways in the three-dimensional model, the process includes deforming the three-dimensional model to the current position of the tracked device.
[010] According to another representative embodiment of the present disclosure, a method for assisting navigation in an interventional procedure includes obtaining a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure. The method also includes determining, during the interventional procedure by a controller executing instructions with a processor, whether a current position of a tracked device is outside of the pathways in the three-dimensional model. When the current position of the tracked device is outside of the pathways in the three-dimensional model, the method also includes deforming the three-dimensional model to the current position of the tracked device.
[011] According to yet another representative embodiment, a system for assisting navigation in an interventional procedure includes an imaging apparatus and a computer. The imaging apparatus generates, prior to an interventional procedure, computed tomography images of a subject of an interventional procedure to be used in generating, prior to the interventional procedure, a three-dimensional model based on segmenting pathways with a plurality of branches in the subject of the interventional procedure. The computer includes a memory that stores instructions and a processor that executes the instructions. When executed by the processor, the instructions cause the system to execute a process that includes obtaining the three-dimensional model generated prior to the interventional procedure. The process executed when the processor executes the instructions also includes determining whether a current position of a tracked device is outside of the pathways in the three-dimensional model. When the current position of the tracked device is outside of the pathways in the three-dimensional model, the process includes deforming the three-dimensional model to the current position of the tracked device.
BRIEF DESCRIPTION OF THE DRAWINGS
[012] The example embodiments are best understood from the following detailed description when read with the accompanying drawing figures. It is emphasized that the various features are not necessarily drawn to scale. In fact, the dimensions may be arbitrarily increased or decreased for clarity of discussion. Wherever applicable and practical, like reference numerals refer to like elements.
[013] Fig. 1A illustrates a virtual image of an optical fiber inserted in the working channel of a bronchoscope.
[014] Fig. 1B illustrates a comparison between an inflated view of a lung and a collapsed view of the lung.
[015] Fig. 1C illustrates a three-dimensional model of the airways and tumor with a planned path to reach the tumor. It also shows an overlay of real-time fluoroscopy images of the anatomy during an interventional procedure, including the position of an interventional device inside the airways.
[016] Fig. 2 illustrates a system for dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[017] Fig. 3 illustrates a general computer system, on which a method of dynamic
interventional three-dimensional model deformation can be implemented, in accordance with another representative embodiment.
[018] Fig. 4. illustrates a method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[019] Fig. 5A illustrates comparative views of a tracked device and a three-dimensional model of pathways with multiple branches before dynamic interventional three-dimensional model deformation is applied, in accordance with a representative embodiment.
[020] Fig. 5B illustrates a progression for dynamic interventional three-dimensional model deformation using optical shape sensing, in accordance with a representative embodiment.
[021] Fig. 6. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[022] Fig. 7 illustrates a virtual bronchoscopic view of the airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[023] Fig. 8. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[024] Fig. 9. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[025] Fig. 10. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[026] Fig. 11. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[027] Fig. 12. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[028] Fig. 13 illustrates an intraluminal view of airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[029] Fig. 14. illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
DETAILED DESCRIPTION
[030] In the following detailed description, for purposes of explanation and not limitation, representative embodiments disclosing specific details are set forth in order to provide a thorough understanding of an embodiment according to the present teachings. Descriptions of known systems, devices, materials, methods of operation and methods of manufacture may be omitted so as to avoid obscuring the description of the representative
embodiments. Nonetheless, systems, devices, materials and methods that are within the purview of one of ordinary skill in the art are within the scope of the present teachings and may be used in accordance with the representative embodiments. It is to be understood that the terminology used herein is for purposes of describing particular embodiments only and is not intended to be limiting. The defined terms are in addition to the technical and scientific meanings of the defined terms as commonly understood and accepted in the technical field of the present teachings.
[031] It will be understood that, although the terms first, second, third etc. may be used herein to describe various elements or components, these elements or components should not be limited by these terms. These terms are only used to distinguish one element or component from another element or component. Thus, a first element or component discussed below could be termed a second element or component without departing from the teachings of the inventive concept.
[032] The terminology used herein is for purposes of describing particular embodiments only and is not intended to be limiting. As used in the specification and appended claims, the singular forms of terms ‘a’, ‘an’ and ‘the’ are intended to include both singular and plural forms, unless the context clearly dictates otherwise. Additionally, the terms "comprises", and/or "comprising," and/or similar terms when used in this specification, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
[033] Unless otherwise noted, when an element or component is said to be “connected to”, “coupled to”, or “adjacent to” another element or component, it will be understood that the element or component can be directly connected or coupled to the other element or component, or intervening elements or components may be present. That is, these and similar terms encompass cases where one or more intermediate elements or components may be employed to connect two elements or components. However, when an element or component is said to be “directly connected” to another element or component, this encompasses only cases where the two elements or components are connected to each other without any intermediate or intervening elements or components.
[034] In view of the foregoing, the present disclosure, through one or more of its various aspects, embodiments and/or specific features or sub-components, is thus intended to bring out one or more of the advantages as specifically noted below. For purposes of explanation and not limitation, example embodiments disclosing specific details are set forth in order to provide a thorough understanding of an embodiment according to the present teachings. However, other embodiments consistent with the present disclosure that depart from specific details disclosed herein remain within the scope of the appended claims. Moreover, descriptions of well-known apparatuses and methods may be omitted so as to not obscure the description of the example embodiments. Such methods and apparatuses are within the scope of the present disclosure.
[035] Fig. 2 illustrates a system for dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[036] The system 200 of Fig. 2 includes a first medical imaging system 210, a computer 220, a display 225, a second medical imaging system 240 and a tracked device 250. The system in Fig. 2 may be used for dynamic interventional three-dimensional model deformation, such as for medical interventions to repair, replace or adjust an organ such as a lung, or to otherwise intervene in the body of a subject.
[037] An example of the first medical imaging system 210 is an X-ray system that includes an X-ray machine. The first medical imaging system 210 may perform computed tomography scanning or cone based computed tomography scanning prior to a medical intervention that involves the three-dimensional model deformation described herein. The three-dimensional model may be generated before or during the medical intervention based on the scanning by the first medical imaging system 210. Additionally, while the medical imaging by the first medical imaging system 210 may be performed prior to the medical intervention, this medical imaging may also be performed during the medical intervention.
[038] The tracked device 250 may be tracked using optical shape sensing or any other tracking technology like sensor-based electromagnetism. The tracked device 250 sends positional information to the computer 220. The computer 220 processes the medical images from the first medical imaging system 210 to generate the three-dimensional model. The computer 220 also processes data from the tracked device 250 to perform the deformation and adjustments to the three-dimensional model.
[039] An example of the second medical imaging system 240 is an ultrasound apparatus used to obtain ultrasound images during a medical intervention that involves the three-dimensional model deformation described herein. Another example of the second medical imaging system 240 is another X-ray system that includes an X-ray machine used to obtain fluoroscopic images during a medical intervention that involves the three-dimensional model deformation described herein. The second medical imaging system 240 may be used to track a medical device that is otherwise untracked such that the medical device becomes the tracked device 250. For example, some medical devices do not have optical shape sensing or electromagnetic tracking components disposed therein or thereon. Locating such a medical device in the x-ray image or using anatomical information from the ultrasound images provides the positional information of the otherwise un-tracked medical device such that it becomes the tracked device 250. Of course, a medical device may be a tracked device 250 with both a tracking technology embedded therein or thereon (e.g., optical shape sensing), while also being tracked simultaneously with the medical imaging modality of the second medical imaging system 240.
[040] Additionally, the first medical imaging system 210 and the second medical imaging system 240 may both be present and used during the medical intervention, such as when the first medical imaging system 210 is the X-ray system used for fluoroscopic imaging during the medical intervention and the second medical imaging system 240 is the ultrasound system used for ultrasound imaging during the medical intervention. Alternatively, only one of the first medical imaging system 210 or the second medical imaging system 240 is present and used during the medical intervention. For example, the first medical imaging system 210 may be used during a certain type of medical intervention, while the second medical imaging system 240 is only used as an additional component such as when x-ray or ultrasound is used as the tracking method. [041] The first medical imaging system 210 may be used to generate computed tomography images that serve as the basis for generating a three-dimensional model as described herein.
The computed tomography images are an example of three-dimensional anatomical images. The second medical imaging system 240 may be used to perform imaging that is used to track a tracked device. A tracked device may refer to an interventional medical device on or in which a sensor, optical shape sensing element or other tracking element is provided. In Fig. 2, the imaging that is used to track a tracked device may be performed in real-time using the second medical imaging system 240 alone or using both the second medical imaging system 240 and the first medical imaging system 210. In other words, the second medical imaging system 240 may be present and used during the interventional procedure without the first medical imaging system 210 or with the first medical imaging system 210. The second medical imaging system 240 provides imaging data to the computer 220.
[042] In Fig. 2, the computer 220 includes a display 225. The display may be used to display the three-dimensional model based on the imaging performed by the first medical imaging system 210, along with imagery obtained during a medical intervention based on the imaging performed by the second medical imaging system 240. Imagery obtained during a medical intervention may be, for example, imagery of an inflated lung that is deflated during the medical intervention. As the term “display” is used herein, the term should be interpreted to include a class of features such as a “display device” or “display unit”, and these terms encompass an output device, or a user interface adapted for displaying images and/or data. A display may output visual, audio, and/or tactile data. Examples of a display include, but are not limited to: a computer monitor, a television screen, a touch screen, tactile electronic display, Braille screen, Cathode ray tube (CRT), Storage tube, Bistable display, Electronic paper, Vector display, Flat panel display, Vacuum fluorescent display (VF), Light-emitting diode (LED) displays,
Electroluminescent display (ELD), Plasma display panels (PDP), Liquid crystal display (LCD), Organic light-emitting diode displays (OLED), a projector, and Head-mounted display.
[043] Any of the elements in Fig. 2 may include a controller described herein. A controller described herein may include a combination of a memory that stores instructions and a processor that executes the instructions in order to implement processes described herein. A controller may be housed within or linked to a workstation such as the computer 220 or another assembly of one or more computing devices, a display/monitor, and one or more input devices (e.g., a keyboard, joysticks and mouse) in the form of a standalone computing system, a client computer of a server system, a desktop or a tablet. The descriptive label for the term “controller” herein facilitates a distinction between controllers as described herein without specifying or implying any additional limitation to the term “controller”. The term “controller” broadly encompasses all structural configurations, as understood in the art of the present disclosure and as exemplarily described in the present disclosure, of an application specific main board or an application specific integrated circuit for controlling an application of various principles as described in the present disclosure. The structural configuration of the controller may include, but is not limited to, processor(s), computer-usable/computer readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s) and port(s).
[044] Additionally, although Fig. 2 shows components networked together, two such components may be integrated into a single system. For example, the computer 220 may be integrated with the first medical imaging system 210. That is, in embodiments, functionality attributed to the computer 220 may be implemented by (e.g., performed by) a system that includes the first medical imaging system 210. On the other hand, the networked components shown in Fig. 2 may also be spatially distributed such as by being distributed in different rooms or different buildings, in which case the networked components may be connected via data connections. In still another embodiment, one or more of the components in Fig. 2 is not connected to the other components via a data connection, and instead is provided with input or output manually such as by a memory stick or other form of memory. In yet another
embodiment, functionality described herein may be performed based on functionality of the elements in Fig. 2 but outside of the system shown in Fig. 2.
[045] Any of the first medical imaging system 210, the computer 220, and the second medical imaging system 240 in Fig. 2 may include some or all elements and functionality of the general computer system described below with respect to Fig. 3. For example, the computer 220 may include a controller for determining whether a current position of a tracked device is outside of the pathways in a three-dimensional model. A process executed by a controller may include receiving a three-dimensional model of pathways with multiple branches in a subject of an interventional procedure or receiving image data and generating based on the image data the three-dimensional model of pathways with the multiple branches in the subject of the
interventional procedure. [046] The process implemented when a controller of the computer 220 executes instructions also includes determining whether a current position of a tracked device is outside of the pathways in the three-dimensional model, and when the current position of the tracked device is outside of the pathways in the three-dimensional model, deforming the three-dimensional model to the current position of the tracked device. The same controller may also execute the functionality of generating a three-dimensional model based on segmenting pathways with a plurality of branches in a subject of an interventional procedure. However, the controller that tracks positions of the tracked device may obtain a segmented three-dimensional model that is generated and segmented elsewhere, such as by the first medical imaging system 210 or by the computer 220 executing instructions to process medical imagery created by the first medical imaging system 210. That is, a process implemented by a controller as described herein may include obtaining a three-dimensional model generated prior to an interventional procedure, wherein the three-dimensional model was generated based on segmenting pathways with a plurality of branches in a subject of the interventional procedure. The three-dimensional model does not, however, have to be generated “prior to” the interventional procedure. For example, in some embodiments the models are generated or updated during the interventional procedure and then the subsequent processes are still performed. As noted above, even the medical imaging by the first medical imaging system 210 may be performed during the same medical intervention in which the interventional three-dimensional model deformation is performed.
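At the top level, the determine-then-deform behaviour described in this paragraph can be pictured as a simple loop. The `contains` test and `deform` routine below are hypothetical placeholders for the point-in-pathway check and the deformation methods discussed throughout this disclosure; this is a sketch, not the controller's actual implementation:

```python
def navigation_loop(model, tracked_positions, deform):
    """Deform the model whenever the tracked device leaves its pathways.

    model             : three-dimensional model exposing a hypothetical
                        contains(position) point-in-pathway test.
    tracked_positions : iterable of device positions during the procedure.
    deform            : callable returning a model deformed to a position.
    """
    for position in tracked_positions:
        if not model.contains(position):
            model = deform(model, position)
        yield model   # updated model for display and navigation
```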
[047] Fig. 3 illustrates a general computer system, on which a method of dynamic
interventional three-dimensional model deformation can be implemented, in accordance with another representative embodiment.
[048] The computer system 300 can include a set of instructions that can be executed to cause the computer system 300 to perform any one or more of the methods or computer-based functions disclosed herein. The computer system 300 may operate as a standalone device or may be connected, for example, using a network 301, to other computer systems or peripheral devices.
[049] In a networked deployment, the computer system 300 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 300 can also be implemented as or incorporated into various devices, such as the first medical imaging system 210, the computer 220, the second medical imaging system 240, a stationary computer, a mobile computer, a personal computer (PC), a laptop computer, a tablet computer, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. The computer system 300 can be incorporated as or in a device that in turn is in an integrated system that includes additional devices. In an embodiment, the computer system 300 can be implemented using electronic devices that provide voice, video or data communication. Further, while the computer system 300 is illustrated in the singular, the term "system" shall also be taken to include any collection of systems or subsystems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
[050] As illustrated in Fig. 3, the computer system 300 includes a processor 310. A processor for a computer system 300 is tangible and non-transitory. As used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time. A processor is an article of manufacture and/or a machine component. A processor for a computer system 300 is configured to execute software instructions to perform functions as described in the various embodiments herein. A processor for a computer system 300 may be a general-purpose processor or may be part of an application specific integrated circuit (ASIC). A processor for a computer system 300 may also be a microprocessor, a microcomputer, a processor chip, a controller, a microcontroller, a digital signal processor (DSP), a state machine, or a programmable logic device. A processor for a computer system 300 may also be a logical circuit, including a programmable gate array (PGA) such as a field programmable gate array (FPGA), or another type of circuit that includes discrete gate and/or transistor logic. A processor for a computer system 300 may be a central processing unit (CPU), a graphics processing unit (GPU), or both. Additionally, any processor described herein may include multiple processors, parallel processors, or both. Multiple processors may be included in, or coupled to, a single device or multiple devices.
[051] A “processor” as used herein encompasses an electronic component which is able to execute a program or machine executable instruction. References to the computing device comprising “a processor” should be interpreted as possibly containing more than one processor or processing core. The processor may for instance be a multi-core processor. A processor may also refer to a collection of processors within a single computer system or distributed amongst multiple computer systems. The term computing device should also be interpreted to possibly refer to a collection or network of computing devices each including a processor or processors. Many programs have instructions performed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
[052] Moreover, the computer system 300 may include a main memory 320 and a static memory 330, where the memories can communicate with each other via a bus 308. Memories described herein are tangible storage mediums that can store data and executable instructions and are non-transitory during the time instructions are stored therein. As used herein, the term “non-transitory” is to be interpreted not as an eternal characteristic of a state, but as a characteristic of a state that will last for a period. The term “non-transitory” specifically disavows fleeting characteristics such as characteristics of a carrier wave or signal or other forms that exist only transitorily in any place at any time. A memory described herein is an article of manufacture and/or machine component. Memories described herein are computer-readable mediums from which data and executable instructions can be read by a computer. Memories as described herein may be random access memory (RAM), read only memory (ROM), flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, tape, compact disk read only memory (CD-ROM), digital versatile disk (DVD), floppy disk, Blu-ray disk, or any other form of storage medium known in the art. Memories may be volatile or non-volatile, secure and/or encrypted, unsecure and/or unencrypted.
[053] “Memory” is an example of a computer-readable storage medium. Computer memory is any memory which is directly accessible to a processor. Examples of computer memory include, but are not limited to RAM memory, registers, and register files. References to “computer memory” or “memory” should be interpreted as possibly being multiple memories. The memory may for instance be multiple memories within the same computer system. The memory may also be multiple memories distributed amongst multiple computer systems or computing devices.
[054] As shown, the computer system 300 may further include a video display unit 350, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid-state display, or a cathode ray tube (CRT). Additionally, the computer system 300 may include an input device 360, such as a keyboard/virtual keyboard or touch-sensitive input screen or speech input with speech recognition, and a cursor control device 370, such as a mouse or touch-sensitive input screen or pad. The computer system 300 can also include a disk drive unit 380, a signal generation device 390, such as a speaker or remote control, and a network interface device 340.
[055] In an embodiment, as depicted in Fig. 3, the disk drive unit 380 may include a computer-readable medium 382 in which one or more sets of instructions 384, e.g., software, can be embedded. Sets of instructions 384 can be read from the computer-readable medium 382.
Further, the instructions 384, when executed by a processor, can be used to perform one or more of the methods and processes as described herein. In an embodiment, the instructions 384 may reside completely, or at least partially, within the main memory 320, the static memory 330, and/or within the processor 310 during execution by the computer system 300.
[056] In an alternative embodiment, dedicated hardware implementations, such as application-specific integrated circuits (ASICs), programmable logic arrays and other hardware components, can be constructed to implement one or more of the methods described herein. One or more embodiments described herein may implement functions using two or more specific
interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules. Accordingly, the present disclosure encompasses software, firmware, and hardware implementations. Nothing in the present application should be interpreted as being implemented or implementable solely with software and not hardware such as a tangible non-transitory processor and/or memory.
[057] In accordance with various embodiments of the present disclosure, the methods described herein may be implemented using a hardware computer system that executes software programs. Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Virtual computer system processing can be constructed to implement one or more of the methods or functionalities as described herein, and a processor described herein may be used to support a virtual processing environment.
[058] The present disclosure contemplates a computer-readable medium 382 that includes instructions 384 or receives and executes instructions 384 responsive to a propagated signal, so that a device connected to a network 301 can communicate voice, video or data over the network 301. Further, the instructions 384 may be transmitted or received over the network 301 via the network interface device 340.
[059] Fig. 4 illustrates a method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[060] The method in Fig. 4 starts at S405 by generating, capturing and storing medical images of a subject of an interventional procedure. In some embodiments of dynamic interventional three-dimensional model deformation, pre-operative computed tomography (CT) or cone-beam computed tomography images of the airways are acquired. Cone-beam computed tomography is a medical imaging technique that involves X-ray computed tomography where the X-rays are divergent and thus form a cone. The airways are segmented as described herein in most embodiments. The computed tomography or cone-beam computed tomography imaging and segmenting may be performed prior to an interventional operation in most embodiments. Three-dimensional images may be acquired using computed tomography, cone-beam computed tomography, magnetic resonance, or other imaging modalities that provide a three-dimensional representation of anatomy.
[061] At S410, the method in Fig. 4 includes generating a three-dimensional model based on segmenting pathways with multiple (a plurality of) branches in the subject of the interventional procedure. The three-dimensional model of the pathways may be generated based on the cone-beam computed tomography, or any other three-dimensional imaging modality. The three-dimensional model is of airways in the example of lungs but may be blood vessels in alternative embodiments. Additionally, while S410 specifies that the three-dimensional model is created by segmenting, a three-dimensional model may also be created by other mechanisms such as rendering. Segmentation is a representation of the surface of structures such as the pathways and branches of lungs or the components of a heart and consists, for example, of a set of points in three-dimensional (3-D) coordinates on the surface of the organ, and triangular plane segments defined by connecting neighboring groups of 3 points, such that the entire structure is covered by a mesh of non-intersecting triangular planes.
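As a purely illustrative sketch of the data structure such a segmentation produces, the following shows a minimal triangular surface mesh (here, a tetrahedron) as vertex coordinates plus triangular faces; the variable names are hypothetical and the geometry is a stand-in for a segmented airway surface.

    import numpy as np

    # Vertices: points in 3-D coordinates on the surface of the organ.
    vertices = np.array([
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])

    # Faces: each row indexes three neighboring vertices forming one
    # non-intersecting triangular plane segment of the mesh.
    faces = np.array([
        [0, 1, 2],
        [0, 1, 3],
        [0, 2, 3],
        [1, 2, 3],
    ])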
[062] At S415, the method in Fig. 4 includes generating a path from a starting point to a target point in the three-dimensional model.
[063] As specified for embodiments described later, a tracked device may be registered to the three-dimensional model. Registration involves placing different elements or systems onto a common coordinate system. For example, the shape of a tracked interventional medical device can be registered to pre-operative lung images, which then leads to an ability to account for breathing motion, dislocation of anatomy based on interaction with the tracked interventional medical device (e.g., lung deformation), and other movement, as well as an ability to plan a lung navigation in the airways. Shape sensing can be used for real-time guidance in the presence of respiratory/cardiac motion using error deviation of the current path from the planned path to help the user time the navigation inside and outside (off-road) of the airways.
[064] At S470, the method in Fig. 4 includes determining whether a current position of a tracked device is outside of the pathways in the three-dimensional model. In Fig. 4, tracked devices are tracked in three dimensions as the tracked devices are navigated in the airways. The tracking may be performed using optical shape sensing, but in other embodiments the tracking may involve electromagnetic sensors, ultrasound-based sensors, or X-ray-based device recognition based on dynamic analysis of fluoroscopy images. In methods described herein, optical shape sensing is generally described. However, tracked devices described herein can alternatively be tracked with electromagnetic sensors, ultrasound-based sensors, or X-ray-based device recognition from fluoroscopy images, etc.
[065] At S480, the method in Fig. 4 includes deforming the three-dimensional model to the current position of the tracked device when the current position of the tracked device is outside of the pathways in the three-dimensional model. The deformation at S480 results in adjusting visualizations seen by a physician using a pre-operative three-dimensional model of the airways and real-time interventional medical device positions. The deforming at S480 thus results from the tracking insofar as the current path of the tracked device may be compared to the airway lumen (open space), and if the current path is outside of the closest airways, the computed tomography/cone-beam computed tomography images and the three-dimensional model are deformed.
[066] At S485, the method in Fig. 4 includes deforming the three-dimensional model to center the tracked device within a current path of the three-dimensional model when the current position of the tracked device is inside of the pathways. As an alternative to S485, the method may include not deforming the three-dimensional model at all when the current position of the tracked device is inside of the pathways.
[067] The method in Fig. 4 provides a physician an ability to locate an interventional medical device with respect to the actual anatomy. In anatomical segmentations resulting in a three-dimensional model, the three-dimensional model may be based on the image acquisition as at S405. The physician using this three-dimensional model for real-time guidance can obtain information that is otherwise missing about how the anatomy is changing in real-time. For example, the airways can stretch or contract by several centimeters throughout one respiratory cycle and will shift when a stiff device like a bronchoscope is inserted. The tracking of interventional medical devices allows real-time information about the position to be used to update the three-dimensional model from its pre-operative state.
[068] The individual steps of the method in Fig. 4 are shown in a particular order that generally will hold true for embodiments described herein. However, other steps for methods described herein may be performed in different orders than shown or may be performed on an ongoing basis concurrent with one another.
[069] Additionally, the numbering of steps in Fig. 4 may partially or fully hold true for other embodiments described herein. For example, the numbering of S680 in Fig. 6 may be taken to mean that the step at S680 can be performed in place of S480 in Fig. 4. Similarly, the numbering of S660 in Fig. 6 may be taken to mean that the step can be performed before S470 but after S415 relative to the functionality described for Fig. 4. Thus, the relative values of the last two numerals for steps described herein for various embodiments may be taken as a general or specific placement relative to the numbering of steps in Fig. 4.
[070] Fig. 5A illustrates comparative views of a tracked device and a three-dimensional model of pathways with multiple branches before dynamic interventional three-dimensional model deformation is applied, in accordance with a representative embodiment.
[071] In Fig. 5A, the three-dimensional models of the pathway structure(s) are based on the point in time at which the computed tomography imagery was acquired, prior to the segmentation used to generate the three-dimensional model from the image. Fig. 5A illustrates how a tracked interventional medical device can diverge from the three-dimensional models as the anatomy changes in real-time. This is illustrated in image #2, image #4 and image #6 in Fig. 5A. Therefore, if a physician is using the underlying three-dimensional model for real-time guidance, the physician is missing information about how the anatomy is changing in real-time. For example, the airways can stretch or contract several centimeters throughout one respiratory cycle and will shift when a stiff device like a bronchoscope is inserted. Fig. 5A shows visually how the interventional medical device will appear with respect to a pre-operatively acquired model. Tracked interventional medical devices provide real-time information about position, which is used herein to update what the three-dimensional pre-operative model looks like to the physician. This may be especially useful when intra-operative fluoroscopy is not being used.
[072] In Fig. 5A, the tracked device is navigated down an airway. In image #1, image #3, image #5 and image #7 the tracked device is aligned with the airway. However, in image #2, image #4 and image #6, the interventional medical device appears outside of the airway even though the tracked device is actually within the lumen of the airway. That is, in several phases of the respiratory cycle shown in Fig. 5A, the interventional medical device appears to be outside of the airway when in reality the interventional medical device is still within the lumen.
[073] Fig. 5B illustrates a progression for dynamic interventional three-dimensional model deformation using optical shape sensing, in accordance with a representative embodiment.
[074] In Fig. 5B, a three-dimensional segmented airway is deformed using optical shape sensing feedback about respiratory motion, cardiac motion and anatomical motion due to interaction with a tracked device (e.g., lung deformation). The segmented airway shown in image #1 is a pre-operative segmented airway, where R=0 denotes that the underlying medical image is taken at one specific phase of the respiratory cycle (for example, at full inspiration). For example, medical imagery underlying image #1 may be computed tomography images taken by the first medical imaging system 210 well before a medical intervention. Image #1 shows the pre-operative segmented model of the airway without a tracked interventional medical device, and image #2 shows the pre-operative segmented model of the airway with an endobronchial interventional medical device navigating the airway. In image #2, the tracking is performed using optical shape sensing as described herein. R=0 in image #2 may be taken to mean that the respiratory phase at this point in the medical intervention is at the same phase of respiration as was seen when the pre-operative image was acquired.
[075] In Fig. 5B, at R=1 the airways move during the respiratory cycle so that the current location(s) of the airways are partly, largely or entirely offset from the original locations of the pre-operative segmented model shown in image #1. This is shown by the duplicated pathways in image #3 that are on top of and offset from the original pathways of the pre-operative segmented model carried over from image #1. That is, as shown in image #3, at time R=1 (for example, full expiration of the respiratory cycle) the tracked interventional medical device and the actual pathway structure have moved away from the pre-operative segmented model shown in image #1. Accordingly, in image #4 the segmented model is deformed so that it fits the tracked interventional medical device and the actual pathway structure that exist at R=1.
[076] As shown in image #4, deformation may involve shrinking, expanding, shifting or otherwise moving a three-dimensional model to fit the current location of a tracked device. In Fig. 5B, the position of the lungs during the respiratory phase in which the pre-operative three-dimensional scan was acquired is defined as R=0, and another phase of the respiratory cycle different from the pre-operative acquisition is defined as R=1. The three-dimensional segmented airways from R=0 are shown in each of image #1, image #2, image #3 and image #4, whereas in image #3 and image #4 the offset structure is superimposed on the three-dimensional segmented airways before and after the deforming. The offset structure represents the actual locations of the pathways at R=1. The tracked interventional medical device 501 is labelled in each of image #2, image #3 and image #4.
[077] The example in Fig. 5B is described using tracked interventional medical devices that are tracked with optical shape sensing. However, electromagnetic sensing could alternatively be used, either with a single sensor at the tip of a tracked device or multiple sensors along the length of the tracked device. For an electromagnetic sensor at the tip, recording of the tip position could be performed continuously to track the path taken by the entire tracked device.
[078] As described above with respect to Fig. 4, primary elements of a method described herein may involve acquiring a three-dimensional scan of the patient’s airways either with pre-operative computed tomography or intra-operative cone-beam computed tomography. The pre-operative computed tomography or intra-operative cone-beam computed tomography are typically performed at one phase of the respiratory cycle. The airways and lesion are segmented from the computed tomography data to create a three-dimensional model of the airways. The resultant three-dimensional model of the airways is the starting point in image #1 in Fig. 5B. During an endobronchial lung procedure an interventional medical device can be navigated on top of the segmented model of the airway to a desired location and made to appear consistently within the airways if there is no cardiac or respiratory motion. Examples of an endobronchial lung procedure include lesion or lymph node biopsy, tumor ablation, airway stenting, tumor resection and other forms of lung procedures. If there were no cardiac or respiratory motion the interventional medical device could be navigated on top of the segmented model of the airway and the interventional medical device would always appear to stay inside the airways. However, significant motion occurs due to cardiac and respiratory motion, as well as minor movements of the patient and inserted devices. When motion occurs, the interventional medical device appears to go outside of the airways of the model and can be misleading to a physician. One benefit of optical shape sensing is that the positional information of the entire length of the interventional medical device is known at all times. This information can be used to update the model of the airway in real-time to provide a more realistic image of where the interventional medical devices are relative to the anatomy. The upper airways are very rigid, and therefore most likely the interventional medical device will be inside the lumen of the airway. However, in the peripheral (far out) airways the walls are quite thin, and the tip of an interventional medical device may easily poke outside the walls. Therefore, the deformation of the model should predominantly use the information of the interventional medical device location when in the upper airways.
[079] Fig. 6 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[080] The process in Fig. 6 starts at S630 by registering the tracked device in the pathways to the three-dimensional model. Registration is a process which involves aligning different coordinate systems or assigning coordinates of an existing coordinate system to a newly introduced element. At S630, a coordinate system of the three-dimensional model may be used to assign coordinates to a tracked device so that the tracked device can be tracked in the coordinate system of the three-dimensional model. The tracked device may be tracked using optical shape sensing, and the three-dimensional model may be a pre-operative model such as in all four images in Fig. 5B.
[081] At S675, the process in Fig. 6 includes calculating an offset between an immediately previous position of the tracked device and the current position of the tracked device relative to the three-dimensional model. Insofar as positions of the tracked device may be calculated continuously and rapidly during an interventional procedure, an offset may be used to help plot a trajectory and path for the tracked device, and also to assist in the deforming as at S480 in Fig. 4 or at S680 as described below. The calculating at S675 may be performed after a determination such as at S470 in Fig. 4 where a check is made as to whether a tracked device is inside the airway lumen of the mesh of the three-dimensional model. [082] At S680, the process in Fig. 6 includes transforming the three-dimensional model to the current position of the tracked device based on the offset. For example, at S680 the
transformation may involve a deformation by adjusting the three-dimensional model so that the three-dimensional model includes one or more immediately previous positions of the tracked device and the current position of the tracked device.
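For illustration only, a minimal sketch of the offset-and-transform steps at S675 and S680 follows, assuming tracked positions arrive as 3-D coordinates. The names transform_model and on_new_position, the global position history list, and the rigid whole-model shift are simplifying assumptions of this sketch rather than the disclosed implementation.

    import numpy as np

    position_history = []  # the system keeps all time points and positions

    def transform_model(mesh_vertices, previous_position, current_position):
        # S675/S680: rigidly shift the whole pre-operative model by the
        # offset between the last known and current tracked positions.
        offset = current_position - previous_position
        return mesh_vertices + offset

    def on_new_position(mesh_vertices, current_position):
        # Record every position so traversed branches can be identified later.
        position_history.append(np.asarray(current_position, dtype=float))
        if len(position_history) < 2:
            return mesh_vertices  # no offset available yet
        return transform_model(mesh_vertices,
                               position_history[-2], position_history[-1])

    mesh = np.zeros((100, 3))  # placeholder model vertices
    mesh = on_new_position(mesh, [0.0, 0.0, 10.0])
    mesh = on_new_position(mesh, [0.5, 0.0, 11.0])  # shifts by the offset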
[083] The transforming at S680 may involve deforming the entire pre-operative three- dimensional model to the current position of the tracked device. The transforming may be based on recording the last known position of the tracked device and the current position of the tracked device and calculating the offset as at S675 insofar as recording positions assists, for example, in identifying which branches the tracked interventional medical device has already traversed. The system 200 may remember the history of the tracked device at all times and all positions, in order to facilitate the calculating at S675 and the transforming at S680.
[084] As an alternative to S680, if the tracked device is inside the airway lumen of the mesh of the three-dimensional model as determined at S470, the process in Fig. 6 may involve showing the pre-operative three-dimensional model in an undeformed state. Alternatively, small deformation corrections may be applied to keep the interventional medical device in the center of the lumen.
[085] At S690, the process in Fig. 6 includes iteratively deforming each new branch of the three-dimensional model containing the tracked device each time the tracked device moves to a new branch. At S690, the location of the tracked device in each airway at each time point can be iteratively used to locally deform the airway to fit the tracked device, so as to keep the tracked device at the center of the airway lumen. The location can be identified after each deformation and each time the tracked interventional medical device is advanced further, and each time the tracked device is advanced further the local deforming may be again performed.
[086] In the method of Fig. 6, the deformation shown in image #4 of Fig. 5B may be obtained. In Fig. 6, the tracked device may be tracked with optical shape sensing. When the tracked device is outside of the pathways, the pathways of the three-dimensional model are adjusted to the location of the tracked device. An important part of the method in Fig. 6 is that the system 200 stores the history of multiple or even all time points and positions of the tracked device as the tracked device is moved throughout the airways. These time points and positions may be critical for tracked devices where only the tip is tracked, or with minimal tracking along the length of the tracked device. The history of positions can be used to deform the three-dimensional model along the entire device trajectory and not just at the tip of the tracked device.
[087] In the embodiment of Fig. 6, the method may be performed while navigating an optical shape sensing device in the manner shown in image #2, image #3 and image #4 of Fig. 5B. The optical shape sensing device is registered to the pre-operative three-dimensional model. Upon checking if the optical shape sensing device is inside the airway lumen/model mesh, if yes then the pre-operative three-dimensional model is shown in an undeformed state or alternatively only small deformable corrections are applied to keep the tracked device in the center of the lumen.
On the other hand, if the optical shape sensing device is not inside the airway lumen/model mesh, the pre-operative three-dimensional model is deformed to the current position of the optical shape sensing device. Since positions of the optical shape sensing device are recorded on an ongoing basis, the offset between the last position and the current position can be calculated with respect to position in the pre-operative three-dimensional model of the airway in order to help identify which branches have already been traversed. In this embodiment, the system may remember the history of all time points and positions of the optical shape sensing device.
[088] As also described for the embodiment of Fig. 6, the pre-operative three-dimensional model is transformed to the new device location as a whole. Afterwards as the optical shape sensing device is advanced in the airways, the location is recorded at each time point and then used to deform the airway to fit the optical shape sensing device such as by keeping the optical shape sensing device at the center of the airway lumen. In the method of Fig. 6, the system stores the history of all device time points and positions as the interventional medical device is moved throughout the airways. These time points and positions may be important for devices where only the tip is tracked, or with minimal tracking along the length of the interventional medical device. The history of positions can be used to deform the model along the entire device trajectory and not just at the tip of the interventional medical device.
[089] The embodiment of Fig. 6 is described using optical shape sensing devices. However, electromagnetic sensing devices can be used, either with a single sensor at the tip or multiple sensors along the length of the interventional medical device. For an electromagnetic sensor at the tip, recording of the tip position may take place continuously to track the path taken by the electromagnetic sensing device.
[090] Fig. 7 illustrates a bronchoscopic view of the airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[091] In Fig. 7, a bronchoscopic view is shown as a separate image from the image of the three-dimensional model. The tracked device is shown in the pathways of the three-dimensional model along with the path taken by the tracked device. Using methods as described in Fig. 4 and Fig. 6, the position of a tracked interventional medical device relative to the three-dimensional model of the anatomy can be determined. The bronchoscopic view can be created based on the position of the tip of the tracked interventional medical device, such as when the tracked interventional medical device is tracked with optical shape sensing. It may also be shown from any other fixed position on the medical device of interest.
[092] A virtual bronchoscopic view may be formed from a segmented three-dimensional model based on the computed tomography data. The positional information of the tracked device can be used to determine the closest location of the tracked device to the planned path. By simply calculating the distance between the planned path and the current path and executing one or more error minimization routines, the position of the interventional medical device tip along the planned path can be determined. The bronchoscopic view at this position can then be shown to the user and automatically updated as the tracked device is moved throughout the airways.
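As an illustrative sketch of this error-minimization step, the following chooses the planned-path sample closest to the tracked tip; the function name view_position_on_planned_path and the sampled-polyline representation of the planned path are assumptions of the sketch.

    import numpy as np

    def view_position_on_planned_path(planned_path, tip_position):
        # Error minimization: choose the planned-path sample closest to the
        # tracked tip; the virtual bronchoscopic view is rendered from there.
        errors = np.linalg.norm(planned_path - tip_position, axis=1)
        best = int(np.argmin(errors))
        return planned_path[best], float(errors[best])

    planned_path = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 100.0, 201)])
    tip = np.array([1.5, -0.5, 42.3])  # current tracked tip position
    view_point, error = view_position_on_planned_path(planned_path, tip)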
[093] The three-dimensional positional information of the tracked device relative to the three-dimensional model can be used to show a virtual bronchoscopic view of the insides of the airways.
[094] For example, the three-dimensional model may be updated dynamically, and the tip of the tracked device can be shown with the deformed three-dimensional model. Alternatively, a path from the trachea to a target can be planned, and then the position of the tracked device can be compared to the planned path. Error minimization can be used to select the most appropriate location of the device tip to the planned path and to show the bronchoscopic view from this point in the three-dimensional model. As an example, a method implemented by a controller may include creating a virtual bronchoscopic view based on positions of a fixed point of the tracked device; determining closest locations of a planned path to the positions of the fixed point of the tracked device; and automatically updating the bronchoscopic view as the tracked device is moved through the three-dimensional model.
[095] In an embodiment, an orientation of the tracked device can be identified and then used to determine a viewpoint for the virtual bronchoscopic view based on the orientation of the tracked device. For example, if a user wants to show the bronchoscopic view in front of the tip of the tracked device and show where the device tip will be moving to next, the orientation can be used to ensure the user is not viewing areas to the side or behind the tip. Similarly, knowing when to show a side view may also provide advantages for a user, such as when the tracked device tip gets close to a branch in the three-dimensional model. Proximity to a branch may be
automatically used to trigger a display of multiple virtual bronchoscopic views to help the user move down the correct branch.
[096] Fig. 8 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[097] The method in Fig. 8 starts at S810 by generating a three-dimensional model based on segmenting pathways with multiple (a plurality of) branches in a subject of an interventional procedure.
[098] At S815, the method in Fig. 8 includes generating a path from a starting point to a target point in the three-dimensional model.
[099] At S830, the method in Fig. 8 includes registering the tracked device in the pathways to the three-dimensional model.
[0100] At S850, positions of the tracked device are tracked with respect to the three-dimensional model as a lung is deflating.
[0101] At S870, the method in Fig. 8 includes deforming only a local branch of the pathways that contains the tracked device or two local branches of the pathways nearest to the tracked device based on the amount of movement of the tracked device over time. For example, the amount of movement may be used as the basis for calculating an offset, and then the three-dimensional model may be corrected based on the offset.
[0102] The method in Fig. 8 overlaps with the method in Fig. 4 in many key aspects but also diverges in several respects. For example, the method in Fig. 8 is particular to a medical intervention involving the lungs, and even involves tracking positions of the tracked device as the lung is deflating. Additionally, at S870 the deforming is limited to only one or two local branches, though this is not prohibited at S480. Rather, as should be clear, features of various
embodiments such as the embodiment of Fig. 8 may be exchanged with features in other embodiments, or added into other embodiments. [0103] As described above, the method of Fig. 8 involves registering a deflated lung model and an inflated lung model using interventional medical device tracking. The optical shape sensing device (or another tracked device) is registered to the pre-operative three-dimensional model. Before surgery, the optical shape sensing device (or other tracked device) is navigated to the location of the target lesion (endobronchially). The navigating can be performed, for example, using the methods described above. The history of all device time points and positions is recorded with respect to the pre-operative model in order to identify which branches have been traversed and which lobe of the lung the optical shape sensing device is in. The lung is then deflated, and as the lung is deflated the position of the optical shape sensing device is tracked with respect to the pre-operative model. The pre-operative model is then deformed to the current position of the optical shape sensing device. The deforming could be local, such as deforming only the lobe containing the optical shape sensing device and the target lesion. Alternatively, in cases where the lesion sits between two lobes, both of the lobes could be deformed. The new deformed three-dimensional model is presented to the user representing the deflated lung state.
[0104] Fig. 9 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[0105] In Fig. 9, the process starts at S950 by continuously tracking the tracked device while the lung is maneuvered.
[0106] At S980, the process in Fig. 9 includes deforming only a local branch of the pathway that contains the tracked device or two local branches of the pathways nearest to the tracked device to correct for lung motion.
[0107] As described above, for the method in Fig. 9 lung motion can be tracked during a surgical resection using the deformation described herein. As with other embodiments, the embodiment of Fig. 9 involves registering the optical shape sensing device (or other tracking device) to the pre-operative model. The tracked device is navigated before the surgery to the location of the target lesion endobronchially.
[0108] During surgical dissection, the motion of the registered optical shape sensing device is tracked while the lung is maneuvered such as by being pulled, stretched or flipped by the surgeon. Based on the motion of the optical shape sensing device, the pre-operative model is deformed to the current position of the optical shape sensing device. The deformation may be local such as by deforming only the lobe containing the optical shape sensing device and the target lesion. Alternatively, in cases where the lesion sits between two lobes, both of the lobes can be deformed.
[0109] The methods of FIGs. 6, 8 and 9 have some degree of similarity, but describe three different use cases. The use case for FIG. 6 is moving an optical shape sensing tracked device from the trachea to a distal airway. The use case for FIG. 8 is keeping the optical shape sensing tracked device in one airway and collapsing the lung. The use case for FIG. 9 is keeping the optical shape sensing tracked device in one airway but physically moving the lung with a tool from the outside surface of the lung as is done in surgery.
[0110] An algorithm for the surgical method of FIG. 9 is described now and is also applicable to the embodiments of FIGs. 6 and 8. The following description details an algorithm using real data for implementing the surgical method of FIG. 9. Initially, a three-dimensional image of the anatomy of interest can be acquired pre-operatively or intra-operatively, and the three-dimensional anatomical image is segmented to form a mesh of the anatomy of interest such as airways. The three-dimensional mesh consists of a number of faces and vertices (i.e., points in a three-dimensional coordinate system). The coordinate systems of the optical shape sensing device and the imaging modality are registered so the x, y, z positions of the optical shape sensing device are in the same coordinate system as the x, y, z positions of the airway mesh. The position of the optical shape sensing device is continuously measured at this point in the workflow and the positional coordinates are sent to a computer processor.
[0111] For a sequence of frames starting at frame (n-1) the optical shape sensing device coordinates are stored. Tissue may be manipulated in many ways at this point, though no manipulation of tissue is specifically required here. At frame (n) the distance between the optical shape sensing device coordinates at frame (n) and at frame (n-1) is calculated. If the optical shape sensing device consists of multiple positional points, then the distance is calculated between each respective point along the optical shape sensing device. This offset between frames is then stored. For each point in the mesh, the distance between the mesh point (i) and the optical shape sensing device points (j:j+N) is calculated. The optical shape sensing device point closest to the mesh point (i) is determined via a distance minimization calculation. The previously calculated offset at this optical shape sensing device point is added to mesh point (i) and the new mesh point (i) coordinate is stored. This is repeated for all mesh points. After all mesh points are adjusted by the optical shape sensing device offset, a new mesh can be displayed that has been dynamically deformed based on the optical shape sensing device position. This continuous process can be performed for all frames of optical shape sensing data (n:n+M), resulting in a real-time mesh visualization.
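For illustration only, a minimal sketch of this per-frame mesh update follows. The function name deform_mesh, the optional max_distance threshold (anticipating the variants discussed in the next paragraph), and the synthetic data are assumptions of the sketch, not the disclosed implementation.

    import numpy as np

    def deform_mesh(mesh_points, device_prev, device_curr, max_distance=None):
        # One frame of the deformation: each mesh point inherits the offset
        # of its closest device point (distance minimization). The optional
        # max_distance threshold excludes far-away mesh points from being
        # deformed, as discussed in the following paragraph.
        offsets = device_curr - device_prev          # offset per device point
        new_mesh = mesh_points.copy()
        for i, point in enumerate(mesh_points):
            distances = np.linalg.norm(device_curr - point, axis=1)
            j = int(np.argmin(distances))            # closest device point
            if max_distance is None or distances[j] <= max_distance:
                new_mesh[i] = point + offsets[j]
        return new_mesh

    rng = np.random.default_rng(0)
    device_prev = rng.random((20, 3)) * 10.0                # frame (n-1)
    device_curr = device_prev + np.array([0.5, 0.0, 0.0])   # frame (n)
    mesh = rng.random((500, 3)) * 10.0                      # airway mesh points
    mesh = deform_mesh(mesh, device_prev, device_curr, max_distance=15.0)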
[0112] The above implementation for the embodiment of FIG. 9 may also be applied to the embodiments of FIGs. 6 and 8 for device movement and deflating lung, respectively. However, the algorithms may be varied, such as by adding thresholds to distance calculations and offsets to exclude parts of the mesh from being deformed. For example, if the optical shape sensing device is in an airway on the right side of the lung, deformation of the left side of the lung may not be desired since information on how much that lung is moving may not be available. Additionally, it may be optimal to only deform the lobe of the lung in which the optical shape sensing device is present and not the other lobes of the same lung. Setting a threshold on the maximum amount of acceptable distance from the optical shape sensing device to the mesh point can restrict which mesh points are adjusted.
[0113] Additionally, the mesh is a three-dimensional volume of the anatomy. A centerline may be drawn within the mesh. A determination may be made as to whether the optical shape sensing device is within the mesh volume, or otherwise how far the device is off-center relative to the mesh. The optical shape sensing device position can be compared to the mesh centerlines. Using an error minimization technique, the best matching location of the optical shape sensing device position against the centerlines can be computed.
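A minimal sketch of such a centerline comparison follows, assuming the centerlines are stored per branch as sampled points; the function name best_matching_branch and the branch labels are hypothetical.

    import numpy as np

    def best_matching_branch(centerlines, device_point):
        # Error minimization against the mesh centerlines: returns the branch
        # label and off-center distance of the best matching location.
        best_branch, best_distance = None, np.inf
        for branch, points in centerlines.items():
            distance = np.linalg.norm(points - device_point, axis=1).min()
            if distance < best_distance:
                best_branch, best_distance = branch, distance
        return best_branch, best_distance

    centerlines = {
        "left_main":  np.array([[-x, 0.0, x] for x in np.linspace(0.0, 30.0, 31)]),
        "right_main": np.array([[x, 0.0, x] for x in np.linspace(0.0, 30.0, 31)]),
    }
    branch, off_center = best_matching_branch(centerlines, np.array([-8.0, 0.5, 8.2]))

Restricting the centerlines mapping to, for example, only left-sided branches would implement the history-based shortcut described below with reference to continuous saving of the device position.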
[0114] In an embodiment, a threshold may be implemented to determine if the optical shape sensing device is outside of the mesh volume. Fine adjustments may be made to the mesh to deform accordingly.
[0115] In another embodiment, the best matching location will determine which branch, lobe, or lung the optical shape sensing device is currently sitting in. This information may be used to also restrict which parts of the mesh are deformed versus those that are left undeformed.
[0116] In still another embodiment, the distance from the centerline may also be used to transform the mesh. For example, if the optical shape sensing device is only slightly offset from the centerline, the transform (or shift) necessary to re-center the optical shape sensing device with respect to the mesh can be computed so as to allow the optical shape sensing device to always “appear” at the center of the mesh.
[0117] In another embodiment, continuous saving of the optical shape sensing device position can be used to determine the current branch in which the optical shape sensing device sits. Determining which branch an optical shape sensing device is in can use, for example, best matching as described above, but the historical information is useful for reducing the computational processing time of determining the current branch. If the historical data says the optical shape sensing device is already in the left side of the lung, then the computation using the centerline described above can eliminate the right-sided branches from the computation and only focus on the left-sided branches.
[0118] The continuous saving of the device position may also be used for devices that do not use optical shape sensing or devices which only track one or a few select points. These coordinates may be saved for every frame of data to build a device track which can then be used in a similar manner to the implementations described above.
[0119] Fig. 10 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[0120] In Fig. 10, the process starts at S1030 by registering the three-dimensional model to a two-dimensional X-ray image space.
[0121] At S1050, the process in Fig. 10 continues by tracking the tracked device in two dimensions based on the X-ray imaging and identifying locations of the tracked device in a fluoroscopic image based on the X-ray imaging.
[0122] At S1055, the process in Fig. 10 includes projecting the three-dimensional model to overlay the three-dimensional model onto the two-dimensional X-ray image space as the tracked device is navigated through the pathways under guidance of the X-ray based tracking.
[0123] As described above, the method of Fig. 10 involves using X-ray based tracking of devices to identify the interventional medical device location in two dimensions and to adjust the three-dimensional orientation and position of the three-dimensional segmented model. The method of Fig. 10 includes registering the three-dimensional segmented model to the two-dimensional x-ray image space such as by using fiducials, markers, or an isocenter. The three-dimensional segmented model is projected in two dimensions to overlay the model on the x-ray image.
Next, the tracked device is navigated down the airways under two-dimensional fluoroscopic guidance, and an image frame from the fluoroscopy is analyzed with image processing techniques to automatically identify the interventional medical device. The position of the tracked device can be extracted from the image and compared to the projection image of the three-dimensional model. The position of the tracked device can be checked to see if the tracked device is within the projected airway. Alternatively, the distance between the interventional medical device position and a centerline of the projected airway can be calculated with a threshold set to define whether the interventional medical device is within the airway or not.
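For illustration, a minimal sketch of the two-dimensional threshold check follows, assuming the projected centerline is available as sampled 2-D points; the function name and threshold value are assumptions of the sketch.

    import numpy as np

    def device_outside_projected_airway(device_xy, centerline_xy, threshold):
        # Distance from the 2-D device position (extracted from the
        # fluoroscopy frame) to the projected airway centerline; beyond the
        # threshold, the device is treated as outside the airway.
        distance = np.linalg.norm(centerline_xy - device_xy, axis=1).min()
        return distance > threshold

    centerline_xy = np.array([[u, 2.0 * u] for u in np.linspace(0.0, 50.0, 101)])
    device_xy = np.array([10.0, 26.0])
    outside = device_outside_projected_airway(device_xy, centerline_xy,
                                              threshold=5.0)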
[0124] The method of Fig. 10 can be modified to apply to segmentation of vessels instead of airways. The distance between the interventional medical device position and a centerline of the projected vessel can again be calculated. Similarly, the method of Fig. 10 can be adjusted so that the projected model of the airway can be deformed to fit the actual device position if the interventional medical device is found to be outside of the actual airway or vessel. An out-of-plane component of the motion in the two-dimensional X-ray image can be accounted for using a small fiducial, as this may be challenging using only projections of the three-dimensional model.
Additionally, the x-ray image may be acquired from alternative angles to occasionally adjust the out-of-plane information.
[0125] Additionally, although not shown in FIG. 2, a tracked device 250 may be tracked with the assistance of a hub such as when the tracked device 250 is being registered using fluoroscopic imaging as in FIG. 10. An alternative method for registering the tracked device to fluoroscopy is described as follows: A hub as described herein may be placed in the trachea. The hub may contain a path of a known pattern. As the tracked device 250 is guided through the hub, the trajectory of the tracked device 250 is examined for this special path. Once the special hub path is found in the device trajectory, it can be used to register the tracked device 250 to the hub.
Then, a fluoroscopic image can be used to register the hub with respect to fluoroscopy; the hub may contain radiopaque markers or other discernable features, which are conducive to localizing the hub with respect to the fluoroscopic image. This allows the hub and fluoroscopy to be registered to each other. By combining these two sub-registrations, a full registration between the tracked device 250 and fluoroscopy is achieved.
[0126] Fig. 11 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[0127] In Fig. 11, the process starts at S1149 by determining a position of the tip of the tracked device with respect to the three-dimensional model. At S1150, the process continues by creating a virtual bronchoscopic view based on the position of the tip of the tracked device.
[0128] At S1160, the tracked device is moved along the planned path, and the process returns to S1149 and S1150 to update the position of the tip and the virtual bronchoscopic view.
[0129] The method of Fig. 11 may involve showing a bronchoscopic view of an interventional medical device tip using an optical shape sensing position. In the method of Fig. 11, the position of the interventional medical device is determined relative to the anatomical model, and the bronchoscopic view can be created based on the position of the interventional medical device tip. The positional information of the interventional medical device at a fixed point of the tracked device can be used to determine the closest location on a planned path. By simply calculating the distance between the planned path and the current path and performing error minimization routines, the position of the interventional medical device tip along the planned path can be determined. The bronchoscopic view at this position can then be shown to the user and automatically updated as the interventional medical device is moved throughout the airways. For example, a method implemented by a controller may include creating a virtual
bronchoscopic view based on each of multiple positions of a fixed point of the tracked device; determining closest locations of a planned path to the positions of the fixed point of the tracked device; and automatically updating the bronchoscopic view as the tracked device is moved through the three-dimensional model. The fixed point of the tracked device may be the tip of the tracked device.
[0130] An example of the bronchoscopic view of the airways at a point marked on a three- dimensional pre-operative model as acquired in Fig. 11 is shown in Fig. 7. Anatomical features are shown in Fig. 7 beyond just the lumen. The method of Fig. 11 can provide additional imaging views of what is directly outside the airway walls to provide better guidance for when to exit the airways. For example, the imaging of what is outside the airway walls can assist an interventionist by providing visual guidance not to exit when there is a large blood vessel directly outside of the airway wall.
[0131] Fig. 12 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[0132] In Fig. 12, the process starts at S1222 by labelling each branch of a path in the pathways as a progressive sequence through the three-dimensional model.
[0133] At S1225, the tracking system (e.g., optical shape sensing) is registered to the tracked device in order to track the tracked device. This registration may be inherent in the mechanical integration of the tracking to the tracked device, such as the tracking element being fixed to the distal tip of the tracked device. In the case of shape sensing, a hub can be fixed to the tracked device in a predetermined manner, which then allows the shape sensing fiber to be registered to the tracked device. However, position tracking in FIG. 12 is not limited to optical shape sensing, and other forms of position tracking such as sensors may also or alternatively be used.
[0134] At S1230, the three-dimensional model is registered to the tracked device.
[0135] At S1235, the process in Fig. 12 includes updating the three-dimensional model to live anatomy of the subject.
[0136] At S1240, the process in Fig. 12 includes highlighting a planned path through the pathways in the three-dimensional model and alerting an interventionalist when the tracked device moves away from the planned path.
[0137] At S1252, the process in Fig. 12 includes presenting labels on a display for branches of the pathways of the three-dimensional model that are proximate to the tracked device as the tracked device is navigated.
[0138] At S1280, the process in Fig. 12 includes deforming each of multiple (a plurality of) branches of the pathways in the three-dimensional model based on a trajectory of the tracked device as the tracked device approaches each of the multiple (plurality of) branches.
[0139] In the method of Fig. 12, the airways are first segmented and planned paths created as described above. In addition, each branch in the model of the airway is distinctly labelled. An intuitive labelling scheme may be hierarchical and/or based on clinical nomenclature. The net effect is that a planned path can be communicated in terms of a sequence of branches, in addition to a visually continuous course. The method of Fig. 12 may be based on an assumption that the bronchoscope has a shape sensing fiber, or another position tracking mechanism embedded within so that the bronchoscope is fully tracked, or that a shape-sensed device is provided through the working channel of the bronchoscope, or that a shape-sensed device is provided through the working channel and a hub is used to track the non-shape-sensed bronchoscope. A working hub may be used in the embodiment where optical shape sensing fibers are used but is not necessarily required.
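Purely as an illustration of such a labelling scheme, the sketch below encodes a hypothetical airway tree with hierarchical labels (the names LMB, LB2a, etc., are invented for this sketch, not part of the disclosure) and communicates a planned path as a sequence of branch labels.

    # Hypothetical hierarchical labelling: child labels extend the parent,
    # so a planned path can be communicated as a sequence of branch labels.
    airway_tree = {
        "trachea": ["LMB", "RMB"],   # left and right main bronchi
        "LMB": ["LB1", "LB2"],
        "RMB": ["RB1", "RB2"],
        "LB2": ["LB2a", "LB2b"],
    }

    planned_path = ["trachea", "LMB", "LB2", "LB2b"]

    # Each consecutive pair on the planned path is a parent-child branch pair.
    assert all(child in airway_tree[parent]
               for parent, child in zip(planned_path, planned_path[1:]))

    def next_branch_on_path(planned_path, current_branch):
        # Which labelled branch the user should steer towards at the next
        # bifurcation; None once the final branch on the path is reached.
        index = planned_path.index(current_branch)
        return planned_path[index + 1] if index + 1 < len(planned_path) else None

    assert next_branch_on_path(planned_path, "LMB") == "LB2"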
[0140] Additionally, in the method of Fig. 12 the model of the airway is registered to the shape-sensed device, bronchoscope image and/or fluoroscopy, which allows the system to transfer the branch labels from the computed tomography to the images later in the procedure. Breathing motion as well as patient motion are tracked via optical shape sensing or another position tracking mechanism as long as the interventional medical device is in an airway. Since the upper airways are relatively rigid, the upper airways deform less due to physiological motion. The model of the airway can be updated to the live anatomy by initializing the position and orientation of the model against the upper airways seen in a fluoroscopic image, then following the tracked breathing/patient motion to update continuously. To further assist registration in the case of an optical shape sensing device or another position tracking device, a small device that operates similarly to a dynamic hub may be placed in the trachea to provide a reference in the body regarding the location of the bronchoscope.
[0141] Using the model of the airway and labelling, the operator involved in the method of Fig. 12 navigates the bronchoscope to the next branch on the planned path. During navigation, branch labels appear in the bronchoscope image as the bronchoscope approaches the branches, or in the virtual bronchoscope image if the shape-sensed device is navigated through the working channel and deep into the airways beyond the bronchoscope itself. This may indicate to the user that a navigation decision must soon be made, and which branch the user should guide the bronchoscope or optical shape sensing device (or other position tracking device) towards.
[0142] As the user navigates the bronchoscope or optical shape sensing device (or other position tracking device) through the branch in Fig. 12, the system detects that the branch has been traversed, and which branch has been taken. This determination is possible because the computed tomography and segmented airways are registered to the shape-sensed bronchoscope or device. This particular segment of the model of the airway is deformed based on the interventional medical device trajectory. Note that the locations of branch labels update to the deformed model. Furthermore, deformation is anchored by upper airway locations which deform less and by locations of the branches taken. Distal segments of the airway that the device has not yet reached may be rigidly coupled to the newly deformed airway segments as an approximation, in order to maintain a sensible visualization of the remaining airways to the user. If the correct branch is taken based on the desired planned path, the path is highlighted to indicate the successful branch taken. Otherwise an alert is provided. As navigation proceeds, airways unrelated to the current path may be progressively deemphasized visually while still remaining visible to provide overall context. Additionally, each traversed branch can be labelled in the fluoroscopic image, so that the user can see the paths/branches traversed in the live image.
[0143] The above sequence of branch traversal and path highlighting proceeds until the target is reached. The net effects of the approach in Fig. 12 are that deformation of the model only needs to address the planned path and the paths taken, as opposed to the entire model, which removes visual clutter and is simpler to implement; that the user retains greater confidence in navigating down the correct path even in the presence of registration uncertainty and imperfect deformation of the models; and that the accuracy of the registration can be improved based on the bronchoscopy image and on knowing whether the bronchoscope is in the center of the lumen or closer to the wall.
[0144] Moreover, even without tracking, the bronchoscope in Fig. 12 could be registered to the pre-operative segmentation model based only on the positions/orientations of the branches in the bronchoscope view.
[0145] A hybrid approach to register the three-dimensional model to the anatomy intraoperatively is also possible, such as to provide navigation before registration. As the tracked bronchoscope is guided down the trachea, while there is not yet enough information in the tracking system to register the model to the anatomy, bronchoscopy images can be analyzed in real time to estimate the path and pose of the bronchoscope. Branches seen in the bronchoscope images can then be used to refine the registration further. This provides a reasonable early estimate of the bronchoscope trajectory, particularly in the rigid upper airways. Then, as branches are taken, the registration is refined using the tracking and the information of the branches and paths taken.
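One purely illustrative way to realize such refinement: blend an early image-based position estimate with the tracking-based one, trusting tracking more as each taken branch further constrains the registration. Only translation is blended here (rotations would need proper interpolation), and the weighting constant is an assumption:

```python
import numpy as np

def fuse_position(t_image, t_tracking, n_branches_taken, k=3.0):
    """Weighted blend of image-based and tracking-based position estimates;
    the tracking weight grows toward 1 as branches accumulate."""
    w = n_branches_taken / (n_branches_taken + k)
    return (1.0 - w) * np.asarray(t_image) + w * np.asarray(t_tracking)
```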
[0146] In one or more embodiments alternative to those described already, the pre-operative three-dimensional models may remain static and the tracked interventional medical device may instead be deformed based on tracheal reference, paths traversed, branches taken, etc.
[0147] Fig. 13 illustrates an intraluminal view of airways from a current position of a tracked device determined using dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[0148] In Fig. 13, a radial endobronchial ultrasound (REBUS) catheter is navigated down the right main branch and into small peripheral airways. The ultrasound transducer produces a radial image of the airway in which the transducer is sitting. Where there is contact with the wall of the airway the image appears white, and where there is a branching airway the image appears black, because all signal is lost to the air at that location.
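A sketch of how the dark gap of a branching airway might be detected in a radial EBUS frame, assuming the frame has already been resampled into a polar image (rows = angles, columns = depth) with intensities normalized to [0, 1]; the threshold and names are assumptions:

```python
import numpy as np

def find_branch_sector(polar_img, dark_thresh=0.2):
    """Return the angular extent of a dark (air) sector near the wall, or None
    when the transducer has full wall contact and no branch is visible."""
    near_wall = polar_img[:, : polar_img.shape[1] // 4].mean(axis=1)
    dark_angles = np.flatnonzero(near_wall < dark_thresh)
    if dark_angles.size == 0:
        return None
    return int(dark_angles[0]), int(dark_angles[-1])
```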
[0149] In the embodiment of FIG. 13, the REBUS catheter is tracked with ultrasound imaging information as a pseudo-tracking mechanism rather than with optical shape sensing. However, in an embodiment based on FIG. 13, a REBUS catheter is tracked with optical shape sensing along with the ultrasound imaging information.
[0150] Fig. 14 illustrates another method of dynamic interventional three-dimensional model deformation, in accordance with a representative embodiment.
[0151] In Fig. 14, the process starts at S1430 by registering an ultrasound probe and the three-dimensional model when a branch of the pathways is initially obtained based on the ultrasound images.
[0152] At S1445, the process of Fig. 14 includes continuously acquiring ultrasound images of the subject as the tracked device is navigated on the pathways.
[0153] At S1454, the process of Fig. 14 includes virtually marking a current location of the ultrasound probe on the three-dimensional model, and continuously acquiring ultrasound images and registering each branch of the pathways to the three-dimensional model as the ultrasound probe is navigated. For example, the diameter of the airways may be compared to the three-dimensional model to provide a rough context of which branch the tracked device is in. Additionally, in the embodiment of FIG. 14, ultrasound itself is the tracking method; however, an interventional tool such as an ultrasound catheter may also be tracked with another type of tracking technology such as optical shape sensing or electromagnetics. A sketch of such a marker-update loop is given below.
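A per-frame sketch of the marker-update loop referenced above, reusing the branch detector sketched after paragraph [0148]; the `model.match_branch` lookup and the `marker` object are hypothetical placeholders for the registration and display machinery, not parts of any actual API:

```python
def track_with_rebus(frames, model, marker):
    """Ultrasound-as-tracking loop: whenever a branch is visualized, register
    it to the three-dimensional model and snap the virtual marker to it;
    otherwise dead-reckon and grow the reported location uncertainty."""
    for polar_img in frames:
        if find_branch_sector(polar_img) is not None:
            node = model.match_branch(polar_img)   # hypothetical model lookup
            marker.move_to(node.position)
            marker.uncertainty = 0.0
        else:
            marker.advance(step=1.0)               # estimated progress, in mm
            marker.uncertainty += 0.5              # flag growing doubt
```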
[0154] In the method of Fig. 14 described above, radial endobronchial ultrasound (REBUS) is used to assess the airways, to guide needle biopsies, and to guide navigation and determine the location of the transducer with respect to the three-dimensional segmented model. A method for (R)EBUS guided and tracked navigation is described in part by Fig. 14. The (R)EBUS probe is navigated into the left or right main branch, and ultrasound imaging is continuously acquired from the (R)EBUS probe. Ultrasound imaging as in Fig. 14 may assume good image quality and no air pocket between the transducer and the airway wall. When the first branch is seen in the ultrasound image, an initial registration between the probe and the three-dimensional segmented model is established. A first branch is shown in Fig. 13 as a dark gap. A virtual marker may be placed on the three-dimensional model to indicate the current location of the transducer. As the probe is navigated further into the airways, ultrasound images are continuously acquired, and each branch is registered to the pre-operative computed tomography or to the segmented three-dimensional model. Additionally, every time a branch is visualized the virtual marker position is updated on the three-dimensional model; and if no branch is within the ultrasound image, the virtual marker position can be estimated with some indication as to the uncertainty of the location.
[0155] The automatic labeling described in previous embodiments can be combined with the embodiment of FIG. 14. For example, the diameter and wall thickness of the airway that the EBUS transducer is currently sitting in can be calculated and referenced back to the three-dimensional model. The three-dimensional model, generated from the three-dimensional imaging, may also include information about the airway diameter and airway wall thickness at all positions. Registering this information between the REBUS probe and the three-dimensional model determines an approximate position of the transducer. Adding information about which branches have been passed through can then be used to determine the exact location of the transducer. Additionally, while the description of FIG. 14 and most embodiments herein focuses on airways, methods such as in FIG. 14, including the updated method using automatic labelling, can also be used in vascular navigation.
[0156] A method implemented by a controller may include automatically determining the position of a tip of an ultrasound probe that is used to acquire the ultrasound images by calculating a diameter and wall thickness of a pathway from the ultrasound images and comparing the diameter and wall thickness to the three-dimensional model. The controller may also optimize the position of the tip of the ultrasound probe based on previous locations of the ultrasound probe.
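A sketch of the geometric localization described in these two paragraphs, assuming the model stores a (position, diameter, wall thickness) tuple at every centerline sample; the names, units, and step limit are assumptions:

```python
import numpy as np

def localize_by_geometry(meas_diameter, meas_wall, model_samples):
    """Approximate transducer position: the model sample whose stored diameter
    and wall thickness best match the values measured from ultrasound
    (model_samples: iterable of (xyz, diameter, wall_thickness))."""
    def mismatch(sample):
        _, d, w = sample
        return (d - meas_diameter) ** 2 + (w - meas_wall) ** 2
    xyz, _, _ = min(model_samples, key=mismatch)
    return np.asarray(xyz)

def refine_with_history(candidates, previous_xyz, max_step_mm=5.0):
    """Optimize against previous probe locations: prefer candidates reachable
    from where the probe was last seen."""
    dist = lambda c: np.linalg.norm(np.asarray(c) - np.asarray(previous_xyz))
    feasible = [c for c in candidates if dist(c) <= max_step_mm] or list(candidates)
    return min(feasible, key=dist)
```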
[0157] In Fig. 14, the location and orientation of the transducer can be further refined by measuring the diameter of the branching airway and the orientation of the airway with respect to the transducer in the image. Keeping track of each branch that the transducer passes can provide a better assessment as to which branch and path the entire device has gone down. A virtual image of the entire (R)EBUS probe can then be shown on the three-dimensional segmented model.
[0158] Additionally, in Fig. 14 a virtual three-dimensional ultrasound image or roadmap of the airways may be constructed based on the tracked (R)EBUS. This may be shown on the pre-operative segmentation model. Furthermore, the pre-operative model may be updated to match the virtual (R)EBUS roadmap, e.g. based on the deformable correction techniques described earlier.
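A minimal sketch of accumulating tracked transducer positions into such a roadmap, assuming `tracked_positions` is a sequence of 3D positions and `branch_sightings` pairs sample indices with branch labels; all names are assumptions:

```python
import numpy as np

def build_rebus_roadmap(tracked_positions, branch_sightings):
    """Turn tracked (R)EBUS transducer positions into a point-cloud roadmap of
    the airways actually traversed, plus labelled branch landmarks that could
    later drive deformable correction of the pre-operative model."""
    roadmap = np.asarray(tracked_positions, dtype=float)   # Nx3 path samples
    landmarks = {label: roadmap[i] for i, label in branch_sightings}
    return roadmap, landmarks
```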
[0159] Intermittent x-ray images can also be used to update the location of the virtual marker or to check virtual device positioning, consistent with the embodiment in Fig. 14. Similarly, tracking such as optical shape sensing may be incorporated with the ultrasound-based approach to reduce the uncertainty, particularly in branch locations where image quality is reduced.
[0160] Accordingly, dynamic interventional three-dimensional model deformation enables dynamic navigational correction during interventional procedures. Dynamic interventional three-dimensional model deformation is applicable to multiple different imaging modes such as X-ray and ultrasound, and to multiple different organs of a human subject including the lungs as well as the vascular system. Dynamic interventional three-dimensional model deformation may be added as functionality to an existing product such as a device tracking product. The lung biopsy application discussed as the primary example is not an absolute requirement, as dynamic interventional three-dimensional model deformation may also be used for other pulmonology applications such as surgical excision or tissue ablation. Additionally, dynamic interventional three-dimensional model deformation may be applied to other fields such as vascular or gastrointestinal navigation. It applies to both Rayleigh (enhanced and regular) and Fiber Bragg implementations of shape sensing fiber, and to both manual and robotic manipulation of such devices. Finally, dynamic interventional three-dimensional model deformation applies to X-ray based tracking of devices and ultrasound tracking of devices, in addition to optical shape sensing implementations.
[0161] Although dynamic interventional three-dimensional model deformation has been described with reference to several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of dynamic interventional three-dimensional model deformation in its aspects. Although dynamic interventional three-dimensional model deformation has been described with reference to particular means, materials and embodiments, dynamic interventional three-dimensional model deformation is not intended to be limited to the particulars disclosed; rather it extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims.
[0162] The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of the disclosure described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
[0163] One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
[0164] The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b) and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
[0165] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to practice the concepts described in the present disclosure. As such, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents and shall not be restricted or limited by the foregoing detailed description.

Claims

CLAIMS:
1. A controller for assisting navigation in an interventional procedure, comprising:
a memory (320/330/382) that stores instructions, and
a processor (310) that executes the instructions,
wherein, when executed by the processor (310), the instructions cause the controller to implement a process that includes:
obtaining (S410) a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure;
determining (S470), during the interventional procedure, whether a current position of a tracked device (250) is outside of the pathways in the three-dimensional model; and
when the current position of the tracked device (250) is outside of the pathways in the three-dimensional model, deforming (S480) the three-dimensional model to the current position of the tracked device (250).
2. The controller of claim 1,
wherein the process implemented when the processor (310) executes the instructions further comprises:
registering (S630) the tracked device (250) in the pathways to the three-dimensional model;
calculating (S675) an offset between an immediately previous position of the tracked device (250) and the current position of the tracked device (250) relative to the three-dimensional model;
transforming (S680) the three-dimensional model to the current position of the tracked device (250) based on the offset, and
iteratively locally deforming (S690) each new branch of the three-dimensional model containing the tracked device (250) each time the tracked device (250) moves to a new branch.
3. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises: registering (S830) the tracked device (250) in the pathways to the three-dimensional model; and
when the current position of the tracked device (250) is outside of the pathways in the three-dimensional model, the deforming the three-dimensional model comprises deforming (S870) only a local branch of the pathways that contains the tracked device (250) or two local branches of the pathways nearest to the tracked device (250),
wherein the pathways are through a lung and the tracked device (250) is navigated to a location in the lung prior to the interventional procedure.
4. The controller of claim 3,
wherein the process implemented when the processor (310) executes the instructions further comprises tracking (S850) positions of the tracked device (250) with respect to the three-dimensional model as the lung is deflating.
5. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises:
when the current position of the tracked device (250) is outside of the pathways in the three-dimensional model, the deforming the three-dimensional model comprises deforming (S870) only a local branch of the pathways that contains the tracked device (250) or two local branches of the pathways nearest to the tracked device (250), wherein the pathways are through a lung;
deforming (S980) the three-dimensional model to the current position of the tracked device (250) to correct for lung motion or lung deformation, and
continuously tracking (S950) the tracked device (250) while the lung is maneuvered.
6. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises:
tracking (S1050) the tracked device (250) in two dimensions based on X-ray imaging; and
registering (S1030) the three-dimensional model to a two-dimensional X-ray image space.
7. The controller of claim 6, wherein the process implemented when the processor (310) executes the instructions further comprises:
projecting (S1055) the three-dimensional model to overlay the three-dimensional model onto the two-dimensional X-ray image space as the tracked device (250) is navigated through the pathways under guidance of the tracking based on X-ray imaging; and
identifying (S1050) locations of the tracked device (250) in a fluoroscopic image based on the X-ray imaging.
8. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises:
creating (S1150) a virtual bronchoscopic view based on positions of a fixed point of the tracked device (250);
determining (S1149) closest locations of a planned path to the positions of the fixed point of the tracked device (250); and
automatically updating (S1150) the bronchoscopic view as the tracked device (250) is moved through the three-dimensional model.
9. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises:
labelling (S1222) each branch of a path in the pathways as a progressive sequence through the three-dimensional model;
registering (S1230) the three-dimensional model to the tracked device (250);
updating (S1235) the three-dimensional model to live anatomy of the subject; and as the tracked device (250) is navigated, presenting (S1252) labels on a display (225) for branches of the pathways of the three-dimensional model that are proximate to the tracked device (250).
10. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises: deforming (S1280) each of a plurality of branches of the pathways in the three-dimensional model based on a trajectory of the tracked device (250) as the tracked device (250) approaches each of the plurality of branches.
11. The controller of claim 1, further comprising:
highlighting (S1240) a planned path through the pathways in the three-dimensional model and alerting a user when the tracked device (250) moves away from the planned path.
12. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises:
continuously acquiring (S1445) ultrasound images of the pathways as the tracked device (250) is navigated on the pathways;
registering (S1430) an ultrasound probe and the three-dimensional model when a branch of the pathways is initially obtained based on the ultrasound images;
virtually marking (S1454) a current location of the ultrasound probe on the three-dimensional model, and continuously acquiring ultrasound images and registering each branch of the pathways to the three-dimensional model as the ultrasound probe is navigated.
13. The controller of claim 1, wherein the process implemented when the processor (310) executes the instructions further comprises:
continuously acquiring (S1445) ultrasound images of the pathways as the tracked device (250) is navigated on the pathways; and
virtually marking (S1454) the three-dimensional model each time a branch of the pathways is visualized.
14. The controller of claim 13, wherein the process implemented when the processor (310) executes the instructions further comprises:
automatically determining (S1149) the position of a tip of an ultrasound probe used to acquire the ultrasound images by calculating a diameter and wall thickness of a pathway from the ultrasound images and comparing the diameter and wall thickness to the three-dimensional model; and
optimizing the position of the tip of the ultrasound probe based on previous locations of the ultrasound probe.
15. A method for assisting navigation in an interventional procedure, comprising:
obtaining a three-dimensional model generated prior to an interventional procedure based on segmenting pathways with a plurality of branches in a subject of the interventional procedure;
determining, during the interventional procedure by a controller executing instructions with a processor (310), whether a current position of a tracked device (250) is outside of the pathways in the three-dimensional model; and
when the current position of the tracked device (250) is outside of the pathways in the three-dimensional model, deforming the three-dimensional model to the current position of the tracked device (250).
16. A system (200) for assisting navigation in an interventional procedure, comprising: an imaging apparatus that generates, prior to an interventional procedure, three-dimensional anatomical images of a subject of the interventional procedure to be used in generating, prior to the interventional procedure, a three-dimensional model based on segmenting pathways with a plurality of branches in the subject of the interventional procedure; a computer (220) with a memory that stores instructions and a processor (310) that executes the instructions;
wherein, when executed by the processor (310), the instructions cause the system (200) to execute a process that includes:
obtaining (S410) the three-dimensional model generated prior to the interventional procedure;
determining (S470) whether a current position of a tracked device (250) is outside of the pathways in the three-dimensional model; and when the current position of the tracked device (250) is outside of the pathways in the three-dimensional model, deforming (S480) the three-dimensional model to the current position of the tracked device (250).
17. The controller of claim 8, wherein the process implemented when the processor (310) executes the instructions further comprises:
determining an orientation of the tracked device (250) and determining (S1150) a viewpoint for the virtual bronchoscopic view based on the orientation of the tracked device (250).
PCT/EP2020/056902 2019-03-14 2020-03-13 Dynamic interventional three-dimensional model deformation WO2020182997A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/438,990 US20220156925A1 (en) 2019-03-14 2020-03-13 Dynamic interventional three-dimensional model deformation
EP20713213.5A EP3939052A1 (en) 2019-03-14 2020-03-13 Dynamic interventional three-dimensional model deformation
CN202080021071.3A CN113614844A (en) 2019-03-14 2020-03-13 Dynamic intervention three-dimensional model deformation
JP2021553783A JP2022523445A (en) 2019-03-14 2020-03-13 Dynamic interventional 3D model transformation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962818622P 2019-03-14 2019-03-14
US62/818,622 2019-03-14

Publications (1)

Publication Number Publication Date
WO2020182997A1 true WO2020182997A1 (en) 2020-09-17

Family

ID=69941336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/056902 WO2020182997A1 (en) 2019-03-14 2020-03-13 Dynamic interventional three-dimensional model deformation

Country Status (5)

Country Link
US (1) US20220156925A1 (en)
EP (1) EP3939052A1 (en)
JP (1) JP2022523445A (en)
CN (1) CN113614844A (en)
WO (1) WO2020182997A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022069266A1 (en) * 2020-09-30 2022-04-07 Koninklijke Philips N.V. Interventional medical device tracking
WO2022123577A1 (en) * 2020-12-10 2022-06-16 Magnisity Ltd. Dynamic deformation tracking for navigational bronchoscopy
WO2023049443A1 (en) * 2021-09-27 2023-03-30 Bard Access Systems, Inc. Medical instrument shape filtering systems and methods
US11622816B2 (en) 2020-06-26 2023-04-11 Bard Access Systems, Inc. Malposition detection system
US11630009B2 (en) 2020-08-03 2023-04-18 Bard Access Systems, Inc. Bragg grated fiber optic fluctuation sensing and monitoring system
WO2023218468A1 (en) * 2022-05-12 2023-11-16 Magnisity Ltd. Curve inductive sensor
EP4285832A1 (en) * 2022-06-01 2023-12-06 Koninklijke Philips N.V. Guiding an interventional imaging device
WO2023232729A1 (en) * 2022-06-01 2023-12-07 Koninklijke Philips N.V. Guiding an interventional imaging device
US11850338B2 (en) 2019-11-25 2023-12-26 Bard Access Systems, Inc. Optical tip-tracking systems and methods thereof
US11883609B2 (en) 2020-06-29 2024-01-30 Bard Access Systems, Inc. Automatic dimensional frame reference for fiber optic
US11931112B2 (en) 2019-08-12 2024-03-19 Bard Access Systems, Inc. Shape-sensing system and methods for medical devices

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220202501A1 (en) * 2020-12-30 2022-06-30 Canon U.S.A., Inc. Real-time correction of regional tissue deformation during endoscopy procedure
CN116416412A (en) * 2021-12-31 2023-07-11 杭州堃博生物科技有限公司 Auxiliary navigation method, device and equipment for bronchoscope
EP4285854A1 (en) * 2022-06-02 2023-12-06 Koninklijke Philips N.V. Navigation in hollow anatomical structures
WO2023232678A1 (en) * 2022-06-02 2023-12-07 Koninklijke Philips N.V. Navigation in hollow anatomical structures

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050182295A1 (en) * 2003-12-12 2005-08-18 University Of Washington Catheterscope 3D guidance and interface system
WO2012154786A2 (en) * 2011-05-11 2012-11-15 Broncus Medical, Inc. Fluoroscopy-based surgical device tracking method and system
US20160005220A1 (en) * 2014-07-02 2016-01-07 Covidien Lp Dynamic 3d lung map view for tool navigation inside the lung

Also Published As

Publication number Publication date
CN113614844A (en) 2021-11-05
US20220156925A1 (en) 2022-05-19
EP3939052A1 (en) 2022-01-19
JP2022523445A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
US20220156925A1 (en) Dynamic interventional three-dimensional model deformation
US11896414B2 (en) System and method for pose estimation of an imaging device and for determining the location of a medical device with respect to a target
CN105979900B (en) Visualization of depth and position of blood vessels and robot-guided visualization of blood vessel cross-sections
US9265468B2 (en) Fluoroscopy-based surgical device tracking method
US11304686B2 (en) System and method for guided injection during endoscopic surgery
US11564751B2 (en) Systems and methods for visualizing navigation of medical devices relative to targets
US20210052240A1 (en) Systems and methods of fluoro-ct imaging for initial registration
EP3398552A1 (en) Medical image viewer control from surgeon's camera
US20220277477A1 (en) Image-based guidance for navigating tubular networks
US20230172574A1 (en) System and method for identifying and marking a target in a fluoroscopic three-dimensional reconstruction
JP2019531113A (en) How to use soft point features to predict respiratory cycle and improve end alignment
US20230143522A1 (en) Surgical assistant system based on image data of the operative field
RO130303A0 (en) System and method of navigation in bronchoscopy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20713213

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021553783

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2020713213

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2020713213

Country of ref document: EP

Effective date: 20211014