US20210248922A1 - Systems and methods for simulated product training and/or experience - Google Patents
- Publication number
- US20210248922A1 (Application US 17/150,918)
- Authority
- US
- United States
- Prior art keywords
- navigation
- virtual environment
- patient
- user
- bronchoscopic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/28—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
- G09B23/285—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for injections, endoscopy, bronchoscopy, sigmoidscopy, insertion of contraceptive devices or enemas
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/267—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
- A61B1/2676—Bronchoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B17/00234—Surgical instruments, devices or methods, e.g. tourniquets for minimally invasive surgery
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B2017/00743—Type of operation; Specification of treatment sites
- A61B2017/00809—Lung operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
Definitions
- Disclosed features concern medical training equipment and methods, and more particularly equipment and methods used for training in bronchoscopic lung navigation procedures and techniques.
- Some medical procedures can involve a variety of different tasks by one or more medical personnel.
- Some medical procedures are minimally invasive surgical procedures performed using one or more devices, including a bronchoscope or an endoscope.
- A surgeon operates controls via a console, which remotely and precisely control surgical instruments that interact with the patient to perform surgery and other procedures.
- Various other components of the system can also be used to perform a procedure.
- The surgical instruments can be provided on a separate instrument device or cart that is positioned near or over a patient, and a video output device and other equipment and devices can be provided on one or more additional units.
- A simulator unit, for example, can be coupled to a surgeon console and be used in place of an actual patient to provide a surgeon with a simulation of performing the procedure. With such a system, the surgeon can learn how instruments respond to manipulation and how those actions are presented or incorporated into the displays of the console.
- One aspect of the disclosure is directed to a training system for a medical procedure including: a virtual reality (VR) headset, including a processor and computer readable recording media storing one or more applications thereon, the applications including instructions that when executed by the processor perform steps of: presenting a virtual environment viewable in the VR headset replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope.
- The training system also includes depicting at least one representation of a user's hand in the virtual environment; providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment; enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; enabling interaction with the bronchoscopic tools via the representation of the user's hand; and executing a bronchoscopic navigation in the virtual environment, wherein, when the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient.
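The claimed sequence of steps — render the suite, depict the user's hands, accept tool inputs, and update the simulated navigation software each frame — can be sketched as a minimal session loop. All class, method, and field names below are hypothetical; the disclosure does not specify an implementation.

```python
# Hypothetical sketch of the claimed training-session step sequence;
# names are illustrative, not taken from the disclosure.

class VrTrainingSession:
    """One-frame loop: depict hands -> apply tool inputs -> update UI."""

    def __init__(self):
        self.hand_pose = None   # last depicted hand position
        self.tool_state = {}    # bronchoscope/catheter/LG manipulation state
        self.ui_log = []        # simulated navigation-software updates

    def run_step(self, hand_pose, tool_inputs):
        # Depict the representation of the user's hand.
        self.hand_pose = hand_pose
        # Apply manipulation of the bronchoscopic tools.
        self.tool_state.update(tool_inputs)
        # Update the navigation UI as if navigating an actual patient.
        self.ui_log.append(dict(self.tool_state))
        return self.ui_log[-1]

session = VrTrainingSession()
frame = session.run_step(hand_pose=(0.1, 1.4, 0.3),
                         tool_inputs={"catheter_depth_mm": 12.0})
```

In a real system each step would drive a VR renderer and the simulated navigation software; here the loop only records state so the claimed ordering is visible.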
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
- Implementations of this aspect of the disclosure may include one or more of the following features.
- The training system further including a plurality of user interfaces for display on the computer in the virtual environment for performance of a local registration.
- The method further including a plurality of user interfaces for display on the computer in the virtual environment for performance of a local registration.
- Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One aspect of the disclosure is directed to a training system for a medical procedure including: a virtual reality (VR) headset; and a computer operably connected to the VR headset, the computer including a processor and computer readable recording media storing one or more applications thereon, the applications including instructions that when executed by the processor perform steps of: presenting a virtual environment viewable in the VR headset replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope; providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment; enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; and executing a bronchoscopic navigation in the virtual environment, wherein, when the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
- Implementations of this aspect of the disclosure may include one or more of the following features.
- The training system where the user interface displays one or more navigation plans for selection by a user of the VR headset.
- The training system where the computer in the virtual environment displays a user interface for performance of a registration of the navigation plan to a patient.
- The training system where during registration the virtual environment presents a bronchoscope, catheter, and locatable guide for manipulation by a user in the virtual environment to perform the registration.
- The training system where the virtual environment depicts at least one representation of a user's hands.
- The training system where the virtual environment depicts the user's hands manipulating the bronchoscope, catheter, or locatable guide.
- The training system where the virtual environment depicts the user's hands manipulating the user interface on the computer displayed in the virtual environment.
- The training system where the computer in the virtual environment displays a user interface for performance of navigation of airways of a patient.
- The training system where the user interface for performance of navigation includes central navigation, peripheral navigation, and target alignment.
- The training system where the user interface for performance of navigation depicts an updated position of the locatable guide as the bronchoscope or catheter is manipulated by a user.
- Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One aspect of the disclosure is directed to a method for simulating a medical procedure on a patient in a virtual reality environment, including: presenting in a virtual reality (VR) headset a virtual environment replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope; providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment; enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; and executing a bronchoscopic navigation in the virtual environment, wherein, when the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
- Implementations of this aspect of the disclosure may include one or more of the following features.
- The method where the computer in the virtual environment displays a user interface for performance of navigation of airways of a patient.
- The method where the user interface for performance of navigation includes central navigation, peripheral navigation, and target alignment.
- The method where the user interface for performance of navigation depicts an updated position of a catheter within the patient as the catheter is manipulated by a representation of the user's hands.
- Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- FIG. 1 is a perspective view of a virtual reality system in accordance with the disclosure
- FIG. 2 is a perspective view of a system for navigating to a soft-tissue target via the airways network in accordance with the disclosure
- FIG. 3 is a view of a virtual reality environment in accordance with the disclosure.
- FIG. 4 is a view of a virtual reality environment in accordance with the disclosure.
- FIG. 5 is a view of a virtual reality environment in accordance with the disclosure.
- FIG. 6 is a set of flow chart representations of workflows enabled in the virtual reality environment in accordance with the disclosure.
- FIG. 7 is a flow chart illustrating a method of navigation in accordance with an embodiment of the disclosure.
- FIG. 8 is an illustration of a user interface presenting a view for performing registration in accordance with the present disclosure
- FIG. 9 is an illustration of the view of FIG. 8 with each indicator activated
- FIG. 10 is an illustration of a user interface, presenting a view for verifying registration in accordance with the present disclosure
- FIG. 11 is an illustration of a user interface presenting a view for performing navigation to a target further presenting a central navigation tab;
- FIG. 12 is an illustration of the view of FIG. 11 further presenting a peripheral navigation tab
- FIG. 13 is an illustration of the view of FIG. 11 further presenting the peripheral navigation tab of FIG. 12 near the target;
- FIG. 14 is an illustration of the view of FIG. 11 further presenting a target alignment tab
- FIG. 15 is an illustration of the user interface of the workstation of FIG. 2 presenting a view for marking a location of a biopsy or treatment of the target;
- FIG. 16 is an illustration of the user interface presenting a view for reviewing aspects of registration.
- FIG. 17 is a flow chart of a method for identifying and marking a target in fluoroscopic 3D reconstruction in accordance with the disclosure.
- FIG. 18 is a screen shot of a user interface for marking a target in a fluoroscopic image in accordance with the disclosure
- FIG. 19 is a screen shot of a user interface for marking a medical device target in a fluoroscopic image in accordance with the disclosure.
- FIG. 20 is a flow chart of a method for confirming placement of a biopsy tool in a target in accordance with the disclosure
- FIG. 21 is a screen shot of a user interface for confirming placement of a biopsy tool in a target in accordance with the disclosure
- FIG. 22 is a view of the virtual reality environment of FIGS. 3-5 depicting the bronchoscopic tools of a virtual procedure in accordance with the disclosure;
- FIG. 23 is a view of the virtual reality environment providing guidance as to action the user should take to perform a procedure in accordance with the disclosure
- FIG. 24 is a view of the virtual reality environment providing guidance as to action the user should take to perform a procedure in accordance with the disclosure
- FIG. 25 is a view of the virtual reality environment providing guidance as to action the user should take to perform a procedure in accordance with the disclosure
- FIG. 26 is a view of the virtual reality environment providing guidance as to action the user should take to perform a procedure in accordance with the disclosure
- FIG. 27 is a view of the virtual reality environment enabling the user to interact with the navigation software as displayed on a virtual computer;
- FIG. 28 is a view of the virtual reality environment providing guidance as to action the user should take to perform a procedure in accordance with the disclosure.
- FIG. 29 is a view of the virtual reality environment providing guidance as to action the user should take to perform a procedure in accordance with the disclosure.
- This disclosure is directed to a virtual reality (VR) system and method for training clinicians in the use of medical devices and the performance of one or more medical procedures.
- The disclosure is directed to systems and methods for performing VR bronchoscopic and endoscopic navigation, biopsy, and therapy procedures.
- FIG. 1 depicts a clinician 10 wearing a virtual reality headset 20 .
- The headset 20 may be, for example, a VR headset such as the Oculus Rift or Oculus Quest as currently sold by Facebook Technologies.
- The headset 20 may be connectable to a computer or may be a so-called "all-in-one" system which is wirelessly connected to a phone or other access point to the internet.
- The headset 20 may be connected physically to a computer system 30 executing a variety of applications described herein.
- The instant disclosure is directed to a VR bronchoscopic environment that allows for virtual reality access and operation of bronchoscopic navigation systems.
- There are several bronchoscopic navigation systems currently offered, including the Illumisite system offered by Medtronic PLC, the ION system offered by Intuitive Surgical Inc., the Monarch system offered by Auris, and the Spin system offered by Veran Medical Technologies. Though the disclosure focuses on implementation in the Illumisite system, the disclosure is not so limited and may be employed in any of these systems without departing from the scope of the disclosure.
- FIG. 2 generally depicts the Illumisite system 100 and the set-up for such a system in an operating room or bronchoscopy suite.
- Catheter 102 is part of a catheter guide assembly 106.
- Catheter 102 is inserted into a bronchoscope 108 for access to a luminal network of the patient "P."
- Catheter 102 of catheter guide assembly 106 may be inserted into a working channel of bronchoscope 108 for navigation through a patient's luminal network.
- A locatable guide (LG) 110 including a sensor 104 is inserted into catheter 102 and locked into position such that sensor 104 extends a desired distance beyond the distal tip of catheter 102.
- Catheter guide assemblies 106 are currently marketed and sold by Medtronic PLC under the brand names SUPERDIMENSION® Procedure Kits, or EDGE™ Procedure Kits, and are contemplated as useable with the disclosure.
- System 100 generally includes an operating table 112 configured to support a patient “P,” a bronchoscope 108 configured for insertion through the patient “P”'s mouth into the patient “P”'s airways; monitoring equipment 114 coupled to bronchoscope 108 (e.g., a video display, for displaying the video images received from the video imaging system of bronchoscope 108 ); a locating or tracking system 114 including a locating or tracking module 116 ; a plurality of reference sensors 118 ; a transmitter mat 120 including a plurality of incorporated markers (not shown); and a computing device 122 including software and/or hardware used to facilitate identification of a target, pathway planning to the target, navigation of a medical device to the target, and/or confirmation and/or determination of placement of catheter 102 , or a suitable device therethrough, relative to the target.
- Computing device 122 may be configured to execute the methods as described herein.
- A fluoroscopic imaging device 124 capable of acquiring fluoroscopic or x-ray images or video of the patient "P" is also included in this particular aspect of system 100.
- The images, sequence of images, or video captured by fluoroscopic imaging device 124 may be stored within fluoroscopic imaging device 124 or transmitted to computing device 122 for storage, processing, and display. Additionally, fluoroscopic imaging device 124 may move relative to the patient "P" so that images may be acquired from different angles or perspectives relative to patient "P" to create a sequence of fluoroscopic images, such as a fluoroscopic video.
- The pose of fluoroscopic imaging device 124 relative to patient "P" while capturing the images may be estimated via markers incorporated with the transmitter mat 120.
- The markers are positioned under patient "P", between patient "P" and operating table 112, and between patient "P" and a radiation source or a sensing unit of fluoroscopic imaging device 124.
- The markers incorporated with the transmitter mat 120 may be two separate elements which may be coupled in a fixed manner or alternatively may be manufactured as a single unit.
- Fluoroscopic imaging device 124 may include a single imaging device or more than one imaging device.
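As one illustration of how imager pose could be estimated from the known marker geometry, an (assumed) orthographic projection relates observed marker positions in the image to the mat's known grid by a scale and offset, which a closed-form least-squares fit recovers. This is a deliberately simplified, hypothetical sketch; real fluoroscopic pose estimation solves a full 3D projective problem, and the numbers below are invented.

```python
# Hedged 1-D illustration: fit observed = s * known + t by least squares,
# recovering the scale/offset that relates mat markers to their image
# projections. Real pose estimation is 3D and projective.

def fit_scale_offset(known, observed):
    """Closed-form least-squares fit of observed = s * known + t."""
    n = len(known)
    mean_k = sum(known) / n
    mean_o = sum(observed) / n
    var_k = sum((k - mean_k) ** 2 for k in known)
    cov = sum((k - mean_k) * (o - mean_o)
              for k, o in zip(known, observed))
    s = cov / var_k            # scale (e.g., pixels per mm)
    t = mean_o - s * mean_k    # offset (pixels)
    return s, t

# Known marker x-positions on the transmitter mat (mm) and their
# observed projections in the fluoroscopic image (pixels).
grid = [0.0, 10.0, 20.0, 30.0]
image = [5.0, 25.0, 45.0, 65.0]
scale, offset = fit_scale_offset(grid, image)
```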
- Computing device 122 may be any suitable computing device including a processor and storage medium, wherein the processor is capable of executing instructions stored on the storage medium.
- Computing device 122 may further include a database configured to store patient data, CT data sets including CT images, fluoroscopic data sets including fluoroscopic images and video, fluoroscopic 3D reconstruction, navigation plans, and any other such data.
- Computing device 122 may include inputs, or may otherwise be configured to receive, CT data sets, fluoroscopic images/video, and other data described herein.
- Computing device 122 includes a display configured to display graphical user interfaces. Computing device 122 may be connected to one or more networks through which one or more databases may be accessed.
- Computing device 122 utilizes previously acquired CT image data for generating and viewing a three-dimensional model or rendering of the patient "P"'s airways, enables the identification of a target on the three-dimensional model (automatically, semi-automatically, or manually), and allows for determining a pathway through the patient "P"'s airways to tissue located at and around the target. More specifically, CT images acquired from previous CT scans are processed and assembled into a three-dimensional CT volume, which is then utilized to generate a three-dimensional model of the patient "P"'s airways. The three-dimensional model may be displayed on a display associated with computing device 122, or in any other suitable fashion.
- The enhanced two-dimensional images may possess some three-dimensional capabilities because they are generated from three-dimensional data.
- The three-dimensional model may be manipulated to facilitate identification of a target on the three-dimensional model or two-dimensional images, and selection of a suitable pathway through the patient "P"'s airways to access tissue located at the target can be made. Once selected, the pathway plan, three-dimensional model, and images derived therefrom can be saved and exported to a navigation system for use during the navigation phase or phases.
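The pathway-planning step described above can be illustrated with a toy search: treating the airway model as a tree of named branches, a breadth-first search returns the branch sequence from the trachea to the branch containing the target. The branch names and adjacency map are illustrative only and are not taken from the disclosure.

```python
# Toy pathway planning over a hypothetical airway branch tree:
# breadth-first search from the trachea to the target branch.
from collections import deque

def plan_pathway(airway_children, start, target):
    """Return the branch sequence from start to target, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for child in airway_children.get(path[-1], []):
            if child not in seen:
                seen.add(child)
                queue.append(path + [child])
    return None

# Illustrative (partial) airway adjacency map.
airways = {"trachea": ["left_main", "right_main"],
           "right_main": ["RUL", "bronchus_intermedius"],
           "bronchus_intermedius": ["RML", "RLL"]}
route = plan_pathway(airways, "trachea", "RLL")
```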
- Tracking system 114 includes the tracking module 116 , a plurality of reference sensors 118 , and the transmitter mat 120 (including the markers). Tracking system 114 is configured for use with a locatable guide 110 and sensor 104 . As described above, locatable guide 110 and sensor 104 are configured for insertion through catheter 102 into a patient “P”'s airways (either with or without bronchoscope 108 ) and are selectively lockable relative to one another via a locking mechanism.
- Transmitter mat 120 is positioned beneath patient “P.” Transmitter mat 120 generates an electromagnetic field around at least a portion of the patient “P” within which the position of a plurality of reference sensors 118 and the sensor 104 can be determined with use of a tracking module 116 .
- A second electromagnetic sensor 126 may also be incorporated into the end of the catheter 102. Sensor 126 may be a five degree of freedom (5 DOF) sensor or a six degree of freedom (6 DOF) sensor.
- One or more of reference sensors 118 are attached to the chest of the patient “P.” The six degrees of freedom coordinates of reference sensors 118 are sent to computing device 122 (which includes the appropriate software) where they are used to calculate a patient coordinate frame of reference.
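The patient frame of reference computed from the chest sensors can be caricatured as follows: anchoring a coordinate frame at the centroid of the reference sensors and expressing the locatable-guide sensor position relative to it compensates for gross patient motion. This is a hedged, purely illustrative sketch (rotation is omitted); the coordinates are invented.

```python
# Hedged sketch: express a tracked position relative to a frame anchored
# at the centroid of the chest reference sensors (rotation omitted).

def to_patient_frame(sensor_xyz, reference_xyzs):
    """Translate a tracked position into the patient frame."""
    n = len(reference_xyzs)
    origin = tuple(sum(r[i] for r in reference_xyzs) / n
                   for i in range(3))
    return tuple(sensor_xyz[i] - origin[i] for i in range(3))

# Illustrative chest reference sensor positions and LG sensor reading (mm).
refs = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0), (50.0, 80.0, 0.0)]
lg_position = (60.0, 40.0, -30.0)
local = to_patient_frame(lg_position, refs)
```

If the patient shifts, all reference sensors and the LG sensor shift together, so the patient-frame coordinates stay stable — which is the point of the reference frame.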
- Registration is generally performed to coordinate locations of the three-dimensional model and two-dimensional images from the planning phase, with the patient “P”'s airways as observed through the bronchoscope 108 and allow for the navigation phase to be undertaken with precise knowledge of the location of the sensor 104 , even in portions of the airway where the bronchoscope 108 cannot reach.
- Registration of the patient “P”'s location on the transmitter mat 120 is performed by moving sensor 104 through the airways of the patient “P.” More specifically, data pertaining to locations of sensor 104 , while locatable guide 110 is moving through the airways, is recorded using transmitter mat 120 , reference sensors 118 , and tracking system 114 . A shape resulting from this location data is compared to an interior geometry of passages of the three-dimensional model generated in the planning phase, and a location correlation between the shape and the three-dimensional model based on the comparison is determined, e.g., utilizing the software on computing device 122 . In addition, the software identifies non-tissue space (e.g., air filled cavities) in the three-dimensional model.
- The software aligns, or registers, an image representing a location of sensor 104 with the three-dimensional model and/or two-dimensional images generated from the three-dimensional model, which are based on the recorded location data and an assumption that locatable guide 110 remains located in non-tissue space in the patient "P"'s airways.
- A manual registration technique may be employed by navigating the bronchoscope 108 with the sensor 104 to pre-specified locations in the lungs of the patient "P", and manually correlating the images from the bronchoscope to the model data of the three-dimensional model.
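The survey registration described above — comparing the shape traced by sensor 104 against the model's interior geometry — can be caricatured as follows, with centroid matching standing in for the full rigid (ICP-style) fit a real system would perform. The point coordinates are invented for illustration.

```python
# Drastically simplified stand-in for survey registration: align the
# shape traced by the sensor to the model's airway centerline by
# matching centroids. Real systems solve a full rigid-body fit.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def register_translation(sensor_path, model_centerline):
    """Translation moving the sensor path onto the model centerline."""
    cs = centroid(sensor_path)
    cm = centroid(model_centerline)
    return tuple(cm[i] - cs[i] for i in range(3))

# Illustrative recorded sensor path and model centerline samples (mm).
path = [(0.0, 0.0, 0.0), (0.0, 0.0, 10.0), (5.0, 0.0, 20.0)]
model = [(100.0, 50.0, 0.0), (100.0, 50.0, 10.0), (105.0, 50.0, 20.0)]
shift = register_translation(path, model)
```

Once such a correlation is found, sensor readings can be mapped into model coordinates — the "precise knowledge of the location of the sensor" the registration provides.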
- The instant disclosure is not so limited and may be used in conjunction with flexible sensors, ultrasonic sensors, or without sensors. Additionally, the methods described herein may be used in conjunction with robotic systems such that robotic actuators drive the catheter 102 or bronchoscope 108 proximate the target.
- A user interface is displayed in the navigation software which sets forth the pathway that the clinician is to follow to reach the target.
- The locatable guide 110 may be unlocked from catheter 102 and removed, leaving catheter 102 in place as a guide channel for guiding medical devices including, without limitation, optical systems, ultrasound probes, marker placement tools, biopsy tools, ablation tools (i.e., microwave ablation devices), laser probes, cryogenic probes, sensor probes, and aspirating needles to the target.
- A medical device may then be inserted through catheter 102 and navigated to the target or to a specific area adjacent to the target.
- A sequence of fluoroscopic images may then be acquired via fluoroscopic imaging device 124 according to directions displayed via computing device 122.
- A fluoroscopic 3D reconstruction may then be generated via computing device 122.
- The generation of the fluoroscopic 3D reconstruction is based on the sequence of fluoroscopic images and the projections of the structure of markers incorporated with transmitter mat 120 on the sequence of images.
- One or more slices of the 3D reconstruction may then be generated based on the pre-operative CT scan and via computing device 122.
- The one or more slices of the 3D reconstruction and the fluoroscopic 3D reconstruction may then be displayed to the user on a display via computing device 122, optionally simultaneously.
- The slices of the 3D reconstruction may be presented on the user interface in a scrollable format where the user is able to scroll through the slices in series.
- The user may then be directed to identify and mark the target while using a slice of the 3D reconstruction as a reference.
- The user may also be directed to identify and mark the medical device in the sequence of fluoroscopic 2D images. An offset between the location of the target and the medical device may then be determined or calculated via computing device 122.
- The offset may then be utilized, via computing device 122, to correct the location of the medical device on the display with respect to the target and/or correct the registration between the three-dimensional model and tracking system 114 in the area of the target and/or generate a local registration between the three-dimensional model and the fluoroscopic 3D reconstruction in the target area.
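The offset correction described above reduces, at its core, to vector arithmetic: subtract the marked device position from the marked target position, then apply that offset to the displayed device position in the target area. The sketch below shows only this arithmetic, with invented coordinates; a real local registration operates on the full fluoroscopic 3D reconstruction.

```python
# Hedged sketch of the described offset correction ("local registration"):
# the user marks the target and the device tip in the fluoroscopic 3D
# reconstruction; the offset between them corrects the displayed position.

def local_registration_offset(target_xyz, device_xyz):
    """Vector from the marked device tip to the marked target."""
    return tuple(t - d for t, d in zip(target_xyz, device_xyz))

def corrected_position(displayed_xyz, offset):
    """Apply the offset to a displayed device position."""
    return tuple(p + o for p, o in zip(displayed_xyz, offset))

target = (42.0, -10.0, 88.0)   # marked target (mm, invented)
device = (40.0, -12.0, 85.0)   # marked device tip (mm, invented)
offset = local_registration_offset(target, device)
fixed = corrected_position(device, offset)
```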
- FIGS. 3-5 depict a virtual environment viewable via the headset 20 to simulate the system 100 depicted in FIG. 2. Every component described above with respect to the actual system 100 is recreated in the virtual environments depicted in FIGS. 3-5. As will be described in greater detail below, these components in the virtual environment can be grasped, manipulated, and utilized in the virtual environment in connection with the headset 20.
- the headset 20 includes a variety of sensors which either alone or in combination with one or more handpieces (not shown) detect movement of the user's hands and head as the user moves through real space and translates these movements into the virtual reality as depicted to the user in the headset 20 .
- FIG. 6 depicts a variety of workflows for the system 100 , which can be performed in the virtual environment utilizing the headset 20 and the computer 30 .
- Each of these workflows is described in detail below or has been described in connection with the system 100 , above.
- These workflows result in a series of displays in the headset 20 being presented to the user 10 as if they were undertaking a case and deploying an actual system 100 .
- The user 10 can train without having to interact with an actual patient.
- The computer 30 may collect data with which the performance of a user can be graded. In this way, a user can work in the virtual reality environment and undertake any number of procedures, learn the implementation details of the system 100, and be evaluated by experts without tying up equipment in the bronchoscopy suite and without the need to engage an actual patient.
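One way the computer 30 might collect gradable data is to log each workflow step with its duration and outcome, then summarize the session for an expert reviewer. This is an illustrative sketch only; the disclosure does not specify what data is collected or how grading works, and all names and numbers below are hypothetical.

```python
# Illustrative session logging for later expert grading; names and
# metrics are hypothetical, not from the disclosure.

class SessionRecorder:
    def __init__(self):
        self.events = []

    def log(self, step, elapsed_s, success):
        """Record one workflow step with its duration and outcome."""
        self.events.append({"step": step,
                            "elapsed_s": elapsed_s,
                            "success": success})

    def score(self):
        """Fraction of logged steps completed successfully."""
        if not self.events:
            return 0.0
        return sum(e["success"] for e in self.events) / len(self.events)

rec = SessionRecorder()
rec.log("registration", 120.5, True)
rec.log("central_navigation", 300.0, True)
rec.log("target_alignment", 95.2, False)
```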
- The VR headset 20 and computer 30 can include an application that enables another user to be in the virtual environment.
- The second user may be another clinician who is demonstrating a case, a salesperson, or an assistant who will be assisting the user in an actual case and needs to understand the functionality and uses of all the components of the system 100 prior to actual use.
- step S 300 user interface, which is depicted on a virtual representation of the computer 122 , presents the clinician with a view (not shown) for the selection of a patient.
- the clinician may enter patient information such as, for example, the patient name or patient ID number, into a text box to select a patient on which to perform a navigation procedure. Alternatively, the patient may be selected from a drop-down menu or other similar methods of patient selection.
- the user interface may present the clinician with a view (not shown) including a list of available navigation plans for the selected patient.
- In step S 302 , the clinician may load one of the navigation plans by activating the navigation plan.
- the navigation plans may be imported from a procedure planning software, described briefly above.
- the user interface presents the clinician with a patient details view (not shown) in step S 304 which allows the clinician to review the selected patient and plan details.
- patient details presented to the clinician in the timeout view may include the patient's name, patient ID number, and birth date.
- plan details include navigation plan details, automatic registration status, and/or manual registration status.
- the clinician may activate the navigation plan details to review the navigation plan and may verify the availability of automatic registration and/or manual registration.
- the clinician may also activate an edit button (not shown) to edit the loaded navigation plan from the patient details view. Activating the edit button (not shown) of the loaded navigation plan may also activate the planning software described above.
- the clinician proceeds to navigation setup in step S 306 .
- medical staff may perform the navigation setup prior to or concurrently with the clinician selecting the patient and navigation plan.
- the user interface presents the clinician with a view 400 ( FIG. 8 ) for registering the location of LG 110 relative to the loaded navigation plan.
- the clinician prepares for registration by inserting bronchoscope 108 with catheter 102 , LG 110 and EM sensor 104 into the virtual patient's airway until the distal ends of the LG 110 , the EM sensor 104 , and bronchoscope 108 are positioned within the patient's trachea, for example, as shown in FIG. 8 .
- view 400 presents a clinician with a video feed 402 from bronchoscope 108 and a lung survey 404 .
- Video feed 402 from bronchoscope 108 provides the clinician with a real-time video of the interior of the patient's airways at the distal end of bronchoscope 108 .
- Video feed 402 allows the clinician to visually navigate through the airways of the lungs.
- Lung survey 404 provides the clinician with indicators 406 for the trachea 408 and each region 410 , 412 , 414 , and 416 of the lungs. Regions 410 , 412 , 414 , and 416 may also correspond to the patient's lung lobes. It is contemplated that an additional region (not shown) may be present and may correspond to the fifth lung lobe, e.g., the middle lung lobe in the patient's right lung. Lung survey 404 may also be modified for patients in which all or a part of one of the lungs is missing, for example, due to prior surgery.
- the clinician advances bronchoscope 108 and LG 110 into each region 410 , 412 , 414 , and 416 until the corresponding indicator 406 is activated.
- the corresponding indicator may display a “check mark” symbol 417 when activated.
- the location of the EM sensor 104 of LG 110 relative to each region 410 , 412 , 414 , and 416 is tracked by the electromagnetic interaction between EM sensor 104 of LG 110 and the electromagnetic field generator 120 and may activate an indicator 406 when the EM sensor 104 enters a corresponding region 410 , 412 , 414 , or 416 .
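The region-entry test described above can be sketched as a simple point-in-region check that activates an indicator once the tracked sensor position enters a region. Axis-aligned bounding boxes stand in here for the actual anatomic regions, and the coordinates are hypothetical.

```python
def point_in_box(p, lo, hi):
    """True when point p lies within the axis-aligned box [lo, hi]."""
    return all(l <= c <= h for c, l, h in zip(p, lo, hi))

# Hypothetical region bounds (mm) in the registered coordinate frame;
# the actual system derives regions from the patient's 3D volume.
regions = {
    "trachea": ((-10, -10, 0), (10, 10, 60)),
    "region_410": ((-80, -40, 60), (-10, 40, 160)),
}
activated = {name: False for name in regions}

def update_indicators(sensor_pos):
    """Latch an indicator on once the EM sensor position enters its region."""
    for name, (lo, hi) in regions.items():
        if not activated[name] and point_in_box(sensor_pos, lo, hi):
            activated[name] = True
```

Each call with a new sensor position would be driven by the electromagnetic tracking updates.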
- In step S 310 , once the indicators 406 for the trachea 408 and each region 410 , 412 , 414 , and 416 have been activated, for example, as shown in FIG. 9 , the clinician activates the “done” button 418 using, for example, a mouse or foot pedal, and proceeds to verification of the registration in step S 312 .
- the clinician may alternatively achieve registration with the currently loaded navigation plan while one or more of regions 410 , 412 , 414 , and 416 are not activated. For example, so long as the clinician has achieved sufficient registration with the currently loaded navigation plan the clinician may activate the “done” button 418 to proceed to registration verification in step S 312 .
- Sufficient registration may depend on both the patient's lung structure and the currently loaded navigation plan where, for example, only the indicators 406 for the trachea 408 and one or more of the regions 410 , 412 , 414 , or 416 in one of the lungs may be necessary to achieve a useable registration where the plan identifies targets in one lung.
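The sufficiency rule described above can be sketched as: the trachea must be surveyed, plus at least one region in every lung that contains a planned target. The region-to-lung mapping and names below are assumptions for illustration; the actual criterion is plan- and anatomy-dependent.

```python
def registration_sufficient(activated, plan_target_lungs):
    """Illustrative sufficiency test: trachea surveyed, and at least one
    region activated in each lung that contains a planned target."""
    if not activated.get("trachea"):
        return False
    # Hypothetical assignment of survey regions to lungs.
    lung_regions = {"left": ("region_410", "region_412"),
                    "right": ("region_414", "region_416")}
    return all(any(activated.get(r) for r in lung_regions[lung])
               for lung in plan_target_lungs)
```

Under this sketch, a plan whose targets lie only in the right lung can proceed once the trachea and one right-lung region are surveyed.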
- The user interface presents the clinician with a view 420 for registration verification in step S 312 .
- View 420 presents the clinician with an LG indicator 422 (depicting the location of the EM sensor 104 ) overlaid on a displayed slice 424 of the CT images of the currently loaded navigation plan, for example, as shown in FIG. 10 .
- Although the slice 424 displayed in FIG. 10 is from the coronal direction, the clinician may alternatively select one of the axial or sagittal directions by activating a display bar 426 .
- the displayed slice 424 changes based on the position of the EM sensor 104 of LG 110 relative to the registered 3D volume of the navigation plan.
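A minimal sketch of how the displayed slice might be chosen from the registered sensor position, assuming a regularly spaced CT volume; the axis convention and function name are illustrative, not the disclosed implementation.

```python
def slice_index_for_sensor(sensor_mm, origin_mm, spacing_mm, axis, n_slices):
    """Pick the CT slice nearest the registered sensor position.

    axis selects the slicing direction in this sketch's convention:
    0 = sagittal (x), 1 = coronal (y), 2 = axial (z).
    """
    idx = round((sensor_mm[axis] - origin_mm[axis]) / spacing_mm[axis])
    return max(0, min(n_slices - 1, idx))  # clamp to the volume extent
```

As the EM sensor advances, recomputing this index re-selects the displayed slice so the LG indicator stays on the current anatomy.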
- the clinician determines whether the registration is acceptable in step S 314 . Once the clinician is satisfied that the registration is acceptable, for example, that the LG indicator 422 does not stray from within the patient's airways as presented in the displayed slice 424 , the clinician accepts the registration by activating the “accept registration” button 428 and proceeds to navigation in step S 316 . Although registration has now been completed by the clinician, the EMN system 10 may continue to track the location of the EM sensor 104 of LG 110 within the patient's airways relative to the 3D volume and may continue to update and improve the registration during the navigation procedure.
- a user interface on computer 122 presents the clinician with a view 450 , as shown, for example, in FIG. 11 .
- View 450 provides the clinician with a user interface for navigating to a target 452 ( FIG. 12 ) including a central navigation tab 454 , a peripheral navigation tab 456 , and a target alignment tab 458 .
- Central navigation tab 454 is primarily used to guide the bronchoscope 108 through the patient's bronchial tree until the airways become small enough that the bronchoscope 108 becomes wedged in place and is unable to advance.
- Peripheral navigation tab 456 is primarily used to guide the catheter 102 , EM sensor 104 , and LG 110 toward target 452 ( FIG. 12 ).
- Target alignment tab 458 is primarily used to verify that LG 110 is aligned with the target 452 after LG 110 has been navigated to the target 452 using the peripheral navigation tab 456 .
- View 450 also allows the clinician to select target 452 to navigate by activating a target selection button 460 .
- Each tab 454 , 456 , and 458 includes a number of windows 462 that assist the clinician in navigating to the target.
- the number and configuration of windows 462 to be presented is configurable by the clinician prior to or during navigation through the activation of an “options” button 464 .
- the view displayed in each window 462 is also configurable by the clinician by activating a display button 466 of each window 462 .
- activating the display button 466 presents the clinician with a list of views for selection by the clinician, including a bronchoscope view 470 ( FIG. 11 ), virtual bronchoscope view 472 ( FIG. 11 ), local view 478 ( FIG. 12 ), MIP view (not explicitly shown), 3D map dynamic view 482 , 3D map static view (not explicitly shown), sagittal, axial, and coronal CT views (not explicitly shown), tip view 488 ( FIG. 12 ), 3D CT view 494 , and alignment view 498 .
- Bronchoscope view 470 presents the clinician with a real-time image received from the bronchoscope 108 , as shown, for example, in FIG. 11 .
- Bronchoscope view 470 allows the clinician to visually observe the patient's airways in real-time as bronchoscope 108 is navigated through the patient's airways toward target 452 .
- Virtual bronchoscope view 472 presents the clinician with a 3D rendering 474 of the walls of the patient's airways generated from the 3D volume of the loaded navigation plan, as shown, for example, in FIG. 11 .
- Virtual bronchoscope view 472 also presents the clinician with a navigation pathway 476 providing an indication of the direction along which the clinician will need to travel to reach the target 452 .
- the navigation pathway 476 may be presented in a color or shape that contrasts with the 3D rendering 474 so that the clinician may easily determine the desired path to travel.
- the local view 478 also provides the clinician with a watermark 481 that indicates to the clinician the elevation of the target 452 relative to the displayed slice.
- the majority of the target 452 is located below watermark 481 and may, for example, be displayed as having a dark color such as a dark green, while a smaller portion of target 452 located above watermark 481 may be displayed, for example, as having a light color such as a light green. Any other color scheme which serves to indicate the difference between the portion of target 452 disposed above watermark 481 and the portion of target 452 disposed below watermark 481 may alternatively be used.
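The elevation-based coloring can be sketched as a simple threshold test against the watermark elevation; the color names and the z-up convention are illustrative, matching the "any other color scheme" latitude in the passage.

```python
def watermark_color(voxel_z, watermark_z, below="dark green", above="light green"):
    """Color a target voxel by its elevation relative to the watermark:
    portions below the displayed slice render dark, portions above light."""
    return below if voxel_z < watermark_z else above
```

Rendering each voxel of target 452 through such a test produces the two-tone display described above.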
- the MIP view (not explicitly shown), also known in the art as a Maximum Intensity Projection view is a volume rendering of the 3D volume of the loaded navigation plan.
- the MIP view presents a volume rendering that is based on the maximum intensity voxels found along parallel rays traced from the viewpoint to the plane of projection. For example, the MIP view enhances the 3D nature of lung nodules and other features of the lungs for easier visualization by the clinician.
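For viewing rays parallel to a volume axis, a maximum intensity projection reduces to taking the brightest voxel along that axis; a minimal NumPy sketch (the general case with oblique rays requires resampling, which is omitted here):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: keep the maximum-intensity voxel
    along each parallel ray (here, rays parallel to one volume axis)."""
    return volume.max(axis=axis)
```

Because dense structures such as nodules retain their peak intensity in the projection, they remain conspicuous in the resulting 2D image.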
- 3D map dynamic view 482 presents a dynamic 3D model 484 of the patient's airways generated from the 3D volume of the loaded navigation plan.
- Dynamic 3D model 484 includes a highlighted portion 486 indicating the airways along which the clinician will need to travel to reach target 452 .
- the orientation of dynamic 3D model 484 automatically updates based on movement of the EM sensor 104 within the patient's airways to provide the clinician with a view of the dynamic 3D model 484 that is relatively unobstructed by airway branches that are not on the pathway to the target 452 .
- 3D map dynamic view 482 also presents the virtual probe 479 to the clinician as described above where the virtual probe 479 rotates and moves through the airways presented in the dynamic 3D model 484 as the clinician advances the LG 110 through corresponding patient airways.
- 3D map static view (not explicitly shown) is similar to 3D map dynamic view 482 with the exception that the orientation of the static 3D model does not automatically update. Instead, the 3D map static view must be activated by the clinician to pan or rotate the static 3D model.
- the 3D map static view may also present the virtual probe 479 to the clinician as described above for 3D map dynamic view 482 .
- the sagittal, axial, and coronal CT views (not explicitly shown) present slices taken from the 3D volume of the loaded navigation plan in each of the coronal, sagittal, and axial directions.
- Tip view 488 presents the clinician with a simulated view from the distal tip 93 of LG 110 , as shown, for example, in FIG. 12 .
- Tip view 488 includes a crosshair 490 and a distance indicator 492 .
- Crosshair 490 may be any shape, size, or color that indicates to the clinician the direction that the distal tip 93 of LG 110 is facing.
- Distance indicator 492 provides the clinician with an indication of the distance from the distal tip 93 of LG 110 to the center of target 452 .
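The distance indicator reduces to a Euclidean distance between two registered 3D points; a minimal sketch (function name is illustrative):

```python
import math

def distance_to_target(tip_mm, target_center_mm):
    """Straight-line distance (mm) from the distal tip of the LG
    to the center of the target, as shown by the distance indicator."""
    return math.dist(tip_mm, target_center_mm)
```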
- Tip view 488 may be used to align the distal tip 93 of LG 110 with the target 452 .
- 3D CT view 494 presents the clinician with a 3D projection 496 of the 3D volume located directly in front of the distal tip of LG 110 .
- 3D projection 496 presents high density structures such as, for example, blood vessels, and lesions to the clinician.
- 3D CT view 494 may also present distance indicator 492 to the clinician as described above for tip view 488 .
- Alignment view 498 presents the clinician with a 2D projection 500 of the 3D volume located directly in front of the distal tip of LG 110 , for example.
- 2D projection 500 presents high density structures such as, for example, blood vessels and lesions.
- target 452 may be presented in a color, for example, green, and may be translucent.
- Alignment view 498 may also present distance indicator 492 to the clinician as described above for tip view 488 .
- step S 316 view 450 is presented to the clinician by user interface with central navigation tab 454 active, as shown, for example, in FIG. 11 .
- Central navigation tab 454 may be the default tab upon initialization of view 450 by user interface 202 .
- Central navigation tab 454 presents the clinician with the bronchoscope view 470 , virtual bronchoscope view 472 , and 3D map dynamic view 482 , as described above.
- the clinician navigates bronchoscope 108 , LG 110 , and catheter 102 toward the target 452 by following the navigation pathway 476 of virtual bronchoscope view 472 along the patient's airways.
- In step S 318 , the clinician determines whether the airways leading to the target have become too small for bronchoscope 108 and, if so, wedges the bronchoscope 108 in place. Once the bronchoscope 108 has been wedged in place, the clinician activates peripheral navigation tab 456 using, for example, a mouse or foot pedal, and proceeds to peripheral navigation in step S 320 .
- peripheral navigation tab 456 is presented to the clinician as shown, for example, in FIG. 12 .
- Peripheral navigation tab 456 presents the clinician with the local view 478 , 3D Map Dynamic view 482 , bronchoscope view 470 , and tip view 488 .
- Peripheral navigation tab 456 assists the clinician with navigation between the distal end of bronchoscope 108 and target 452 .
- the clinician extends LG 110 and catheter 102 from the working channel of bronchoscope 108 into the patient's airway toward target 452 .
- the clinician tracks the progress of LG 110 , EM sensor 104 , and catheter 102 in the local view 478 , the 3D map dynamic view 482 , and the tip view 488 .
- the clinician rotates LG 110 , EM sensor 104 , and catheter 102 relative to the patient's airways until the tip 479 a of virtual probe 479 is oriented toward the desired airway leading to the target 452 .
- the desired airway may be determined based on the navigation pathway 476 presented in local view 478 and the highlighted portion 486 presented in 3D map dynamic view 482 .
- the clinician then advances LG 110 , EM sensor 104 , and the catheter 102 into the desired airway and confirms the movement of the EM sensor 104 relative to the target 452 and the patient's airways in the 3D map dynamic view 482 and local view 478 .
- the clinician may also check the location of target 452 on the tip view 488 to determine where the target 452 is relative to the orientation of the LG 110 as LG 110 moves closer to the target 452 .
- the clinician may decide in step S 322 to activate the target alignment tab 458 to confirm target alignment with the target 452 .
- target alignment tab 458 is presented to the clinician as shown, for example, in FIG. 14 .
- Target alignment tab 458 presents the clinician with the local view 478 , 3D Map Dynamic view 482 , 3D CT view 494 , and Alignment view 498 .
- Target alignment tab 458 assists the clinician with alignment of the LG 110 with the target 452 .
- the clinician may make a determination of whether the LG 110 is aligned with the target 452 and of the relative distance of the LG 110 to the target 452 .
- the clinician may decide to activate the “mark position” button 502 of either the peripheral navigation tab 456 ( FIG. 13 ) or the target alignment tab 458 ( FIG. 14 ) in step S 328 to virtually mark the current position of the virtual probe 479 where the registered position of the virtual probe 479 corresponds to the current location of the LG 110 .
- This mark may be permanently recorded as part of the navigation plan to enable a clinician to return to substantially the same location in subsequent navigations or at a later time in the same procedure, for example, where a biopsy sample has been taken and is determined to be cancerous and in need of immediate treatment.
- the user interface presents the clinician with a view 504 providing the clinician with details of the marked position of the virtual probe 479 , as shown, for example, in FIG. 15 .
- view 504 provides the clinician with a biopsy or treatment position number 506 and distance to target center 508 for the clinician's review.
- the clinician may withdraw the LG 110 from catheter 102 of the bronchoscope 108 and insert a tool through catheter 102 in step S 330 , for example, a biopsy device, a fiducial marking device, an ablation probe, a chemical treatment probe, or other similar tools to sample, mark, and/or treat the target 452 .
- the clinician withdraws the tool from bronchoscope 108 and inserts LG 110 back into bronchoscope 108 .
- the clinician then activates the “done” button 510 to finish marking the target 452 .
- the user interface presents the clinician with view 450 with one of tabs 454 , 456 , or 458 active.
- a representation of a virtual marker 512 is presented by target alignment tab 458 in various views including, for example, the 3D Map Dynamic view 482 , local view 478 , or any other view described above to indicate to the clinician the location of a previous treatment site.
- the clinician determines whether an additional biopsy, marking, or treatment is required for the target 452 in step S 332 . If additional biopsies are required, the clinician repeats steps S 320 through S 330 .
- the clinician may alternatively repeat only a subset of steps S 320 through S 330 .
- the clinician may return to the target alignment tab 458 without activating the peripheral navigation tab 456 to continue navigating to the target or aligning the LG 110 with the target for an additional biopsy, marking, or treatment.
- the clinician may use only the peripheral navigation tab 456 to continue navigating to the target 452 for an additional biopsy or treatment.
- the clinician determines whether there is an additional target planned for navigation by activating the target selection button 460 in step S 334 . If an additional target is planned for navigation, the clinician activates the additional target and repeats steps S 316 through S 332 to navigate to the additional target for biopsy or treatment. If the additional target is in the same lung lobe or region as target 452 , the clinician may alternatively only repeat a subset of steps S 316 through S 332 .
- the clinician may start navigation to the additional target using the peripheral navigation tab 456 (step S 320 ) or the target alignment tab 458 (step S 324 ) without using the central navigation tab 454 (step S 316 ) where the location of the wedged bronchoscope 108 can still provide access to the additional target.
- the clinician has finished the navigation procedure and may withdraw the LG 110 , catheter 102 , and bronchoscope 108 from the patient.
- the clinician may then export a record of the navigation procedure in step S 336 to a memory associated with computer 122 , or to a server or other destination for later review via network interface.
- the EM sensor 104 of LG 110 may continuously update registration information such that the registration is continuously updated.
- the clinician may also review the registration by activating the “options” button 464 and activating a review registration button (not shown).
- the user interface presents the clinician with a view 514 as shown, for example, in FIG. 16 .
- View 514 presents the clinician with a 3D model 516 of the patient's bronchial tree generated from the 3D volume of the loaded navigation plan for review of the registration.
- 3D model 516 includes a set of data points 518 that are generated during registration based on the locations to which the sensor 104 of LG 110 has traveled within the patient's airways.
- the data points 518 are presented on the 3D model 516 to allow the clinician to assess the overall registration of the 3D model 516 with the patient's airways.
- a second set of data points 520 are generated based on the locations to which the sensor 104 of LG 110 has traveled on its path to the target 452 .
- Data points 518 and 520 may be color coded, for example, green and purple, respectively, or may have different shapes or other identifying features that allow the clinician to differentiate between data points 518 and 520 .
- the clinician may also activate or de-activate check boxes 522 to control which sets of data points 518 and 520 are presented in the 3D model 516 .
- a pre-operative CT scan of the target area may be received by the computing device 122 .
- the pre-operative CT scan that is received may include a pathway plan that has already been developed or the user may generate the pathway plan using computing device 122 .
- a user navigates the catheter 102 and sensor 104 proximate the target using the pathway plan which may be displayed on computing device 122 as part of the 3D model, described above.
- a sequence of fluoroscopic images of the target area acquired in real time about a plurality of angles relative to the target area may be captured by fluoroscopic imaging device 124 .
- the sequence of images may be captured while a medical device is positioned in the target area.
- the method may include further steps for directing a user to acquire the sequence of fluoroscopic images.
- the method may include one or more further steps for automatically acquiring the sequence of fluoroscopic images.
- the fluoroscopic images may be two-dimensional (2D) images, a three-dimensional (3D) reconstruction generated from a plurality of 2D images, or slice-images of a 3D reconstruction.
- the disclosure refers to systems and methods for facilitating the navigation of a medical device to a target and/or a target area using two-dimensional fluoroscopic images of the target area.
- the navigation is facilitated by using local three-dimensional volumetric data, in which small soft-tissue objects are visible, constructed from a sequence of fluoroscopic images captured by a fluoroscopic imaging device.
- the fluoroscopic-based constructed local three-dimensional volumetric data may be used to correct a location of a medical device with respect to a target or may be locally registered with previously acquired volumetric data.
- the location of the medical device may be determined by a tracking system such as the EM navigation system or another system as described herein.
- the tracking system may be registered with the previously acquired volumetric data.
- receiving a fluoroscopic 3D reconstruction of a body region may include receiving a sequence of fluoroscopic images of the body region and generating the fluoroscopic 3D reconstruction of the body region based on at least a portion of the fluoroscopic images.
- the method may further include directing a user to acquire the sequence of fluoroscopic images by manually sweeping the fluoroscope.
- the method may further include automatically acquiring the sequence of fluoroscopic images.
- the fluoroscopic images may be acquired by a standard fluoroscope, in a continuous manner and about a plurality of angles relative to the body region.
- the fluoroscope may be swept manually, i.e., by a user, or automatically.
- the fluoroscope may be swept along an angle of 20 to 45 degrees. In some embodiments, the fluoroscope may be swept along an angle of 30±5 degrees. Typically, these images are gathered in a fluoroscopic sweep of the fluoroscopic imaging device 124 of about 30 degrees (i.e., 15 degrees on both sides of the AP position). As is readily understood, larger sweeps of 45, 60, 90 or even greater angles may alternatively be performed to acquire the fluoroscopic images.
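The sweep geometry above can be sketched by generating evenly spaced acquisition angles centered on the AP position; the image count and function name are illustrative, not part of the disclosure.

```python
def sweep_angles(total_deg=30.0, n_images=60, center_deg=0.0):
    """Evenly spaced acquisition angles for a fluoroscopic sweep centered
    on the AP position (center_deg = 0), e.g. +/-15 degrees for a 30-degree sweep."""
    half = total_deg / 2.0
    step = total_deg / (n_images - 1)
    return [center_deg - half + i * step for i in range(n_images)]
```

Wider sweeps (45, 60, 90 degrees) follow by changing `total_deg`.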
- a three-dimensional reconstruction of the target area may be generated based on the sequence of fluoroscopic images.
- the method further comprises one or more steps for estimating the pose of the fluoroscopic imaging device while acquiring each of the fluoroscopic images, or at least a plurality of them.
- the three-dimensional reconstruction of the target area may be then generated based on the pose estimation of the fluoroscopic imaging device.
- the markers incorporated with the transmitter mat 120 may be placed with respect to the patient “P” and the fluoroscopic imaging device 124 , such that each fluoroscopic image includes a projection of at least a portion of the structure of markers.
- the estimation of the pose of the fluoroscopic imaging device while acquiring each image may be then facilitated by the projections of the structure of markers on the fluoroscopic images.
- the estimation may be based on detection of a possible and most probable projection of the structure of markers on each image.
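As a sketch of that "most probable projection" idea, one might score candidate poses by how closely the known marker structure, projected under each pose, matches the detections in the image, and keep the best-scoring pose. The projection model, scoring rule, and names below are assumptions, not the disclosed algorithm.

```python
import numpy as np

def score_pose(project, pose, markers_3d, detected_2d):
    """Score a candidate pose: negative sum of distances from each
    projected marker to its nearest detection (higher is better)."""
    proj = np.array([project(pose, p) for p in markers_3d])
    d = np.linalg.norm(proj[:, None, :] - np.asarray(detected_2d)[None, :, :],
                       axis=2)              # pairwise projected-to-detected distances
    return -d.min(axis=1).sum()

def estimate_pose(candidate_poses, project, markers_3d, detected_2d):
    """Pick the most probable pose (highest score) among the candidates."""
    return max(candidate_poses,
               key=lambda q: score_pose(project, q, markers_3d, detected_2d))
```

Repeating this per frame yields a pose estimate for each image of the sweep, from which the 3D reconstruction can be generated.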
- one or more fluoroscopy images are displayed to a user as illustrated in FIG. 18 .
- FIG. 18 is an exemplary screen shot 1850 of a software program running on computing device 122 .
- In screen shot 1850 , a slice of the 3D reconstruction 1860 is displayed simultaneously with two thumbnail images 1810 a and 1810 b of a fluoroscopic 3D reconstruction in accordance with the disclosure.
- the 3D reconstruction is created for a fluoroscopic sweep of images between 30 and 90 degrees.
- the slice of the 3D reconstruction 1860 depicted on screen shot 1850 changes in accordance with the movement.
- generating and using a virtual slice image as a reference may be more advantageous.
- generating and using a virtual fluoroscopic 2D image may be more advantageous.
- a selection of the target from the fluoroscopic 3D reconstruction is made by the user.
- the user is asked to “mark the target” in the slice of the 3D reconstruction 1860 .
- These two ends could be any two positions of the fluoroscopic sweep, so long as they are sufficiently angularly separated such that efficient triangulation of the location of the target and the catheter ( FIG. 18 ) can be performed.
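The triangulation this passage relies on can be sketched as the midpoint of closest approach between the two viewing rays, assuming each target marking defines a ray (origin and unit direction) for its sweep position. This is a generic two-view construction, not the disclosed implementation.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between two viewing rays (origin o,
    unit direction d) taken from two angularly separated sweep positions."""
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    # Solve for ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b          # approaches 0 when the rays are nearly parallel
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0
```

The `denom` term makes the passage's requirement concrete: the two marked positions must be sufficiently angularly separated, or the rays become near-parallel and the localization ill-conditioned.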
- the user had previously marked the target at one end of the sweep, as depicted in thumbnail image 1810 a , and is in the process of marking the target at the second end of the sweep, in the current slice of the 3D reconstruction 1860 , which is depicted in thumbnail image 1810 b .
- two ranges are defined within which the user is to “mark the target” in an image that appears within a range.
- a second scroll bar 1832 on the bottom of the screen shot 1850 allows the user to view the virtual fluoroscopy images 1860 as a video rather than scrolling through the slices individually using scroll bar 1820 .
- the software may be configured to automatically jump the user from one “mark the target” range to the other following successful marking in the first “mark the target” range.
- the user has placed a marker 1870 on the slice of the 3D reconstruction 1860 .
- This marker also appears in thumbnail image 1810 b as marker 1880 b .
- the user has marked the location of the target in the fluoroscopic image data collected by the fluoroscopic imaging device 124 .
- a selection of the medical device from the three-dimensional reconstruction or the sequence of fluoroscopic images is made. In some embodiments, this may be automatically made, and a user either accepts or rejects the selection. In some embodiments, the selection is made directly by the user.
- the user may be asked to mark the location of the catheter 102 .
- FIG. 19 depicts a screen shot 1980 including two actual fluoroscopic images 1982 . The user is asked to mark the end of the catheter 102 in each of the actual fluoroscopic images 1982 . Each of these images comes from portions of the sweep that correspond to the “mark the target” portions depicted in FIG. 18 .
- an offset of the catheter 102 with respect to the target may be calculated. The determination of the offset is based on the received selections of the target and the medical device. This offset is used to update the detected position of the catheter 102 , and specifically the sensor 104 in the 3D model and the pathway plan that was created to navigate to the target.
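A minimal sketch of that offset computation and the resulting position update, assuming the target and catheter tip have each been localized as 3D points in the fluoroscopic reconstruction; the function names are illustrative.

```python
import numpy as np

def catheter_offset(target_3d, catheter_tip_3d):
    """Offset vector from the catheter tip to the target, both selected
    in the fluoroscopic 3D reconstruction."""
    return np.asarray(target_3d) - np.asarray(catheter_tip_3d)

def corrected_catheter_position(target_in_plan, offset):
    """Re-place the catheter in the preoperative 3D model so that its
    position relative to the target matches the measured offset."""
    return np.asarray(target_in_plan) - np.asarray(offset)
```

Applying the corrected position to the 3D model brings the displayed sensor location and pathway plan into agreement with the fluoroscopically observed anatomy.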
- the user has managed to navigate the catheter 102 to within 2-3 cm of the target, for example.
- the user can have confidence of reaching the target while traversing this last distance.
- a tool such as a needle or coring biopsy tool, brushes, ablation devices (e.g., RF, microwave, chemical, radiological, etc.), clamping devices, and others, may be advanced down the catheter 102 .
- a biopsy tool (not shown) is advanced down the catheter 102 and, using the sensor 126 , the user navigates the final 2-3 cm, for example, to the target and can advance the biopsy tool into the target as it appears in the 3D model.
- a second fluoroscopic imaging process can be undertaken.
- the user can select a “tool-in target” tab on the user interface at step 2010 of the method 2000 of FIG. 20 . This selection can initiate the fluoroscopic imaging device 124 .
- the user may also be directed to perform a fluoroscopic sweep similar to that described previously.
- the images collected during the fluoroscopic sweep can be processed to form a fluoroscopic 3D reconstruction step 2020 .
- a slice of the 3D reconstruction is generated from the fluoroscopic 3D reconstruction and output as screen shot 2100 , as depicted in FIG. 21 , in step 2030 .
- the screen shot 2100 is similar to that of FIG. 18 .
- the user is able to use the scroll bar 2102 to scroll through the slices of the 3D reconstruction 2104 .
- the biopsy tools are typically made of metal and other materials which resolve quite well in fluoroscopic images. As a result, the user can scroll through the slices of the 3D reconstruction 2104 all along the sweep to ensure that the biopsy tool is in the target indicated by marker 2108 at step 2040 .
- the user may be requested to mark the position of the catheter 102 , similar to the step described in step 1760 ( FIG. 17 ), in a 2D fluoroscopic image acquired by the fluoroscopic imaging device 124 as part of its fluoroscopic sweep from step 2010 .
- This step may also be performed automatically via image processing techniques by computing device 122 .
- the 3D coordinates of the marked position can be determined by the computing device 122 . Accordingly, the computing device 122 can identify a slice of the 3D reconstruction 2104 that best displays the catheter 102 .
- this marking of the position of the catheter 102 in the 2D fluoroscopic image provides an indication of the position of the catheter 102 in the 3D reconstruction 2104 , which can then be presented to the user for review. Still further, the user may be asked to mark the target in the 2D fluoroscopic image, and a 3D relative position of the target and the catheter 102 or biopsy tool 2106 can be calculated and displayed in the 3D reconstruction 2104 , similar to that described above with reference to FIG. 17 .
- image processing techniques can be employed to enhance the display of the catheter 102 or biopsy tool 2106 extending therethrough. These techniques may further be employed to remove artifacts that might be the result of the reconstruction process.
- the user can click the next button 5210 to confirm placement of the tool-in-target at step 2050 , thereby ending the tool-in-target confirmation method 2000 of FIG. 20 . Then, the user can proceed back to the 3D model and perform navigation to other targets. Additionally, if more biopsies are desired, following each movement of the catheter 102 and biopsy tool 2106 , another tool-in-lesion confirmation can be performed starting again at step 2010 . Though not necessarily required, the system may prompt the user to identify the location of the target in the images, though most users can perform their confirmation of the tool-in-lesion without requiring a separate indication of the target.
- All the heretofore described techniques of insertion of a bronchoscope, registration, navigation to a target, local registration, removal of an LG, insertion of biopsy tools, confirmation of tools in target, etc., can be experienced by a clinician or user in a virtual reality environment by employing the headset 20 and computer 30 .
- The user may experience a virtual environment as depicted in FIGS. 3-5, which accurately represents an actual bronchoscopic suite or a surgical room. That virtual environment includes all the equipment necessary to virtually undertake a procedure as described above. All the screen shots of the navigation software, the CT images, the 3D models derived therefrom, and the fluoroscopic images are displayed on a display which is viewable to the clinician in the virtual environment.
- The clinician may observe that the bronchoscope 108, catheter 102, and LG 104 are set out on a tools table as depicted in FIG. 22.
- The virtual environment may present various types of guidance to the user. For example, in FIG. 23, there are displayed instructions to “pick up the bronchoscope.”
- Using VR techniques that are well established, the physical movements of the user's hands and head are detected by the headset 20 and translated into the virtual environment, allowing the user to visually experience the actions that will be required in an actual case.
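The translation of tracked motion into the virtual environment amounts to applying a rigid transform to each tracked pose. A minimal sketch, assuming a yaw-plus-translation transform and (x, y, z) metre coordinates; real VR runtimes expose full six-degree-of-freedom quaternion poses, and the function name is hypothetical:

```python
import math

def tracked_to_virtual(p, yaw_deg, offset):
    """Rotate a tracked position about the vertical axis by yaw_deg, then
    translate it, yielding coordinates in the virtual bronchoscopy suite."""
    yaw = math.radians(yaw_deg)
    x, y, z = p
    vx = x * math.cos(yaw) - y * math.sin(yaw)
    vy = x * math.sin(yaw) + y * math.cos(yaw)
    return (vx + offset[0], vy + offset[1], z + offset[2])

# The user's right hand, 1 m in front of the tracking origin, placed in a
# virtual room rotated 90 degrees and offset toward the tools table.
hand = tracked_to_virtual((1.0, 0.0, 1.2), yaw_deg=90.0, offset=(2.0, 3.0, 0.0))
```

The same transform, applied every frame to head and hand poses, is what lets the rendered hands grasp the virtual bronchoscope where the user reaches.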
- The virtual environment may prompt the user to insert the bronchoscope into the patient as shown in FIG. 24. Additional guidance, including wireframe shapes and the like, may be employed to assist the user in understanding where to place the bronchoscope relative to the patient. As depicted in FIG. 25, the representation of the bronchoscope, a wireframe for its placement and advancement, and a representation of the software application for bronchoscopic navigation are displayed. At this position, the user experiences nearly all the sensations that they would otherwise experience during an actual procedure. Following the prompts in the virtual environment and utilizing the software which is presented on the computer in the image, and which corresponds to the actual computing device 122 of FIG. 2, the case can be virtually performed.
- As the virtual case proceeds, additional prompts can be presented to the user, as seen in FIGS. 26-29, and the user interface on the computer in the virtual environment is updated.
- The user, as depicted in FIG. 27, can interact with the software running on the computer in the virtual environment just as they might the actual software running on computing device 122. The case proceeds until the catheter is proximate a target, and the user is prompted to perform the local registration using the virtual fluoroscopic imager, and all the other aspects of the software, until a biopsy is acquired or the target is treated, as described hereinabove. All the workflows in FIG. 6 are enabled in the virtual environment.
- The virtual environment may include a variety of virtual cases on which the user can practice. These are loaded onto computer 30 and selected by the user. Each case includes its own unique underlying CT image data that is displayed as part of the user interface on the virtual computer and may include unique tools or prompts to the user. In addition, each case may include its own unique fluoroscopic data, if necessary, for local registration and the like.
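A case library of this kind could be organized as simple records keyed by case id. The field names, file names, and prompt strings below are all hypothetical — the patent does not specify a case file format:

```python
VIRTUAL_CASES = {
    "case_01": {
        "description": "Right upper lobe nodule, straightforward airway anatomy",
        "ct_series": "case_01_ct.nii",          # unique underlying CT image data
        "fluoro_sweep": "case_01_fluoro.seq",   # optional data for local registration
        "prompts": ["pick up the bronchoscope", "insert the bronchoscope"],
    },
    "case_02": {
        "description": "Left lower lobe lesion requiring target alignment",
        "ct_series": "case_02_ct.nii",
        "fluoro_sweep": None,                   # no local registration in this case
        "prompts": ["pick up the bronchoscope"],
    },
}

def load_case(case_id, library=VIRTUAL_CASES):
    """Fetch the selected case and flag whether it can exercise the
    local-registration workflow (only when fluoroscopic data exists)."""
    case = dict(library[case_id])
    case["supports_local_registration"] = case["fluoro_sweep"] is not None
    return case
```

Selecting a case would then drive which CT volume the virtual computer displays and which prompts the environment issues.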
- In this way, a user or clinician gains experience with the equipment and the workflows without the need for an actual patient. Further, as part of that experience, data can be collected regarding the proficiency of the user, and an assessment can be made which helps ensure that clinicians are proficient with the system 100 before performing a live procedure. This may also be used for refresher courses and continuing-education evaluations to ensure that clinicians are current on the procedure and equipment.
- These virtual environments may also provide a means of rolling out new features and presenting them to clinicians in a manner in which they can not only hear and see them but actually experience them.
- For each case in the virtual environment, there are certain metrics and thresholds programmed into the virtual environment that are used to assess the performance of the user. As the user experiences the case, the computer 30 logs the performance of the user with respect to these thresholds and metrics. In some embodiments, immediately upon missing one of these thresholds or metrics, the virtual environment displays an indication of the missed threshold and provides guidance on how to correct it. Additionally or alternatively, the virtual environment may store these missed thresholds and metrics and present them to the user at the end of a virtual case as part of a debrief session to foster learning. Still further, all the prompts and guidance can be stopped, and the user's ability to navigate the virtual environment on their own, without any assistance or prompting, can be assessed.
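Such threshold-based assessment might be organized along the following lines. The specific metric names and limits are invented for illustration; the patent does not enumerate them:

```python
THRESHOLDS = {                     # hypothetical per-case limits
    "registration_error_mm": 5.0,  # max acceptable local registration error
    "time_to_target_s": 600.0,     # max navigation time
    "wall_contacts": 3,            # max airway-wall collisions
}

def assess(case_log, thresholds=THRESHOLDS):
    """Return {metric: (observed, limit)} for every threshold the user missed,
    suitable for immediate on-screen feedback or an end-of-case debrief."""
    return {name: (value, thresholds[name])
            for name, value in case_log.items()
            if name in thresholds and value > thresholds[name]}

missed = assess({"registration_error_mm": 6.2,
                 "time_to_target_s": 540.0,
                 "wall_contacts": 4})
```

The same record could feed either mode described above: surfaced immediately as each threshold is missed, or accumulated and replayed in the debrief session.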
- An attending nurse, additional clinician, or sales staff may also be in the virtual procedure to provide guidance, to gain their own experience with the system, or to walk a new user through the process and allow them to gain familiarity. Additional uses of the VR system described herein will also be known to those of ordinary skill in the art.
Abstract
A virtual reality training system and method enabling virtual bronchoscopic navigation of a virtual patient following a pathway plan to a target. The pathway plan is presented on a computer depicted in the virtual environment, simulating an actual bronchoscopic navigation.
Description
- Disclosed features concern medical training equipment and methods, and more particularly medical training equipment and methods used for training in bronchoscopic lung navigation procedures and techniques.
- Medical procedures on patients can involve a variety of different tasks performed by one or more medical personnel. Some medical procedures are minimally invasive surgical procedures performed using one or more devices, including a bronchoscope or an endoscope. In some such systems, a surgeon operates controls via a console, which remotely and precisely control surgical instruments that interact with the patient to perform surgery and other procedures. In some systems, various other components of the system can also be used to perform a procedure. For example, the surgical instruments can be provided on a separate instrument device or cart that is positioned near or over a patient, and a video output device and other equipment and devices can be provided on one or more additional units.
- Systems have been developed to provide certain types of training in the use of such medical systems. A simulator unit, for example, can be coupled to a surgeon console and be used in place of an actual patient, to provide a surgeon with a simulation of performing the procedure. With such a system, the surgeon can learn how instruments respond to manipulation and how those actions are presented or incorporated into the displays of the console.
- However, these systems can be cumbersome to move and transport to various training sites. Further, because these systems are linked to actual consoles and other equipment, there is both a capital cost for the training systems and a potential need for maintenance on such systems. Accordingly, there is a need for improved training systems which address the shortcomings of these physical training aids.
- One aspect of the disclosure is directed to a training system for a medical procedure including: a virtual reality (VR) headset, including a processor and a computer readable recording media storing one or more applications thereon, the applications including instructions that when executed by the processor perform steps of: presenting a virtual environment viewable in the VR headset replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope. The training system also includes depicting at least one representation of a user's hand in the virtual environment; providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment; enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; enabling interaction with the bronchoscopic tools via the representation of the user's hand; and executing a bronchoscopic navigation in the virtual environment, where, when the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
- Implementations of this aspect of the disclosure may include one or more of the following features. The training system further including a plurality of user interfaces for display on the computer in the virtual environment for performance of a local registration. The method further including a plurality of user interfaces for display on the computer in the virtual environment for performance of a local registration. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One aspect of the disclosure is directed to a training system for a medical procedure including: a virtual reality (VR) headset; and a computer operably connected to the VR headset, the computer including a processor and a computer readable recording media storing one or more applications thereon, the applications including instructions that when executed by the processor perform steps of: presenting a virtual environment viewable in the VR headset replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope; providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment; enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; and executing a bronchoscopic navigation in the virtual environment, where, when the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
- Implementations of this aspect of the disclosure may include one or more of the following features. The training system where the user interface displays one or more navigation plans for selection by a user of the VR headset. The training system where the computer in the virtual environment displays a user interface for performance of a registration of the navigation plan to a patient. The training system where during registration the virtual environment presents a bronchoscope, catheter, and locatable guide for manipulation by a user in the virtual environment to perform the registration. The training system where the virtual environment depicts at least one representation of a user's hands. The training system where the virtual environment depicts the user's hands manipulating the bronchoscope, catheter, or locatable guide. The training system where the virtual environment depicts the user's hands manipulating the user interface on the computer displayed in the virtual environment. The training system where the computer in the virtual environment displays a user interface for performance of navigation of airways of a patient. The training system where the user interface for performance of navigation includes central navigation, peripheral navigation, and target alignment. The training system where the user interface for performance of navigation depicts an updated position of the locatable guide as the bronchoscope or catheter are manipulated by a user. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. 
One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One aspect of the disclosure is directed to a method for simulating a medical procedure on a patient in a virtual reality environment, including: presenting in a virtual reality (VR) headset a virtual environment replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope; providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment; enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; and executing a bronchoscopic navigation in the virtual environment, where, when the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods and systems described herein.
- Implementations of this aspect of the disclosure may include one or more of the following features. The method where the virtual environment depicts at least one representation of a user's hands. The method where the virtual environment depicts the representation of the user's hands manipulating the bronchoscope, catheter, or locatable guide. The method where the virtual environment depicts the representation of the user's hands manipulating the user interface on the computer displayed in the virtual environment. The method where the computer in the virtual environment displays a user interface for performance of navigation of airways of a patient. The method where the user interface for performance of navigation includes central navigation, peripheral navigation, and target alignment. The method where the user interface for performance of navigation depicts an updated position of a catheter within the patient as the catheter is manipulated by a representation of the user's hands. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium, including software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- Objects and features of the presently disclosed system and method will become apparent to those of ordinary skill in the art when descriptions of various embodiments thereof are read with reference to the accompanying drawings, of which:
- FIG. 1 is a perspective view of a virtual reality system in accordance with the disclosure;
- FIG. 2 is a perspective view of a system for navigating to a soft-tissue target via the airways network in accordance with the disclosure;
- FIG. 3 is a view of a virtual reality environment in accordance with the disclosure;
- FIG. 4 is a view of a virtual reality environment in accordance with the disclosure;
- FIG. 5 is a view of a virtual reality environment in accordance with the disclosure;
- FIG. 6 presents flow chart representations of workflows enabled in the virtual reality environment in accordance with the disclosure;
- FIG. 7 is a flow chart illustrating a method of navigation in accordance with an embodiment of the disclosure;
- FIG. 8 is an illustration of a user interface presenting a view for performing registration in accordance with the present disclosure;
- FIG. 9 is an illustration of the view of FIG. 8 with each indicator activated;
- FIG. 10 is an illustration of a user interface presenting a view for verifying registration in accordance with the present disclosure;
- FIG. 11 is an illustration of a user interface presenting a view for performing navigation to a target, further presenting a central navigation tab;
- FIG. 12 is an illustration of the view of FIG. 11 further presenting a peripheral navigation tab;
- FIG. 13 is an illustration of the view of FIG. 11 further presenting the peripheral navigation tab of FIG. 12 near the target;
- FIG. 14 is an illustration of the view of FIG. 11 further presenting a target alignment tab;
- FIG. 15 is an illustration of the user interface of the workstation of FIG. 2 presenting a view for marking a location of a biopsy or treatment of the target;
- FIG. 16 is an illustration of the user interface presenting a view for reviewing aspects of registration;
- FIG. 17 is a flow chart of a method for identifying and marking a target in a fluoroscopic 3D reconstruction in accordance with the disclosure;
- FIG. 18 is a screen shot of a user interface for marking a target in a fluoroscopic image in accordance with the disclosure;
- FIG. 19 is a screen shot of a user interface for marking a medical device in a fluoroscopic image in accordance with the disclosure;
- FIG. 20 is a flow chart of a method for confirming placement of a biopsy tool in a target in accordance with the disclosure;
- FIG. 21 is a screen shot of a user interface for confirming placement of a biopsy tool in a target in accordance with the disclosure;
- FIG. 22 is a view of the virtual reality environment of FIGS. 3-5 depicting the bronchoscopic tools of a virtual procedure in accordance with the disclosure;
- FIG. 23 is a view of the virtual reality environment providing guidance as to an action the user should take to perform a procedure in accordance with the disclosure;
- FIG. 24 is a view of the virtual reality environment providing guidance as to an action the user should take to perform a procedure in accordance with the disclosure;
- FIG. 25 is a view of the virtual reality environment providing guidance as to an action the user should take to perform a procedure in accordance with the disclosure;
- FIG. 26 is a view of the virtual reality environment providing guidance as to an action the user should take to perform a procedure in accordance with the disclosure;
- FIG. 27 is a view of the virtual reality environment enabling the user to interact with the navigation software as displayed on a virtual computer;
- FIG. 28 is a view of the virtual reality environment providing guidance as to an action the user should take to perform a procedure in accordance with the disclosure; and
- FIG. 29 is a view of the virtual reality environment providing guidance as to an action the user should take to perform a procedure in accordance with the disclosure.
- This disclosure is directed to a virtual reality (VR) system and method for training clinicians in the use of medical devices and the performance of one or more medical procedures. In particular, the disclosure is directed to systems and methods for performing VR bronchoscopic and endoscopic navigation, biopsy, and therapy procedures.
- FIG. 1 depicts a clinician 10 wearing a virtual reality headset 20. The headset 20 may be, for example, a VR headset such as the Oculus Rift or Oculus Quest as currently sold by Facebook Technologies. The headset 20 may be connectable to a computer or may be a so-called “all-in-one” system which is wirelessly connected to a phone or other access point to the internet. The headset 20 may be connected physically to a computer system 30 executing a variety of applications described herein.
- The instant disclosure is directed to a VR bronchoscopic environment that allows for virtual reality access and operation of bronchoscopic navigation systems. There are several bronchoscopic navigation systems currently being offered, including the Illumisite system offered by Medtronic PLC, the ION system offered by Intuitive Surgical Inc., the Monarch system offered by Auris, and the Spin system offered by Veran Medical Technologies. Though the disclosure focuses on implementation in the Illumisite system, the disclosure is not so limited and may be employed in any of these systems without departing from the scope of the disclosure.
- FIG. 2 generally depicts the Illumisite system 100 and the set-up for such a system in an operating room or bronchoscopy suite. As shown in FIG. 2, catheter 102 is part of a catheter guide assembly 106. In practice, catheter 102 is inserted into a bronchoscope 108 for access to a luminal network of the patient “P.” Specifically, catheter 102 of catheter guide assembly 106 may be inserted into a working channel of bronchoscope 108 for navigation through a patient's luminal network. A locatable guide (LG) 110, including a sensor 104, is inserted into catheter 102 and locked into position such that sensor 104 extends a desired distance beyond the distal tip of catheter 102. The position and orientation of sensor 104 relative to the reference coordinate system, and thus the distal portion of catheter 102, within an electromagnetic field can be derived. Catheter guide assemblies 106 are currently marketed and sold by Medtronic PLC under the brand names SUPERDIMENSION® Procedure Kits or EDGE™ Procedure Kits, and are contemplated as useable with the disclosure.
- System 100 generally includes an operating table 112 configured to support a patient “P”; a bronchoscope 108 configured for insertion through the patient “P”'s mouth into the patient “P”'s airways; monitoring equipment 114 coupled to bronchoscope 108 (e.g., a video display, for displaying the video images received from the video imaging system of bronchoscope 108); a locating or tracking system 114 including a locating or tracking module 116; a plurality of reference sensors 118; a transmitter mat 120 including a plurality of incorporated markers (not shown); and a computing device 122 including software and/or hardware used to facilitate identification of a target, pathway planning to the target, navigation of a medical device to the target, and/or confirmation and/or determination of placement of catheter 102, or a suitable device therethrough, relative to the target. Computing device 122 may be configured to execute the methods as described herein.
- A fluoroscopic imaging device 124 capable of acquiring fluoroscopic or x-ray images or video of the patient “P” is also included in this particular aspect of system 100. The images, sequence of images, or video captured by fluoroscopic imaging device 124 may be stored within fluoroscopic imaging device 124 or transmitted to computing device 122 for storage, processing, and display. Additionally, fluoroscopic imaging device 124 may move relative to the patient “P” so that images may be acquired from different angles or perspectives relative to patient “P” to create a sequence of fluoroscopic images, such as a fluoroscopic video. The pose of fluoroscopic imaging device 124 relative to patient “P” while capturing the images may be estimated via markers incorporated with the transmitter mat 120. The markers are positioned under patient “P,” between patient “P” and operating table 112, and between patient “P” and a radiation source or a sensing unit of fluoroscopic imaging device 124. The markers incorporated with the transmitter mat 120 may be two separate elements which may be coupled in a fixed manner or alternatively may be manufactured as a single unit. Fluoroscopic imaging device 124 may include a single imaging device or more than one imaging device.
- Computing device 122 may be any suitable computing device including a processor and storage medium, wherein the processor is capable of executing instructions stored on the storage medium. Computing device 122 may further include a database configured to store patient data, CT data sets including CT images, fluoroscopic data sets including fluoroscopic images and video, fluoroscopic 3D reconstructions, navigation plans, and any other such data. Although not explicitly illustrated, computing device 122 may include inputs, or may otherwise be configured to receive, CT data sets, fluoroscopic images/video, and other data described herein. Additionally, computing device 122 includes a display configured to display graphical user interfaces. Computing device 122 may be connected to one or more networks through which one or more databases may be accessed.
- With respect to a planning phase, computing device 122 utilizes previously acquired CT image data for generating and viewing a three-dimensional model or rendering of the patient “P”'s airways, enables the identification of a target on the three-dimensional model (automatically, semi-automatically, or manually), and allows for determining a pathway through the patient “P”'s airways to tissue located at and around the target. More specifically, CT images acquired from previous CT scans are processed and assembled into a three-dimensional CT volume, which is then utilized to generate a three-dimensional model of the patient “P”'s airways. The three-dimensional model may be displayed on a display associated with computing device 122, or in any other suitable fashion. Using computing device 122, various views of the three-dimensional model, or enhanced two-dimensional images generated from the three-dimensional model, are presented. The enhanced two-dimensional images may possess some three-dimensional capabilities because they are generated from three-dimensional data. The three-dimensional model may be manipulated to facilitate identification of a target on the three-dimensional model or two-dimensional images, and a selection of a suitable pathway through the patient “P”'s airways to access tissue located at the target can be made. Once selected, the pathway plan, three-dimensional model, and images derived therefrom can be saved and exported to a navigation system for use during the navigation phase or phases.
- With respect to the navigation phase, a six degrees-of-freedom electromagnetic locating or tracking system 114, or other suitable system for determining location or position, is utilized for performing registration of the images and the pathway for navigation, although other configurations are also contemplated. Tracking system 114 includes the tracking module 116, a plurality of reference sensors 118, and the transmitter mat 120 (including the markers). Tracking system 114 is configured for use with a locatable guide 110 and sensor 104. As described above, locatable guide 110 and sensor 104 are configured for insertion through catheter 102 into a patient “P”'s airways (either with or without bronchoscope 108) and are selectively lockable relative to one another via a locking mechanism.
Transmitter mat 120 is positioned beneath patient “P.”Transmitter mat 120 generates an electromagnetic field around at least a portion of the patient “P” within which the position of a plurality ofreference sensors 118 and thesensor 104 can be determined with use of atracking module 116. A secondelectromagnetic sensor 126 may also be incorporated into the end of thecatheter 102.Sensor 126 may be a five degree of freedom (5 DOF) sensor or a six degree of freedom (6 DOF) sensor. One or more ofreference sensors 118 are attached to the chest of the patient “P.” The six degrees of freedom coordinates ofreference sensors 118 are sent to computing device 122 (which includes the appropriate software) where they are used to calculate a patient coordinate frame of reference. Registration is generally performed to coordinate locations of the three-dimensional model and two-dimensional images from the planning phase, with the patient “P”'s airways as observed through thebronchoscope 108 and allow for the navigation phase to be undertaken with precise knowledge of the location of thesensor 104, even in portions of the airway where thebronchoscope 108 cannot reach. - Registration of the patient “P”'s location on the
transmitter mat 120 is performed by movingsensor 104 through the airways of the patient “P.” More specifically, data pertaining to locations ofsensor 104, whilelocatable guide 110 is moving through the airways, is recorded usingtransmitter mat 120,reference sensors 118, andtracking system 114. A shape resulting from this location data is compared to an interior geometry of passages of the three-dimensional model generated in the planning phase, and a location correlation between the shape and the three-dimensional model based on the comparison is determined, e.g., utilizing the software oncomputing device 122. In addition, the software identifies non-tissue space (e.g., air filled cavities) in the three-dimensional model. The software aligns, or registers, an image representing a location ofsensor 104 with the three-dimensional model and/or two-dimensional images generated from the three-dimension model, which are based on the recorded location data and an assumption thatlocatable guide 110 remains located in non-tissue space in the patient “P”'s airways. Alternatively, a manual registration technique may be employed by navigating thebronchoscope 108 with thesensor 104 to pre-specified locations in the lungs of the patient “P”, and manually correlating the images from the bronchoscope to the model data of the three-dimensional model. - Though described herein with respect to EMN systems using EM sensors, the instant disclosure is not so limited and may be used in conjunction with flexible sensor, ultrasonic sensors, or without sensors. Additionally, the methods described herein may be used in conjunction with robotic systems such that robotic actuators drive the
catheter 102 orbronchoscope 108 proximate the target. - Following registration of the patient “P” to the image data and pathway plan, a user interface is displayed in the navigation software which sets for the pathway that the clinician is to follow to reach the target. Once
catheter 102 has been successfully navigated proximate the target as depicted on the user interface, the locatable guide 110 may be unlocked from catheter 102 and removed, leaving catheter 102 in place as a guide channel for guiding medical devices including, without limitation, optical systems, ultrasound probes, marker placement tools, biopsy tools, ablation tools (i.e., microwave ablation devices), laser probes, cryogenic probes, sensor probes, and aspirating needles to the target.

- A medical device may then be inserted through
catheter 102 and navigated to the target or to a specific area adjacent to the target. Upon achieving a position proximate the target, e.g., within about 2.5 cm, a sequence of fluoroscopic images may then be acquired via fluoroscopic imaging device 124 according to directions displayed via computing device 122. A fluoroscopic 3D reconstruction may then be generated via computing device 122. The generation of the fluoroscopic 3D reconstruction is based on the sequence of fluoroscopic images and the projections of the structure of markers incorporated with transmitter mat 120 on the sequence of images. One or more slices of the 3D reconstruction may then be generated based on the pre-operative CT scan and via computing device 122. The one or more slices of the 3D reconstruction and the fluoroscopic 3D reconstruction may then be displayed to the user on a display via computing device 122, optionally simultaneously. The slices of the 3D reconstruction may be presented on the user interface in a scrollable format where the user is able to scroll through the slices in series. The user may then be directed to identify and mark the target while using the slice of the 3D reconstruction as a reference. The user may also be directed to identify and mark the medical device in the sequence of 2D fluoroscopic images. An offset between the location of the target and the medical device may then be determined or calculated via computing device 122. The offset may then be utilized, via computing device 122, to correct the location of the medical device on the display with respect to the target and/or correct the registration between the three-dimensional model and tracking system 114 in the area of the target and/or generate a local registration between the three-dimensional model and the fluoroscopic 3D reconstruction in the target area.
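The offset-based correction described above reduces to simple vector arithmetic once the target and the medical device have both been marked in the fluoroscopic 3D reconstruction. A minimal sketch (function names are illustrative, not the system's API):

```python
import numpy as np

def local_offset(target_fluoro, device_fluoro):
    """Offset between the user-marked target and the user-marked medical
    device, both expressed in the fluoroscopic 3D reconstruction's frame."""
    return np.asarray(target_fluoro, float) - np.asarray(device_fluoro, float)

def corrected_device_position(target_ct, offset):
    """Shift the device's displayed CT-frame position so that its relation
    to the target matches the fluoroscopically measured offset."""
    return np.asarray(target_ct, float) - offset
```

The same offset can equally be applied as a local correction to the registration between the three-dimensional model and the tracking system in the target area.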
FIGS. 3-5 depict a virtual environment viewable via the headset 20 to simulate the system 100 depicted in FIG. 2. Every component described above with respect to the actual system 100 is recreated in the virtual environments depicted in FIGS. 3-5. As will be described in greater detail below, these components in the virtual environment can be grasped, manipulated, and utilized in the virtual environment in connection with the headset 20. The headset 20 includes a variety of sensors which, either alone or in combination with one or more handpieces (not shown), detect movement of the user's hands and head as the user moves through real space and translate these movements into the virtual reality depicted to the user in the headset 20.

-
FIG. 6 depicts a variety of workflows for the system 100, which can be performed in the virtual environment utilizing the headset 20 and the computer 30. Each of these workflows is described in detail below or has been described in connection with the system 100, above. These workflows result in a series of displays in the headset 20 being presented to the user 10 as if they were undertaking a case and deploying an actual system 100. As a result, the user 10 can train without having to interact with an actual patient. Further, the computer 30 may collect data with which the performance of a user can be graded. In this way, a user can work in the virtual reality environment and undertake any number of procedures, learn the implementation details of the system 100, and be evaluated by experts without tying up equipment in the bronchoscopy suite and without the need to engage an actual patient.

- Further, the
VR headset 20 and computer 30 can include an application that enables another user to be in the virtual environment. The second user may be another clinician who is demonstrating a case, a salesperson, or an assistant who will be assisting the user in an actual case and needs to understand the functionality and uses of all the components of the system 100 prior to actual use.

- As noted above, the virtual environment enables a user wearing a
headset 20 to perform virtual methods of navigation of a virtual representation of catheter 102 in a virtual patient. FIG. 7 describes these methods. In step S300, a user interface, which is depicted on a virtual representation of the computer 122, presents the clinician with a view (not shown) for the selection of a patient. The clinician may enter patient information such as, for example, the patient name or patient ID number, into a text box to select a patient on which to perform a navigation procedure. Alternatively, the patient may be selected from a drop-down menu or by other similar methods of patient selection. Once the patient has been selected, the user interface may present the clinician with a view (not shown) including a list of available navigation plans for the selected patient. In step S302, the clinician may load one of the navigation plans by activating the navigation plan. The navigation plans may be imported from a procedure planning software, described briefly above.

- Once the patient has been selected and a corresponding navigation plan has been loaded, the user interface presents the clinician with a patient details view (not shown) in step S304 which allows the clinician to review the selected patient and plan details. Examples of patient details presented to the clinician in the timeout view may include the patient's name, patient ID number, and birth date. Examples of plan details include navigation plan details, automatic registration status, and/or manual registration status. For example, the clinician may activate the navigation plan details to review the navigation plan and may verify the availability of automatic registration and/or manual registration. The clinician may also activate an edit button (not shown) to edit the loaded navigation plan from the patient details view. Activating the edit button (not shown) of the loaded navigation plan may also activate the planning software described above.
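The selection flow of steps S300-S302 can be sketched as a small lookup, purely as an illustration; `PLAN_LIBRARY` and both function names are hypothetical stand-ins for the planning software's actual interface:

```python
# Hypothetical patient-to-plans store; the real system imports plans
# from the procedure planning software described in the text.
PLAN_LIBRARY = {
    "PAT-001": ["right-upper-lobe plan", "left-lower-lobe plan"],
    "PAT-002": ["lingula plan"],
}

def available_plans(patient_id):
    """List the navigation plans stored for the selected patient (step S300)."""
    return PLAN_LIBRARY.get(patient_id, [])

def load_navigation_plan(patient_id, plan_name):
    """Activate one of the selected patient's plans (step S302)."""
    if plan_name not in available_plans(patient_id):
        raise ValueError(f"no plan {plan_name!r} for patient {patient_id!r}")
    return {"patient": patient_id, "plan": plan_name, "loaded": True}
```
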
Once the clinician is satisfied that the patient and plan details are correct, the clinician proceeds to navigation setup in step S306. Alternatively, medical staff may perform the navigation setup prior to or concurrently with the clinician selecting the patient and navigation plan.
- Once setup is complete, the user interface presents the clinician with a view 400 (
FIG. 8) for registering the location of LG 110 relative to the loaded navigation plan. In step S308 the clinician prepares for registration by inserting bronchoscope 108 with catheter 102, LG 110, and EM sensor 104 into the virtual patient's airway until the distal ends of the LG 110, the EM sensor 104, and bronchoscope 108 are positioned within the patient's trachea, for example, as shown in FIG. 8. As shown in FIG. 8, view 400 presents a clinician with a video feed 402 from bronchoscope 108 and a lung survey 404. Video feed 402 from bronchoscope 108 provides the clinician with a real-time video of the interior of the patient's airways at the distal end of bronchoscope 108. Video feed 402 allows the clinician to visually navigate through the airways of the lungs.
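Automatic registration, as described earlier, correlates the shape traced by the sensor with the interior geometry of the three-dimensional model. One classical way to compute the rigid alignment is the Kabsch/SVD step below; this is a minimal sketch that assumes point correspondences between the sensor path and centerline samples are already established (a full system would establish them iteratively, ICP-style):

```python
import numpy as np

def kabsch_register(sensor_pts, model_pts):
    """One rigid-alignment step between the recorded sensor path and
    matched samples of the airway-model centerline.

    Both inputs are (N, 3) arrays with row-wise correspondence.
    Returns (R, t) such that R @ p + t maps sensor space into model space.
    """
    P = np.asarray(sensor_pts, float)
    Q = np.asarray(model_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```
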
Lung survey 404 provides the clinician with indicators 406 for the trachea 408 and each region of the patient's lungs. Lung survey 404 may also be modified for patients in which all or a part of one of the lungs is missing, for example, due to prior surgery.

- During registration, the clinician advances
bronchoscope 108 and LG 110 into each region of the lungs until each corresponding indicator 406 is activated. For example, the corresponding indicator may display a "check mark" symbol 417 when activated. As described above, the location of the EM sensor 104 of LG 110 relative to each region is tracked based on the interaction between the EM sensor 104 of LG 110 and the electromagnetic field generator 120, and an indicator 406 may be activated when the EM sensor 104 enters a corresponding region.

- In step S310, once the
indicators 406 for the trachea 408 and each region have been activated as shown in FIG. 9, the clinician activates the "done" button 418 using, for example, a mouse or foot pedal, and proceeds to verification of the registration in step S312. Although each indicator 406 is shown as activated in FIG. 9, the clinician may alternatively achieve registration with the currently loaded navigation plan while one or more of the regions remain inactive, and may activate the "done" button 418 to proceed to registration verification in step S312. Sufficient registration may depend on both the patient's lung structure and the currently loaded navigation plan where, for example, only the indicators 406 for the trachea 408 and one or more of the regions need be activated.

- After registration with the navigation plan is complete, the user interface presents the clinician with a
view 420 for registration verification in step S312. View 420 presents the clinician with an LG indicator 422 (depicting the location of the EM sensor 104) overlaid on a displayed slice 424 of the CT images of the currently loaded navigation plan, for example, as shown in FIG. 10. Although the slice 424 displayed in FIG. 10 is from the coronal direction, the clinician may alternatively select one of the axial or sagittal directions by activating a display bar 426. As the clinician advances the LG 110 and bronchoscope 108 through the patient's airways, the displayed slice 424 changes based on the position of the EM sensor 104 of LG 110 relative to the registered 3D volume of the navigation plan. The clinician then determines whether the registration is acceptable in step S314. Once the clinician is satisfied that the registration is acceptable, for example, that the LG indicator 422 does not stray from within the patient's airways as presented in the displayed slice 424, the clinician accepts the registration by activating the "accept registration" button 428 and proceeds to navigation in step S316. Although registration has now been completed by the clinician, the EMN system 10 may continue to track the location of the EM sensor 104 of LG 110 within the patient's airways relative to the 3D volume and may continue to update and improve the registration during the navigation procedure.

- During navigation, a user interface on
computer 122 presents the clinician with a view 450, as shown, for example, in FIG. 11. View 450 provides the clinician with a user interface for navigating to a target 452 (FIG. 12) including a central navigation tab 454, a peripheral navigation tab 456, and a target alignment tab 458. Central navigation tab 454 is primarily used to guide the bronchoscope 108 through the patient's bronchial tree until the airways become small enough that the bronchoscope 108 becomes wedged in place and is unable to advance. Peripheral navigation tab 456 is primarily used to guide the catheter 102, EM sensor 104, and LG 110 toward target 452 (FIG. 12) after the bronchoscope 108 is wedged in place. Target alignment tab 458 is primarily used to verify that LG 110 is aligned with the target 452 after LG 110 has been navigated to the target 452 using the peripheral navigation tab 456. View 450 also allows the clinician to select a target 452 to navigate to by activating a target selection button 460.

- Each
tab presents one or more windows 462 that assist the clinician in navigating to the target. The number and configuration of windows 462 to be presented is configurable by the clinician prior to or during navigation through the activation of an "options" button 464. The view displayed in each window 462 is also configurable by the clinician by activating a display button 466 of each window 462. For example, activating the display button 466 presents the clinician with a list of views for selection by the clinician including a bronchoscope view 470 (FIG. 11), virtual bronchoscope view 472 (FIG. 11), local view 478 (FIG. 12), MIP view (not explicitly shown), 3D map dynamic view 482 (FIG. 11), 3D map static view (not explicitly shown), sagittal CT view (not explicitly shown), axial CT view (not shown), coronal CT view (not explicitly shown), tip view 488 (FIG. 12), 3D CT view 494 (FIG. 14), and alignment view 498 (FIG. 14).
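The configurable windows 462 can be modeled as per-tab lists of view names, with the display button 466 swapping the view shown in a window. A hypothetical sketch (the view names mirror the list above, but the functions and data layout are illustrative only):

```python
# Views selectable via the display button, per the list in the text.
AVAILABLE_VIEWS = [
    "bronchoscope", "virtual bronchoscope", "local", "MIP",
    "3D map dynamic", "3D map static", "sagittal CT", "axial CT",
    "coronal CT", "tip", "3D CT", "alignment",
]

def set_window_view(tab_config, window_index, view):
    """Return a new tab configuration with one window's view replaced,
    as when the clinician activates that window's display button."""
    if view not in AVAILABLE_VIEWS:
        raise ValueError(f"unknown view: {view!r}")
    tab_config = list(tab_config)          # leave the original untouched
    tab_config[window_index] = view
    return tab_config

# Example default layout for the peripheral navigation tab.
peripheral_tab = ["local", "3D map dynamic", "bronchoscope", "tip"]
```
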
Bronchoscope view 470 presents the clinician with a real-time image received from the bronchoscope 108, as shown, for example, in FIG. 11. Bronchoscope view 470 allows the clinician to visually observe the patient's airways in real time as bronchoscope 108 is navigated through the patient's airways toward target 452.

-
Virtual bronchoscope view 472 presents the clinician with a 3D rendering 474 of the walls of the patient's airways generated from the 3D volume of the loaded navigation plan, as shown, for example, in FIG. 11. Virtual bronchoscope view 472 also presents the clinician with a navigation pathway 476 providing an indication of the direction along which the clinician will need to travel to reach the target 452. The navigation pathway 476 may be presented in a color or shape that contrasts with the 3D rendering 474 so that the clinician may easily determine the desired path to travel.

-
Local view 478, shown in FIG. 12, presents the clinician with a slice 480 of the 3D volume located at and aligned with the distal tip of LG 110. Local view 478 shows target 452, navigation pathway 476, and surrounding airway branches overlaid on slice 480 from an elevated perspective. The slice 480 that is presented by local view 478 changes based on the location of EM sensor 104 relative to the 3D volume of the loaded navigation plan. Local view 478 also presents the clinician with a visualization of the distal tip 93 of LG 110 in the form of a virtual probe 479. Virtual probe 479 provides the clinician with an indication of the direction that distal tip 93 of LG 110 is facing so that the clinician can control the advancement of the LG 110 in the patient's airways. For example, as the clinician manipulates the handle of the catheter 102, the LG 110 locked into position relative thereto rotates, and the orientation of the distal end 479 a of virtual probe 479 also rotates relative to the displayed slice 480 to allow the clinician to guide the LG 110 and catheter 102 through the patient's airways. The local view 478 also provides the clinician with a watermark 481 that indicates to the clinician the elevation of the target 452 relative to the displayed slice. For example, as seen in FIG. 12, the majority of the target 452 is located below watermark 481 and may, for example, be displayed as having a dark color such as a dark green, while a smaller portion of target 452 located above watermark 481 may be displayed, for example, as having a light color such as a light green. Any other color scheme which serves to indicate the difference between the portion of target 452 disposed above watermark 481 and the portion of target 452 disposed below watermark 481 may alternatively be used.

- The MIP view (not explicitly shown), also known in the art as a Maximum Intensity Projection view, is a volume rendering of the 3D volume of the loaded navigation plan.
The MIP view presents a volume rendering that is based on the maximum intensity voxels found along parallel rays traced from the viewpoint to the plane of projection. For example, the MIP view enhances the 3D nature of lung nodules and other features of the lungs for easier visualization by the clinician.
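For an axis-aligned viewpoint on a regular volume, the MIP computation itself is a single reduction: each parallel ray keeps its brightest voxel. A minimal sketch:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: for parallel rays cast along `axis`,
    keep the maximum-intensity voxel met by each ray, so high-density
    features such as nodules stand out in the projected plane."""
    return np.asarray(volume).max(axis=axis)
```

A general viewpoint would first resample the volume along the viewing direction; the reduction step is unchanged.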
- 3D map dynamic view 482 (
FIG. 12) presents a dynamic 3D model 484 of the patient's airways generated from the 3D volume of the loaded navigation plan. Dynamic 3D model 484 includes a highlighted portion 486 indicating the airways along which the clinician will need to travel to reach target 452. The orientation of dynamic 3D model 484 automatically updates based on movement of the EM sensor 104 within the patient's airways to provide the clinician with a view of the dynamic 3D model 484 that is relatively unobstructed by airway branches that are not on the pathway to the target 452. 3D map dynamic view 482 also presents the virtual probe 479 to the clinician as described above, where the virtual probe 479 rotates and moves through the airways presented in the dynamic 3D model 484 as the clinician advances the LG 110 through corresponding patient airways.

- 3D map static view (not explicitly shown) is similar to 3D map
dynamic view 482 with the exception that the orientation of the static 3D model does not automatically update. Instead, the 3D map static view must be activated by the clinician to pan or rotate the static 3D model. The 3D map static view may also present the virtual probe 479 to the clinician as described above for 3D map dynamic view 482. The sagittal, axial, and coronal CT views (not explicitly shown) present slices taken from the 3D volume of the loaded navigation plan in each of the coronal, sagittal, and axial directions.
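These CT views, like local view 478, track the EM sensor by converting its registered position into a slice index along the viewing axis. A minimal sketch, assuming a known volume origin and uniform voxel spacing (both hypothetical parameters, not values from the source):

```python
import numpy as np

def slice_for_sensor(sensor_pos, volume_origin, spacing, n_slices, axis=1):
    """Pick the CT slice to display for the tracked sensor position.

    The sensor's coordinate along the viewing axis (axis=1 for a coronal
    stack here) is converted to voxel units relative to the registered
    volume's origin, then clamped to the valid slice range.
    """
    delta = np.asarray(sensor_pos, float) - np.asarray(volume_origin, float)
    idx = int(round(delta[axis] / spacing[axis]))
    return min(max(idx, 0), n_slices - 1)      # clamp to the volume extent
```
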
Tip view 488 presents the clinician with a simulated view from the distal tip 93 of LG 110, as shown, for example, in FIG. 12. Tip view 488 includes a crosshair 490 and a distance indicator 492. Crosshair 490 may be any shape, size, or color that indicates to the clinician the direction that the distal tip 93 of LG 110 is facing. Distance indicator 492 provides the clinician with an indication of the distance from the distal tip 93 of LG 110 to the center of target 452. Tip view 488 may be used to align the distal tip 93 of LG 110 with the target 452.

- 3D CT view 494 (
FIG. 14) presents the clinician with a 3D projection 496 of the 3D volume located directly in front of the distal tip of LG 110. For example, 3D projection 496 presents high-density structures such as, for example, blood vessels and lesions to the clinician. 3D CT view 494 may also present distance indicator 492 to the clinician as described above for tip view 488.

- Alignment view 498 (
FIG. 14) presents the clinician with a 2D projection 500 of the 3D volume located directly in front of the distal tip of LG 110, for example. 2D projection 500 presents high-density structures such as, for example, blood vessels and lesions. In 2D projection 500, target 452 may be presented as a color, for example, green, and may be translucent. Alignment view 498 may also present distance indicator 492 to the clinician as described above for tip view 488.

- Navigation to a
target 452 will now be described: - Initially, in step S316,
view 450 is presented to the clinician by the user interface with central navigation tab 454 active, as shown, for example, in FIG. 11. Central navigation tab 454 may be the default tab upon initialization of view 450 by the user interface. Central navigation tab 454 presents the clinician with the bronchoscope view 470, virtual bronchoscope view 472, and 3D map dynamic view 482, as described above. Using central navigation tab 454, the clinician navigates bronchoscope 108, LG 110, and catheter 102 toward the target 452 by following the navigation pathway 476 of virtual bronchoscope view 472 along the patient's airways. The clinician observes the progress of bronchoscope 108 in each view, determines whether the airways have become too small to further advance bronchoscope 108 and, if so, wedges the bronchoscope 108 in place. Once the bronchoscope 108 has been wedged in place, the clinician activates peripheral navigation tab 456 using, for example, a mouse or foot pedal, and proceeds to peripheral navigation in step S320.

- During peripheral navigation in step S320,
peripheral navigation tab 456 is presented to the clinician as shown, for example, in FIG. 12. Peripheral navigation tab 456 presents the clinician with the local view 478, 3D map dynamic view 482, bronchoscope view 470, and tip view 488. Peripheral navigation tab 456 assists the clinician with navigation between the distal end of bronchoscope 108 and target 452. As shown in the bronchoscope view 470 in FIG. 12, the clinician extends LG 110 and catheter 102 from the working channel of bronchoscope 108 into the patient's airway toward target 452. The clinician tracks the progress of LG 110, EM sensor 104, and catheter 102 in the local view 478, the 3D map dynamic view 482, and the tip view 488. For example, as described above and shown in FIG. 12, the clinician rotates LG 110, EM sensor 104, and catheter 102 relative to the patient's airways until the tip 479 a of virtual probe 479 is oriented toward the desired airway leading to the target 452. For example, the desired airway may be determined based on the navigation pathway 476 presented in local view 478 and the highlighted portion 486 presented in 3D map dynamic view 482. The clinician then advances LG 110, EM sensor 104, and the catheter 102 into the desired airway and confirms the movement of the EM sensor 104 relative to the target 452 and the patient's airways in the 3D map dynamic view 482 and local view 478. The clinician may also check the location of target 452 on the tip view 488 to determine where the target 452 is relative to the orientation of the LG 110 as LG 110 moves closer to the target 452.

- When the clinician has advanced the distal tip 93 of
LG 110 to target 452, as shown, for example, in FIG. 13, the clinician may decide in step S322 to activate the target alignment tab 458 to confirm alignment with the target 452.

- During target alignment in step S324,
target alignment tab 458 is presented to the clinician as shown, for example, in FIG. 14. Target alignment tab 458 presents the clinician with the local view 478, 3D map dynamic view 482, 3D CT view 494, and alignment view 498. Target alignment tab 458 assists the clinician with alignment of the LG 110 with the target 452. By comparing the 3D and 2D projections of the 3D CT view 494 and alignment view 498 with the position and orientation of the virtual probe 479 in the local view 478 and 3D map dynamic view 482, the clinician may make a determination of whether the LG 110 is aligned with the target 452 and of the relative distance of the LG 110 to the target 452.

- After the clinician determines that the target has been aligned in step S326 using the
target alignment tab 458, or if the clinician decides not to activate the target alignment tab 458 in step S322, the clinician may decide to activate the "mark position" button 502 of either the peripheral navigation tab 456 (FIG. 13) or the target alignment tab 458 (FIG. 14) in step S328 to virtually mark the current position of the virtual probe 479, where the registered position of the virtual probe 479 corresponds to the current location of the LG 110. This mark may be permanently recorded as part of the navigation plan to enable a clinician to return to substantially the same location in subsequent navigations or at a later time in the same procedure, for example, where a biopsy sample has been taken and is determined to be cancerous and in need of immediate treatment.

- Once the clinician has activated the "mark position"
button 502, the user interface presents the clinician with a view 504 providing the clinician with details of the marked position of the virtual probe 479, as shown, for example, in FIG. 15. For example, view 504 provides the clinician with a biopsy or treatment position number 506 and distance to target center 508 for the clinician's review. While view 504 is presented, the clinician may withdraw the LG 110 from catheter 102 of the bronchoscope 108 and insert a tool through catheter 102 in step S330, for example, a biopsy device, a fiducial marking device, an ablation probe, a chemical treatment probe, or other similar tools to sample, mark, and/or treat the target 452. Once the clinician has finished sampling, marking, and/or treating the target 452 using the tool, the clinician withdraws the tool from bronchoscope 108 and inserts LG 110 back into bronchoscope 108. The clinician then activates the "done" button 510 to finish marking the target 452.

- Once the "done"
button 510 has been activated, the user interface presents the clinician with view 450 with one of the tabs active. As shown in FIG. 14, for example, a representation of a virtual marker 512 is presented by target alignment tab 458 in various views including, for example, the 3D map dynamic view 482, local view 478, or any other view described above to indicate to the clinician the location of a previous treatment site. The clinician then determines whether an additional biopsy, marking, or treatment is required for the target 452 in step S332. If additional biopsies are required, the clinician repeats steps S320 through S330. Because the clinician has already navigated to the target 452, the clinician may alternatively repeat only a subset of steps S320 through S330. For example, the clinician may return to the target alignment tab 458 without activating the peripheral navigation tab 456 to continue navigating to the target or aligning the LG 110 with the target for an additional biopsy, marking, or treatment. Alternatively, the clinician may use only the peripheral navigation tab 456 to continue navigating to the target 452 for an additional biopsy or treatment.

- If no additional biopsies or treatments are required, the clinician determines whether there is an additional target planned for navigation by activating the
target selection button 460 in step S334. If an additional target is planned for navigation, the clinician activates the additional target and repeats steps S316 through S332 to navigate to the additional target for biopsy or treatment. If the additional target is in the same lung lobe or region as target 452, the clinician may alternatively repeat only a subset of steps S316 through S332. For example, the clinician may start navigation to the additional target using the peripheral navigation tab 456 (step S320) or the target alignment tab 458 (step S324) without using the central navigation tab 454 (step S316) where the location of the wedged bronchoscope 108 can still provide access to the additional target.

- If there are no other targets, the clinician has finished the navigation procedure and may withdraw the
LG 110, catheter 102, and bronchoscope 108 from the patient. The clinician may then export a record of the navigation procedure in step S336 to a memory associated with computer 122, or to a server or other destination for later review via a network interface.

- During the navigation procedure, the
EM sensor 104 of LG 110 may continuously update registration information such that the registration is continuously updated. In addition, at any time during the navigation procedure the clinician may also review the registration by activating the "options" button 464 and activating a review registration button (not shown). The user interface then presents the clinician with a view 514 as shown, for example, in FIG. 16. View 514 presents the clinician with a 3D model 516 of the patient's bronchial tree generated from the 3D volume of the loaded navigation plan for review of the registration. As shown in FIG. 16, 3D model 516 includes a set of data points 518 that are generated during registration based on the locations to which the sensor 104 of LG 110 has traveled within the patient's airways. The data points 518 are presented on the 3D model 516 to allow the clinician to assess the overall registration of the 3D model 516 with the patient's airways. In addition, during navigation to target 452, a second set of data points 520 is generated based on the locations to which the sensor 104 of LG 110 has traveled on its path to the target 452. Data points 518 and 520 may be color coded, for example, green and purple, respectively, or may have different shapes or other identifying features that allow the clinician to differentiate between data points 518 and 520. Check boxes 522 may be provided to control which sets of data points 518, 520 are displayed on the 3D model 516.

- The above has been described using solely the navigation process without the use of fluoroscopic imaging.
FIG. 17 depicts the process of performing a local registration using system 100. Where further confirmation of position is desired, a local registration process may be undertaken in conjunction with the fluoroscopic imaging device 124. Utilizing the processes described above, at step 1700, a pre-operative CT scan of the target area may be received by the computing device 122. The pre-operative CT scan that is received may include a pathway plan that has already been developed, or the user may generate the pathway plan using computing device 122. At step 1710 a user navigates the catheter 102 and sensor 104 proximate the target using the pathway plan, which may be displayed on computing device 122 as part of the 3D model, described above.

- Once proximate the target (e.g., about 2.5 cm), the user may wish to confirm the exact relative positioning of the
sensor 104 and the target. At step 1720, a sequence of fluoroscopic images of the target area, acquired in real time about a plurality of angles relative to the target area, may be captured by fluoroscopic imaging device 124. The sequence of images may be captured while a medical device is positioned in the target area. In some embodiments, the method may include further steps for directing a user to acquire the sequence of fluoroscopic images. In some embodiments, the method may include one or more further steps for automatically acquiring the sequence of fluoroscopic images. The fluoroscopic images may be two-dimensional (2D) images, a three-dimensional (3D) reconstruction generated from a plurality of 2D images, or slice images of a 3D reconstruction.

- The disclosure refers to systems and methods for facilitating the navigation of a medical device to a target and/or a target area using two-dimensional fluoroscopic images of the target area. The navigation is facilitated by using local three-dimensional volumetric data, in which small soft-tissue objects are visible, constructed from a sequence of fluoroscopic images captured by a fluoroscopic imaging device. The fluoroscopic-based constructed local three-dimensional volumetric data may be used to correct a location of a medical device with respect to a target or may be locally registered with previously acquired volumetric data. In general, the location of the medical device may be determined by a tracking system such as the EM navigation system or another system as described herein. The tracking system may be registered with the previously acquired volumetric data.
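The local volumetric data is constructed from the projections gathered in the fluoroscopic sweep. Purely to illustrate the principle on a 2D slice, the sketch below performs unfiltered backprojection with an idealized parallel-beam geometry; a real reconstruction would filter the projections and use the estimated C-arm poses:

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered backprojection: smear each 1D line profile back across
    a size-by-size slice along its acquisition angle and accumulate.
    `sinogram` is a sequence of 1D projections, one per angle."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0                       # rotation center of the slice
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, ang in zip(sinogram, angles_deg):
        t = np.radians(ang)
        # detector coordinate hit by each pixel for this viewing angle
        s = (xs - c) * np.cos(t) + (ys - c) * np.sin(t) + c
        idx = np.clip(np.round(s).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(angles_deg)
```

With two orthogonal views of a single bright point, the accumulated intensity peaks where the two smeared profiles intersect, which is the essence of triangulating structure from an angular sweep.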
- In some embodiments, receiving a fluoroscopic 3D reconstruction of a body region may include receiving a sequence of fluoroscopic images of the body region and generating the fluoroscopic 3D reconstruction of the body region based on at least a portion of the fluoroscopic images. In some embodiments, the method may further include directing a user to acquire the sequence of fluoroscopic images by manually sweeping the fluoroscope. In some embodiments, the method may further include automatically acquiring the sequence of fluoroscopic images. The fluoroscopic images may be acquired by a standard fluoroscope, in a continuous manner and about a plurality of angles relative to the body region. The fluoroscope may be swept manually, i.e., by a user, or automatically. For example, the fluoroscope may be swept along an angle of 20 to 45 degrees. In some embodiments, the fluoroscope may be swept along an angle of 30±5 degrees. Typically, these images are gathered in a fluoroscopic sweep of the
fluoroscopic imaging device 124 of about 30 degrees (i.e., 15 degrees on both sides of the AP position). As is readily understood, larger sweeps of 45, 60, 90 or even greater angles may alternatively be performed to acquire the fluoroscopic images. - At
step 1730, a three-dimensional reconstruction of the target area may be generated based on the sequence of fluoroscopic images. In some embodiments, the method further comprises one or more steps for estimating the pose of the fluoroscopic imaging device while acquiring each of the fluoroscopic images, or at least a plurality of them. The three-dimensional reconstruction of the target area may then be generated based on the pose estimation of the fluoroscopic imaging device.

- In some embodiments, the markers incorporated with the
transmitter mat 120 may be placed with respect to the patient “P” and the fluoroscopic imaging device 124, such that each fluoroscopic image includes a projection of at least a portion of the structure of markers. The estimation of the pose of the fluoroscopic imaging device while acquiring each image may then be facilitated by the projections of the structure of markers on the fluoroscopic images. In some embodiments, the estimation may be based on detection of a possible and most probable projection of the structure of markers on each image. - In
step 1740, one or more fluoroscopy images are displayed to a user as illustrated in FIG. 18. As depicted in FIG. 18, which is an exemplary screen shot 1850 of a software program running on computing device 122, a slice of the 3D reconstruction 1860 is displayed simultaneously with two thumbnail images 1810 a and 1810 b. As the user moves scroll bar 1820 and indicator 1830, the slice of the 3D reconstruction 1860 depicted on screen shot 1850 changes in accordance with the movement. - In some embodiments, when marking of the target in a slice image of a fluoroscopic 3D reconstruction is desired, generating and using a virtual slice image as a reference may be more advantageous. In some embodiments, when marking of the target in a fluoroscopic 2D image is desired, generating and using a virtual fluoroscopic 2D image may be more advantageous.
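The marker-based pose estimation described above, choosing the most probable projection of the marker structure, can be reduced for illustration to a one-parameter search over candidate sweep angles. Real systems estimate a full 6-DOF pose (for example with a PnP-style solver); the parallel-beam model and all names below are assumptions made for this sketch only.

```python
import math

def estimate_sweep_angle(markers_2d, detected, candidate_angles_deg):
    """Estimate the fluoroscope's sweep angle from projections of a
    known marker structure: for each candidate angle, predict where the
    markers would project (parallel-beam model, one rotational degree
    of freedom) and keep the angle whose prediction best matches the
    detected positions.
    """
    best_angle, best_err = None, float("inf")
    for ang in candidate_angles_deg:
        a = math.radians(ang)
        err = sum(
            (x * math.cos(a) + y * math.sin(a) - d) ** 2
            for (x, y), d in zip(markers_2d, detected)
        )
        if err < best_err:
            best_angle, best_err = ang, err
    return best_angle

# known marker positions on the transmitter mat; their detected
# projections are synthesized here at a 10-degree pose
markers = [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
true = math.radians(10)
detected = [x * math.cos(true) + y * math.sin(true) for x, y in markers]
assert estimate_sweep_angle(markers, detected, range(-20, 21)) == 10
```

The same match-the-projection idea underlies the full 6-DOF case: the pose that best explains the observed marker projections in each frame is taken as that frame's acquisition pose.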
- In accordance with
step 1750, a selection of the target from the fluoroscopic 3D reconstruction is made by the user. As shown in FIG. 18, at two end portions of the fluoroscopic 3D reconstruction displayable on screen shot 1850, the user is asked to “mark the target” in the slice of the 3D reconstruction 1860. These two ends could be any two positions of the fluoroscopic sweep, so long as they are sufficiently angularly separated such that efficient triangulation of the location of the target and the catheter (FIG. 18) can be performed. - As shown in
FIG. 18, the user had previously marked the target at one end of the sweep, depicted in thumbnail image 1810 a, and is in the process of marking the target at a second end of the sweep, with the current slice of the 3D reconstruction 1860 depicted in thumbnail image 1810 b. At the bottom of the screen shot 1850, two ranges are defined within which the user is to “mark the target” in an image that appears within a range. A second scroll bar 1832 on the bottom of the screen shot 1850 allows the user to view the virtual fluoroscopy images 1860 as a video rather than to scroll through the slices individually using scroll bar 1820. In addition, the software may be configured to automatically jump the user from one “mark the target” range to the other following successful marking in the first “mark the target” range. - As depicted in
FIG. 18, the user has placed a marker 1870 on the slice of the 3D reconstruction 1860. This marker also appears in thumbnail image 1810 b as marker 1880 b. In this manner, at step 1750 the user has marked the location of the target in the fluoroscopic image data collected by the fluoroscopic imaging device 124. - In
step 1760, a selection of the medical device from the three-dimensional reconstruction or the sequence of fluoroscopic images is made. In some embodiments, this selection may be made automatically, and a user either accepts or rejects it. In some embodiments, the selection is made directly by the user. As depicted in FIG. 19, the user may be asked to mark the location of the catheter 102. FIG. 19 depicts a screen shot 1980 including two actual fluoroscopic images 1982. The user is asked to mark the end of the catheter 102 in each of the actual fluoroscopic images 1982. Each of these images comes from portions of the sweep that correspond to the “mark the target” portions depicted in FIG. 18. - Once both the
catheter 102 and the target are marked at both ends of the sweep, at step 1770, an offset of the catheter 102 with respect to the target may be calculated. The determination of the offset is based on the received selections of the target and the medical device. This offset is used to update the detected position of the catheter 102, and specifically the sensor 104, in the 3D model and the pathway plan that was created to navigate to the target. - Typically, at this point in the procedure, the user has managed to navigate the
catheter 102 to within 2-3 cm of the target, for example. With the updated position provided by the fluoroscopic data collection and position determination, the user can have confidence in reaching the target while traversing this last distance. - In heretofore known systems, the
sensor 104 would now be removed from the catheter 102 and the final approaches to the target in navigation would proceed completely blind. Recently, systems have been devised that allow for the incorporation of a sensor 126 that can provide 5 DOF location information of the catheter 102 after the sensor 104 is removed. - Following removal of the
sensor 104, a tool, such as a needle or coring biopsy tool, brushes, ablation devices (e.g., RF, microwave, chemical, radiological, etc.), clamping devices, and others, may be advanced down the catheter 102. In one example, a biopsy tool (not shown) is advanced down the catheter 102 and, using the sensor 126, the user navigates the final 2-3 cm, for example, to the target and can advance the biopsy tool into the target as it appears in the 3D model. However, despite the confidence provided by updating relative locations of the target and the catheter 102, there are times where a user may wish to confirm that the biopsy tool is in fact placed within the target. - To undertake this tool-in-target confirmation, a second fluoroscopic imaging process can be undertaken. As part of this process, the user can select a “tool-in-target” tab on the user interface at
step 2010 of the method 2000 of FIG. 20. This selection can initiate the fluoroscopic imaging device 124. At step 2010, the user may also be directed to perform a fluoroscopic sweep similar to that described previously. The images collected during the fluoroscopic sweep can be processed to form a fluoroscopic 3D reconstruction at step 2020. - A slice of the 3D reconstruction is generated from the fluoroscopic 3D reconstruction and output as
screen shot 2100 as depicted in FIG. 21 at step 2030. The screen shot 2100 is similar to that of FIG. 18. The user is able to use the scroll bar 2102 to scroll through the slices of the 3D reconstruction 2104. The biopsy tools are typically made of metal and other materials which resolve quite well in fluoroscopic images. As a result, the user can scroll through the slices of the 3D reconstruction 2104 all along the sweep to ensure that the biopsy tool is in the target indicated by marker 2108 at step 2040. - As an alternative to step 2040, where the user scrolls through the slices of the
3D reconstruction 2104, the user may be requested to mark the position of the catheter 102, similar to that described in step 1760 (FIG. 17), in a 2D fluoroscopic image acquired by the fluoroscopic imaging device 124 as part of its fluoroscopic sweep from step 2010. This step may also be performed automatically via image processing techniques by computing device 122. After receiving the marked position of the catheter 102, the 3D coordinates of the marked position can be determined by the computing device 122. Accordingly, the computing device 122 can identify a slice of the 3D reconstruction 2104 that best displays the catheter 102. Additionally, or alternatively, this marking of the position of the catheter 102 in the 2D fluoroscopic image provides an indication of the position of the catheter 102 in the 3D reconstruction 2104, which can then be presented to the user for review. Still further, the user may be asked to mark the target in the 2D fluoroscopic image, and a 3D relative position of the target and the catheter 102 or biopsy tool 2106 can be calculated and displayed in the 3D reconstruction 2104, similar to that described above with reference to FIG. 17. - In addition to the above, with respect to depiction of the
catheter 102 in the slice images of the 3D reconstruction 2104, image processing techniques can be employed to enhance the display of the catheter 102 or biopsy tool 2106 extending therethrough. These techniques may further be employed to remove artifacts that might be the result of the reconstruction process. - If the user is satisfied with the position of the
biopsy tool 2106, the user can click the next button to confirm placement of the tool-in-target at step 2050, thereby ending the tool-in-target confirmation method 2000 of FIG. 20. Then, the user can proceed back to the 3D model and perform navigation to other targets. Additionally, if more biopsies are desired, following each movement of the catheter 102 and biopsy tool 2106, one could perform another tool-in-lesion confirmation starting again at step 2010. Though not necessarily required, the system may prompt the user to identify the location of the target in the images; most users, however, can perform their confirmation of the tool-in-lesion without requiring a separate indication of the target. - All the heretofore described techniques of insertion of a bronchoscope, registration, navigation to a target, local registration, removal of an LG, insertion of biopsy tools, confirmation of tools in target, etc., can be experienced by a clinician or user in a virtual reality environment by employing the
headset 20 and computer 30. As noted above, upon donning the headset 20, the user may experience a virtual environment as depicted in FIGS. 3-5, which accurately represents an actual bronchoscopic suite or a surgical room. That virtual environment includes all the equipment necessary to virtually undertake a procedure as described above. All the screen shots of the navigation software, the CT images, the 3D models derived therefrom, and the fluoroscopic images are displayed on a display which is viewable to the clinician in the virtual environment. - As an example, prior to initiation of a procedure, the clinician may observe that the
bronchoscope 108, catheter 102, and LG 104 are set out on a tools table as depicted in FIG. 22. Upon approaching a patient, the virtual environment may present various types of guidance to the user; for example, in FIG. 23, there are displayed instructions to “pick up the bronchoscope.” Using well-established VR techniques, the physical movements of the user's hands and head are detected by the headset 20 and translated into the virtual environment, allowing the user to visually experience the actions that will be required in an actual case. - Following picking up the bronchoscope, the virtual environment may prompt the user to insert the bronchoscope into the patient as shown in
FIG. 24. Additional guidance, including wireframe shapes and the like, may be employed to assist the user in understanding where to place the bronchoscope relative to the patient. As depicted in FIG. 25, the representation of the bronchoscope, a wireframe for its placement and advancement, and a representation of the software application for bronchoscopic navigation are displayed. At this position, the user experiences nearly all the sensations that they would otherwise experience during an actual procedure. Following the prompts in the virtual environment and utilizing the software which is presented on the computer in the image, and which corresponds to an actual computer 122 of FIG. 2, the case can be virtually performed. At each point, additional prompts in the virtual environment can be presented to the user, as seen in FIGS. 26-29, and the user interface on the computer in the virtual environment is updated. The user, as depicted in FIG. 27, can interact with the software running on the computer in the virtual environment just as they might the actual software running on computer 122. The case proceeds until proximate a target, and the user is prompted to perform the local registration using the virtual fluoroscopic imager and all the other aspects of the software until a biopsy is acquired or the target is treated, as described herein above. All the workflows in FIG. 6 are enabled in the virtual environment. - The virtual environment may include a variety of virtual cases on which the user can practice. These are loaded onto
computer 30 and selected by the user. Each case includes its unique underlying CT image data that is displayed as part of the user interface on the virtual computer and may include unique tools or prompts to the user. In addition, the unique case may include its own unique fluoroscopic data, if necessary, for local registration and the like. - By performing the virtual navigations, a user or clinician gains experience with the equipment and the workflows without the need of an actual patient. Further, as part of that experience, data can be collected regarding the proficiency of the user, and an assessment can be made which helps ensure that clinicians are proficient with the
system 100 before performing a live procedure. This may also be used for refresher courses and continuing-education evaluations to ensure that clinicians are current on the procedure and equipment. These virtual environments may also provide a means of rolling out new features and presenting them to clinicians in a manner in which they can not only hear and see them but actually experience them. - In another aspect of the disclosure, for each case there are certain metrics and thresholds programmed into the virtual environment that are used to assess the performance of the user. As the user experiences the case, the
computer 30 logs the performance of the user with respect to these thresholds and metrics. In some embodiments, immediately upon missing one of these thresholds or metrics, the virtual environment displays an indication of the missed threshold and provides guidance on how to correct it. Additionally or alternatively, the virtual environment may store these missed thresholds and metrics and present them to the user at the end of a virtual case as part of a debrief session to foster learning. Still further, all the prompts and guidance can be stopped, and the user's ability to navigate the virtual environment on their own, without any assistance or prompting, can be assessed. It will be appreciated that for training purposes, the first few instances of utilization may be with full prompting from the virtual environment, and as skill is developed these prompts are reduced. Much like airplane pilots, a final assessment before use in the actual bronchoscopic suite with a live patient may involve a solo approach where the user must navigate the case entirely based on their experience and knowledge of the system. - Still further, as noted above, an attending nurse, additional clinician, or sales staff may also be in the virtual procedure to provide guidance or to gain their own experience with the system or to walk a new user through the process and allow them to gain familiarity. Additional uses of the VR system described herein will also be known to those of ordinary skill in the art.
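The threshold-and-metric logging described above might be scaffolded as follows. The metric names, limits, and API are hypothetical illustrations, not the disclosed system's implementation.

```python
class PerformanceLog:
    """Log a trainee's metrics against per-case thresholds: either flag
    a miss immediately or save it for an end-of-case debrief, as the
    text describes. All metric names and limits here are hypothetical.
    """
    def __init__(self, thresholds, immediate_feedback=True):
        self.thresholds = thresholds          # metric -> maximum allowed
        self.immediate_feedback = immediate_feedback
        self.misses = []

    def record(self, metric, value):
        limit = self.thresholds.get(metric)
        if limit is not None and value > limit:
            self.misses.append((metric, value, limit))
            if self.immediate_feedback:
                # in the virtual environment this would trigger on-screen guidance
                return f"{metric}: {value} exceeds limit {limit}"
        return None

    def debrief(self):
        """Misses accumulated for the end-of-case debrief session."""
        return list(self.misses)

log = PerformanceLog({"navigation_time_s": 600, "wall_contacts": 3},
                     immediate_feedback=False)
log.record("navigation_time_s", 450)   # within threshold: nothing logged
log.record("wall_contacts", 5)         # threshold missed
assert log.debrief() == [("wall_contacts", 5, 3)]
```

Toggling `immediate_feedback` captures the two modes in the text: instant correction during early training versus a silent log reviewed at the debrief, with prompting reduced as skill develops.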
- While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.
- From the foregoing and with reference to the various figure drawings, those skilled in the art will appreciate that certain modifications can also be made to the disclosure without departing from the scope of the same.
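As a closing illustration of the local-registration geometry from the detailed description: marking the target and the catheter at two sufficiently angularly separated sweep positions (steps 1750-1770) allows both to be triangulated and their offset computed. The sketch below uses a simplified 2-D parallel-beam model; this is an assumption for illustration only, not the disclosed algorithm, and all names are hypothetical.

```python
import math

def detector_coord(x, y, ang_deg):
    """Detector reading of point (x, y) at sweep angle ang_deg under a
    simplified parallel-beam model (an illustrative assumption)."""
    a = math.radians(ang_deg)
    return x * math.cos(a) + y * math.sin(a)

def triangulate(t1, ang1_deg, t2, ang2_deg):
    """Solve the 2x2 linear system for (x, y) from two detector
    readings. The determinant is sin(ang2 - ang1), which is why the two
    marked sweep positions must be sufficiently angularly separated."""
    a1, a2 = math.radians(ang1_deg), math.radians(ang2_deg)
    det = math.sin(a2 - a1)
    if abs(det) < 1e-6:
        raise ValueError("sweep positions too close for triangulation")
    x = (t1 * math.sin(a2) - t2 * math.sin(a1)) / det
    y = (t2 * math.cos(a1) - t1 * math.cos(a2)) / det
    return x, y

# mark the target and the catheter tip at both ends of a 30-degree sweep
target = (3.0, 4.0)
catheter = (2.5, 4.5)
marks = {p: [detector_coord(*p, a) for a in (-15, 15)] for p in (target, catheter)}
tx, ty = triangulate(marks[target][0], -15, marks[target][1], 15)
cx, cy = triangulate(marks[catheter][0], -15, marks[catheter][1], 15)
offset = (cx - tx, cy - ty)  # correction applied to the tracked position
assert abs(offset[0] + 0.5) < 1e-9 and abs(offset[1] - 0.5) < 1e-9
```

The vanishing determinant as the two angles converge makes concrete the requirement that the two "mark the target" positions be well separated along the sweep.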
Claims (20)
1. A training system for a medical procedure comprising:
a virtual reality (VR) headset, including a processor and a computer readable recording media storing one or more applications thereon, the applications including instructions that, when executed by the processor, perform the steps of:
presenting a virtual environment viewable in the VR headset replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope;
depicting at least one representation of a user's hand in the virtual environment;
providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment;
enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment;
enabling interaction with the bronchoscopic tools via the representation of the user's hand; and
executing a bronchoscopic navigation in the virtual environment, wherein as the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient.
2. A training system for a medical procedure comprising:
a virtual reality (VR) headset; and
a computer operably connected to the VR headset, the computer including a processor and a computer readable recording media storing one or more applications thereon, the applications including instructions that, when executed by the processor, perform the steps of:
presenting a virtual environment viewable in the VR headset replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope;
providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment;
enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; and
executing a bronchoscopic navigation in the virtual environment, wherein as the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient.
3. The training system of claim 2 , wherein the user interface displays one or more navigation plans for selection by a user of the VR headset.
4. The training system of claim 3 , wherein the computer in the virtual environment displays a user interface for performance of a registration of the navigation plan to a patient.
5. The training system of claim 4 , wherein during registration the virtual environment presents a bronchoscope, catheter, and locatable guide for manipulation by a user in the virtual environment to perform the registration.
6. The training system of claim 5 , wherein the virtual environment depicts at least one representation of a user's hands.
7. The training system of claim 6 , wherein the virtual environment depicts the user's hands manipulating the bronchoscope, catheter, or locatable guide.
8. The training system of claim 6 , wherein the virtual environment depicts the user's hands manipulating the user interface on the computer displayed in the virtual environment.
9. The training system of claim 6 , wherein the computer in the virtual environment displays a user interface for performance of navigation of airways of a patient.
10. The training system of claim 9 , wherein the user interface for performance of navigation includes central navigation, peripheral navigation, and target alignment.
11. The training system of claim 10 , wherein the user interface for performance of navigation depicts an updated position of the locatable guide as the bronchoscope or catheter are manipulated by a user.
12. The training system of claim 1 , further comprising a plurality of user interfaces for display on the computer in the virtual environment for performance of a local registration.
13. A method for simulating a medical procedure on a patient in a virtual reality environment, comprising:
presenting in a virtual reality (VR) headset a virtual environment replicating a bronchoscopic suite including a patient, bronchoscopic tools, and a fluoroscope;
providing instructions in the virtual environment viewable in the VR headset for performing a bronchoscopic navigation of the patient in the virtual environment;
enabling interaction with a bronchoscopic navigation software on a computer displayed in the virtual environment; and
executing a bronchoscopic navigation in the virtual environment, wherein as the bronchoscopic navigation is undertaken, a user interface on the computer displayed in the virtual environment is updated to simulate a bronchoscopic navigation on an actual patient.
14. The method of claim 13 , wherein the virtual environment depicts at least one representation of a user's hands.
15. The method of claim 14 , wherein the virtual environment depicts the representation of the user's hands manipulating the bronchoscope, catheter, or locatable guide.
16. The method of claim 14 , wherein the virtual environment depicts the representation of the user's hands manipulating the user interface on the computer displayed in the virtual environment.
17. The method of claim 14 , wherein the computer in the virtual environment displays a user interface for performance of navigation of airways of a patient.
18. The method of claim 17 , wherein the user interface for performance of navigation includes central navigation, peripheral navigation, and target alignment.
19. The method of claim 18 , wherein the user interface for performance of navigation depicts an updated position of a catheter within the patient as the catheter is manipulated by a representation of the user's hands.
20. The method of claim 13 , further comprising a plurality of user interfaces for display on the computer in the virtual environment for performance of a local registration.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/150,918 US20210248922A1 (en) | 2020-02-11 | 2021-01-15 | Systems and methods for simulated product training and/or experience |
EP21156386.1A EP3866140A1 (en) | 2020-02-11 | 2021-02-10 | Systems and methods for simulated product training and/or experience |
CN202110190605.1A CN113257064A (en) | 2020-02-11 | 2021-02-18 | System and method for simulating product training and/or experience |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062975106P | 2020-02-11 | 2020-02-11 | |
US17/150,918 US20210248922A1 (en) | 2020-02-11 | 2021-01-15 | Systems and methods for simulated product training and/or experience |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210248922A1 true US20210248922A1 (en) | 2021-08-12 |
Family
ID=74591755
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/150,918 Abandoned US20210248922A1 (en) | 2020-02-11 | 2021-01-15 | Systems and methods for simulated product training and/or experience |
Country Status (3)
Country | Link |
---|---|
US (1) | US20210248922A1 (en) |
EP (1) | EP3866140A1 (en) |
CN (1) | CN113257064A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160000302A1 (en) * | 2014-07-02 | 2016-01-07 | Covidien Lp | System and method for navigating within the lung |
US20180090029A1 (en) * | 2016-09-29 | 2018-03-29 | Simbionix Ltd. | Method and system for medical simulation in an operating room in a virtual reality or augmented reality environment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11127306B2 (en) * | 2017-08-21 | 2021-09-21 | Precisionos Technology Inc. | Medical virtual reality surgical system |
2021
- 2021-01-15 US US17/150,918 patent/US20210248922A1/en not_active Abandoned
- 2021-02-10 EP EP21156386.1A patent/EP3866140A1/en not_active Withdrawn
- 2021-02-18 CN CN202110190605.1A patent/CN113257064A/en active Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11532132B2 (en) * | 2019-03-08 | 2022-12-20 | Mubayiwa Cornelious MUSARA | Adaptive interactive medical training program with virtual patients |
US20220146703A1 (en) * | 2020-11-11 | 2022-05-12 | Halliburton Energy Services, Inc. | Evaluation and visualization of well log data in selected three-dimensional volume |
US11852774B2 (en) * | 2020-11-11 | 2023-12-26 | Halliburton Energy Services, Inc. | Evaluation and visualization of well log data in selected three-dimensional volume |
Also Published As
Publication number | Publication date |
---|---|
EP3866140A1 (en) | 2021-08-18 |
CN113257064A (en) | 2021-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2019204524B2 (en) | System and method for navigating within the lung | |
US11341692B2 (en) | System and method for identifying, marking and navigating to a target using real time two dimensional fluoroscopic data | |
US11564751B2 (en) | Systems and methods for visualizing navigation of medical devices relative to targets | |
AU2017312764B2 (en) | Method of using soft point features to predict breathing cycles and improve end registration | |
US20210248922A1 (en) | Systems and methods for simulated product training and/or experience | |
EP3689244B1 (en) | Method for displaying tumor location within endoscopic images | |
WO2024079584A1 (en) | Systems and methods of moving a medical tool with a target in a visualization or robotic system for higher yields |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: COVIDIEN LP, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GORDON, BRIAN C.;MOIN, MILAD D.;SPERLING, CHARLES P.;SIGNING DATES FROM 20200415 TO 20200420;REEL/FRAME:055241/0753 |
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |