CN118662231A - Active tracking system and method for electromagnetic navigation bronchoscopy tool with single guide sheath - Google Patents
- Publication number: CN118662231A (application CN202410297047.2A)
- Authority: CN (China)
- Prior art keywords: tool, target, information, distal portion, processor
- Legal status: Pending
Abstract
Systems and methods for accurately navigating a tool to a target through a luminal network use fixed-length tools to minimize the use of radiographic imaging and, thus, radiation exposure. These systems and methods involve receiving Computed Tomography (CT) image data, generating a three-dimensional (3D) model based on the CT image data, displaying the 3D model, and receiving a position of a target in the 3D model. The systems and methods also involve receiving information for a tool to be guided through an Extended Working Channel (EWC) that includes a position sensor, determining a position of the tool after the tool is guided through the EWC and fixed in position relative to the EWC, displaying a virtual tool in the 3D model based on the position information from the position sensor and the tool information, and displaying advancement of the tool toward the target in the 3D model based on the position information and the tool information.
Description
Cross Reference to Related Applications
The present application claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 63/453,105, filed March 18, 2023, the entire contents of which are incorporated herein by reference.
Technical Field
The disclosed technology relates generally to endoluminal navigation and treatment systems and methods that minimize radiation exposure in radiographic imaging when navigating a medical tool to a target.
Background
There are several commonly used medical procedures, such as endoscopic or minimally invasive surgery, for the treatment of various diseases affecting organs including the liver, brain, heart, lungs, gallbladder, kidneys, and bones. Typically, a clinician employs one or more imaging modalities, such as Magnetic Resonance Imaging (MRI), ultrasound imaging, Computed Tomography (CT), or fluoroscopy, to identify and navigate to a region of interest and a target for biopsy or treatment within a patient. In some procedures, pre-operative scanning may be utilized for target identification and intra-operative guidance. In some cases, real-time imaging or intra-operative imaging may also be required to obtain a more accurate current image of the target region and of the endoluminal medical tool used to biopsy or treat tissue in the target region. Furthermore, it may be desirable to display the current position of the medical device relative to the target, together with real-time image data of its surroundings, to navigate the medical device to the target in a safe and accurate manner (e.g., without causing damage to other organs or tissues). However, real-time intra-operative imaging may subject the patient to unnecessary and/or potentially unhealthy amounts of X-ray radiation.
Endoscopic methods have proven to be useful in navigating to a region of interest within a patient's body, and in particular to a region within a luminal network of the body (e.g., the lungs). To implement endoscopic methods and more particularly bronchoscopic methods in the lungs, intrabronchial navigation systems have been developed that use previously acquired MRI data or CT image data to generate three-dimensional (3D) renderings, models or volumes of specific body parts such as the lungs.
A navigation plan is then created using the resulting volume generated by the MRI scan or CT scan to facilitate advancement of a navigation catheter (or other suitable medical tool) through the bronchoscope and bronchial branches of the patient to the region of interest. A positioning or tracking system, such as an Electromagnetic (EM) tracking system, may be used in conjunction with, for example, CT data to facilitate guiding a navigation catheter through a bronchial branch to a region of interest. In some cases, the navigation catheter may be positioned within one of the airways of the branched cavity network adjacent to or within the region of interest to provide access for one or more medical tools.
However, the 3D volume of the patient's lungs generated from a previously acquired scan (e.g., a CT scan) may not provide a sufficient basis for accurately guiding a medical device or tool to a target during a navigation procedure. In some cases, the inaccuracy is caused by deformation of the patient's lungs during the procedure relative to the lungs at the time the previously acquired CT data was obtained. Such deformation (CT-to-body divergence) may be caused by many different factors including, for example, changes in the body when transitioning between sedated and non-sedated states, the bronchoscope changing the patient's pose, the bronchoscope pushing tissue, different lung volumes (e.g., the CT scan is acquired during inhalation while navigation is performed during breathing), a different bed, a different day, etc. Such deformation can cause substantial movement of the target, making it challenging to align the medical tool with the target in order to safely and accurately biopsy or treat the target tissue.
Accordingly, there is a need for systems and methods that account for patient movement during in vivo navigation and biopsy or treatment procedures. Furthermore, in order for a medical tool to be safely and accurately navigated to a remote target by a surgical robotic system or clinician using a guidance system and biopsied or treated with the medical tool, the system should track the tip of the medical tool during patient body movements (e.g., patient chest movements during breathing) while minimizing intraoperative X-ray radiation to which the patient is exposed.
Disclosure of Invention
The techniques of the present disclosure generally relate to positioning a tool within an Extended Working Channel (EWC) such that a distal portion of the tool extends a predetermined distance from the distal portion of the EWC, tracking the distal portion of the tool based on Electromagnetic (EM) signals from an EM sensor disposed on the distal portion of the EWC, and navigating the EWC and the tool together to a target based on the tracked distal portion of the tool. This configuration minimizes or eliminates the use of intraoperative imaging and, thus, minimizes or eliminates exposure to harmful amounts of radiation.
In one aspect, the present disclosure provides a system for navigating to a target via a lumen network of a patient. The system comprises: an Extended Working Channel (EWC) including a position sensor disposed at a distal portion of the EWC; at least one processor; and a memory coupled to the at least one processor. The memory has instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to receive a pre-operative Computed Tomography (CT) image, generate a three-dimensional (3D) model based on the CT image, and display the 3D model in a user interface on a display operatively coupled to the at least one processor. The instructions, when executed by the at least one processor, further cause the at least one processor to receive an indication of a position of the target in the CT image, display the position of the target in the 3D model, and generate a path plan to the target for navigation of the EWC.
The instructions, when executed by the at least one processor, further cause the at least one processor to receive position information from the position sensor as the EWC is navigated through the luminal network of the patient, register the 3D model with the luminal network of the patient based on the position information, display a position of the position sensor within the 3D model that substantially corresponds to the position of the position sensor within the luminal network of the patient, and display navigation of the extended working channel following the path plan to a position near the target. The instructions, when executed by the at least one processor, further cause the at least one processor to receive tool information for a tool disposed within the EWC that is fixed relative to the EWC and whose distal portion extends distally beyond the distal portion of the EWC, determine a position of a portion of the tool based on the position information from the position sensor, display a virtual tool in the 3D model according to the position information and the tool information, and display advancement of the virtual tool toward the target in the 3D model based on the position information and the tool information.
In various aspects, implementations of the system can include one or more of the following features. The tool may include a locatable guide, which may include a second position sensor, an ablation tool, or a biopsy tool. The EWC may include an EWC handle. The tool may include a fixation member that mates with the EWC handle when the tool is disposed within the EWC and a distal portion of the tool extends beyond the distal portion of the EWC. The fixation member may fix the tool in position relative to the EWC. The tool may include a handle for operating the tool.
The distal portion of the EWC may be curved. The instructions, when executed by the at least one processor, may cause the at least one processor to receive location information having six degrees of freedom. The instructions, when executed by the at least one processor, may cause the at least one processor to display a message prompting a user to lock the bronchoscope adapter, receive an intra-operative image, update a relative position of the target and the EWC in the 3D model based on the intra-operative image, and display a message prompting the user to unlock the bronchoscope adapter.
The intra-operative image may be a C-arm fluoroscopic image, a 3D fluoroscopic image, or a cone beam CT image. The tool information may be a tool type, a tool feature, or a tool size. The instructions, when executed by the at least one processor, may cause the at least one processor to determine a location of a distal portion of the tool by projecting the location information based on the tool information, and display a portion of the virtual tool in the 3D model based on the location of the distal portion of the tool.
The system may include a tool memory coupled to the tool and configured to store tool information, and a tool memory reader configured to read the tool information from the tool memory. The at least one processor may be in operative communication with the tool memory reader. The instructions, when executed by the at least one processor, may cause the at least one processor to receive tool information from a tool memory reader.
In another aspect, the present disclosure provides a method for navigating to a target via a luminal network. The method includes receiving a Computed Tomography (CT) image, generating a three-dimensional (3D) model based on the CT image, and displaying the 3D model in a user interface on a display. The method further includes receiving an indication of a position of the target in the CT image, displaying the position of the target in the 3D model, and generating a path plan to the target for navigation of the catheter. The method also includes receiving position information from a position sensor disposed at a distal portion of the catheter, registering the 3D model with a luminal network of the patient based on the position information, and displaying a position of the position sensor within the 3D model.
The method further includes displaying navigation of the catheter following the path plan to a location near the target, and receiving tool information for a tool disposed within the catheter, fixed in position relative to the catheter, and having a distal portion extending distally beyond the distal portion of the catheter. The method further includes determining a position of a portion of the tool based on the position information from the sensor, displaying at least a portion of the tool in the 3D model based on the position information and the tool information, and displaying advancement of the tool toward the target in the 3D model based on the position information and the tool information.
In various aspects, implementations of the method can include one or more of the following features. The method may include displaying the distal portion of the tool while treating, ablating, sampling, or biopsying the target. Determining the position of a portion of the tool may include determining the position of the distal portion of the tool by projecting position information from the sensor to the distal portion of the tool. The method may include receiving intra-operative images of the catheter and the target, and updating the position of the catheter in the 3D model relative to the target based on the intra-operative images.
Receiving the intra-operative image may include receiving a 2D fluoroscopic image, a 3D fluoroscopic image, or a cone beam CT image. The method may include displaying at least a portion of the path plan, at least a portion of the tool, and the target on the intra-operative image. Intra-operative images may be captured at reduced radiation doses.
In another aspect, the present disclosure provides a system that includes a guide catheter, a sensor coupled to the guide catheter, and a tool. The guide catheter is configured for insertion into a luminal network of a patient, and the sensor is configured to sense a position of a distal portion of the guide catheter within the luminal network of the patient. The tool is configured to pass through the guide catheter such that a distal portion of the tool extends distally beyond a distal portion of the guide catheter. The tool is also configured to be fixed relative to the guide catheter during navigation of the guide catheter. The system also includes at least one processor, a display operatively coupled to the at least one processor, and a memory coupled to the at least one processor. The memory has instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to display a virtual target in a 3D model on the display, receive location information from the sensor in the luminal network of the patient, and display navigation of a virtual guide catheter toward the vicinity of the virtual target in the 3D model based on the location information.
The instructions, when executed by the at least one processor, further cause the at least one processor to determine tool information from a tool that passes through the guide catheter and is fixed in position relative to the guide catheter, and determine a position of a distal portion of the tool. The instructions, when executed by the at least one processor, further cause the at least one processor to display a distal portion of the virtual tool in the 3D model based on the location information and the tool information, and display an operation of the virtual tool for performing a procedure on the virtual target in the 3D model based on the updated location information and the tool information.
In various aspects, implementations of the system can include one or more of the following features. The guide catheter may be a smart extended working channel, and the tool may include at least two of a locatable guide, forceps, a biopsy needle, or an ablation antenna. The system may include a bronchoscope adapter configured to lock the guide catheter in place. The instructions, when executed by the at least one processor, may cause the at least one processor to receive intra-operative images, update a position of the virtual target in the 3D model relative to the virtual guide catheter based on the intra-operative images, and display a message prompting a user to unlock the bronchoscope adapter prior to performing the procedure with one of the tools.
The tools may each include a handle, and the length from the distal portion of the handle to the distal portion of the tool may be the same for each tool. The system may include an electromagnetic generator configured to generate an electromagnetic field. The sensor may be an EM sensor configured to sense the electromagnetic field and output an electromagnetic field signal indicative of the location of the EM sensor.
The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the technology described in this disclosure will be apparent from the description and drawings, and from the claims.
Drawings
FIG. 1 is a perspective view illustrating an electromagnetic navigation and imaging system according to the present disclosure;
FIGS. 2 and 3 are flowcharts illustrating examples of methods for navigating a medical tool to a target in accordance with the present disclosure;
FIG. 4 is a diagram illustrating a user interface for visualizing endoluminal navigation of a medical tool to a target via the sEWC in accordance with the present disclosure; and
FIG. 5 is a schematic diagram illustrating a computer system according to the present disclosure.
Detailed Description
In accordance with the systems and methods of the present disclosure, an electromagnetic navigation (EMN) system is used to perform biopsy or treatment of a lung lesion through a bronchoscope. The system uses a smart extended working channel (sEWC) as a guide catheter. The sEWC may include Electromagnetic (EM) sensors for determining the position of the sEWC. The sEWC is parked near the lesion, e.g., a few centimeters from the lesion, and a tool is then extended from the end of the sEWC to sample or treat the lesion. To know the sampling location of the tool, visualization is typically performed using a C-arm fluoroscope. However, hospitals and clinicians desire to reduce radiation from C-arm fluoroscopes as much as possible to minimize long-term risk to the patient.
The systems and methods of the present disclosure perform tool tracking with an EMN system (e.g., an Electromagnetic Navigation Bronchoscopy (ENB) system) while only the tool in use is exchanged. The sEWC, or guide catheter, is directed to a target site (e.g., a lesion site), then parked and locked in place using a Bronchoscope Adapter (BA) that is connected to the bronchoscope. The systems and methods of the present disclosure may involve tools designed to be of a fixed length such that, once inserted into the sEWC, each tool protrudes from the distal portion of the sEWC by a set amount or predetermined distance. By extending the distal portion of each tool a set amount from the distal portion of the sEWC, the amount of deflection of the distal portion of each tool is limited. Thus, the distal portion of each tool is straight and/or rigid, or substantially so, such that at least the distal portion of each tool may be modeled and/or represented on an EMN tracking screen or window based on position information from EM sensors disposed at the distal portion of the sEWC. In some aspects, the distal portion of each tool is designed to be rigid or substantially rigid, for example, by using a suitable material or design configuration, to ensure accurate tracking of each tool relative to the target tissue.
The systems and methods of the present disclosure limit the amount of radiation generated by using C-arm fluoroscopy or other intraoperative radiographic imaging only to guide the sEWC and place it near or at the target. When the clinician wants to take a biopsy or treat the target tissue, the BA is unlocked and the entire sEWC is moved until the tool is in the desired position, at which point a sample of the target tissue may be taken or the target tissue may be treated. By moving the sEWC as a whole, the clinician can take a biopsy using EMN tracking rather than a radiographic imaging modality that emits potentially harmful radiation. This allows tools to be tracked with the EMN system for tissue biopsies and/or tissue treatments, minimizing radiation to the patient and/or clinician, and provides true 3D control of the tool, in contrast to the 2D views of fluoroscopic images captured by a C-arm fluoroscope or of radiographic images captured by other radiographic imaging modalities.
FIG. 1 is a perspective view of an example of a system 100 for facilitating navigation of a medical device (e.g., a catheter) to a target via the airways of the lungs. The system 100 may also be configured to construct fluoroscopy-based 3D volumetric data of the target region from intra-operative 2D radiographic images (e.g., intra-operative 2D fluoroscopic images) to confirm navigation of the sEWC 102 to a desired location near the target region, where a tool may be placed through the sEWC 102 and extended out of it. In aspects, the imaging system 124 of the system 100 may include one or more of a C-arm fluoroscope, a Cone Beam Computed Tomography (CBCT) imaging system, or a 3D fluoroscopic imaging system.
The system 100 may be further configured to facilitate the approach of the medical device or tool to the target area and to determine the position of the medical tool relative to the target by using electromagnetic navigation (EMN) of the sEWC 102. One such EMN system is the ILLUMISITE system currently sold by Medtronic PLC, but other systems for intraluminal navigation are also considered to be within the scope of the present disclosure.
One aspect of the system 100 is a software component for viewing Computed Tomography (CT) image scan data that has been acquired separately from the system 100. Viewing the CT image data allows the user to identify one or more targets, plan a path to an identified target (planning phase), navigate the sEWC 102 to the target using a user interface running on the computer system 122 (navigation phase), and confirm placement of the distal portion of the sEWC 102 near the target using one or more Electromagnetic (EM) sensors 104b, 126 disposed in or on the sEWC 102 at predetermined locations at or near its distal portion. The target may be tissue of interest identified by viewing the CT image data during the planning phase. After the sEWC 102 is navigated near the target, a medical device (such as a biopsy tool, access tool, or treatment tool, e.g., a flexible microwave ablation catheter) is inserted into the sEWC 102 and fixed in position relative to it such that a distal portion of the medical device extends a desired distance 107 beyond the distal end of the sEWC 102, and EM navigation is used to further navigate the sEWC 102 so that the medical device can obtain a tissue sample, provide access to a target site, or treat the target.
As shown in FIG. 1, the sEWC 102 is part of a catheter guide assembly 110. In practice, the sEWC 102 is inserted into the bronchoscope 108 to access the luminal network of patient P. In particular, the sEWC 102 of the catheter guide assembly 110 may be inserted into a working channel of the bronchoscope 108 for navigation through the luminal network of the patient. A bronchoscope adapter 109 is coupled to the proximal portion of the bronchoscope. Bronchoscope adapter 109 may be an EDGE™ bronchoscope adapter, which is currently marketed and sold by Medtronic PLC. Bronchoscope adapter 109 is configured to allow movement of the sEWC 102 through the working channel of the bronchoscope 108 (which may be referred to as an unlocked state of the bronchoscope adapter 109) or to prevent movement of the sEWC 102 through the working channel of the bronchoscope (which may be referred to as a locked state of the bronchoscope adapter 109).
A Locatable Guide (LG) 101a, which may be a catheter and may include a sensor 104a similar to sensor 104b, is inserted into the sEWC 102 and locked in place such that the LG sensor 104a extends a predetermined distance beyond the distal portion of the sEWC 102. The tools 101a-101d, which have the same length, include fixation members 103a-103d such that, when the fixation members 103a-103d of the tools 101a-101d engage (e.g., snap onto) the proximal end portion of the handle 106 of the catheter guide assembly 110, the LG 101a extends beyond the distal tip or end portion of the sEWC 102 by a predetermined distance 107. The predetermined distance 107 may be based on the length of the sEWC 102 and the length between the ends of the handles 105a-105d or the fixation members 103a-103d and the distal portion of the LG 101a or other medical tool 101b-101d. In various aspects, the handles 105a-105d may include control objects, such as buttons or levers, for controlling the operation of the medical tools 101a-101d.
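As a rough numeric illustration of the fixed-length geometry described above, the predetermined distance 107 follows directly from the tool and sEWC lengths. The sketch below is illustrative only; the lengths and names are assumptions, not values from this disclosure.

```python
# Hedged sketch: computing the predetermined protrusion distance 107.
# The lengths below are hypothetical; the disclosure only requires that the
# tool length is fixed so that the protrusion beyond the sEWC tip is fixed.

SEWC_WORKING_LENGTH_MM = 950.0   # assumed: handle 106 to sEWC distal tip
TOOL_SHAFT_LENGTH_MM = 970.0     # assumed: fixation member 103 to tool tip

def protrusion_distance_mm(tool_shaft_len: float, sewc_len: float) -> float:
    """Distance 107 by which the tool tip extends beyond the sEWC distal tip
    once the fixation member is seated against the handle 106."""
    d = tool_shaft_len - sewc_len
    if d < 0:
        raise ValueError("tool too short to exit the sEWC")
    return d

print(protrusion_distance_mm(TOOL_SHAFT_LENGTH_MM, SEWC_WORKING_LENGTH_MM))  # 20.0
```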
In some aspects, the position of the fixation members 103a-103d along the length of the medical tools 101a-101d may be adjustable such that a user may adjust the distance by which the distal end portion of the LG 101a or medical tools 101b-101d extends beyond the distal end portion of the sEWC 102. The position and orientation of the LG sensor 104a relative to a reference coordinate system within the electromagnetic field can be obtained using an application executed by the computer system 122. In some aspects, the sEWC 102 may serve as the LG 101a, in which case the LG 101a may not be used. In other aspects, the sEWC 102 and LG 101a can be used together; for example, the data from sensors 104a and 104b may be fused together. Catheter guide assemblies such as assembly 110 are currently marketed and sold by Medtronic PLC as procedure kits (e.g., the EDGE™ procedure kits) and are contemplated for use with the present disclosure.
The system 100 generally includes: an operating table 112 configured to support a patient P; a bronchoscope 108 configured for insertion into the airways of patient P through the patient's mouth; a monitoring device 114 (e.g., a video display for displaying video images received from a video imaging system of the bronchoscope 108) coupled to the bronchoscope 108; and a tracking system 115 including a tracking module 116, reference sensors 118, and an emitter pad 120 or emitter board. The emitter pad 120 may include embedded markers. The system 100 further includes a computer system 122 on which software and/or hardware is used to facilitate identification of the target, planning of a path to the target, navigation of the medical device to the target, and/or confirmation and/or determination of the placement of the sEWC 102, or of a suitable device passing through it, relative to the target.
As described above, an imaging system 124 capable of acquiring fluoroscopic images or video or CBCT images of the patient P is also included in the system 100. The images, sequences of images, or videos captured by imaging system 124 may be stored within imaging system 124 or transmitted to computer system 122 for storage, processing, and display. Additionally, the imaging system 124 may be moved relative to the patient P such that images may be acquired from different angles or perspectives relative to the patient P to create a sequence of images, such as fluoroscopic video.
The pose of the imaging system 124 relative to the patient P when capturing images may be estimated via markers incorporated in the emitter pad 120, in the operating table 112, or in a pad (not shown) placed between the patient and the operating table 112. The markers are positioned below patient P, between patient P and the operating table 112, and between patient P and the radiation source or sensing unit of the imaging system 124. The markers may have symmetrical or asymmetrical spacing, a repeated pattern, or no pattern at all. The imaging system 124 may include a single imaging system or more than one imaging system. When a CBCT system is employed, the captured images may be used to confirm the location of the sEWC 102 and/or one of the medical tools 101a-101d in the patient, to update the CT-based 3D model, or to replace the pre-operative 3D model with an intra-operative model of the patient's airways and the location of the sEWC 102 in the patient.
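One plausible way to estimate the pose of the imaging system from the pad markers visible in a radiographic frame is a standard perspective-n-point (PnP) solve. The disclosure does not prescribe a particular algorithm; the sketch below is an assumption-laden illustration using OpenCV, with made-up marker coordinates and intrinsics.

```python
# Hedged sketch: estimating imaging-system pose from emitter-pad markers.
import cv2
import numpy as np

# Assumed 3D marker positions on the emitter pad (pad frame, mm) and their
# detected 2D centroids in the radiographic image (pixels).
markers_3d = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0],
                       [50, 50, 0], [25, 75, 0]], dtype=np.float64)
markers_2d = np.array([[312, 240], [420, 238], [310, 348],
                       [422, 350], [368, 402]], dtype=np.float64)

K = np.array([[1100.0, 0, 320.0],   # assumed intrinsics of the imaging chain
              [0, 1100.0, 240.0],
              [0, 0, 1.0]])
dist = np.zeros(5)                  # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(markers_3d, markers_2d, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)      # rotation: pad frame -> detector frame
    print("estimated pose:\n", R, "\n", tvec.ravel())
```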
Computer system 122 may be any suitable computing system including a processor and a storage medium such that the processor is capable of executing instructions stored on the storage medium. Computer system 122 may further include a database configured to store patient data, CT datasets including CT images, CBCT images and datasets, fluoroscopic datasets including fluoroscopic images and video, fluoroscopic 3D reconstruction, navigation planning, and any other such data. Although not explicitly shown, the computer system 122 may include an input or may be otherwise configured to receive a CT dataset, fluoroscopic images or video, and other suitable imaging data. In addition, computer system 122 includes a display configured to display a graphical user interface. The computer system 122 may be connected to one or more networks through which the computer system 122 may access one or more databases.
With respect to the navigation phase, a six-degree-of-freedom electromagnetic positioning or tracking system 115, or another suitable system for determining the position and orientation of the distal portion of the sEWC 102 (e.g., a fiber Bragg grating flexible sensor), may be utilized to register the pre-operative images (e.g., a CT image dataset and the 3D model derived from it) and the planned navigation path with the patient while the patient is on the operating table 112.
In an EMN-type system, the tracking system 115 may include the tracking module 116, the reference sensors 118, and the emitter pad 120 (including markers). The tracking system 115 is configured for use with a locatable guide (particularly the LG sensor 104a). As described above, the medical tool (e.g., locatable guide 101a with LG sensor 104a) is configured for insertion into the airways of patient P (with or without bronchoscope 108) via the sEWC 102, and the two are selectively lockable relative to one another via a locking mechanism (e.g., bronchoscope adapter 109). The emitter pad 120 is positioned below the patient P. The emitter pad 120 generates an electromagnetic field around at least a portion of the patient P within which the positions of the LG sensor 104a, the sEWC sensor 104b, and the reference sensors 118 can be determined using the tracking module 116. An additional electromagnetic sensor 126 may also be incorporated into the end of the sEWC 102. The additional electromagnetic sensor 126 may be a five-degree-of-freedom sensor or a six-degree-of-freedom sensor. One or more reference sensors 118 are attached to the chest of patient P.
Registration refers to a method of correlating the coordinate system of the pre-operative images (and in particular the 3D model derived from them) with the airways of the patient P as observed, for example, through the bronchoscope 108, and allows navigation to proceed with accurate knowledge of the location of the LG sensor 104a in the patient and an accurate depiction of that location in the 3D model. Registration may be performed by moving the LG sensor 104a through the airways of patient P. More specifically, as the locatable guide moves through the airways, data relating to the position of the LG sensor 104a is recorded using the emitter pad 120, the reference sensors 118, and the tracking system 115. The shape resulting from this position data is compared to the interior geometry of the passages of the 3D model generated in the planning phase, and a position correlation between the compared shape and the 3D model is determined, e.g., using software on the computer system 122. Based on the recorded position data and the assumption that the LG remains located in non-tissue space in the airways of the patient P, the software aligns, or registers, an image representing the position of the LG sensor 104a with the 3D model and/or the two-dimensional images generated from the 3D model. Alternatively, a manual registration technique may be utilized by navigating the bronchoscope 108 with the LG sensor 104a to pre-specified locations in the lungs of the patient P and manually correlating the images from the bronchoscope 108 with the model data of the 3D model.
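The survey-style registration described above can be pictured as aligning the cloud of recorded sensor positions to points sampled from the 3D model's airway passages. A minimal rigid iterative-closest-point (ICP) loop is sketched below; this is an illustrative stand-in, not the disclosed product algorithm.

```python
# Hedged sketch: rigid alignment of recorded LG-sensor positions to airway
# centerline points sampled from the 3D model (minimal ICP; illustrative).
import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def register(sensor_pts, centerline_pts, iters=20):
    """Returns R, t mapping patient (sensor) coordinates into model space."""
    tree = cKDTree(centerline_pts)
    R, t = np.eye(3), np.zeros(3)
    moved = np.asarray(sensor_pts, dtype=float).copy()
    for _ in range(iters):
        _, idx = tree.query(moved)                   # nearest model points
        dR, dt = kabsch(moved, centerline_pts[idx])  # incremental rigid fit
        moved = moved @ dR.T + dt
        R, t = dR @ R, dR @ t + dt
    return R, t
```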
Although an EMN system using EM sensors is described herein, the present disclosure is not so limited and may be used in conjunction with flexible sensors, shape sensors (e.g., fiber Bragg grating sensors), ultrasonic sensors, or any other suitable sensor that does not emit harmful radiation. Additionally, the methods described herein may be used in conjunction with robotic systems such that a robotic actuator drives the sEWC 102 or bronchoscope 108 toward the target.
At any point during the navigation process, tools such as the locatable guide 101a, a treatment tool (e.g., microwave ablation tool 101b or forceps 101d), a biopsy tool (e.g., biopsy needle 101c), etc., may be inserted into the sEWC 102 and fixed in position relative to it in order to place one of the tools 101a-101d in proximity to the target using positional information from the sEWC 102. The position of the distal tip or distal portion of any of the tools 101a-101d may be calculated using the position information from the sensors 104b and/or 126 of the sEWC 102.
To ensure accuracy of the position calculation, the tools 101a-101d are each designed to extend a predetermined distance from the distal end of the sEWC 102, and at least the distal portion of each tool 101a-101d extending from the sEWC 102 is designed to be rigid or substantially rigid. The predetermined distance may vary depending on one or more of the design of the tools 101a-101d, the stiffness of the tools 101a-101d, or how each tool 101a-101d interacts with different types of tissue. The tools 101a-101d may be designed or characterized to set the predetermined distance so that deflection is managed (e.g., minimized) and the virtual tools and environment displayed to the clinician are accurate representations of the actual clinical tools and environment.
Calculating the position of the distal portion of any of the tools 101a-101d may include projecting the position information from the sensors 104b and/or 126 distally based on the tool information. The tool information may include one or more of the shape of the tool, the type of the tool, the stiffness of the tool, the type or features of the tissue to be treated by the tool, or the size of the tool.
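In concrete terms, the projection just described can be as simple as offsetting the sensed pose of the sEWC distal portion along its axis by the tool's protrusion distance, with an optional lateral component for an angled tip. The sketch below assumes a six-degree-of-freedom pose (position plus unit direction vectors); the ToolInfo fields are hypothetical names for the categories of tool information listed above.

```python
# Hedged sketch: projecting the tracked sEWC sensor pose to the tool tip.
from dataclasses import dataclass
import numpy as np

@dataclass
class ToolInfo:
    name: str
    protrusion_mm: float        # predetermined distance beyond the sEWC tip
    tip_angle_deg: float = 0.0  # deflection of a pre-bent/angled tip, if any

def project_tool_tip(sensor_pos, sensor_dir, sensor_normal, tool: ToolInfo):
    """sensor_pos: EM sensor position at the sEWC distal portion (mm).
    sensor_dir: unit vector along the sEWC distal axis (from 6-DOF sensing).
    sensor_normal: unit vector defining the bend plane for angled tips."""
    a = np.deg2rad(tool.tip_angle_deg)
    axial = np.cos(a) * np.asarray(sensor_dir, dtype=float)
    lateral = np.sin(a) * np.asarray(sensor_normal, dtype=float)
    return np.asarray(sensor_pos, dtype=float) + tool.protrusion_mm * (axial + lateral)

tip = project_tool_tip([12.0, -40.5, 88.0], [0, 0, 1], [1, 0, 0],
                       ToolInfo("biopsy needle", protrusion_mm=20.0))
```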
With respect to the planning phase, the computer system 122 (or a separate computer system, not shown) utilizes pre-acquired CT image data to generate and view a 3D model or rendering of the airways of patient P, enables identification of the target (automatic, semi-automatic, or manual), and allows determination of a path through the airways of patient P to tissue located at and around the target. More specifically, CT images acquired by CT scanning are processed and assembled into a 3D CT volume, which is then used to generate a 3D model of the airways of patient P. The 3D model may be displayed on a display associated with the computer system 122, or in any other suitable manner. Using the computer system 122, various views of the 3D model, or of enhanced two-dimensional images generated from the 3D model, are presented. The enhanced two-dimensional images may have some 3D capability because they are generated from 3D data. The 3D model may be manipulated to facilitate identification of the target on the 3D model or two-dimensional images, and an appropriate path through the airways of patient P to the tissue located at the target may be selected. Once selected, the path plan, the 3D model, and the images derived from it may be saved and exported to the navigation system for use during the navigation phase(s). The ILLUMISITE software suite currently sold by Medtronic PLC includes one such planning software.
Fig. 2 depicts an example of a method of using the systems described herein. Those skilled in the art will recognize that one or more blocks of the method depicted in fig. 2 may be omitted without departing from the scope of this disclosure. The method 200 begins at block 202 with receiving a pre-operative CT image and generating a 3D model based on the pre-operative CT image. At block 204, the 3D model and the separate CT images (e.g., coronal view, axial view, sagittal view) may be displayed in a user interface, allowing the user to manually examine the CT images and determine the location of the target. Alternatively, the software application may automatically detect the target.
At block 206, the system receives an indication of the location of the target. This may be the result of a user manipulating the 3D model or CT images in the user interface for manual marking, or of the user accepting an automatic identification of the target by the application as accurate. Upon receiving the indication of the target location, the location of the virtual target is displayed in the 3D model on the user interface. Next, at block 208, a path to the target may be generated for navigation of the sEWC 102. As an example, the airway closest to the target may be identified, and once identified, a path from that closest airway to the trachea may be automatically generated. Once the location of the target and the path to the target are determined, the clinician may perform the procedure.
At block 210, after placing the patient P on the operating table 112 and inserting the sEWC 102 and associated sensors 104b, 126 into the airways of the patient P, the application receives one or more sensor signals from one or more of the sensors 104b, 126 that indicate the location of the sEWC 102 as it is navigated through the patient's luminal network. At block 210, the application also registers the 3D model and/or CT images with the luminal network of the patient P. As the sEWC 102 moves through at least a portion of the airways, the position information received from one or more of the sensors 104b, 126 enables registration of the 3D model and/or CT images with the actual lungs of the patient on the operating table 112. Once the patient's lungs are registered with the 3D model and/or CT images, the position of the sEWC 102 (particularly of the sensors 104b, 126) is displayed in the 3D model at block 212. The displayed position substantially corresponds to the actual position of the sEWC 102 within the patient's lungs.
At block 212, as the sEWC 102 advances into the patient P, the positions of the sensors 104b, 126 are displayed in the 3D model, following the path plan to navigate the sEWC 102 to a position near the target. At optional block 214, once the target is approached, the imaging system 124 may be deployed and intra-operative images captured and transmitted to the application. Block 214 may be omitted where the CT-to-body divergence is minimal; in this way, radiation exposure may be reduced. These intra-operative images may be used to determine the position of the sEWC 102 and the sensors 104b, 126 relative to the target. Further, at block 214, the position of the virtual sEWC relative to the target is updated in the 3D model. Next, at block 216, tool information is received for one of the tools 101a-101d disposed within the sEWC 102, fixed relative to the sEWC 102, and having a distal portion extending distally beyond the distal portion of the sEWC 102. For example, the tool information may be received via text entered into a field of the user interface. Alternatively, the tool information may be obtained from a pattern (e.g., a bar code or QR code) on the tool, or from an RFID tag disposed on or in the tool. The tool information may include one or more of the size of the tool, a defining feature or the distal tip of the tool, or a predetermined distance and/or angle between a tip portion or multiple tips (e.g., in the case of forceps) and the distal portion of the sEWC 102.
Then, at block 216, after the tool is guided through the sEWC 102 and fixed in position relative to it, position information and tool information are received, and the position of the tool, e.g., of a portion of the tool, its distal portion, or its tip, is determined based on the position information and tool information from the EM sensor disposed at the distal portion of the sEWC 102. For example, the position of the tool tip may be calculated by projecting the position information from the EM sensor to the tip of the tool based on the distance and/or angle in the tool information. The tool information may also include size information that may be used to construct and display a virtual representation or model of the tool in an EMN display window or screen. At block 218, as navigation continues until the sEWC 102 approaches the target, a virtual tool is displayed in the 3D model based on the position information and the tool information. At block 220, advancement of the virtual tool (e.g., of the distal portion of the tool) toward the virtual target is displayed in the 3D model based on the position information and the tool information. As the sEWC 102 advances, the 3D model may be displayed in a user interface (e.g., user interface 400 of FIG. 4) based on the position information and tool information from one or more of the sensors 104b, 126 of the sEWC 102.
In some aspects, a user interface may be displayed for inputting sEWC and/or medical tool information, which may be used with the position information of one or more of the sensors 104b, 126 of the sEWC 102 to calculate or project the position of the distal portion of one of the medical tools 101a-101d. The user interface may include a prompt and text field for entering an sEWC identification number. The sEWC identification number entered into the text field may be used to search a database for information related to the sEWC 102 corresponding to that identification number. The user interface may also include alternative prompts and text fields for entering information related to the sEWC 102, which may be printed on the sEWC 102 or on packaging associated with the sEWC 102.
The user interface may also include prompt and text fields for entering a tool identifier. The tool identification number entered into the text field may be used to search the database for information related to the tool corresponding to the tool identification number entered into the text field. The user interface may also include alternative prompts and text fields for entering information related to the tool, which may be printed on the tool or a package associated with the tool.
FIG. 3A is a flowchart illustrating an example of a method for acquiring tool information and using the information to calculate the position of the distal portion of a tool. At block 302, tool information is stored in a tool memory (e.g., an electronic chip or RFID tag) coupled to the tool. At block 304, the tool information is read from the tool memory when the tool is disposed within the catheter. Alternatively, the tool information may be obtained from a code (e.g., a QR code) provided on the tool surface. At block 306, the position of the distal portion of the tool is determined by projecting the position information based on the tool information. Then, at block 308, a portion of the virtual tool is displayed in the 3D model based on the position of the distal portion of the tool.
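A minimal sketch of the tool-memory path of FIG. 3A follows, assuming the tool's chip or RFID tag carries a small key-value record; the on-chip payload format is not specified in this disclosure, so the JSON layout and field names here are assumptions. The parsed fields would feed the same tip-projection step sketched earlier.

```python
# Hedged sketch: reading tool information from a tool memory / RFID payload.
import json

def read_tool_info(raw_payload: bytes) -> dict:
    """Parse a (hypothetical) JSON record stored in the tool memory."""
    info = json.loads(raw_payload.decode("utf-8"))
    for field in ("tool_type", "protrusion_mm"):   # minimally required fields
        if field not in info:
            raise ValueError(f"tool memory missing '{field}'")
    return info

payload = b'{"tool_type": "ablation antenna", "protrusion_mm": 20.0, "tip_angle_deg": 0}'
tool_info = read_tool_info(payload)   # then fed into the tip projection step
```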
FIG. 3B is a flowchart illustrating an example of an alternative method for updating a virtual target relative to a virtual tool using an intra-operative image (e.g., an intra-operative C-arm fluoroscopic image). The method may be performed after the sEWC 102 is navigated to the vicinity of the target in order to confirm that the registration between the CT images and the actual luminal network is still accurate. At block 310, an intra-operative image is received. At block 312, the position of the virtual target in the 3D model relative to the virtual guide catheter is updated based on the intra-operative image. Then, at block 314, a message prompting the user to unlock the bronchoscope adapter is displayed prior to performing the procedure with one of the tools 101a-101d.
The 3D modeling and path generation of the present disclosure may be performed by a software application stored in memory on the computer system 122 that, when executed by a processor, performs the various steps described herein to generate output displayed in a user interface. The 3D model is a model of the airways and of the vasculature surrounding the airways and is generated, for example, from pre-operative CT images of the patient's lungs. Using segmentation techniques, a 3D model is defined from the CT image dataset, and the airways are delineated in one color, the veins in a second color, and the arteries in a third color to assist the surgeon in distinguishing portions of the anatomy based on color.
The application generating the 3D model may include a CT image viewer (not shown) that enables a user to view individual CT images (e.g., 2D slice images from the CT image data) prior to generating the 3D model. By viewing the CT images, a clinician or other user can apply their knowledge of human anatomy to identify one or more targets within the patient. The clinician can mark the position of the target or suspected target in the CT images. If the target is identified in, for example, an axial slice CT image, the location may also be displayed in, for example, the sagittal and coronal views. The user can then adjust the identification of the target's edges in all three views to ensure that the entire target is captured. As will be appreciated, other views may be consulted to aid in this process without departing from the scope of the present disclosure. The application uses the position indication provided by the clinician to generate and display an indicator of the target position in the 3D model. In addition to manual marking of target locations, there are various known automatic target identification tools configured to automatically process CT image scans and identify suspected targets.
Once the target is identified, the path planning software allows the user to identify an airway near the target in the 2D slices of the 3D volume, e.g., the CT image dataset. Once the airway is identified, the path planning software may automatically generate a path from the target (in particular, from the identified airway proximate the target) to the trachea. This may be achieved using an algorithm that ensures that the diameter of the airway increases relative to the previous portion of the airway with each advancement of the path from the target toward the trachea. As will be appreciated, the planning software may also automatically identify the airway nearest the target and generate a path from the target to the trachea that is acceptable to the clinician.
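The diameter-monotonic search described above might be sketched as a walk over a segmented airway tree: starting from the airway identified near the target, repeatedly step to an unvisited neighboring branch with an equal or larger lumen diameter until the trachea is reached. The graph fields below are hypothetical; the actual planning software is not disclosed at this level of detail.

```python
# Hedged sketch: target-to-trachea path whose airway diameter never decreases.
def path_to_trachea(airway_graph, start_id, trachea_id):
    """airway_graph: {node_id: {"diameter_mm": float, "neighbors": [ids]}}"""
    path, current = [start_id], start_id
    while current != trachea_id:
        here = airway_graph[current]["diameter_mm"]
        # candidate steps: unvisited neighbors with an equal or wider lumen
        wider = [n for n in airway_graph[current]["neighbors"]
                 if n not in path and airway_graph[n]["diameter_mm"] >= here]
        if not wider:
            raise RuntimeError("no diameter-increasing continuation found")
        current = max(wider, key=lambda n: airway_graph[n]["diameter_mm"])
        path.append(current)
    return path  # ordered from the airway near the target up to the trachea
```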
After identifying the target (e.g., a tumor) and determining a path to the target, the computer system 122 is used to guide navigation of the sEWC 102 to the target. As described above, the first step in this navigation is to register the 3D model and the data identified above with the patient P while the patient is on the operating table 112. After registering patient P with the image data and the path plan, a user interface 400, as shown in FIG. 4, is displayed on the computer system 122, allowing the clinician to advance the sEWC 102 into the airways of patient P while the movement of the sEWC 102 and of the distal portion of the medical tool is replicated in the 3D model.
The user interface 400 includes various features and views common to pulmonary navigation applications. These include a bronchoscopic view 408, which depicts the view from the bronchoscope (when one is employed). The 3D map view 406 provides a perspective of the 3D model of the luminal network and the position of the sEWC 102 as it is navigated through the patient P toward the target. Various buttons enable changing the front view 410 displayed in the user interface 400. These buttons include a central navigation button 411, a peripheral navigation button 401, and a target alignment view button 412.
As the navigation of the sEWC 102 progresses from the central airways to the peripheral airways, the appropriate button may be selected to change the front view 410. This may also occur automatically when the position of the sEWC 102 is detected. As shown in FIG. 4, the target alignment view button 412 has been selected. Once selected, the target window 404 is displayed overlaid on the front view 410. Alternatively, the target window 404 may not be overlaid on any other view of the user interface 400. The target window 404 depicts a crosshair, a virtual target 409, and a distance indicator 418 that displays the distance from the end position of the virtual tool 407 (which is calculated based on the detected end position of the sEWC 102) to the virtual target 409. As shown in the user interface 400 of FIG. 4, the virtual tool 407 has been navigated via the sEWC 102 to within about 5 cm of the virtual target 409 to biopsy or treat another segment of the virtual target 409. Also depicted in the front view 410 is a planned path 420 to the virtual target 409. The virtual target 409 may be segmented to show segments that have already been biopsied or otherwise treated by the virtual tool 407. For example, the virtual target 409 may show a point 419 indicating a location where the virtual tool 407 has biopsied or treated the virtual target 409. This enables the clinician to perform a uniform treatment or biopsy of the target 409.
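The distance indicator 418 reduces to the Euclidean distance between the projected tool-tip position and the target position; a minimal sketch with illustrative coordinates:

```python
# Hedged sketch: the tip-to-target readout behind distance indicator 418.
import numpy as np

def distance_indicator(tool_tip_mm, target_center_mm) -> str:
    d = float(np.linalg.norm(np.asarray(target_center_mm, dtype=float)
                             - np.asarray(tool_tip_mm, dtype=float)))
    return f"{d / 10.0:.1f} cm"   # shown in centimeters in the UI

print(distance_indicator([12.0, -40.5, 108.0], [15.0, -38.0, 157.0]))  # ~4.9 cm
```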
The front view 410 also shows the distal portion of the virtual sEWC 405 and the distal portion of the virtual tool 407 extending beyond it. The front view 410 may also display a message 415 prompting the clinician to unlock the bronchoscope adapter so that the clinician may control the sEWC 102 to move the distal portion of the virtual tool 407 toward and into or near the virtual target 409 to perform a biopsy or treatment procedure.
Local registration may be performed via button 403. Local registration helps to eliminate any divergence, such as CT-to-body divergence, as well as any differences in target position caused by shrinkage related to treatment of other targets, by insertion of tools into the airways, or by the position of patient P. Local registration employs the imaging system 124 to acquire intra-operative images of the region proximate the end of the sEWC 102 and the target. The initial step is to acquire a 2D fluoroscopic image of the sEWC 102 and mark the area of the sEWC 102 in the image; this helps to ensure that the imaging system 124 is properly focused on the sEWC 102. Next, the imaging system 124 captures a sequence of fluoroscopic images, for example, from about 25 degrees on one side of the AP position to about 25 degrees on the other side of the AP position, from which the computer system 122 can then generate a fluoroscopic 3D reconstruction. The generation of the fluoroscopic 3D reconstruction is based on the projections of the fluoroscopic image sequence and on the structure of the markers within the emitter pad 120 captured in the image sequence.
After the fluoroscopic 3D reconstruction is generated, the sEWC 102 is marked in two 2D images of the 3D reconstruction. These two images are taken from different portions of the fluoroscopic 3D reconstruction. The fluoroscopic images of the 3D reconstruction may be presented on the user interface in a scrollable format in which the user is able to scroll through the slices continuously as desired. Next, the target needs to be marked in the fluoroscopic 3D reconstruction. A scroll bar may be provided to allow the user to scroll through the fluoroscopic 3D reconstruction until the target is identified. In addition, it may be desirable to mark the target from two different angles. Once completed, the user interface displays the entire fluoroscopic 3D reconstruction, which can be reviewed slice by slice to ensure that the target or lesion remains within the marked area throughout the fluoroscopic 3D reconstruction.
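Marking the target in two views taken from different angles amounts to triangulating a 3D point from two rays. A least-squares sketch follows, under the assumption that each marked view can be reduced to a ray (origin plus unit direction) in a common coordinate frame; the disclosure does not specify this computation.

```python
# Hedged sketch: recovering a 3D target position from marks in two views,
# each reduced to a ray (origin o, unit direction d) in a shared frame.
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Least-squares point closest to both rays."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in ((np.asarray(o1, float), np.asarray(d1, float)),
                 (np.asarray(o2, float), np.asarray(d2, float))):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two marks of the same lesion from sweep angles roughly 25 degrees apart
# (coordinates are made up for illustration):
target_3d = triangulate([0, 0, 0], [0.0, 0.087, 0.996],
                        [100, 0, 0], [-0.342, 0.087, 0.936])
```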
After confirming that the target has been accurately marked, the local registration process ends, and the relative positions of the virtual sEWC 405 and the target in the 3D model and path plan are updated to reflect the actual current relative positions of the distal end of the sEWC 102 and the target within the patient.
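A short sketch may clarify the update step. Assuming the correction near the target can be approximated by a pure translation (the actual system may compute a full rigid transform), the offset between the model-predicted target position and the target position marked in the fluoroscopic 3D reconstruction is applied to the virtual sEWC 405 and the path plan:

```python
# Hedged sketch of the local-registration update: a pure translation is
# assumed for simplicity; a full rigid transform may be used in practice.
import numpy as np

def local_registration_offset(target_in_model, target_in_fluoro_3d):
    """CT-to-body divergence near the target, as a translation vector (mm)."""
    return np.asarray(target_in_fluoro_3d) - np.asarray(target_in_model)

def apply_local_registration(points_in_model, offset):
    """Shift model-space points (virtual sEWC, path plan) by the measured offset."""
    return np.asarray(points_in_model) + offset

offset = local_registration_offset([50.0, 40.0, 30.0], [53.0, 38.5, 31.0])
print("Local registration offset (mm):", offset)  # [ 3.  -1.5  1. ]
```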
In some aspects, the system 100 includes an ablation instrument 101b (e.g., an ablation catheter), and the computer system 122 may provide an ablation planning feature. In the ablation planning feature, when the clinician is ready to initiate the treatment portion of the procedure, the clinician may be presented on the user interface with all pre-operatively generated data relating to placement of the ablation catheter, timing and power settings of the ablation, and duration of any pauses. In addition, the data may be overlaid on an intra-operative image (e.g., a fluoroscopic or CBCT image). These may be additional intra-operative images acquired after local registration. As an example, the location for inserting the ablation catheter or biopsy catheter into the target may be overlaid on the intra-operative image so that the positioning can be verified as accurate prior to insertion. In some cases, the intra-operative image may be a real-time fluoroscopic video, such that at least a portion of the advancement of the ablation catheter toward the target may be observed as the ablation catheter crosses the marked location overlaid from the ablation planning feature.
Once properly placed, and again prior to insertion of the ablation catheter, the ablation zone specified in the ablation planning feature may be overlaid and compared with the tissue presented in the intra-operative image. The location of tissue during surgery is expected to differ from its location in the pre-operative image; indeed, this is the basic premise of local registration. Thus, by overlaying the intended ablation zone on the intra-operative image, the user can edit the ablation zone prior to inserting the ablation catheter into the sEWC 102 and the target.
Further, another set of intra-operative images may be acquired by the imaging system 124 as the ablation catheter is inserted into the sEWC 102. At this point, the ablation catheter may be segmented from the images to confirm whether it is placed at the correct location and depth in the target, and the updated or altered intended ablation zone may be overlaid on these images before ablation begins. This provides yet another opportunity to adjust or alter the ablation zone, margin, placement of the ablation catheter, power, duration, and other factors prior to beginning treatment. Once the ablation zone (as overlaid on the target in the intra-operative image) is deemed acceptable, treatment (e.g., application of microwave energy, RF energy, etc.) may begin.
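The acceptability check of the overlaid ablation zone can be illustrated with a short sketch. A spherical ablation zone is assumed (consistent with the spherical ablations discussed further below), and the target surface is represented as sampled points; both are simplifying assumptions rather than the system's actual zone model:

```python
# Minimal sketch: does the planned spherical ablation zone cover the
# segmented target plus a safety margin? Point sampling of the target
# surface is an illustrative assumption.
import numpy as np

def ablation_zone_covers_target(antenna_tip, radius_mm, target_surface_pts, margin_mm):
    """True if every target surface point plus margin lies inside the zone."""
    distances = np.linalg.norm(target_surface_pts - antenna_tip, axis=1)
    return bool(np.all(distances + margin_mm <= radius_mm))

surface = np.array([[2.0, 1.0, 0.0], [-3.0, 0.5, 1.0], [0.0, -2.5, 2.0]])
print(ablation_zone_covers_target(np.zeros(3), 10.0, surface, 5.0))  # True
```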
In various aspects, critical structures may be overlaid on the intra-operative image. By observing the overlay of these critical structures on an intra-operative image (e.g., a fluoroscopic or CBCT image) acquired by the imaging system 124, the lungs can be monitored during the procedure. Such monitoring may be manual or automatic and enables determination of the level of discrepancy between the locations of these critical structures in the pre-operative image and 3D model and their locations in the intra-operative image. This discrepancy may be the result of patient movement, or of some intra-operative aspect (such as insertion of one or more of the tools 101a-101d through the sEWC 102). Further, the discrepancy may be an indicator of lung tension or of target movement in one or more lung lobes. As a result of a detected discrepancy, an indicator may appear on the user interface directing the user to perform another scan, using the imaging system 124 or the same imaging modality as the pre-operative imaging, to confirm the discrepancy and assess the change in patient anatomy.
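Automatic monitoring of this kind could take a form like the following sketch, which compares centroids of critical structures between the pre-operative and intra-operative images and flags those exceeding a threshold; the 5 mm threshold and the structure names are illustrative assumptions:

```python
# Hedged sketch of automatic discrepancy monitoring between pre-operative
# and intra-operative positions of critical structures.
import numpy as np

def check_structure_discrepancy(preop_centroids, intraop_centroids, threshold_mm=5.0):
    """Return names of critical structures that moved more than threshold_mm."""
    moved = []
    for name, preop in preop_centroids.items():
        shift = np.linalg.norm(np.asarray(intraop_centroids[name]) - np.asarray(preop))
        if shift > threshold_mm:
            moved.append(name)
    return moved

flagged = check_structure_discrepancy(
    {"pulmonary_artery": [10, 10, 10], "rib_4": [40, 5, 0]},
    {"pulmonary_artery": [17, 11, 10], "rib_4": [40.5, 5, 0]})
if flagged:
    print("Suggest a new scan; structures moved:", flagged)
```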
As described herein, a variety of intra-operative imaging modalities may optionally be employed during treatment of a patient. One aspect of the present disclosure is the ability to register the pre-operative CT images with any intra-operative image. In some cases, this allows the data from the planning phase to be overlaid on the fluoroscopic images. In addition, in the case of any intra-operative CT imaging (including CBCT imaging), all path planning and ablation planning may be carried from one such CT image dataset to a second CT image dataset based on registration, eliminating the need for additional planning during the procedure. Registration may be based on the structure of the airways, as well as on harder, non-moving tissues such as the ribs that may be observed in the CT images. Additional local registration as described above may also be employed to eliminate any registration error at or near the target, where appropriate.
Regarding alternative intra-operative imaging using the imaging system 124, a variety of techniques are contemplated in connection with the present disclosure. As described above, the imaging system 124 may be a C-arm fluoroscope, a CBCT imaging system, or a 3D fluoroscopic imaging system. Where the imaging system 124 is a CBCT imaging system, the local registration procedure described above for updating the positions of the sEWC 102 and the target in the patient may also be employed. In one aspect of the application, low-dose CBCT scanning may be utilized because local registration does not require a high-resolution scan. The low-dose scan reduces the amount of radiation absorbed by patient P during the procedure. It is always desirable to reduce the amount of radiation to which patient P and the doctors and nurses are exposed during surgery. In some cases, where there is integration between the imaging system 124 and the system 100, the imaging system 124 may receive input directing it to a particular location for imaging.
Additional aspects of intra-operative imaging relate to further methods of reducing the amount of radiation to patients and clinicians. CBCT imaging systems have a frame rate at which images are captured. By capturing images at a relatively high frequency and with a small angular displacement between images, CBCT imaging systems can capture very high-resolution image datasets. However, such high resolution is not required in all cases. One aspect of the present disclosure relates to directing the imaging system 124 to skip frames. Depending on the setting, this may mean skipping every other frame, every third frame, every fourth frame, and so on. By doing so, the overall exposure to radiation may be reduced, considering the hundreds or even thousands of frames that make up a CBCT image dataset.
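Since dose scales roughly with the number of exposures, the effect of frame skipping is easy to quantify, as the short sketch below illustrates; the frame count and the linear dose model are illustrative assumptions:

```python
# Minimal sketch of the frame-skipping idea: keeping every Nth frame of a
# CBCT sweep reduces the number of exposures roughly proportionally.
def frames_kept(total_frames, keep_every):
    """Number of frames acquired when only every `keep_every`-th frame is kept."""
    return (total_frames + keep_every - 1) // keep_every

total = 1200                      # frames in a hypothetical full-resolution sweep
for keep_every in (1, 2, 3, 4):   # 1 = no skipping
    kept = frames_kept(total, keep_every)
    print(f"keep every {keep_every}: {kept} frames, "
          f"~{100 * kept / total:.0f}% of full-sweep exposure")
```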
Alternatively, particularly where a high-resolution image is desired, for example when the sEWC 102 or one of the tools 101a-101d is to be placed within the target, the imaging system 124 may be focused only on the tissue of interest. The beam of the imaging system 124 (e.g., a CBCT system) may be collimated to reduce the imaged area to only the tissue or region of interest rather than the entire lung of the patient. In this way, both the total amount of radiation and the area to which the radiation is directed are reduced. At the same time, the images produced by such focused imaging have very high resolution, allowing detailed analysis of, for example, the tissue surrounding the target or the correct placement of the sEWC 102 and the tools 101a-101d.
The pre-operative CT images acquired for surgical planning are typically taken while patient P is in a breath-hold state. This ensures that the lungs are fully inflated, providing a clearer image. During the procedure, however, patient P typically maintains tidal-volume breathing. This minimizes stress on patient P, ensuring adequate oxygen saturation while minimizing lung and target movement due to respiration. In connection with biopsies and other tissue treatments, rather than continuing under tidal-volume breathing during tissue treatment or even during insertion of a tool into the target, imaging may be triggered at a specified positive end-expiratory pressure (PEEP) (i.e., the positive pressure that remains in the airway at the end of the respiratory cycle). Imaging may be performed upon inhalation or exhalation at a predetermined PEEP, or during a breath-hold at a predetermined PEEP. The result is that accurate imaging can be obtained with limited movement of the target and of the ablation catheter or biopsy catheter during insertion. The imaging system 124 may be configured to automatically initiate imaging when the lung pressure approaches the specified PEEP and to terminate imaging after a certain threshold lung pressure is reached. In this way, the imaging system 124 (e.g., a CBCT system) may be cycled on and off at specified times during an ablation procedure when the target is expected to be in the same location. In this way, a consistent comparison of the target ablation volume and the planned ablation volume can be made.
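A pressure-gated trigger of this kind might be sketched as follows, with imaging starting when the airway pressure is near the specified PEEP and stopping once the pressure leaves a threshold band; the units, tolerances, and sample values are illustrative assumptions:

```python
# Hedged sketch of PEEP-gated imaging (all thresholds illustrative).
def update_imaging_gate(pressure_cmH2O, target_peep, imaging_on,
                        start_tol=0.5, stop_tol=1.5):
    """Return the new imaging state given the current airway pressure."""
    if not imaging_on and abs(pressure_cmH2O - target_peep) <= start_tol:
        return True    # pressure near the specified PEEP: start imaging
    if imaging_on and abs(pressure_cmH2O - target_peep) > stop_tol:
        return False   # pressure past the threshold band: stop imaging
    return imaging_on

state = False
for p in (3.0, 4.7, 5.1, 5.3, 7.2):   # pressure samples over a breath cycle
    state = update_imaging_gate(p, target_peep=5.0, imaging_on=state)
    print(f"pressure {p:.1f} cmH2O -> imaging {'ON' if state else 'off'}")
```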
As will be appreciated, integration of the imaging system 124 with the controls of the pulmonary navigation user interface 400 can greatly enhance the functionality of the overall system. For example, one or more buttons may be generated by an application stored in the memory of the computer system 122. These buttons transmit signals to the imaging system 124, for example, to initiate imaging of the patient. Additionally, the buttons may transmit signals to the imaging system 124 that control its position relative to the patient. In one embodiment, the navigation application knows the position and orientation of the distal portion of the sEWC 102, for example, via interaction of the sensors 104b, 126 and the transmitter pad 120. The position information may be used to generate a signal that is transmitted to the imaging system 124 to orient the imaging system 124 such that the generated image is centered on the sEWC 102. Centering the image on the sEWC 102 ensures that the target is visible in the image, given that the sEWC 102 has been navigated to the vicinity of the target.
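The centering step might be sketched as follows, with the EM-tracked sEWC tip (already transformed into the imaging system's coordinate frame) compared against the current isocenter to produce a move command; the command format is hypothetical, since real imaging systems expose vendor-specific interfaces:

```python
# Minimal sketch of centering the imaging field on the tracked sEWC tip.
# The dict-based move command is a hypothetical illustration.
import numpy as np

def centering_command(em_tip_in_imaging_frame, current_isocenter):
    """Translation (mm) that brings the isocenter onto the sEWC tip."""
    delta = np.asarray(em_tip_in_imaging_frame) - np.asarray(current_isocenter)
    return {"dx_mm": float(delta[0]), "dy_mm": float(delta[1]), "dz_mm": float(delta[2])}

cmd = centering_command([112.0, 40.0, -8.0], [100.0, 40.0, 0.0])
print("Move imaging system by:", cmd)  # {'dx_mm': 12.0, 'dy_mm': 0.0, 'dz_mm': -8.0}
```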
As an example, after determining the location of the sEWC 102, the navigation application presents one or more buttons on the user interface to drive the imaging system 124 to the optimal location to image the tip of the sEWC 102 and the target. Once so positioned, the imaging system 124 performs a scan and then transmits the acquired images to the navigation application, which displays the acquired images or a 3D model generated from them and updates the navigation user interface 400.
Although described herein as being performed using one or more buttons displayed on a user interface, the present disclosure is not so limited. The process may be automated, with signals generated when certain phases are reached during navigation. For example, upon navigating the sEWC 102 to the vicinity of the target, a signal may be transmitted to the imaging system 124 causing it to automatically stop so that EM tracking of the tip of the sEWC 102 may begin and the position of the tip or distal portion of one of the tools 101a-101d may be calculated.
As described herein, these methods and systems relate in various aspects to placement of a tool, such as the ablation tool 101b, in a lesion or tumor of a patient. In particular, an ablation instrument is navigated through the luminal network of the airways and placed within the tumor or lesion to ablate, or kill, the cells of the lesion or tumor and a margin around it, thereby preventing disease progression at that location. The Emprint™ ablation platform offered by Medtronic is one system for performing ablations, and it produces spherical ablation zones. Other ablation modalities, such as radiofrequency (RF), ethanol, and cryoablation, may be used with the systems and methods described herein, with navigation proceeding substantially as described above.
Referring now to fig. 5, there is shown a schematic diagram of a system 500 configured for use with the methods of the present disclosure, including the method of fig. 2. The system 500 may include a workstation 501 and an optional imaging system 515, for example, a fluoroscopic imaging system and/or a CT imaging system for capturing pre-operative 3D images. In some aspects, the workstation 501 may be coupled to the imaging system 515 directly or indirectly, such as through wireless communication. The workstation 501 may include a memory 502, a processor 504, a display 506, and an input device 510. The processor 504 may include one or more hardware processors. The workstation 501 may optionally include an output module 512 and a network interface 508. The memory 502 may store an application 518 and image data 514. The application 518 may include instructions executable by the processor 504 for performing the methods of the present disclosure, including the method of fig. 2.
The application 518 may further include a user interface 516. The image data 514 may include pre-operative CT image data, fluoroscopic image data, or fluoroscopic 3D reconstruction data. The processor 504 may be coupled with the memory 502, the display 506, the input device 510, the output module 512, the network interface 508, and the imaging system 515. The workstation 501 may be a stationary computer system, such as a personal computer, or a portable computer system, such as a tablet computer. The workstation 501 may be implemented using a plurality of computers.
The memory 502 may include any non-transitory computer-readable storage medium for storing data and/or software that includes instructions executable by the processor 504. These instructions control the operation of the workstation 501, process data from one or more EM sensors 520 disposed in or on the sEWC (e.g., at the distal portion of the sEWC) to track the position of the sEWC and to calculate or project the position of the distal portion of a medical tool fixed in place within the sEWC, and in some aspects may also control the operation of the imaging system 515. The imaging system 515 may be used to capture a sequence of pre-operative CT images of a portion of a patient's body (e.g., a lung) as the portion moves (e.g., as the lung moves during a respiratory cycle). Optionally, the imaging system 515 may include a fluoroscopic imaging system that captures a sequence of fluoroscopic images, generates a fluoroscopic 3D reconstruction based on the sequence of fluoroscopic images, and/or captures real-time 2D fluoroscopic views to confirm placement of the sEWC and/or the medical tool. In one aspect, the memory 502 may include one or more storage devices, such as solid-state storage devices, e.g., flash memory chips. Alternatively, or in addition to the one or more solid-state storage devices, the memory 502 may include one or more mass storage devices connected to the processor 504 through a mass storage controller (not shown) and a communication bus (not shown).
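The calculate-or-project step can be sketched as follows. Assuming the EM sensor 520 reports a position and an orientation quaternion, and that the tool tip protrudes a known, fixed length beyond the sensor along the catheter's local +z axis (the axis convention, quaternion order (w, x, y, z), and names are assumptions), the tool tip is projected without a second sensor:

```python
# Hedged sketch of projecting the tool tip from the EM sensor reading.
import numpy as np

def rotate_by_quaternion(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2.0 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2.0 * w * np.cross(u, v)

def project_tool_tip(sensor_pos, sensor_quat, protrusion_mm):
    """Project the tool tip along the catheter axis (local +z by assumption)."""
    axis = rotate_by_quaternion(np.asarray(sensor_quat), np.array([0.0, 0.0, 1.0]))
    return np.asarray(sensor_pos) + protrusion_mm * axis

tip = project_tool_tip([25.0, 30.0, 55.0],
                       [1.0, 0.0, 0.0, 0.0],  # identity rotation: axis stays +z
                       8.0)
print("Projected tool tip (mm):", tip)  # [25. 30. 63.]
```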
Although the description of computer-readable media contained herein refers to a solid state storage device, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 504. That is, computer-readable storage media may include non-transitory, volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, a computer-readable storage medium may include RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, blu-ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by workstation 501.
The application 518, when executed by the processor 504, may cause the display 506 to present the user interface 516. The user interface 516 may be configured to present to a user a single screen that includes a three-dimensional (3D) view of a 3D model of the target from the perspective of the tip of the medical tool, a real-time two-dimensional (2D) fluoroscopic view showing the medical tool, and a target marker corresponding to the 3D model of the target overlaid on the real-time 2D fluoroscopic view. The user interface 516 may be further configured to display the target marker in different colors depending on whether the tip of the medical tool is aligned with the target in three dimensions.
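The color logic might be as simple as the following sketch, with alignment judged by the 3D distance between the projected tool tip and the target; the 3 mm tolerance and the color choices are illustrative assumptions:

```python
# Minimal sketch of the alignment-dependent target marker color.
import numpy as np

def target_marker_color(tool_tip, target_center, tolerance_mm=3.0):
    """Green when the tip is within tolerance of the target, orange otherwise."""
    aligned = np.linalg.norm(np.asarray(target_center) - np.asarray(tool_tip)) <= tolerance_mm
    return "green" if aligned else "orange"

print(target_marker_color([0.0, 0.0, 0.0], [1.0, 1.0, 2.0]))  # green (~2.4 mm away)
```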
The network interface 508 may be configured to connect to a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), a wireless mobile network, a Bluetooth network, and/or the Internet, which may be comprised of wired and/or wireless networks. The network interface 508 may be used to connect between the workstation 501 and the imaging system 515. The network interface 508 may also be used to receive image data 514. Input device 510 may be any device that a user may use to interact with workstation 501, such as a mouse, keyboard, foot pedal, touch screen, and/or voice interface. The output module 512 may include any connection port or bus, such as a parallel port, a serial port, a Universal Serial Bus (USB), or any other similar connection port known to those skilled in the art. From the foregoing and with reference to the various figures, it will be appreciated by those skilled in the art that certain modifications may be made to the disclosure without departing from the scope of the disclosure.
It should be understood that the various aspects disclosed herein may be combined in combinations other than those specifically presented in the description and drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different order, added, combined, or omitted entirely (e.g., not all of the described acts or events may be necessary to carry out the techniques). Additionally, although certain aspects of the present disclosure are described as being performed by a single module or unit for clarity, it should be understood that the techniques of the present disclosure may be performed by a combination of units or modules associated with, for example, a medical tool.
In one or more examples, the described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on a computer-readable medium as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media corresponding to tangible media, such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other physical structure suitable for implementation of the described techniques. The techniques may also be fully implemented in one or more circuits or logic elements.
Claims (20)
1. A system for navigating to a target via a luminal network of a patient, the system comprising:
an Extended Working Channel (EWC) including a position sensor disposed at a distal portion of the EWC;
at least one processor; and
a memory coupled to the at least one processor and having instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to:
receiving a pre-operative Computed Tomography (CT) image;
generating a three-dimensional (3D) model based on the CT image;
displaying the 3D model in a user interface on a display operatively coupled to the at least one processor;
receiving an indication of a position of a target in the CT image;
displaying the position of the target in the 3D model;
generating a path plan to the target for navigation of the EWC;
receiving position information from the position sensor as the EWC is navigated through the luminal network of the patient;
registering the 3D model with the luminal network of the patient based on the position information;
displaying a position of the position sensor within the 3D model, the position substantially corresponding to a position of the position sensor within the luminal network of the patient;
displaying navigation of the EWC following the path plan to a location near the target;
receiving tool information for a tool disposed within the EWC, fixed relative to the EWC, and having a distal portion extending distally beyond the distal portion of the EWC;
determining a position of a portion of the tool based on the position information from the position sensor;
displaying a virtual tool in the 3D model based on the position information and the tool information; and
displaying advancement of the virtual tool toward the target in the 3D model based on the position information and the tool information.
2. The system of claim 1, wherein the tool comprises a positionable guidance tool, an ablation tool, or a biopsy tool comprising a second position sensor.
3. The system of claim 2, wherein the EWC comprises an EWC handle, and
wherein the tool comprises:
a securing member configured to mate with the EWC handle when the tool is disposed within the EWC and the distal portion of the tool extends beyond the distal portion of the EWC, and configured to secure the tool in place relative to the EWC; and
a handle for operating the tool.
4. The system of claim 1, wherein the distal portion of the EWC is curved,
wherein the instructions, when executed by the at least one processor, further cause the at least one processor to receive position information having six degrees of freedom.
5. The system of claim 1, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to:
displaying a message prompting a user to lock the bronchoscope adapter;
receiving an intra-operative image;
updating the relative positions of the target and the EWC in the 3D model based on the intra-operative image; and
displaying a message prompting the user to unlock the bronchoscope adapter.
6. The system of claim 5, wherein the intra-operative image is a C-arm fluoroscopic image, a 3D fluoroscopic image, or a cone beam CT image.
7. The system of claim 6, wherein the tool information is a tool type, a tool characteristic, or a tool size, and
wherein the instructions, when executed by the at least one processor, further cause the at least one processor to:
determining a position of the distal portion of the tool by projecting the position information based on the tool information; and
displaying a portion of the virtual tool in the 3D model based on the position of the distal portion of the tool.
8. The system of claim 6, further comprising:
a tool memory coupled to the tool and configured to store the tool information; and
a tool memory reader configured to read the tool information from the tool memory,
wherein the at least one processor is in operative communication with the tool memory reader, and
wherein the instructions, when executed by the at least one processor, further cause the at least one processor to receive the tool information from the tool memory reader.
9. A method for navigating to a target via a luminal network, the method comprising:
receiving a Computed Tomography (CT) image;
generating a three-dimensional (3D) model based on the CT image;
displaying the 3D model in a user interface on a display;
receiving an indication of a position of a target in the CT image;
displaying the position of the target in the 3D model;
generating a path plan for navigation of a catheter to the target;
receiving position information from a position sensor disposed at a distal portion of the catheter;
registering the 3D model with a luminal network of the patient based on the position information;
displaying the position of the position sensor within the 3D model;
displaying navigation of the catheter following the path plan to a location near the target;
receiving tool information for a tool disposed within the catheter, fixed in position relative to the catheter, and having a distal portion extending distally beyond the distal portion of the catheter;
determining a position of a portion of the tool based on the position information from the position sensor;
displaying at least a portion of the tool in the 3D model based on the position information and the tool information; and
displaying advancement of the tool toward the target in the 3D model based on the position information and the tool information.
10. The method of claim 9, further comprising displaying the distal portion of the tool while treating, ablating, sampling, or biopsying the target.
11. The method of claim 9, wherein determining the position of a portion of the tool comprises determining the position of a distal portion of the tool by projecting position information from the position sensor to the distal portion of the tool.
12. The method of claim 9, further comprising:
receiving an intra-operative image of the catheter and the target; and
updating a position of the catheter in the 3D model relative to the target based on the intra-operative image.
13. The method of claim 12, wherein receiving the intra-operative image comprises receiving a 2D fluoroscopic image, a 3D fluoroscopic image, or a cone beam CT image.
14. The method of claim 12, further comprising displaying at least a portion of the path plan, at least a portion of the tool, and the target on the intra-operative image.
15. The method of claim 12, wherein the intra-operative image is captured at a reduced radiation dose.
16. A system, comprising:
a guide catheter configured for insertion into a luminal network of a patient;
a sensor coupled to the guide catheter for sensing a position of a distal portion of the guide catheter within a luminal network of the patient;
a tool configured to pass through the guide catheter such that a distal portion of the tool extends distally beyond the distal portion of the guide catheter, the tool being configured to be fixed relative to the guide catheter during navigation of the guide catheter;
at least one processor;
a display operatively coupled to the at least one processor; and
a memory coupled to the at least one processor and having instructions stored thereon that, when executed by the at least one processor, cause the at least one processor to:
displaying a virtual target in a 3D model on the display;
receiving position information from the sensor in the luminal network of the patient;
displaying navigation of a virtual guide catheter to a location near the virtual target in the 3D model based on the position information;
determining tool information from a tool that passes through the guide catheter and is fixed in position relative to the guide catheter;
determining a position of the distal portion of the tool;
displaying a distal portion of a virtual tool in the 3D model based on the position information and the tool information; and
displaying operation of the virtual tool performing a procedure on the virtual target in the 3D model based on updated position information and the tool information.
17. The system of claim 16, wherein the guide catheter is a smart extended working channel, and
wherein the tool comprises at least two of a positionable guide, forceps, a biopsy needle, or an ablation antenna.
18. The system of claim 16, further comprising a bronchoscope adapter configured to lock the guide catheter in place,
Wherein the instructions, when executed by the at least one processor, further cause the at least one processor to:
receiving an intra-operative image;
updating a position of the virtual target in the 3D model relative to the virtual guide catheter based on the intra-operative image; and
displaying a message prompting a user to unlock the bronchoscope adapter prior to performing a procedure with one of the tools.
19. The system of claim 16, wherein the tool comprises a handle, and a length from a distal portion of the handle to the distal portion of the tool is the same for each tool.
20. The system of claim 16, further comprising an electromagnetic generator configured to generate an electromagnetic field,
wherein the sensor is an EM sensor configured to sense the electromagnetic field and to output an electromagnetic field signal indicative of a location of the EM sensor.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US63/453,105 | 2023-03-18 | | |
| US18/430,192 (US20240307126A1) | 2023-03-18 | 2024-02-01 | Systems and methods for active tracking of electromagnetic navigation bronchoscopy tools with single guide sheaths |
| US18/430,192 | 2024-02-01 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118662231A | 2024-09-20 |
Family
ID=92731066
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410297047.2A (pending; published as CN118662231A) | Active tracking system and method for electromagnetic navigation bronchoscopy tool with single guide sheath | | 2024-03-15 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN118662231A |