US20220079686A1 - 3d navigation system and methods - Google Patents
- Publication number
- US20220079686A1 (application US17/456,230; US202117456230A)
- Authority
- US
- United States
- Prior art keywords
- providing
- feature
- focus
- imaging system
- navigation system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/30—Surgical robots
- A61B34/76—Manipulators having means for providing feel, e.g. force or tactile feedback
- A61B5/7405—Notification to user or communication with user or patient using sound
- A61B5/742—Notification to user or communication with user or patient using visual displays
- A61B5/7455—Notification to user or communication with user or patient by tactile indication, e.g. vibration or electrical stimulation
- A61B8/0841—Detecting or locating foreign bodies or organic structures for locating instruments
- A61B90/20—Surgical microscopes characterised by non-optical aspects
- A61B90/30—Devices for illuminating a surgical field
- A61B90/361—Image-producing devices, e.g. surgical cameras
- A61B90/37—Surgical systems with images on a monitor during operation
- G02B21/0012—Surgical microscopes
- G02B21/22—Stereoscopic microscope arrangements
- G02B21/36—Microscopes arranged for photographic, projection, digital imaging or video purposes
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- A61B17/3421—Cannulas
- A61B2017/00973—Surgical instruments, devices or methods, pedal-operated
- A61B2034/107—Visualisation of planned trajectories or target regions
- A61B2034/2051—Electromagnetic tracking systems
- A61B2034/2055—Optical tracking systems
- A61B2034/2057—Details of tracking cameras
- A61B2034/2063—Acoustic tracking systems, e.g. using ultrasound
- A61B2034/2065—Tracking using image or pattern recognition
- A61B2034/2068—Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
- A61B2034/2074—Interface software
- A61B2090/309—Devices for illuminating a surgical field using white LEDs
- A61B2090/3614—Image-producing devices, e.g. surgical cameras, using optical fibre
- A61B2090/363—Use of fiducial points
- A61B2090/371—Surgical systems with images on a monitor with simultaneous use of two cameras
- A61B2090/378—Surgical systems with images on a monitor using ultrasound
- A61B2090/508—Supports for surgical instruments, e.g. articulated arms, with releasable brake mechanisms
Definitions
- the present disclosure technically relates to optical imaging systems. More particularly, the present disclosure technically relates to optical imaging systems for use in image guided medical procedures. Even more particularly, the present disclosure technically relates to optical imaging systems for use in image guided medical procedures involving a pointer tool.
- related art image capture cameras and light sources are components that are separate from the related art surgical microscope.
- the specific camera and light source used with a given conventional surgical microscope are different for different medical centers, and even for different surgical procedures within the same medical center. This circumstance results in inconsistency in the images obtained, making comparison of images between different medical centers difficult or impossible.
- various related art navigation devices are used, such as a white probing stick for visually-challenged persons that provides feedback in the form of a sound via echolocation, two ultrasonic stereoscopic scanners that translate distance into an audio tone, and a motor vehicle backup camera system that produces an audible sound or an indicator light for collision warning.
- these related art devices do not address challenges in the area of surgical navigation.
- the related art navigation systems have experienced many challenges, including difficulty in accurately providing a surgeon with sufficient feedback relating to target depth when performing navigated surgery using only stereo imaging, as well as surgeon eye strain. Therefore, a need exists for a navigation system that improves both planar and depth perception in relation to a surgical interrogation volume to overcome many of the related art challenges.
- the subject matter of the present disclosure involves systems and methods which consider 3D perception to be an operator's ability to generate the relative positional sense (RPS) of objects located within a given interrogation volume.
- RPS relative positional sense
- the perception of the relative position of two objects is also achieved and enhanced through the use of proprioception, shadowing, sound, as well as other factors, whereby all such factors synergistically interact, in accordance with an embodiment of the present disclosure.
- the 3D navigation systems and methods of the present disclosure involve features for acquiring data from vision, touch, and sound, e.g., via a tracked tool; translating the data into a usable form for a surgeon; and presenting information, based on the translated data, to the surgeon, wherein the information comprises 3D information related to at least two of the three senses captured, e.g., vision, touch, and sound, and wherein the information is applicable to a particular context of use, e.g., a surgical context.
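The acquire/translate/present pipeline described above can be sketched in code. This is a minimal illustration, not the disclosed implementation: the class names, the 10 mm depth threshold, and the mapping onto visual and audio channels are all assumptions chosen for the example.

```python
# Hypothetical sketch of the acquire -> translate -> present pipeline.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ToolSample:
    position_mm: tuple  # (x, y, z) of the tracked tool tip
    target_mm: tuple    # (x, y, z) of the planned target

def translate(sample: ToolSample) -> dict:
    """Convert raw tracking data into planar and depth components."""
    dx = sample.target_mm[0] - sample.position_mm[0]
    dy = sample.target_mm[1] - sample.position_mm[1]
    dz = sample.target_mm[2] - sample.position_mm[2]
    planar = (dx**2 + dy**2) ** 0.5  # in-plane offset from target
    return {"planar_mm": planar, "depth_mm": dz}

def present(info: dict) -> str:
    """Map translated data onto two sensory channels (visual + audio)."""
    visual = f"off-axis {info['planar_mm']:.1f} mm"
    audio = "low tone" if info["depth_mm"] > 10 else "high tone"
    return f"{visual}; {audio}"

sample = ToolSample(position_mm=(0.0, 0.0, 0.0), target_mm=(3.0, 4.0, 12.0))
print(present(translate(sample)))  # prints "off-axis 5.0 mm; low tone"
```

In use, `translate` would run on each tracking update, and `present` would feed a display overlay and an audio synthesizer rather than returning a string.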
- the present disclosure provides an optical imaging system for imaging a target during a medical procedure.
- the system includes: an optical assembly including movable zoom optics and movable focus optics; a zoom actuator for positioning the zoom optics; a focus actuator for positioning the focus optics; a controller for controlling the zoom actuator and the focus actuator in response to received control input; and a camera for capturing an image of the target from the optical assembly, wherein the zoom optics and the focus optics are independently movable by the controller using the zoom actuator and the focus actuator, respectively, and wherein the optical imaging system is configured to operate at a minimum working distance (WD) from the target, the WD being defined between an aperture of the optical assembly and the target.
- WD minimum working distance
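The independent movability of the zoom and focus optics described above can be illustrated with a small controller sketch. The actuator interface, travel limits, and dictionary-style control input are assumptions for illustration only; the disclosure does not specify them.

```python
# Illustrative controller for independently movable zoom and focus optics.
# Actuator travel ranges and the command format are assumptions.
class Actuator:
    def __init__(self, min_pos: float, max_pos: float):
        self.min_pos, self.max_pos = min_pos, max_pos
        self.position = min_pos

    def move_to(self, pos: float) -> float:
        # Clamp commands to the stage's mechanical travel.
        self.position = max(self.min_pos, min(self.max_pos, pos))
        return self.position

class OpticsController:
    def __init__(self):
        self.zoom = Actuator(0.0, 50.0)   # zoom-optics travel (mm, assumed)
        self.focus = Actuator(0.0, 25.0)  # focus-optics travel (mm, assumed)

    def handle(self, control_input: dict) -> dict:
        """Apply zoom and focus commands independently of each other."""
        if "zoom_mm" in control_input:
            self.zoom.move_to(control_input["zoom_mm"])
        if "focus_mm" in control_input:
            self.focus.move_to(control_input["focus_mm"])
        return {"zoom_mm": self.zoom.position, "focus_mm": self.focus.position}

ctrl = OpticsController()
state = ctrl.handle({"zoom_mm": 12.5, "focus_mm": 40.0})
print(state)  # focus command clamps to the 25.0 mm travel limit
```

The point of the sketch is that each axis is commanded through its own actuator, so a zoom move never disturbs the focus position and vice versa.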
- the present disclosure provides a processor for controlling the optical imaging system disclosed herein.
- the processor is configured to: provide a user interface to receive control input, via an input device coupled to the processor, for controlling the zoom actuator and the focus actuator; transmit control instructions to the controller of the optical imaging system to adjust zoom and focus in accordance with the control input; and receive image data from the camera for outputting to an output device coupled to the processor.
- the present disclosure provides a system for optical imaging during a medical procedure.
- the system comprises: the optical imaging system disclosed herein; a positioning system for positioning the optical imaging system; and a navigation system for tracking each of the optical imaging system and the positioning system relative to the target.
- the present disclosure provides a method of autofocusing using an optical imaging system during a medical procedure, the optical imaging system comprising motorized focus optics and a controller for positioning the focus optics.
- the method includes: determining a WD between an imaging target and an aperture of the optical imaging system; determining a desired position of the focus optics based on the WD; and positioning the focus optics at the desired position.
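The three autofocus steps above (determine WD, derive a focus position, move the optics) can be sketched as follows. The 150 mm minimum WD and the linear WD-to-position calibration are hypothetical stand-ins, since the disclosure does not specify either.

```python
# Sketch of the autofocus loop: measure the working distance (WD), map
# it to a focus-optics position, and command the actuator.
MIN_WD_MM = 150.0  # assumed minimum working distance

def desired_focus_position(wd_mm: float) -> float:
    """Hypothetical linear calibration from WD to focus-stage position."""
    if wd_mm < MIN_WD_MM:
        raise ValueError("target closer than minimum working distance")
    return (wd_mm - MIN_WD_MM) / 25.0  # assumed 25 mm of WD per mm of travel

def autofocus(measure_wd, move_focus) -> float:
    wd = measure_wd()                    # e.g. from the navigation system's
                                         # tracked aperture and target positions
    pos = desired_focus_position(wd)
    move_focus(pos)                      # position the focus optics
    return pos

moves = []
pos = autofocus(lambda: 300.0, moves.append)
print(pos, moves)  # prints "6.0 [6.0]"
```

In a real system, `measure_wd` would query the tracking data and `move_focus` would drive the focus actuator; a production calibration would likely be a measured lookup table rather than a straight line.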
- a method of fabricating a 3D navigation system for enhancing feedback during a medical procedure comprises: providing an optical imaging system, providing the optical imaging system comprising: providing an optical assembly comprising providing movable zoom optics and providing movable focus optics; providing a zoom actuator for positioning the zoom optics; providing a focus actuator for positioning the focus optics; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input; providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception, providing the proprioception feature comprising providing a communication feature configured to provide 3D information, the 3D information comprising real-time depth information in relation to real-time planar information in relation to an interrogation volume, providing the zoom optics and providing the focus optics comprising providing the zoom optics and the focus optics as independently movable by the controller using the zoom actuator and the focus actuator, respectively.
- a method enhancing feedback during a medical procedure by way of a 3D navigation system comprises: providing the 3D navigation system, providing the 3D navigation system comprising: providing an optical imaging system, providing the optical imaging system comprising: providing an optical assembly comprising providing movable zoom optics and providing movable focus optics; providing a zoom actuator for positioning the zoom optics; providing a focus actuator for positioning the focus optics; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input;
- FIG. 1 is a diagram illustrating a perspective view of an access port inserted into a human brain, for providing access to internal brain tissue during an example medical procedure, in accordance with an embodiment of the present disclosure.
- FIG. 2A is a diagram illustrating a perspective view of an example navigation system to support image guided surgery, in accordance with an embodiment of the present disclosure.
- FIG. 2B is a diagram illustrating a front view of system components of an example navigation system, in accordance with an embodiment of the present disclosure.
- FIG. 3 is a block diagram illustrating an example control and processing system usable with the example navigation systems, as shown in FIGS. 2A and 2B , in accordance with an embodiment of the present disclosure.
- FIG. 4A is a flow diagram illustrating an example method involving a surgical procedure implementable using the example navigation systems, as shown in FIGS. 2A and 2B , in accordance with an embodiment of the present disclosure.
- FIG. 4B is a flow diagram illustrating an example method of registering a patient for a surgical procedure, as shown in FIG. 4A , in accordance with an embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating a perspective view of an example optical imaging system being used during a medical procedure, in accordance with an embodiment of the present disclosure.
- FIG. 6 is a block diagram illustrating an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 7 is a diagram illustrating a perspective view of an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 8 is a diagram illustrating an alternate perspective view of the example optical imaging system, as shown in FIG. 7 , in accordance with an embodiment of the present disclosure.
- FIG. 9 is a flow diagram illustrating an example method of autofocusing using an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 10 is a flow diagram illustrating an example method of autofocusing relative to a medical instrument, using an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 11 is a set of diagrams illustrating perspective views of an optical imaging system using a method of autofocusing relative to a medical instrument, in accordance with an embodiment of the present disclosure.
- FIG. 12A is a diagram illustrating a perspective view of a 3D navigation system, in operation, in accordance with an embodiment of the present disclosure.
- FIG. 12B is a diagram illustrating a perspective view of a 3D navigation system, in operation, as shown in FIG. 12A , in accordance with an embodiment of the present disclosure.
- FIG. 12C is a diagram illustrating a perspective view of a 3D navigation system, in operation, as shown in FIG. 12B , in accordance with an embodiment of the present disclosure.
- FIG. 13 is a set of diagrams illustrating perspective views of an optical imaging system, using a 3D navigation system, in accordance with an alternative embodiment of the present disclosure.
- FIG. 14 is a flow diagram illustrating a method of fabricating a 3D navigation system, in accordance with an embodiment of the present disclosure.
- FIG. 15 is a flow diagram illustrating a method of enhancing surgical navigation by way of a 3D navigation system, in accordance with an embodiment of the present disclosure.
- the systems and methods described herein are useful in the field of neurosurgery, including oncological care, neurodegenerative disease, stroke, brain trauma, and orthopedic surgery.
- the subject matter of the present disclosure is applicable to other conditions or fields of medicine. While the present disclosure describes examples in the context of neurosurgery, the subject matter of the present disclosure is applicable to other surgical procedures that may use intraoperative optical imaging.
- the terms, “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in the specification and claims, the terms, “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
- “exemplary” or “example” means “serving as an example, instance, or illustration,” and should not be construed as preferred or advantageous over other configurations disclosed herein.
- the terms “about,” “approximately,” and “substantially” are meant to cover variations that may exist in the upper and lower limits of the ranges of values, such as variations in properties, parameters, and dimensions. In one non-limiting example, the terms “about,” “approximately,” and “substantially” are understood to denote plus or minus 10 percent or less.
- an access port refers to a cannula, conduit, sheath, port, tube, or other structure that is insertable into a subject, in order to provide access to internal tissue, organs, or other biological substances.
- an access port may directly expose internal tissue, for example, via an opening or aperture at a distal end thereof, and/or via an opening or aperture at an intermediate location along a length thereof.
- an access port may provide indirect access, via one or more surfaces that are transparent, or partially transparent, to one or more forms of energy or radiation, such as, but not limited to, electromagnetic waves and acoustic waves.
- Intraoperative refers to an action, process, method, event or step that occurs or is carried out during at least a portion of a medical procedure. Intraoperative, as defined herein, is not limited to surgical procedures, and may refer to other types of medical procedures, such as diagnostic and therapeutic procedures.
- Some embodiments of the present disclosure relate to minimally invasive medical procedures that are performed via an access port, whereby surgery, diagnostic imaging, therapy, or other medical procedures, e.g. minimally invasive medical procedures, are performed based on access to internal tissue through the access port.
- a surgeon or robotic surgical system may perform a surgical procedure involving tumor resection in which the residual tumor remaining after resection is minimized, while also minimizing the trauma to the intact white and grey matter of the brain.
- trauma may occur, for example, due to contact with the access port, stress to the brain matter, unintentional impact with surgical devices, and/or accidental resection of healthy tissue.
- a key to minimizing trauma is ensuring that the surgeon performing the procedure has the best possible view of the surgical site of interest without having to spend excessive amounts of time and concentration repositioning tools, scopes and/or cameras during the medical procedure.
- the systems and methods consider the impact of the differences in generating feedback with 3D perception using binocular vision in relation to using proprioception.
- embodiments of the present disclosure consider that vision facilitates locating peripheral targets more precisely and that proprioception facilitates greater precision for locating targets in the depth dimension.
- the systems and methods of the present disclosure involve features which take into account that vision and proprioception have differential effects on the precision of target representation.
- where vision contributes to the target representation, localization is more precise along the lateral dimension, e.g., for locating peripheral targets.
- where proprioception contributes to the target representation, localization is more precise in depth, e.g., for locating deep targets in the tissue.
- embodiments of the present disclosure consider several techniques for optimizing 3-D perception and, specifically, relative positional sense, at a high magnification.
- Such techniques include, but are not limited to, (a) implementing focused visual targets, e.g., maintaining the focal plane/point in conjunction with using visual obscuration throughout an interrogation volume and using a focused target in the depth dimension; (b) implementing serial focus adjustments, e.g., performing dynamic adjustment of the focal distance to create multiple focal points across a range of an interrogation volume; and (c) implementing an immersive contextual volume of view, e.g., generating a volume of view (VoV), wherein all of an anatomy is in simultaneous focus, thereby providing continuous contextual information throughout an interrogation volume.
- the technique (a) is implementable with a conventional stereoscopic binocular microscope (CS-m), wherein large portions of the interrogation volume are obscured, and wherein a given target is maintained in constant focus.
- CS-m stereoscopic binocular microscope
- embodiments of the present disclosure provide a very powerful mechanism to create 3D perception. For example, an operator's hands may come in and out of focus as the hands travel through a given VoV and approach a resolvable visual target within a volume of distortion, such as a basilar artery, thereby providing critical contextual information to the operator regarding focus, and whereby imperative visual cues of shadowing and distortion generate a framework for 3D perception and relative positional sense for facilitating navigation within the given VoV.
- dynamic movement within a surgical cavity provides visual cues for generating a depth of field (DoF) at high magnification approximating that of an endoscope, wherein distortions are tolerated for a trade-off in 3D perception and magnification.
- DoF depth of field
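For context on the depth-of-field trade-off described above, the total DoF at high magnification can be estimated with the standard close-focus optics approximation DoF ≈ 2·N·c·(m + 1)/m². The function below is a generic optics sketch under that approximation, not a formula taken from the disclosure:

```python
def depth_of_field_mm(f_number: float, coc_mm: float, magnification: float) -> float:
    """Approximate total depth of field for a microscope-like imager.

    Uses the standard close-focus approximation
        DoF = 2 * N * c * (m + 1) / m**2
    where N is the working f-number, c the circle of confusion (mm),
    and m the magnification. All parameters here are illustrative.
    """
    if magnification <= 0:
        raise ValueError("magnification must be positive")
    return 2.0 * f_number * coc_mm * (magnification + 1.0) / magnification ** 2

# Example: at m = 2, N = 8, c = 0.01 mm, the DoF is only ~0.12 mm,
# which illustrates why focus must be actively managed at high magnification.
dof = depth_of_field_mm(8.0, 0.01, 2.0)
```

Note how the m² term in the denominator makes DoF collapse as magnification grows, consistent with the serial refocusing the text describes.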
- the technique (b) is implementable when distortions are deemed intolerable or the given visual target has changed.
- Technique (b) is implementable for obtaining useful information in the DoF using a CS-m, but may require manual dynamic movements approximating that of an endoscope.
- An endoscope requires mechanical movement of the payload along the z-axis within a surgical cavity to redefine the plane of focus.
- a CS-m involves manually moving the focal distance and adjusting the focal point outside a surgical cavity, whereby greater flexibility is provided.
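Technique (b), the serial focus adjustment, can be sketched as a periodic out-and-back sweep of the focal distance across the interrogation volume. The function name, units, and triangular sweep pattern below are illustrative assumptions, not details taken from the disclosure:

```python
def focal_sweep(near_mm: float, far_mm: float, steps: int) -> list:
    """Generate a triangular (out-and-back) sequence of focal distances
    spanning an interrogation volume from near_mm to far_mm, creating
    multiple focal points across the volume as described for technique (b).
    """
    if steps < 2:
        raise ValueError("need at least two focus steps")
    span = far_mm - near_mm
    forward = [near_mm + span * i / (steps - 1) for i in range(steps)]
    # Sweep out to the far plane, then back, omitting the repeated endpoint.
    return forward + forward[-2::-1]

# Example: sweep a 40 mm deep cavity with 5 focal points per pass.
plan = focal_sweep(250.0, 290.0, 5)
```

Repeating such a plan periodically approximates the dynamic refocusing an endoscope achieves by physically moving its payload along the z-axis.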
- the technique (c) is implementable at high magnification in relation to a larger portion of a viewable anatomy, wherein imaging is simultaneously in focus and usable. If using a CS-m, at high magnification, imaging is serially adjusted to maintain focus of either a suprachiasmatic cistern or an interpeduncular cistern. If using a robotically operated video optical telescopic microscope (ROVOT-m), images are seen at the same optical parameters without manipulation.
- ROVOT-m robotically operated video optical telescopic microscope
- the RPS, while moving through the VoV, is generated by combining monitoring of an operator's hands with receiving inherent haptic feedback, e.g., as the operator's hands move past the focal planes of the arterial structure, through the opticocarotid cistern, and arrive at the basilar artery, all of which have simultaneously been in focus.
- any inherent haptic feedback is enhanced with additional haptic feedback.
- operator experience includes contextual knowledge of the anatomy and the relative location of the structures for facilitating perceiving an RPS of two structures.
- operator knowledge enhances the 3-D perception, especially during a learning curve thereof, i.e., the eye tends to be blind to what the mind does not know.
- a key component of systems and methods using the ROVOT-m further involves a global positioning system (GPS) for facilitating hands-free positioning of the payload, thereby further facilitating generating an RPS.
- GPS global positioning system
- the systems and methods, in compensating for an absence of contextual knowledge, use a second navigation screen with a tracked instrument displaying the relative position for a novice operator, thereby rapidly resolving any initial loss of depth perception, and thereby facilitating learning of the relative position(s) of the anatomy within an interrogated volume by the novice operator. While simultaneous navigation is not absolutely required, the systems and methods use simultaneous navigation for added value by not only shortening the learning curve, but also providing meaningful contextual information, e.g., by using dynamic continuous navigation via one display with simultaneous optical imaging on another display.
- the systems and methods use two different visual input screens which, in the aggregate, synergistically create an immersive surgical volume, wherein all portions of the anatomy are resolvable and continuously referenced relative to one another, thereby minimizing a need for manual adjustment, and thereby providing enhanced “stereoscopy.”
- the loss of distortion and shadowing as critical 3D visual navigation cues otherwise provided by a CS-m is easily compensated for by the foregoing mechanisms in embodiments of the systems and methods that use the ROVOT-m.
- the systems and methods using the ROVOT-m facilitate working in an immersive surgical volume, rather than a surgical volume in which anatomical portions are obscured, for both experienced and novice operators.
- the systems and methods use an untethered optical chain (OC), wherein a working axis of each operator hand is in a plane different than that of a viewing axis, whereby ergonomic value is enhanced, and whereby 3D perception is enhanced.
- OC optical chain
- VT-m video telescopic microscopy
- an operator may simply look down as the operator's hands approach the target and then look up at the monitor whenever magnification is desired. This manual technique (looking up and down) is another technique for adjusting, or compensating for, loss of stereoscopy to generate 3D perception.
- the systems and methods overcome related art challenges by involving at least proprioception features, whereby enhanced tactile and haptic feedback between the surgeon's two hands and the relative anatomy are provided, and whereby RPS and other spatial sensing are generated.
- Complex procedures such as clip ligation of aneurysms, carotid and pituitary transpositions, and dissection of brainstem perforators are increasingly performed by endonasal endoscopy.
- the systems and methods involve various techniques for acquiring 3D data, e.g., using five senses to determine location(s), such as inward and outward precession in a spiral pattern within an interrogation volume.
- a plurality of input data types are used, such as a combination of sound and haptic/proprioception data, a combination of visual and haptic/proprioception data, and a combination of a cross-sectional view of a brain and a view of the brain, wherein selected combinations are displayable in relation to a same field of view (FoV).
- Audio feedback for indicating a trajectory to target eliminates full reliance on merely visual feedback, e.g., audio feedback for a cannulation procedure.
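One hypothetical realization of such audio feedback is a monotonic mapping from lateral deviation off the planned trajectory to tone pitch, so that drift is audible without the operator looking away from the surgical field. The function name and parameter values below are illustrative assumptions, not details from the disclosure:

```python
def deviation_to_pitch(deviation_mm: float,
                       base_hz: float = 440.0,
                       hz_per_mm: float = 60.0,
                       max_hz: float = 2000.0) -> float:
    """Map lateral deviation from a planned trajectory to a tone frequency.

    On-axis (zero deviation) yields the base pitch; increasing deviation
    raises the pitch linearly, clamped at max_hz so extreme drift does
    not produce an inaudible or unpleasant tone.
    """
    pitch = base_hz + hz_per_mm * abs(deviation_mm)
    return min(pitch, max_hz)

# Example: 1 mm off-axis raises the tone from 440 Hz to 500 Hz.
tone = deviation_to_pitch(1.0)
```

A real system would feed such a value to an audio synthesizer at the navigation update rate; the clamped linear mapping is only one of many plausible designs.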
- FIG. 1 this diagram illustrates, in a perspective view, an access port 12 inserted into a human brain 10 for providing access to internal brain tissue during a medical procedure, in accordance with an embodiment of the present disclosure.
- the access port 12 accommodates instruments, such as catheters, surgical probes, or cylindrical ports, e.g., the NICO BrainPath™. Surgical tools and instruments may then be inserted within the lumen of the access port 12 in order to perform surgical, diagnostic, or therapeutic procedures, such as resecting tumors as necessary.
- a straight or linear access port 12 is typically guided down a sulci path of the brain. Surgical instruments would then be inserted down the access port 12 .
- the access port 12 also facilitates the use of catheters, DBS needles, and biopsy procedures; the present disclosure is also applicable to biopsies and/or catheters in other medical procedures performed on other parts of the body, as well as to medical procedures that do not use an access port.
- Various examples of the present disclosure are generally suitable for use in any medical procedure that may use optical imaging systems.
- FIG. 2A this diagram illustrates, in a perspective view, an exemplary navigation system environment 200 , usable to support navigated image-guided surgery, in accordance with an embodiment of the present disclosure.
- a surgeon 201 performs surgery on a patient 202 in an operating room (OR) environment.
- a medical navigation system 205 comprises an equipment tower, tracking system, displays, and tracked instruments to assist the surgeon 201 during the procedure.
- An operator 203 may also be present to operate, control, and provide assistance for the medical navigation system 205 .
- FIG. 2B this diagram illustrates, in a front view, an example medical navigation system 205 in greater detail, in accordance with an embodiment of the present disclosure.
- the disclosed optical imaging system is usable in the context of the medical navigation system 205 .
- the medical navigation system 205 comprises at least one display, such as displays 206 , 211 , for displaying a video image, an equipment tower 207 , and a positioning system 208 , such as a mechanical arm, which may support an optical imaging system 500 , e.g., comprising an optical scope.
- At least one of the displays 206 , 211 comprises a touch-sensitive display for receiving touch input.
- the equipment tower 207 is mountable on a frame, e.g., a rack or cart, and may comprise a power supply and a computer or controller configured to execute at least one of planning software, navigation software, and other software for managing the positioning system 208 and at least one instrument tracked by the navigation system 205 .
- the equipment tower 207 comprises a single tower configuration operating with dual displays 206, 211; however, other configurations are also contemplated, e.g., a dual tower, a single display, etc.
- the equipment tower 207 is configurable with a universal power supply (UPS) to provide for emergency power in addition to a regular AC adapter power supply.
- UPS universal power supply
- a portion of the patient's anatomy is retainable by a holder.
- the patient's head and brain is retainable by a head holder 217 .
- the access port 12 and associated introducer 210 are insertable into the head to provide access to a surgical site.
- the imaging system 500 is usable to view down the access port 12 at a sufficient magnification to allow for enhanced visibility.
- the output of the imaging system 500 is receivable by at least one computer or controller to generate a view that is depictable on a visual display, e.g., one or more displays 206 , 211 .
- the navigation system 205 comprises a tracked pointer tool 222 .
- the tracked pointer tool 222 comprises markers 212 to enable tracking by a tracking camera 213 and is configured to identify points, e.g., fiducial points, on a patient.
- An operator, typically a nurse or the surgeon 201, may use the tracked pointer tool 222 to identify the location of points on the patient 202, in order to register the location of selected points on the patient 202 in the navigation system 205.
- a guided robotic system with closed loop control is usable as a proxy for human interaction. Guidance to the robotic system is providable by any combination of input sources such as image analysis, tracking of objects in the operating room using markers placed on various objects of interest, or any other suitable robotic system guidance techniques.
- fiducial markers 212 are configured to couple with the introducer 210 for tracking by the tracking camera 213 , which may provide positional information of the introducer 210 from the navigation system 205 .
- the fiducial markers 212 are alternatively or additionally attached to the access port 12 .
- the tracking camera 213 comprises a 3D infrared optical tracking stereo camera, e.g., a camera comprising at least one feature of a Northern Digital Imaging® (NDI) camera.
- the tracking camera 213 alternatively comprises an electromagnetic system (not shown), such as a field transmitter, that is configured to use at least one receiver coil disposed in relation to the tool(s) intended for tracking.
- a location of the tracked tool(s) is determinable by using the induced signals and their phases in each of the at least one receiver coil by way of a profile of the electromagnetic field (measured, calculated, or known) and a position of each at least one receiver coil relative to another at least one receiver coil (measured, calculated, or known). Operation and examples of this technology are further explained in Chapter 2 of “Image-Guided Interventions Technology and Application,” Peters, T.; Cleary, K., 2008, ISBN: 978-0-387-72856-7, incorporated herein by reference in its entirety, the subject matter of which is encompassed by the present disclosure.
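Under simplifying assumptions (known transmitter positions, and coil-to-transmitter distances already recovered from the measured field profile), the position computation described above reduces to a least-squares trilateration. The following is an illustrative sketch only, not the patented method:

```python
import numpy as np

def trilaterate(transmitters, distances):
    """Least-squares position estimate from known transmitter positions
    and coil-to-transmitter distances (a simplified stand-in for the
    electromagnetic field-model computation described in the text).

    Subtracting the first sphere equation |x - p_0|^2 = d_0^2 from each
    |x - p_i|^2 = d_i^2 linearizes the system:
        2 (p_i - p_0) . x = (|p_i|^2 - |p_0|^2) - (d_i^2 - d_0^2)
    which is then solved in the least-squares sense.
    """
    p = np.asarray(transmitters, dtype=float)   # shape (n, 3), n >= 4
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four or more non-coplanar transmitters, the linear system has a unique solution; real electromagnetic trackers additionally estimate orientation from coil phase, which this sketch omits.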
- location data of the positioning system 208 and/or the access port 12 is determinable by the tracking camera 213 , the tracking camera 213 configured to detect the fiducial markers 212 disposed, or otherwise fixed, e.g., rigidly coupled, in relation to any of the positioning system 208 , the access port 12 , the introducer 210 , the tracked pointer tool 222 , and/or other tracked instruments.
- the fiducial marker(s) 212 comprise at least one of active markers and passive markers.
- the displays 206 , 211 are configured to output the computed data of the navigation system 205 .
- the output provided by the displays 206 , 211 comprises a multi-view output of a patient anatomy, the multi-view output comprising at least one of an axial view, a sagittal view, and a coronal view.
- At least one of the fiducial markers 212 are placed on tools, e.g., the access port 12 and/or the imaging system 500 , to be tracked, to facilitate determination of the location and orientation of such tools by using the tracking camera 213 and the navigation system 205 .
- a stereo camera of the tracking system is configured to detect the fiducial markers 212 and to capture images thereof for providing identifiable points for tracking such tools.
- a tracked tool is defined by a grouping of the fiducial markers 212 , whereby a rigid body is defined and identified by the tracking system. This definition, in turn, is usable for determining the position and/or orientation in 3D of a tracked tool in a virtual space.
- the position and orientation of the tracked tool in 3D is trackable in six degrees of freedom, e.g., x, y, and z coordinates as well as pitch, yaw, and roll rotations, or in five degrees of freedom, e.g., x, y, and z coordinates as well as two degrees of free rotation.
- the tool is tracked in at least three degrees of freedom, e.g., tracking a position of a tip of a tool in at least the x, y, and z coordinates.
- at least three fiducial markers 212 are provided on a tracked tool to define the tracked tool in a virtual space; however, preferably, at least four fiducial markers 212 are used.
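One reason at least three (preferably four) non-collinear markers are needed is that the rigid 6-DoF pose of the tool can then be recovered by fitting the tool's known marker geometry to the tracked marker positions, e.g., with the Kabsch/SVD method. This is a generic sketch of that standard technique, not the disclosed implementation:

```python
import numpy as np

def marker_pose(model_pts, observed_pts):
    """Estimate the rigid transform (rotation R, translation t) mapping a
    tool's known marker geometry onto tracked marker positions, via the
    Kabsch/SVD method. Requires >= 3 non-collinear corresponding markers.
    """
    A = np.asarray(model_pts, dtype=float)
    B = np.asarray(observed_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if one arises.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

A fourth marker over-determines the fit, improving robustness to measurement noise and to partial occlusion, which is consistent with the preference stated above.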
- camera images capturing the fiducial markers 212 are logged and tracked, by, for example, a closed circuit television (CCTV) camera.
- the fiducial markers 212 are selectable to enable, assist, and/or facilitate segmentation in the captured images.
- the navigation system 205 implements infrared (IR) reflecting markers used in conjunction with an IR light source originating from the direction of the camera.
- IR infrared
- An example of such an apparatus comprises tracking devices, such as the Polaris® system available from Northern Digital Inc.
- the spatial position and orientation of the tracked tool and/or the actual and desired position and orientation of the positioning system 208 are determinable by optical detection using a camera. The optical detection is performable by using an optical camera, thereby rendering the fiducial markers 212 optically visible.
- the fiducial markers 212 are combinable with a suitable tracking system to determine the spatial position of the tracked tools within the operating theatre.
- Different tools and/or targets are providable with respect to different sets of fiducial markers 212 in different configurations. Differentiation of the different tools and/or targets and their corresponding virtual volumes is possible based on the specific configuration and/or orientation of each set of fiducial markers 212 relative to another set of fiducial markers 212, thereby enabling each such tool and/or target to have a distinct individual identity associated with a distinct individual identifier within the navigation system 205.
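A simple way to realize such marker-set differentiation is to compare a rigid-motion-invariant signature, e.g., the sorted inter-marker distances, against a database of known tools. The function names, tolerance, and database format below are illustrative assumptions, not details from the disclosure:

```python
import itertools
import math

def marker_signature(points):
    """Sorted pairwise inter-marker distances: invariant under rigid
    motion, so the same tool yields the same signature in any pose."""
    return sorted(math.dist(a, b) for a, b in itertools.combinations(points, 2))

def identify_tool(observed_markers, tool_db, tol_mm=0.5):
    """Match an observed marker cluster against a database mapping tool
    names to reference signatures; return the first match within
    tolerance, or None if the cluster matches no known tool."""
    sig = marker_signature(observed_markers)
    for name, ref_sig in tool_db.items():
        if len(ref_sig) == len(sig) and all(
            abs(a - b) <= tol_mm for a, b in zip(sig, ref_sig)
        ):
            return name
    return None
```

Because marker geometries are deliberately made asymmetric and mutually distinct, such a signature suffices to assign each tracked cluster its individual identifier.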
- the distinct individual identifiers provide information to the navigation system 205 , such as information relating to the size and/or shape of the tool within navigation system 205 .
- the distinct individual identifier may also provide additional information, such as the tool's central point or the tool's central axis, among other information.
- the virtual tool is also determinable from a database of tools stored in, or provided to, the navigation system 205 .
- the fiducial markers 212 are tracked relative to a reference point, or a reference object, in the operating room, such as the patient 202 .
- the fiducial markers 212 may comprise the same type or a combination of at least two different types. Possible types of markers comprise reflective markers, radiofrequency (RF) markers, electromagnetic (EM) markers, pulsed or un-pulsed light-emitting diode (LED) markers, glass markers, reflective adhesives, or reflective unique structures or patterns, among others.
- RF and EM markers may have specific signatures for the specific tools to which such markers are attached. Reflective adhesives, structures and patterns, glass markers, and LED markers are detectable using optical detectors, while RF and EM markers are detectable using antennas. Different marker types are selectable to suit different operating conditions. For example, using EM and RF markers enables tracking of tools without requiring a line-of-sight from a tracking camera to the fiducial markers 212; and using an optical tracking system avoids additional noise from electrical emission and detection systems.
- the fiducial markers 212 comprise printed, or 3D, features for detection by an auxiliary camera, such as a wide-field camera (not shown) and/or the imaging system 500 .
- Printed markers may also be used as a calibration pattern, for example to provide distance information, e.g., 3D distance information, to an optical detector.
- Printed identification markers comprise features, such as concentric circles with different ring spacing and/or different types of bar codes, among other features.
- the contours of objects, e.g., the side of the access port 12, are captured and identified using optical imaging devices and the tracking system.
- a guide clamp 218 (or, more generally, a guide) for holding the access port 12 is providable.
- the guide clamp 218 facilitates retention of the access port 12 at a fixed position and orientation, thereby freeing use of the surgeon's hands.
- An articulated arm 219 is provided to hold the guide clamp 218 .
- the articulated arm 219 has up to six degrees of freedom for positioning the guide clamp 218 .
- the articulated arm 219 is lockable to fix its position and orientation, e.g., once a desired position is achieved.
- the articulated arm 219 is attached, or attachable, in relation to a point based on the patient head holder 217 , or another suitable point, such as on another patient support, e.g., on the surgical bed, to ensure that, when locked in place, the guide clamp 218 does not move relative to the patient's head.
- setup of a navigation system is relatively complex, e.g., many pieces of equipment associated with the surgical procedure, as well as elements of the navigation system 205 , must be arranged and/or prepared. Further, setup time typically increases as more equipment is added. To assist in addressing this, the navigation system 205 comprises two additional wide-field cameras to enable video overlay information. Video overlay information is then insertable into displayed images, such as images displayed on at least one of the displays 206 , 211 .
- the overlay information represents the physical space where accuracy of the 3D tracking system, e.g., a part of the navigation system, is greater, represents the available range of motion of the positioning system 208 and/or the imaging system 500, and/or facilitates guiding the head and/or positioning the patient.
- the navigation system 205 provides tools to the neurosurgeon that may help to provide more relevant information to the surgeon, and may assist in improving performance and accuracy of port-based neurosurgical operations.
- the navigation system 205 is also suitable for at least one of: a brain biopsy, a functional/deep-brain stimulation, a catheter/shunt placement (in the brain or elsewhere), an open craniotomy, and/or an endonasal/skull-based/ear-nose-throat (ENT) procedure, among others.
- ENT endonasal/skull-based/ear-nose-throat
- the same navigation system 205 is usable for performing any or all of these procedures, with, or without, modification as appropriate.
- the navigation system 205 is usable for performing a diagnostic procedure, such as brain biopsy.
- a brain biopsy may involve the insertion of a thin needle into a patient's brain for purposes of removing a sample of brain tissue. The brain tissue is subsequently assessed by a pathologist to determine whether the brain tissue is cancerous, for example.
- Brain biopsy procedures are conducted with, or without, a stereotactic frame. Both types of procedures are performable using image-guidance.
- Frameless biopsies are performable by way of the navigation system 205 .
- the tracking camera 213 is adaptable to any suitable tracking system.
- the tracking camera 213 , and any associated tracking system that uses the tracking camera 213 is replaceable with any suitable tracking system which may, or may not, use camera-based tracking techniques.
- a tracking system that does not use the tracking camera 213, such as a radiofrequency tracking system, is used with the navigation system 205.
- FIG. 3 this block diagram illustrates a control and processing system 300 usable in the medical navigation system 205, as shown in FIG. 2B, e.g., as part of the equipment tower 207, in accordance with an embodiment of the present disclosure.
- the control and processing system 300 comprises at least one processor 302 , a memory 304 , a system bus 306 , at least one input/output (I/O) interface 308 , a communications interface 310 , and a storage device 312 .
- I/O input/output
- the control and processing system 300 is interfaceable with other external devices, such as a tracking system 321 , a data storage 342 , and at least one external user I/O device 344 , e.g., at least one of a display, a keyboard, a mouse, sensors attached to medical equipment, a foot pedal, a microphone, and a speaker.
- the data storage 342 comprises any suitable data storage device, such as a local, or remote, computing device, e.g., a computer, hard drive, digital media device, or server, having a database stored thereon.
- the data storage device 342 further comprises identification data 350 for identifying one or more medical instruments 360 and configuration data 352 that associates customized configuration parameters with one or more medical instruments 360 .
- the data storage device 342 further comprises preoperative image data 354 and/or medical procedure planning data 356 .
- Although the data storage device 342 is shown as a single device, it is understood that, in other embodiments, the data storage device 342 alternatively comprises multiple storage devices.
- the medical instruments 360 are identifiable by the control and processing unit 300 .
- the medical instruments 360 are connected to, and controlled by, the control and processing unit 300 .
- the medical instruments 360 are operated, or otherwise employed, independent of the control and processing unit 300 .
- the tracking system 321 is employed to track at least one medical instrument 360 and spatially register the at least one tracked medical instrument to an intraoperative reference frame.
- a medical instrument 360 comprises tracking markers, such as tracking spheres, recognizable by the tracking camera 213 .
- the tracking camera 213 comprises an infrared (IR) tracking camera.
- a sheath placed over a medical instrument 360 is connected to, and controlled by, the control and processing unit 300 .
- control and processing unit 300 is also interfaceable with a number of configurable devices 320 , and can intraoperatively reconfigure at least one such device based on configuration parameters obtained from configuration data 352 .
- devices 320 include, but are not limited to, at least one external imaging device 322 , at least one illumination device 324 , the positioning system 208 , the tracking camera 213 , at least one projection device 328 , and at least one display, such as the displays 206 , 211 .
- exemplary aspects of the embodiments are implementable via the processor(s) 302 and/or memory 304 , in accordance with the present disclosure.
- the functionalities described herein can be partially implemented via hardware logic in the processor 302 and partially using the instructions stored in the memory 304 , as at least one processing module or engine 370 .
- Example processing modules include, but are not limited to, a user interface engine 372, a tracking module 374, a motor controller 376, an image processing engine 378, an image registration engine 380, a procedure planning engine 382, a navigation engine 384, and a context analysis module 386.
- While the example processing modules are separately shown in FIG. 3, the processing modules 370 are storable in the memory 304 and are collectively referred to as the processing modules 370.
- at least two modules 370 are used together for performing a function.
- the modules 370 are embodied as a unified set of computer-readable instructions, e.g., stored in the memory 304 , rather than as distinct sets of instructions.
- the system 300 is not intended to be limited to the components shown in FIG. 3 .
- One or more components of the control and processing system 300 are provided as an external component or device.
- the navigation module 384 is provided as an external navigation system that is integrated with the control and processing system 300 .
- Some embodiments are implemented using the processor 302 without additional instructions stored in memory 304 .
- Some embodiments are implemented using the instructions stored in memory 304 for execution by one or more general purpose microprocessors.
- the present disclosure is not limited to any specific configuration of hardware and/or software.
- the navigation system 205 which may include the control and processing unit 300 , provides tools to the surgeon for improving performance of the medical procedure and/or post-operative outcomes.
- the navigation system 205 is also applicable to a brain biopsy, a functional/deep-brain stimulation, a catheter/shunt placement procedure, open craniotomies, endonasal/skull-based/ENT procedures, spine procedures, and procedures on other parts of the body, such as breast biopsies, liver biopsies, etc. While several examples have been provided, the present disclosure is applicable to any suitable medical procedure.
- this flow diagram illustrates a method 400 of performing a port-based surgical procedure using a navigation system, such as the medical navigation system 205 , as described in relation to FIGS. 2A and 2B , in accordance with an embodiment of the present disclosure.
- the method 400 comprises importing a port-based surgical plan, as indicated by block 402 .
- the method 400 further comprises positioning and fixing the patient by using a body holding mechanism and confirming that the head position complies with the patient plan in the navigation system, as indicated by block 404 , wherein confirming that the head position complies with the patient plan is implementable by a computer or a controller being a component of the equipment tower 207 .
- the method 400 further comprises initiating registration of the patient, as indicated by block 406 .
- registration or “image registration” refers to the process of transforming different sets of data into one coordinate system. Data may include multiple photographs, data from different sensors, times, depths, or viewpoints.
- the process of “registration” is used in the present application for medical imaging in which images from different imaging modalities are co-registered. Registration is used in order to be able to compare or integrate the data obtained from these different modalities.
- Non-limiting examples include intensity-based methods that compare intensity patterns in images via correlation metrics, while feature-based methods find correspondence between image features such as points, lines, and contours.
- Image registration methods may also be classified according to the transformation models they use to relate the target image space to the reference image space. Another classification can be made between single-modality and multi-modality methods.
- Single-modality methods typically register images in the same modality acquired by the same scanner or sensor type, for example, a series of magnetic resonance (MR) images is co-registered, while multi-modality registration methods are used to register images acquired by different scanner or sensor types, for example in magnetic resonance imaging (MRI) and positron emission tomography (PET).
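To make the distinction concrete, the intensity-based approach above can be sketched as a tiny single-modality registration: score candidate alignments with a normalized cross-correlation metric and keep the best one. This is an illustrative toy (integer translations only, function names invented here), not the registration method of the disclosure.

```python
import numpy as np

def normalized_cross_correlation(fixed, moving):
    """Intensity-based similarity metric between two images of equal shape.

    Returns a value in [-1, 1]; 1 indicates a perfect linear intensity match.
    """
    f = fixed.astype(float).ravel()
    m = moving.astype(float).ravel()
    f -= f.mean()
    m -= m.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(m)
    if denom == 0.0:
        return 0.0
    return float(np.dot(f, m) / denom)

def best_shift(fixed, moving, max_shift=5):
    """Exhaustively search integer translations and keep the one that
    maximizes the correlation metric (a toy single-modality registration)."""
    best = (0, 0)
    best_score = -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = normalized_cross_correlation(fixed, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score
```

Feature-based methods replace the exhaustive intensity search with correspondences between extracted landmarks (points, lines, contours), which is what the fiducial-based workflow later in this disclosure does.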
- multi-modality registration methods are used in medical imaging of the head and/or brain as images of a subject are frequently obtained from different scanners. Examples include registration of brain computerized tomography (CT)/MRI images or PET/CT images for tumor localization, registration of contrast-enhanced CT images against non-contrast-enhanced CT images, and registration of ultrasound and CT.
- this flow diagram illustrates an example of alternate sets of steps performable between performing the step of initiating registration, as indicated by block 406 , and performing the step of completing registration, as indicated by block 408 , in the method 400 , as shown in FIG. 4A , in accordance with embodiments of the present disclosure.
- the method 400 comprises performing a first alternate set of steps, as indicated by block 440 , the first alternate set of steps comprising: identifying fiducial markers 112 on images, as indicated by block 442 ; touching the touch points with a tracked instrument, as indicated by block 444 ; and computing the registration to the reference markers by way of the navigation system 205 , as indicated by block 446 .
- the method 400 comprises performing a second alternate set of steps, as indicated by block 450 , the second alternate set of steps comprising: scanning the face by using a 3D scanner, as indicated by block 452 ; extracting the face surface from MR/CT data, as indicated by block 454 ; and matching surfaces to determine registration data points, as indicated by block 456 .
- the method 400 further comprises confirming registration by using the extracted data and processing the same, as indicated by block 408 , as also shown in FIG. 4A .
- the method 400 further comprises draping the patient, as indicated by block 410 .
- draping comprises covering the patient and surrounding areas with a sterile barrier to create and maintain a sterile field during the surgical procedure.
- the purpose of draping is to eliminate the passage of microorganisms, e.g., bacteria, viruses, prions, contamination, and the like, between non-sterile and sterile areas.
- conventional navigation systems require that the non-sterile patient reference is replaced with a sterile patient reference of identical geometry, location, and orientation.
- the method 400 further comprises: confirming the patient engagement points, as indicated by block 412 ; and preparing and planning the craniotomy, as indicated by block 414 .
- the method 400 further comprises: performing the craniotomy by cutting a bone flap and temporarily removing the same from the remainder of the skull to access the brain, as indicated by block 416 ; and updating registration data with the navigation system, as indicated by block 422 .
- the method 400 further comprises: confirming engagement and the motion range within the region of the craniotomy, as indicated by block 418 ; and cutting the dura at the engagement points and identifying the sulcus, as indicated by block 420 .
- the method 400 further comprises determining whether the trajectory plan has been completed, as indicated by block 424 . If the trajectory plan is not yet completed, the method 400 further comprises: aligning a port on engagement and setting the planned trajectory, as indicated by block 432 ; cannulating, as indicated by block 434 ; and again determining whether the trajectory plan is completed, as indicated by block 424 .
- Cannulation involves inserting a port into the brain, typically along a sulci path, the sulci path being identified in performing the step of cutting the dura at the engagement points and identifying the sulcus, as indicated by block 420 , along a trajectory plan.
- cannulation is typically an iterative process that involves repeating the steps of aligning the port on engagement and setting the planned trajectory, as indicated by block 432 , and then cannulating to the target depth, as indicated by block 434 , until the complete trajectory plan is executed by making such determination, as indicated by block 424 .
- the method 400 further comprises determining whether the trajectory plan has been completed, as indicated by block 424 . If the trajectory plan is completed, the method 400 further comprises: performing a resection to remove part of the brain and/or tumor of interest, as indicated by block 426 ; decannulating by removing the port and any tracking instruments from the brain, as indicated by block 428 ; and closing the dura and completing the craniotomy, as indicated by block 430 .
- Some aspects of the steps shown in FIG. 4A are specific to port-based surgery, such as portions of the steps indicated by blocks 428 , 420 , and 434 , but the appropriate portions of these blocks are skipped or suitably modified when performing non-port-based surgery.
- the medical navigation system 205 may acquire and maintain a reference of the location of the tools in use as well as the patient in three-dimensional (3D) space.
- a tracked reference frame that is fixed, e.g., relative to the patient's skull, is present.
- a transformation is calculated that maps the frame of reference from preoperative MRI or CT imagery to the physical space of the surgery, specifically the patient's head.
- the navigation system 205 tracks locations of fiducial markers fixed to the patient's head, relative to the static patient reference frame.
- the patient reference frame is typically rigidly attached to the head fixation device, such as a Mayfield clamp. Registration is typically performed before the sterile field has been established, e.g., by performing the step as indicated by block 410 .
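The fiducial-based registration above amounts to a least-squares rigid transform between corresponding points in preoperative image space and tracked physical space. A standard Kabsch/SVD solution is sketched below under the assumption of at least three non-collinear point pairs; it illustrates the computation, not the navigation system's actual implementation.

```python
import numpy as np

def rigid_registration(image_pts, physical_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducial
    coordinates to physical (patient) space, via the Kabsch/SVD method.

    Both inputs are (N, 3) arrays of corresponding points, N >= 3.
    """
    P = np.asarray(image_pts, dtype=float)
    Q = np.asarray(physical_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Once (R, t) is known, any preoperative MRI/CT coordinate p maps to physical space as R @ p + t, which is what lets tracked tools be displayed over the preoperative imagery.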
- FIG. 5 : this diagram illustrates, in a perspective view, use of an example imaging system 500 in a medical procedure, in accordance with an embodiment of the present disclosure.
- While FIG. 5 shows the imaging system 500 being used in the context of a navigation system environment 200 , e.g., using a navigation system as above described, the imaging system 500 may also be used outside of a navigation system environment, e.g., without any navigation support.
- An operator, typically a surgeon 201 , may use the imaging system 500 to observe the surgical site, e.g., to look down an access port.
- the imaging system 500 is attached to a positioning system 208 , e.g., a controllable and adjustable robotic arm.
- the position and orientation of the positioning system 208 , imaging system 500 and/or access port is tracked using a tracking system, such as described for the navigation system 205 .
- the distance d between the imaging system 500 (more specifically, the aperture of the imaging system 500 ) and the viewing target, e.g., the surface of the surgical site, is referred to as the working distance (WD).
- the imaging system 500 is configurable for use in a predefined range of WD, e.g., in the range of approximately 15 cm to approximately 75 cm. If the imaging system 500 is mounted on the positioning system 208 , the actual available range of WD is dependent on both the WD of the imaging system 500 as well as the workspace and kinematics of the positioning system 208 .
- the imaging system 500 comprises an optical assembly 505 (also referred to as an optical train).
- the optical assembly 505 comprises optics, e.g., lenses, optical fibers, etc., for focusing and zooming on the viewing target.
- the optical assembly 505 comprises zoom optics 510 (which may include one or more zoom lenses) and focus optics 515 (which may include one or more focus lenses).
- zoom optics 510 and the focus optics 515 are independently movable within the optical assembly 505 in order to respectively adjust the zoom and focus. Where the zoom optics 510 and/or the focus optics 515 comprise more than one lens, each individual lens is independently movable.
- the optical assembly 505 comprises an aperture (not shown) which is adjustable.
- the imaging system 500 comprises a zoom actuator 520 and a focus actuator 525 for respectively positioning the zoom optics 510 and the focus optics 515 .
- the zoom actuator 520 and/or the focus actuator 525 comprise an electric motor or other types of actuators, such as pneumatic actuators, hydraulic actuators, shape-changing materials, e.g., piezoelectric materials or other smart materials, or engines, among other possibilities.
- While the term “motorized” is used in the present disclosure, the use of this term does not limit the present disclosure to the use of motors necessarily, but is intended to cover all suitable actuators, including motors.
- While the zoom actuator 520 and the focus actuator 525 are shown outside of the optical assembly 505 , in some examples, the zoom actuator 520 and the focus actuator 525 are components of, or are integrated with, the optical assembly 505 .
- the zoom actuator 520 and the focus actuator 525 may operate independently, to respectively control positioning of the zoom optics 510 and the focus optics 515 .
- the lens(es) of the zoom optics 510 and/or the focus optics 515 is each mounted on a linear stage, e.g., a motion system that restricts an object to move in a single axis, which may include a linear guide and an actuator; or a conveyor system such as a conveyor belt mechanism, that is respectively moved by the zoom actuator 520 and/or the focus actuator 525 to control positioning of the zoom optics 510 and/or the focus optics 515 .
- the zoom optics 510 is mounted on a linear stage that is driven, via a belt drive, by the zoom actuator 520 , while the focus optics 515 is geared to the focus actuator 525 .
- the independent operation of the zoom actuator 520 and the focus actuator 525 may enable the zoom and focus to be adjusted independently. Thus, when an image is in focus, the zoom is adjusted without requiring further adjustments to the focus optics 515 to produce a focused image.
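The independence of the two stages can be sketched as follows; the class name, stage ranges, and units are illustrative, not taken from the disclosure. The point is simply that commanding one stage leaves the other stage's position untouched.

```python
class OpticalAssembly:
    """Minimal sketch of independently actuated zoom and focus stages.

    Stage positions are in millimetres along each linear stage; the
    limits here are invented for illustration, not from any real device.
    """

    def __init__(self, zoom_range=(0.0, 50.0), focus_range=(0.0, 30.0)):
        self._zoom_range = zoom_range
        self._focus_range = focus_range
        self.zoom_pos = zoom_range[0]
        self.focus_pos = focus_range[0]

    @staticmethod
    def _clamp(value, lo, hi):
        return max(lo, min(hi, value))

    def set_zoom(self, position_mm):
        # Moving the zoom stage never disturbs the focus stage.
        self.zoom_pos = self._clamp(position_mm, *self._zoom_range)

    def set_focus(self, position_mm):
        # Likewise, focusing never disturbs the zoom stage.
        self.focus_pos = self._clamp(position_mm, *self._focus_range)
```

In a parfocal design such as the one described here, `set_zoom` can be called after focusing without a compensating `set_focus` call.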
- operation of the zoom actuator 520 and the focus actuator 525 is controllable by a controller 530 , e.g., a microprocessor, of the imaging system 500 .
- the controller 530 may receive control input, e.g., from an external system, such as an external processor or an input device.
- the control input indicates at least one of a desired zoom and a desired focus, and the controller 530 may, in response, cause at least one of zoom actuator 520 and the focus actuator 525 to respectively move at least one of the zoom optics 510 and the focus optics 515 accordingly to respectively achieve at least one of the desired zoom and the desired focus.
- the zoom optics 510 and/or the focus optics 515 is moved or actuated without the use of the zoom actuator 520 and/or the focus actuator 525 .
- the focus optics 515 uses electrically-tunable lenses or other deformable material that is directly controllable by the controller 530 .
- the imaging system 500 may enable an operator, e.g., a surgeon, to control zoom and/or focus during a medical procedure without having to manually adjust the zoom and/or focus optics 510 , 515 .
- the operator may provide control input to the controller 530 verbally, e.g., via a voice recognition input system, by instructing an assistant to enter control input into an external input device, e.g., into a user interface provided by a workstation, using a foot pedal, or by other such means.
- the controller 530 executes preset instructions to maintain the zoom and/or focus at preset values, e.g., to perform autofocusing, without requiring further control input during the medical procedure.
- an external processor e.g., a processor of a workstation or the navigation system, in communication with the controller 530 is used to provide control input to the controller 530 .
- the external processor provides a graphical user interface via which the operator or an assistant inputs instructions to control zoom and/or focus of the imaging system 500 .
- the controller 530 is alternatively or additionally in communication with an external input system, e.g., a voice-recognition input system or a foot pedal.
- the optical assembly 505 comprises at least one auxiliary optic 540 , e.g., an adjustable aperture, which is static or dynamic. Where the auxiliary optics 540 is dynamic, the auxiliary optics 540 is moved using an auxiliary actuator (not shown) which is controlled by the controller 530 .
- the imaging system 500 further comprises a camera 535 , e.g., a high-definition (HD) camera, configured to capture image data from the optical assembly. Operation of the camera is controlled by the controller 530 .
- the camera 535 also outputs data to an external system, e.g., an external workstation or external output device, to view the captured image data.
- the camera 535 outputs data to the controller 530 , which, in turn, transmits the data to an external system for viewing.
- the captured images are viewable on a larger display and are displayable together with other information relevant to the medical procedure, e.g., a wide-field view of the surgical site, navigation markers, 3D images, etc.
- the camera 535 used with the imaging system 500 facilitates improving the consistency of image quality among different medical centers.
- Image data captured by the camera 535 is displayable on a display together with a wide-field view of the surgical site, for example, in a multiple-view user interface. The portion of the surgical site that is captured by the camera 535 is visually indicated in the wide-field view of the surgical site.
- the imaging system 500 comprises a three-dimensional (3D) scanner 545 or 3D camera for obtaining 3D information of the viewing target.
- 3D information from the 3D scanner 545 is captured by the camera 535 , or is captured by the 3D scanner 545 itself. Operation of the 3D scanner 545 is controlled by the controller 530 ; and the 3D scanner 545 transmits data to the controller 530 .
- the 3D scanner 545 itself, transmits data to an external system, e.g., an external work station.
- 3D information from the 3D scanner 545 is used to generate a 3D image of the viewing target, e.g., a 3D image of a target tumor to be resected.
- 3D information is also useful in an augmented reality (AR) display provided by an external system.
- an AR display e.g., provided via AR glasses, may, using information from a navigation system to register 3D information with optical images, overlay a 3D image of a target specimen on a real-time optical image, e.g., an optical image captured by the camera 535 .
- the controller 530 is coupled to a memory 550 .
- the memory 550 is internal or external in relation to the imaging system 500 .
- Data received by the controller 530 , e.g., image data from the camera 535 and/or 3D data from the 3D scanner 545 , is stored in the memory 550 .
- the memory 550 may also contain instructions to enable the controller to operate the zoom actuator 520 and the focus actuator 525 .
- the memory 550 stores instructions to enable the controller to perform autofocusing, as further below discussed.
- the imaging system 500 communicates with an external system, e.g., a navigation system or a workstation, via wired or wireless communication.
- the imaging system 500 comprises a wireless transceiver (not shown) to enable wireless communication.
- the imaging system 500 comprises a power source, e.g., a battery, or a connector to a power source, e.g., an AC adaptor. In some examples, the imaging system 500 receives power via a connection to an external system, e.g., an external workstation or processor.
- the imaging system 500 comprises a light source (not shown).
- the light source may not itself generate light but rather direct light from another light generating component.
- the light source comprises an output of a fiber optics cable connected to another light generating component, which is part of the imaging system 500 or external to the imaging system 500 .
- the light source is mounted near the aperture of the optical assembly, to direct light to the viewing target. Providing the light source with the imaging system 500 may help to improve the consistency of image quality among different medical centers.
- the power or output of the light source is controlled by the imaging system 500 , e.g., by the controller 530 , or is controlled by a system external to the imaging system 500 , e.g., by an external workstation or processor, such as a processor of a navigation system.
- the optical assembly 505 , zoom actuator 520 , focus actuator 525 , and camera 535 may all be housed within a single housing (not shown) of the imaging system.
- the controller 530 , memory 550 , 3D scanner 545 , wireless transceiver, power source, and/or light source are also housed within the housing.
- the imaging system 500 also provides mechanisms to enable manual adjusting of the zoom and/or focus optics 510 , 515 . Such manual adjusting is enabled in addition to motorized adjusting of zoom and focus. In some examples, such manual adjusting is enabled in response to user selection of a “manual mode” on a user interface.
- the imaging system 500 is mountable on a movable support structure, such as the positioning system, e.g., robotic arm, of a navigation system, a manually operated support arm, a ceiling mounted support, a movable frame, or other such support structure.
- the imaging system 500 is removably mounted on the movable support structure.
- the imaging system 500 comprises a support connector, e.g., a mechanical coupling, to enable the imaging system 500 to be quickly and easily mounted or dismounted from the support structure.
- the support connector on the imaging system 500 is configured to be suitable for connecting with a typical complementary connector on the support structure, e.g., as designed for typical end effectors.
- the imaging system 500 is mounted to the support structure together with other end effectors, or is mounted to the support structure via another end effector.
- when mounted, the imaging system 500 is at a known fixed position and orientation relative to the support structure, e.g., by calibrating the position and orientation of the imaging system 500 after mounting. In this way, by determining the position and orientation of the support structure, e.g., by using a navigation system or by tracking the movement of the support structure from a known starting point, the position and orientation of the imaging system 500 is also determined.
- the imaging system 500 may include a manual release button that, when actuated, enables the imaging system 500 to be manually positioned, e.g., without software control by the support structure.
- the imaging system 500 comprises an array of trackable markers, which is mounted on a frame on the imaging system 500 to enable the navigation system to track the position and orientation of the imaging system 500 .
- the movable support structure, e.g., a positioning system of the navigation system, on which the imaging system 500 is mounted, is tracked by the navigation system; and the position and orientation of the imaging system 500 is determined by using the known position and orientation of the imaging system 500 relative to the movable support structure.
- the trackable markers comprise passive reflective tracking spheres, active infrared (IR) markers, active light emitting diodes (LEDs), a graphical pattern, or a combination thereof. At least three trackable markers are provided on a frame to enable tracking of position and orientation. In some examples, four passive reflective tracking spheres are coupled to the frame. While some specific examples of the type and number of trackable markers have been given, any suitable trackable marker and configuration may be used, as appropriate.
- determination of the position and orientation of the imaging system 500 relative to the viewing target is performed by a processor external to the imaging system 500 , e.g., a processor of the navigation system.
- Information about the position and orientation of the imaging system 500 is used, together with a robotic positioning system, to maintain alignment of the imaging system 500 with the viewing target, e.g., to view down an access port during port-based surgery, throughout the medical procedure.
- the navigation system tracks the position and orientation of the positioning system and/or the imaging system 500 either collectively or independently. Using this information as well as tracking of the access port, the navigation system determines the desired joint positions for the positioning system so as to maneuver the imaging system 500 to the appropriate position and orientation to maintain alignment with the viewing target, e.g., the longitudinal axes of the imaging system 500 and the access port being aligned. This alignment is maintained throughout the medical procedure automatically, without requiring explicit control input. In some examples, the operator is able to manually move the positioning system and/or the imaging system 500 , e.g., after actuation of a manual release button.
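At each tracking update, regaining alignment reduces to finding the rotation that carries the imaging system's current longitudinal axis onto the access-port axis; the desired joint positions then follow from the positioning system's inverse kinematics. A standard Rodrigues construction for that rotation is sketched below, as an illustrative geometric step rather than the disclosure's actual controller.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues).

    Sketches how a navigation system could compute the reorientation that
    brings the imaging system's longitudinal axis onto the access-port axis.
    """
    a = np.asarray(a, dtype=float); a /= np.linalg.norm(a)
    b = np.asarray(b, dtype=float); b /= np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):
        # Antiparallel axes: rotate pi about any axis orthogonal to a.
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)
```

Applying this rotation to the imaging system's pose, and re-solving it on every tracking update, is what keeps the two longitudinal axes coincident without explicit control input.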
- the navigation system continues to track the position and orientation of the positioning system and/or the imaging system 500 .
- the navigation system may, e.g., in response to user input, such as using a foot pedal, indicating that manual movement is complete, reposition and reorient the positioning system and the imaging system 500 to regain alignment with the access port.
- the controller 530 uses information about the position and orientation of the imaging system 500 to perform autofocusing. For example, the controller 530 determines the WD between the imaging system 500 and the viewing target and, thus, determines the desired positioning of the focus optics 515 , e.g., using appropriate equations to calculate the appropriate positioning of the focus optics 515 to achieve a focused image, and moves the focus optics 515 , using the focus actuator 525 , in order to bring the image into focus. For example, the position of the viewing target is determined by a navigation system.
- the WD is determined by the controller 530 using information about the position and orientation of the imaging system 500 and/or the positioning system relative to the viewing target, e.g., information received from the navigation system, the positioning system, or another external system.
- the WD is determined by the controller 530 using an infrared light (not shown) mounted near the distal end of the imaging system 500 .
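As a sketch of the "appropriate equations" mentioned above, a single thin lens relates the WD (object distance) to the required image distance via 1/f = 1/do + 1/di. A real optical train would instead use a calibrated WD-to-stage-position lookup; the focal length and function name below are invented for illustration.

```python
def focus_stage_position(working_distance_mm, focal_length_mm=100.0,
                         stage_zero_mm=0.0):
    """Estimate the focus-lens displacement needed to focus at a given
    working distance, using the thin-lens equation 1/f = 1/do + 1/di.

    Returns the image distance relative to the stage zero. Illustrative
    only: a single thin lens stands in for the whole optical train.
    """
    do = working_distance_mm
    f = focal_length_mm
    if do <= f:
        raise ValueError("working distance must exceed the focal length")
    di = 1.0 / (1.0 / f - 1.0 / do)  # image distance behind the lens
    return di - stage_zero_mm
```

With the WD known from tracking, the controller can command the focus actuator directly to this position rather than searching for focus.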
- the controller 530 may perform autofocusing without information about the position and orientation of the imaging system 500 .
- the controller 530 controls the focus actuator 525 to move the focus optics 515 into a range of focus positions and control the camera 535 to capture image data at each focus position.
- the controller 530 may then perform image processing on the captured images to determine which focus position has the sharpest image and determine this focus position to be the desired position of the focus optics 515 .
- the controller 530 then controls the focus actuator 525 to move the focus optics 515 to the desired position.
- Any other autofocus routine is implemented by the controller 530 as appropriate.
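The sweep-and-score routine of the preceding steps can be sketched with a Laplacian-variance sharpness measure; `capture_at` is a hypothetical callback standing in for "command the focus actuator, then grab a frame from the camera 535".

```python
import numpy as np

def sharpness(image):
    """Variance of a discrete Laplacian: higher means sharper edges."""
    img = image.astype(float)
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def autofocus_sweep(capture_at, focus_positions):
    """Move through candidate focus positions, score each captured frame,
    and return the position that yields the sharpest image."""
    scores = {pos: sharpness(capture_at(pos)) for pos in focus_positions}
    return max(scores, key=scores.get)
```

In practice the sweep would be coarse-to-fine rather than exhaustive, but the principle — maximize an image-domain sharpness score over focus positions — is the same.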
- the viewing target is dynamically defined by the surgeon, e.g., using a user interface provided by a workstation, by touching the desired target on a touch-sensitive display, by using eye or head tracking to detect a point at which the surgeon's gaze is focused and/or by voice command; and the imaging system 500 performs autofocusing to dynamically focus the image on the defined viewing target, thereby enabling the surgeon to focus an image on different points within a FoV, without changing the FoV, and without having to manually adjust the focus of the imaging system 500 .
- Autofocusing is performable by way of a surgeon or, alternatively, by way of the controller 530 .
- the imaging system 500 is configured to perform autofocusing relative to an instrument being used in the medical procedure.
- An example of this feature is shown in FIG. 11 .
- when the position and orientation of a medical instrument, such as a tracked pointer tool 222 , are tracked, the controller 530 performs autofocusing to focus the captured image on a point defined relative to the medical instrument.
- the tracked pointer tool 222 has a defined focus point at the distal tip of the pointer 222 .
- the WD between the optical imaging system 500 and the defined focus point (at the distal tip of the tracked pointer tool 222 ) changes, e.g., from D1 in the left image to D2 in the right image.
- the autofocusing is performed in a manner similar to that as above described; however, instead of autofocusing on a viewing target in the surgical field, the imaging system 500 focuses on a focus point that is defined relative to the medical instrument.
- the medical instrument is used in the surgical field to guide the imaging system 500 to autofocus on different points in the surgical field, as below discussed, thereby enabling a surgeon to change the focus within a FoV, e.g., focus on a point other than at the center of the FoV, without changing the FoV, and without needing to manually adjust the focus of the imaging system 500 .
- the surgeon uses the medical instrument, e.g., a pointer, to indicate to the imaging system 500 the object and/or depth desired for autofocusing.
- the controller 530 may receive information about the position and orientation of a medical instrument. This position and orientation information is received from an external source, e.g., from an external system tracking the medical instrument or from the medical instrument itself, or is received from another component of the imaging system 500 , e.g., an infrared sensor or a machine vision component of the imaging system 500 . The controller 530 may determine a focus point relative to the position and orientation of the medical instrument.
- the focus point is predefined for a given medical instrument, e.g., the distal tip of a pointer, the distal end of a catheter, the distal end of an access port, the distal end of a soft tissue resector, the distal end of a suction, the target of a laser, or the distal tip of a scalpel, and is different for different medical instruments.
- the controller 530 may use this information, together with information about the known position and orientation of the imaging system 500 , e.g., determined as discussed above, in order to determine the desired position of the focus optics 515 to achieve an image focused on the focus point defined relative to the medical instrument.
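The computation described above can be sketched as a frame transform: the instrument's predefined focus offset (e.g., its tip, expressed in the tool frame) is mapped into world coordinates through the tracked pose, and the WD follows as the distance to the imaging aperture. Names and numbers are illustrative.

```python
import numpy as np

def instrument_focus_point(R_tool, t_tool, tip_offset):
    """Transform a tool-frame focus offset (e.g., a pointer's distal tip)
    into world coordinates, given the tracked tool pose (R_tool, t_tool)."""
    return R_tool @ np.asarray(tip_offset, dtype=float) + np.asarray(t_tool, dtype=float)

def working_distance(camera_origin, focus_point):
    """Distance from the imaging aperture to the focus point."""
    return float(np.linalg.norm(np.asarray(focus_point, dtype=float)
                                - np.asarray(camera_origin, dtype=float)))
```

The resulting WD can then be fed to whichever focusing method is in use, e.g., a calibrated WD-to-focus-position mapping.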
- in examples where the imaging system 500 is used with a navigation system 205 (see FIG. 2B ), the position and orientation of a medical instrument, e.g., a tracked pointer tool 222 or a tracked port 210 , are tracked and determined by the navigation system 205 .
- the controller 530 of the imaging system 500 automatically autofocuses the imaging system 500 to a predetermined point relative to the tracked medical instrument, e.g., autofocus on the tip of the tracked pointer tool 222 or on the distal end of the access port 210 . Autofocusing is performed relative to other medical instruments and other tools that are used in the medical procedure.
- the imaging system 500 is configured to perform autofocusing relative to a medical instrument only when a determination is made that the focus point relative to the medical instrument is within the FoV of the imaging system 500 , whereby an unintentional change of focus is avoided when a medical instrument is moved in the vicinity of, but outside, the FoV of the imaging system 500 .
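The FoV gating just described can be sketched with a simple angular containment test. The pinhole-style cone model, the half-angle, and all names are assumptions for illustration:

```python
import math

def in_fov(camera_pos, camera_axis, point, half_angle_deg=15.0):
    """True if `point` lies within a cone of `half_angle_deg` about the
    camera's optical axis (an assumed simplification of the real FoV)."""
    v = [p - c for p, c in zip(point, camera_pos)]
    norm_v = math.sqrt(sum(x * x for x in v))
    norm_a = math.sqrt(sum(a * a for a in camera_axis))
    if norm_v == 0:
        return True
    cos_angle = sum(x * a for x, a in zip(v, camera_axis)) / (norm_v * norm_a)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def maybe_autofocus(camera_pos, camera_axis, focus_point, do_focus):
    """Only refocus when the instrument-defined focus point is in the FoV,
    so a tool moved nearby but out of view does not steal focus."""
    if in_fov(camera_pos, camera_axis, focus_point):
        do_focus(focus_point)
        return True
    return False
```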
- the imaging system 500 is mounted on a movable support system, e.g., a robotic arm
- the movable support system positions and orients the imaging system 500 to bring the focus point of the medical instrument within the FoV of the imaging system 500 , in response to input, e.g., in response to user command via a user interface or voice input, or via activation of a foot pedal.
- the imaging system 500 is configured to implement a small time lag before performing autofocus relative to a medical instrument in order to avoid erroneously changing focus while the focus point of the medical instrument is brought into, and out of, the FoV.
- the imaging system 500 is configured to autofocus on the focus point only after the focus point has been substantially stationary for a predetermined length of time, e.g., approximately 0.5 second to approximately 1 second.
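The time-lag behavior above amounts to a dwell-time debounce: autofocus fires only once the focus point has stayed within a small tolerance for the configured interval. A minimal sketch, with the class name, tolerance, and timing API all assumed:

```python
class FocusDebouncer:
    """Fire autofocus only after the focus point has been substantially
    stationary for `dwell_s` seconds (e.g., 0.5 s to 1 s)."""

    def __init__(self, dwell_s=0.5, tol_mm=2.0):
        self.dwell_s = dwell_s
        self.tol_mm = tol_mm
        self._anchor = None    # position where the point last settled
        self._anchor_t = None  # time at which it settled there

    def update(self, point, t):
        """Feed one tracked sample; return True when autofocus should fire."""
        if self._anchor is None or self._dist(point, self._anchor) > self.tol_mm:
            # Point moved beyond tolerance: restart the dwell timer.
            self._anchor, self._anchor_t = point, t
            return False
        return (t - self._anchor_t) >= self.dwell_s

    @staticmethod
    def _dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```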
- the imaging system 500 is also configured to perform zooming with the focus point as the zoom center.
- the user may provide command input, e.g., via a user interface, voice input or activation of a foot pedal, to instruct the imaging system 500 to zoom in on the focus point.
- the controller 530 then positions the zoom optics 520 accordingly to zoom in on the focus point.
- if the imaging system 500 is mounted on a positioning system, the positioning system automatically repositions the imaging system 500 as needed to center the zoomed-in view on the focus point.
- the imaging system 500 automatically changes between different autofocus modes. For example, if the current FoV does not include any focus point defined by a medical instrument, the controller 530 may perform autofocus based on a preset criterion, e.g., to obtain the sharpest image or to focus on the center of the FoV. When a focus point defined by a medical instrument is brought into the FoV, the controller 530 may automatically switch mode to autofocus on the focus point. In some examples, the imaging system 500 changes between different autofocus modes in response to user input, e.g., in response to user command via a user interface, voice input, or activation of a foot pedal. In various examples of autofocusing, whether or not relative to a medical instrument, the imaging system 500 is configured to maintain the focus as the zoom is adjusted.
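The mode switching described above reduces to a small per-frame decision, sketched here with assumed mode names and an assumed precedence rule (user command over automatic selection):

```python
def select_autofocus_mode(focus_point_in_fov, user_override=None):
    """Return the active autofocus mode for the current frame.

    - With no instrument-defined focus point in view: fall back to the
      preset criterion (sharpest image / center of FoV).
    - With a focus point in view: instrument-relative autofocus.
    - A user command (UI, voice, foot pedal) overrides both (assumed rule).
    """
    if user_override is not None:
        return user_override
    return "instrument" if focus_point_in_fov else "preset"
```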
- the imaging system 500 generates a depth map (not shown). This is performed by capturing images of the same FoV while the imaging system 500 focuses on points at a plurality of different depths, to simulate 3D depth perception.
- the imaging system 500 performs autofocusing through a predefined depth range, e.g., through a depth of approximately 1 cm, and capturing focused images at a plurality of distinct or different depths, e.g., at increments of approximately 1 mm, through a depth range, e.g., the predefined depth range.
- the plurality of images captured at the corresponding plurality of different depths is transmitted to an external system, e.g., an image viewing workstation, wherein the plurality of images is aggregated into a set of depth images to form a depth map for the same FoV.
- the depth map provides focused views of the FoV, at different depths, and comprises contours, color-coding, and/or other indicators of different depths.
- the external system (not shown) provides a user interface (not shown) that allows a user to navigate through the depth map.
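The focus-sweep aggregation above (capture at ~1 mm increments through a ~1 cm range, then combine into one depth map) resembles classic depth-from-focus. A minimal sketch on 2D grayscale grids; the gradient-energy sharpness measure and all names are assumptions, not the patent's method:

```python
def sharpness(img, r, c):
    """Gradient-energy sharpness at one pixel of a 2D grayscale grid."""
    gx = img[r][c + 1] - img[r][c] if c + 1 < len(img[0]) else 0
    gy = img[r + 1][c] - img[r][c] if r + 1 < len(img) else 0
    return gx * gx + gy * gy

def depth_map(stack):
    """stack: list of (depth_mm, image) pairs captured for the same FoV.
    For each pixel, keep the depth at which that pixel was sharpest."""
    rows, cols = len(stack[0][1]), len(stack[0][1][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best = max(stack, key=lambda d_img: sharpness(d_img[1], r, c))
            out[r][c] = best[0]
    return out
```

The resulting per-pixel depths could then be rendered with contours or color-coding, as the text describes.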
- the optical imaging system 500 could be configured with a relatively large DoF.
- the 3D scanner 545 is used to create a depth map of the viewed area; and the depth map is registered to the image captured by the camera 535 .
- Image processing is performed, e.g., using the controller 530 or an external processor, to generate a pseudo 3D image, for example by visually encoding, e.g., using color, artificial blurring, or other visual symbols, different parts of the captured image according to the depth information from the 3D scanner 545 .
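As one concrete (assumed) instance of the visual encoding just described, depth from the 3D scanner 545 can be mapped to a color tint, e.g., near pixels warm and far pixels cool; the ramp endpoints and names are illustrative only:

```python
def pseudo_3d_tint(depth_mm, d_near=200.0, d_far=400.0):
    """Map a scanner depth to an (r, g, b) tint: near -> red, far -> blue.
    Depths outside [d_near, d_far] are clamped."""
    t = (min(max(depth_mm, d_near), d_far) - d_near) / (d_far - d_near)
    return (int(255 * (1 - t)), 0, int(255 * t))
```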
- FIGS. 7 and 8 illustrate, in alternate perspective views, an example embodiment of the imaging system 500 , in accordance with the present disclosure.
- the imaging system 500 is shown mounted to the positioning system 208 , e.g., a robotic arm, of a navigation system.
- the imaging system 500 is shown with a housing 555 that encloses the zoom and focus optics, the zoom and focus actuators, the camera, the controller, and the 3D scanner.
- the housing is provided with a frame 560 on which trackable markers are mounted to enable tracking by the navigation system.
- the imaging system 500 communicates with the navigation system via a cable 565 (cutaway view in FIG. 8 ).
- the distal end of the imaging system 500 is provided with light sources 570 .
- the example shows four broad spectrum LEDs; however, more or fewer light sources, of any suitable type, can be used.
- while the light sources 570 are shown surrounding the aperture 553 of the imaging system 500 , in other examples, the light source(s) 570 is located elsewhere on the imaging system 500 .
- the distal end of the imaging system 500 further has openings 575 for the cameras of the integrated 3D scanner.
- a support connector 580 for mounting the imaging system 500 to the positioning system 208 is also shown, as well as the frame 560 for mounting trackable markers.
- this flow diagram illustrates an example method 900 of autofocusing during a medical procedure, in accordance with an embodiment of the present disclosure.
- the example method 900 is performed by way of an example optical imaging system, as disclosed herein.
- the method 900 comprises: determining the position and orientation of the imaging system, as indicated by block 905 , wherein determining the position and orientation of the imaging system is performed by tracking the imaging system, by performing calibration, or by tracking the positioning system on which the imaging system is mounted, for example; determining the WD between the imaging system and the imaging target, as indicated by block 910 , e.g., wherein determining the position of the imaging target is performed by a navigation system, and wherein information relating to the position of the imaging target is used together with the position and orientation information of the imaging system to determine the WD; determining the desired position of the focus optics in order to achieve a focused image, as indicated by block 915 ; and controlling the focus actuator, e.g., by a controller of the imaging system, to position the focus optics at the desired position.
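The block sequence of method 900 can be sketched as a small pipeline. Every callable here is a hypothetical stand-in for the tracking, navigation, and optics layers; none of these names comes from the patent:

```python
def autofocus_method_900(get_camera_pose, get_target_pos,
                         wd_to_focus, move_focus_optics):
    """One pass of method 900 (names assumed for illustration)."""
    cam_pos, cam_orient = get_camera_pose()   # block 905: system pose
    target = get_target_pos()                 # block 910: target position
    # block 910 (cont.): WD from the two positions
    wd = sum((t - c) ** 2 for t, c in zip(target, cam_pos)) ** 0.5
    focus_pos = wd_to_focus(wd)               # block 915: desired optics position
    move_focus_optics(focus_pos)              # final block: drive focus actuator
    return wd, focus_pos
```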
- this flow diagram illustrates an example method 1000 of autofocusing relative to a medical instrument during a medical procedure, in accordance with an embodiment of the present disclosure.
- the example method 1000 is performable using an example optical imaging system as disclosed herein.
- the example method 1000 is similar to the example method 900 .
- the example method 1000 comprises: determining the position and orientation of the imaging system, as indicated by block 1005 , wherein determining the position and orientation of the imaging system is performable by tracking the imaging system, by performing calibration, or by tracking the positioning system on which the imaging system is mounted, for example; determining the position and orientation of the medical instrument, as indicated by block 1010 , wherein determining the position and orientation of the medical instrument is performed by tracking the medical instrument, e.g., using a navigation system, by sensing the medical instrument, e.g., using an infrared or machine vision component of the imaging system, or by any other suitable techniques; determining the focus point relative to the medical instrument, as indicated by block 1015 , wherein determining the focus point comprises looking-up preset definitions, e.g., stored in a database, of focus points for different medical instruments, and calculating the focus point for the particular medical instrument being used; determining the WD between the imaging system and the focus point, as indicated by block 1020 ; determining the desired position of the focus optics in order to achieve an image focused on the focus point; and controlling the focus actuator to position the focus optics at the desired position.
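The look-up step of method 1000 (block 1015) can be sketched as a table mapping instrument types to focus-point offsets from the tracked tip; the table entries, the offsets, and the function names are illustrative assumptions:

```python
# Assumed preset definitions: offset (mm) of the focus point beyond the
# tracked tip, along the tool axis. A real system would store these in a
# database keyed by instrument type.
FOCUS_POINT_DEFS = {
    "pointer": 0.0,      # focus at the distal tip
    "catheter": 0.0,     # focus at the distal end
    "access_port": 0.0,  # focus at the distal end
    "laser": 20.0,       # focus at the target beyond the emitter (assumed)
}

def focus_point_for(instrument, tip_pos, tip_dir):
    """Calculate the focus point for the instrument being used."""
    offset = FOCUS_POINT_DEFS.get(instrument, 0.0)
    n = sum(d * d for d in tip_dir) ** 0.5
    return tuple(p + offset * d / n for p, d in zip(tip_pos, tip_dir))
```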
- this set of diagrams illustrates, in perspective views, some examples of the imaging system 500 configured to perform autofocusing relative to an instrument used in the medical procedure, in accordance with an embodiment of the present disclosure.
- for example, the position and orientation of a medical instrument, such as a tracked pointer tool 222 , is determined; and the controller 530 performs autofocusing to focus the captured image on a point defined relative to the medical instrument.
- the tracked pointer tool 222 has a defined focus point at the distal tip of the pointer 222 .
- the WD between the optical imaging system 500 and the defined focus point (at the distal tip of the tracked pointer tool 222 ) changes (from D 1 in the left image to D 2 in the right image, for example).
- the autofocusing is performed in a manner similar to that described above; however, instead of autofocusing on a viewing target in the surgical field, the imaging system 500 focuses on a focus point that is defined relative to the medical instrument.
- the medical instrument is used in the surgical field to guide the imaging system 500 to autofocus on different points in the surgical field, as below discussed, thereby enabling a surgeon to change the focus within a FoV, e.g., focus on a point other than at the center of the FoV, without changing the FoV, and without needing to manually adjust the focus of the imaging system 500 .
- the surgeon uses the medical instrument, e.g., a pointer, to indicate to the imaging system 500 the object and/or depth desired for autofocusing.
- the example methods 900 , 1000 described above are entirely performable by the controller of the imaging system, or are partly performed by the controller and partly performed by an external system. For example, one or more of: determining the position/orientation of the imaging system, determining the position/orientation of the imaging target or medical instrument, determining the WD, or determining the desired position of the focus optics is performed by one or more external systems.
- the controller of the imaging system may simply receive commands, from the external system(s) to position the focus optics at the desired position, or the controller of the imaging system may determine the desired position of the focus optics after receiving the calculated WD from the external system(s).
- FIG. 12A through FIG. 12C illustrate, in perspective views, a surgeon hand H s operating a 3D navigation system 1200 in relation to an interrogation volume V i , comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure.
- the at least one proprioception feature comprises at least one communication feature for providing 3D information (including depth information) to the surgeon.
- the at least one communication feature comprises at least one of at least one active tool 140 , such as a tracked tool, above-discussed, at least one camera (not shown), and software (not shown) for generating a 3D perception, e.g., by providing a combination of perceivable signals, the perceivable signals relating to at least one sense, such as touch (haptic feedback), e.g., a vibration, vision (visual cues), e.g., light indicators, and sound (audio cues), e.g., a beeping sound.
- the perceivable signal combination comprises at least two perceivable signals, e.g., providing a plurality of sensory inputs in combination with 3D feedback (beyond the visual cues), readily perceivable by a surgeon.
- the systems and methods use audio-haptic, visual-acoustic, or any combination of visual, haptic, and acoustic feedback, signals, or cues to provide a surgeon with a depth indication in relation to each 2D view of a scene, e.g., in an interrogation volume.
- the systems and methods use acoustic feedback comprising a periodic beep along a distance from a given surface, wherein the periodic beep comprises a reducing period as a function of the tool, e.g., the active tool 140 , traveling from the given surface 800 toward a patient, an anatomical target 141 , or a tissue intended for resection (not shown), and wherein the period approaches zero at a point where the tool, e.g., the active tool 140 , touches the patient, e.g., at the given surface 800 , the anatomical target 141 , or the tissue intended for resection (not shown).
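The distance-to-period mapping described above can be sketched as a single function; the linear mapping and the range constants are assumptions, the essential property being that the period shrinks with distance and reaches zero at contact:

```python
def beep_period_s(distance_mm, max_distance_mm=50.0, max_period_s=1.0):
    """Period of the audio cue as a function of tool-to-target distance.

    At `max_distance_mm` or beyond the beep is slowest; as the tool travels
    toward the surface the period shrinks, reaching 0 s (a continuous tone)
    at the moment of contact.
    """
    d = min(max(distance_mm, 0.0), max_distance_mm)
    return max_period_s * d / max_distance_mm
```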
- the 3D navigation system 1200 of the present disclosure is configured to provide depth information to a surgeon in the absence of stereo imaging.
- this diagram illustrates a perspective view of a surgeon hand H s operating a 3D navigation system 1200 in relation to an interrogation volume V i , comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure.
- a surgeon working in a surgical field or an interrogation volume V i containing “white” matter W and vasculature R of the patient.
- the “black box” or the interrogation volume V i may represent a port, a portion of the patient anatomy, or other structure defining or containing internal anatomical or resectable parts.
- the surgeon defines a plane within the interrogation volume V i or the reference frame by indicating either a point or a number of points on the anatomical parts or other structure intended for use as “landmarks” or “barriers” to facilitate accurately determining positions thereof.
- tracking of the tracked tool 140 e.g., via the tracked pointer tool 142 , is performable by at least one technique, such as sonar tracking, ultrasonic tracking, and optical tracking.
- this diagram illustrates a perspective view of a surgeon hand H s operating a 3D navigation system 1200 in relation to an interrogation volume V i , comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure.
- the surgeon defines a plane, such as the reference plane 800 , in accordance with an embodiment of the present disclosure.
- the reference plane 800 defines a “zero” point by which location or depth of landmarks, barriers, or targets and their relative positions are determinable.
- either a reference plane or a reference volume is definable, e.g., wherein frequency is 3D location-dependent.
- this diagram illustrates a perspective view of a surgeon hand H s operating a 3D navigation system 1200 in relation to an interrogation volume V i , comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure.
- the 3D navigation system 1200 comprises at least one communication feature, such as an audio sensory device and a visual sensory device, for providing 3D information (including depth information) to the surgeon.
- the surgeon has a 2D view of the surgical field of interest, but the surgeon also has a depth cue provided by a periodic or persistent beep indicating a position P of the tracked pointer tool 142 relative to an intraoperatively defined plane, such as the reference plane 800 , wherein the position P is defined by coordinates x, y, and z as related to both the reference plane 800 and the boundaries of the interrogation volume V i .
- when the surgeon moves the active tool 140 to another plane at a location L 2 , the audio sensory device emits a correspondingly different audible cue.
- the 3D navigation system 1200 comprises a dull pressure spring (not shown) for indicating a distance from the reference plane 800 at a location L 1 based on a pressure experienced by the spring.
- the active tool 140 is embeddable with an arrangement of strip light-emitting diodes (LEDs), e.g., lengthwise embeddable, the arrangement, e.g., of activated LEDs, configured to shorten and lengthen based on the distance from the reference plane 800 at a location L 1 .
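The LED-strip cue above amounts to lighting a number of LEDs proportional to the distance from the reference plane. A minimal sketch, with the strip length, range, and linear scaling all assumed:

```python
def leds_lit(distance_mm, num_leds=10, max_distance_mm=100.0):
    """Number of activated LEDs along the tool: the lit segment lengthens
    with distance from the reference plane and shortens on approach."""
    d = min(max(distance_mm, 0.0), max_distance_mm)
    return round(num_leds * d / max_distance_mm)
```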
- the location L 1 of the reference plane 800 is importable into the 3D navigation system 1200 , e.g., via a user interface (UI) (not shown) for further assisting the surgeon.
- this diagram illustrates, in a perspective view, an optical imaging system 500 ′ using a 3D navigation system 1200 , capable of enhanced autofocusing relative to a medical instrument, e.g., a tracked pointer tool 222 , in accordance with an alternative embodiment of the present disclosure.
- the imaging system 500 ′ is configured to perform enhanced autofocusing relative to an instrument, e.g., a tracked pointer tool 222 , used in the medical procedure, by way of example only. For example, the position and orientation of a medical instrument, such as a tracked pointer tool 222 , is determined; and the controller 530 performs enhanced autofocusing to focus the captured image on a point defined relative to the medical instrument.
- the optical imaging system 500 ′ comprises an optical imaging assembly and at least one detector operable with the optical imaging assembly.
- the at least one detector of the optical imaging assembly comprises at least one of a single camera system and a dual camera system (not shown).
- the tracked pointer tool 222 has a defined focus point at the distal tip of the tracked pointer tool 222 .
- the WD between the optical imaging system 500 ′ and the defined focus point changes (from D 1 in the left image to D 2 in the right image, for example).
- the enhanced autofocusing is performed in a manner similar to that described above; however, instead of autofocusing on a viewing target in the surgical field, the optical imaging system 500 ′ focuses on a focus point that is defined relative to the medical instrument.
- the medical instrument is used in the surgical field to guide the optical imaging system 500 ′ to autofocus on different points in the surgical field, as below discussed, thereby enabling a surgeon to change the focus within a FoV, e.g., focus on a point other than at the center of the FoV, without changing the FoV, and without needing to manually adjust the focus of the optical imaging system 500 ′.
- the surgeon uses the medical instrument, e.g., a pointer, to indicate to the optical imaging system 500 ′ the object and/or depth desired for enhanced autofocusing.
- the optical imaging system 500 ′ is configured to use a method of enhanced autofocusing, e.g., by way of the 3D navigation system 1200 .
- the optical imaging system 500 ′ comprises at least one of: (a) a single array of detectors, such as a plurality of video cameras, (b) a pair of detectors, such as in a video loop configuration and a pair of video cameras, (c) a pair of detectors capable of stereovision, (d) two detectors, wherein each detector comprises at least one of a distinct resolution and a distinct color, and whereby differentiation between each view of a stereoscopic view is enabled, (e) a device configured to render an image on a display, for updating the image on the display, and for tracking a tip of a tool, and (f) a sensory device configured to detect a plurality of sensory input signals, analyze the plurality of sensory input signals, translate or transform the plurality of sensory input signals into a plurality of sensory output signals, and transmit the plurality of sensory output signals.
- the optical imaging system 500 ′ comprises two detectors for achieving a stereoscopic view, e.g., an inferring view using two detectors
- 3D navigation is achievable, e.g., via virtual 3D navigation, wherein a tool tip is viewable relative to an image rendered on a display
- the plurality of sensory output signals comprises a visual feedback and a haptic feedback
- the haptic feedback provides a sense of feel, whereby the sense of feel provides a surgeon with a sense of three-dimensionality.
- the sensory device comprises four sensors, for example, to enhance the haptic feedback provided to the surgeon.
- the tool itself is “active,” wherein the plurality of sensory output signals may emanate from the tool itself; the active tool, thus, comprises the sensory device.
- the sensory device further comprises at least one visual indicator, such as at least one light indicator, the at least one visual indicator activable when the tool approaches a target or a barrier, e.g., in response to sensing proximity thereto.
- the haptic feedback comprises a vibration, for example, emanating from the tool itself, whereby the sense of feel is immediate. At least one of the visual feedback, the audio feedback, and the haptic feedback further comprises at least one of variable amplitude and variable frequency for providing the surgeon with an indication as to an appropriate degree of contact with the tissue.
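The variable-amplitude, variable-frequency cue just described can be sketched as a mapping from a normalized contact pressure to the feedback signal parameters; the linear mappings and frequency range are illustrative assumptions only:

```python
def feedback_signal(pressure_norm, f_min_hz=50.0, f_max_hz=250.0):
    """Map normalized contact pressure in [0, 1] (0 = no contact,
    1 = maximum allowed contact) to (amplitude, frequency_hz) of the
    vibration or audio cue, so the surgeon senses the degree of contact."""
    p = min(max(pressure_norm, 0.0), 1.0)
    amplitude = p                                   # stronger with more contact
    frequency = f_min_hz + p * (f_max_hz - f_min_hz)
    return amplitude, frequency
```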
- the optical imaging system 500 ′ using the 3D navigation system 1200 , utilizes tools and sensors, such as two detectors disposed in relation to a device positioning system (DPS), e.g., a drive system comprising a robotic arm, for providing and enhancing 3D navigation.
- DPS device positioning system
- the optical imaging system 500 ′, using the 3D navigation system 1200 integrates the foregoing features.
- a 3D navigation system 1200 for enhancing feedback during a medical procedure comprises: an optical imaging system comprising: an optical assembly comprising movable zoom optics and movable focus optics; a zoom actuator for positioning the zoom optics; a focus actuator for positioning the focus optics; a controller for controlling the zoom actuator and the focus actuator in response to received control input; at least one detector for capturing an image of at least one of a target and an obstacle, the at least one detector operable with the optical assembly; and a proprioception feature operable with the optical imaging system for generating a 3D perception, the proprioception feature comprising a communication feature for providing 3D information, the 3D information comprising real-time depth information in relation to real-time information, such as real-time planar information and real-time volumetric information, in relation to an interrogation volume, the zoom optics and the focus optics independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and the optical imaging system configured to operate with the proprioception feature.
- the three-dimensional feedback, e.g., touch, sight, and sound feedback, is used in conjunction with sensed information as a function of the three-dimensional spatial coordinates, e.g., x, y, and z coordinates.
- this flow diagram illustrates a method M 1 of fabricating a 3D navigation system 1200 for enhancing feedback during a medical procedure, in accordance with an embodiment of the present disclosure.
- the method M 1 comprises: providing an optical imaging system, as indicated by block 1401 , providing the optical imaging system comprising: providing an optical assembly, as indicated by block 1402 , providing the optical assembly comprising providing movable zoom optics and providing movable focus optics, as indicated by block 1403 ; providing a zoom actuator for positioning the zoom optics, as indicated by block 1404 ; providing a focus actuator for positioning the focus optics, as indicated by block 1405 ; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input, as indicated by block 1406 ; providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly, as indicated by block 1407 ; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception.
- this flow diagram illustrates a method M 2 of enhancing feedback during a medical procedure by way of a 3D navigation system 1200 , in accordance with an embodiment of the present disclosure.
- the method M 2 comprises: providing the 3D navigation system, as indicated by block 1500 , providing the 3D navigation system comprising: providing an optical imaging system, as indicated by block 1501 , providing the optical imaging system comprising: providing an optical assembly, as indicated by block 1502 , providing the optical assembly comprising providing movable zoom optics and providing movable focus optics, as indicated by block 1503 ; providing a zoom actuator for positioning the zoom optics, as indicated by block 1504 ; providing a focus actuator for positioning the focus optics, as indicated by block 1505 ; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input, as indicated by block 1506 ; providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception.
- At least some aspects disclosed are embodied, at least in part, in software. That is, some disclosed techniques and methods are carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
- a computer readable storage medium is used to store software and data which when executed by a data processing system causes the system to perform various methods or techniques of the present disclosure.
- the executable software and data is stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data are stored in any one of these storage devices.
- Examples of computer-readable storage media may include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media, e.g., compact discs (CDs), digital versatile disks (DVDs), etc., among others.
- the instructions can be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, and the like.
- the storage medium is the internet cloud, or a computer readable storage medium such as a disc.
- the methods described herein are capable of being distributed in a computer program product comprising a computer readable medium that bears computer usable instructions for execution by one or more processors, to perform aspects of the methods described.
- the medium is provided in various forms such as, but not limited to, one or more diskettes, compact disks, tapes, chips, USB keys, external hard drives, wire-line transmissions, satellite transmissions, internet transmissions or downloads, magnetic and electronic storage media, digital and analog signals, and the like.
- the computer usable instructions may also be in various forms, including compiled and non-compiled code.
- At least some of the elements of the systems described herein are implemented by software, or a combination of software and hardware.
- Elements of the system that are implemented via software are written in a high-level programming language, such as an object-oriented or a scripting language. Accordingly, the program code is written in C, C++, J++, or any other suitable programming language and may comprise modules or classes, as is known to those skilled in object oriented programming.
- At least some of the elements of the system that are implemented via software are written in assembly language, machine language or firmware as needed.
- the program code can be stored on storage media or on a computer readable medium that is readable by a general or special purpose programmable computing device having a processor, an operating system and the associated hardware and software that is necessary to implement the functionality of at least one of the embodiments described herein.
- the program code when read by the computing device, configures the computing device to operate in a new, specific and predefined manner in order to perform at least one of the methods described herein.
- the present disclosure industrially applies to optical imaging systems. More particularly, the present disclosure industrially applies to optical imaging systems for use in image guided medical procedures. Even more particularly, the present disclosure industrially applies to optical imaging systems for use in image guided medical procedures involving a pointer tool.
Description
- This document is a continuation application claiming the benefit of, and priority to, U.S. patent application Ser. No. 16/346,498, filed on Apr. 30, 2019, entitled “3D NAVIGATION SYSTEM AND METHODS,” and International Patent Application No. PCT/CA2016/051264, filed Oct. 31, 2016, entitled “3D NAVIGATION SYSTEM AND METHODS,” all of which are hereby incorporated by reference in their entirety.
- Generally, the present disclosure technically relates to optical imaging systems. More particularly, the present disclosure technically relates to optical imaging systems for use in image guided medical procedures. Even more particularly, the present disclosure technically relates to optical imaging systems for use in image guided medical procedures involving a pointer tool.
- In the related art, conventional surgical microscopes are often used during surgical procedures to provide a detailed or magnified view of the surgical site. In some cases, separate narrow field and wide field scopes are used within the same surgical procedure to obtain image views with different zoom ranges. Often, adjusting the zoom and focus of such a related art surgical microscope requires the user, e.g., a surgeon, to manually adjust the optics of the microscope, which is difficult, time-consuming, and frustrating, particularly during a surgical procedure.
- Further, related art image capture cameras and light sources are components that are separate from the related art surgical microscope. Typically, the specific camera and light source used with a given conventional surgical microscope are different for different medical centers and even for different surgical procedures within the same medical center. This circumstance results in an inconsistency in the images obtained, wherein comparing images between different medical centers is difficult or impossible.
- In related art surgical navigation, differences exist between conventional stereoscopic optical chains and video telescopic microscopy optical chains, e.g., in the mechanisms used for generating 3-dimensional (3D) perception at high magnification. However, such differences usually require substantial human correction in an attempt to gauge a target location in the depth dimension. Over the previous decade, many related art surgical systems have not included any 3D perception features, at least because 3D perception has been believed to be a barrier to endoscopic surgery, e.g., endonasal surgery, in the related art.
- In addition, various related art navigation devices are used, such as a white probing stick for visually-challenged persons that provides feedback in the form of a sound via echo location, two ultrasonic stereoscopic scanners that translate readings into an audio tone, and a motor vehicle backup camera system, wherein an audible sound or an indicator light is produced for collision warning. However, these related art devices do not address challenges in the area of surgical navigation.
- As such, related art navigation systems have experienced many challenges, including difficulty in accurately providing a surgeon with sufficient feedback relating to target depth when performing navigated surgery using only stereo imaging, as well as surgeon eye strain. Therefore, a need exists for a navigation system that improves both planar and depth perception in relation to a surgical interrogation volume to overcome many of the related art challenges.
- In addressing at least many of the challenges experienced in the related art, the subject matter of the present disclosure involves systems and methods which consider 3D perception to be an operator's ability to generate the relative positional sense (RPS) of objects located within a given interrogation volume. Multiple mechanisms exist for generating 3D perception, wherein binocular vision is an important and powerful tactic. The perception of the relative position of two objects is also achieved and enhanced through the use of proprioception, shadowing, sound, as well as other factors, whereby all such factors synergistically interact, in accordance with embodiments of the present disclosure. The 3D navigation systems and methods of the present disclosure involve features for acquiring data from vision, touch, and sound, e.g., via a tracked tool; translating the data into a usable form for a surgeon; and presenting information, based on the translated data, to the surgeon, wherein the information comprises 3D information related to at least two of three senses, e.g., vision, touch, and sound, and wherein the information is applicable to a particular context of use, e.g., a surgical context.
- In some examples, the present disclosure provides an optical imaging system for imaging a target during a medical procedure. The system includes: an optical assembly including movable zoom optics and movable focus optics; a zoom actuator for positioning the zoom optics; a focus actuator for positioning the focus optics; a controller for controlling the zoom actuator and the focus actuator in response to received control input; and a camera for capturing an image of the target from the optical assembly, wherein the zoom optics and the focus optics are independently movable by the controller using the zoom actuator and the focus actuator, respectively, and wherein the optical imaging system is configured to operate at a minimum working distance (WD) from the target, the WD being defined between an aperture of the optical assembly and the target.
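- The independent zoom and focus control described above can be sketched in software. The following is a minimal, hypothetical illustration only, not an implementation from the present disclosure: a controller clamps each actuator command to an assumed travel range and accepts commands only at or beyond an assumed minimum working distance (WD). All class names, travel ranges, and the minimum WD value are illustrative assumptions.

```python
# Hypothetical sketch: independent zoom/focus actuator control with a
# minimum-working-distance check. Ranges and names are assumptions.

class OpticsController:
    MIN_WD_MM = 250           # assumed minimum working distance, in mm
    ZOOM_RANGE = (0, 5000)    # assumed actuator travel, in steps
    FOCUS_RANGE = (0, 4000)   # assumed actuator travel, in steps

    def __init__(self):
        self.zoom_pos = 0
        self.focus_pos = 0

    @staticmethod
    def _clamp(value, lo, hi):
        return max(lo, min(hi, value))

    def handle_input(self, wd_mm, zoom_cmd=None, focus_cmd=None):
        """Move zoom and/or focus independently in response to control input."""
        if wd_mm < self.MIN_WD_MM:
            raise ValueError("target closer than minimum working distance")
        if zoom_cmd is not None:
            self.zoom_pos = self._clamp(zoom_cmd, *self.ZOOM_RANGE)
        if focus_cmd is not None:
            self.focus_pos = self._clamp(focus_cmd, *self.FOCUS_RANGE)
        return self.zoom_pos, self.focus_pos
```

In this sketch, a command beyond the assumed travel range is clamped to the range limit rather than rejected, so the optics remain within their mechanical stops while zoom and focus stay independently addressable.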
- In some examples, the present disclosure provides a processor for controlling the optical imaging system disclosed herein. The processor is configured to: provide a user interface to receive control input, via an input device coupled to the processor, for controlling the zoom actuator and the focus actuator; transmit control instructions to the controller of the optical imaging system to adjust zoom and focus in accordance with the control input; and receive image data from the camera for outputting to an output device coupled to the processor.
- In some examples, the present disclosure provides a system for optical imaging during a medical procedure. The system comprises: the optical imaging system disclosed herein; a positioning system for positioning the optical imaging system; and a navigation system for tracking each of the optical imaging system and the positioning system relative to the target.
- In some examples, the present disclosure provides a method of autofocusing using an optical imaging system during a medical procedure, the optical imaging system comprising motorized focus optics and a controller for positioning the focus optics. The method includes: determining a WD between an imaging target and an aperture of the optical imaging system; determining a desired position of the focus optics based on the WD; and positioning the focus optics at the desired position.
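- The autofocus steps described above (determine the WD, determine a desired focus-optics position based on the WD, and position the focus optics) can be sketched as follows. This is a hedged, illustrative sketch: the calibration table mapping WD to a focus-actuator position, and all function names, are assumptions for illustration rather than values from the present disclosure.

```python
# Hypothetical sketch of the three autofocus steps: measure the working
# distance (WD), interpolate a focus-optics position from an assumed
# calibration table, and command the focus actuator.

import bisect

# Illustrative calibration: (working distance in mm, focus position in steps)
CALIBRATION = [(250, 0), (300, 1200), (350, 2100), (400, 2800), (450, 3300)]


def focus_position_for_wd(wd_mm):
    """Linearly interpolate the focus-optics position for a given WD."""
    wds = [w for w, _ in CALIBRATION]
    if wd_mm <= wds[0]:
        return CALIBRATION[0][1]
    if wd_mm >= wds[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_left(wds, wd_mm)
    (w0, p0), (w1, p1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (wd_mm - w0) / (w1 - w0)
    return p0 + t * (p1 - p0)


def autofocus(measured_wd_mm, move_focus_actuator):
    """Run the autofocus steps; returns the commanded focus position."""
    target = focus_position_for_wd(measured_wd_mm)  # step 2: desired position
    move_focus_actuator(target)                     # step 3: position optics
    return target
```

In practice the WD might come from navigation tracking or a range sensor, and the calibration table would be measured for the specific optical assembly; both are outside the scope of this sketch.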
- In accordance with an embodiment of the present disclosure, a 3D navigation system for enhancing feedback during a medical procedure comprises: an optical imaging system comprising: an optical assembly comprising movable zoom optics and movable focus optics; a zoom actuator for positioning the zoom optics; a focus actuator for positioning the focus optics; a controller for controlling the zoom actuator and the focus actuator in response to received control input; at least one detector for capturing an image of at least one of a target and an obstacle, the at least one detector operable with the optical assembly; and a proprioception feature operable with the optical imaging system for generating a 3D perception, the proprioception feature comprising a communication feature for providing 3D information, the 3D information comprising real-time depth information in relation to real-time planar information in relation to an interrogation volume, the zoom optics and the focus optics independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and the optical imaging system configured to operate at a minimum WD from at least one of the target and the obstacle, the WD defined between an aperture of the optical assembly and at least one of the target and the obstacle, whereby feedback during the medical procedure is enhanceable. The obstacle may be an anatomical structure or any other structure, such as a surgical tool, a synthetic anatomical structure, an implanted structure, a transplanted structure, a grafted structure, and the like, by example only.
- In accordance with an embodiment of the present disclosure, a method of fabricating a 3D navigation system for enhancing feedback during a medical procedure comprises: providing an optical imaging system, providing the optical imaging system comprising: providing an optical assembly comprising providing movable zoom optics and providing movable focus optics; providing a zoom actuator for positioning the zoom optics; providing a focus actuator for positioning the focus optics; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input; providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception, providing the proprioception feature comprising providing a communication feature configured to provide 3D information, the 3D information comprising real-time depth information in relation to real-time planar information in relation to an interrogation volume, providing the zoom optics and providing the focus optics comprising providing the zoom optics and providing the focus optics as independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and providing the optical imaging system comprising configuring the optical imaging system to operate at a minimum WD from at least one of the target and the obstacle, the WD defined between an aperture of the optical assembly and at least one of the target and the obstacle, whereby feedback during the medical procedure is enhanceable.
- In accordance with an embodiment of the present disclosure, a method enhancing feedback during a medical procedure by way of a 3D navigation system comprises: providing the 3D navigation system, providing the 3D navigation system comprising: providing an optical imaging system, providing the optical imaging system comprising: providing an optical assembly comprising providing movable zoom optics and providing movable focus optics; providing a zoom actuator for positioning the zoom optics; providing a focus actuator for positioning the focus optics; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input;
- providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception, providing the proprioception feature comprising providing a communication feature for providing 3D information, the 3D information comprising real-time depth information in relation to real-time planar information in relation to an interrogation volume, providing the zoom optics and providing the focus optics comprising providing the zoom optics and providing the focus optics as independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and providing the optical imaging system comprising configuring the optical imaging system to operate at a minimum WD from at least one of the target and the obstacle, the WD defined between an aperture of the optical assembly and at least one of the target and the obstacle, wherein providing the communication feature comprises providing at least one sensory input device and providing at least one sensory output device, and wherein providing the communication feature comprises providing the communication feature as operable by way of a set of executable instructions storable on a nontransitory memory device; receiving at least one input signal by the at least one sensory input device; and providing at least one output signal by the at least one sensory output device, thereby enhancing feedback during the medical procedure.
- Some of the features in the present disclosure are broadly outlined in order that the section entitled Detailed Description is better understood and that the present contribution to the art by the present disclosure is better appreciated. Additional features of the present disclosure are described hereinafter. In this respect, understood is that the present disclosure is not limited in its application to the details of the components or steps set forth herein or as illustrated in the several figures of the Drawing, but is capable of being carried out in various ways which are also encompassed by the present disclosure. Also, understood is that the phraseology and terminology employed herein are for illustrative purposes in the description and should not be regarded as limiting.
- The above, and other aspects, features, and advantages of several embodiments of the present disclosure will be more apparent from the following Detailed Description as presented in conjunction with the following several figures of the Drawing.
- FIG. 1 is a diagram illustrating a perspective view of an access port inserted into a human brain, for providing access to internal brain tissue during an example medical procedure, in accordance with an embodiment of the present disclosure.
- FIG. 2A is a diagram illustrating a perspective view of an example navigation system to support image guided surgery, in accordance with an embodiment of the present disclosure.
- FIG. 2B is a diagram illustrating a front view of system components of an example navigation system, in accordance with an embodiment of the present disclosure.
- FIG. 3 is a block diagram illustrating an example control and processing system usable with the example navigation systems, as shown in FIGS. 2A and 2B, in accordance with an embodiment of the present disclosure.
- FIG. 4A is a flow diagram illustrating an example method involving a surgical procedure implementable using the example navigation systems, as shown in FIGS. 2A and 2B, in accordance with an embodiment of the present disclosure.
- FIG. 4B is a flow diagram illustrating an example method of registering a patient for a surgical procedure, as shown in FIG. 4A, in accordance with an embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating a perspective view of an example optical imaging system being used during a medical procedure, in accordance with an embodiment of the present disclosure.
- FIG. 6 is a block diagram illustrating an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 7 is a diagram illustrating a perspective view of an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 8 is a diagram illustrating an alternate perspective view of the example optical imaging system, as shown in FIG. 7, in accordance with an embodiment of the present disclosure.
- FIG. 9 is a flow diagram illustrating an example method of autofocusing using an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 10 is a flow diagram illustrating an example method of autofocusing relative to a medical instrument, using an example optical imaging system, in accordance with an embodiment of the present disclosure.
- FIG. 11 is a set of diagrams illustrating perspective views of an optical imaging system using a method of autofocusing relative to a medical instrument, in accordance with an embodiment of the present disclosure.
- FIG. 12A is a diagram illustrating a perspective view of a 3D navigation system, in operation, in accordance with an embodiment of the present disclosure.
- FIG. 12B is a diagram illustrating a perspective view of a 3D navigation system, in operation, as shown in FIG. 12A, in accordance with an embodiment of the present disclosure.
- FIG. 12C is a diagram illustrating a perspective view of a 3D navigation system, in operation, as shown in FIG. 12B, in accordance with an embodiment of the present disclosure.
- FIG. 13 is a set of diagrams illustrating perspective views of an optical imaging system, using a 3D navigation system, in accordance with an alternative embodiment of the present disclosure.
- FIG. 14 is a flow diagram illustrating a method of fabricating a 3D navigation system, in accordance with an embodiment of the present disclosure.
- FIG. 15 is a flow diagram illustrating a method of enhancing surgical navigation by way of a 3D navigation system, in accordance with an embodiment of the present disclosure.
- Corresponding reference numerals or characters indicate corresponding components throughout the several figures of the Drawing. Elements in the several figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some elements in the figures are emphasized relative to other elements for facilitating understanding of the various presently disclosed embodiments. Also, common, but well-understood, elements that are useful or necessary in commercially feasible embodiments are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present disclosure.
- The systems and methods described herein are useful in the field of neurosurgery, including oncological care, neurodegenerative disease, stroke, brain trauma, and orthopedic surgery. The subject matter of the present disclosure is applicable to other conditions or fields of medicine. While the present disclosure describes examples in the context of neurosurgery, the subject matter of the present disclosure is applicable to other surgical procedures that may use intraoperative optical imaging.
- Various example apparatuses or processes are below-described. No below-described example embodiment limits any claimed embodiment; and any claimed embodiments may cover processes or apparatuses that differ from those examples described below. The claimed embodiments are not limited to apparatuses or processes having all of the features of any one apparatus or process described below or to features common to multiple or all of the apparatuses or processes described below. The claimed embodiments optionally comprise any of the below-described apparatuses or processes.
- Furthermore, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. However, understood is that the embodiments described herein are practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein.
- As used herein, the terms, “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in the specification and claims, the terms, “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
- As used herein, the term “exemplary” or “example” means “serving as an example, instance, or illustration,” and should not be construed as preferred or advantageous over other configurations disclosed herein.
- As used herein, the terms “about,” “approximately,” and “substantially” are meant to cover variations that may exist in the upper and lower limits of the ranges of values, such as variations in properties, parameters, and dimensions. In one non-limiting example, the terms “about,” “approximately,” and “substantially” are understood to denote plus or minus 10 percent or less.
- Unless defined otherwise, all technical and scientific terms used herein are intended to have the same meaning as commonly understood by one of ordinary skill in the art. Unless otherwise indicated, such as through context, as used herein, the following terms are intended to have the following meanings:
- As used herein, the phrase “access port” refers to a cannula, conduit, sheath, port, tube, or other structure that is insertable into a subject, in order to provide access to internal tissue, organs, or other biological substances. In some embodiments, an access port may directly expose internal tissue, for example, via an opening or aperture at a distal end thereof, and/or via an opening or aperture at an intermediate location along a length thereof. In other embodiments, an access port may provide indirect access, via one or more surfaces that are transparent, or partially transparent, to one or more forms of energy or radiation, such as, but not limited to, electromagnetic waves and acoustic waves.
- As used herein the phrase “intraoperative” refers to an action, process, method, event or step that occurs or is carried out during at least a portion of a medical procedure. Intraoperative, as defined herein, is not limited to surgical procedures, and may refer to other types of medical procedures, such as diagnostic and therapeutic procedures.
- Some embodiments of the present disclosure relate to minimally invasive medical procedures that are performed via an access port, whereby surgery, diagnostic imaging, therapy, or other medical procedures, e.g. minimally invasive medical procedures, are performed based on access to internal tissue through the access port.
- In the example of a port-based surgery, a surgeon or robotic surgical system may perform a surgical procedure involving tumor resection in which the residual tumor remaining after resection is minimized, while also minimizing the trauma to the intact white and grey matter of the brain. In such procedures, trauma may occur, for example, due to contact with the access port, stress to the brain matter, unintentional impact with surgical devices, and/or accidental resection of healthy tissue. A key to minimizing trauma is ensuring that the surgeon performing the procedure has the best possible view of the surgical site of interest without having to spend excessive amounts of time and concentration repositioning tools, scopes, and/or cameras during the medical procedure.
- In accordance with embodiments of the present disclosure, the systems and methods consider the impact of the differences in generating feedback with 3D perception using binocular vision in relation to using proprioception. In particular, embodiments of the present disclosure consider that vision facilitates locating peripheral targets more precisely and that proprioception facilitates greater precision for locating targets in the depth dimension. More particularly, the systems and methods of the present disclosure involve features which take into account that vision and proprioception have differential effects on the precision of target representation. When vision contributes to the target representation, localization is more precise along the lateral dimension, e.g., for locating the peripheral targets. However, when proprioception contributes to the target representation, localization is more precise in depth, e.g., for locating deep targets in the tissue.
- In particular, embodiments of the present disclosure consider several techniques for optimizing 3D perception and, specifically, relative positional sense, at high magnification. Such techniques include, but are not limited to, (a) implementing focused visual targets, e.g., maintaining the focal plane/point in conjunction with using visual obscuration throughout an interrogation volume and using a focused target in the depth dimension; (b) implementing serial focus adjustments, e.g., performing dynamic adjustment of the focal distance to create multiple focal points across a range of an interrogation volume; and (c) implementing an immersive contextual volume of view, e.g., generating a volume of view (VoV), wherein all of an anatomy is in simultaneous focus, thereby providing continuous contextual information throughout an interrogation volume.
- In accordance with some embodiments of the present disclosure, the technique (a) is implementable with a conventional stereoscopic binocular microscope (CS-m), wherein large portions of the interrogation volume are obscured, and wherein a given target is maintained in constant focus. In implementing technique (a), embodiments of the present disclosure provide a very powerful mechanism to create 3D perception. For example, an operator's hands may come in and out of focus as the hands travel through a given VoV and approach a resolvable visual target within a volume of distortion, such as a basilar artery, thereby providing critical contextual information to the operator regarding focus, and whereby imperative visual cues of shadowing and distortion generate a framework for 3D perception and relative positional sense for facilitating navigation within the given VoV. In such embodiments, dynamic movement within a surgical cavity provides visual cues for generating a depth of field (DoF) at high magnification approximating that of an endoscope, wherein distortions are tolerated for a trade-off in 3D perception and magnification.
- In accordance with some embodiments of the present disclosure, the technique (b) is implementable when distortions are deemed intolerable or the given visual target has changed. In implementing technique (b), an experienced operator (user) may be more tolerant of obscuration and may adjust the focal distance less frequently than a less-experienced operator. Technique (b) is implementable for obtaining useful information in the DoF using a CS-m, but may require manual dynamic movements approximating those of an endoscope. An endoscope requires mechanical movement of the payload along the z-axis within a surgical cavity to redefine the plane of focus, whereas a CS-m involves manually moving the focal distance and adjusting the focal point outside a surgical cavity, whereby greater flexibility is provided.
- In accordance with some embodiments of the present disclosure, the technique (c) is implementable at high magnification in relation to a larger portion of a viewable anatomy, wherein imaging is simultaneously in focus and usable. If using a CS-m, at high magnification, imaging is serially adjusted to maintain focus of either a suprachiasmatic cistern or an interpeduncular cistern. If using a robotically operated video optical telescopic microscope (ROVOT-m), images are seen at the same optical parameters without manipulation.
- In relation to technique (c), the visual cues of shadowing and distortion, otherwise provided by a CS-m as the operator's hands move past a blurred arterial structure (in a focal plane), optic nerve, and chiasm prior to arriving at a resolved basilar artery, are not provided if using a ROVOT-m. Thus, distortion is no longer available to generate a relative positional sense (RPS). However, the simultaneous contextual information of incrementally and clearly visualizing contents of cisterns provided to the operator is adequate compensation for creating a 3D perception and is useful for depth perception. In using a ROVOT-m, the RPS, while moving through the VoV, is generated by combining monitoring an operator's hands and receiving inherent haptic feedback, e.g., as the operator's hands move past the focal planes of the arterial structure, through the opticocarotid cistern, and arriving at the basilar artery, all of which have simultaneously been in focus. In using the 3D navigation system 1200 of the present disclosure, any inherent haptic feedback is enhanced with additional haptic feedback.
- In accordance with some embodiments of the present disclosure, operator experience includes contextual knowledge of the anatomy and the relative location of the structures for facilitating perceiving an RPS of two structures. In using the systems and methods of the present disclosure, operator knowledge enhances the 3D perception, especially during a learning curve thereof, i.e., the eye tends to be blind to what the mind does not know. A key component of systems and methods using the ROVOT-m further involves a global positioning system (GPS) for facilitating hands-free positioning of the payload, thereby further facilitating generating an RPS.
- In accordance with some embodiments of the present disclosure, in compensating for an absence of contextual knowledge, the systems and methods use a second navigation screen with a tracked instrument displaying the relative position for a novice operator, thereby rapidly resolving any initial loss of depth perception, and thereby facilitating learning the relative position(s) of the anatomy within an interrogated volume by the novice operator. While simultaneous navigation is not absolutely required, the systems and methods use simultaneous navigation for added value by, not only shortening the learning curve, but also providing meaningful contextual information, e.g., by using dynamic continuous navigation via one display with simultaneous optical imaging on another display.
- In accordance with some embodiments of the present disclosure, the systems and methods use two different visual input screens which, in the aggregate, synergistically create an immersive surgical volume, wherein all portions of the anatomy are resolvable and continuously referenced relative to one another, thereby minimizing a need for manual adjustment, and thereby providing enhanced "stereoscopy." The loss of distortion and shadowing as critical 3D visual navigation cues otherwise provided by a CS-m is easily compensated by the foregoing mechanisms in embodiments of the systems and methods that use the ROVOT-m. In addition, for both experienced and novice operators, the systems and methods using the ROVOT-m facilitate working in an immersive surgical volume rather than in a surgical volume in which anatomical portions are obscured.
- In accordance with some embodiments of the present disclosure, the systems and methods use an untethered optical chain (OC), wherein a working axis of each operator hand is in a plane different than that of a viewing axis, whereby ergonomic value is enhanced, and whereby 3D perception is enhanced. With a CS-m, operators do not have the ability to look directly at their hands, which are obscured by the intervening OC. In contrast, with video telescopic microscopy (VT-m) systems, an operator may simply look down as the operator's hands approach the target and then look up at the monitor whenever magnification is desired. This manual technique (looking up and down) is another technique for compensating for the loss of stereoscopy to generate 3D perception. While operators are unaccustomed to having the liberty to directly see their hands and the wound, this technique is a source of 3D perception. However, when combined with proprioception, these techniques are synergistically useful, particularly in applications associated with bimanual dissection, and are encompassed by embodiments of the present disclosure.
- In accordance with some embodiments of the present disclosure, the systems and methods overcome related art challenges by involving at least proprioception features, whereby enhanced tactile and haptic feedback between the surgeon's two hands and the relative anatomy is provided, and whereby RPS and other spatial sensing are generated. Complex procedures, such as clip ligation of aneurysms, carotid and pituitary transpositions, and dissection of brainstem perforators, are increasingly performed by endonasal endoscopy. The systems and methods of the present disclosure involving 3D perception, e.g., via proprioception, enhance not only endonasal endoscopy, but also video-based telescopic neurosurgery and neurosurgical training programs.
- In accordance with some embodiments of the present disclosure, the systems and methods involve various techniques for acquiring 3D data, e.g., using five senses to determine location(s), such as inward and outward precession in a spiral pattern within an interrogation volume. For generating (translating) the 3D data into 3D information, a plurality of input data types are used, such as a combination of sound and haptic/proprioception data, a combination of visual and haptic/proprioception data, and a combination of a cross-sectional view of a brain and a view of the brain, wherein selected combinations are displayable in relation to a same field of view (FoV). Audio feedback for indicating a trajectory to target eliminates full reliance on merely visual feedback, e.g., audio feedback for a cannulation procedure.
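- As an illustrative sketch of translating depth data into audio feedback of the kind described above, the distance between a tracked tool tip and a planned target may be mapped onto a beep interval, in the manner of a vehicle backup sensor, so that beeps quicken as the tool approaches the target. All constants, thresholds, and function names below are assumptions for illustration only, not parameters from the present disclosure.

```python
# Hypothetical sketch: map tool-to-target distance onto a beep interval
# (closer target -> faster beeps). Thresholds are illustrative assumptions.

import math


def distance_mm(tip, target):
    """Euclidean distance between two 3D points, in mm."""
    return math.dist(tip, target)


def beep_interval_s(dist_mm, near_mm=2.0, far_mm=50.0,
                    fast_s=0.1, slow_s=1.0):
    """Map tool-to-target distance onto a beep interval in seconds.

    At or inside near_mm the beeps are fastest; at or beyond far_mm they
    are slowest; in between, the interval scales linearly with distance.
    """
    if dist_mm <= near_mm:
        return fast_s
    if dist_mm >= far_mm:
        return slow_s
    t = (dist_mm - near_mm) / (far_mm - near_mm)
    return fast_s + t * (slow_s - fast_s)
```

A real system would drive this mapping from the navigation system's tracked tool pose at a periodic update rate; the linear mapping here is one simple choice, and a logarithmic or stepped mapping could serve equally well.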
- Referring to
FIG. 1, this diagram illustrates, in a perspective view, an access port 12 inserted into a human brain 10 for providing access to internal brain tissue during a medical procedure, in accordance with an embodiment of the present disclosure. The access port 12 accommodates instruments, such as catheters, surgical probes, or cylindrical ports, e.g., the NICO BrainPath™. Surgical tools and instruments may then be inserted within the lumen of the access port 12 in order to perform surgical, diagnostic, or therapeutic procedures, such as resecting tumors as necessary. In the example of a port-based surgery, a straight or linear access port 12 is typically guided down a sulci path of the brain. Surgical instruments would then be inserted down the access port 12. The access port 12 also facilitates the use of catheters, DBS needles, and biopsy procedures, and the present disclosure applies to biopsies and/or catheters in other medical procedures performed on other parts of the body, as well as to medical procedures that do not use an access port. Various examples of the present disclosure are generally suitable for use in any medical procedure that may use optical imaging systems. - Referring to
FIG. 2A , this diagram illustrates, in a perspective view, an exemplary navigation system environment 200, usable to support navigated image-guided surgery, in accordance with an embodiment of the present disclosure. A surgeon 201 performs surgery on a patient 202 in an operating room (OR) environment. A medical navigation system 205 comprises an equipment tower, a tracking system, displays, and tracked instruments to assist the surgeon 201 during the procedure. An operator 203 may also be present to operate, control, and provide assistance for the medical navigation system 205. - Referring to
FIG. 2B , this diagram illustrates, in a front view, an example medical navigation system 205 in greater detail, in accordance with an embodiment of the present disclosure. The disclosed optical imaging system is usable in the context of the medical navigation system 205. The medical navigation system 205 comprises at least one display, an equipment tower 207, and a positioning system 208, such as a mechanical arm, which may support an optical imaging system 500, e.g., comprising an optical scope. At least one of the displays and the equipment tower 207 is mountable on a frame, e.g., a rack or cart, and may comprise a power supply and a computer or controller configured to execute at least one of planning software, navigation software, and other software for managing the positioning system 208 and at least one instrument tracked by the navigation system 205. In some examples, the equipment tower 207 comprises a single tower configuration operating with dual displays; in other examples, the equipment tower 207 comprises other configurations, e.g., a dual tower, a single display, etc. Further, the equipment tower 207 is configurable with an uninterruptible power supply (UPS) to provide emergency power in addition to a regular AC adapter power supply. - Still referring to
FIG. 2B , a portion of the patient's anatomy is retainable by a holder. For example, as shown, the patient's head and brain are retainable by a head holder 217. The access port 12 and associated introducer 210 are insertable into the head to provide access to a surgical site. The imaging system 500 is usable to view down the access port 12 at a sufficient magnification to allow for enhanced visibility. The output of the imaging system 500 is receivable by at least one computer or controller to generate a view that is depictable on a visual display, e.g., one or more displays. - Still referring to
FIG. 2B , in some examples, the navigation system 205 comprises a tracked pointer tool 222. The tracked pointer tool 222 comprises markers 212 to enable tracking by a tracking camera 213 and is configured to identify points, e.g., fiducial points, on a patient. An operator, typically a nurse or the surgeon 201, may use the tracked pointer tool 222 to identify the location of points on the patient 202, in order to register the location of selected points on the patient 202 in the navigation system 205. A guided robotic system with closed-loop control is usable as a proxy for human interaction. Guidance to the robotic system is providable by any combination of input sources, such as image analysis, tracking of objects in the operating room using markers placed on various objects of interest, or any other suitable robotic system guidance techniques. - Still referring to
FIG. 2B , fiducial markers 212 are configured to couple with the introducer 210 for tracking by the tracking camera 213, which may provide positional information of the introducer 210 from the navigation system 205. In some examples, the fiducial markers 212 are alternatively or additionally attached to the access port 12. In some examples, the tracking camera 213 comprises a 3D infrared optical tracking stereo camera, e.g., a camera comprising at least one feature of a Northern Digital Imaging® (NDI) camera. In some examples, the tracking camera 213 alternatively comprises an electromagnetic system (not shown), such as a field transmitter, that is configured to use at least one receiver coil disposed in relation to the tool(s) intended for tracking. A location of the tracked tool(s) is determinable by using the induced signals and their phases in each of the at least one receiver coil by way of a profile of the electromagnetic field (measured, calculated, or known) and a position of each at least one receiver coil relative to another at least one receiver coil (measured, calculated, or known). Operation and examples of this technology are further explained in Chapter 2 of “Image-Guided Interventions Technology and Application,” Peters, T.; Cleary, K., 2008, ISBN: 978-0-387-72856-7, incorporated herein by reference in its entirety, the subject matter of which is encompassed by the present disclosure. - Still referring to
FIG. 2B , location data of the positioning system 208 and/or the access port 12 is determinable by the tracking camera 213, the tracking camera 213 being configured to detect the fiducial markers 212 disposed, or otherwise fixed, e.g., rigidly coupled, in relation to any of the positioning system 208, the access port 12, the introducer 210, the tracked pointer tool 222, and/or other tracked instruments. The fiducial marker(s) 212 comprise at least one of active markers and passive markers. In some examples, output provided by the navigation system 205 is presented on the displays. - Still referring to
FIG. 2B , at least one of the fiducial markers 212, e.g., at least one of active markers and passive markers, is placed on tools, e.g., the access port 12 and/or the imaging system 500, to be tracked, to facilitate determination of the location and orientation of such tools by using the tracking camera 213 and the navigation system 205. A stereo camera of the tracking system is configured to detect the fiducial markers 212 and to capture images thereof for providing identifiable points for tracking such tools. A tracked tool is defined by a grouping of the fiducial markers 212, whereby a rigid body is defined and identified by the tracking system. This definition, in turn, is usable for determining the position and/or orientation in 3D of a tracked tool in a virtual space. The position and orientation of the tracked tool in 3D is trackable in six degrees of freedom, e.g., x, y, and z coordinates as well as pitch, yaw, and roll rotations, or in five degrees of freedom, e.g., x, y, and z coordinates as well as two degrees of free rotation. Preferably, the tool is tracked in at least three degrees of freedom, e.g., tracking a position of a tip of a tool in at least the x, y, and z coordinates. In use with a navigation system, at least three fiducial markers 212 are provided on a tracked tool to define the tracked tool in a virtual space; however, preferably, at least four fiducial markers 212 are used. - Still referring to
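For illustration only, one conventional way a tracking system can recover a six-degree-of-freedom pose from a rigid grouping of at least three non-collinear markers is a least-squares fit (the Kabsch/Horn method). This sketch assumes the tool's marker layout is known in its own model space; it is not asserted to be the method used by the navigation system 205:

```python
import numpy as np

def rigid_body_pose(model_pts, observed_pts):
    """Least-squares rotation r and translation t mapping the tool's
    model-space marker coordinates onto tracking-camera coordinates
    (Kabsch/Horn method). Requires >= 3 non-collinear markers."""
    a = np.asarray(model_pts, float)
    b = np.asarray(observed_pts, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)   # centroids
    h = (a - ca).T @ (b - cb)                 # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t                               # observed ≈ r @ model + t
```

The six degrees of freedom appear here as the three rotational parameters in `r` and the three translational components of `t`.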
FIG. 2B , camera images capturing the fiducial markers 212 are logged and tracked by, for example, a closed-circuit television (CCTV) camera. The fiducial markers 212 are selectable to enable, assist, and/or facilitate segmentation in the captured images. For example, the navigation system 205 implements infrared (IR) reflecting markers used in conjunction with an IR light source originating from the direction of the camera. An example of such an apparatus comprises tracking devices, such as the Polaris® system available from Northern Digital Inc. In some examples, the spatial position and orientation of the tracked tool and/or the actual and desired position and orientation of the positioning system 208 are determinable by optical detection using a camera. The optical detection is performable by using an optical camera, thereby rendering the fiducial markers 212 optically visible. - Still referring to
FIG. 2B , in some examples, the fiducial markers 212, e.g., reflectospheres, are combinable with a suitable tracking system to determine the spatial position of the tracked tools within the operating theatre. Different tools and/or targets are providable with different sets of fiducial markers 212 in different configurations. Differentiation of the different tools and/or targets and their corresponding virtual volumes is possible based on the specific configuration and/or orientation of each set of fiducial markers 212 relative to another set of fiducial markers 212, thereby enabling each such tool and/or target to have a distinct individual identity associated with a distinct individual identifier within the navigation system 205. The distinct individual identifiers provide information to the navigation system 205, such as information relating to the size and/or shape of the tool within the navigation system 205. The distinct individual identifier may also provide additional information, such as the tool's central point or the tool's central axis, among other information. The virtual tool is also determinable from a database of tools stored in, or provided to, the navigation system 205. The fiducial markers 212 are tracked relative to a reference point, or a reference object, in the operating room, such as the patient 202. - Still referring to
FIG. 2B , various types of fiducial markers are usable. The fiducial markers 212 may comprise the same type or a combination of at least two different types. Possible types of markers comprise reflective markers, radiofrequency (RF) markers, electromagnetic (EM) markers, pulsed or un-pulsed light-emitting diode (LED) markers, glass markers, reflective adhesives, or reflective unique structures or patterns, among others. RF and EM markers may have specific signatures for the specific tools to which such markers are attached. Reflective adhesives, structures and patterns, glass markers, and LED markers are detectable using optical detectors, while RF and EM markers are detectable using antennas. Different marker types are selectable to suit different operating conditions. For example, using EM and RF markers enables tracking of tools without requiring a line-of-sight from a tracking camera to the fiducial markers 212, while using an optical tracking system avoids additional noise from electrical emission and detection systems. - Still referring to
FIG. 2B , in some examples, the fiducial markers 212 comprise printed, or 3D, features for detection by an auxiliary camera, such as a wide-field camera (not shown) and/or the imaging system 500. Printed markers may also be used as a calibration pattern, for example to provide distance information, e.g., 3D distance information, to an optical detector. Printed identification markers comprise features, such as concentric circles with different ring spacing and/or different types of bar codes, among other features. In some examples, in addition to, or in place of, using the fiducial markers 212, the contours of objects, e.g., the side of the access port 206, are captured by, and identified using, optical imaging devices and the tracking system. - Still referring to
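As a hypothetical illustration of how a printed calibration pattern of known physical size can provide distance information, the pinhole-camera model relates real size, apparent pixel size, and focal length; the function and parameter names below are illustrative assumptions, not part of the disclosure:

```python
def distance_from_printed_marker(real_diameter_mm, pixel_diameter, focal_length_px):
    """Pinhole-camera range estimate: a printed circle of known physical
    diameter appears smaller in the image the farther it lies from the
    lens, so distance = real_size * focal_length / apparent_size."""
    return real_diameter_mm * focal_length_px / pixel_diameter

# A 20 mm ring imaged at 100 px with an 800 px focal length lies 160 mm away.
d = distance_from_printed_marker(20.0, 100.0, 800.0)
```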
FIG. 2B , a guide clamp 218 (or, more generally, a guide) for holding the access port 12 is providable. The guide clamp 218 facilitates retention of the access port 206 at a fixed position and orientation, thereby freeing the surgeon's hands. An articulated arm 219 is provided to hold the guide clamp 218. The articulated arm 219 has up to six degrees of freedom for positioning the guide clamp 218. The articulated arm 219 is lockable to fix its position and orientation, e.g., once a desired position is achieved. The articulated arm 219 is attached, or attachable, in relation to a point based on the patient head holder 217, or another suitable point, such as on another patient support, e.g., on the surgical bed, to ensure that, when locked in place, the guide clamp 218 does not move relative to the patient's head. - Still referring to
FIG. 2B , in a surgical operating room (or theatre), setup of a navigation system is relatively complex, e.g., many pieces of equipment associated with the surgical procedure, as well as elements of the navigation system 205, must be arranged and/or prepared. Further, setup time typically increases as more equipment is added. To assist in addressing this, the navigation system 205 comprises two additional wide-field cameras to enable video overlay information. Video overlay information is then insertable into displayed images, such as images displayed on at least one of the displays. The overlay information may illustrate the range of motion of the positioning system 208 and/or the imaging system 500, and/or may facilitate guiding the head and/or positioning the patient. - Still referring to
FIG. 2B , the navigation system 205 provides tools to the neurosurgeon that may help to provide more relevant information to the surgeon, and may assist in improving performance and accuracy of port-based neurosurgical operations. Although described in the present disclosure in the context of port-based neurosurgery, e.g., for removal of brain tumors and/or for treatment of intracranial hemorrhages (ICH), the navigation system 205 is also suitable for at least one of: a brain biopsy, a functional/deep-brain stimulation, a catheter/shunt placement (in the brain or elsewhere), an open craniotomy, and/or an endonasal/skull-based/ear-nose-throat (ENT) procedure, among others. The same navigation system 205 is usable for performing any or all of these procedures, with, or without, modification as appropriate. - Still referring to
FIG. 2B , although the present disclosure may discuss the navigation system 205 in the context of neurosurgery, the navigation system 205, for example, is usable for performing a diagnostic procedure, such as brain biopsy. A brain biopsy may involve the insertion of a thin needle into a patient's brain for purposes of removing a sample of brain tissue. The brain tissue is subsequently assessed by a pathologist to determine whether the brain tissue is cancerous, for example. Brain biopsy procedures are conducted with, or without, a stereotactic frame. Both types of procedures are performable using image-guidance. Frameless biopsies, in particular, are performable by way of the navigation system 205. - Still referring to
FIG. 2B , in some examples, the tracking camera 213 is adaptable to any suitable tracking system. In some examples, the tracking camera 213, and any associated tracking system that uses the tracking camera 213, is replaceable with any suitable tracking system which may, or may not, use camera-based tracking techniques. For example, a tracking system that does not use the tracking camera 213, such as a radiofrequency tracking system, is usable with the navigation system 205. - Referring to
FIG. 3 , this block diagram illustrates a control and processing system 300 usable in the medical navigation system 205, as shown in FIG. 2B , e.g., as part of the equipment tower 207, in accordance with an embodiment of the present disclosure. In one example, the control and processing system 300 comprises at least one processor 302, a memory 304, a system bus 306, at least one input/output (I/O) interface 308, a communications interface 310, and a storage device 312. The control and processing system 300 is interfaceable with other external devices, such as a tracking system 321, a data storage 342, and at least one external user I/O device 344, e.g., at least one of a display, a keyboard, a mouse, sensors attached to medical equipment, a foot pedal, a microphone, and a speaker. - Still referring to
FIG. 3 , the data storage 342 comprises any suitable data storage device, such as a local, or remote, computing device, e.g., a computer, hard drive, digital media device, or server, having a database stored thereon. The data storage device 342 further comprises identification data 350 for identifying one or more medical instruments 360 and configuration data 352 that associates customized configuration parameters with one or more medical instruments 360. The data storage device 342 further comprises preoperative image data 354 and/or medical procedure planning data 356. Although the data storage device 342 is shown as a single device, it is understood that, in other embodiments, the data storage device 342 alternatively comprises multiple storage devices. - Still referring to
FIG. 3 , the medical instruments 360 are identifiable by the control and processing unit 300. The medical instruments 360 are connected to, and controlled by, the control and processing unit 300. Alternatively, the medical instruments 360 are operated, or otherwise employed, independent of the control and processing unit 300. The tracking system 321 is employed to track at least one medical instrument 360 and spatially register the at least one tracked medical instrument to an intraoperative reference frame. For example, a medical instrument 360 comprises tracking markers, such as tracking spheres, recognizable by the tracking camera 213. In one example, the tracking camera 213 comprises an infrared (IR) tracking camera. In another example, a sheath placed over a medical instrument 360 is connected to, and controlled by, the control and processing unit 300. - Still referring to
FIG. 3 , the control and processing unit 300 is also interfaceable with a number of configurable devices 320, and can intraoperatively reconfigure at least one such device based on configuration parameters obtained from the configuration data 352. Examples of devices 320 include, but are not limited to, at least one external imaging device 322, at least one illumination device 324, the positioning system 208, the tracking camera 213, at least one projection device 328, and at least one display. - Still referring to
FIG. 3 , exemplary aspects of the embodiments are implementable via the processor(s) 302 and/or memory 304, in accordance with the present disclosure. For example, the functionalities described herein can be partially implemented via hardware logic in the processor 302 and partially using the instructions stored in the memory 304, as at least one processing module or engine 370. Example processing modules include, but are not limited to, a user interface engine 372, a tracking module 374, a motor controller 376, an image processing engine 378, an image registration engine 380, a procedure planning engine 382, a navigation engine 384, and a context analysis module 386. While the example processing modules are separately shown in FIG. 3 , in some examples, the processing modules 370 are storable in the memory 304 and are collectively referred to as processing modules 370. In some examples, at least two modules 370 are used together for performing a function. Although depicted as separate modules 370, the modules 370 are alternatively embodied as a unified set of computer-readable instructions, e.g., stored in the memory 304, rather than as distinct sets of instructions. - Still referring to
FIG. 3 , the system 300 is not intended to be limited to the components shown in FIG. 3 . One or more components of the control and processing system 300 are providable as an external component or device. In one example, the navigation module 384 is provided as an external navigation system that is integrated with the control and processing system 300. Some embodiments are implementable using the processor 302 without additional instructions stored in the memory 304. Some embodiments are implementable using the instructions stored in the memory 304 for execution by one or more general-purpose microprocessors. Thus, the present disclosure is not limited to any specific configuration of hardware and/or software. - Still referring to
FIG. 3 , in some examples, the navigation system 205, which may include the control and processing unit 300, provides tools to the surgeon for improving performance of the medical procedure and/or post-operative outcomes. In addition to removal of brain tumors and intracranial hemorrhages (ICH), the navigation system 205 is also applicable to a brain biopsy, a functional/deep-brain stimulation, a catheter/shunt placement procedure, open craniotomies, endonasal/skull-based/ENT procedures, spine procedures, and procedures on other parts of the body, such as breast biopsies, liver biopsies, etc. While several examples have been provided, the examples of the present disclosure are applicable to any suitable medical procedure. - Referring to
FIG. 4A , this flow diagram illustrates a method 400 of performing a port-based surgical procedure using a navigation system, such as the medical navigation system 205, as described in relation to FIGS. 2A and 2B , in accordance with an embodiment of the present disclosure. The method 400 comprises importing a port-based surgical plan, as indicated by block 402. Once the plan has been imported into the navigation system at the block 402, the method 400 further comprises positioning and fixing the patient by using a body holding mechanism and confirming that the head position complies with the patient plan in the navigation system, as indicated by block 404, wherein confirming that the head position complies with the patient plan is implementable by a computer or a controller being a component of the equipment tower 207. The method 400 further comprises initiating registration of the patient, as indicated by block 406. The phrase “registration” or “image registration” refers to the process of transforming different sets of data into one coordinate system. Data may include multiple photographs, data from different sensors, times, depths, or viewpoints. The process of “registration” is used in the present application for medical imaging in which images from different imaging modalities are co-registered. Registration is used in order to be able to compare or integrate the data obtained from these different modalities. - Still referring to
FIG. 4A , appreciated is that numerous registration techniques are available and at least one of the techniques is applicable to the present example, in accordance with embodiments of the present disclosure. Non-limiting examples include intensity-based methods, which compare intensity patterns in images via correlation metrics, and feature-based methods, which find correspondence between image features such as points, lines, and contours. Image registration methods may also be classified according to the transformation models they use to relate the target image space to the reference image space. Another classification can be made between single-modality and multi-modality methods. Single-modality methods typically register images of the same modality acquired by the same scanner or sensor type, for example, a series of magnetic resonance (MR) images is co-registered, while multi-modality registration methods are used to register images acquired by different scanner or sensor types, for example in magnetic resonance imaging (MRI) and positron emission tomography (PET). In the present disclosure, multi-modality registration methods are used in medical imaging of the head and/or brain, as images of a subject are frequently obtained from different scanners. Examples include registration of brain computerized tomography (CT)/MRI images or PET/CT images for tumor localization, registration of contrast-enhanced CT images against non-contrast-enhanced CT images, and registration of ultrasound and CT. - Referring to
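By way of a non-limiting example of the correlation metrics used by intensity-based methods, normalized cross-correlation scores the similarity of two equally sized images; a registration loop would transform the moving image so as to maximize this score. This is a minimal sketch, not a specific method of the present disclosure:

```python
import numpy as np

def normalized_cross_correlation(fixed, moving):
    """Similarity score in [-1, 1] between two same-shaped images.
    An intensity-based registration loop would shift/rotate `moving`
    until this score is maximized."""
    f = np.array(fixed, float).ravel()   # np.array copies, so inputs
    m = np.array(moving, float).ravel()  # are never mutated
    f -= f.mean()
    m -= m.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(m)
    return float(f @ m / denom) if denom else 0.0
```

A score of 1.0 indicates a perfect (up to brightness/contrast) intensity match, which is why single-modality methods can use it directly; multi-modality methods typically substitute a metric such as mutual information.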
FIG. 4B , this flow diagram illustrates an example of alternate sets of steps performable between the step of initiating registration, as indicated by block 406, and the step of completing registration, as indicated by block 408, in the method 400, as shown in FIG. 4A , in accordance with embodiments of the present disclosure. If the use of fiducial touch points is contemplated, after the step of initiating registration, as indicated by block 406, the method 400 comprises performing a first alternate set of steps, as indicated by block 440, the first alternate set of steps comprising: identifying fiducial markers 112 on images, as indicated by block 442; touching the touch points with a tracked instrument, as indicated by block 444; and computing the registration to the reference markers by way of the navigation system 205, as indicated by block 446. However, if the use of a surface scan is contemplated, after the step of initiating registration, as indicated by block 406, the method 400 comprises performing a second alternate set of steps, as indicated by block 450, the second alternate set of steps comprising: scanning the face by using a 3D scanner, as indicated by block 452; extracting the face surface from MR/CT data, as indicated by block 454; and matching surfaces to determine registration data points, as indicated by block 456. Upon completion of either the first alternate set of steps, as indicated by block 440, or the second alternate set of steps, as indicated by block 450, the method 400 further comprises confirming registration by using the extracted data and processing the same, as indicated by block 408, as also shown in FIG. 4A . - Referring back to
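Surface matching of the kind described for the surface-scan path is commonly performed with an iterative closest point (ICP) style loop: match each scanned surface point to its nearest model point, fit a rigid transform, and repeat. The following toy sketch (brute-force matching, fixed iteration count) illustrates the idea under those assumptions and is not asserted to be the disclosed algorithm:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation r and translation t mapping src onto dst
    (Kabsch method with reflection guard)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

def icp(scan, model, iters=30):
    """Iteratively match each scanned face point to its nearest model
    point, then re-fit the rigid transform. A toy sketch: practical
    systems add k-d trees, outlier rejection, and convergence tests."""
    r, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = scan @ r.T + t
        # brute-force nearest neighbours in the model surface
        idx = np.argmin(((moved[:, None] - model[None]) ** 2).sum(-1), axis=1)
        r, t = best_rigid_transform(scan, model[idx])
    return r, t
```

The fiducial touch-point path reduces to a single call of `best_rigid_transform`, since touching known landmarks supplies the correspondences directly.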
FIG. 4A , once registration is confirmed, as indicated by block 408, the method 400 further comprises draping the patient, as indicated by block 410. Typically, draping comprises covering the patient and surrounding areas with a sterile barrier to create and maintain a sterile field during the surgical procedure. The purpose of draping is to eliminate the passage of microorganisms, e.g., bacteria, viruses, prions, contamination, and the like, between non-sterile and sterile areas. At this point, conventional navigation systems require that the non-sterile patient reference is replaced with a sterile patient reference of identical geometry, location, and orientation. - Still referring back to
FIG. 4A , upon completion of draping, as indicated by block 410, the method 400 further comprises: confirming the patient engagement points, as indicated by block 412; and preparing and planning the craniotomy, as indicated by block 414. Upon completion of the preparation and planning of the craniotomy, as indicated by block 414, the method 400 further comprises: performing the craniotomy by cutting a bone flap and temporarily removing the same from the remainder of the skull to access the brain, as indicated by block 416; and updating registration data with the navigation system, as indicated by block 422. Next, the method 400 further comprises: confirming engagement and the motion range within the region of the craniotomy, as indicated by block 418; and cutting the dura at the engagement points and identifying the sulcus, as indicated by block 420. - Still referring back to
FIG. 4A , the method 400 further comprises determining whether the trajectory plan has been completed, as indicated by block 424. If the trajectory plan is not yet completed, the method 400 further comprises: aligning a port on engagement and setting the planned trajectory, as indicated by block 432; cannulating, as indicated by block 434; and determining whether the trajectory plan is completed, as indicated by block 424. Cannulation involves inserting a port into the brain, typically along a sulci path, the sulci path being identified in performing the step of cutting the dura at the engagement points and identifying the sulcus, as indicated by block 420, along a trajectory plan. Further, cannulation is typically an iterative process that involves repeating the steps of aligning the port on engagement and setting the planned trajectory, as indicated by block 432, and then cannulating to the target depth, as indicated by block 434, until the complete trajectory plan is executed by making such determination, as indicated by block 424. - Still referring back to
FIG. 4A , the method 400 further comprises determining whether the trajectory plan has been completed, as indicated by block 424. If the trajectory plan is completed, the method 400 further comprises: performing a resection to remove part of the brain and/or tumor of interest, as indicated by block 426; decannulating by removing the port and any tracking instruments from the brain, as indicated by block 428; and closing the dura and completing the craniotomy, as indicated by block 430. Some aspects of the steps shown in FIG. 4A , such as portions of the steps indicated by certain blocks, are specific to port-based surgery. - Referring back to both
FIGS. 4A and 4B , when performing a surgical procedure using a medical navigation system 205, the medical navigation system 205 may acquire and maintain a reference of the location of the tools in use as well as the patient in three-dimensional (3D) space. In other words, during a navigated neurosurgery, a tracked reference frame that is fixed, e.g., relative to the patient's skull, is present. During the registration phase of a navigated neurosurgery, e.g., in performing the step indicated by block 406, a transformation is calculated that maps the frame of reference from preoperative MRI or CT imagery to the physical space of the surgery, specifically the patient's head. This is accomplished by the navigation system 205 tracking locations of fiducial markers fixed to the patient's head, relative to the static patient reference frame. The patient reference frame is typically rigidly attached to the head fixation device, such as a Mayfield clamp. Registration is typically performed before the sterile field has been established, e.g., before performing the step as indicated by block 410. - Referring to
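A registration transformation of the kind described above is commonly represented as a 4x4 homogeneous matrix so that preoperative image coordinates can be mapped into physical patient space in one step. The following is a minimal sketch with illustrative names, not the disclosed implementation:

```python
import numpy as np

def make_registration(r, t):
    """Pack a rotation r (3x3) and translation t (3,) into a 4x4
    homogeneous transform mapping image space to patient space."""
    m = np.eye(4)
    m[:3, :3], m[:3, 3] = r, t
    return m

def to_patient_space(m, image_points):
    """Apply the registration to Nx3 preoperative image coordinates."""
    p = np.asarray(image_points, float)
    h = np.hstack([p, np.ones((len(p), 1))])  # homogeneous coordinates
    return (h @ m.T)[:, :3]
```

Representing the mapping this way also lets transforms be chained by matrix multiplication, e.g., image-to-reference-frame followed by reference-frame-to-camera.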
FIG. 5 , this diagram illustrates, in a perspective view, use of an example imaging system 500 in a medical procedure, in accordance with an embodiment of the present disclosure. Although FIG. 5 shows the imaging system 500 being used in the context of a navigation system environment 200, e.g., using a navigation system as above described, the imaging system 500 may also be used outside of a navigation system environment, e.g., without any navigation support. An operator, typically a surgeon 201, may use the imaging system 500 to observe the surgical site, e.g., to look down an access port. The imaging system 500 is attached to a positioning system 208, e.g., a controllable and adjustable robotic arm. The position and orientation of the positioning system 208, the imaging system 500, and/or the access port is tracked using a tracking system, such as described for the navigation system 205. The distance d between the imaging system 500 (more specifically, the aperture of the imaging system 500) and the viewing target, e.g., the surface of the surgical site, is referred to as the working distance (WD). The imaging system 500 is configurable for use in a predefined range of WD, e.g., in the range of approximately 15 cm to approximately 75 cm. If the imaging system 500 is mounted on the positioning system 208, the actual available range of WD is dependent on both the WD of the imaging system 500 as well as the workspace and kinematics of the positioning system 208. - Referring to
FIG. 6 , this block diagram illustrates components of an example imaging system 500, in accordance with an embodiment of the present disclosure. The imaging system 500 comprises an optical assembly 505 (also referred to as an optical train). The optical assembly 505 comprises optics, e.g., lenses, optical fibers, etc., for focusing and zooming on the viewing target. The optical assembly 505 comprises zoom optics 510 (which may include one or more zoom lenses) and focus optics 515 (which may include one or more focus lenses). Each of the zoom optics 510 and the focus optics 515 is independently movable within the optical assembly 505 in order to respectively adjust the zoom and focus. Where the zoom optics 510 and/or the focus optics 515 comprise more than one lens, each individual lens is independently movable. The optical assembly 505 comprises an aperture (not shown) which is adjustable. - Still referring to
FIG. 6 , the imaging system 500 comprises a zoom actuator 520 and a focus actuator 525 for respectively positioning the zoom optics 510 and the focus optics 515. The zoom actuator 520 and/or the focus actuator 525 comprise an electric motor or other types of actuators, such as pneumatic actuators, hydraulic actuators, shape-changing materials, e.g., piezoelectric materials or other smart materials, or engines, among other possibilities. Although the term “motorized” is used in the present disclosure, the use of this term does not limit the present disclosure to the use of motors necessarily, but is intended to cover all suitable actuators, including motors. Although the zoom actuator 520 and the focus actuator 525 are shown outside of the optical assembly 505, in some examples, the zoom actuator 520 and the focus actuator 525 are components of, or are integrated with, the optical assembly 505. The zoom actuator 520 and the focus actuator 525 may operate independently, to respectively control positioning of the zoom optics 510 and the focus optics 515. The lens(es) of the zoom optics 510 and/or the focus optics 515 are each mounted on a linear stage, e.g., a motion system that restricts an object to move in a single axis, which may include a linear guide and an actuator, or a conveyor system, such as a conveyor belt mechanism, that is respectively moved by the zoom actuator 520 and/or the focus actuator 525 to control positioning of the zoom optics 510 and/or the focus optics 515. In some examples, the zoom optics 510 is mounted on a linear stage that is driven, via a belt drive, by the zoom actuator 520, while the focus optics 515 is geared to the focus actuator 525. The independent operation of the zoom actuator 520 and the focus actuator 525 may enable the zoom and focus to be adjusted independently. Thus, when an image is in focus, the zoom is adjusted without requiring further adjustments to the focus optics 515 to produce a focused image. - Still referring to
FIG. 6, operation of the zoom actuator 520 and the focus actuator 525 is controllable by a controller 530, e.g., a microprocessor, of the imaging system 500. The controller 530 may receive control input, e.g., from an external system, such as an external processor or an input device. The control input indicates at least one of a desired zoom and a desired focus; and the controller 530 may, in response, cause at least one of the zoom actuator 520 and the focus actuator 525 to respectively move at least one of the zoom optics 510 and the focus optics 515 accordingly to respectively achieve at least one of the desired zoom and the desired focus. In some examples, the zoom optics 510 and/or the focus optics 515 is moved or actuated without the use of the zoom actuator 520 and/or the focus actuator 525. For example, the focus optics 515 uses electrically-tunable lenses or other deformable material that is directly controllable by the controller 530. - Still referring to
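The independent zoom/focus control described above can be sketched as follows. This is a minimal illustration only: the `OpticsController` class, the actuator `move_to` method, and the linear zoom-to-travel mapping are hypothetical names and assumptions, not part of the disclosed system.

```python
class OpticsController:
    """Sketch of a controller mapping desired zoom/focus to two
    independently movable stages (cf. controller 530 driving the
    zoom actuator 520 and focus actuator 525)."""

    def __init__(self, zoom_actuator, focus_actuator):
        self.zoom_actuator = zoom_actuator
        self.focus_actuator = focus_actuator

    def apply(self, desired_zoom=None, desired_focus_mm=None):
        # The stages move independently: adjusting zoom does not
        # require re-adjusting the focus optics, and vice versa.
        if desired_zoom is not None:
            self.zoom_actuator.move_to(self.zoom_to_travel(desired_zoom))
        if desired_focus_mm is not None:
            self.focus_actuator.move_to(desired_focus_mm)

    @staticmethod
    def zoom_to_travel(magnification):
        # Hypothetical linear mapping from magnification to stage travel (mm);
        # a real system would use a calibrated lookup.
        return 10.0 * (magnification - 1.0)
```

In use, a voice command or foot-pedal event would translate into a single `apply(...)` call with only the parameter being changed.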
FIG. 6, by providing the controller 530, the zoom actuator 520, and the focus actuator 525 all as part of the imaging system 500, the imaging system 500 may enable an operator, e.g., a surgeon, to control zoom and/or focus during a medical procedure without having to manually adjust the zoom and/or focus optics 510, 515. For example, the operator provides control input to the controller 530 verbally, e.g., via a voice recognition input system, by instructing an assistant to enter control input into an external input device, e.g., into a user interface provided by a workstation, using a foot pedal, or by other such means. In some examples, the controller 530 executes preset instructions to maintain the zoom and/or focus at preset values, e.g., to perform autofocusing, without requiring further control input during the medical procedure. - Still referring to
FIG. 6, an external processor, e.g., a processor of a workstation or the navigation system, in communication with the controller 530 is used to provide control input to the controller 530. For example, the external processor provides a graphical user interface via which the operator or an assistant inputs instructions to control zoom and/or focus of the imaging system 500. The controller 530 is alternatively or additionally in communication with an external input system, e.g., a voice-recognition input system or a foot pedal. The optical assembly 505 comprises at least one auxiliary optic 540, e.g., an adjustable aperture, which is static or dynamic. Where the auxiliary optic 540 is dynamic, the auxiliary optic 540 is moved using an auxiliary actuator (not shown) which is controlled by the controller 530. - Still referring to
FIG. 6, the imaging system 500 further comprises a camera 535, e.g., a high-definition (HD) camera, configured to capture image data from the optical assembly. Operation of the camera is controlled by the controller 530. The camera 535 also outputs data to an external system, e.g., an external workstation or external output device, to view the captured image data. In some examples, the camera 535 outputs data to the controller 530, which, in turn, transmits the data to an external system for viewing. By providing image data to an external system for viewing, the captured images are viewable on a larger display and are displayable together with other information relevant to the medical procedure, e.g., a wide-field view of the surgical site, navigation markers, 3D images, etc. The camera 535 used with the imaging system 500 facilitates improving the consistency of image quality among different medical centers. Image data captured by the camera 535 is displayable on a display together with a wide-field view of the surgical site, for example, in a multiple-view user interface. The portion of the surgical site that is captured by the camera 535 is visually indicated in the wide-field view of the surgical site. - Still referring to
FIG. 6, the imaging system 500 comprises a three-dimensional (3D) scanner 545. In some examples, image data for the 3D scanner 545 is also captured by the camera 535, or is captured by the 3D scanner 545 itself. Operation of the 3D scanner 545 is controlled by the controller 530; and the 3D scanner 545 transmits data to the controller 530. In some examples, the 3D scanner 545, itself, transmits data to an external system, e.g., an external workstation. 3D information from the 3D scanner 545 is used to generate a 3D image of the viewing target, e.g., a 3D image of a target tumor to be resected. 3D information is also useful in an augmented reality (AR) display provided by an external system. For example, an AR display, e.g., provided via AR glasses, may, using information from a navigation system to register 3D information with optical images, overlay a 3D image of a target specimen on a real-time optical image, e.g., an optical image captured by the camera 535. - Still referring to
FIG. 6, the controller 530 is coupled to a memory 550. The memory 550 is internal or external in relation to the imaging system 500. Data received by the controller 530, e.g., image data from the camera 535 and/or 3D data from the 3D scanner, is stored in the memory 550. The memory 550 may also contain instructions to enable the controller to operate the zoom actuator 520 and the focus actuator 525. For example, the memory 550 stores instructions to enable the controller to perform autofocusing, as discussed further below. The imaging system 500 communicates with an external system, e.g., a navigation system or a workstation, via wired or wireless communication. In some examples, the imaging system 500 comprises a wireless transceiver (not shown) to enable wireless communication. In some examples, the imaging system 500 comprises a power source, e.g., a battery, or a connector to a power source, e.g., an AC adaptor. In some examples, the imaging system 500 receives power via a connection to an external system, e.g., an external workstation or processor. - Still referring to
FIG. 6, in some examples, the imaging system 500 comprises a light source (not shown). In some examples, the light source may not itself generate light but rather direct light from another light-generating component. For example, the light source comprises an output of a fiber optics cable connected to another light-generating component, which is part of the imaging system 500 or external to the imaging system 500. The light source is mounted near the aperture of the optical assembly, to direct light to the viewing target. Providing the light source with the imaging system 500 may help to improve the consistency of image quality among different medical centers. In some examples, the power or output of the light source is controlled by the imaging system 500, e.g., by the controller 530, or is controlled by a system external to the imaging system 500, e.g., by an external workstation or processor, such as a processor of a navigation system. - Still referring to
FIG. 6, in some examples, the optical assembly 505, zoom actuator 520, focus actuator 525, and camera 535 may all be housed within a single housing (not shown) of the imaging system. In some examples, the controller 530, memory 550, 3D scanner 545, wireless transceiver, power source, and/or light source are also housed within the housing. In some examples, the imaging system 500 also provides mechanisms to enable manual adjusting of the zoom and/or focus optics 510, 515. - Still referring to
FIG. 6, the imaging system 500 is mountable on a movable support structure, such as the positioning system, e.g., robotic arm, of a navigation system, a manually operated support arm, a ceiling-mounted support, a movable frame, or other such support structure. The imaging system 500 is removably mounted on the movable support structure. In some examples, the imaging system 500 comprises a support connector, e.g., a mechanical coupling, to enable the imaging system 500 to be quickly and easily mounted or dismounted from the support structure. The support connector on the imaging system 500 is configured to be suitable for connecting with a typical complementary connector on the support structure, e.g., as designed for typical end effectors. In some examples, the imaging system 500 is mounted to the support structure together with other end effectors, or is mounted to the support structure via another end effector. - Still referring to
FIG. 6, when mounted, the imaging system 500 is at a known fixed position and orientation relative to the support structure, e.g., by calibrating the position and orientation of the imaging system 500 after mounting. In this way, by determining the position and orientation of the support structure, e.g., using a navigation system or by tracking the movement of the support structure from a known starting point, the position and orientation of the imaging system 500 is also determined. In some examples, the imaging system 500 may include a manual release button that, when actuated, enables the imaging system 500 to be manually positioned, e.g., without software control by the support structure. - Still referring to
FIG. 6, in some examples, where the imaging system 500 is intended to be used in a navigation system environment, the imaging system 500 comprises an array of trackable markers, which is mounted on a frame on the imaging system 500 to enable the navigation system to track the position and orientation of the imaging system 500. Alternatively or additionally, the movable support structure, e.g., a positioning system of the navigation system, on which the imaging system 500 is mounted, is tracked by the navigation system; and the position and orientation of the imaging system 500 is determined by using the known position and orientation of the imaging system 500 relative to the movable support structure. - Still referring to
FIG. 6, the trackable markers comprise passive reflective tracking spheres, active infrared (IR) markers, active light-emitting diodes (LEDs), a graphical pattern, or a combination thereof. At least three trackable markers are provided on a frame to enable tracking of position and orientation. In some examples, four passive reflective tracking spheres are coupled to the frame. While some specific examples of the type and number of trackable markers have been given, any suitable trackable marker and configuration may be used, as appropriate. - Still referring to
FIG. 6, determination of the position and orientation of the imaging system 500 relative to the viewing target is performed by a processor external to the imaging system 500, e.g., a processor of the navigation system. Information about the position and orientation of the imaging system 500 is used, together with a robotic positioning system, to maintain alignment of the imaging system 500 with the viewing target, e.g., to view down an access port during port-based surgery, throughout the medical procedure. - Still referring to
FIG. 6, for example, the navigation system tracks the position and orientation of the positioning system and/or the imaging system 500 either collectively or independently. Using this information, as well as tracking of the access port, the navigation system determines the desired joint positions for the positioning system so as to maneuver the imaging system 500 to the appropriate position and orientation to maintain alignment with the viewing target, e.g., the longitudinal axes of the imaging system 500 and the access port being aligned. This alignment is maintained throughout the medical procedure automatically, without requiring explicit control input. In some examples, the operator is able to manually move the positioning system and/or the imaging system 500, e.g., after actuation of a manual release button. During such manual movement, the navigation system continues to track the position and orientation of the positioning system and/or the imaging system 500. After completion of manual movement, the navigation system, e.g., in response to user input, such as using a foot pedal, indicating that manual movement is complete, repositions and reorients the positioning system and the imaging system 500 to regain alignment with the access port. - Still referring to
FIG. 6, the controller 530 uses information about the position and orientation of the imaging system 500 to perform autofocusing. For example, the controller 530 determines the WD between the imaging system 500 and the viewing target; thus determines the desired positioning of the focus optics 515, e.g., using appropriate equations to calculate the appropriate positioning of the focus optics 515 to achieve a focused image; and moves the focus optics 515, using the focus actuator 525, in order to bring the image into focus. For example, the position of the viewing target is determined by a navigation system. The WD is determined by the controller 530 using information, e.g., received from the navigation system, from the positioning system, or from another external system, about the position and orientation of the imaging system 500 and/or the positioning system relative to the viewing target. In some examples, the WD is determined by the controller 530 using an infrared light (not shown) mounted on or near the distal end of the imaging system 500. - Still referring to
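The "appropriate equations" mapping working distance to focus-optics position are not given in the text; a minimal sketch using the thin-lens equation is shown below as one plausible model. The function name, the default focal length, and the interpretation of the result as stage extension are all illustrative assumptions; a real optical train would use calibrated optics-specific equations.

```python
def focus_position_for_wd(working_distance_mm, focal_length_mm=50.0):
    """Illustrative thin-lens model for WD-based autofocus.

    Thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = f*d_o / (d_o - f).
    The returned value is the focus-stage extension beyond the focal
    length needed to form a sharp image of an object at the given WD.
    """
    d_o = working_distance_mm
    f = focal_length_mm
    if d_o <= f:
        raise ValueError("working distance must exceed the focal length")
    d_i = f * d_o / (d_o - f)  # image distance
    return d_i - f             # required stage extension in mm
```

As the WD grows, the required extension shrinks toward zero (focus at infinity), matching the intuition that distant targets need less lens travel.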
FIG. 6, in some examples, the controller 530 may perform autofocusing without information about the position and orientation of the imaging system 500. For example, the controller 530 controls the focus actuator 525 to move the focus optics 515 into a range of focus positions and controls the camera 535 to capture image data at each focus position. The controller 530 may then perform image processing on the captured images to determine which focus position has the sharpest image and determine this focus position to be the desired position of the focus optics 515. The controller 530 then controls the focus actuator 525 to move the focus optics 515 to the desired position. Any other autofocus routine, such as those suitable for handheld cameras, is implemented by the controller 530 as appropriate. - Still referring to
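The sweep-and-pick-sharpest routine described above can be sketched in a few lines. The gradient-energy sharpness metric and the `capture_at` callback are hypothetical stand-ins (the disclosure does not specify a metric); real systems often use a variance-of-Laplacian or similar contrast measure.

```python
def gradient_energy(image_row):
    # Sum of squared differences between neighboring pixels:
    # a sharper (higher-contrast) image scores higher.
    return sum((b - a) ** 2 for a, b in zip(image_row, image_row[1:]))

def autofocus_sweep(capture_at, focus_positions):
    """Capture an image at each candidate focus position and return the
    position that produced the sharpest image (contrast-based autofocus)."""
    best_pos, best_score = None, float("-inf")
    for pos in focus_positions:
        score = gradient_energy(capture_at(pos))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

The controller would then drive the focus actuator to the returned position.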
FIG. 6, in some examples, the viewing target is dynamically defined by the surgeon, e.g., using a user interface provided by a workstation, by touching the desired target on a touch-sensitive display, by using eye or head tracking to detect a point at which the surgeon's gaze is focused, and/or by voice command; and the imaging system 500 performs autofocusing to dynamically focus the image on the defined viewing target, thereby enabling the surgeon to focus an image on different points within a FoV, without changing the FoV, and without having to manually adjust the focus of the imaging system 500. Autofocusing is performable by way of a surgeon or, alternatively, by way of the controller 530. - Still referring to
FIG. 6 and ahead to FIG. 11, in some examples, the imaging system 500 is configured to perform autofocusing relative to an instrument being used in the medical procedure. An example of this feature is shown in FIG. 11. For example, the position and orientation of a medical instrument, such as a tracked pointer tool 222, is determined; and the controller 530 performs autofocusing to focus the captured image on a point defined relative to the medical instrument. In the examples shown in FIG. 11, the tracked pointer tool 222 has a defined focus point at the distal tip of the pointer 222. As the tracked pointer tool 222 is moved, the WD between the optical imaging system 500 and the defined focus point (at the distal tip of the tracked pointer tool 222) changes (from D1 in the left image to D2 in the right image, for example). The autofocusing is performed in a manner similar to that described above; however, instead of autofocusing on a viewing target in the surgical field, the imaging system 500 focuses on a focus point that is defined relative to the medical instrument. The medical instrument is used in the surgical field to guide the imaging system 500 to autofocus on different points in the surgical field, as discussed below, thereby enabling a surgeon to change the focus within a FoV, e.g., focus on a point other than at the center of the FoV, without changing the FoV, and without needing to manually adjust the focus of the imaging system 500. Where the FoV includes objects at different depths, the surgeon uses the medical instrument, e.g., a pointer, to indicate to the imaging system 500 the object and/or depth desired for autofocusing. - Still referring to
FIG. 6, for example, the controller 530 may receive information about the position and orientation of a medical instrument. This position and orientation information is received from an external source, e.g., from an external system tracking the medical instrument or from the medical instrument itself, or is received from another component of the imaging system 500, e.g., an infrared sensor or a machine vision component of the imaging system 500. The controller 530 may determine a focus point relative to the position and orientation of the medical instrument. The focus point is predefined for a given medical instrument, e.g., the distal tip of a pointer, the distal end of a catheter, the distal end of an access port, the distal end of a soft tissue resector, the distal end of a suction, the target of a laser, or the distal tip of a scalpel, and is different for different medical instruments. The controller 530 may use this information, together with information about the known position and orientation of the imaging system 500, e.g., determined as discussed above, in order to determine the desired position of the focus optics 515 to achieve an image focused on the focus point defined relative to the medical instrument. - Still referring to
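The geometry of the paragraph above — a focus point at a fixed offset along the instrument axis from its tracked origin, then a working distance from the imaging system to that point — can be sketched as follows. The function names and the representation of orientation as a unit direction vector are assumptions for illustration; real tracking systems typically report full 6-DoF poses.

```python
import math

def instrument_focus_point(tool_position, tool_axis_unit, tip_offset_mm):
    # The focus point (e.g., a pointer's distal tip) lies a fixed,
    # instrument-specific offset along the tool axis from the tracked origin.
    return tuple(p + tip_offset_mm * u
                 for p, u in zip(tool_position, tool_axis_unit))

def working_distance(imaging_system_position, focus_point):
    # Euclidean distance between the imaging system and the focus point.
    return math.dist(imaging_system_position, focus_point)
```

The per-instrument `tip_offset_mm` would come from the preset definitions (e.g., a database keyed by instrument type) mentioned in the text.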
FIG. 6, in examples where the imaging system 500 is used with a navigation system 205 (see FIG. 2B), the position and orientation of a medical instrument, e.g., a tracked pointer tool 222 or a tracked port 210, is tracked and determined by the navigation system 205. The controller 530 of the imaging system 500 automatically autofocuses the imaging system 500 to a predetermined point relative to the tracked medical instrument, e.g., autofocuses on the tip of the tracked pointer tool 222 or on the distal end of the access port 210. Autofocusing is performed relative to other medical instruments and other tools that are used in the medical procedure. - Still referring to
FIG. 6, in some examples, the imaging system 500 is configured to perform autofocusing relative to a medical instrument only when a determination is made that the focus point relative to the medical instrument is within the FoV of the imaging system 500, whereby an unintentional change of focus is avoidable when a medical instrument is moved in the vicinity of, but outside, the FoV of the imaging system 500. In examples where the imaging system 500 is mounted on a movable support system, e.g., a robotic arm, if the focus point of the medical instrument is outside of the current FoV of the imaging system 500, the movable support system positions and orients the imaging system 500 to bring the focus point of the medical instrument within the FoV of the imaging system 500, in response to input, e.g., in response to user command via a user interface or voice input, or via activation of a foot pedal. - Still referring to
FIG. 6, the imaging system 500 is configured to implement a small time lag before performing autofocus relative to a medical instrument in order to avoid erroneously changing focus while the focus point of the medical instrument is brought into, and out of, the FoV. For example, the imaging system 500 is configured to autofocus on the focus point only after the focus point has been substantially stationary for a predetermined length of time, e.g., approximately 0.5 second to approximately 1 second. In some examples, the imaging system 500 is also configured to perform zooming with the focus point as the zoom center. For example, while a focus point is in the FoV, or after autofocusing on a certain point in the FoV, the user may provide command input, e.g., via a user interface, voice input, or activation of a foot pedal, to instruct the imaging system 500 to zoom in on the focus point. The controller 530 then positions the zoom optics 510 accordingly to zoom in on the focus point. Where appropriate, the positioning system (if the imaging system 500 is mounted on a positioning system) automatically repositions the imaging system 500 as needed to center the zoomed-in view on the focus point. - Still referring to
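The stationarity time lag described above is essentially a debounce. A minimal sketch is given below; the class name, the 2 mm stationarity tolerance, and the 0.75 s default dwell are illustrative assumptions (the text only specifies a dwell of roughly 0.5 to 1 second).

```python
import math

class FocusDebouncer:
    """Fire autofocus only after the instrument-defined focus point has
    remained substantially stationary for a dwell time (e.g., 0.5-1 s)."""

    def __init__(self, dwell_s=0.75, tolerance_mm=2.0):
        self.dwell_s = dwell_s            # required stationary time
        self.tolerance_mm = tolerance_mm  # motion below this counts as stationary
        self._anchor = None               # last resting position
        self._anchor_t = None             # time that position was first seen

    def update(self, point_mm, t_s):
        # Returns True when autofocus should be triggered for this point.
        if self._anchor is None or math.dist(point_mm, self._anchor) > self.tolerance_mm:
            # The point moved: restart the dwell timer at the new position.
            self._anchor, self._anchor_t = point_mm, t_s
            return False
        return (t_s - self._anchor_t) >= self.dwell_s
```

The controller would call `update` on each tracking sample and refocus only when it returns True, so a pointer swept through the FoV never grabs focus.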
FIG. 6, in some examples, the imaging system 500 automatically changes between different autofocus modes. For example, if the current FoV does not include any focus point defined by a medical instrument, the controller 530 may perform autofocus based on preset criteria, e.g., to obtain the sharpest image or to focus on the center of the FoV. When a focus point defined by a medical instrument is brought into the FoV, the controller 530 may automatically switch modes to autofocus on the focus point. In some examples, the imaging system 500 changes between different autofocus modes in response to user input, e.g., in response to user command via a user interface, voice input, or activation of a foot pedal. In various examples of autofocusing, whether or not relative to a medical instrument, the imaging system 500 is configured to maintain the focus as the zoom is adjusted. - Still referring to
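The mode-selection logic above reduces to a small precedence rule; the sketch below uses hypothetical mode names ("instrument", "preset") and a `user_override` parameter that are not from the disclosure.

```python
def select_autofocus_mode(focus_point_in_fov, user_override=None):
    """Pick an autofocus mode: explicit user commands (UI, voice, foot
    pedal) win; otherwise track an instrument-defined focus point when
    one is in the FoV, falling back to preset criteria (e.g., sharpest
    image or center of FoV) when none is."""
    if user_override is not None:
        return user_override
    return "instrument" if focus_point_in_fov else "preset"
```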
FIG. 6, in some examples, the imaging system 500 generates a depth map (not shown). This is performed by capturing images of the same FoV, wherein the imaging system 500 focuses on points at a plurality of depths, e.g., different depths, to simulate 3D depth perception. For example, the imaging system 500 performs autofocusing through a predefined depth range, e.g., through a depth of approximately 1 cm, and captures focused images at a plurality of distinct or different depths, e.g., at increments of approximately 1 mm, through a depth range, e.g., the predefined depth range. The plurality of images captured at the corresponding plurality of different depths is transmitted to an external system, e.g., an image viewing workstation, wherein the plurality of images is aggregated into a set of depth images to form a depth map for the same FoV. The depth map provides focused views of the FoV, at different depths, and comprises contours, color-coding, and/or other indicators of different depths. The external system (not shown) provides a user interface (not shown) that allows a user to navigate through the depth map. - Still referring to
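One common way to aggregate such a focus stack into a depth map — assigning each pixel the depth at which it appeared sharpest — is sketched below. This depth-from-focus aggregation step is an assumption; the disclosure only states that the images are aggregated into a set of depth images, without specifying the algorithm.

```python
def depth_map_from_stack(stack):
    """Aggregate a focus stack into a per-pixel depth map.

    stack: list of (depth_mm, sharpness_image) pairs for the same FoV,
    where sharpness_image is a 2D list of per-pixel sharpness scores.
    Each output pixel holds the depth at which its sharpness peaked.
    """
    depths = [d for d, _ in stack]
    rows = len(stack[0][1])
    cols = len(stack[0][1][0])
    out = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            scores = [img[y][x] for _, img in stack]
            out[y][x] = depths[scores.index(max(scores))]
    return out
```

For the ~1 cm range at ~1 mm increments described in the text, `stack` would hold roughly ten images; the resulting map can then be rendered with contours or color-coding.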
FIG. 6, in some examples, the optical imaging system 500 could be configured with a relatively large DoF. The 3D scanner 545 is used to create a depth map of the viewed area; and the depth map is registered to the image captured by the camera 535. Image processing is performed, e.g., using the controller 530 or an external processor, to generate a pseudo-3D image, for example, by visually encoding, e.g., using color, artificial blurring, or other visual symbols, different parts of the captured image according to the depth information from the 3D scanner 545. - Referring to
FIGS. 7 and 8, together, these diagrams illustrate, in alternate perspective views, an example embodiment of the imaging system 500, in accordance with an embodiment of the present disclosure. In this example, the imaging system 500 is shown mounted to the positioning system 208, e.g., a robotic arm, of a navigation system. The imaging system 500 is shown with a housing 555 that encloses the zoom and focus optics, the zoom and focus actuators, the camera, the controller, and the 3D scanner. The housing is provided with a frame 560 on which trackable markers are mounted to enable tracking by the navigation system. The imaging system 500 communicates with the navigation system via a cable 565 (cutaway view in FIG. 8). The distal end of the imaging system 500 is provided with light sources 570. - The example shows four broad-spectrum LEDs; however, more or fewer light sources can be used, of any suitable type. Although the
light sources 570 are shown provided surrounding the aperture 553 of the imaging system 500, in other examples, the light source(s) 570 is located elsewhere on the imaging system 500. The distal end of the imaging system 500 further has openings 575 for the cameras of the integrated 3D scanner. A support connector 580 for mounting the imaging system 500 to the positioning system 208 is also shown, as well as the frame 560 for mounting trackable markers. - Referring to
FIG. 9, this flow diagram illustrates an example method 900 of autofocusing during a medical procedure, in accordance with an embodiment of the present disclosure. The example method 900 is performed by way of an example optical imaging system, as disclosed herein. The method 900 comprises: determining the position and orientation of the imaging system, as indicated by block 905, wherein determining the position and orientation of the imaging system is performed by tracking the imaging system, by performing calibration, or by tracking the positioning system on which the imaging system is mounted, for example; determining the WD between the imaging system and the imaging target, as indicated by block 910, e.g., wherein determining the position of the imaging target is performed by a navigation system, and wherein information relating to the position of the imaging target is used together with the position and orientation information of the imaging system to determine the WD; determining the desired position of the focus optics in order to achieve a focused image, as indicated by block 915; and controlling the focus actuator, e.g., by a controller of the imaging system, to position the focus optics at the desired position, as indicated by block 920, whereby a focused image is capturable, for example, by using a camera of the optical imaging system. - Referring to
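The four blocks of the method can be sketched as a simple pipeline. The `nav` interface, `focus_model` callback, and `focus_actuator` object below are hypothetical stand-ins for the navigation system, focus equations, and focus actuator described in the text.

```python
import math

def autofocus_method_900(nav, focus_model, focus_actuator):
    # Block 905: determine the position and orientation of the imaging system.
    cam_pos, cam_orientation = nav.imaging_system_pose()
    # Block 910: determine the WD between the imaging system and the target.
    wd = math.dist(cam_pos, nav.target_position())
    # Block 915: determine the focus-optics position for a focused image.
    desired = focus_model(wd)
    # Block 920: control the focus actuator to reach that position.
    focus_actuator.move_to(desired)
    return desired
```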
FIG. 10, this flow diagram illustrates an example method 1000 of autofocusing relative to a medical instrument during a medical procedure, in accordance with an embodiment of the present disclosure. The example method 1000 is performable using an example optical imaging system as disclosed herein. The example method 1000 is similar to the example method 900. The example method 1000 comprises: determining the position and orientation of the imaging system, as indicated by block 1005, wherein determining the position and orientation of the imaging system is performable by tracking the imaging system, by performing calibration, or by tracking the positioning system on which the imaging system is mounted, for example; determining the position and orientation of the medical instrument, as indicated by block 1010, wherein determining the position and orientation of the medical instrument is performed by tracking the medical instrument, e.g., using a navigation system, by sensing the medical instrument, e.g., using an infrared or machine vision component of the imaging system, or by any other suitable technique; determining the focus point relative to the medical instrument, as indicated by block 1015, wherein determining the focus point comprises looking up preset definitions, e.g., stored in a database, of focus points for different medical instruments, and calculating the focus point for the particular medical instrument being used; determining the WD between the imaging system and the focus point, as indicated by block 1020; determining the desired position of the focus optics in order to achieve a focused image, as indicated by block 1025; and controlling the focus actuator, e.g., by a controller of the imaging system, to position the focus optics at the desired position, as indicated by block 1030, whereby a focused image is capturable, for example, using a camera of the optical imaging system. - Referring to
FIG. 11, this set of diagrams illustrates, in perspective views, some examples of the imaging system 500 configured to perform autofocusing relative to an instrument used in the medical procedure, in accordance with an embodiment of the present disclosure. For example, the position and orientation of a medical instrument, such as a tracked pointer tool 222, is determined; and the controller 530 performs autofocusing to focus the captured image on a point defined relative to the medical instrument. In the examples shown in FIG. 11, the tracked pointer tool 222 has a defined focus point at the distal tip of the pointer 222. As the tracked pointer tool 222 is moved, the WD between the optical imaging system 500 and the defined focus point (at the distal tip of the tracked pointer tool 222) changes (from D1 in the left image to D2 in the right image, for example). The autofocusing is performed in a manner similar to that described above; however, instead of autofocusing on a viewing target in the surgical field, the imaging system 500 focuses on a focus point that is defined relative to the medical instrument. The medical instrument is used in the surgical field to guide the imaging system 500 to autofocus on different points in the surgical field, as discussed above, thereby enabling a surgeon to change the focus within a FoV, e.g., focus on a point other than at the center of the FoV, without changing the FoV, and without needing to manually adjust the focus of the imaging system 500. Where the FoV includes objects at different depths, the surgeon uses the medical instrument, e.g., a pointer, to indicate to the imaging system 500 the object and/or depth desired for autofocusing. - Referring back to
FIG. 1 through FIG. 11, the example methods 900 and 1000 are performable using the example systems as disclosed herein. - Referring to
FIG. 12A through FIG. 12C, together, these diagrams illustrate, in perspective views, a surgeon hand Hs operating a 3D navigation system 1200 in relation to an interrogation volume Vi, comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure. The at least one proprioception feature comprises at least one communication feature for providing 3D (depth) information to the surgeon. The at least one communication feature comprises at least one of at least one active tool 140, such as a tracked tool, as discussed above, at least one camera (not shown), and software (not shown) for generating a 3D perception, e.g., by providing a combination of perceivable signals, the perceivable signals relating to at least one sense, such as touch (haptic feedback), e.g., a vibration, vision (visual cues), e.g., light indicators, and sound (audio cues), e.g., a beeping sound. The perceivable signal combination comprises at least two perceivable signals, e.g., providing a plurality of sensory inputs in combination with 3D feedback (beyond the visual cues), readily perceivable by a surgeon. - Still referring to
FIG. 12A through FIG. 12C, for example, the systems and methods use audio-haptic, visual-acoustic, or any combination of visual, haptic, and acoustic feedback, signals, or cues to provide a surgeon with a depth indication in relation to each 2D view of a scene, e.g., in an interrogation volume. In another example, the systems and methods use acoustic feedback comprising a periodic beep along a distance from a given surface, wherein the periodic beep comprises a reducing period as a function of the tool, e.g., the active tool 140, traveling from the given surface 800 to a patient, an anatomical target 141, or a tissue intended for resection (not shown), and wherein the period approaches zero at a point where the tool, e.g., the active tool 140, touches the patient, e.g., at the given surface 800, the anatomical target 141, or the tissue intended for resection (not shown). Thus, the 3D navigation system 1200 of the present disclosure is configured to provide depth information to a surgeon in the absence of stereo imaging. - Referring to
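The distance-to-period relationship described above can be sketched with a simple mapping. The linear shape, the 1 s maximum period, and the 100 mm range are illustrative assumptions; the disclosure only requires that the period shrink with remaining distance and reach zero (a constant tone) at contact.

```python
def beep_period_s(distance_mm, max_period_s=1.0, range_mm=100.0):
    """Beep period as a function of the tool's remaining distance to the
    reference surface/target: shrinks as the tool approaches and reaches
    zero (continuous beep) at contact."""
    d = max(0.0, min(distance_mm, range_mm))  # clamp to the audible range
    return max_period_s * d / range_mm
```

An audio loop would re-trigger the beep every `beep_period_s(d)` seconds, switching to a constant tone when the period hits zero.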
FIG. 12A, for example, this diagram illustrates a perspective view of a surgeon hand Hs operating a 3D navigation system 1200 in relation to an interrogation volume Vi, comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure. A surgeon works in a surgical field or an interrogation volume Vi containing "white" matter W and vasculature R of the patient. The "black box" or the interrogation volume Vi may represent a port, a portion of the patient anatomy, or other structure defining or containing internal anatomical or resectable parts. Using a tracked pointer tool 142, the surgeon defines a plane within the reference frame or the interrogation volume Vi by indicating either a point or a number of points on the anatomical parts or other structure intended for use as "landmarks" or "barriers" to facilitate accurately determining positions thereof. In the 3D navigation system 1200, tracking of the tracked tool 140, e.g., via the tracked pointer tool 142, is performable by at least one technique, such as sonar tracking, ultrasonic tracking, and optical tracking. - Referring to
FIG. 12B, for example, this diagram illustrates a perspective view of a surgeon hand Hs operating a 3D navigation system 1200 in relation to an interrogation volume Vi, comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure. The surgeon defines a plane, such as the reference plane 800, in accordance with an embodiment of the present disclosure. The reference plane 800 defines a “zero” point by which location or depth of landmarks, barriers, or targets and their relative positions are determinable. In addition, in some embodiments, either a reference plane or a reference volume is definable, e.g., wherein frequency is 3D location-dependent. - Referring to
FIG. 12C, for example, this diagram illustrates a perspective view of a surgeon hand Hs operating a 3D navigation system 1200 in relation to an interrogation volume Vi, comprising at least one proprioception feature, in accordance with some embodiments of the present disclosure. For example, the 3D navigation system 1200 comprises at least one communication feature, such as an audio sensory device and a visual sensory device, for providing 3D information (including depth information) to the surgeon. In this example, the surgeon has a 2D view of the surgical field of interest, but the surgeon also has a depth cue provided by a periodic or persistent beep indicating a position P of the tracked pointer tool 142 relative to an intraoperatively defined plane, such as the reference plane 800, wherein the position P is defined by coordinates x, y, and z as related to both the reference plane 800 and the boundaries of the interrogation volume Vi. - Still referring to
FIG. 12C (anatomy removed for illustrative purposes only) in relation to FIG. 12A, when the active tool 140 arrives at the reference plane 800 at a location L1, the audio sensory device (not shown) emits a sound, such as an audible cue, e.g., a constant beep (as the periodic beep then has a period = 0). However, when the surgeon moves the active tool 140 to another plane at a location L2, the audio sensory device emits a sound, such as an audible cue, e.g., a periodic beep (as the periodic beep then has a period > 0). The period increases, thereby producing an incremental beeping sound, and thereby facilitating gauging a distance in relation to the plane having the location L1. - Referring back to
FIGS. 12A through 12C, an example of the 3D navigation system 1200 in operation is illustrated. However, a plethora of other operational applications are encompassed by the present disclosure. For example, in other embodiments of the present disclosure, the 3D navigation system 1200 comprises a dull pressure spring (not shown) for indicating a distance from the reference plane 800 at a location L1 based on a pressure experienced by the spring. Alternatively, in another embodiment, the active tool 140 is embeddable with an arrangement of strip light-emitting diodes (LEDs), e.g., lengthwise embeddable, the arrangement, e.g., of activated LEDs, configured to shorten and lengthen based on the distance from the reference plane 800 at a location L1. In yet another embodiment, the location L1 of the reference plane 800 is importable into the 3D navigation system 1200, e.g., via a user interface (UI) (not shown) for further assisting the surgeon. - Referring to
FIG. 13, this diagram illustrates, in a perspective view, an optical imaging system 500′ using a 3D navigation system 1200, capable of enhanced autofocusing relative to a medical instrument, e.g., a tracked pointer tool 222, in accordance with an alternative embodiment of the present disclosure. The imaging system 500′ is configured to perform enhanced autofocusing relative to an instrument, e.g., a tracked pointer tool 222, used in the medical procedure, by way of example only. For example, the position and orientation of a medical instrument, such as a tracked pointer tool 222, is determined; and the controller 530 performs enhanced autofocusing to focus the captured image on a point defined relative to the medical instrument. The optical imaging system 500′ comprises an optical imaging assembly and at least one detector operable with the optical imaging assembly. The at least one detector of the optical imaging assembly comprises at least one of a single camera system and a dual camera system (not shown). - Still referring to
FIG. 13, the tracked pointer tool 222 has a defined focus point at the distal tip of the tracked pointer tool 222. As the tracked pointer tool 222 is moved, the WD between the optical imaging system 500′ and the defined focus point (at the distal tip of the tracked pointer tool 222) changes (from D1 in the left image to D2 in the right image, for example). The enhanced autofocusing is performed in a manner similar to that, as above described; however, instead of autofocusing on a viewing target in the surgical field, the optical imaging system 500′ focuses on a focus point that is defined relative to the medical instrument. - Still referring to
FIG. 13, the medical instrument is used in the surgical field to guide the optical imaging system 500′ to autofocus on different points in the surgical field, as below discussed, thereby enabling a surgeon to change the focus within a FoV, e.g., focus on a point other than at the center of the FoV, without changing the FoV, and without needing to manually adjust the focus of the optical imaging system 500′. Where the FoV includes objects at different depths, the surgeon uses the medical instrument, e.g., a pointer, to indicate to the optical imaging system 500′ the object and/or depth desired for enhanced autofocusing. - Still referring to
FIG. 13, the optical imaging system 500′ is configured to use a method of enhanced autofocusing, e.g., by way of the 3D navigation system 1200. The optical imaging system 500′ comprises at least one of: (a) a single array of detectors, such as a plurality of video cameras, (b) a pair of detectors, such as in a video loop configuration and a pair of video cameras, (c) a pair of detectors capable of stereovision, (d) two detectors, wherein each detector comprises at least one of a distinct resolution and a distinct color, and whereby differentiation between each view of a stereoscopic view is enabled, (e) a device configured to render an image on a display, for updating the image on the display, and for tracking a tip of a tool, (f) a sensory device configured to detect a plurality of sensory input signals, analyze the plurality of sensory input signals, translate or transform the plurality of sensory input signals into a plurality of sensory output signals, and transmit the plurality of sensory output signals, wherein the plurality of sensory output signals comprises at least two of a visual feedback, a haptic feedback, and an audio feedback, and (g) at least one ultra-high-definition (UHD) detector, such as at least one UHD camera disposed in relation to a distal end of a robotic arm, with a thin focus frame for facilitating movement of a focal plane by way of moving a tool, such as the tracked pointer tool 222, whereby a 3D image is enhanceable. - Still referring to
FIG. 13, if the optical imaging system 500′ comprises two detectors for achieving a stereoscopic view, e.g., an inferred view using two detectors, 3D navigation is achievable, e.g., via virtual 3D navigation, wherein a tool tip is viewable relative to an image rendered on a display, wherein the plurality of sensory output signals comprises a visual feedback and a haptic feedback, wherein the haptic feedback provides a sense of feel, whereby the sense of feel provides a surgeon with a sense of three-dimensionality. The sensory device comprises four sensors, for example, to enhance the haptic feedback provided to the surgeon. The tool itself is “active,” wherein the plurality of sensory output signals may emanate from the tool itself. The active tool itself, thus, comprises the sensory device. The sensory device further comprises at least one visual indicator, such as at least one light indicator, the at least one visual indicator activable when the tool approaches a target or a barrier, e.g., in response to sensing proximity thereto. - Still referring to
FIG. 13, the haptic feedback comprises a vibration, for example, emanating from the tool itself, whereby the sense of feel is immediate. At least one of the visual feedback, the audio feedback, and the haptic feedback further comprises at least one of variable amplitude and variable frequency for providing the surgeon with an indication as to an appropriate degree of contact with the tissue. The optical imaging system 500′, using the 3D navigation system 1200, utilizes tools and sensors, such as two detectors disposed in relation to a device positioning system (DPS), e.g., a drive system comprising a robotic arm, for providing and enhancing 3D navigation. The optical imaging system 500′, using the 3D navigation system 1200, integrates the foregoing features. - Referring back to
FIG. 12A through FIG. 13, a 3D navigation system 1200 for enhancing feedback during a medical procedure comprises: an optical imaging system comprising: an optical assembly comprising movable zoom optics and movable focus optics; a zoom actuator for positioning the zoom optics; a focus actuator for positioning the focus optics; a controller for controlling the zoom actuator and the focus actuator in response to received control input; at least one detector for capturing an image of at least one of a target and an obstacle, the at least one detector operable with the optical assembly; and a proprioception feature operable with the optical imaging system for generating a 3D perception, the proprioception feature comprising a communication feature for providing 3D information, the 3D information comprising real-time depth information in relation to real-time information, such as real-time planar information and real-time volumetric information, in relation to an interrogation volume, the zoom optics and the focus optics independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and the optical imaging system configured to operate at a minimum WD from at least one of the target and the obstacle, the WD defined between an aperture of the optical assembly and at least one of the target and the obstacle, whereby feedback during the medical procedure is enhanceable, in accordance with an embodiment of the present disclosure. By enhancing feedback during the medical procedure, a surgeon's “feel” during the medical procedure is maximized, a surgeon's fatigue is minimized, a patient's tissue trauma is minimized, and medical or surgical error is minimized. In the embodiments of the present disclosure, the three-dimensional feedback, e.g., touch, sight, and sound feedback, is used in conjunction with sensed information as a function of the three-dimensional spatial coordinates, e.g., x, y, and z coordinates. - Referring to
FIG. 14, this flow diagram illustrates a method M1 of fabricating a 3D navigation system 1200 for enhancing feedback during a medical procedure, in accordance with an embodiment of the present disclosure. The method M1 comprises: providing an optical imaging system, as indicated by block 1401, providing the optical imaging system comprising: providing an optical assembly, as indicated by block 1402, providing the optical assembly comprising providing movable zoom optics and providing movable focus optics, as indicated by block 1403; providing a zoom actuator for positioning the zoom optics, as indicated by block 1404; providing a focus actuator for positioning the focus optics, as indicated by block 1405; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input, as indicated by block 1406; providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly, as indicated by block 1407; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception, providing the proprioception feature comprising providing a communication feature configured to provide 3D information, the 3D information comprising real-time depth information in relation to real-time planar information in relation to an interrogation volume, as indicated by block 1408, providing the zoom optics and providing the focus optics comprising providing the zoom optics and providing the focus optics as independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and providing the optical imaging system comprising configuring the optical imaging system to operate at a minimum WD from at least one of the target and the obstacle, the WD defined between an aperture of the optical assembly and at least one
of the target and the obstacle, whereby feedback during the medical procedure is enhanceable. - Referring to
FIG. 15, this flow diagram illustrates a method M2 of enhancing feedback during a medical procedure by way of a 3D navigation system 1200, in accordance with an embodiment of the present disclosure. The method M2 comprises: providing the 3D navigation system, as indicated by block 1500, providing the 3D navigation system comprising: providing an optical imaging system, as indicated by block 1501, providing the optical imaging system comprising: providing an optical assembly, as indicated by block 1502, providing the optical assembly comprising providing movable zoom optics and providing movable focus optics, as indicated by block 1503; providing a zoom actuator for positioning the zoom optics, as indicated by block 1504; providing a focus actuator for positioning the focus optics, as indicated by block 1505; providing a controller for controlling the zoom actuator and the focus actuator in response to received control input, as indicated by block 1506; providing at least one detector for capturing an image of at least one of a target and an obstacle, providing the at least one detector comprising providing the at least one detector as operable with the optical assembly, as indicated by block 1507; and providing a proprioception feature operable with the optical imaging system for generating a 3D perception, providing the proprioception feature comprising providing a communication feature for providing 3D information, the 3D information comprising real-time depth information in relation to real-time planar information in relation to an interrogation volume, providing the communication feature comprising providing at least one sensory input device and providing at least one sensory output device, and providing the communication feature comprising providing the communication feature as operable by way of a set of executable instructions storable on a non-transitory memory device, as indicated by block 1508, providing the zoom optics and providing the focus optics
comprising providing the zoom optics and providing the focus optics as independently movable by the controller by way of the zoom actuator and the focus actuator, respectively, and providing the optical imaging system comprising configuring the optical imaging system to operate at a minimum distance WD from at least one of the target and the obstacle, the minimum distance WD defined between an aperture of the optical assembly and at least one of the target and the obstacle; receiving at least one input signal by the at least one sensory input device, as indicated by block 1509; and providing at least one output signal by the at least one sensory output device, as indicated by block 1510, thereby enhancing feedback during the medical procedure. - While some embodiments or aspects of the present disclosure are implemented in fully functioning computers and computer systems, other embodiments or aspects are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer readable media used to actually effect the distribution.
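As one illustrative software sketch (not taken from the disclosure), the distance-dependent beep described with reference to FIG. 12A through FIG. 12C, in which the period shrinks to zero (a constant tone) as the active tool 140 reaches the given surface 800, may be expressed as a simple mapping from tool-to-surface distance to beep period. The 50 mm range and 1 s maximum period below are assumed values:

```python
def beep_period(distance_mm: float, max_distance_mm: float = 50.0,
                max_period_s: float = 1.0) -> float:
    """Map the tracked tool's distance from the reference surface to a
    beep period: the period shrinks linearly as the tool approaches the
    surface and reaches zero (a continuous tone) at contact.

    The 50 mm range and 1 s ceiling are illustrative assumptions.
    """
    # Clamp the distance to the working range before scaling.
    distance_mm = max(0.0, min(distance_mm, max_distance_mm))
    return max_period_s * (distance_mm / max_distance_mm)
```

Any monotonic mapping (e.g., logarithmic near the surface) would serve the same cue; the linear form is chosen here only for clarity.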
- At least some aspects disclosed are embodied, at least in part, in software. That is, some disclosed techniques and methods are carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache, or a remote storage device.
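Continuing the illustration, the reference plane 800 of FIG. 12B, defined intraoperatively from landmark points indicated with the tracked pointer tool 142, and the signed depth of a tool tip relative to that plane (zero at location L1), may be sketched as follows. This sketch assumes three non-collinear landmark points and uses only standard vector arithmetic:

```python
import math

def plane_from_points(p1, p2, p3):
    """Fit a plane through three landmark points indicated with the
    tracked pointer; returns (point on plane, unit normal)."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    # Cross product of the two in-plane vectors gives the normal.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    mag = math.sqrt(sum(c * c for c in n))
    return p1, tuple(c / mag for c in n)

def depth_from_plane(tip, plane_point, normal):
    """Signed distance of the tool tip from the reference plane: zero at
    the plane (location L1), growing as the tool moves away (L2)."""
    return sum(n * (t - p) for t, p, n in zip(tip, plane_point, normal))
```

The sign of the returned depth distinguishes the two sides of the plane, which is what lets a cue differentiate approach from withdrawal.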
- A computer readable storage medium is used to store software and data which, when executed by a data processing system, cause the system to perform various methods or techniques of the present disclosure. The executable software and data are stored in various places including, for example, ROM, volatile RAM, non-volatile memory, and/or cache. Portions of this software and/or data are stored in any one of these storage devices.
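The alternative embodiment of FIGS. 12A through 12C, in which an arrangement of strip LEDs embedded in the active tool 140 shortens and lengthens with distance from the reference plane 800, may likewise be sketched as a mapping from distance to a number of activated LEDs. The 20-LED strip and 100 mm range are assumed values, not taken from the disclosure:

```python
def lit_led_count(distance_mm: float, strip_leds: int = 20,
                  max_distance_mm: float = 100.0) -> int:
    """Number of LEDs to activate along the tool shaft: the lit segment
    lengthens with distance from the reference plane and shortens to
    zero at the plane. All parameter values are illustrative."""
    # Normalize distance to [0, 1], then scale to the strip length.
    frac = max(0.0, min(distance_mm / max_distance_mm, 1.0))
    return round(strip_leds * frac)
```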
- Examples of computer-readable storage media may include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, and optical storage media (e.g., compact discs (CDs), digital versatile disks (DVDs)), among others. The instructions can be embodied in digital and analog communication links for electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, and the like. The storage medium may be the internet cloud or a computer readable storage medium such as a disc.
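The enhanced autofocusing of FIG. 13, in which the optical imaging system 500′ refocuses as the WD to the distal tip of the tracked pointer tool 222 changes from D1 to D2, may be sketched as follows. The controller interface (a `set_focus_distance` method) and the 2 mm dead-band are hypothetical, introduced only for illustration:

```python
import math

def working_distance(aperture_xyz, tip_xyz):
    """Euclidean distance between the optical assembly's aperture and
    the focus point defined at the tracked pointer's distal tip."""
    return math.dist(aperture_xyz, tip_xyz)

def refocus_if_moved(controller, aperture_xyz, tip_xyz,
                     last_wd, tolerance_mm=2.0):
    """Re-command the focus actuator only when the tracked tip has moved
    enough to change the working distance beyond a dead-band, avoiding
    focus hunting. The controller API here is a hypothetical stand-in."""
    wd = working_distance(aperture_xyz, tip_xyz)
    if abs(wd - last_wd) > tolerance_mm:
        controller.set_focus_distance(wd)
        return wd
    return last_wd
```

The dead-band reflects a practical design choice: tracking jitter should not drive continuous actuator motion.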
- Furthermore, at least some of the methods described herein are capable of being distributed in a computer program product comprising a computer readable medium that bears computer usable instructions for execution by one or more processors, to perform aspects of the methods described. The medium is provided in various forms such as, but not limited to, one or more diskettes, compact disks, tapes, chips, USB keys, external hard drives, wire-line transmissions, satellite transmissions, internet transmissions or downloads, magnetic and electronic storage media, digital and analog signals, and the like. The computer usable instructions may also be in various forms, including compiled and non-compiled code.
- At least some of the elements of the systems described herein are implemented by software, or a combination of software and hardware. Elements of the system that are implemented via software are written in a high-level procedural or object-oriented programming language or a scripting language. Accordingly, the program code is written in C, C++, J++, or any other suitable programming language and may comprise modules or classes, as is known to those skilled in object oriented programming. At least some of the elements of the system that are implemented via software are written in assembly language, machine language, or firmware as needed. In either case, the program code can be stored on storage media or on a computer readable medium that is readable by a general or special purpose programmable computing device having a processor, an operating system, and the associated hardware and software that are necessary to implement the functionality of at least one of the embodiments described herein. The program code, when read by the computing device, configures the computing device to operate in a new, specific, and predefined manner in order to perform at least one of the methods described herein.
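Finally, the variable-amplitude, variable-frequency haptic feedback of FIG. 13, indicating the degree of contact with tissue, may be sketched as a mapping from sensed contact force to vibration drive parameters. The force range and frequency band below are assumed values, not taken from the disclosure:

```python
def haptic_drive(contact_force_n: float, max_force_n: float = 2.0,
                 base_freq_hz: float = 80.0, max_freq_hz: float = 250.0):
    """Map sensed tool-tissue contact force to a vibration amplitude
    (normalized 0..1) and frequency, so the surgeon can gauge degree of
    contact. Force range and frequency band are illustrative."""
    # Normalize force to [0, 1]; amplitude tracks it directly, and the
    # frequency sweeps linearly across the assumed band.
    frac = max(0.0, min(contact_force_n / max_force_n, 1.0))
    amplitude = frac
    frequency = base_freq_hz + frac * (max_freq_hz - base_freq_hz)
    return amplitude, frequency
```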
- While the present disclosure describes various embodiments for illustrative purposes, such description is not intended to be limited to such embodiments. On the contrary, the applicant's teachings described and illustrated herein encompass various alternatives, modifications, and equivalents, without departing from the embodiments, the general scope of which is defined in the appended claims. Except to the extent necessary or inherent in the processes themselves, no particular order to steps or stages of methods or processes described in this disclosure is intended or implied. In many cases the order of process steps is varied without changing the purpose, effect, or import of the methods described.
- Information, as herein shown and described in detail, is fully capable of attaining the above-described object of the present disclosure as well as the presently preferred embodiment of the present disclosure, and is, thus, representative of the subject matter which is broadly contemplated by the present disclosure. The scope of the present disclosure fully encompasses other embodiments as well. In the appended claims, any reference to an element being made in the singular is not intended to denote “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described preferred embodiment and additional embodiments are hereby expressly incorporated by reference and are intended to be encompassed by the present disclosure as well as the appended claims.
- Moreover, no requirement exists for a system, apparatus, device, composition of matter, or method to address each and every problem sought to be resolved by the present disclosure, for such to be encompassed by the present disclosure as well as the appended claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public, regardless of whether the element, component, or method step is explicitly recited in the appended claims. However, various changes and modifications in form, material, work-piece, and fabrication material detail may be made without departing from the spirit and scope of the present disclosure, as set forth in the appended claims, and are also encompassed by the present disclosure.
- Generally, the present disclosure industrially applies to optical imaging systems. More particularly, the present disclosure industrially applies to optical imaging systems for use in image guided medical procedures. Even more particularly, the present disclosure industrially applies to optical imaging systems for use in image guided medical procedures involving a pointer tool.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/456,230 US20220079686A1 (en) | 2016-10-31 | 2021-11-23 | 3d navigation system and methods |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CA2016/051264 WO2018076094A1 (en) | 2016-10-31 | 2016-10-31 | 3d navigation system and methods |
US201916346498A | 2019-04-30 | 2019-04-30 | |
US17/456,230 US20220079686A1 (en) | 2016-10-31 | 2021-11-23 | 3d navigation system and methods |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/346,498 Continuation US11207139B2 (en) | 2016-10-31 | 2016-10-31 | 3D navigation system and methods |
PCT/CA2016/051264 Continuation WO2018076094A1 (en) | 2016-10-31 | 2016-10-31 | 3d navigation system and methods |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220079686A1 true US20220079686A1 (en) | 2022-03-17 |
Family
ID=62022928
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/346,498 Active 2037-04-24 US11207139B2 (en) | 2016-10-31 | 2016-10-31 | 3D navigation system and methods |
US17/456,230 Pending US20220079686A1 (en) | 2016-10-31 | 2021-11-23 | 3d navigation system and methods |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/346,498 Active 2037-04-24 US11207139B2 (en) | 2016-10-31 | 2016-10-31 | 3D navigation system and methods |
Country Status (4)
Country | Link |
---|---|
US (2) | US11207139B2 (en) |
CA (1) | CA3042091A1 (en) |
GB (1) | GB2571857B (en) |
WO (1) | WO2018076094A1 (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10828104B2 (en) * | 2014-09-15 | 2020-11-10 | Synaptive Medical (Barbados) Inc. | Surgical navigation system using image segmentation |
WO2018085694A1 (en) * | 2016-11-04 | 2018-05-11 | Intuitive Surgical Operations, Inc. | Reconfigurable display in computer-assisted tele-operated surgery |
US11058497B2 (en) * | 2017-12-26 | 2021-07-13 | Biosense Webster (Israel) Ltd. | Use of augmented reality to assist navigation during medical procedures |
US10705323B2 (en) | 2018-04-24 | 2020-07-07 | Synaptive Medical (Barbados) Inc. | Surgical microscope system with automatic zoom control |
CN112533556A (en) * | 2018-07-12 | 2021-03-19 | 深度健康有限责任公司 | System method and computer program product for computer-assisted surgery |
US12042163B2 (en) * | 2018-10-05 | 2024-07-23 | Acclarent, Inc. | Hollow tube surgical instrument with single axis sensor |
US20220079675A1 (en) * | 2018-11-16 | 2022-03-17 | Philipp K. Lang | Augmented Reality Guidance for Surgical Procedures with Adjustment of Scale, Convergence and Focal Plane or Focal Point of Virtual Data |
JP7189969B2 (en) * | 2019-01-21 | 2022-12-14 | オリンパス株式会社 | Image processing device, method of operating image processing device, and image processing program |
EP3696593A1 (en) | 2019-02-12 | 2020-08-19 | Leica Instruments (Singapore) Pte. Ltd. | A controller for a microscope, a corresponding method and a microscope system |
WO2020195928A1 (en) * | 2019-03-22 | 2020-10-01 | 川崎重工業株式会社 | Robot system |
JP2020157467A (en) * | 2019-03-22 | 2020-10-01 | 川崎重工業株式会社 | Robot system |
EP3805834B1 (en) * | 2019-10-10 | 2023-12-06 | Leica Instruments (Singapore) Pte. Ltd. | Optical imaging system and corresponding apparatus, method and computer program |
US20210298833A1 (en) * | 2020-03-30 | 2021-09-30 | Mitaka Kohki Co., Ltd. | Navigation auto focus system |
NL2026875B1 (en) * | 2020-11-11 | 2022-06-30 | Elitac B V | Device, method and system for aiding a surgeon while operating |
CN112515767B (en) * | 2020-11-13 | 2021-11-16 | 中国科学院深圳先进技术研究院 | Surgical navigation device, surgical navigation apparatus, and computer-readable storage medium |
US11974053B2 (en) * | 2021-03-29 | 2024-04-30 | Alcon, Inc. | Stereoscopic imaging platform with continuous autofocusing mode |
WO2023281372A2 (en) * | 2021-07-05 | 2023-01-12 | Moon Surgical Sas | Co-manipulation surgical system having optical scanners for use with surgical instruments for performing laparoscopic surgery |
US11832909B2 (en) | 2021-03-31 | 2023-12-05 | Moon Surgical Sas | Co-manipulation surgical system having actuatable setup joints |
US12042241B2 (en) | 2021-03-31 | 2024-07-23 | Moon Surgical Sas | Co-manipulation surgical system having automated preset robot arm configurations |
US11812938B2 (en) | 2021-03-31 | 2023-11-14 | Moon Surgical Sas | Co-manipulation surgical system having a coupling mechanism removeably attachable to surgical instruments |
US11844583B2 (en) | 2021-03-31 | 2023-12-19 | Moon Surgical Sas | Co-manipulation surgical system having an instrument centering mode for automatic scope movements |
US11819302B2 (en) | 2021-03-31 | 2023-11-21 | Moon Surgical Sas | Co-manipulation surgical system having user guided stage control |
WO2022208414A1 (en) | 2021-03-31 | 2022-10-06 | Moon Surgical Sas | Co-manipulation surgical system for use with surgical instruments for performing laparoscopic surgery |
CN113499166A (en) * | 2021-06-21 | 2021-10-15 | 西安交通大学 | Autonomous stereoscopic vision navigation method and system for corneal transplantation surgical robot |
EP4223250A1 (en) * | 2022-02-08 | 2023-08-09 | Leica Instruments (Singapore) Pte Ltd | Surgical microscope system and system, method and computer program for a microscope of a surgical microscope system |
EP4331664A1 (en) * | 2022-08-31 | 2024-03-06 | Vision RT Limited | A system for monitoring position of a patient |
US11839442B1 (en) | 2023-01-09 | 2023-12-12 | Moon Surgical Sas | Co-manipulation surgical system for use with surgical instruments for performing laparoscopic surgery while estimating hold force |
US11986165B1 (en) | 2023-01-09 | 2024-05-21 | Moon Surgical Sas | Co-manipulation surgical system for use with surgical instruments for performing laparoscopic surgery while estimating hold force |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040106916A1 (en) * | 2002-03-06 | 2004-06-03 | Z-Kat, Inc. | Guidance system and method for surgical procedures with improved feedback |
US20140107471A1 (en) * | 2011-06-27 | 2014-04-17 | Hani Haider | On-board tool tracking system and methods of computer assisted surgery |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060293557A1 (en) * | 2005-03-11 | 2006-12-28 | Bracco Imaging, S.P.A. | Methods and apparati for surgical navigation and visualization with microscope ("Micro Dex-Ray") |
EP2023842A2 (en) * | 2006-05-19 | 2009-02-18 | Mako Surgical Corp. | Method and apparatus for controlling a haptic device |
US8982203B2 (en) * | 2007-06-06 | 2015-03-17 | Karl Storz Gmbh & Co. Kg | Video system for viewing an object on a body |
US10070903B2 (en) * | 2008-01-09 | 2018-09-11 | Stryker European Holdings I, Llc | Stereotactic computer assisted surgery method and system |
CA2797302C (en) | 2010-04-28 | 2019-01-15 | Ryerson University | System and methods for intraoperative guidance feedback |
US9642606B2 (en) * | 2012-06-27 | 2017-05-09 | Camplex, Inc. | Surgical visualization system |
CH707486A1 (en) * | 2013-01-25 | 2014-07-31 | Axpo Kompogas Engineering Ag | Fermenter charging process, biogas plant and conversion process. |
CA2906414C (en) | 2013-03-15 | 2016-07-26 | Synaptive Medical (Barbados) Inc. | Systems and methods for navigation and simulation of minimally invasive therapy |
EP2967297B1 (en) | 2013-03-15 | 2022-01-05 | Synaptive Medical Inc. | System for dynamic validation, correction of registration for surgical navigation |
US11026750B2 (en) | 2015-01-23 | 2021-06-08 | Queen's University At Kingston | Real-time surgical navigation |
KR20240044536A (en) | 2015-02-25 | 2024-04-04 | 마코 서지컬 코포레이션 | Navigation systems and methods for reducing tracking interruptions during a surgical procedure |
-
2016
- 2016-10-31 US US16/346,498 patent/US11207139B2/en active Active
- 2016-10-31 CA CA3042091A patent/CA3042091A1/en active Pending
- 2016-10-31 GB GB1907704.9A patent/GB2571857B/en not_active Expired - Fee Related
- 2016-10-31 WO PCT/CA2016/051264 patent/WO2018076094A1/en active Application Filing
-
2021
- 2021-11-23 US US17/456,230 patent/US20220079686A1/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040106916A1 (en) * | 2002-03-06 | 2004-06-03 | Z-Kat, Inc. | Guidance system and method for surgical procedures with improved feedback |
US20140107471A1 (en) * | 2011-06-27 | 2014-04-17 | Hani Haider | On-board tool tracking system and methods of computer assisted surgery |
Also Published As
Publication number | Publication date |
---|---|
CA3042091A1 (en) | 2018-05-03 |
WO2018076094A1 (en) | 2018-05-03 |
US20190254757A1 (en) | 2019-08-22 |
GB2571857B (en) | 2022-05-04 |
GB201907704D0 (en) | 2019-07-17 |
US11207139B2 (en) | 2021-12-28 |
GB2571857A (en) | 2019-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220079686A1 (en) | 3d navigation system and methods | |
US11648064B2 (en) | Motorized full field adaptive microscope | |
US11826208B2 (en) | Dual zoom and dual field-of-view microscope | |
US11506876B2 (en) | Surgical optical zoom system | |
US11237373B2 (en) | Surgical microscope system with automatic zoom control | |
US20220031422A1 (en) | System and methods using a videoscope with independent-zoom for enabling shared-mode focusing | |
US10828114B2 (en) | Methods and systems for providing depth information | |
US11672609B2 (en) | Methods and systems for providing depth information | |
WO2016041051A1 (en) | End effector for a positioning device | |
WO2017143427A1 (en) | System and method for scope based depth map acquisition | |
US20230329804A1 (en) | Trackable retractor systems, apparatuses, devices, and methods | |
US20240085684A1 (en) | System and methods of concurrent white light and visible fluorescence visualization | |
US20240085682A1 (en) | System and methods of multiple fluorophore visualization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SYNAPTIVE MEDICAL INC., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYNAPTIVE MEDICAL (BARBADOS) INC.;REEL/FRAME:058947/0490 Effective date: 20200902 Owner name: SYNAPTIVE MEDICAL (BARBADOS) INC., BARBADOS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PIRON, CAMERON ANTHONY;REEL/FRAME:058194/0738 Effective date: 20161231 |
|
AS | Assignment |
Owner name: SYNAPTIVE MEDICAL (BARBADOS) INC., BARBADOS Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE SECOND ASSIGNOR, MICHAEL FRANK GUNTER WOOD, TO BE ADDED PREVIOUSLY RECORDED AT REEL: 058194 FRAME: 0738. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:PIRON, CAMERON ANTHONY;WOOD, MICHAEL FRANK GUNTER;SIGNING DATES FROM 20161219 TO 20161231;REEL/FRAME:058299/0571 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |