US10835344B2 - Display of preoperative and intraoperative images - Google Patents

Display of preoperative and intraoperative images

Info

Publication number
US10835344B2
Authority
US
United States
Prior art keywords
video
image
preoperative
preoperative image
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/159,424
Other versions
US20190110855A1 (en)
Inventor
Joëlle K. Barral
Martin Habbecke
Eric Johnson
Francois Brahic
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Verily Life Sciences LLC
Original Assignee
Verily Life Sciences LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Verily Life Sciences LLC
Priority to US16/159,424
Assigned to VERILY LIFE SCIENCES LLC (assignment of assignors' interest; see document for details). Assignors: BARRAL, JOËLLE K.; HABBECKE, MARTIN
Publication of US20190110855A1
Application granted
Publication of US10835344B2
Legal status: Active
Anticipated expiration

Classifications

    • A61B 1/000094 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, extracting biological structures
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/0005 Display arrangement combining images, e.g. side-by-side, superimposed or tiled
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B 1/045 Control thereof
    • A61B 34/20 Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30 Surgical robots
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/0016 Biomedical image inspection using an image reference approach involving temporal comparison
    • G06T 7/70 Determining position or orientation of objects or cameras
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • A61B 2090/363 Use of fiducial points
    • A61B 2090/365 Correlation of different images or relation of image positions in respect to the body; augmented reality, i.e. correlating a live optical image with another image
    • A61B 2090/367 Correlation of different images or relation of image positions in respect to the body; creating a 3D dataset from 2D images using position information
    • A61B 2090/371 Surgical systems with images on a monitor during operation, with simultaneous use of two cameras
    • A61B 2090/373 Surgical systems with images on a monitor during operation, using light, e.g. by using optical scanners
    • A61B 2090/3762 Surgical systems with images on a monitor during operation, using X-rays, e.g. fluoroscopy, using computed tomography systems [CT]
    • A61B 2090/378 Surgical systems with images on a monitor during operation, using ultrasound
    • G06T 2207/10056 Microscopic image
    • G06T 2207/10068 Endoscopic image
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10116 X-ray image
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/20221 Image fusion; image merging
    • G06T 2207/30004 Biomedical image processing
    • G06T 2219/004 Annotating, labelling
    • G06T 2219/2012 Colour editing, changing, or manipulating; use of colour codes
    • H04N 2005/2255
    • H04N 23/555 Constructional details for picking-up images in sites inaccessible due to their dimensions or hazardous conditions, e.g. endoscopes or borescopes

Definitions

  • FIG. 3 illustrates a method 300 of video-assisted surgery, in accordance with an embodiment of the disclosure. One of ordinary skill in the art having the benefit of the present disclosure will appreciate that blocks (301-309) in method 300 may occur in any order or even in parallel, and that blocks may be added to, or removed from, method 300 in accordance with the teachings of the present disclosure.
  • Block 301 shows providing at least one preoperative image stored in a memory.
  • The at least one preoperative image may include at least one of an X-ray image, a magnetic resonance image, a computerized tomography image, an ultrasound image, or the like.
  • The at least one image may also include a microscopy image or a pathology image.
  • The at least one preoperative image may include complex three dimensional models (e.g., a 3D reconstruction of a specific organ).
  • A preoperative image includes any image captured before a surgical step is performed (e.g., including an image captured during a surgery).
  • The surgeon may change the orientation of the at least one preoperative image to show the anatomical features in the preoperative image.
  • This way, the surgeon sees the optimal view of the preoperative image while performing surgery.
  • Block 303 illustrates capturing a video of a surgical area including anatomical features using a camera, where the preoperative image includes at least part of the surgical area.
  • The camera may be included in an endoscope, and the endoscope may be used by a doctor to perform surgery.
  • The camera and the controller may be included in, or coupled to, a surgical robot.
  • Block 305 describes displaying the video of the surgical area on the one or more displays.
  • The video of the surgical area may share a screen with the preoperative images, while in other embodiments the video of the surgical area may be displayed on its own display.
  • A display includes a number of different devices such as flat panel displays, virtual reality headsets, tablets, and the like.
  • Block 307 shows displaying the at least one preoperative image on the one or more displays at the same time as the video.
  • A location of the anatomical features (which may be identified using ICG or other contrast agents to visualize specific organs/anatomical structures and help with the localization of features) shown in the video is displayed as an accentuated region on the at least one preoperative image.
  • The location of the anatomical features with respect to the preoperative image may be tracked by identifying a surface element in the video, determining coordinates of the surface element in two successive frames in the video, and determining a change in the coordinates between the two successive frames.
  • In other words, a feature in the video can be identified, and using changes in position of the feature in the video, the location of the accentuated region on the preoperative image can be changed accordingly.
  • A controller may be used to identify the surface element, or a user may select the surface element with a user interface.
  • The accentuated region may be accentuated using at least one of bordering the accentuated region with a line, changing a color of the accentuated region, changing a brightness of the accentuated region, or labeling the accentuated region.
  • In some embodiments, the preoperative image is superimposed on the video of the surgical area (e.g., via partial transparency, or image blending). This way the anatomical features in the preoperative image are "accentuated" by being overlaid on the video feed. However, the anatomical features may also be accentuated via other techniques described elsewhere.
  • Block 309 illustrates changing a position of the accentuated region on the preoperative image(s) in real time, as the location of the anatomical features shown in the video changes over time.
  • In some embodiments, the preoperative image(s) include a three dimensional preoperative model, and displaying the preoperative image(s) on the one or more displays at the same time as the video includes changing an orientation of the three dimensional preoperative model as the location of the anatomical features shown in the video changes over time.
  • For example, the at least one preoperative image may include a 3D MRI scan. As the location in the body where the video is being captured changes (e.g., because the camera moved to show new organs), the orientation of the 3D MRI model may change to show the new video location.
  • Thus the surgeon is provided with the accentuated region highlighting the organs shown in the video, while the preoperative model orients itself to better show the organs in the preoperative image(s).
  • Changes to the preoperative image(s) may be achieved by the system recognizing different organs/anatomical features or fiducial markers (e.g., surgical clips or the like), using computer vision systems.
  • Recognition of organs/fiducial markers may be performed at least in part by machine learning algorithms (e.g., a convolutional neural network trained to recognize specific features, recurrent neural network, long short-term memory network, or the like), and object tracking may be used to shift views.
  • The user may select an order of images the surgeon wishes to see, and "tie" these images to various steps in the surgery. For instance, in a surgery involving multiple organs (e.g., lung and lymph nodes), the surgeon may want to see preoperative images of the lung while operating on the lung, and a different preoperative image of lymph nodes while operating on lymph nodes.
  • The system may recognize (e.g., using the machine vision techniques or fiducial markers described herein) when the surgeon has switched from operating on the lung to operating on the lymph nodes, and display the preoperative image(s) of the lymph nodes.
  • More generally, the surgeon or surgical team may "tie" certain images to certain events in the surgery, so that the preoperative images are displayed in response to those events occurring (e.g., when an organ comes into view of the video feed, after a certain amount of time has elapsed, when a specific instrument such as a stapler is being used, when a marker is placed, or when the user of the system instructs the system to switch preoperative images); a sketch of such an event-to-image mapping follows this list.
  • Similarly, the surgeon may "tie" the preoperative image to fiducial markers placed in the body. For instance, when a fiducial marker comes into view the system may change the preoperative image displayed (e.g., switch to a different image, or update its orientation, magnification level, or the like).
  • In some embodiments, the camera capturing the surgical video may move to always show the tips of the instruments or another important aspect of the surgery (e.g., the organ being operated on), and the preoperative image may also move to include the important location and be displayed in the right orientation. This may be achieved by correlating the amount of motion of the camera and/or surgical instruments to a corresponding change to the preoperative image, so that the same relative location is shown.
  • The preoperative image could also be scaled/stretched to map to the anatomy (e.g., lungs being operated on might be collapsed; accordingly, the preoperative image is similarly altered to reflect the collapsed state).
  • The system may also recognize the preferred sizing, location, or frame of the preoperative image that the surgeon likes to look at first.
  • The system may then display this specific image to the user of the system. For example, when performing a specific type of lobectomy, there may be a CT scan of the lung, and the surgeon may always like to begin the surgery by examining a sagittal view and a slice near the middle of the 3D CT scan. Accordingly, the system may recognize the surgeon's preferences and display the appropriate image.
  • Recognition of preferences may be performed using a machine learning algorithm that is trained with user log-in information (e.g., the specific user using the system), the preoperative images selected, the time when the preoperative images are selected (e.g., relative time to other events or absolute time), the type of surgery to be performed (which may be input into the system prior to the surgery or identified using a machine learning algorithm), or the like.
  • The system may perform other analysis about how the surgeon is using the application and apply settings accordingly.
  • The processes explained above are described in terms of computer software and hardware.
  • The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described.
  • Alternatively, the processes may be embodied within hardware, such as an application specific integrated circuit ("ASIC") or otherwise. Processes may also occur locally or across distributed systems (e.g., multiple servers).
  • A tangible non-transitory machine-readable storage medium includes any mechanism that provides (i.e., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
  • A machine-readable storage medium includes recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
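
Several of the items above describe "tying" preoperative images to surgical steps or events (an organ coming into view, a stapler being used, a fiducial clip being seen). The sketch below illustrates one plausible event-to-image mapping; the event names, the PreopView fields, and the assumption that a separate recognizer emits event strings are illustrative choices, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class PreopView:
    image_id: str          # which preoperative image/series to show (hypothetical IDs)
    orientation: str       # e.g. "sagittal", "coronal", "axial"
    slice_index: int

# Surgeon-defined "ties": when an event is recognized, switch the displayed view.
EVENT_TO_VIEW: Dict[str, PreopView] = {
    "lung_in_view":       PreopView("chest_ct", "sagittal", 150),
    "lymph_node_in_view": PreopView("chest_ct", "axial", 210),
    "stapler_in_use":     PreopView("chest_ct", "coronal", 96),
    "fiducial_clip_seen": PreopView("chest_mri", "axial", 60),
}

def update_displayed_view(detected_event: Optional[str],
                          current: PreopView,
                          show: Callable[[PreopView], None]) -> PreopView:
    """Switch the pre-op view when a tied event fires; otherwise keep the
    current view. `detected_event` would come from the organ/fiducial
    recognition described above (not implemented here)."""
    view = EVENT_TO_VIEW.get(detected_event, current) if detected_event else current
    if view is not current:
        show(view)                     # e.g., re-render the second display
    return view

# Usage (render_on_display is a hypothetical rendering callback):
# current = update_displayed_view("stapler_in_use", current, show=render_on_display)
```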

Abstract

A system for video-assisted surgery includes one or more displays, a memory including at least one preoperative image, and a camera coupled to capture a video. A controller is coupled to the memory, the camera, and the one or more displays, and the controller includes logic that when executed by the controller causes the system to perform a variety of operations. The system may capture a video of a surgical area, including anatomical features, using the camera, and display the video of the surgical area on the one or more displays. The system may also display the at least one preoperative image on the one or more displays at the same time as the video. The location of the anatomical features shown in the video is displayed as an accentuated region on the at least one preoperative image.

Description

REFERENCE TO RELATED APPLICATION
This application claims the benefit of U.S. Application No. 62/573,321, filed on Oct. 17, 2017, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
This disclosure relates generally to systems for performing surgery, and in particular but not exclusively, it relates to robotic and endoscopic surgery.
BACKGROUND INFORMATION
Robotic or computer assisted surgery uses robotic systems to aid in surgical procedures. Robotic surgery was developed as a way to overcome limitations (e.g., spatial constraints associated with a surgeon's hands, inherent shakiness of human movements, and inconsistency in human work product, etc.) of pre-existing surgical procedures. In recent years, the field has advanced greatly to limit the size of incisions, and reduce patient recovery time.
In the case of open surgery, robotically controlled instruments may replace traditional tools to perform surgical motions. Feedback controlled motions may allow for smoother surgical steps than those performed by humans. For example, using a surgical robot for a step such as rib spreading, may result in less damage to the patient's tissue than if the step were performed by a surgeon's hand. Additionally, surgical robots can reduce the amount of time in the operating room by requiring fewer steps to complete a procedure.
However, robotic surgery may still be relatively expensive, and suffer from limitations associated with conventional surgery. For example, surgeons may become disoriented when performing robotic surgery, which may result in harm to the patient. Further, when parts of the body are deformed during surgery, the surgeon may not recognize them and unintentionally cut or damage tissue.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles being described.
FIG. 1A illustrates a system for robotic video assisted surgery, in accordance with an embodiment of the disclosure.
FIG. 1B illustrates tracking a surface element in a video, in accordance with an embodiment of the disclosure.
FIG. 1C illustrates at least one preoperative image displayed at the same time as a video of a surgical area, in accordance with an embodiment of the disclosure.
FIG. 2 illustrates a system for endoscopic video assisted surgery, in accordance with an embodiment of the disclosure.
FIG. 3 illustrates a method of video assisted surgery, in accordance with an embodiment of the disclosure.
DETAILED DESCRIPTION
Embodiments of an apparatus and method for the display of preoperative and intraoperative images are described herein. In the following description numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
When a surgeon views preoperative images (e.g., X-rays, ultrasound, etc.) during a case, it is hard and time consuming for the surgeon to put them in the context of the intraoperative surgical field. This is partially because the surgeon is not used to reading radiological images, and partially because the structures are deformed intraoperatively (e.g., the abdomen is inflated, the table is inclined, dissection happens). Preoperative images, however, are hugely valuable to guide the procedure (e.g., by indicating the extent of tumor invasion, or providing anatomical clues as to what to expect behind currently dissected structures).
The instant disclosure provides a system and method for combining user input and automatic tracking of the procedure, as applied to the preoperative images. At a time t=t0 (preoperative), the surgeon (or a member of the surgical staff) may open a 3D imaging dataset (e.g., a magnetic resonance image (MRI), computerized tomography (CT) scan, ultrasound, X-ray, or the like) and adjust the slicing plane (e.g., zooming in or out; reformatting in axial, sagittal, coronal, or oblique orientations; etc.) such that the resulting 2D image best aligns with the current surgical field (which is captured by a camera). In case of deformation, various metrics for "best aligns" can be used (e.g., Dice coefficient). That metric can also be focused on a specific region of interest of the surgical field (e.g., the organ of interest).
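As one illustration of how such an alignment metric might be evaluated, the sketch below scores candidate 2D slices of a segmented preoperative volume against an organ mask segmented from the current camera frame using the Dice coefficient mentioned above. The function names, the use of precomputed binary masks, and the simple slice-by-slice search are assumptions made for illustration; the patent does not prescribe a particular implementation.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks of the same shape."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

def best_slice_index(volume_masks, field_mask):
    """Pick the 2D slice of a segmented 3D dataset whose organ mask best
    overlaps the organ mask segmented from the current camera frame."""
    scores = [dice_coefficient(slice_mask, field_mask) for slice_mask in volume_masks]
    return int(np.argmax(scores)), max(scores)

# Toy example: slice 1 overlaps the "surgical field" mask best.
field = np.zeros((8, 8), dtype=bool); field[2:6, 2:6] = True
slices = [np.zeros((8, 8), dtype=bool) for _ in range(3)]
slices[1][3:7, 2:6] = True
print(best_slice_index(slices, field))   # -> (1, 0.75)
```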
The surgeon (or a member of the surgical staff, or potentially a remote radiologist) can then draw, on the preoperative image, the contour corresponding to the current video frame, or provide matching landmarks so anatomical features in the current video frame can be detected automatically and marked on the preoperative image. This may be accomplished using a custom made user interface or other input device (e.g., mouse, touch screen, keyboard, or the like).
At time t>t0 (when surgery is being performed), that contour and the underlying preoperative image (properly zoomed in, reformatted, and oriented) may be automatically updated by combining information about the position of the camera and the stereo video feed in order to estimate the position difference between the new video frame and the reference one. Using the tracking of a surface element in the surgical video, the position of the features in the surgical video can be estimated along with the difference in location between the new video frame and the reference one. Additional information may also be used to calculate where the video is being imaged. The location of the features in the video may be accentuated on the preoperative image(s).
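The paragraph above combines camera-pose information and the stereo video feed to estimate the offset between a new frame and the reference frame. A much simpler, hypothetical stand-in is sketched below: the tracked surface element is treated as a small image patch, located in the new frame by normalized cross-correlation, and the resulting pixel displacement is used to move the accentuated region on the preoperative image. OpenCV is assumed, and the proportional pixel-to-pixel mapping is a simplification of the estimation described in the text.

```python
import cv2
import numpy as np

def track_patch(reference_gray, new_gray, patch_box):
    """Locate a reference-frame patch (the tracked surface element) in a new
    frame with normalized cross-correlation and return the (dx, dy) shift."""
    x, y, w, h = patch_box
    template = reference_gray[y:y + h, x:x + w]
    result = cv2.matchTemplate(new_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    new_x, new_y = max_loc
    return new_x - x, new_y - y

def shift_accentuated_region(region_box, dx, dy, video_shape, preop_shape):
    """Shift the accentuated region on the pre-op image in proportion to the
    displacement observed in the video (a coarse, illustrative mapping)."""
    sx = preop_shape[1] / video_shape[1]
    sy = preop_shape[0] / video_shape[0]
    x, y, w, h = region_box
    return int(x + dx * sx), int(y + dy * sy), w, h

# Usage (frames would come from the endoscope or robot camera; boxes are x, y, w, h):
# ref_gray = cv2.cvtColor(reference_frame, cv2.COLOR_BGR2GRAY)
# new_gray = cv2.cvtColor(new_frame, cv2.COLOR_BGR2GRAY)
# dx, dy = track_patch(ref_gray, new_gray, patch_box=(300, 220, 64, 64))
# region = shift_accentuated_region((120, 80, 90, 60), dx, dy,
#                                   new_gray.shape, preop_image.shape[:2])
```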
In one embodiment, the preoperative image or slicing plane for visualization may not be adjusted according to the current view of the surgical field. Instead a visualization of the endoscope and/or surgical tools may be displayed over the pre-op image (which is one example of accentuating anatomical features shown in the video). This allows the surgeon and/or the surgical staff to orient the pre-op image arbitrarily, but at the same time get a sense of the relation between pre-op image and surgical field. For instance, in a prostatectomy where the view of the endoscope camera is roughly aligned axially, the pre-op image can be viewed in a coronal orientation, enabling the system to display the locations of the endoscope and surgical tools in relation to the boundary of the prostate.
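For the tool-overlay variant described above, a minimal sketch is shown below: tool-tip positions that are already expressed in the preoperative image's coordinate frame (the registration step is assumed and not shown) are projected onto a coronal slice and drawn as markers. The function name, the millimeter-to-pixel conversion, and the marker style are illustrative assumptions.

```python
import cv2

def draw_tools_on_coronal_slice(preop_slice, tool_tips_mm, voxel_size_mm, origin_mm):
    """Mark tool-tip positions on a coronal pre-op slice.

    preop_slice  -- 2D coronal slice as a BGR image (rows ~ superior-inferior,
                    cols ~ left-right), already chosen by the surgical staff.
    tool_tips_mm -- iterable of (x, y, z) positions already registered to the
                    pre-op image coordinate frame (registration not shown).
    """
    annotated = preop_slice.copy()
    for x_mm, _y_mm, z_mm in tool_tips_mm:       # the coronal view ignores the A/P axis
        col = int((x_mm - origin_mm[0]) / voxel_size_mm[0])
        row = int((z_mm - origin_mm[2]) / voxel_size_mm[2])
        cv2.drawMarker(annotated, (col, row), color=(0, 255, 255),
                       markerType=cv2.MARKER_CROSS, markerSize=15, thickness=2)
    return annotated
```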
In some embodiments the reverse may occur: updating the surgical field based on the historical video stream while navigating within the preoperative images (e.g., if the radiologist is providing input that the surgeon wants translated into his/her field of view). In such a case, the procedure is stopped while navigation happens.
The following disclosure will discuss the embodiments described above, as they relate to the embodiments shown in FIGS. 1A-3.
FIG. 1A illustrates system 100 for robotic video assisted surgery, in accordance with an embodiment of the disclosure. System 100 includes surgical robot 121 (including arms 141), camera 101 (e.g., CMOS image sensor or the like), light source 103 (e.g., one or more light emitting diodes, or the like), controller 107 (including a display), network 131 (e.g., one or more servers connected to the internet or local area network), and storage 133 (e.g., solid state memory or hard drives on servers). As shown, surgical robot 121 may be used to hold surgical instruments (e.g., each arm 141 holds an instrument at the distal ends of the arm) and perform surgery, diagnose disease, take biopsies, or conduct any other procedure a doctor could perform. Surgical instruments may include scalpels, forceps, cameras (e.g., camera 101) or the like. While surgical robot 121 only has three arms 141, one skilled in the art will appreciate that surgical robot 121 is merely a cartoon illustration, and that surgical robot 121 can take any number of shapes depending on the type of surgery needed to be performed and other requirements. Surgical robot 121 may be coupled to controller 107, network 131, and/or storage 133 either by wires or wirelessly. Furthermore, surgical robot 121 may be coupled (wirelessly or by wires) to a user input and controller to receive instructions from a surgeon or doctor. Controller 107, and the user of controller 107, may be located very close to the surgical robot 121 and patient (e.g., in the same room) or may be located many miles apart. Thus surgical robot 121 may be used to perform surgery where a specialist is many miles away from the patient, and instructions from the surgeon are sent over the internet or a secure network (e.g., network 131). Alternatively, the surgeon may be local and may simply prefer using surgical robot 121 because surgical robot 121 can better access a portion of the body than the hand of the surgeon could.
In the depicted embodiment, storage 133 may be included in servers connected to the internet. Alternatively, storage 133 may be local storage such as a hard drive, solid state memory, or the like. Storage 133 may be coupled to network 131, which may include the internet or a local area network. It is appreciated that storage 133 and network 131 may be considered part of controller 107. Thus controller 107 is a distributed system. Network 131 and storage 133 may provide logic to controller 107 that when executed by controller 107 causes system 100 to perform a variety of operations. Alternatively, controller 107 may include the processor and memory of a general purpose computer.
In the depicted embodiment, a general purpose computer with a single display (including controller 107) is coupled to surgical robot 121. Controller 107 includes a memory including at least one preoperative image (e.g., an X-ray image, a magnetic resonance image (MRI), a computerized tomography (CT) image, or an ultrasound image). Camera 101 is coupled to capture a video of a surgical area, including anatomical features. The video of the surgical area is shown on the display(s) along with the at least one preoperative image. As will be shown in FIG. 1C, a location of the anatomical features shown in the video is displayed as an accentuated region on the at least one preoperative image. In some embodiments, displaying the at least one preoperative image includes changing a position of the accentuated region on the at least one preoperative image in real time, as the location of the anatomical features shown in the video changes over time. For example, if an open heart surgery is being performed and the camera is showing the left side of the heart, the preoperative image will show the left side of the heart accentuated; however, if camera 101 shifts to show the right side of the heart, the right side of the heart will be accentuated in the preoperative image.
FIG. 1B illustrates tracking a surface element in a video, in accordance with an embodiment of the disclosure. FIG. 1B shows surface element 151 displayed on a surgical video captured by a camera (e.g., camera 101 in FIG. 1A). Surface element 151 may be used to track the position of the video in the body. For example, the camera can image surface element 151, and based on the location of surface element 151 in the video, the camera can determine where the video is imaging in the human body. The location of surface element 151 may be correlated with the location of features in the pre-op images. For example, the location of the accentuated region in the at least one preoperative image may change based on the relative location of surface element 151 in the video. In one embodiment, the surface element may be on a liver, and the video shifts to the right so that the surface element is on the far right side of the video feed. Accordingly, the annotated region on the preoperative image(s) may also shift to the right.
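As a toy illustration of the correlation just described, the tracked element's normalized position in the video frame could be mapped to a center for the accentuated region on the preoperative image, so a drift toward the right edge of the feed shifts the annotated region right as well. The function name and numbers below are hypothetical.

```python
def update_region_center(element_xy, video_size, preop_size):
    """Map the surface element's normalized position in the video frame to a
    center for the accentuated region on the pre-op image (same fraction of
    the image width and height)."""
    ex, ey = element_xy
    vw, vh = video_size
    pw, ph = preop_size
    return int(ex / vw * pw), int(ey / vh * ph)

# Element near the right edge of a 1920x1080 feed lands near the right edge
# of a 512x512 pre-op image:
print(update_region_center((1800, 540), (1920, 1080), (512, 512)))  # -> (480, 256)
```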
In some embodiments, surface element 151 may be identified by the user of the surgical robot (e.g., surgical robot 121), or the controller in the surgical robot may identify a number of surface elements autonomously to track the procedure. In the depicted embodiment, surface element 151 was chosen by a user and represents a unique piece of human anatomy.
It is appreciated that the surgical robot can track surface element 151 even when surface element 151 is moving. For example, surface element 151 may be located on a lung while the lung is breathing. Similarly, surface element 151 may be located on the heart while the heart is beating. Accordingly, surface element 151 will be moving throughout the image recognition processes, and the controller (coupled to the surgical robot and performing the image processing) can still determine the location of surface element 151 despite the movement.
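One plausible way a controller could keep following a surface element that moves with respiration or heartbeat is frame-to-frame optical flow, sketched below with OpenCV's pyramidal Lucas-Kanade tracker. The parameter values and the simple loss-of-track handling are illustrative only; the patent does not specify a tracking algorithm.

```python
import cv2
import numpy as np

lk_params = dict(winSize=(31, 31), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_moving_element(frames_gray, start_point):
    """Follow one surface-element point through a sequence of 8-bit grayscale
    frames, frame to frame, so periodic motion (breathing, heartbeat) is
    absorbed into small per-frame displacements."""
    pts = np.array([[start_point]], dtype=np.float32)   # shape (1, 1, 2)
    trajectory = [tuple(start_point)]
    prev = frames_gray[0]
    for frame in frames_gray[1:]:
        pts, status, _err = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None, **lk_params)
        if status[0][0] == 0:
            break                      # track lost; a real system would re-detect
        trajectory.append(tuple(pts[0, 0]))
        prev = frame
    return trajectory
```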
In some embodiments, surface element 151 may be accentuated in the video. In the depicted embodiment, surface element 151 is highlighted using a bounding box surrounding surface element 151. Surface element 151 may be highlighted with a comment box, bounding box, light contrast, dark contrast, or the like. However, in other embodiments, surface element 151 may not be highlighted in order not to distract the surgeon while performing the surgery. Accentuation of surface element 151 may be toggled on and off through voice control or other methods.
FIG. 1C illustrates at least one preoperative image 163 displayed at the same time as a video 165 (e.g., the video captured by camera 101 in FIG. 1A) of a surgical area, in accordance with an embodiment of the disclosure. As shown, the at least one preoperative image 163 is located above the video of the surgical area 165; however, in other embodiments preoperative image 163 may be located anywhere relative to the video of surgical area 165 including overlaid on video 165. In the depicted embodiment, the at least one preoperative image 163 is an MRI; however, in other embodiments the at least one preoperative image 163 may include an X-ray image, a CT image, or an ultrasound image. Within preoperative image 163 is accentuated region 153. Accentuated region 153 is identified using a bounding box. However, in other embodiments accentuated region 153 may include at least one of bordering the accentuated region with a line, changing a color of the accentuated region (e.g., with a semitransparent overlay or the like), changing a brightness of the accentuated region, or labeling the accentuated region.
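The accentuation options listed above (bounding box, border line, color or brightness change, label) could be rendered with routine image operations; below is a hypothetical helper using OpenCV. The style names, colors, and overlay weights are assumptions made for illustration.

```python
import cv2
import numpy as np

def accentuate(preop_bgr, box, label="surgical field", style="overlay"):
    """Accentuate a region (x, y, w, h) on a pre-op image using one of the
    styles mentioned in the text: a bounding box, a semitransparent color
    overlay, a brightness change, plus an optional text label."""
    x, y, w, h = box
    out = preop_bgr.copy()
    if style == "box":
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
    elif style == "overlay":
        tint = out.copy()
        cv2.rectangle(tint, (x, y), (x + w, y + h), (0, 255, 0), thickness=-1)
        out = cv2.addWeighted(tint, 0.3, out, 0.7, 0)   # semitransparent green fill
    elif style == "brightness":
        region = out[y:y + h, x:x + w].astype(np.int16) + 40
        out[y:y + h, x:x + w] = np.clip(region, 0, 255).astype(np.uint8)
    cv2.putText(out, label, (x, max(y - 8, 12)), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 255, 0), 1, cv2.LINE_AA)
    return out
```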
In the depicted embodiment, the at least one preoperative image 163 includes a three-dimensional preoperative model (here a 3D MRI). Although not amenable to illustration, the orientation of the three-dimensional preoperative model may change as the location of the anatomical features shown in the video changes with time. For example, if it becomes more desirable to show a different angle of preoperative image 163, the system may change the orientation of the MRI model.
In the video 165 of the surgical area, arms 141 of the surgical robot are shown operating on tissue. The location of accentuated region 153 in preoperative image 163 automatically changes based on the location shown in video 165. That way, the surgeon knows where he or she is operating relative to other organs and structures in the body. The location of accentuated region 153 in the at least one preoperative image 163 may be determined by tracking a surface element (e.g., surface element 151 in FIG. 1B). The position and size of accentuated region 153 on preoperative image 163 may also be determined by the movement of arms 141 of the surgical robot (which may include motion tracking systems), by markers placed in the body, and/or by measuring distances using the camera. In some embodiments, the location of the surgical instruments in the body may be displayed on the preoperative image as the accentuated region.
FIG. 2 illustrates a system 200 for endoscopic video assisted surgery, in accordance with an embodiment of the disclosure. System 200 includes endoscope 271, controller 207, first display 215, and second display 217. Endoscope 271 is coupled to controller 207 (e.g., a general purpose computer) and includes camera 201 (e.g., a CMOS image sensor or the like) attached to the distal end of endoscope 271. The distal end of endoscope 271 is opposite the proximal end of endoscope 271, and may include one or more light sources such as light emitting diodes. The light emitted from endoscope 271 may be visible, infrared, ultraviolet, or the like. In the depicted embodiment, endoscope 271 is not shown inserted into a surgical area for simplicity of illustration. However, it is appreciated that the distal end of endoscope 271 may be inserted into a patient to provide the video 265 discussed in greater detail below.
First display 215 is coupled to controller 207 to display a video 265 of a surgical area received from endoscope 271. In the depicted embodiment, the surgical area includes a lung. Second display 217 is coupled to controller 207 to display at least one preoperative image 263 stored in memory (e.g., RAM, ROM, etc.). In the depicted embodiment, preoperative image 263 includes a chest X-ray. The chest X-ray includes the same lung as shown in video 265. Preoperative image 263 includes an accentuated region 253 including a bounding box containing the lung shown in video 265. As the location of video feed 265 moves in the body of the patient, accentuated region 253 will change location and size on preoperative image 263 (e.g., the bounding box may move, or grow larger or smaller, depending on how "zoomed in" video 265 is). The size of the various videos and images displayed may be changed by the user (e.g., making the video/image windows larger or smaller).
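As an illustrative sketch only, the position and size of such a bounding box could be derived from the normalized center of the camera's field of view and a zoom factor; the linear mapping below is an assumption made for the example, whereas an actual system would rely on registration between the video and the preoperative image.

```python
def update_accentuated_region(preop_shape, video_center_norm, zoom_factor):
    """Compute an accentuated bounding box (x, y, w, h) on the preoperative
    image from a normalized video-center position (0..1, 0..1) and a zoom
    factor. Sketch only: assumes the field of view maps linearly onto the
    preoperative image.
    """
    h, w = preop_shape[:2]
    box_w = int(w / zoom_factor)        # more zoomed in -> smaller box
    box_h = int(h / zoom_factor)
    cx = int(video_center_norm[0] * w)
    cy = int(video_center_norm[1] * h)
    x = max(0, min(w - box_w, cx - box_w // 2))
    y = max(0, min(h - box_h, cy - box_h // 2))
    return (x, y, box_w, box_h)

# Example: the camera is centered on the right lung and zoomed in 2x.
print(update_accentuated_region((512, 512), (0.7, 0.4), zoom_factor=2.0))
```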
FIG. 3 illustrates a method of video-assisted surgery, in accordance with an embodiment of the disclosure. One of ordinary skill in the art having the benefit of the present disclosure will appreciate that blocks (301-309) in method 300 may occur in any order or even in parallel. Moreover, blocks may be added to, or removed from, method 300 in accordance with the teachings of the present disclosure.
Block 301 shows providing at least one preoperative image stored in a memory. The at least one preoperative image may include at least one of an X-ray image, a magnetic resonance image, a computerized tomography image, an ultrasound image, or the like. In microsurgery, the at least one image may include a microscopy image or a pathology image. Further, the at least one preoperative image may include complex three-dimensional models (e.g., a 3D reconstruction of a specific organ). One of ordinary skill in the art having the benefit of the present disclosure will appreciate that there are many different types of preoperative imaging, and that many of them may be used in conjunction with the techniques described herein. It is further appreciated that a preoperative image includes any image captured before a surgical step is performed (e.g., including an image captured during a surgery).
In one embodiment, before the preoperative image is displayed to the surgeon, the surgeon (or the controller) may change the orientation of the at least one preoperative image to best show the anatomical features in the preoperative image. Thus, the surgeon sees the optimal view of the preoperative image while performing surgery.
Block 303 illustrates capturing a video of a surgical area including anatomical features using a camera, where the preoperative image includes at least part of the surgical area. In one embodiment, the camera may be included in an endoscope, and the endoscope may be used by a doctor to perform surgery. In other embodiments, the camera and the controller may be included in, or coupled to, a surgical robot.
Block 305 describes displaying the video of the surgical area on the one or more displays. In some embodiments, the video of the surgical area may share a screen with the preoperative images, while in other embodiments the video of the surgical area may be displayed on its own display. It is appreciated that a display includes a number of different devices such as flat panel displays, virtual reality headsets, tablets, and the like.
Block 307 shows displaying the at least one preoperative image on the one or more displays at the same time as the video. A location of the anatomical features (which may be identified using ICG or other contrast agents to visualize specific organs/anatomical structures and help with the localization of features) shown in the video is displayed as an accentuated region on the at least one preoperative image. In some embodiments, the location of the anatomical features, with respect to the preoperative image, may be tracked by identifying a surface element in the video, determining coordinates of the surface element in two successive frames in the video, and determining a change in the coordinates between the two successive frames. Thus, a feature in the video can be identified, and using changes in position of the feature in the video, the location of the accentuated region on the preoperative image can be changed accordingly. In some embodiments, a controller may be used to identify the surface element, or a user may select the surface element with a user interface.
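To make the coordinate-difference step concrete, the following is a minimal sketch of shifting the accentuated region on the preoperative image by the change in surface-element coordinates between two successive frames; the function name and the pixel-scale calibration parameter are assumptions for illustration, not values specified by the disclosure.

```python
def shift_accentuated_region(region, coords_prev, coords_next, scale=1.0):
    """Shift the accentuated region (x, y, w, h) on the preoperative image by
    the change in surface-element coordinates between two successive video
    frames. `scale` is an assumed calibration converting video pixels to
    preoperative-image pixels.
    """
    dx = coords_next[0] - coords_prev[0]
    dy = coords_next[1] - coords_prev[1]
    x, y, w, h = region
    return (int(x + scale * dx), int(y + scale * dy), w, h)

# Example: the surface element moved 12 px right and 4 px down between frames.
print(shift_accentuated_region((100, 80, 60, 40), (210, 150), (222, 154)))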
In one embodiment, the accentuated region may be accentuated using at least one of bordering the accentuated region with a line, changing a color of the accentuated region, changing a brightness of the accentuated region, or labeling the accentuated region.
In one embodiment, the preoperative image is superimposed on the video of the surgical area (e.g., via partial transparency, or image blending). This way the anatomical features in the preoperative image are “accentuated” by being overlaid on the video feed. However, the anatomical features may also be accentuated via other techniques described elsewhere.
Block 309 illustrates changing a position of the accentuated region on the preoperative image(s) in real time, as the location of the anatomical features shown in the video changes over time. In one embodiment, the preoperative image(s) include a three-dimensional preoperative model, and displaying the preoperative image(s) on the one or more displays at the same time as the video includes changing an orientation of the three-dimensional preoperative model as the location of the anatomical features shown in the video changes over time. For example, the at least one preoperative image may include a 3D MRI scan. As the location in the body where the video is being captured changes (e.g., because the camera moved to show new organs, etc.), the orientation of the 3D MRI model may change to show the new video location. Thus, the surgeon is provided with the accentuated region highlighting the organs shown in the video, and the preoperative model orients itself to better show those organs in the preoperative image(s). Changes to the preoperative image(s) may be achieved by the system recognizing different organs/anatomical features or fiducial markers (e.g., surgical clips or the like) using computer vision systems. Recognition of organs/fiducial markers may be performed at least in part by machine learning algorithms (e.g., a convolutional neural network trained to recognize specific features, a recurrent neural network, a long short-term memory network, or the like), and object tracking may be used to shift views.
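As a hedged illustration of the recognition step, the sketch below classifies a video frame with a convolutional neural network so that the system could reorient or switch the preoperative model; the `organ_classifier` model, the label set, and the preprocessing pipeline are hypothetical placeholders and are not part of the disclosure.

```python
import torch
import torchvision.transforms as T

# Hypothetical: `organ_classifier` stands in for any CNN trained offline to
# recognize organs or fiducial markers in endoscopic frames.
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def recognize_feature(frame, organ_classifier, labels):
    """Return the organ/fiducial label for a video frame so the system can
    reorient or switch the preoperative model (illustrative sketch only)."""
    with torch.no_grad():
        logits = organ_classifier(preprocess(frame).unsqueeze(0))
    return labels[int(logits.argmax(dim=1))]
```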
In one embodiment, the user may select an order of images the surgeon wishes to see and "tie" these images to various steps in the surgery. For instance, in a surgery involving multiple organs (e.g., lung and lymph nodes), the surgeon may want to see preoperative images of the lung while operating on the lung, and a different preoperative image of the lymph nodes while operating on the lymph nodes. In this embodiment, the system may recognize (e.g., using the machine vision techniques or fiducial markers described herein) when the surgeon has switched from operating on the lung to operating on the lymph nodes, and the system will display the preoperative image(s) of the lymph nodes. Prior to the surgery, the surgeon or surgical team may "tie" certain images to certain events in the surgery, so that the preoperative images are displayed in response to those events occurring (e.g., when an organ comes into view of the video feed, after a certain amount of time has elapsed, when a specific instrument is being used (e.g., a stapler), when a marker is placed, when the user of the system instructs the system to switch preoperative images, etc.).
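One simple way to represent such a "tie" is a lookup from recognized events to preoperative images, as in the sketch below; the event names and file paths are illustrative placeholders only.

```python
# Hypothetical mapping created before surgery: each recognizable event is
# "tied" to the preoperative image that should be shown when it occurs.
IMAGE_FOR_EVENT = {
    "lung_in_view": "preop/lung_ct_sagittal.dcm",
    "lymph_nodes_in_view": "preop/lymph_node_mri.dcm",
    "stapler_in_use": "preop/lung_ct_axial.dcm",
}

def preoperative_image_for(event, current_image):
    """Return the preoperative image tied to a surgical event, or keep the
    current image when the event has no tied image (illustrative sketch)."""
    return IMAGE_FOR_EVENT.get(event, current_image)

# Example: the vision system reports that lymph nodes are now in view.
print(preoperative_image_for("lymph_nodes_in_view", "preop/lung_ct_sagittal.dcm"))
```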
In some embodiments, the surgeon may "tie" the preoperative image to fiducial markers placed in the body. For instance, when a fiducial marker comes into view, the system may change the preoperative image displayed (e.g., switch to a different image, or update its orientation, magnification level, or the like). In some embodiments, the camera capturing the surgical video may move to always show the tips of the instruments or another important aspect of the surgery (e.g., the organ being operated on), and the preoperative image may also move to include the important location and be displayed in the right orientation. This may be achieved by correlating the amount of motion of the camera and/or surgical instruments with a corresponding change to the preoperative image to show the same relative location. In one embodiment, the preoperative image could also be scaled/stretched to map to the anatomy (e.g., a lung being operated on might be collapsed; accordingly, the preoperative image is similarly altered to reflect the collapsed state).
In one embodiment, when the application showing the video of the surgical area and the preoperative image is initiated, the system may recognize the preferred sizing, location, or frame of the preoperative image that the surgeon likes to look at first. The system may then display this specific image to the user of the system. For example, when performing a specific type of lobectomy, there may be a CT scan of the lung. The surgeon may always like to begin a surgery by examining a sagittal view and a slice near the middle of the 3D CT scan. Accordingly, the system may recognize the surgeon's preferences and display the appropriate image. Recognition of preferences may be performed using a machine learning algorithm trained with user login information (e.g., the specific user using the system), the preoperative images selected, the time when the preoperative images are selected (e.g., relative to other events or absolute time), the type of surgery to be performed (which may be input into the system prior to the surgery or identified using a machine learning algorithm), or the like. The system may perform other analysis about how the surgeon is using the application and apply settings accordingly.
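As a very small stand-in for the preference-learning step described above, the sketch below simply remembers which initial view each surgeon picks for each procedure type and suggests the most frequent one next time; the class, method names, and view identifiers are assumptions for illustration, and a real system could substitute a trained machine learning model.

```python
from collections import Counter, defaultdict

class PreferenceModel:
    """Illustrative sketch: suggest the initial preoperative view a surgeon
    most often selects for a given procedure type."""

    def __init__(self):
        self._history = defaultdict(Counter)

    def record(self, surgeon_id, procedure, view):
        self._history[(surgeon_id, procedure)][view] += 1

    def suggest(self, surgeon_id, procedure, default_view):
        counts = self._history[(surgeon_id, procedure)]
        return counts.most_common(1)[0][0] if counts else default_view

# Example: after a few lobectomies, the application opens on the surgeon's
# preferred mid-sagittal CT slice automatically.
prefs = PreferenceModel()
prefs.record("dr_smith", "lobectomy", "ct_sagittal_mid_slice")
print(prefs.suggest("dr_smith", "lobectomy", default_view="ct_axial_first_slice"))
```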
The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described. Additionally, the processes may be embodied within hardware, such as an application specific integrated circuit (“ASIC”) or otherwise. Processes may also occur locally or across distributed systems (e.g., multiple servers).
A tangible non-transitory machine-readable storage medium includes any mechanism that provides (i.e., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, a machine-readable storage medium includes recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Claims (21)

What is claimed is:
1. A system for video-assisted surgery, comprising:
one or more displays;
a memory including at least one preoperative image stored thereon;
a camera configured to capture a video; and
a controller coupled to the memory, the camera, and the one or more displays, wherein the controller includes logic that when executed by the controller causes the system to perform operations including:
capturing the video of a surgical area, including anatomical features, using the camera;
displaying the video of the surgical area on the one or more displays; and
displaying the at least one preoperative image on the one or more displays at the same time as the video, wherein a location of the anatomical features shown in the video is displayed as an accentuated region on the at least one preoperative image, wherein displaying the at least one preoperative image includes changing a position of the accentuated region on the at least one preoperative image in real time, as the location of the anatomical features shown in the video changes over time;
identifying a surface element in the video;
determining coordinates of the surface element in two successive frames in the video; and
determining a difference in the coordinates between the two successive frames.
2. The system of claim 1, wherein the at least one preoperative image includes a three dimensional preoperative model, and wherein displaying the at least one preoperative image includes changing an orientation of the three dimensional preoperative model as the location of the anatomical features shown in the video changes over time.
3. The system of claim 1, wherein the controller further includes logic that when executed by the controller causes the system to perform operations including:
changing a position of the accentuated region on the at least one preoperative image in response to determining the change in the coordinates.
4. The system of claim 1, wherein the controller further includes logic that when executed by the controller causes the system to perform operations including accentuating the surface element in the video.
5. The system of claim 1, wherein the camera is included in an endoscope, and wherein the endoscope is coupled to the controller.
6. The system of claim 1, wherein the camera and the controller are coupled to a surgical robot, wherein the controller further includes logic that when executed by the controller causes the system to perform operations including controlling the movement of one or more arms of the surgical robot.
7. The system of claim 1, wherein the accentuated region includes at least one of bordering the accentuated region with a line, changing a color of the accentuated region, changing a brightness of the accentuated region, or labeling the accentuated region.
8. The system of claim 1, wherein the at least one preoperative image includes at least one of an X-ray image, a magnetic resonance image, a computerized tomography image, microscopy image, pathology image, or an ultrasound image.
9. The system of claim 1, wherein the at least one preoperative image includes a three-dimensional preoperative model, and wherein displaying the at least one preoperative image on the one or more displays at the same time as the video includes changing an orientation of the three-dimensional preoperative model independently of an orientation of the video of the surgical area.
10. The system of claim 1, wherein the controller further includes logic that when executed by the controller causes the system to perform operations including:
changing a location of the accentuated region based on a relative location of the surface element in the video.
11. A method of video-assisted surgery, comprising:
providing at least one preoperative image stored in a memory;
capturing a video of a surgical area including anatomical features using a camera, wherein the preoperative image includes at least part of the surgical area;
displaying the video of the surgical area on the one or more displays; and
displaying the at least one preoperative image on the one or more displays at the same time as the video, wherein a location of the anatomical features shown in the video is displayed as an accentuated region on the at least one preoperative image, wherein displaying the at least one preoperative image includes changing a position of the accentuated region on the at least one preoperative image in real time, as the location of the anatomical features shown in the video changes over time;
identifying a surface element in the video;
determining coordinates of the surface element in two successive frames in the video; and
determining a difference in the coordinates between the two successive frames.
12. The method of claim 11, wherein identifying the surface element includes using at least one of a controller to identify the surface element, or having a user select the surface element with a user interface.
13. The method of claim 11, further comprising:
changing a position of the accentuated region on the at least one preoperative image in response to determining the change in the coordinates.
14. The method of claim 11, wherein the at least one preoperative image includes a three dimensional preoperative model, and wherein displaying the at least one preoperative image on the one or more displays at the same time as the video includes changing an orientation of the three dimensional preoperative model as the location of the anatomical features shown in the video changes over time.
15. The method of claim 11, wherein capturing a video of a surgical area includes capturing the video with an endoscope, and wherein the camera is disposed in the endoscope.
16. The method of claim 11, wherein capturing a video of a surgical area includes capturing the video with a surgical robot, wherein the camera is coupled to the surgical robot.
17. The method of claim 11, wherein the accentuated region includes at least one of bordering the accentuated region with a line, changing a color of the accentuated region, changing a brightness of the accentuated region, or labeling the accentuated region.
18. The method of claim 11, wherein the at least one preoperative image includes at least one of an X-ray image, a magnetic resonance image, a computerized tomography image, or an ultrasound image.
19. The method of claim 11, further comprising changing the orientation of the at least one preoperative image to show the anatomical features in the preoperative image, prior to displaying the at least one preoperative image on the one or more displays at the same time as the video.
20. The method of claim 11, wherein the preoperative image is superimposed on the video.
21. A system for video-assisted surgery, comprising:
one or more displays;
a memory including at least one preoperative image stored thereon;
a camera configured to capture a video; and
a controller coupled to the memory, the camera, and the one or more displays, wherein the controller includes logic that when executed by the controller causes the system to perform operations including:
capturing the video of a surgical area, including anatomical features, using the camera;
displaying the video of the surgical area on the one or more displays; and
displaying the at least one preoperative image on the one or more displays at the same time as the video, wherein a location of the anatomical features shown in the video is displayed as an accentuated region on the at least one preoperative image, wherein displaying the at least one preoperative image includes changing a position of the accentuated region on the at least one preoperative image in real time, as the location of the anatomical features shown in the video changes over time;
identifying a surface element in the video;
determining coordinates of the surface element in two successive frames in the video; and
determining a difference in the coordinates between the two successive frames.
US16/159,424 2017-10-17 2018-10-12 Display of preoperative and intraoperative images Active US10835344B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/159,424 US10835344B2 (en) 2017-10-17 2018-10-12 Display of preoperative and intraoperative images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762573321P 2017-10-17 2017-10-17
US16/159,424 US10835344B2 (en) 2017-10-17 2018-10-12 Display of preoperative and intraoperative images

Publications (2)

Publication Number Publication Date
US20190110855A1 US20190110855A1 (en) 2019-04-18
US10835344B2 true US10835344B2 (en) 2020-11-17

Family

ID=64270947

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/159,424 Active US10835344B2 (en) 2017-10-17 2018-10-12 Display of preoperative and intraoperative images

Country Status (2)

Country Link
US (1) US10835344B2 (en)
WO (1) WO2019079126A1 (en)

Families Citing this family (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11871901B2 (en) 2012-05-20 2024-01-16 Cilag Gmbh International Method for situational awareness for surgical network or surgical network connected device capable of adjusting function based on a sensed situation or usage
US11504192B2 (en) 2014-10-30 2022-11-22 Cilag Gmbh International Method of hub communication with surgical instrument systems
US11317919B2 (en) 2017-10-30 2022-05-03 Cilag Gmbh International Clip applier comprising a clip crimping system
US11311342B2 (en) 2017-10-30 2022-04-26 Cilag Gmbh International Method for communicating with surgical instrument systems
US11911045B2 (en) 2017-10-30 2024-02-27 Cllag GmbH International Method for operating a powered articulating multi-clip applier
US11564756B2 (en) 2017-10-30 2023-01-31 Cilag Gmbh International Method of hub communication with surgical instrument systems
US11510741B2 (en) 2017-10-30 2022-11-29 Cilag Gmbh International Method for producing a surgical instrument comprising a smart electrical system
US10959744B2 (en) 2017-10-30 2021-03-30 Ethicon Llc Surgical dissectors and manufacturing techniques
US11291510B2 (en) 2017-10-30 2022-04-05 Cilag Gmbh International Method of hub communication with surgical instrument systems
US11141160B2 (en) 2017-10-30 2021-10-12 Cilag Gmbh International Clip applier comprising a motor controller
US11801098B2 (en) 2017-10-30 2023-10-31 Cilag Gmbh International Method of hub communication with surgical instrument systems
US11202570B2 (en) 2017-12-28 2021-12-21 Cilag Gmbh International Communication hub and storage device for storing parameters and status of a surgical device to be shared with cloud based analytics systems
US11832840B2 (en) 2017-12-28 2023-12-05 Cilag Gmbh International Surgical instrument having a flexible circuit
US11410259B2 (en) 2017-12-28 2022-08-09 Cilag Gmbh International Adaptive control program updates for surgical devices
US11291495B2 (en) 2017-12-28 2022-04-05 Cilag Gmbh International Interruption of energy due to inadvertent capacitive coupling
US11571234B2 (en) 2017-12-28 2023-02-07 Cilag Gmbh International Temperature control of ultrasonic end effector and control system therefor
US11818052B2 (en) 2017-12-28 2023-11-14 Cilag Gmbh International Surgical network determination of prioritization of communication, interaction, or processing based on system or device needs
US11109866B2 (en) 2017-12-28 2021-09-07 Cilag Gmbh International Method for circular stapler control algorithm adjustment based on situational awareness
US11253315B2 (en) 2017-12-28 2022-02-22 Cilag Gmbh International Increasing radio frequency to create pad-less monopolar loop
US11832899B2 (en) 2017-12-28 2023-12-05 Cilag Gmbh International Surgical systems with autonomously adjustable control programs
US11419630B2 (en) 2017-12-28 2022-08-23 Cilag Gmbh International Surgical system distributed processing
US11304720B2 (en) 2017-12-28 2022-04-19 Cilag Gmbh International Activation of energy devices
US11666331B2 (en) 2017-12-28 2023-06-06 Cilag Gmbh International Systems for detecting proximity of surgical end effector to cancerous tissue
US11311306B2 (en) 2017-12-28 2022-04-26 Cilag Gmbh International Surgical systems for detecting end effector tissue distribution irregularities
US11278281B2 (en) 2017-12-28 2022-03-22 Cilag Gmbh International Interactive surgical system
US11308075B2 (en) 2017-12-28 2022-04-19 Cilag Gmbh International Surgical network, instrument, and cloud responses based on validation of received dataset and authentication of its source and integrity
US11864728B2 (en) 2017-12-28 2024-01-09 Cilag Gmbh International Characterization of tissue irregularities through the use of mono-chromatic light refractivity
US11857152B2 (en) 2017-12-28 2024-01-02 Cilag Gmbh International Surgical hub spatial awareness to determine devices in operating theater
US11589888B2 (en) 2017-12-28 2023-02-28 Cilag Gmbh International Method for controlling smart energy devices
US11612408B2 (en) 2017-12-28 2023-03-28 Cilag Gmbh International Determining tissue composition via an ultrasonic system
US11304745B2 (en) 2017-12-28 2022-04-19 Cilag Gmbh International Surgical evacuation sensing and display
US11602393B2 (en) 2017-12-28 2023-03-14 Cilag Gmbh International Surgical evacuation sensing and generator control
US11744604B2 (en) 2017-12-28 2023-09-05 Cilag Gmbh International Surgical instrument with a hardware-only control circuit
US11419667B2 (en) 2017-12-28 2022-08-23 Cilag Gmbh International Ultrasonic energy device which varies pressure applied by clamp arm to provide threshold control pressure at a cut progression location
US20190200981A1 (en) 2017-12-28 2019-07-04 Ethicon Llc Method of compressing tissue within a stapling device and simultaneously displaying the location of the tissue within the jaws
US11324557B2 (en) 2017-12-28 2022-05-10 Cilag Gmbh International Surgical instrument with a sensing array
US11432885B2 (en) 2017-12-28 2022-09-06 Cilag Gmbh International Sensing arrangements for robot-assisted surgical platforms
US11166772B2 (en) 2017-12-28 2021-11-09 Cilag Gmbh International Surgical hub coordination of control and communication of operating room devices
US11284936B2 (en) 2017-12-28 2022-03-29 Cilag Gmbh International Surgical instrument having a flexible electrode
US11896322B2 (en) 2017-12-28 2024-02-13 Cilag Gmbh International Sensing the patient position and contact utilizing the mono-polar return pad electrode to provide situational awareness to the hub
US11026751B2 (en) 2017-12-28 2021-06-08 Cilag Gmbh International Display of alignment of staple cartridge to prior linear staple line
US11266468B2 (en) 2017-12-28 2022-03-08 Cilag Gmbh International Cooperative utilization of data derived from secondary sources by intelligent surgical hubs
US11424027B2 (en) 2017-12-28 2022-08-23 Cilag Gmbh International Method for operating surgical instrument systems
US11678881B2 (en) 2017-12-28 2023-06-20 Cilag Gmbh International Spatial awareness of surgical hubs in operating rooms
US20190201039A1 (en) 2017-12-28 2019-07-04 Ethicon Llc Situational awareness of electrosurgical systems
US11896443B2 (en) 2017-12-28 2024-02-13 Cilag Gmbh International Control of a surgical system through a surgical barrier
US10892995B2 (en) 2017-12-28 2021-01-12 Ethicon Llc Surgical network determination of prioritization of communication, interaction, or processing based on system or device needs
US11304763B2 (en) 2017-12-28 2022-04-19 Cilag Gmbh International Image capturing of the areas outside the abdomen to improve placement and control of a surgical device in use
US11612444B2 (en) 2017-12-28 2023-03-28 Cilag Gmbh International Adjustment of a surgical device function based on situational awareness
US11234756B2 (en) 2017-12-28 2022-02-01 Cilag Gmbh International Powered surgical tool with predefined adjustable control algorithm for controlling end effector parameter
US11304699B2 (en) 2017-12-28 2022-04-19 Cilag Gmbh International Method for adaptive control schemes for surgical network control and interaction
US11389164B2 (en) 2017-12-28 2022-07-19 Cilag Gmbh International Method of using reinforced flexible circuits with multiple sensors to optimize performance of radio frequency devices
US11786245B2 (en) 2017-12-28 2023-10-17 Cilag Gmbh International Surgical systems with prioritized data transmission capabilities
US11559308B2 (en) 2017-12-28 2023-01-24 Cilag Gmbh International Method for smart energy device infrastructure
US20190201139A1 (en) 2017-12-28 2019-07-04 Ethicon Llc Communication arrangements for robot-assisted surgical platforms
US11464535B2 (en) 2017-12-28 2022-10-11 Cilag Gmbh International Detection of end effector emersion in liquid
US11633237B2 (en) 2017-12-28 2023-04-25 Cilag Gmbh International Usage and technique analysis of surgeon / staff performance against a baseline to optimize device utilization and performance for both current and future procedures
US11576677B2 (en) 2017-12-28 2023-02-14 Cilag Gmbh International Method of hub communication, processing, display, and cloud analytics
US10595887B2 (en) 2017-12-28 2020-03-24 Ethicon Llc Systems for adjusting end effector parameters based on perioperative information
US11446052B2 (en) 2017-12-28 2022-09-20 Cilag Gmbh International Variation of radio frequency and ultrasonic power level in cooperation with varying clamp arm pressure to achieve predefined heat flux or power applied to tissue
US11659023B2 (en) * 2017-12-28 2023-05-23 Cilag Gmbh International Method of hub communication
US11559307B2 (en) 2017-12-28 2023-01-24 Cilag Gmbh International Method of robotic hub communication, detection, and control
US11423007B2 (en) 2017-12-28 2022-08-23 Cilag Gmbh International Adjustment of device control programs based on stratified contextual data in addition to the data
US11529187B2 (en) 2017-12-28 2022-12-20 Cilag Gmbh International Surgical evacuation sensor arrangements
US11132462B2 (en) 2017-12-28 2021-09-28 Cilag Gmbh International Data stripping method to interrogate patient records and create anonymized record
US11540855B2 (en) 2017-12-28 2023-01-03 Cilag Gmbh International Controlling activation of an ultrasonic surgical instrument according to the presence of tissue
US11937769B2 (en) 2017-12-28 2024-03-26 Cilag Gmbh International Method of hub communication, processing, storage and display
US11317937B2 (en) 2018-03-08 2022-05-03 Cilag Gmbh International Determining the state of an ultrasonic end effector
US10758310B2 (en) 2017-12-28 2020-09-01 Ethicon Llc Wireless pairing of a surgical device with another device within a sterile surgical field based on the usage and situational awareness of devices
US11903601B2 (en) 2017-12-28 2024-02-20 Cilag Gmbh International Surgical instrument comprising a plurality of drive systems
US11786251B2 (en) 2017-12-28 2023-10-17 Cilag Gmbh International Method for adaptive control schemes for surgical network control and interaction
US11464559B2 (en) 2017-12-28 2022-10-11 Cilag Gmbh International Estimating state of ultrasonic end effector and control system therefor
US11364075B2 (en) 2017-12-28 2022-06-21 Cilag Gmbh International Radio frequency energy device for delivering combined electrical signals
US10517681B2 (en) * 2018-02-27 2019-12-31 NavLab, Inc. Artificial intelligence guidance system for robotic surgery
US11589915B2 (en) 2018-03-08 2023-02-28 Cilag Gmbh International In-the-jaw classifier based on a model
US11389188B2 (en) 2018-03-08 2022-07-19 Cilag Gmbh International Start temperature of blade
US11259830B2 (en) 2018-03-08 2022-03-01 Cilag Gmbh International Methods for controlling temperature in ultrasonic device
US11278280B2 (en) 2018-03-28 2022-03-22 Cilag Gmbh International Surgical instrument comprising a jaw closure lockout
US11197668B2 (en) 2018-03-28 2021-12-14 Cilag Gmbh International Surgical stapling assembly comprising a lockout and an exterior access orifice to permit artificial unlocking of the lockout
US11589865B2 (en) 2018-03-28 2023-02-28 Cilag Gmbh International Methods for controlling a powered surgical stapler that has separate rotary closure and firing systems
US11471156B2 (en) 2018-03-28 2022-10-18 Cilag Gmbh International Surgical stapling devices with improved rotary driven closure systems
US11090047B2 (en) 2018-03-28 2021-08-17 Cilag Gmbh International Surgical instrument comprising an adaptive control system
WO2020058808A1 (en) 2018-09-18 2020-03-26 Johnson & Johnson Surgical Vision, Inc. Live cataract surgery video in phacoemulsification surgical system
US11317915B2 (en) 2019-02-19 2022-05-03 Cilag Gmbh International Universal cartridge based key feature that unlocks multiple lockout arrangements in different surgical staplers
US11751872B2 (en) 2019-02-19 2023-09-12 Cilag Gmbh International Insertable deactivator element for surgical stapler lockouts
US11357503B2 (en) 2019-02-19 2022-06-14 Cilag Gmbh International Staple cartridge retainers with frangible retention features and methods of using same
US11331100B2 (en) 2019-02-19 2022-05-17 Cilag Gmbh International Staple cartridge retainer system with authentication keys
US11369377B2 (en) 2019-02-19 2022-06-28 Cilag Gmbh International Surgical stapling assembly with cartridge based retainer configured to unlock a firing lockout
USD952144S1 (en) 2019-06-25 2022-05-17 Cilag Gmbh International Surgical staple cartridge retainer with firing system authentication key
USD964564S1 (en) 2019-06-25 2022-09-20 Cilag Gmbh International Surgical staple cartridge retainer with a closure system authentication key
USD950728S1 (en) 2019-06-25 2022-05-03 Cilag Gmbh International Surgical staple cartridge
US11269173B2 (en) 2019-08-19 2022-03-08 Covidien Lp Systems and methods for displaying medical video images and/or medical 3D models
WO2022044606A1 (en) * 2020-08-24 2022-03-03 富士フイルム株式会社 Medical image processing apparatus, medical image processing method, endoscope system, and medical image processing program
US20220313370A1 (en) * 2021-03-30 2022-10-06 Ethicon Llc Method for system architecture for modular energy system
US11928834B2 (en) 2021-05-24 2024-03-12 Stryker Corporation Systems and methods for generating three-dimensional measurements using endoscopic video data

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060281971A1 (en) 2005-06-14 2006-12-14 Siemens Corporate Research Inc Method and apparatus for minimally invasive surgery using endoscopes
US20070060984A1 (en) 2005-09-09 2007-03-15 Webb James S Apparatus and method for optical stimulation of nerves and other animal tissue
US20080033240A1 (en) * 2005-10-20 2008-02-07 Intuitive Surgical Inc. Auxiliary image display and manipulation on a computer display in a medical robotic system
US20080243142A1 (en) 2007-02-20 2008-10-02 Gildenberg Philip L Videotactic and audiotactic assisted surgical methods and procedures
US20090292175A1 (en) * 2008-05-23 2009-11-26 Olympus Medical Systems Corp. Medical device
US7831294B2 (en) 2004-10-07 2010-11-09 Stereotaxis, Inc. System and method of surgical imagining with anatomical overlay for navigation of surgical devices
US20110069159A1 (en) 2009-06-10 2011-03-24 Luc Soler System for orientation assistance and display of an instrument in an object under examination particularly for use in human body
US20110105895A1 (en) 2009-10-01 2011-05-05 Giora Kornblau Guided surgery
US20110117025A1 (en) 2008-05-20 2011-05-19 Ralph Sebastian Dacosta Device and method for fluorescence-based imaging and monitoring
US8169696B2 (en) 2009-06-04 2012-05-01 General Electric Company Systems for intraoperative nerve imaging
US20130218024A1 (en) 2011-10-09 2013-08-22 Clear Guide Medical, Llc Interventional In-Situ Image-Guidance by Fusing Ultrasound and Video
US20140022283A1 (en) 2012-07-20 2014-01-23 University Health Network Augmented reality apparatus
US20140133727A1 (en) 2012-11-15 2014-05-15 Ozan Oktay System and Method for Registering Pre-Operative and Intra-Operative Images Using Biomechanical Model Simulations
US20140187967A1 (en) 2012-12-05 2014-07-03 Fred Wood System and Method for Multi-Color Laser Imaging and Ablation of Cancer Cells Using Fluorescence
WO2014139021A1 (en) 2013-03-15 2014-09-18 Synaptive Medical (Barbados) Inc. Intramodal synchronization of surgical data
US20140276008A1 (en) 2013-03-15 2014-09-18 The Regents Of The University Of California Imaging system and method for fluorescence guided surgery
US20150146946A1 (en) 2012-06-28 2015-05-28 Koninklijke Philips N.V. Overlay and registration of preoperative data on live video using a portable device
WO2015121764A1 (en) 2014-02-11 2015-08-20 Koninklijke Philips N.V. Spatial visualization of internal mammary artery during minimally invasive bypass surgery
US20150269741A1 (en) * 2014-03-24 2015-09-24 Fujifilm Corporation Medical image processing device and method for operating the same
US20160055268A1 (en) * 2014-06-06 2016-02-25 Matterport, Inc. Semantic understanding of 3d data
US20160100909A1 (en) 2014-02-25 2016-04-14 JointPoint, Inc. Systems and Methods for Intra-Operative Image Analysis
WO2016178690A1 (en) 2015-05-07 2016-11-10 Siemens Aktiengesellschaft System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation
US20160353968A1 (en) * 2014-06-10 2016-12-08 Olympus Corporation Endoscope system and method for operating endoscope system
US20170042631A1 (en) 2014-04-22 2017-02-16 Surgerati, Llc Intra-operative medical image viewing system and method
US20170084036A1 (en) * 2015-09-21 2017-03-23 Siemens Aktiengesellschaft Registration of video camera with medical imaging
US20170228877A1 (en) * 2016-02-05 2017-08-10 Fujifilm Corporation Device and method for image registration, and a nontransitory recording medium

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7831294B2 (en) 2004-10-07 2010-11-09 Stereotaxis, Inc. System and method of surgical imagining with anatomical overlay for navigation of surgical devices
US20060281971A1 (en) 2005-06-14 2006-12-14 Siemens Corporate Research Inc Method and apparatus for minimally invasive surgery using endoscopes
US20070060984A1 (en) 2005-09-09 2007-03-15 Webb James S Apparatus and method for optical stimulation of nerves and other animal tissue
US20080033240A1 (en) * 2005-10-20 2008-02-07 Intuitive Surgical Inc. Auxiliary image display and manipulation on a computer display in a medical robotic system
US20080243142A1 (en) 2007-02-20 2008-10-02 Gildenberg Philip L Videotactic and audiotactic assisted surgical methods and procedures
US20110117025A1 (en) 2008-05-20 2011-05-19 Ralph Sebastian Dacosta Device and method for fluorescence-based imaging and monitoring
US20090292175A1 (en) * 2008-05-23 2009-11-26 Olympus Medical Systems Corp. Medical device
US8169696B2 (en) 2009-06-04 2012-05-01 General Electric Company Systems for intraoperative nerve imaging
US20110069159A1 (en) 2009-06-10 2011-03-24 Luc Soler System for orientation assistance and display of an instrument in an object under examination particularly for use in human body
US20110105895A1 (en) 2009-10-01 2011-05-05 Giora Kornblau Guided surgery
US20130218024A1 (en) 2011-10-09 2013-08-22 Clear Guide Medical, Llc Interventional In-Situ Image-Guidance by Fusing Ultrasound and Video
US20150146946A1 (en) 2012-06-28 2015-05-28 Koninklijke Philips N.V. Overlay and registration of preoperative data on live video using a portable device
US20140022283A1 (en) 2012-07-20 2014-01-23 University Health Network Augmented reality apparatus
US20140133727A1 (en) 2012-11-15 2014-05-15 Ozan Oktay System and Method for Registering Pre-Operative and Intra-Operative Images Using Biomechanical Model Simulations
US20140187967A1 (en) 2012-12-05 2014-07-03 Fred Wood System and Method for Multi-Color Laser Imaging and Ablation of Cancer Cells Using Fluorescence
WO2014139021A1 (en) 2013-03-15 2014-09-18 Synaptive Medical (Barbados) Inc. Intramodal synchronization of surgical data
US20140276008A1 (en) 2013-03-15 2014-09-18 The Regents Of The University Of California Imaging system and method for fluorescence guided surgery
WO2015121764A1 (en) 2014-02-11 2015-08-20 Koninklijke Philips N.V. Spatial visualization of internal mammary artery during minimally invasive bypass surgery
US20170172663A1 (en) * 2014-02-11 2017-06-22 Koninklijke Philips N.V. Spatial visualization of internal mammary artery during minimally invasive bypass surgery
US20160100909A1 (en) 2014-02-25 2016-04-14 JointPoint, Inc. Systems and Methods for Intra-Operative Image Analysis
US20150269741A1 (en) * 2014-03-24 2015-09-24 Fujifilm Corporation Medical image processing device and method for operating the same
US20170042631A1 (en) 2014-04-22 2017-02-16 Surgerati, Llc Intra-operative medical image viewing system and method
US20160055268A1 (en) * 2014-06-06 2016-02-25 Matterport, Inc. Semantic understanding of 3d data
US20160353968A1 (en) * 2014-06-10 2016-12-08 Olympus Corporation Endoscope system and method for operating endoscope system
WO2016178690A1 (en) 2015-05-07 2016-11-10 Siemens Aktiengesellschaft System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation
US20180189966A1 (en) * 2015-05-07 2018-07-05 Siemens Aktiengesellschaft System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation
US20170084036A1 (en) * 2015-09-21 2017-03-23 Siemens Aktiengesellschaft Registration of video camera with medical imaging
US20170228877A1 (en) * 2016-02-05 2017-08-10 Fujifilm Corporation Device and method for image registration, and a nontransitory recording medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"UNC Laparoscopic Visualization Research", Augmented Reality Technology, University of North Carolina at Chapel Hill, Aug. 11, 1998, http://www.cs.unc.edu/Research/us/laparo.html, Last accessed Nov. 4, 2014, 4 pages.
International Search Report and Written Opinion from the International Searching Authority dated Jan. 7, 2019, for International Application No. PCT/US2018/055724, filed Oct. 12, 2018, 14 pages.
Wikipedia, the free encyclopedia, "Endoscopy", http://en.wikipedia.org/wiki/Endoscopy, Last accessed Oct. 20, 2014, 7 pages.
Wikipedia, the free encyclopedia, "Fluorescence image-guided surgery", http://en.wikipedia.org/wiki/Fluorescence_image-guided_surgery, Last accessed Oct. 20, 2014, 4 pages.
Wikipedia, the free encyclopedia, "Image-guided surgery", http://en.wikipedia.org/wiki/Image-guided_surgery, Last accessed Oct. 20, 2014, 2 pages.
Wikipedia, the free encyclopedia, "Laparoscopic surgery", http://en.wikipedia.org/wiki/Laparoscopic_surgery, Last accessed Oct. 20, 2014, 8 pages.

Also Published As

Publication number Publication date
WO2019079126A1 (en) 2019-04-25
US20190110855A1 (en) 2019-04-18

Similar Documents

Publication Publication Date Title
US10835344B2 (en) Display of preoperative and intraoperative images
JP4152402B2 (en) Surgery support device
JP6122875B2 (en) Detection of invisible branches in blood vessel tree images
US11452464B2 (en) Guidance tools to manually steer endoscope using pre-operative and intra-operative 3D images
JP2022512420A (en) Surgical system with a combination of sensor-based navigation and endoscopy
US11464582B1 (en) Surgery guidance system
US20160163105A1 (en) Method of operating a surgical navigation system and a system using the same
JP6972163B2 (en) Virtual shadows that enhance depth perception
JP2019506922A (en) System, controller, and method for using virtual reality devices for robotic surgery
US20210290317A1 (en) Systems and methods for tracking a position of a robotically-manipulated surgical instrument
CN105188594B (en) Robotic control of an endoscope based on anatomical features
US11896441B2 (en) Systems and methods for measuring a distance using a stereoscopic endoscope
WO2012035492A1 (en) Robotic control of an endoscope from blood vessel tree images
JP7460631B2 (en) ENDOSCOPY HAVING DUAL IMAGE SENSORS - Patent application
AU2018202682A1 (en) Endoscopic view of invasive procedures in narrow passages
US20220215539A1 (en) Composite medical imaging systems and methods
JP2021509987A (en) Systems and methods for detecting abnormal tissue using vascular features
US10951837B2 (en) Generating a stereoscopic representation
JP2018521774A (en) Endoscopic guidance from interactive planar slices of volume images
US20220319135A1 (en) Multi-modal visualization in computer-assisted tele-operated surgery
US11045075B2 (en) System and method for generating a three-dimensional model of a surgical site
US20230215059A1 (en) Three-dimensional model reconstruction
Eck et al. Display technologies
WO2023018684A1 (en) Systems and methods for depth-based measurement in a three-dimensional view

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: VERILY LIFE SCIENCES LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BARRAL, JOELLE K.;HABBECKE, MARTIN;REEL/FRAME:047173/0780

Effective date: 20181015

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE