CN117795555A - System and method for differentiating interaction environments - Google Patents

System and method for differentiating interaction environments

Info

Publication number
CN117795555A
Authority
CN
China
Prior art keywords
image
captured image
user input
readable medium
transitory machine
Prior art date
Legal status
Pending
Application number
CN202280054728.5A
Other languages
Chinese (zh)
Inventor
P·施拉兹安
D·普若克斯
E·D·韦克菲尔德
Current Assignee
Intuitive Surgical Operations Inc
Original Assignee
Intuitive Surgical Operations Inc
Priority date
Filing date
Publication date
Application filed by Intuitive Surgical Operations Inc
Priority claimed from PCT/US2022/039786 (published as WO2023018685A1)
Publication of CN117795555A

Landscapes

  • Endoscopes (AREA)

Abstract

A system includes a processor and a memory on which computer readable instructions are stored. The computer readable instructions, when executed by the processor, cause the system to generate a current endoscopic video image of the surgical environment, capture an image from the current endoscopic video image, display the current endoscopic video image and the captured image in a common display, and perform an action with the captured image in response to a user input.

Description

System and method for differentiating interaction environments
Cross-Reference to Related Applications
The present application claims priority to and the benefit of U.S. Provisional Application No. 63/303,101, filed January 26, 2022, and U.S. Provisional Application No. 63/231,658, filed August 10, 2021, both entitled "Systems and Methods for a Differentiated Interaction Environment," which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates to systems and methods for providing a displayed interaction environment that is differentiated from a displayed clinical environment.
Background
Minimally invasive medical techniques aim to reduce the amount of tissue that is damaged during diagnostic or surgical procedures, thereby reducing patient recovery time, discomfort, and adverse side effects. Such minimally invasive techniques may be performed through one or more surgical incisions or through natural passageways in the patient anatomy. Through these incisions or natural passageways, a clinician may insert minimally invasive medical instruments, including an endoscopic imaging system, to capture images of tissue within the patient anatomy. The endoscopic imaging system may be a three-dimensional imaging system that provides three-dimensional video images of the tissue. There is a need for systems and methods for displaying a video image of the tissue while also providing a differentiated, displayed interaction environment for interacting with images captured from the video image.
Disclosure of Invention
Examples of the invention are summarized by the claims that follow the description. Consistent with some examples, a system may include a processor and a memory on which computer-readable instructions are stored. The computer-readable instructions, when executed by the processor, cause the system to: generate a current endoscopic video image of a surgical environment, capture an image from the current endoscopic video image, display the current endoscopic video image and the captured image in a common display, and perform an action with the captured image in response to a user input.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory in nature, and are intended to provide an understanding of the disclosure, without limiting the scope of the disclosure. In this regard, additional aspects, features and advantages of the present disclosure will be apparent to those skilled in the art from the following detailed description.
Drawings
Fig. 1 illustrates a display system, according to some embodiments, that displays an endoscopic video image and an image captured from the endoscopic video image.
FIG. 2 is a flow chart illustrating a method for differentiating interaction environments according to some embodiments.
Fig. 3A illustrates an enhanced captured image according to some embodiments.
Fig. 3B illustrates an enhanced captured image according to some embodiments.
Fig. 3C illustrates an enhanced captured image according to some embodiments.
Fig. 4 is a simplified diagram of a robotic-assisted medical system according to some embodiments.
Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be understood that like reference numerals are used to identify like elements illustrated in one or more of the figures, and that the figures are provided for purposes of illustrating, not limiting, embodiments of the present disclosure.
Detailed Description
Systems and methods are provided for displaying a surgical environment in which clinical interactions with patient tissue occur and a differentiated interaction environment that allows interactions with captured images of the surgical environment in response to user input.
Fig. 1 shows a display system 100 that displays a current endoscopic video image 102 of a surgical environment 103 in a first window 104 and a captured image 106 of the surgical environment 103 in a second window 108. The display system 100 may be part of a robotic-assisted medical system (e.g., the display system 308 of the medical system 300 shown in fig. 4) that controls the medical instruments 110 in the surgical environment 103 during a medical procedure. The current endoscopic video image 102 may be a two-dimensional or three-dimensional real-time video image that is generated by an endoscopic imaging system located in the surgical environment (e.g., imaging system 304 of medical system 300 as shown in fig. 4). The captured image 106 may be a still or static image captured or recorded by the endoscopic imaging system during a medical procedure. For example, the captured image 106 may be a still image captured from an endoscopic video image at a time prior to the current endoscopic video image 102. Thus, the first window 104 may display the current endoscopic video image 102, while the second window 108 may display the image 106 captured at a time prior to the current endoscopic video image 102. Because the current endoscopic video image 102 may display a real-time image of the surgical environment, the position and orientation of the medical instrument 110 or other device in the current endoscopic video image 102 may be different from the position and orientation of the medical instrument 110 in the captured image 106, since the medical instrument 110 may have moved since the captured image 106 was recorded. Similarly, the position and orientation of patient tissue in the current endoscopic video image 102 may be different from the position and orientation of patient tissue in the captured image 106, since the patient tissue may move due to surgical intervention or patient motion such as breathing. Additionally or alternatively, the medical instrument and/or patient tissue may appear at different locations in the captured image 106 because the position of the imaging system may have changed since the image 106 was recorded. In some examples, the captured image 106 may be a video image (rather than a still image) captured or recorded by the endoscopic imaging system during the medical procedure prior to the video image 102.
In the example of fig. 1, window 104 is positioned above window 108, but in alternative examples, the positions of the windows may be interchanged, or the windows may be arranged side-by-side (e.g., left and right). In some examples, the user may change the position or size of the window. In some examples, the position or size of the window may be changed according to the mode of operation of the robotic-assisted medical system. For example, window 104 may be larger than window 108 when in an instrument following mode of the robotic-assisted medical system in which instrument 110 is under active control of a user input device (e.g., a user interface device of user control system 306 of medical system 300 as shown in fig. 4). The second window may be larger than the first window when in an image control mode of the robotic-assisted medical system in which the user input device is not actively controlling the instrument 110 but is actively controlling a cursor or other user interface element.
Fig. 2 is a flow chart illustrating a method 200 for generating an augmented or mixed reality image according to some embodiments. The method 200 is shown as a set of operations or processes. The processes shown in fig. 2 may be performed in a different order than shown in fig. 2, and one or more of the processes shown may not be performed in some embodiments of the method 200. Furthermore, one or more processes not explicitly shown in fig. 2 may be included before, after, between, or as part of the illustrated processes. In some embodiments, one or more processes of method 200 may be implemented at least in part in the form of executable code stored on a non-transitory tangible machine-readable medium, which when executed by one or more processors (e.g., a processor of a control system) may cause the one or more processors to perform the one or more processes.
At process 202, an image (e.g., endoscopic image 102) may be generated by an imaging system (e.g., endoscopic imaging system 304). The endoscopic image may be generated from real-time three-dimensional video of the surgical environment captured by the endoscopic imaging system. The image may be a two-dimensional or three-dimensional real-time video image, or a still image, of the current patient anatomy produced by the imaging system.
At process 204, an image (e.g., image 106) may be captured from the endoscopic video image. The captured image may be a still or static image captured or recorded by the imaging system during the medical procedure in which the real-time image of process 202 is produced. For example, the captured image 106 may be a still image captured from the endoscopic video image at a time prior to the current endoscopic video image 102.
At process 206, the endoscopic video image and the captured image may be displayed on a common display system (e.g., display system 100). For example, a first window 104 on the display system 100 may display the current endoscopic video image 102, while a second window 108 may display an image 106 captured at a time prior to the current endoscopic video image 102.
At process 208, an action may be performed with the captured image. For example, in response to user input, interactions may be performed with the captured image 106. The interaction may be a videomark displayed with the captured image, virtual movement or rotation of the captured image, a measurement of structures or instruments in the surgical environment, or another interaction that assists in performing the medical procedure. The actions may also or alternatively include enhancing or modifying the captured image to create an enhanced or mixed reality image displayed in window 108.
Optionally, in some embodiments, the follow mode of the robotic-assisted medical system may be paused and the image control mode of the robotic-assisted medical system may be entered before the process 208 performs an action with the captured image. In the follow mode, movement of the medical instrument 110 in the surgical environment 103 is responsive to movement of the user input device. In the image control mode, movement of the medical instrument in the surgical environment is not responsive to movement of the user input device. In some examples, in the image control mode, a user input device for moving the medical instrument (in the follow mode) may instead be used to perform the interaction with the captured image. For example, a six degree of freedom user input device may be limited to move in two dimensions registered to a captured image to control a cursor, selector, keyboard, menu, or other user interface associated with the captured image. In some examples, in the image control mode, a second user input device, different from the user input device for moving the medical instrument (in the follow mode), may be used to perform the interaction with the captured image. The second user input device may be part of a user console or may be located at a second surgical console, computer, tablet computer or other device controlled by the clinician or a second user.
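As one hedged illustration of limiting a multiple degree of freedom input device to two-dimensional cursor motion registered to the captured image, the sketch below maps a translation increment of the input device to pixel motion; the gain value, axis convention, and function name are illustrative assumptions rather than parameters defined by this disclosure.

```python
import numpy as np

def input_pose_to_cursor(delta_xyz_m, cursor_xy, image_size, gain_px_per_m=2000.0):
    """Map a translation increment of the input device (meters) to 2-D cursor motion."""
    dx, dy, _ = delta_xyz_m                      # depth (z) and all rotations are ignored
    cursor = np.asarray(cursor_xy, dtype=float)
    cursor = cursor + gain_px_per_m * np.array([dx, -dy])   # flip y for image coordinates
    w, h = image_size
    cursor[0] = np.clip(cursor[0], 0, w - 1)     # keep the cursor inside the window
    cursor[1] = np.clip(cursor[1], 0, h - 1)
    return float(cursor[0]), float(cursor[1])
```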
Optionally, in some embodiments, the follow mode of the robotic-assisted medical system may remain active, rather than being paused, while the action is performed with the captured image at process 208. In this embodiment, movement of the medical instrument 110 in the surgical environment 103 remains responsive to movement of the user input device, and a second user input device, different from the first user input device that moves the medical instrument in the follow mode, may instead be used to perform the interaction with the captured image.
Various actions may be performed with the captured image 106 displayed in the window 108 in response to user input at a user control system (e.g., the user control system 306 in fig. 4). For example, the action performed may be displaying a videomark 112 generated in response to a user input. Videomarks may include numeric or alphabetic characters, sketches, symbols, shapes, arrows, or other annotations or notes superimposed on or around the captured image. These videomarks may provide communication to other viewers of the captured image (e.g., a surgeon, student, or supervisor), or they may assist the clinician who generates them by providing reminders, guidance, or additional information. The videomark may be generated by an operator of the medical instrument 110 or by a different user. In some examples, the videomark may be generated by a keyboard user input device for generating text annotations referencing the scene in the captured image 106. In some examples, the videomark may be generated by moving a multiple degree of freedom user input device to generate a hand-drawn shape or to select an annotation option from a menu. In some examples, videomarks or other indicia made by an operator or another clinician may be saved and retrieved for later viewing in the captured image in window 108.
In some examples, the action performed with the captured image may be a measurement of a structure or distance in the surgical environment 103 in response to user input. For example, points A and B may be indicated on the captured image 106 by a user input device, and a linear or curvilinear distance 114 between the points may be calculated based on a two-dimensional (X-Y dimension) scale of the image 106 and optionally based on a depth map (Z dimension) that provides a distance between the distal end of the endoscope and the tissue surface at each pixel in the image. In some examples, a ventral hernia site may be measured to select an appropriately sized mesh for a mesh application.
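A minimal sketch of such a depth-based measurement, assuming a per-pixel depth map and pinhole camera intrinsics (fx, fy, cx, cy), is shown below; the function names are illustrative, and a curvilinear distance could be approximated by summing segment lengths along points sampled between A and B.

```python
import numpy as np

def pixel_to_3d(u, v, depth_map, fx, fy, cx, cy):
    """Back-project pixel (u, v) to camera-frame coordinates using its depth."""
    z = depth_map[v, u]                      # depth map indexed as [row, col]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def measure_distance(point_a_px, point_b_px, depth_map, intrinsics):
    """Straight-line (linear) distance between two picked pixels A and B."""
    fx, fy, cx, cy = intrinsics
    a = pixel_to_3d(*point_a_px, depth_map, fx, fy, cx, cy)
    b = pixel_to_3d(*point_b_px, depth_map, fx, fy, cx, cy)
    return float(np.linalg.norm(a - b))      # units follow the depth map (e.g., mm)
```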
In some examples, the action performed with the captured image may be changing a virtual viewpoint of a virtual camera in response to user input. The virtual camera used to generate a virtual camera image may be constructed based on the intrinsic and extrinsic parameters of the endoscope calibration data. The virtual camera may simulate the behavior of an endoscope (e.g., image capture device 304), allowing a region within the patient's anatomy (e.g., an interventional region or surgical environment) to be viewed from a camera angle or position that differs from the position and/or orientation of the real-time endoscopic view from the endoscope (e.g., image capture device 304) physically positioned within the patient's anatomy. The virtual camera may have the same calibration parameters as the endoscope. The virtual camera may allow a viewer to view and analyze the size, volume, and spatial relationship of anatomical structures, suspicious masses, other instruments, or any other objects in the anatomical region from different perspectives.
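One way such a virtual viewpoint might be rendered is sketched below under the assumption that a depth-derived point cloud is available in the real camera frame: the points are transformed by a user-chosen virtual extrinsic pose (R, t) and projected with the endoscope's intrinsic matrix K. The simple z-buffer point splatting shown is illustrative rather than a production renderer.

```python
import numpy as np

def render_virtual_view(points_cam, colors, K, R, t, image_size):
    """Project camera-frame 3-D points into a virtual camera that keeps the
    endoscope intrinsics K but uses a user-chosen extrinsic pose (R, t)."""
    w, h = image_size
    image = np.zeros((h, w, 3), dtype=np.uint8)
    zbuf = np.full((h, w), np.inf)

    p_virt = (R @ points_cam.T).T + t        # express points in the virtual camera frame
    keep = p_virt[:, 2] > 0                  # keep points in front of the virtual camera
    p_virt, colors = p_virt[keep], colors[keep]

    uvw = (K @ p_virt.T).T                   # pinhole projection
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    for ui, vi, zi, ci in zip(u, v, p_virt[:, 2], colors):
        if 0 <= ui < w and 0 <= vi < h and zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi                # nearest point wins (simple z-buffer)
            image[vi, ui] = ci
    return image
```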
In some examples, when the captured image is displayed on the common display system (e.g., at process 206), the displayed captured image may be an enhanced captured image. For example, the captured image 106 may be distinguished from the displayed current endoscopic video image 102 by graphical identification, processing, or image enhancement, which may include, for example, a gray scale representation of the captured image, a color modification of the captured image, a grid projection on the captured image, and/or a text, graphic, or digital designation on or adjacent to the captured image. In some examples, as shown in fig. 3A, an enhanced captured image 250 may be generated using a depth map visualization for display as the enhanced captured image 106. To generate the depth map visualization, a depth map point cloud 252 may be calculated from the captured stereoscopic endoscopic image (e.g., the image captured from the current image 102). The depth map point cloud 252 may be rendered and displayed as the enhanced captured image 106 in the captured image window 108. Alternatively, the depth map point cloud 252 may be merged with, superimposed on, or otherwise displayed with the captured image. The depth map may be generated from a disparity map that maps the difference between pixel locations in a pair of stereoscopic images. Disparity is the relative offset between corresponding image features, such as pixels, in the two images. From the known disparity, the known baseline distance between the two camera lenses, and the known focal length of the lenses, the depth of each pixel or feature may be determined and mapped. The depth map may be used to convert the captured image into a cloud of pixel points in the form of a three-dimensional image. Converting the depth map to a point cloud allows point cloud specific operations to be performed. For example, a point cloud of an intraoperatively captured image may be registered to a point cloud of a preoperative image. In some examples, the point cloud may be sub-sampled to remove noise and to perform feature detection. The point cloud may also be used to generate a three-dimensional surface model of a structure in the captured image.
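A minimal sketch of the disparity-to-depth-to-point-cloud conversion described above is given below, assuming rectified stereo images, pinhole intrinsics, and a known baseline; the OpenCV StereoSGBM parameters are illustrative, not tuned values.

```python
import numpy as np
import cv2

def disparity_to_point_cloud(left_bgr, right_bgr, fx, fy, cx, cy, baseline_m):
    """Rectified stereo pair -> disparity -> depth map -> camera-frame point cloud."""
    gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Block-matching parameters here are illustrative, not tuned values.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = fx * baseline_m / disparity[valid]   # depth = focal * baseline / disparity

    v, u = np.nonzero(valid)                            # pixel rows (v) and columns (u)
    z = depth[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)                # N x 3 depth map point cloud
    colors = left_bgr[v, u]                             # per-point color from the left image
    return points, colors
```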
In some examples, as shown in fig. 3B, localized grid visualization may be used around the endpoints of a two-dimensional or three-dimensional scale to produce an enhanced captured image 260. In this example, the mixed reality image 260 may be created from a captured stereoscopic endoscopic image 262 (e.g., an image captured from the current image 102). Endpoints 264, 266 of a scale or measurement device 268 may be marked on the image 262. A mesh 270 reconstructed from the depth map point clouds may be rendered at the scale endpoints 264, 266. For example, in anatomical regions with sharp curves, an operator may observe a virtual view of the anatomical structure from a slightly perturbed orientation to better reveal anatomical features that are not visible in the real-time image. The changed or perturbed view may help the operator to understand the curvature of the anatomical surface when performing the measurement.
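A hedged sketch of reconstructing a localized mesh around the scale endpoints from the depth map point cloud is shown below, here using Open3D's Poisson surface reconstruction; the radius and reconstruction depth are illustrative assumptions, and other meshing methods could be substituted.

```python
import numpy as np
import open3d as o3d

def local_mesh_around_endpoints(points, endpoints, radius=0.01, poisson_depth=7):
    """Reconstruct a small surface mesh from point-cloud points lying within
    `radius` (same units as the cloud) of either scale endpoint."""
    endpoints = np.asarray(endpoints, dtype=float)       # shape (2, 3): the two endpoints
    dists = np.linalg.norm(points[:, None, :] - endpoints[None, :, :], axis=2)
    local = points[dists.min(axis=1) < radius]

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(local)
    pcd.estimate_normals()                               # Poisson reconstruction needs normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=poisson_depth)
    return mesh
```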
In some examples, as shown in fig. 3C, a pass-through visualization with a gradient image may be used to produce an enhanced captured image 280. In this embodiment, a captured stereoscopic endoscopic image 282 (e.g., an image captured from the current image 102) is enhanced to include a see-through view of a pre-operative segmentation model of a structure 284, such as a tumor, positioned below the tissue surface; from the image 282, the mixed reality image 280 may be created. The enhanced image 280 may be the product of blending three images: the stereoscopic endoscopic image, a gradient image generated from the stereoscopic endoscopic image as described below, and the pre-operative segmentation model captured by an imaging modality such as CT or MR. To create the pass-through visualization from the gradient image, an intensity image may first be calculated from the endoscopic color image. The horizontal and vertical image gradients g0 and g1 may be calculated from the intensity image using Sobel filters. For each pixel, the surface gradient components p and q in the X and Y directions, respectively, may be calculated as shown below. The values x and y are the normalized coordinates of the output image pixel. The values g0 and g1 are the image gradients along the X and Y directions, respectively. The value e is an epsilon value, which in some examples may be set to 0.00001. The slope is the final image slope at the pixel.
p = (-g0·x^2 - g0·y^2 - g0 - 3·e·x) / (-g0·x^3 - g0·x·y^2 - g0·x - g1·x^2·y - g1·y^3 - g1·y + 3·e)
q = (g1·x^2 + g1·y^2 + g1 + 3·e·y) / (g0·x^3 + g0·x·y^2 + g0·x + g1·x^2·y + g1·y^3 + g1·y - 3·e)
slope = log(abs(p) + abs(q))
A Gaussian filter may be applied to the slope image to smooth rough edges. As shown in fig. 3C, the resulting gradient image may be blended with the color image and the pre-operative segmentation model image to achieve the differentiated visualization 400.
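A minimal sketch implementing the slope computation and blending described above follows, assuming the p and q relations given earlier, Sobel-based image gradients, and a pre-rendered overlay image of the pre-operative segmentation model; the [-1, 1] normalization range for x and y and the blend weights are illustrative assumptions.

```python
import numpy as np
import cv2

def slope_image(color_bgr, eps=1e-5):
    """Slope image from the p/q surface-gradient relations given above."""
    intensity = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    g0 = cv2.Sobel(intensity, cv2.CV_32F, 1, 0, ksize=3)    # horizontal image gradient
    g1 = cv2.Sobel(intensity, cv2.CV_32F, 0, 1, ksize=3)    # vertical image gradient

    h, w = intensity.shape
    # Normalized output-pixel coordinates; the [-1, 1] range is an assumption.
    x, y = np.meshgrid(np.linspace(-1.0, 1.0, w), np.linspace(-1.0, 1.0, h))

    r2 = x ** 2 + y ** 2 + 1.0                              # x^2 + y^2 + 1
    # Algebraically equal to the expanded expressions for p and q above.
    p = (-g0 * r2 - 3.0 * eps * x) / (-(g0 * x + g1 * y) * r2 + 3.0 * eps)
    q = (g1 * r2 + 3.0 * eps * y) / ((g0 * x + g1 * y) * r2 - 3.0 * eps)
    slope = np.log(np.abs(p) + np.abs(q) + 1e-12)           # small offset guards log(0)
    return cv2.GaussianBlur(slope, (7, 7), 0)               # smooth rough edges

def pass_through_blend(color_bgr, model_overlay_bgr,
                       w_color=0.5, w_slope=0.25, w_model=0.25):
    """Blend color image, gradient (slope) image, and segmentation-model overlay."""
    s = slope_image(color_bgr)
    s = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    s_bgr = cv2.cvtColor(s, cv2.COLOR_GRAY2BGR)
    blended = cv2.addWeighted(color_bgr, w_color, s_bgr, w_slope, 0)
    return cv2.addWeighted(blended, 1.0, model_overlay_bgr, w_model, 0)
```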
The systems and methods described herein may be implemented with a robotic-assisted medical system that includes an endoscopic imaging system, a user input device for identifying surface points, and a display system for displaying rendered endoscopic and mixed reality images. Fig. 4 is a simplified diagram of a robotic-assisted medical system 300 that may be used with the systems and methods described herein. In some embodiments, the system 300 may be suitable for therapeutic, diagnostic, and/or imaging procedures. Although some embodiments are provided herein with respect to such procedures, any reference to medical or surgical instruments and medical or surgical methods is non-limiting. The systems, instruments, and methods described herein may be used on animals, human cadavers, animal cadavers, portions of human or animal anatomy, for non-surgical diagnosis, and in industrial systems and general-purpose robotic, teleoperated, or robotic medical systems. For example, the systems, instruments, and methods described herein may be used for non-medical purposes, including industrial uses, general-purpose robotic uses, and manipulating non-tissue workpieces.
As shown in fig. 4, the system 300 generally includes a manipulator assembly 302. The manipulator assembly 302 is used to operate a medical instrument 303 (e.g., a surgical instrument) and a medical instrument 304 (e.g., an image capture device) while performing various procedures on a patient P. The manipulator assembly 302 may be a teleoperated, non-teleoperated, or hybrid teleoperated and non-teleoperated assembly having select degrees of freedom of motion that may be motor driven and/or teleoperated and select degrees of freedom of motion that may be non-motor driven and/or non-teleoperated. The manipulator assembly 302 is mounted to or positioned near an operating or surgical table T.
The user control system 306 allows an operator O (e.g., a surgeon or other clinician as shown in fig. 4) to view the intervention site and control the manipulator assembly 302. In some examples, the user control system 306 is a surgeon console, which is usually located in the same room as the operating or surgical table T, such as at a side of the surgical table on which the patient P is located. However, it should be understood that the operator O may be located in a different room or a completely different building than the patient P. That is, one or more user control systems 306 may be co-located with the manipulator assembly 302, or the user control systems may be located at different locations. Multiple user control systems allow more than one operator to control one or more robotic-assisted manipulator assemblies in various combinations.
The user control system 306 generally includes one or more input devices for controlling the manipulator assembly 302. The input devices may include any number of a variety of devices, such as keyboards, joysticks, trackballs, data gloves, trigger guns, hand-operated controllers, voice recognition devices, body motion or presence sensors, and the like. To provide the operator O with a strong sense of directly controlling the medical instruments 303, 304, the input devices may have the same degrees of freedom as the associated medical instruments 303, 304. In this way, the input devices provide the operator O with telepresence, the perception that the input devices are integral with the medical instruments 303, 304. Optionally, the system 300 may also include a second user control system 316 having a second user input device. The second input device may include any number of a variety of devices, such as a keyboard, joystick, trackball, data glove, trigger gun, hand-operated controller, voice recognition device, body motion or presence sensor, and the like. The second user control system 316 may be under the control of the operator O or another user for interaction with the captured images.
The manipulator assembly 302 supports the medical instruments 303, 304 and may include a kinematic manipulator support structure of one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place) and/or one or more servo controlled links (e.g., one or more links that may be controlled in response to commands from a control system), and an instrument holder. The manipulator assembly 302 may optionally include a plurality of actuators or motors that drive inputs on the medical instruments 303, 304 in response to commands from a control system (e.g., control system 310). The actuators may optionally include drive systems that, when coupled to the medical instruments 303, 304, may advance the medical instruments 303, 304 into a natural or surgically created anatomical passageway. Other drive systems may move the distal ends of the medical instruments 303, 304 in multiple degrees of freedom, which may include three degrees of linear motion (e.g., linear motion along the X, Y, Z Cartesian axes) and three degrees of rotational motion (e.g., rotation about the X, Y, Z Cartesian axes). In addition, the actuators may be used to actuate an articulatable end effector of the medical instrument 303, for example to grasp tissue in the jaws of a biopsy device or the like. Actuator position sensors, such as resolvers, encoders, potentiometers, and other mechanisms, may provide sensor data to the system 300 describing the rotation and orientation of the motor shafts. This position sensor data may be used to determine the motion of objects manipulated by the actuators. The manipulator assembly 302 may position the instruments 303, 304 it holds so that a pivot point occurs at the instrument's entry point into the patient. The pivot point may be referred to as a remote center of manipulation. The manipulator assembly 302 may then manipulate the instrument it holds so that the instrument may be pivoted about the remote center of manipulation, inserted into and withdrawn from the entry port, and rotated about the axis of the instrument shaft.
The system 300 also includes a display system 308 for displaying images or representations of the surgical site and the medical instrument 303 produced by the instrument 304. The display system 308 and the user control system 306 may be oriented so that the operator O can control the medical instruments 303, 304 and the user control system 306 with the perception of telepresence. In some examples, the display system 308 may present images of the surgical site recorded preoperatively or intraoperatively using image data generated by imaging technologies such as computed tomography (CT), magnetic resonance imaging (MRI), fluoroscopy, thermography, ultrasonography, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and the like.
The system 300 also includes a control system 310. The control system 310 includes at least one memory 314 and at least one computer processor 312 for effecting control among the medical instruments 303, 304, the user control system 306, and the display system 308. The control system 310 also includes programming instructions (e.g., a non-transitory machine readable medium storing the instructions) for implementing some or all of the methods described in accordance with aspects disclosed herein, including instructions for providing information to the display system 308. Although the control system 310 is shown as a single block in the simplified schematic of fig. 4, the system may include two or more data processing circuits, with one portion of the processing optionally being performed on or near the manipulator assembly 302, another portion of the processing being performed at the user control system 306, and so forth. The processor of the control system 310 may execute instructions corresponding to the processes disclosed herein and described in more detail below. Any of a wide variety of centralized or distributed data processing architectures may be employed. Similarly, the programming instructions may be implemented as a number of separate programs or subroutines, or they may be integrated into a number of other aspects of the robotic medical systems described herein. In one embodiment, the control system 310 supports wireless communication protocols such as Bluetooth, IrDA, HomeRF, IEEE 802.11, DECT, and wireless telemetry.
Movement of the manipulator assembly 302 may be controlled by the control system 310 so that a shaft or intermediate portion of an instrument mounted to the manipulator assembly 302 is constrained to safe motion through a minimally invasive surgical access site or other aperture. Such motion may include, for example, axial insertion of the shaft through the aperture site, rotation of the shaft about its axis, and pivotal motion of the shaft about a pivot point adjacent to the access site. In some cases, excessive lateral motion of the shaft that might otherwise tear the tissues adjacent to the aperture or inadvertently enlarge the access site may be inhibited. Some or all of the constraint on the motion of the manipulator assembly 302 at the access site may be imposed using mechanical manipulator joint linkages that inhibit improper motion, or it may be imposed in part or in whole using data processing and control techniques. In some embodiments, the control system 310 may receive force and/or torque feedback from the medical instrument 304. In response to the feedback, the control system 310 may send signals to the user control system 306. In some examples, the control system 310 may send signals instructing one or more actuators of the manipulator assembly 302 to move the medical instruments 303, 304.
In the description, specific details are set forth describing some embodiments. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. Those skilled in the art may realize other elements that, although not specifically described herein, are within the scope and spirit of this disclosure.
Elements described in detail with reference to one embodiment, implementation, or application may, whenever practical, be included in other embodiments, implementations, or applications in which they are not specifically shown or described. For example, if an element is described in detail with reference to one embodiment and is not described with reference to a second embodiment, the element may nevertheless be considered to be included in the second embodiment. Thus, to avoid unnecessary repetition in the following description, one or more elements shown and described in association with one embodiment, implementation, or application may be incorporated into other embodiments, implementations, or aspects unless specifically described otherwise, unless the one or more elements would make an embodiment or implementation non-functional, or unless two or more of the elements provide conflicting functions. Not all of the illustrated processes may be performed in all embodiments of the disclosed methods. Additionally, one or more processes that are not expressly illustrated or described may be included before, after, in between, or as part of the described processes. In some embodiments, one or more processes may be performed by a control system or may be implemented, at least in part, in the form of executable code stored on a non-transitory tangible machine-readable medium that, when executed by one or more processors, may cause the one or more processors to perform the one or more processes.
Any alterations and further modifications to the described devices, instruments, and methods, and any further applications of the principles of the present disclosure, are fully contemplated as would normally occur to one skilled in the art to which the disclosure relates. In addition, the dimensions provided herein are for specific examples, and it is contemplated that different sizes, dimensions, and/or ratios may be used to implement the concepts of the present disclosure. To avoid needless descriptive repetition, one or more components or actions described in accordance with one illustrative embodiment can be used or omitted as applicable from other illustrative embodiments. For the sake of brevity, numerous iterations of these combinations will not be described separately. For simplicity, in some instances the same reference numbers are used throughout the drawings to refer to the same or like parts.
The systems and methods described herein may be suited for imaging any of a variety of anatomical systems, including the lung, colon, intestines, stomach, liver, kidneys and renal calyces, brain, heart, circulatory system including vasculature, and the like. Although some embodiments are provided herein with respect to medical procedures, any reference to medical or surgical instruments and medical or surgical methods is non-limiting. For example, the instruments, systems, and methods described herein may be used for non-medical purposes, including industrial uses, general robotic uses, and sensing or manipulating non-tissue workpieces. Other example applications involve cosmetic improvements, imaging of human or animal anatomy, gathering data from human or animal anatomy, and training medical or non-medical personnel. Additional example applications include procedures performed on tissue removed from (and not returned to) human or animal anatomy and procedures performed on human or animal cadavers. Further, these techniques can also be used for surgical and non-surgical medical treatment or diagnostic procedures.
One or more elements in embodiments of the present disclosure may be implemented in software for execution on a processor of a computer system, such as a control processing system. When implemented in software, the elements of the embodiments of the present disclosure may be code segments that perform various tasks. The program or code segments can be stored in a processor readable storage medium or device and may be downloaded by way of a computer data signal embodied in a carrier wave over a transmission medium or a communication link. The processor readable storage device may include any medium that can store information, including optical, semiconductor, and/or magnetic media. Examples of processor readable storage devices include an electronic circuit; a semiconductor device; a semiconductor memory device; a read only memory (ROM); a flash memory; an erasable programmable read only memory (EPROM); a floppy diskette; a CD-ROM; an optical disk; a hard disk; or another storage device. The code segments may be downloaded via computer networks such as the Internet, an intranet, and the like. Any of a wide variety of centralized or distributed data processing architectures may be employed. The programming instructions may be implemented as a number of separate programs or subroutines, or they may be integrated into a number of other aspects of the systems described herein. In some examples, the control system may support wireless communication protocols such as Bluetooth, Infrared Data Association (IrDA), HomeRF, IEEE 802.11, Digital Enhanced Cordless Telecommunications (DECT), ultra-wideband (UWB), wireless personal area networks, and wireless telemetry.
It should be noted that the processes and displays presented may not necessarily be associated with any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the described operations. The required structure for a variety of these systems will appear as elements of the claims. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
The present disclosure describes various instruments, portions of instruments, and states of anatomical structures in terms of three-dimensional space. As used herein, the term position refers to the location of an object or a portion of an object in three-dimensional space (e.g., three degrees of translational freedom along Cartesian x-, y-, and z-coordinates). As used herein, the term orientation refers to the rotational placement of an object or a portion of an object (e.g., in one or more degrees of rotational freedom such as roll, pitch, and/or yaw). As used herein, the term pose refers to the position of an object or a portion of an object in at least one degree of translational freedom and the orientation of that object or portion of the object in at least one degree of rotational freedom (e.g., up to six total degrees of freedom). As used herein, the term shape refers to a set of poses, positions, or orientations measured along an object.
While certain illustrative embodiments of the invention have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that the embodiments are not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those of ordinary skill in the art.

Claims (34)

1. A system, comprising:
a processor; and
a memory, on which computer readable instructions are stored, which when executed by the processor cause the system to:
generate a current endoscopic video image of a surgical environment;
capture an image from the current endoscopic video image;
display the current endoscopic video image and the captured image in a common display; and
perform an action with the captured image in response to a user input.
2. The system of claim 1, wherein the displayed current endoscopic video image and the captured image both comprise images of a medical instrument and patient tissue, wherein the medical instrument is positioned in the captured image at a first location relative to the patient tissue and at a second location different from the first location in the current endoscopic video image.
3. The system of claim 1, wherein the captured image is a still image.
4. The system of claim 1, wherein the captured image is a video image.
5. The system of claim 1, wherein the displayed endoscopic video image is displayed in a first window and the displayed captured image is displayed in a second window, wherein the first window and the second window are adjacent to each other.
6. The system of claim 1, wherein the action is displaying a videomark generated in response to the user input.
7. The system of claim 1, wherein the action is measuring a dimension between two points in the captured image.
8. The system of claim 7, wherein the two points are generated in response to the user input.
9. The system of claim 1, wherein the action is changing a virtual viewpoint of a virtual camera.
10. The system of claim 1, further comprising suspending a follow mode of a robotic-assisted medical system and entering an image control mode prior to performing the action with the captured image.
11. The system of claim 10, further comprising a user input device, wherein movement of a medical instrument in the surgical environment in the follow mode is responsive to movement of the user input device, and wherein movement of the medical instrument in the surgical environment in the image control mode is not responsive to movement of the user input device.
12. The system of claim 10, further comprising a first user input device, wherein movement of a medical instrument in the surgical environment is responsive to movement of the first user input device, and further comprising a second user input device, wherein performing the action with the captured image is responsive to the user input at the second user input device.
13. The system of claim 1, wherein the captured image is captured at a first time and the current endoscopic video image and the captured image are displayed at a second time later than the first time.
14. The system of claim 1, wherein the captured image is enhanced by applying a grid pattern over at least a portion of the captured image.
15. The system of claim 1, wherein the captured image is enhanced by means of a depth map point cloud.
16. The system of claim 1, wherein the captured image is enhanced by means of a pass-through visualization comprising a gradient image.
17. The system of claim 1, wherein the captured image is enhanced by text or logos.
18. A non-transitory machine-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to:
generate a current endoscopic video image of a surgical environment;
capture an image from the current endoscopic video image;
display the current endoscopic video image and the captured image in a common display; and
perform an action with the captured image in response to a user input.
19. The non-transitory machine readable medium of claim 18, wherein the displayed current endoscopic video image and the captured image both comprise images of a medical instrument and patient tissue, wherein the medical instrument is positioned in the captured image at a first location relative to the patient tissue and at a second location different from the first location in the current endoscopic video image.
20. The non-transitory machine-readable medium of claim 18, wherein the captured image is a still image.
21. The non-transitory machine-readable medium of claim 18, wherein the captured image is a video image.
22. The non-transitory machine readable medium of claim 18, wherein the displayed endoscopic video image is displayed in a first window and the displayed captured image is displayed in a second window, wherein the first window and the second window are adjacent to each other.
23. The non-transitory machine-readable medium of claim 18, wherein the action is displaying a videomark generated in response to the user input.
24. The non-transitory machine-readable medium of claim 18, wherein the action is measuring a dimension between two points in the captured image.
25. The non-transitory machine-readable medium of claim 24, wherein the two points are generated in response to the user input.
26. The non-transitory machine-readable medium of claim 18, wherein the action is changing a virtual viewpoint of a virtual camera.
27. The non-transitory machine readable medium of claim 18, wherein the instructions, when executed by the one or more processors, cause the one or more processors to pause a follow mode of a robotic-assisted medical system and enter an image control mode before performing the action with the captured image.
28. The non-transitory machine readable medium of claim 27, wherein the instructions, when executed by the one or more processors, cause the one or more processors to control movement of a medical instrument in the surgical environment in response to movement of a user input device in the follow mode and to not control movement of the medical instrument in the surgical environment in response to movement of the user input device in the image control mode.
29. The non-transitory machine readable medium of claim 27, wherein the instructions, when executed by the one or more processors, cause the one or more processors to control movement of the medical instrument in the surgical environment in response to movement of a user input device, and wherein performing the action with the captured image is in response to the user input at a second user input device.
30. The non-transitory machine readable medium of claim 18, wherein the captured image is captured at a first time and the current endoscopic video image and the captured image are displayed at a second time that is later than the first time.
31. The non-transitory machine readable medium of claim 18, wherein the captured image is enhanced by applying a grid pattern over at least a portion of the captured image.
32. The non-transitory machine readable medium of claim 18, wherein the captured image is enhanced with a depth map point cloud.
33. The non-transitory machine readable medium of claim 18, wherein the captured image is enhanced by a pass-through visualization comprising a gradient image.
34. The non-transitory machine readable medium of claim 18, wherein the captured image is enhanced by text or a logo.
CN202280054728.5A 2021-08-10 2022-08-09 System and method for differentiating interaction environments Pending CN117795555A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/231,658 2021-08-10
US202263303101P 2022-01-26 2022-01-26
US63/303,101 2022-01-26
PCT/US2022/039786 WO2023018685A1 (en) 2021-08-10 2022-08-09 Systems and methods for a differentiated interaction environment

Publications (1)

Publication Number Publication Date
CN117795555A true CN117795555A (en) 2024-03-29

Family

ID=90380236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280054728.5A Pending CN117795555A (en) 2021-08-10 2022-08-09 System and method for differentiating interaction environments

Country Status (1)

Country Link
CN (1) CN117795555A (en)

Similar Documents

Publication Publication Date Title
CN110944595B (en) System for mapping an endoscopic image dataset onto a three-dimensional volume
US20240041531A1 (en) Systems and methods for registering elongate devices to three-dimensional images in image-guided procedures
KR102501099B1 (en) Systems and methods for rendering on-screen identification of instruments in teleoperated medical systems
US11547490B2 (en) Systems and methods for navigation in image-guided medical procedures
CN109069217B (en) System and method for pose estimation in image-guided surgery and calibration of fluoroscopic imaging system
JP7118890B2 (en) Systems and methods for using registered fluoroscopic images in image-guided surgery
KR20170127561A (en) System and method for on-screen identification of instruments in a remotely operated medical system
US20210315637A1 (en) Robotically-assisted surgical system, robotically-assisted surgical method, and computer-readable medium
US20220211270A1 (en) Systems and methods for generating workspace volumes and identifying reachable workspaces of surgical instruments
US12011236B2 (en) Systems and methods for rendering alerts in a display of a teleoperational system
CN117795555A (en) System and method for differentiating interaction environments
EP4384984A1 (en) Systems and methods for a differentiated interaction environment
CN117813631A (en) System and method for depth-based measurement in three-dimensional views
EP4384985A1 (en) Systems and methods for depth-based measurement in a three-dimensional view
US11850004B2 (en) Systems and methods for determining an arrangement of explanted tissue and for displaying tissue information
US20230360212A1 (en) Systems and methods for updating a graphical user interface based upon intraoperative imaging
US20230099522A1 (en) Elongate device references for image-guided procedures
WO2023055723A1 (en) Navigation assistance for an instrument
WO2023129934A1 (en) Systems and methods for integrating intra-operative image data with minimally invasive medical techniques
CN117355862A (en) Systems, methods, and media including instructions for connecting model structures representing anatomic passageways
WO2023150449A1 (en) Systems and methods for remote mentoring in a robot assisted medical system

Legal Events

Date Code Title Description
PB01 Publication