US12251170B2 - Configuration marker design and detection for instrument tracking
- Publication number
- US12251170B2 (Application US 17/942,777)
- Authority
- US
- United States
- Prior art keywords
- tool
- marker
- features
- image
- feature
- Prior art date
- Legal status
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/37—Leader-follower robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/90—Identification means for patients or instruments, e.g. tags
- A61B90/94—Identification means for patients or instruments, e.g. tags coded with symbols, e.g. text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/371—Surgical systems with images on a monitor during operation with simultaneous use of two cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/39—Markers, e.g. radio-opaque or breast lesions markers
- A61B2090/3983—Reference marker arrangements for use with image guided surgery
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- Minimally-invasive surgical techniques are aimed at reducing the amount of extraneous tissue that is damaged during diagnostic or surgical procedures, thereby reducing patient recovery time, discomfort, and deleterious side effects.
- the average length of a hospital stay for standard surgery may be shortened significantly using minimally-invasive surgical techniques.
- patient recovery times, patient discomfort, surgical side effects, and time away from work may also be reduced with minimally-invasive surgery.
- a common form of minimally-invasive surgery is endoscopy, and a common form of endoscopy is laparoscopy, which is minimally-invasive inspection and surgery inside the abdominal cavity.
- In standard laparoscopic surgery, a patient's abdomen is insufflated with gas, and cannula sleeves are passed through small (approximately ½ inch or less) incisions to provide entry ports for laparoscopic instruments.
- Laparoscopic surgical instruments generally include a laparoscope or an endoscope (for viewing the surgical field), and working tools.
- the working tools are similar to those used in conventional (open) surgery, except that the working end or end effector of each tool is separated from its handle by an extension tube.
- end effector means the actual working part of the surgical instrument and can include clamps, graspers, scissors, staplers, and needle holders, for example.
- the surgeon passes these working tools or instruments through cannula sleeves to an internal surgical site and manipulates them from outside the abdomen.
- the surgeon views the procedure by means of a monitor that displays an image of the surgical site taken from the laparoscope.
- Similar endoscopic techniques are employed in, e.g., arthroscopy, retroperitoneoscopy, pelviscopy, nephroscopy, cystoscopy, cisternoscopy, sinoscopy, hysteroscopy, urethroscopy, and the like.
- Minimally-invasive telesurgical robotic systems are being developed to increase a surgeon's dexterity when working within an internal surgical site, as well as to allow a surgeon to operate on a patient from a remote location.
- the surgeon is often provided with an image of the surgical site at a control console. While viewing a three-dimensional (3-D) image of the surgical site on a suitable viewer or display, the surgeon performs the surgical procedures on the patient by manipulating master input or control devices of the control console. Each of the master input devices controls the motion of a servomechanically operated surgical instrument.
- the telesurgical system can provide mechanical actuation and control of a variety of surgical instruments or tools having end effectors that perform various functions for the surgeon, e.g., holding or driving a needle, grasping a blood vessel, dissecting tissue, or the like, in response to manipulation of the master input devices.
- the surgeon may manipulate the tool so that its end effector is moved outside of the endoscope's field of view, or the end effector may become difficult to see due to occlusion by fluids or other intervening objects. In such cases it would be useful to be able to provide assistance to the surgeon in locating and/or identifying the end effector on the workstation's display screen.
- Accurate information regarding a tool's 3-D pose can be used to provide this assistance. In general, accurate information of a tool's 3-D pose is important for a number of image guided surgical and user interface applications.
- One approach to tool tracking fuses kinematics-based pose information with image-derived pose information.
- Such a fusion of tool tracking information can provide the advantages of both types of data without the associated disadvantages.
- kinematics joint data are usually available at a very high update rate
- a kinematics estimated pose may not be very accurate due to error accumulation at each joint, with errors in joints located farther away from the tool having a greater impact on accuracy.
- image-derived tool pose estimation can be highly accurate, but may run at a slower update rate than what is useful for many real-time applications. By correcting the higher-update kinematics pose estimation using the more accurate image-derived tool pose estimation, a more accurate higher-update tool pose estimation can be obtained.
- an optical tracker is used to track the position of a marker assembly that is attached to a location on the surgical instrument outside the patient's body.
- the optical tracker requires a dedicated stereo camera and dedicated lighting, which take space in an already crowded operating room. Attaching such optical trackers also reduces the range of motion of the robotic arms due to the potential for collision. There can also be some level of error that results from propagating the 3-D pose to the surgical tool tip. Additional problems include: the extra space required, limited visibility range, the added hardware setup in the operating room, and cost.
- Another approach uses an electromagnetic tracker, which has its own associated disadvantages. For example, most surgical instruments have metal parts that can cause distortion, which can vary in time due to changes in distances between an electromagnetic tracker attached to one tool tip and metal components of an adjacent surgical tool. An electromagnetic tracker also involves extra cost.
- determining 3-D pose is a well-studied problem in computer/robot vision.
- a 3-D pose can be solved by starting with the known features of an object and matching these features with their 2-D correspondences in the image. Features such as points and line segments are commonly used. Determination of the 3-D pose of a rigid body from a single 2-D image is referred to as “pose estimation” in computer vision (see the introduction in Christophe Doignon, “Scene Reconstruction, Pose Estimation and Tracking,” 2007). If point-based correspondences are used, the problem is known as “perspective-n-point,” where n is the number of correspondences. Three non-collinear points provide up to four solutions. Four or more non-collinear points provide a unique solution.
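- As a rough sketch of the point-based variant, the perspective-n-point problem can be solved with an off-the-shelf routine such as OpenCV's solvePnP; the marker geometry, detections, and camera intrinsics below are illustrative placeholders, not values from this patent:

```python
# Minimal perspective-n-point sketch (all numeric values are placeholders).
import cv2
import numpy as np

# Known 3-D feature coordinates in the marker's local frame; four non-collinear
# points give a unique solution, per the discussion above (millimeters).
object_points = np.array([[0, 0, 0],
                          [10, 0, 0],
                          [10, 14, 0],
                          [0, 14, 0]], dtype=np.float64)

# Their detected 2-D correspondences in the image (pixels).
image_points = np.array([[322, 241],
                         [398, 246],
                         [395, 330],
                         [318, 324]], dtype=np.float64)

camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume an undistorted (rectified) image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # marker orientation as a 3x3 rotation matrix
    print("marker pose: R =", R, "t =", tvec.ravel())
```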
- Determination of the 3-D pose of a rigid object using a stereo camera can be accomplished using two approaches.
- the determination of the 3-D pose can be approached as an optimization problem where the 3-D pose is selected that provides the best fit between the projected 3-D points and the image correspondences in both images.
- image points in both views can be used to determine corresponding 3-D points using stereo triangulation and relative pose is determined by solving a rigid transformation between the determined 3-D points and corresponding model points.
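- A minimal sketch of the second approach, assuming calibrated 3x4 projection matrices and matched feature detections (all numeric values are toy placeholders, not data from this patent): triangulate 3-D points from the two views, then recover the rigid transform to the model points with an SVD (Kabsch) fit:

```python
import numpy as np
import cv2

def rigid_transform(model_pts, observed_pts):
    """SVD (Kabsch) fit: rotation R and translation t with observed ~ R @ model + t."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd((model_pts - mc).T @ (observed_pts - oc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, oc - R @ mc

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])  # 5 mm baseline

ptsL = np.array([[322.0, 398, 395], [241, 246, 330]])  # matched detections, left
ptsR = np.array([[281.0, 357, 354], [241, 246, 330]])  # matched detections, right

pts4d = cv2.triangulatePoints(P_left, P_right, ptsL, ptsR)
pts3d = (pts4d[:3] / pts4d[3]).T                       # Nx3 triangulated points

model_points = np.array([[0.0, 0, 0], [10.0, 0, 0], [10.0, 14, 0]])  # known geometry
R, t = rigid_transform(model_points, pts3d)            # marker pose in camera frame
```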
- an image-derived estimate is only available when the object's features are within the field of view of the imaging device(s) and they can be extracted.
- Some of the factors that may prevent the extraction of features include: occlusion of the features by anatomical structure or other instruments, degenerated image quality caused by fast instrument or camera motion (i.e., motion blur), adverse lighting conditions (e.g., saturation when the light is too strong, lack of contrast when the light is too weak, strong specularity due to the relative geometric configurations of the light source, instrument, and imaging device), and complex background clutter.
- More reliable image-derived tool pose estimation would, therefore, be beneficial in order to increase the rate at which highly-accurate tool pose estimates are available, which in turn may help to provide more accurate overall tool tracking. Accordingly, improved methods and systems providing improved image-derived tool pose estimates would be desirable, particularly those with reduced sensitivities to adverse conditions, such as occlusions, motion blur, and adverse lighting conditions.
- improved systems, methods, and tools for performing 3-D tool tracking using image-derived data from one or more tool located reference features are provided.
- the use of one or more reference features can provide for improved image-derived tool pose estimation by supplying one or more features that can be more reliably imaged and processed.
- Effective and reliable image-derived tool pose estimation can be particularly useful during minimally-invasive surgery, where accurate and reliable tool tracking can provide a number of advantages, such as to provide assistance to a Surgeon in locating an occluded or out-of-view tool.
- the disclosed systems, methods, and tools can be used in a wide variety of applications, both inside and outside a human body, as well as in non-surgical tool tracking applications. In general, accurate information of a tool's 3-D pose is important for a number of image-guided and user interface applications.
- a robotic surgical method for determining a tool state for an imaged tool includes: capturing a first image of a tool that includes multiple features defining a first marker, where at least one of the features of the first marker includes an identification feature; determining a position for the first marker by processing the first image; determining an identification for the first marker by using the at least one identification feature by processing the first image; and determining a tool state for the tool by using the position and the identification of the first marker.
- a robotic surgical method for determining a tool state for an imaged tool can involve a number of options.
- the first marker can include redundant features defining error-checking data and/or check sum data
- the method can include: processing the first image to detect the redundant features and read the error-checking data and/or check sum data; and validating the identification of the first marker by verifying that the first marker identification is consistent with the error-checking data and/or check sum data.
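- As a schematic illustration only (the patent does not fix a bit layout at this point in the text), the validation step might look like the following, where the redundant features encode inverted copies of the data bits plus a parity checksum:

```python
# Hypothetical sketch of validating a decoded marker identification against
# redundant error-checking bits (this bit layout is an assumption).
def validate_marker(data_bits, check_bits):
    """Accept an ID only if the redundant bits are consistent with it."""
    # Example scheme: each check bit is the inverse of the matching data bit,
    # plus a final parity (checksum) bit over the data bits.
    inverses_ok = all(c == (1 - d) for d, c in zip(data_bits, check_bits[:-1]))
    parity_ok = check_bits[-1] == (sum(data_bits) % 2)
    return inverses_ok and parity_ok

bits = [1, 0, 1, 1, 0, 0]        # decoded identification bits -> ID 44
check = [0, 1, 0, 0, 1, 1, 1]    # redundant bits read from the image
if validate_marker(bits, check):
    marker_id = int("".join(map(str, bits)), 2)
```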
- Some options involve a tool having two or more markers.
- Each of the two or more markers can have at least one identification feature associated with an identification that differs from other markers on the tool.
- the first image can include a second marker of the tool.
- a method can include: determining a position for the second marker by processing the first image; determining the identification of the second marker by processing the first image; and determining a tool state for the tool by using the second marker position, the second marker identification, and the predetermined positional relationship data associated with the second marker.
- a method can include steps that can be used where the second marker is obscured in the first image, such as: moving the tool after determining the tool state by using the first marker, capturing a second image of the moved tool where the first marker is obscured but the second marker is not obscured; determining a position for the second marker by processing the second image; determining the identification of the second marker by processing the second image; and determining a moved tool state for the tool using the second marker position, the second marker identification, and the predetermined positional relationship data associated with the second marker.
- Some options involve stereo images of a tool. For example, a stereo-imaging device, such as a stereoscopic endoscope, can be used to capture first and second images of the surgical tool, which can be processed so as to determine 3-D positional data for the first marker.
- a tool state can be determined in three dimensions or more.
- each marker can have at least one localizer feature, and at least one identification feature at a known positional relationship relative to at least one localizer feature.
- the position of the first marker can be determined by using the localizer feature and the orientation feature.
- the identification of the first marker can be determined by identifying at least one localizer feature of the first marker and reading the identification feature according to the known positional relationship between the localizer feature and the identification feature.
- a method can include: processing the first image so as to identify the at least one localizer feature; selecting a candidate identity for the first marker; generating a candidate view of a marker having the candidate identity by using the identified at least one localizer feature; and comparing the candidate view with the first image so as to verify that the selected candidate identity is the first marker identity.
- Selecting a candidate identity for the first marker can include generating an estimated pose for the surgical tool by using at least one prior tool state from a prior image of the tool or joint data from a robotic actuation system effectuating movement of the tool.
- the candidate identity can be selected so as to result in a candidate pose for the surgical tool that is within a predetermined deviation of the estimated pose for the surgical tool.
- a method can include processing an image containing multiple surgical tools, where each surgical tool has an identity.
- An identity can be associated with an imaged tool having the first marker by verifying that the candidate identity for the first marker results in a candidate pose that is within a predetermined deviation of the estimated pose for the surgical tool having the first marker.
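- A sketch of this gating logic follows; the function names, the translation-only deviation metric, and the 20 mm threshold are assumptions for illustration, not values from this patent:

```python
import numpy as np

def pose_deviation(T_a, T_b):
    """Translation distance between two 4x4 poses; a rotation term could be added."""
    return np.linalg.norm(T_a[:3, 3] - T_b[:3, 3])

def select_identity(candidate_poses, estimated_pose, max_dev_mm=20.0):
    """candidate_poses maps marker_id -> 4x4 tool pose implied by that identity."""
    best = min(candidate_poses,
               key=lambda mid: pose_deviation(candidate_poses[mid], estimated_pose))
    if pose_deviation(candidate_poses[best], estimated_pose) <= max_dev_mm:
        return best   # identity whose implied pose matches the estimate
    return None       # no candidate within the predetermined deviation
```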
- For example, a Maximally Stable Extremal Region (MSER) approach or adaptive thresholding can be used to extract marker features from the image.
- in another embodiment, a robotic surgical system is provided that can be used for determining a tool state for an imaged tool.
- the system includes: a surgical tool having multiple features defining a first marker, with at least one of the features including an identification feature; an imaging device for capturing a first image of the tool during use and outputting first image data in response thereto; and a processor coupled with the imaging device and adapted to process the first image so as to: determine positional data for the first marker, determine an identification of the first marker by using the identification feature; and determine tool state data for the imaged tool by using the positional data for the first marker and the identification of the first marker.
- a robotic surgery system for determining a tool state for an imaged tool can include optional components and/or variations.
- a system can include a tangible medium that includes machine-readable instructions executable by the processor for processing a captured image.
- a system can include an input for non-endoscopically derived tool state data that is derived from robotic joints supporting the tool, and the processor can be configured to process the non-endoscopically derived tool state information and the image-derived tool state information for tracking the state of the tool.
- the imaging device can be adapted to capture a second image of the surgical tool at substantially the same time as the first image and output second image data in response thereto.
- the processor can be configured so as to determine 3-D positional data for the first marker by processing the first and second image data.
- the imaging device can include a stereoscopic endoscope.
- a first marker can include redundant features defining error-checking data.
- the processor can be configured to process the first image data so as to: detect the first marker redundant features; read the error-checking data; and validate the identification of the first marker by verifying that the first marker identification is consistent with the error-checking data.
- Redundant features can also define check sum data and the processor can be configured to process the first image data so as to read the check sum data.
- the processor can validate the identification of the first marker by verifying that the first marker identification is consistent with the check sum data.
- Markers can have various configurations.
- at least one marker can include at least one localizer feature that is shared with an adjacent marker.
- the features of one or more markers can be arranged in a two-dimensional (2-D) pattern.
- One or more markers can use circles or corners as localizer features. The corners can include saddle points.
- One or more markers can include three localizer features.
- One or more markers can include four localizer features.
- One or more markers can include four circles and a bar as localizer features.
- a marker can include text, which can be modified to increase positional data or discriminative features.
- Optional components and/or variations can involve multiple markers. Multiple markers can be distributed around a tool and the processor can include data for each marker indicating an associated marker identification and an associated predetermined positional relationship between the marker and a joint of the surgical tool. Multiple markers can have identification features that differ sufficiently for the processor to determine the identification of the markers encompassed within the first image.
- a processor can use the determined 3-D pose to modify a displayed image of the tool in a variety of ways.
- the displayed image can be modified so that the added reference features are less visually obtrusive, or are “erased” entirely by altering portions of the images corresponding to the reference features.
- a surgical tool is provided for use with a robotic surgery system that includes an imaging device for capturing an image of the surgical tool during use and a processor coupled with the imaging device for processing the captured image so as to determine image-derived positional information for the surgical tool.
- the surgical tool includes multiple markers, where each marker has at least one identification feature. The identification features of each marker differ sufficiently for the surgery system to discriminate between the markers based on images encompassing the markers.
- a robotic surgical method includes capturing a first image of a surgical tool, the surgical tool including multiple features defining multiple markers where each marker has a predetermined positional relationship with the surgical tool, the first image including one of the markers; determining a position for the imaged marker by processing the first image; generating an estimated tool state for the tool by using at least one prior tool state from a prior image of the tool or joint data from a robotic actuation system effectuating movement of the tool; and determining a tool state for the tool using the position of the imaged marker, the predetermined positional relationship between the surgical tool and the imaged marker, and the estimated tool state for the tool.
- a surgical robotic tool tracking method includes: directing illuminating light from a light source onto a robotic surgical tool within a patient body where the illuminating light includes a visible light spectrum, the tool including a plurality of primitive features having known positions on the tool, and where each feature includes a spherical reflective surface; capturing stereo images of a plurality of the primitive features when the tool is within the patient body, the stereo images being captured by a stereo image capture device adjacent the illumination source so that the illumination light reflected from the imaged primitive features towards the image capture device substantially aligns with spherical centers of the surfaces of the imaged primitive features; and determining a position for the tool by processing the stereo images so as to locate the spherical centers of the imaged primitive features by using the reflected light.
- a surgical robotic tool tracking method can involve a number of options. Determining a position for the tool by processing the image can be accomplished so as to identify at least one of the primitive features by using specular reflected light. The stereo images can be processed so as to determine 3-D positional data for the spherical centers of the imaged primitive features.
- a constellation algorithm can be used to identify a pattern of primitive features in the first image.
- a method can include generating an estimated tool state for the tool by using at least one prior tool state from a prior image of the tool or joint data from a robotic actuation system effecting movement of the tool, and using the estimated tool state in the constellation algorithm.
- a method can include: capturing stereo images for multiple time points; generating an estimated tool state for the multiple time points; and rejecting any incompatible pattern detection using a robust estimation technique, which can be a Random Sample Consensus (RANSAC) technique.
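- One illustrative form of such rejection fits a single rigid transform to the detected pattern with a RANSAC loop and keeps only the consistent detections; the thresholds and iteration count are assumptions:

```python
import random
import numpy as np

def rigid_transform(model_pts, observed_pts):
    # Kabsch/SVD fit (same helper as in the earlier stereo sketch).
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd((model_pts - mc).T @ (observed_pts - oc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, oc - R @ mc

def reject_incompatible(model_pts, detected_pts, iters=200, tol_mm=3.0):
    """Keep the largest subset of detections consistent with one rigid pose."""
    best = np.array([], dtype=int)
    for _ in range(iters):
        idx = random.sample(range(len(model_pts)), 3)    # minimal sample
        R, t = rigid_transform(model_pts[idx], detected_pts[idx])
        resid = np.linalg.norm(model_pts @ R.T + t - detected_pts, axis=1)
        inliers = np.nonzero(resid < tol_mm)[0]
        if len(inliers) > len(best):
            best = inliers
    return best                                          # indices to keep
```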
- a model based image signature can be used in the identification of a primitive feature in an image.
- a method can include: processing the stereo images so as to identify a natural feature of the tool in both of the images; determining a 3-D position for the identified natural feature; and determining an image-derived tool state by using the 3-D position for the natural feature in combination with the 3-D positional data for the imaged primitive features.
- a method can include generating an estimated tool state for the tool by using at least one prior tool state from a prior image of the tool or joint data from a robotic actuation system effecting movement of the tool, and using the estimated tool state to reject an incompatible pattern detection.
- At least one of the primitive features can include a convex or concave spherical reflective surface aligned with a joint axis of the tool, and the reflective surface can be defined by a joint structure.
- a minimally-invasive robotic surgery system includes: a robotic surgical tool having multiple primitive features with known positions on the tool, where each feature includes a spherical reflective surface; a light source oriented to transmit illumination light within a patient body; a stereo image capture device adjacent the illumination source so that the illumination light reflected from the primitive features toward the image capture device substantially aligns with the spherical centers of the spherical surfaces; and a processor coupled with the image capture device and configured for determining a position for the tool by processing stereo images so as to locate the spherical centers of the primitive features by using the reflected light.
- a minimally-invasive robotic surgery system can involve a number of options.
- a system can include a tangible medium that includes machine-readable instructions executable by the processor for processing the stereo images.
- the processor can be configured to determine a position for the tool by processing the stereo images so as to identify at least one of the multiple primitive features by using specular reflected light.
- a primitive feature can be aligned with a joint axis of the tool and can include a reflective spherical surface defined by a joint structure.
- the processor can be further configured so as to determine 3-D positional data for the spherical centers of the imaged primitive features by processing the stereo images.
- the imaging device can include a stereoscopic endoscope.
- a spherical reflective surface can include a convex or concave surface.
- a surgical tool is provided for use with a robotic surgery system that includes: a stereo imaging device for capturing stereo images of the surgical tool during use; and a processor coupled with the imaging device for processing the captured stereo images so as to determine image-derived positional information for the surgical tool.
- the surgical tool includes multiple primitive features with each primitive feature including a spherical reflective surface.
- an object tracking system includes: an object having multiple primitive features with each primitive feature including a spherical reflective surface; a light source oriented to transmit illumination light toward the object; a stereo image capture device for capturing stereo images of the object, the image device being disposed adjacent the illumination source so that illumination light reflected from a plurality of the primitive features towards the image capture device substantially aligns with spherical centers of the spherical surfaces, the image device outputting image data for the stereo images; and a processor coupled with the image capture device and configured to process the image data so as to: determine 3-D position data for three or more of the imaged primitive features; and determine a position for the object by processing the 3-D position data.
- a method for estimating the pose of a surgical tool having three or more substantially corner-less primitive features having known positions on the tool includes: using a stereoscopic endoscope to capture stereo images of three or more of the primitive features, the stereo images including a first image and a second image; extracting at least three primitive feature images from the first image; extracting at least three primitive feature images from the second image; determining correspondences between extracted primitive feature images by using image signatures; using the determined correspondences to determine 3-D positions for at least three of the primitive features; identifying a pattern of extracted primitive feature images that corresponds to a pattern of the tool primitive features; and estimating a pose for the surgical tool by using the identified pattern and the determined 3-D positions.
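- A sketch of the signature-based correspondence step on rectified stereo images follows; the descriptor vectors, row coordinates, and scanline tolerance are assumptions, and the patent's specific image signatures are not reproduced here:

```python
import numpy as np

def match_features(sigs_left, rows_left, sigs_right, rows_right, row_tol=2.0):
    """Greedy nearest-signature matching restricted to near-equal scanlines."""
    matches = []
    for i, sig in enumerate(sigs_left):
        cands = [j for j, r in enumerate(rows_right)
                 if abs(r - rows_left[i]) <= row_tol]   # epipolar consistency
        if cands:
            d = [np.linalg.norm(sig - sigs_right[j]) for j in cands]
            matches.append((i, cands[int(np.argmin(d))]))
    return matches   # index pairs that feed stereo triangulation
```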
- FIG. 1 is a plan view of a minimally-invasive robotic surgery system being used to perform a surgery, in accordance with embodiments.
- FIG. 2 is a front view of a surgeon's control console for a robotic surgery system, in accordance with embodiments.
- FIG. 3 is a front view of a robotic surgery system vision cart, in accordance with embodiments.
- FIG. 4 diagrammatically illustrates a robotic surgery system, in accordance with embodiments.
- FIG. 5 A is a front view of a patient side cart (surgical robot) of a robotic surgery system, in accordance with embodiments.
- FIGS. 5 B and 5 C are respective front views of an 8 mm shaft robotic surgery tool and a 5 mm shaft robotic surgery tool, in accordance with embodiments.
- FIG. 6 diagrammatically illustrates relative differences between a kinematics-estimated tool pose, an image-derived estimated tool pose, and a true tool pose, in accordance with embodiments.
- FIG. 7 diagrammatically illustrates variations with time of a raw kinematics-estimated tool pose, an image-derived estimated tool pose, an estimate of the true tool pose, and a true tool pose, in accordance with embodiments.
- FIG. 9 is a flow diagram of a tool tracking method employing imaging of markers, in accordance with embodiments.
- FIG. 10 diagrammatically illustrates a system for tracking tools with markers, in accordance with embodiments.
- FIG. 11 is a flow diagram of a tool tracking method for determining a tool state showing steps for processing stereoscopic images of markers and kinematics data to generate a corrected-kinematics estimated tool state using an image-derived pose offset, in accordance with embodiments.
- FIGS. 12 B and 12 C are images of surgical instruments with the marker pattern of FIG. 12 A during a minimally-invasive robotic surgery, in accordance with embodiments.
- FIGS. 14 A and 14 B respectively illustrate 2-D markers that can be used for an 8 mm instrument shaft and an 8 mm instrument shaft with the markers, in accordance with embodiments.
- FIGS. 15 A and 15 B respectively illustrate 2-D markers that can be used for a 10 mm (ultrasound) instrument shaft and a 10 mm (ultrasound) instrument shaft with the markers, in accordance with embodiments.
- FIGS. 16 A and 16 B respectively illustrate 2-D markers that can be used for a 5 mm instrument shaft and a 5 mm instrument shaft with the markers, in accordance with embodiments.
- FIGS. 17 A and 17 B respectively illustrate 2-D markers that can be used for an ultrasound transducer and an ultrasound transducer with the markers, in accordance with embodiments.
- FIG. 18 is a flow diagram of a method for processing stereoscopic images of tool tracking markers, in accordance with embodiments.
- FIG. 19 is a flow diagram of a method for processing stereoscopic images of 2-D tool tracking markers, in accordance with embodiments.
- FIGS. 20 A, 20 B, 20 C, 20 D, and 20 E illustrate steps for processing an image of a 2-D tool tracking marker, in accordance with embodiments.
- FIG. 22 B diagrammatically illustrates a surgical tool having multiple 1-D tool tracking markers, in accordance with embodiments.
- FIG. 22 C diagrammatically illustrates another 1-D tool tracking marker, in accordance with embodiments.
- FIG. 24 diagrammatically illustrates primitive features, each feature having a reflective concave spherical surface, being illuminated/imaged from three different directions, in accordance with embodiments.
- FIG. 25 diagrammatically illustrates primitive features, each feature having a reflective convex spherical surface, in accordance with embodiments.
- FIGS. 26 A and 26 B are endoscopic images of prototype surgical tools having point configuration markers with reflective spherical surfaces, in accordance with embodiments.
- FIG. 27 is a flow diagram of a tool tracking method that employs processing of stereoscopic images of a surgical tool having primitive features with reflective spherical surfaces, in accordance with embodiments.
- FIGS. 28 A and 28 B illustrate discernible tool markers, in accordance with embodiments.
- FIGS. 30 A, 30 B, 30 C, and 30 D illustrate some additional exemplary discernible marker designs, in accordance with embodiments.
- FIG. 31 is a flow diagram of a tool tracking method that employs processing of an image of a surgical tool having a discernible marker, in accordance with embodiments.
- a tool-located reference feature provides at least one feature that can be more easily detected within an image.
- Some tool use environments, such as minimally-invasive robotic surgery, present challenges to the use of image-derived tool tracking, such as the presence of bodily fluids on the tool and/or the presence of cauterization vapors, which can result in partial or total occlusion of the tool.
- a marker can include redundant features defining error-checking data.
- the error-checking data can be checked for consistency with an identification for the marker so as to validate the determined identification.
- the redundant features can include check sum data, which can be used to guard against misidentification due to occlusion (or non-imaging in general) of one or more marker features.
- the explicit error-checking mechanism provides confidence in the detection of such markers by reducing, to a very low probability, the chance of falsely detecting a marker from background clutter or from accidental alignment of nearby markers.
- FIG. 1 provides an appropriate starting point for a discussion of the present invention.
- FIG. 1 is a plan view illustration of a Minimally-Invasive Robotic Surgical (MIRS) system 10 , typically used for performing a minimally-invasive diagnostic or surgical procedure on a Patient 12 who is lying on an Operating table 14 .
- the system can include a Surgeon's Console 16 for use by a Surgeon 18 during the procedure.
- One or more Assistants 20 may also participate in the procedure.
- the MIRS system 10 can further include a Patient Side Cart 22 (surgical robot), and a Vision Cart 24 .
- the Patient Side Cart 22 can manipulate at least one removably coupled instrument or tool assembly 26 (hereinafter simply referred to as a “tool”) through a minimally invasive incision in the body of the Patient 12 while the Surgeon 18 views the surgical site through the Console 16 .
- An image of the surgical site can be obtained by an endoscope 28 , such as a stereoscopic endoscope, which can be manipulated by the Patient Side Cart 22 so as to orient the endoscope 28 .
- the Vision Cart 24 can be used to process the images of the surgical site for subsequent display to the Surgeon 18 through the Surgeon's Console 16 .
- the number of surgical tools 26 used at one time will generally depend on the diagnostic or surgical procedure and the space constraints within the operating room among other factors.
- an Assistant 20 may remove the tool 26 no longer being used at the time from the Patient Side Cart 22 , and replace it with another tool 26 from a tray 30 in the operating room.
- An illustrative example of system 10 is the da Vinci® Surgical System manufactured by Intuitive Surgical, Inc., Sunnyvale, Calif.
- FIG. 2 is a front view of the Surgeon's Console 16 .
- the Surgeon's Console 16 includes a left eye display 32 and a right eye display 34 for presenting the Surgeon 18 with a coordinated stereo view of the surgical site that enables depth perception.
- the Console 16 further includes one or more control devices 36 , which in turn cause the Patient Side Cart 22 (shown in FIG. 1 ) to manipulate one or more tools.
- control devices 36 will provide the same degrees of freedom as their associated tools 26 (shown in FIG. 1 ) so as to provide the Surgeon with telepresence, or the perception that the control devices 36 are integral with the tools 26 so that the Surgeon has a strong sense of directly controlling the tools 26 .
- position, force, and tactile feedback sensors are preferably employed to transmit position, force, and tactile sensations from the tools 26 back to the Surgeon's hands through the control devices 36 .
- the Surgeon's Console 16 is usually located in the same room as the patient so that the Surgeon may directly monitor the procedure, be physically present if necessary, and speak to an Assistant directly rather than over the telephone or other communication medium. However, it will be understood that the Surgeon can be located in a different room, a different building, or other remote location from the Patient, thus allowing for remote surgical procedures.
- FIG. 4 diagrammatically illustrates a robotic surgery system 50 (such as MIRS system 10 of FIG. 1 ), showing communication paths between components.
- Surgeon's Console 52 (such as Surgeon's Console 16 in FIG. 1 ) can be used by a Surgeon to control a Patient Side Cart (Surgical Robot) 54 (such as Patient Side Cart 22 in FIG. 1 ) during a minimally-invasive procedure.
- the Patient Side Cart 54 can use an imaging device, such as a stereoscopic endoscope, to capture images of the procedure site and output the captured images to a Vision Cart 56 (such as Vision Cart 24 in FIG. 1 ).
- a Vision Cart 56 can process the captured images in a variety of ways prior to any subsequent display.
- the Patient Side Cart 54 can output the captured images for processing outside the Vision Cart 56 .
- the Patient Side Cart 54 can output the captured images to a processor 58 , which can be used to process the captured images.
- the images can also be processed by a combination of the Vision Cart 56 and the processor 58 , which can be coupled together so as to process the captured images jointly, sequentially, and/or combinations thereof.
- One or more separate displays 60 can also be coupled with the processor 58 and/or the Vision Cart 56 for local and/or remote display of images, such as images of the procedure site, or any other related images.
- a kinematics-estimated pose 70 can deviate significantly from a true pose 74 for the surgical tool.
- a kinematics-estimated tool pose for an exemplary surgical robot may differ from a true pose for the tool by up to 10 to 15 mm on a well-calibrated system, and even more if the system has not been recently and/or accurately calibrated.
- An image-derived tool pose estimate 72 can be significantly more accurate than a raw kinematics-estimated tool pose 70 . This increased accuracy is diagrammatically illustrated in FIG. 6 by the relatively small positional difference between the image-derived tool pose 72 and the true tool pose 74 shown.
- FIG. 8 illustrates variations that can occur in the portion of a surgical instrument 84 (e.g., the tool 26 ) that may be within view of an imaging device 86 , such as the stereoscopic endoscope 28 .
- the imaging device 86 can include two overlapping fields of view 88 used to capture images of the procedure site and any surgical instrument portion within a field of view 88 .
- a greater portion of the surgical instrument 84 may be included within the captured image, but the relative size of any imaged tool feature(s) will be smaller as compared with the field of view as a whole.
- a relatively smaller portion may be included within the captured image, but the relative size of any imaged tool feature(s) will be larger as compared with the field of view as a whole.
- FIG. 9 is a flow diagram of a tool tracking method 100 employing imaging of one or more markers attached to a tool.
- a tool, such as the tool 26 , can include one or more markers so as to provide features that can be imaged and processed to provide an image-derived tool pose estimate.
- in step 102 , one or more images of the tool and marker are captured.
- the captured image(s) can be a single image obtained through the use of a mono-vision imaging device or stereo images obtained with a stereo-vision imaging device, such as a stereo endoscope.
- the captured image(s) are processed so as to determine positional data associated with one or more marker(s).
- the positional data can include the location of one or more marker features within the image(s).
- the image can be processed in step 106 to determine the identification of one or more of the markers.
- a marker can contain one or more identification features that can be imaged and subsequently processed to determine the identification of the marker.
- the positional data and any identification can be used to determine tool state data, such as the tool's 3-D pose. Additional information, such as relative positional data between a marker and the tool can be used during the determination of tool state data. For example, relative 3-D pose offset data (offset position and offset orientation) between the 3-D pose of the marker and the 3-D pose of the tool can provide the relative positional data.
- the tool state data determined in step 108 can be rejected if it is insufficiently consistent with an expected tool state data range.
- an estimated 3-D pose for the tool can be generated by using a prior image of the tool or joint data from a robotic actuation system effecting movement of the tool. This estimated 3-D pose can be compared with the tool state data determined in step 108 so as to verify that they are consistent with each other. Any inconsistency can be evaluated to determine whether to reject the determined tool state data as being an outlier.
- FIG. 10 diagrammatically illustrates a system 110 for tracking a tool with marker(s) 112 .
- the system includes at least one tool with a marker(s) 112 , similar to the tool 26 .
- An imaging device 114 such as the stereoscopic endoscope 28 , is used to capture one or more image(s) of the tool with marker(s) 112 .
- the imaging device 114 is coupled with a processor 116 and transfers image data to the processor 116 in response to imaging the tool with marker(s) 112 .
- the processor 116 is configured to process the received image data so as to generate tool state data 118 , which can include an estimated 3-D pose for the tool with marker(s) 112 .
- FIG. 11 is a flow diagram of a tool tracking method 120 for determining a tool state showing steps for processing stereo images of markers and raw-kinematics data to generate a corrected kinematics-estimated tool state using an image-derived 3-D pose offset, in accordance with an embodiment. Because of the higher update rate of the joint sensor data used to generate an estimated tool state from raw kinematics data 124 as compared to an image-derived estimated tool state, an image-derived pose offset can be combined with an estimated tool state from raw kinematics to generate a corrected kinematics estimated tool state.
- a series of corrected kinematics estimated tool states can be generated using a single pose offset combined with a corresponding series of estimated tool states from raw kinematics data 124 .
- the pose offset can be updated over time in response to new image data 122 .
- the determination of a pose offset starts in step 126 with the acquisition of image data of the tool with marker(s) and corresponding raw kinematics data 124 for the tool with marker(s).
- the image data 122 can include left image data and right image data, but it should be understood that a single image of one or more marker features can be processed so as to generate image-derived positional information useful in generating a pose offset.
- the location within an image of a single marker feature can be compared with an expected location within the image for the single marker feature so as to generate a one-dimensional (1-D) correction for the previous pose offset.
- in step 128 , the left image and the right image are processed so as to detect marker features.
- the position of the marker(s) feature(s) within the left image and the position of the marker(s) feature(s) within the right image are used in step 130 to generate 3-D coordinates for the marker(s) feature(s).
- a marker can include at least one identification feature that can be processed to determine the identification of the marker.
- the 3-D coordinates for the marker(s) features(s) can be processed in combination with any identification(s) of markers(s) so as to determine an image-derived tool state.
- Although images of a number of markers can be used to provide sufficient pose information for determining a 3-D pose for the tool, it can be advantageous for a single marker to contain a sufficient number of features for determining a 3-D pose for the tool. Additionally, it can be advantageous for each marker on a tool to have an identification that differs from neighboring markers. With such a marker, an image-derived tool state can be determined by determining the 3-D pose of the marker, determining the identification of the marker, and using data regarding how the identified marker is positioned and oriented on the tool.
- features from a combination of markers can be combined to determine the 3-D pose of the combination of markers, which can be combined with data regarding how the features from the combination of markers are positioned and oriented on the tool.
- a corrected kinematics estimated tool state (from a previously determined pose offset) can be compared against the image-derived estimated tool state so as to reject any image-derived estimated tool states that differ too much from the corrected kinematics estimated tool state.
- the pose offset is determined so that it can be combined with a raw kinematics data 124 estimated tool state to obtain a corrected-kinematics estimated tool state.
- the pose offset can be calculated as a difference between an estimate of the true tool pose (shown in FIG. 7 ) and a corresponding raw kinematics data 124 estimated tool state for substantially the same point in time.
- the pose offset can be calculated as a difference between an image-derived estimated tool state and a corresponding raw kinematics data 124 estimated tool state for substantially the same point in time.
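- Expressed with 4x4 homogeneous transforms, the offset computation and its reapplication might look like the following sketch; the variable names are illustrative, and T_image and T_kin are assumed to be poses for substantially the same instant:

```python
import numpy as np

def compute_pose_offset(T_image, T_kin):
    """Transform that maps the raw-kinematics pose onto the image-derived pose."""
    return T_image @ np.linalg.inv(T_kin)

def corrected_kinematics_pose(offset, T_kin_now):
    """Reapply the latest offset to each new, higher-rate kinematics estimate."""
    return offset @ T_kin_now
```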
- Ideally, a marker design: (i) provides sufficient constraint for tool pose estimation; (ii) is distinguishable under various realistic conditions (e.g., viewpoint, lighting) and under various realistic backgrounds; (iii) works with different operational ranges of the tool; (iv) is resilient and/or robust to partial occlusions; (v) is visually acceptable; (vi) is easily manufactured; (vii) is compact enough to allow the use of multiple markers within the space provided (e.g., enough to supply a sufficient level of redundancy); and (viii) can be extracted by an image analysis algorithm.
- One-dimensional (1-D) and two-dimensional (2-D) markers can provide a number of advantageous aspects. These include: (i) the use of separate localizer and identification features that support more efficient detection and parsing; (ii) the use of explicit coding schemes for primitive feature locations; (iii) the use of explicit error checking and error correction; (iv) the ability to create a large number of different patterns; (v) the use of a compact marker with dense information; and (vi) the use of a “hypothesize and test” detection algorithm framework, which scales very well with the total number of marker patterns.
- This differential spacing can be used to identify the specific pattern 142 in an image. Since the three patterns 142 are arranged around the tool at 120-degree intervals, there may be a sufficient differential between images of the otherwise identical patterns of the overall marker 140 , given the inherent level of accuracy of a corrected-kinematics estimated tool state, to discriminate between imaged patterns 142 .
- Marker 140 provides an example of how marker features, such as the identical patterns 142 shown, can be arranged so as to present features that can be imaged so as to determine a tool state.
- FIGS. 13 A, 13 B, and 13 C illustrate three embodiments of a 2-D marker 150 , 170 , 190 that can be used on a tool for tracking the tool's state.
- a 2-D marker includes primitive features arranged in two dimensions. Some of the features can serve as localizer features, and the other features can serve as identification features. Localizer features provide positional or orientation information to determine pose/alignment of the marker, and the identification features are used to differentiate different markers. The identification features can follow a certain coding scheme and can include redundant information for error checking and/or correction.
- By using compact 2-D markers, multiple markers can be arranged in different ways to fit the geometric shapes of different tools.
- the markers can also be arranged at different locations on the tool shaft to cope with different operational ranges.
- the markers can also be used to estimate the roll of the tool or instrument. Compared to multiple 1-D patterns stacked together, a 2-D marker pattern may advantageously provide better information compactness and locality.
- These 2-D self-discriminative markers have been designed to meet a number of considerations.
- the size of the markers has been selected to be as small as possible given the constraint of image resolution.
- These 2-D markers do not rely on a specific color, because color can be an unreliable feature due to dependence on lighting and white balance. Additionally, some colors can be visually intrusive.
- These 2-D markers were designed to include features that could be reliably detected in images, because some features are easier to detect than others.
- these 2-D markers were designed to include localizer shapes (the black circles 152 , 154 , 156 , 158 ; 172 , 174 , 176 , 178 ; the black bar 160 ; 180 ; and the saddle points 192 ) and a number of information bits or identification features (nine gray dots 162 in FIG. 13 A , thirteen gray dots 182 in FIG. 13 B , and the 16 dots 194 in FIG. 13 C ).
- the terms “black” and “gray” with reference to FIGS. 13 A and 13 B are used for convenience of reference only.
- a circle was chosen as a localizer shape because its topology (a dark blob inside a bright blob, or vice versa) is invariant to viewpoint, and it usually does not appear in the background.
- Other such features include certain corners, especially a saddle point 192 as shown in FIG. 13 C .
- Because the marker designs do not restrict how the information bits 162 , 182 , 194 (identification features) are used, they can be divided into data and error-checking bits.
- the presence or absence of the dots corresponding to data bits can be used to designate a number of unique codes (or identifications).
- the presence or absence of the gray dots corresponding to error checking bits can be used to validate a code or identification determination.
- the size of the marker patterns 150 , 170 , 190 was selected considering a desired working-distance range for minimally-invasive robotic surgery. However, it is appreciated that if the instrument usually works closer to or farther away from an imaging device, the size of the pattern could be made smaller or larger accordingly.
- Although the markers 150 and 170 shown in FIGS. 13 A and 13 B include a white background and dark features, as can be seen in subsequent figures a dark background with white features was selected based on clinical feedback on the visual experience. However, it is appreciated that a white background and dark features can also be used.
- the 3-D geometry of the pattern (the 3-D coordinates of all the circles and dots in a local coordinate system) is fixed and known. If a single image is used to provide 2-D coordinates, coordinates of four points are sufficient to determine the pose of the marker (and hence the tool). If stereo images are used to provide 3-D coordinates, coordinates of three points are sufficient to determine the pose of the instrument. Accordingly, the design of these 2-D markers 150 and 170 includes four circles, thereby providing a sufficient number for either single image or stereo image processing. The dots can also be used for object pose estimation. Also, although the markers can be placed on a tool in any number of different orientations, it is presently preferred that the markers be placed so that the vertical direction aligns with the instrument axial direction.
- the marker designs 150 and 170 of FIGS. 13 A and 13 B represent two separate design versions, with the design version of FIG. 13 B representing an improved version after experiments. Although the overall size of the pattern did not change, a number of differences exist. The number of information bits 162 and 182 (or identification features) was increased from nine to thirteen, which effectively increased the number of unique patterns. The number of columns for the information bits 162 and 182 increased from three to four, which provided for a more efficient use of limited space. Because it was observed that many typical viewing directions in robotic surgery led to more severe foreshortening of the tool image in the axial direction than in the lateral direction, the pattern of FIG. 13 B includes larger vertical spacing between the information bits 182 than horizontal spacing. The rows of the information bits 182 in the pattern of FIG. 13 B are also interleaved, which further helps alleviate foreshortening relative to a non-interleaved pattern.
- the diameter of the information bits 162 and 182 (dots) and the thickness of the circles were also reduced, which resulted from an observation that the testing vision system usually dilated bright features. Accordingly, the features were made thinner to maintain isolation.
- the information bits 162 , 182 , 194 in these 2-D patterns can be used in a variety of ways, such as using a number for identification bits and the remaining number for error checking/correction bits.
- the partition between identification bits and error checking/correction bits and their arrangement are flexible and can be determined based upon the specific application requirements. Fewer bits may be used for error checking/correction if the imaging situation is less challenging.
- the thirteen information bits of the marker of FIG. 13 B are separated into six bits used to carry identification information (resulting in 64 unique codes), with the remaining seven bits used for error checking/correction. Among the seven error checking/correction bits, six can be set to be the inverse of the identification bits, and the remaining bit can be used as checksum data.
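- A sketch of that 6 + 6 + 1 layout follows; the physical ordering of the bits on the marker is not specified in this text, so the list order below is an assumption:

```python
# Six identification bits, six inverse bits for error checking, and one
# checksum bit (modeled here as parity; the bit ordering is assumed).
def encode_marker_bits(marker_id):
    assert 0 <= marker_id < 64                   # 6 bits -> 64 unique codes
    data = [(marker_id >> i) & 1 for i in range(6)]
    inverse = [1 - b for b in data]              # redundancy for error checking
    checksum = sum(data) % 2                     # one checksum (parity) bit
    return data + inverse + [checksum]           # 13 information bits total

assert len(encode_marker_bits(44)) == 13
```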
- FIGS. 14 A, 14 B, 15 A, 15 B, 16 A, 16 B, 17 A, and 17 B show four different multiple marker patterns by themselves and as applied to specific robotic tool instruments.
- FIGS. 14 A and 14 B respectively illustrate 2-D markers that can be used for an 8 mm (diameter, same convention for other instruments) instrument shaft and an 8 mm instrument shaft with the markers.
- FIGS. 15 A and 15 B respectively illustrate 2-D markers that can be used for a 10 mm instrument shaft and a 10 mm instrument shaft with the markers.
- FIGS. 16 A and 16 B respectively illustrate 2-D markers that can be used for a 5 mm instrument shaft and a 5 mm instrument shaft with the markers.
- FIGS. 17A and 17B respectively illustrate 2-D markers that can be used for an ultrasound transducer and an ultrasound transducer with the markers.
- multiple rows of patterns can be shifted by half a pattern to help ensure that some pattern is fully visible at any angle.
- a variety of approaches can be used to extract marker features from images and process the extracted information to determine image-derived tool pose estimates.
- possible approaches can include a top-down approach, a bottom-up approach, and combined top-down/bottom-up approach.
- in a top-down approach, 2-D images can be rendered from a 3-D model of the instrument at a given pose, and the rendered images can be compared with the real input images to evaluate how well they match. The pose that gives the best matching score is taken as the solution.
- a bottom-up approach tries to find salient local features in the image and then compute the solution from them.
- a bottom-up approach can apply to scenarios where salient local features can be extracted and grouped easily, often under some assumptions or using some heuristics. Since local features are more likely to have ambiguity, markers or background color can be added to ensure the robustness of the method.
- a bottom-up approach is generally more computationally efficient than a top-down approach, since the features can be computed locally and the approach does not involve search or iterative optimization.
- a combined top-down/bottom-up approach can be used that integrates the advantages of both of the above two classes of methods.
- a bottom-up approach can be used to report a finite number of hypotheses, which are then tested and verified using a top-down method. This type of method has sometimes been called “hypothesize and test.”
- FIG. 18 is a flow diagram of a method 200 for processing stereoscopic endoscope images of tool tracking markers.
- left image data 202 and right image data 204 are processed to extract primitive image features.
- Primitive image features refer to visually salient features that can be detected locally, such as blobs and corners.
- a blob is a small patch with sufficient contrast with respect to its surroundings.
- a corner is the intersection of two edges.
- a Maximally Stable Extremal Region (MSER) approach provides an excellent way to detect blobs at an affordable computational cost.
- MSER is based on a very minimal assumption of boundary contrast and is therefore able to detect salient regions (blobs) of any size and any shape.
- An alternative feature (blob) detector approach is to use adaptive thresholding plus connected component analysis.
- the threshold used for binarization is computed adaptively according to the mean grey value of its neighborhood.
- the kernel convolution to compute the mean at each pixel can be implemented using an integral image, which yields a fast mean over a rectangular window.
- a limitation of adaptive thresholding is that it works at a fixed scale. For multiple scales, it has to be run multiple times at different scales. One may also consider running adaptive thresholding and connected component analysis in a pyramid fashion.
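A hedged sketch of this adaptive-thresholding blob detector, assuming OpenCV; the block size, offset, and blob-area gate below are illustrative choices:

```python
import cv2
import numpy as np

def detect_blobs(gray: np.ndarray, block_size: int = 31, offset: int = -5):
    """Adaptive thresholding followed by connected component analysis.
    block_size sets the neighborhood over which the local mean is taken
    (a box filter, which OpenCV computes efficiently); with offset = -5
    the threshold is (local mean + 5), selecting locally bright pixels."""
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY, block_size, offset)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    blobs = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if 5 < area < 500:  # illustrative size gate for marker-scale blobs
            blobs.append({"centroid": tuple(centroids[i]),
                          "bbox": (int(x), int(y), int(w), int(h)),
                          "area": int(area)})
    return blobs
```

For multiple scales, the same routine could be run on successively downsampled copies of the image, in the pyramid fashion noted above.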
- a learning-based approach is also available for dot detection that considers the fine appearance of the dot to disambiguate it from background dots (see D. Claus and A. W. Fitzgibbon, "Reliable fiducial detection in natural scenes," in Proc. European Conf. on Computer Vision, 2004). This approach could be used for marker patterns more complex than dots.
- the output from a blob detector is a list of blobs from the image. It can be much faster to analyze these blobs than all the image pixels.
- We detected circles by a simple heuristic: the centroid of a bright blob is inside the bounding box of a dark blob, and the bounding box of the dark blob is fully contained by the bounding box of the bright blob. There may be better ways to detect bars and circles (e.g., by analyzing their higher-order moments). Since our overall method is tolerant of errors in the lower-level processing, we have found these methods to be sufficient.
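A sketch of this containment heuristic, assuming blob records shaped like those returned by the detector sketched above (a centroid plus an x, y, width, height bounding box):

```python
def bbox_contains(outer, inner):
    """True if the inner bounding box lies fully within the outer one."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def point_in_bbox(pt, bbox):
    x, y, w, h = bbox
    return x <= pt[0] <= x + w and y <= pt[1] <= y + h

def detect_circles(bright_blobs, dark_blobs):
    """A circle appears as a bright ring blob whose bounding box fully
    contains a dark blob (the circle's interior), with the bright blob's
    centroid falling inside the dark blob's bounding box."""
    circles = []
    for b in bright_blobs:
        for d in dark_blobs:
            if (point_in_bbox(b["centroid"], d["bbox"])
                    and bbox_contains(b["bbox"], d["bbox"])):
                circles.append((b, d))
    return circles
```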
- FIG. 19 is a flow diagram of a method 220 that can be used for processing stereo images of tool tracking markers (embodiments of which are shown in FIGS. 13 A and 13 B ).
- Method 220 follows a “hypothesize and test” framework.
- the left image data 222 and the right image data 224 can be processed to extract primitive features, which can be accomplished using a variety of methods, such as one of the methods described above.
- in step 228, some of the extracted primitive features are processed so as to generate one or more localizer hypotheses (for one or more markers) by identifying one or more primitive features that exhibit characteristics of one or more marker localizer features.
- a localizer hypothesis is a tentative assumption that one or more extracted primitive features correspond to one or more localizer features in a marker.
- One or more localizer features can be used to determine the position and at least a partial orientation of the marker. For example, in the 2-D markers of FIGS. 13A and 13B, the four circles and the bar can be used as localizer features. With these 2-D markers, the extracted primitive features (or the image data in general) can be processed to look for two circles (designated in FIGS. 13A and 13B as “0” and “1”) with the bar between them. From such a pair of circles and the bar, a partial orientation of the pattern can be determined (i.e., about a line in the image). It is appreciated that a range of different marker patterns can be used and that various combinations of any of the features within a marker pattern can be used as one or more localizer features.
- in step 230, the extracted primitive features are processed so as to generate one or more full pattern hypotheses.
- a full pattern hypothesis is a tentative assumption that multiple primitive features correspond to one or more marker features that can be used to determine the basic position and orientation of the marker pattern within the image, which can be skewed or foreshortened as determined by the 3-D pose of the marker relative to the imaging device.
- the localizer hypothesis (the identified circles “0” and “1” with the bar in between) can be used as a starting point to search for the remaining localizer circles (designated in FIGS. 13 A and 13 B as “2” and “3”).
- the search can look for all the compatible localizer “2” and “3” features within a search area defined by a minimum and a maximum pattern skew, and a minimum and a maximum pattern aspect ratio.
- the “2” and “3” circles do not have a bar between them, which can be used to aid in their identification.
- the combination of the localizer hypothesis and the identified “2” and “3” localizer circles can be used to generate a full pattern hypothesis.
- the full pattern hypothesis can also be checked to verify that its perspective distortion is less than a maximum value, which enforces the consistency of the skew and aspect ratio.
- in step 232, one or more of the generated full pattern hypotheses are verified by processing the image features so as to identify the marker.
- the generation of a full pattern hypothesis provides information regarding the position and orientation of a marker pattern within the image. This information can be used to orient or align candidate marker patterns with the imaged pattern.
- the imaged patterns and the aligned candidate marker patterns can then be checked for consistency. Where consistency exists, the imaged marker pattern can be identified as the candidate marker pattern. For example, with the 2-D marker patterns of FIGS. 13A and 13B, the locations of detected 2-D blobs within a full pattern hypothesis can be compared with the locations of information bits set to “1” (i.e., physically present) of a candidate marker pattern model that has been aligned with the full pattern hypothesis.
- the alignment of a candidate marker pattern with a marker image can also be accomplished by homography.
- Four 2-D point correspondences define a plane perspective transformation (i.e., a homography), which captures all possible transformations of a plane under perspective projection.
- a plane approximation can be useful for a wide range of viewpoints.
- This approach involves an approximation that the marker features reside on a plane, which provides a simplified process for aligning a candidate marker pattern with a marker image.
- the image locations for the dots can be based on the image locations of the four circles by assuming the pattern is attached to a plane through a plane perspective transformation (see R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press).
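A sketch of this plane-perspective alignment using OpenCV's four-point homography routines; the model-plane coordinates and detected circle locations below are illustrative assumptions:

```python
import numpy as np
import cv2

# Model-plane coordinates of the four localizer circles and of some
# information dots in a candidate marker pattern (illustrative units).
model_circles = np.float32([[0, 0], [10, 0], [0, 14], [10, 14]])
model_dots = np.float32([[3, 5], [7, 5], [3, 9]]).reshape(-1, 1, 2)

# Detected image locations (pixels) of the four circles.
image_circles = np.float32([[412, 300], [468, 305], [409, 382], [463, 388]])

# Four point correspondences determine the plane perspective transformation.
H = cv2.getPerspectiveTransform(model_circles, image_circles)

# Predicted image locations of the dots under the plane assumption; each
# prediction can then be checked against the detected blobs to verify the
# full pattern hypothesis.
predicted_dots = cv2.perspectiveTransform(model_dots, H).reshape(-1, 2)
print(predicted_dots)
```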
- Marker design is closely related to how marker features are detected from images.
- the design of the marker embodiments disclosed herein and the feature detection methods disclosed herein have co-evolved for better overall system performance. For example, with respect to the 2-D marker patterns of FIGS. 13A and 13B, if the bar between the localizer circles “0” and “1” did not exist, the specific details of the detection method would likely need to be modified. However, it is appreciated that a wide variety of marker patterns and corresponding marker feature detection methods can be practiced and still be within the spirit and scope of the present invention.
- FIGS. 20 A, 20 B, 20 C, 20 D, and 20 E illustrate the method of FIG. 19 as applied to the 2-D marker of FIG. 13 B .
- the bright circles in the image are detected (as shown by the cross-hair annotations).
- two localizer hypotheses are formed using adjacent bright circles that have aligned bars.
- a full pattern hypothesis is formed by identifying the designated bright circles by searching relative to the associated localizer hypothesis.
- a candidate marker pattern is aligned with the image full pattern hypothesis and the location of candidate marker pattern features relative to the image are determined. The determined locations are used to check the image to see if corresponding detected features are present.
- FIG. 22 C illustrates a modified version of the 1-D marker 240 of FIG. 22 A .
- Marker 260 incorporates circular features 262 that can be used as localizer features similar to the localizer features of the 2-D markers of FIGS. 13 A and 13 B described above.
- the use of the circle features 262 may help to reduce the length of the pattern, thereby providing a better close range pattern.
- Dots 264 can be used for marker identification and error checking/correction data.
- FIG. 22 D illustrates an alternative version of the 1-D marker 260 of FIG. 22 C .
- Marker 265 incorporates a combination of circular features 266 and bar features 268 .
- Circular features 266 can be used as localizer features and bar features 268 can be used for marker identification and error checking/correction data.
- a difference between dots 264 of marker 260 and bar features 268 of marker 265 is that with marker 265 information is coded by the positions of the transitions between dark and bright regions, whereas marker 260 uses the positions of the centers of the dots to carry information.
- FIG. 23 is a flow diagram of a method 270 that can be used for processing stereo images of one or more 1-D tool tracking markers (embodiments of which are shown in FIGS. 22 A, 22 B, and 22 C ).
- left image data 272 and right image data 274 can be processed to extract 2-D blobs (i.e., features), which can be accomplished using approaches as described above with reference to the extraction of 2-D blobs from 2-D markers (see FIG. 19 and related discussion).
- the extracted blobs are grouped into lines. Line grouping can be performed using a Random Sample Consensus (RANSAC) approach by extracting multiple straight lines from all the detected feature points. (For details of Random Sample Consensus, refer to M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Comm. of the ACM, 24: 381-395, 1981, which is hereby incorporated by reference.) More discriminative features against background clutter, such as the circles in the marker of FIG. 22C, can also be used to form hypotheses.
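A minimal RANSAC line-grouping sketch in Python; the iteration count and inlier tolerance are illustrative:

```python
import numpy as np

def ransac_line(points: np.ndarray, n_iter: int = 200, tol: float = 2.0):
    """Fit one line to 2-D points with RANSAC: repeatedly sample two points,
    form the line through them, and keep the line with the most inliers."""
    best_inliers = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue                              # degenerate sample
        n = np.array([-d[1], d[0]]) / norm        # unit normal to the line
        dist = np.abs((points - p) @ n)           # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Multiple lines can be extracted by repeatedly fitting, removing the inliers, and fitting again on the remaining points.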
- one or more lines are rectified.
- Line rectification refers to removing the perspective effect on the line to restore the metric relationship of the information bits (e.g., dots).
- the vanishing point of the lines parallel to the shaft is sufficient to rectify the line.
- There are a number of ways to obtain the location of the vanishing point. For example, if there is more than one visible linear marker on the shaft, the vanishing point is the intersection of these lines, as sketched below.
- images of points with equal or known spaces can be used to compute the vanishing point. (See, for example, FIG. 22 C for examples of linear markers having equally spaced circles.)
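In homogeneous coordinates, the intersection of two imaged marker lines (the first method above) reduces to a pair of cross products; a minimal sketch:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2-D points (projective cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two homogeneous lines; for two imaged markers
    parallel to the shaft, this is the axial vanishing point."""
    v = np.cross(line1, line2)
    return v[:2] / v[2]   # assumes the lines are not imaged as parallel
```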
- one or more markers are identified. Marker identification can involve locating the start and end patterns and then reading the data bits to identify the pattern. It is appreciated that the coding scheme can be designed so as to encode sufficient redundancy for error checking. Where some data bits have been used for error checking, the error checking bits can be read to validate the identification. As discussed above, the error checking data bits can include at least one data bit used as checksum data.
- When a stereo camera is used, once a marker (1-D or 2-D) has been identified, the 3-D reconstruction of step 282 becomes a simple step. The correspondences between the imaged features in the left and right images are known at this stage, and only triangulation is needed. The resulting 3-D marker feature locations can then be used, in combination with the known relative spatial arrangement of the marker features on the tool, to determine a 3-D pose for the tool.
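A sketch of this triangulation step using OpenCV's linear triangulation; the projection matrices below assume a rectified stereo pair with illustrative intrinsics and baseline:

```python
import numpy as np
import cv2

# Assumed 3x4 projection matrices for the calibrated left and right
# endoscope cameras (illustrative: rectified pair, 5 mm baseline).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])

# Matched image locations of identified marker features (2xN, pixels).
pts_left = np.array([[400.0, 450.0], [240.0, 260.0]])
pts_right = np.array([[390.0, 438.0], [240.0, 260.0]])

# Linear triangulation; convert from homogeneous to 3-D coordinates.
X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).T   # N x 3 marker feature locations, camera frame
print(X)
```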
- An individual primitive feature is usually not sufficient to serve as a marker because it may not be unique and does not provide enough geometric constraints to determine object pose.
- a number of primitive features can be used to form a pattern having a unique configuration in 3-D space, which is herein referred to as a “configuration marker.”
- Three non-collinear features extracted from stereo images provide sufficient information to determine the pose of the tool. However, having more features than the minimum requirement can be beneficial in gaining more confidence in detection and better accuracy in pose determination.
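Given three or more such matched 3-D points, the pose follows from a least-squares rigid alignment; a sketch using the standard SVD-based (Kabsch) solution, with all names illustrative:

```python
import numpy as np

def rigid_transform(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Least-squares rotation R and translation t such that
    observed ≈ R @ model + t, given N >= 3 non-collinear matches."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t
```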
- the shape or appearance of the primitive features can be identical (e.g., circular disks of the same size), can include a few variations, or can be unique. As such, a wide variety of primitive features can be used, such as circles, dots, bars, corners, etc. Where the primitive features used include some level of variation, the resulting differences in appearance can be used to help match image locations for particular features between two stereoscopic images (i.e., using feature signatures during feature matching) and to help match the images with the model (i.e., using feature signatures invariant or less sensitive to viewpoint and lighting changes).
- a reflective spherical surface has the nice property that it appears as a bright spot irrespective of viewpoint as long as a light source and an imaging device are aligned along a common direction, as is typically the case with endoscopic imaging during minimally-invasive robotic surgery.
- the center of the bright spot also coincides with the projection of the center of the spherical surface.
- a reflective spherical surface can be either concave or convex. In most cases, a reflective spherical surface may produce a bright spot with sufficient contrast with respect to its background to allow detection in an image for a variety of viewpoints and distances.
- FIG. 24 illustrates a primitive feature 300 that includes a concave spherical surface and that is being illuminated/imaged from three directions.
- the spherical surface 302 of the marker 300 has a center point 304 through which illumination light that is reflected directly back towards the imaging device travels. Illumination light that does not travel substantially through the center point is reflected away from the illumination/imaging direction.
- Some natural features on a tool may also appear as salient visual features in captured images. These natural features may provide additional image-derived information regarding the 3-D pose of a tool. Examples of such natural features for an exemplary surgical tool can include the end of a bolt having an approximately spherical surface, and the end of a hinge of an articulated instrument having a reflective concave spherical surface. Such natural features may form stable bright blobs in images similar to those of artificial markers. However, for many tools, such natural features by themselves may not provide a sufficient number of features to form patterns distinctive enough to be extracted against a cluttered background. By introducing artificial primitive features in conjunction with such natural features, sufficient distinctiveness can be achieved. The use of existing natural features helps reduce the number of artificial features added and therefore reduces the changes (such as appearance) to the mechanical device to be tracked.
- FIG. 25 illustrates a primitive feature 320 that includes a convex spherical surface 322 and that is being illuminated/imaged from three directions. Similar to the primitive feature of FIG. 24 , the spherical surface 322 has a center point 324 through which illumination light that is reflected directly back towards the imaging device travels. Illumination light that does not travel substantially through the center point is reflected away from the illumination/imaging direction.
- Reflective convex spherical surfaces may be more suitable for surgical applications than concave reflective spherical surfaces in that blood (or any fluid or substance in general) may be more easily trapped in concave recesses, which may cause a concave primitive feature to lose its contrast with adjacent areas of the tool, or become darker than adjacent areas depending on the amount of blood trapped.
- a reflective convex spherical surface is less likely to trap blood.
- the interaction of the reflective convex spherical surface and tissue may help keep the surface clean, which may help it to produce a bright spot even in a heavy blood field.
- FIGS. 26A and 26B show surgical tools having primitive features with reflective spherical surfaces. These surgical tools are designed for use without any special illumination; instead, they are used with the existing stereo imaging system a surgeon uses to view the procedure site in an exemplary robotic surgery system. This use is in contrast with existing systems that employ controlled active infra-red (IR) illumination, which ensures that only the marker points are bright in the view and thereby significantly simplifies the related image processing and estimation.
- the use of an existing stereo imaging system avoids the added system complexity associated with controlled active IR illumination.
- although these surgical tools have primitive features placed on their distal clevises, it is appreciated that primitive features can be placed at other locations, such as on the instrument shaft and/or the proximal clevis. It may be advantageous to select locations that are not prone to reflective image saturation.
- FIG. 27 is a flow diagram of a tool tracking method 340 that employs processing of stereo images of a tool having a configuration marker.
- the method makes use of the geometric invariance between the primitive features in 3-D space, therefore stereo matching/3-D reconstruction is performed first.
- left image data 342 and right image data 344 can be separately processed so as to extract primitive features that exhibit a qualifying amount of contrast relative to adjacent areas (i.e., bright spots).
- the extracted primitive image features are processed so as to identify “image signatures” that are consistent with the primitive features used.
- “Signatures” can be extracted for every primitive image feature. Where the primitive features used are identical in shape, their image signatures may be substantially similar. Where the primitive features used have shape or appearance variations, the resulting differences in appearance can be used to help associate a particular primitive feature with a particular primitive image feature, such as a bright spot.
- a primitive image feature signature can be extracted from the primitive image feature (i.e., image patch) around the feature point.
- a simple feature signature approach is to use the extracted primitive image feature (image patch) itself as used in traditional stereo.
- a more sophisticated signature, such as a Histogram of Gradient (HOG), can also be used.
- model-based signatures may be used in step 354.
- Matching feature signatures between the image and the model is expected to be more difficult than matching feature signatures between the left and right stereo images, since the stereo images share similar viewpoints and illumination and are related by an epipolar constraint.
- the features may need to be invariant to viewpoint and lighting conditions. If identical primitive features are used, it may be more difficult to match against a model.
- primitive features can be designed to have shapes (and resulting appearances) that are easy to match under large viewpoint variations.
- One approach is to rely on topological properties that are invariant to viewpoint change.
- An example is a circle, such as described above with reference to 1- and 2-D markers.
- a primitive feature can use multiple bright dots inside a dark dot. Even if not all of the dots are matched with a model, or even if the matches are not unique, partial matching can be useful in feature grouping.
- the matched features are used to perform 3-D feature grouping so that the correspondence of the observed features with features in the model is established (i.e., to get identified marker points in 3-D 358 ).
- the process uses 3-D positions of the features and optionally their matching score with the model primitive features and/or optionally prior knowledge on the instrument pose.
- Step 352 can be performed by a “Constellation algorithm.”
- the Constellation algorithm performed is an efficient Bayesian approach for 3-D grouping based on geometric constraint, appearance constraint, and other prior pose information on the object pose (i.e., prior object pose data 356 ).
- the use of appearance constraint is an option if the geometric constraint is insufficient.
- the output of the Constellation algorithm is the label for each observed feature, taking values from one of the model primitive features or background clutter. Random Sample Consensus (RANSAC) is used at the end to enforce the rigidity constraint.
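A sketch of that final RANSAC rigidity check, reusing the rigid_transform sketch given earlier; the sample size, iteration count, and tolerance are illustrative:

```python
import numpy as np

def ransac_rigidity(model_pts, observed_pts, n_iter=500, tol=1.5):
    """Enforce the rigidity constraint on candidate feature labelings:
    sample minimal sets of 3 matches, fit a rigid transform (via the
    rigid_transform sketch above), and keep the transform that explains
    the most matches within tol (mm)."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(model_pts), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(model_pts), size=3, replace=False)
        R, t = rigid_transform(model_pts[idx], observed_pts[idx])
        err = np.linalg.norm(observed_pts - (model_pts @ R.T + t), axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best   # labelings consistent with a single rigid pose
```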
- a discernible marker that includes text and/or one or more symbols can be used for tool tracking.
- Such a discernible marker can include a wide range of text and symbols.
- a discernible marker can include a company name, a company trademark symbol, a product name, a product trademark symbol, a component name, and/or a user name.
- a discernible marker can use a variety of colors set on a variety of backgrounds.
- text and/or symbols may be light colored (such as white) set against a dark background (such as black), and vice-versa.
- FIGS. 28 A and 28 B illustrate some exemplary discernible tool markers. It can be advantageous to use a discernible marker that is familiar to a human user. Familiar information tends to blend well with the scene and may cause less distraction to users compared to other markers with similar information content.
- FIGS. 29 A, 29 B, 29 C, 29 D, 29 E, 29 F, 29 G, and 29 H illustrate some exemplary approaches that can be used to incorporate positional and/or identification information within a discernible marker.
- FIG. 29 A illustrates variations in local combinations of rectangles that can be used at text corner locations.
- Three exemplary corner types are shown, specifically corner type 1 360 , corner type 2 362 , and corner type 3 364 . Although three are shown, additional corner types can be formulated using four adjacent grid squares. Additionally, other combinations of grid squares can be used to formulate patterns that can be imaged and processed so as to be identified (e.g., a 3 by 3 pattern, a 3 by 2 pattern, etc.).
- FIGS. 29B and 29C illustrate discernible text constructed using rectangular features selected from a 2-D array (i.e., a checkerboard array) of rectangular primitives.
- FIG. 29 D illustrates how a discernible text marker can be configured to have more corner features while still being readable.
- FIGS. 29E and 29F illustrate how a variation in the amount of overlap between adjacent rectangles can be used to change the appearance of the resulting text (FIG. 29E having no overlap and FIG. 29F having a slight overlap, which makes the saddle points look like cross points). Such an overlap may help compensate where an imaging system dilates the white area(s).
- FIGS. 29G and 29H illustrate discernible text markers having features at multiple scales. Unlike the marker shown in FIG. 29G, the marker shown in FIG. 29H does not include a second level that is readily discernible by a human viewer, which may be advantageous in certain situations.
- FIGS. 30 A, 30 B, 30 C, and 30 D illustrate some additional exemplary discernible marker designs.
- Marker 370 is similar to marker 190 (shown in FIG. 13 C ), but information dots 194 are replaced with discernible letters 372 .
- Marker 380 is similar to marker 370 , but has been extended to multiple rows.
- Marker 390 is an example where the text background 392 differs from its surroundings 394 so that the rectangular structure of the text background can be used to provide alignment.
- Marker 400 is similar to marker 390 , but includes four corners 402 having saddle points, which are more distinctive relative to surroundings.
- Marker 410 illustrates the use of a portion of text as a localizer, such as the letter “V” 412 shown, and the rest of the text for identification and/or verification (i.e., error checking/correction).
- the part(s) chosen for a localizer(s) can be enlarged or modified with more visual features to ensure they can be detected reliably from images.
- Marker 420 illustrates the use of added localizer features, such as circles 422 , that are blended with the text.
- FIG. 31 is a flow diagram of a tool tracking method 430 that employs processing of an image of a tool having a discernible marker.
- the method 430 produces matched feature points that can be used to estimate a 3-D pose for a tool using above described methods.
- in step 434 (feature detection), feature points (e.g., corners) that are stable across viewpoint and lighting variations are detected in the input image. A discernible marker can be configured to boost the number of such stable features, such as by using a rectangular font or by including zigzagged strokes (e.g., see FIG. 29D).
- a variety of approaches can be used for feature detection.
- One such approach is to use a corner detector. (See C. Harris and M. Stephens (1988), "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, pages 147-151.)
- Another approach is to locate distinctive image features from scale-invariant keypoints. (See D. Lowe (2004), “Distinctive Image Features from Scale-Invariant Keypoints,” in International Journal of Computer Vision, 2004.)
- a description of the neighborhood around a feature point(s) is determined.
- a variety of approaches can be used for feature description.
- One such approach is to use adaptive thresholding to convert a gray-scale image to a binary image and then use Shape Context as the descriptor.
- Another approach is to use Histogram of Orientation as the descriptor on a gray scale image. (see D. Lowe (2004), “Distinctive Image Features from Scale-Invariant Keypoints,” in International Journal of Computer Vision, 2004, which is hereby incorporated by reference.)
- in step 438 (feature matching), individual feature points are matched against feature points from images of models, using the model features with descriptions data 450.
- the model features with descriptions data 450 can be formulated off-line (in block 442) by processing model image data 444 so as to detect model features (step 446) and generate their descriptions (step 448), which can be accomplished using the approaches described above.
- a number of model images from various viewpoints can be used to facilitate the matching of markers viewed at different viewpoints.
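A hedged sketch of the detection, description, and matching steps against one stored model view, using OpenCV's SIFT implementation of the scale-invariant keypoints cited above; the ratio-test threshold is illustrative:

```python
import cv2

def match_marker_features(image, model_image, ratio=0.75):
    """Detect scale-invariant keypoints, describe them, and match the
    input image against a stored model view with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp_img, des_img = sift.detectAndCompute(image, None)
    kp_mod, des_mod = sift.detectAndCompute(model_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_img, des_mod, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    # Return matched (image point, model point) pairs for grouping.
    return [(kp_img[m.queryIdx].pt, kp_mod[m.trainIdx].pt) for m in good]
```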
- in step 440 (feature grouping), the matched features are grouped so as to enforce geometric constraints among the matched points.
- Pose estimation and robust estimation can be used during the grouping of the feature points and can provide for outlier rejection of inconsistent feature points.
- the resulting matched feature points data 452 can be used for tool state estimation using above-described methods.
- Pose data from multiple time instances can be used in the determination of an object's pose. For example, different video frames over time can provide extra constraints on the pose of an object, such as a minimally invasive surgical instrument, that can be used to help reject outliers which are not consistent with those constraints.
- Kinematic constraints can also be used in the determination of an object's pose.
- the surgical instruments are inserted into the patient body through insertion points on the body wall. These insertion points are fixed and surgical tools are constrained to pass through these points.
- Such an insertion point constraint implies that the surgical tool's axes at different times intersect at a common point. Accordingly, a tool pose whose axis does not pass through the insertion point can be classified as an outlier and discarded by using a robust estimation technique, such as RANSAC.
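A minimal sketch of this insertion-point test, with an illustrative tolerance; poses failing the test would be rejected as outliers:

```python
import numpy as np

def passes_through_port(axis_point, axis_dir, insertion_point, tol=3.0):
    """Check the insertion-point constraint: the perpendicular distance
    from the fixed insertion point to the tool-axis line must be small
    (tol in mm) for the pose to be accepted."""
    d = np.asarray(axis_dir, dtype=float)
    d /= np.linalg.norm(d)
    v = np.asarray(insertion_point, float) - np.asarray(axis_point, float)
    dist = np.linalg.norm(v - (v @ d) * d)   # point-to-line distance
    return dist < tol
```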
- Pose data for multiple tools for multiple time instances can be used to identify a tool in an image of two or more tools. For example, when two or more tools in an image have identical markers, an image-derived pose for one of the tools can be compared with an estimated pose for that tool.
- the estimated pose can be generated by using at least one prior tool state from a prior image of the tool or joint data from a robotic actuation system effectuating movement of the tool. Where the image-derived tool pose is within a predetermined deviation of the estimated pose, the identity of the tool can be confirmed.
Abstract
Description
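One plausible form of the pair-wise distance compatibility function used by the Constellation algorithm, sketched here under the assumption of a Gaussian model on the discrepancy between the measured inter-feature distance $d_{ij}$ and the model distance $D_{o_i o_j}$ for hypothesized labels $o_i, o_j$ (a sketch consistent with the definitions that follow, not necessarily the exact formula), is:

$$
\psi_{i,j}(o_i, o_j) =
\begin{cases}
\exp\!\left(-\dfrac{\bigl(d_{ij} - D_{o_i o_j}\bigr)^{2}}{2\sigma^{2}}\right), & d_{ij} \leq \epsilon, \\
\alpha, & \text{otherwise,}
\end{cases}
$$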
where $\psi_{i,j}(o_i, o_j)$ is the pair-wise distance compatibility function within each pattern, $\epsilon$ is a neighborhood radius defined by the maximum pattern spread in the model, $\sigma$ is the measurement noise of the distance between nodes, and $\alpha$ is a background likelihood, which should be lower than the likelihood of a true match.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/942,777 US12251170B2 (en) | 2008-12-31 | 2022-09-12 | Configuration marker design and detection for instrument tracking |
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US20397508P | 2008-12-31 | 2008-12-31 | |
| US12/428,691 US9867669B2 (en) | 2008-12-31 | 2009-04-23 | Configuration marker design and detection for instrument tracking |
| US15/699,858 US10675098B2 (en) | 2008-12-31 | 2017-09-08 | Configuration marker design and detection for instrument tracking |
| US16/859,755 US11471221B2 (en) | 2008-12-31 | 2020-04-27 | Configuration marker design and detection for instrument tracking |
| US17/942,777 US12251170B2 (en) | 2008-12-31 | 2022-09-12 | Configuration marker design and detection for instrument tracking |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/859,755 Continuation US11471221B2 (en) | 2008-12-31 | 2020-04-27 | Configuration marker design and detection for instrument tracking |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20230000568A1 US20230000568A1 (en) | 2023-01-05 |
| US12251170B2 true US12251170B2 (en) | 2025-03-18 |
Family
ID=42285854
Family Applications (4)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/428,691 Active 2031-09-12 US9867669B2 (en) | 2005-05-16 | 2009-04-23 | Configuration marker design and detection for instrument tracking |
| US15/699,858 Active 2029-10-26 US10675098B2 (en) | 2008-12-31 | 2017-09-08 | Configuration marker design and detection for instrument tracking |
| US16/859,755 Active 2030-01-11 US11471221B2 (en) | 2008-12-31 | 2020-04-27 | Configuration marker design and detection for instrument tracking |
| US17/942,777 Active US12251170B2 (en) | 2008-12-31 | 2022-09-12 | Configuration marker design and detection for instrument tracking |
Family Applications Before (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/428,691 Active 2031-09-12 US9867669B2 (en) | 2005-05-16 | 2009-04-23 | Configuration marker design and detection for instrument tracking |
| US15/699,858 Active 2029-10-26 US10675098B2 (en) | 2008-12-31 | 2017-09-08 | Configuration marker design and detection for instrument tracking |
| US16/859,755 Active 2030-01-11 US11471221B2 (en) | 2008-12-31 | 2020-04-27 | Configuration marker design and detection for instrument tracking |
Country Status (5)
| Country | Link |
|---|---|
| US (4) | US9867669B2 (en) |
| EP (1) | EP2391289B1 (en) |
| KR (1) | KR101709277B1 (en) |
| CN (1) | CN102341054B (en) |
| WO (1) | WO2010078016A1 (en) |
| US20070265527A1 (en) | 2006-05-11 | 2007-11-15 | Richard Wohlgemuth | Medical position determination using redundant position detection means and priority weighting for the position detection means |
| US20070288124A1 (en) | 2004-08-25 | 2007-12-13 | Kabushiki Kaisha Yaskawa Denki | Evaluating System And Evaluating Method Of Robot |
| US20080077158A1 (en) * | 2006-06-16 | 2008-03-27 | Hani Haider | Method and Apparatus for Computer Aided Surgery |
| US20080132909A1 (en) | 2006-12-01 | 2008-06-05 | Medtronic Navigation, Inc. | Portable electromagnetic navigation system |
| US20080140087A1 (en) * | 2006-05-17 | 2008-06-12 | Hansen Medical Inc. | Robotic instrument system |
| US20080172119A1 (en) | 2007-01-12 | 2008-07-17 | Medtronic Vascular, Inc. | Prosthesis Deployment Apparatus and Methods |
| US20080240551A1 (en) | 2007-03-30 | 2008-10-02 | Microsoft Corporation | Local bi-gram model for object recognition |
| US20080262345A1 (en) | 2003-07-21 | 2008-10-23 | The Johns Hopkins University | Image registration of multiple medical imaging modalities using a multiple degree-of-freedom-encoded fiducial device |
| US20080285724A1 (en) | 2007-05-05 | 2008-11-20 | Ziehm Imaging Gmbh | X-ray diagnostic imaging system with a plurality of coded markers |
| US7747311B2 (en) | 2002-03-06 | 2010-06-29 | Mako Surgical Corp. | System and method for interactive haptic positioning of a medical device |
| US20100168562A1 (en) | 2008-12-31 | 2010-07-01 | Intuitive Surgical, Inc. | Fiducial marker design and detection for locating surgical instrument in images |
| US7797032B2 (en) | 1999-10-28 | 2010-09-14 | Medtronic Navigation, Inc. | Method and system for navigating a catheter probe in the presence of field-influencing objects |
| US7831292B2 (en) | 2002-03-06 | 2010-11-09 | Mako Surgical Corp. | Guidance system and method for surgical procedures with improved feedback |
| US9867669B2 (en) | 2008-12-31 | 2018-01-16 | Intuitive Surgical Operations, Inc. | Configuration marker design and detection for instrument tracking |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6720988B1 (en) | 1998-12-08 | 2004-04-13 | Intuitive Surgical, Inc. | Stereo imaging system and method for use in telerobotic systems |
- 2009
  - 2009-04-23 US US12/428,691 patent/US9867669B2/en active Active
  - 2009-12-17 EP EP09775495.6A patent/EP2391289B1/en not_active Not-in-force
  - 2009-12-17 WO PCT/US2009/068423 patent/WO2010078016A1/en not_active Ceased
  - 2009-12-17 KR KR1020117017601A patent/KR101709277B1/en not_active Expired - Fee Related
  - 2009-12-17 CN CN200980157767.2A patent/CN102341054B/en not_active Expired - Fee Related
- 2017
  - 2017-09-08 US US15/699,858 patent/US10675098B2/en active Active
- 2020
  - 2020-04-27 US US16/859,755 patent/US11471221B2/en active Active
- 2022
  - 2022-09-12 US US17/942,777 patent/US12251170B2/en active Active
Patent Citations (53)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4614366A (en) | 1983-11-18 | 1986-09-30 | Exactident, Inc. | Nail identification wafer |
| US5891034A (en) | 1990-10-19 | 1999-04-06 | St. Louis University | System for indicating the position of a surgical probe within a head on an image of the head |
| US20060241400A1 (en) | 1990-10-19 | 2006-10-26 | St. Louis University | Method of determining the position of an instrument relative to a body of a patient |
| US7072704B2 (en) | 1990-10-19 | 2006-07-04 | St. Louis University | System for indicating the position of a surgical probe within a head on an image of the head |
| US20020065461A1 (en) | 1991-01-28 | 2002-05-30 | Cosman Eric R. | Surgical positioning system |
| US6006126A (en) | 1991-01-28 | 1999-12-21 | Cosman; Eric R. | System and method for stereotactic registration of image scan data |
| US5848967A (en) | 1991-01-28 | 1998-12-15 | Cosman; Eric R. | Optically coupled frameless stereotactic system and method |
| US6167295A (en) | 1991-01-28 | 2000-12-26 | Radionics, Inc. | Optical and computer graphic stereotactic localizer |
| US6275725B1 (en) | 1991-01-28 | 2001-08-14 | Radionics, Inc. | Stereotactic optical navigation |
| US6351661B1 (en) | 1991-01-28 | 2002-02-26 | Sherwood Services Ag | Optically coupled frameless stereotactic space probe |
| US20020188194A1 (en) | 1991-01-28 | 2002-12-12 | Sherwood Services Ag | Surgical positioning system |
| US6405072B1 (en) | 1991-01-28 | 2002-06-11 | Sherwood Services Ag | Apparatus and method for determining a location of an anatomical target with reference to a medical apparatus |
| US20040138556A1 (en) | 1991-01-28 | 2004-07-15 | Cosman Eric R. | Optical object tracking system |
| US6201984B1 (en) | 1991-06-13 | 2001-03-13 | International Business Machines Corporation | System and method for augmentation of endoscopic surgery |
| US5572999A (en) | 1992-05-27 | 1996-11-12 | International Business Machines Corporation | Robotic system for positioning a surgical instrument relative to a patient's body |
| US5836869A (en) | 1994-12-13 | 1998-11-17 | Olympus Optical Co., Ltd. | Image tracking endoscope system |
| US6122541A (en) | 1995-05-04 | 2000-09-19 | Radionics, Inc. | Head band for frameless stereotactic registration |
| US6246900B1 (en) | 1995-05-04 | 2001-06-12 | Sherwood Services Ag | Head band for frameless stereotactic registration |
| US6434416B1 (en) | 1998-11-10 | 2002-08-13 | Olympus Optical Co., Ltd. | Surgical microscope |
| US6468265B1 (en) | 1998-11-20 | 2002-10-22 | Intuitive Surgical, Inc. | Performing cardiac surgery without cardioplegia |
| US6826423B1 (en) | 1999-01-04 | 2004-11-30 | Midco-Medical Instrumentation And Diagnostics Corporation | Whole body stereotactic localization and immobilization system |
| US7797032B2 (en) | 1999-10-28 | 2010-09-14 | Medtronic Navigation, Inc. | Method and system for navigating a catheter probe in the presence of field-influencing objects |
| US7137712B2 (en) | 1999-12-23 | 2006-11-21 | Northern Digital Inc. | Reflector system for determining position |
| US20020193686A1 (en) | 2000-01-10 | 2002-12-19 | Pinhas Gilboa | Methods and systems for performing medical procedures with reference to projective image and with respect to pre-stored images |
| US20030210812A1 (en) * | 2002-02-26 | 2003-11-13 | Ali Khamene | Apparatus and method for surgical navigation |
| US20060142657A1 (en) | 2002-03-06 | 2006-06-29 | Mako Surgical Corporation | Haptic guidance system and method |
| US7831292B2 (en) | 2002-03-06 | 2010-11-09 | Mako Surgical Corp. | Guidance system and method for surgical procedures with improved feedback |
| US7747311B2 (en) | 2002-03-06 | 2010-06-29 | Mako Surgical Corp. | System and method for interactive haptic positioning of a medical device |
| US20040002642A1 (en) | 2002-07-01 | 2004-01-01 | Doron Dekel | Video pose tracking system and method |
| US20040052333A1 (en) | 2002-09-13 | 2004-03-18 | James Sayre | Device and method for margin marking of radiography specimens |
| US20080262345A1 (en) | 2003-07-21 | 2008-10-23 | The Johns Hopkins University | Image registration of multiple medical imaging modalities using a multiple degree-of-freedom-encoded fiducial device |
| US20050182295A1 (en) | 2003-12-12 | 2005-08-18 | University Of Washington | Catheterscope 3D guidance and interface system |
| WO2005102202A1 (en) | 2004-04-26 | 2005-11-03 | Orthosoft Inc. | Method for permanent calibration based on actual measurement |
| US20070288124A1 (en) | 2004-08-25 | 2007-12-13 | Kabushiki Kaisha Yaskawa Denki | Evaluating System And Evaluating Method Of Robot |
| WO2006091494A1 (en) | 2005-02-22 | 2006-08-31 | Mako Surgical Corp. | Haptic guidance system and method |
| WO2006124388A1 (en) | 2005-05-16 | 2006-11-23 | Intuitive Surgical, Inc | Methods and system for performing 3-d tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery |
| US20060258938A1 (en) | 2005-05-16 | 2006-11-16 | Intuitive Surgical Inc. | Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery |
| US20070013336A1 (en) | 2005-05-19 | 2007-01-18 | Intuitive Surgical Inc. | Software center and highly configurable robotic systems for surgery and other uses |
| WO2006131373A2 (en) | 2005-06-09 | 2006-12-14 | Ife Industrielle Forschung Und Entwicklung Gmbh | Device for the contactless determination and measurement of a spatial position and/or a spatial orientation of bodies |
| US20070167702A1 (en) | 2005-12-30 | 2007-07-19 | Intuitive Surgical Inc. | Medical robotic system providing three-dimensional telestration |
| US20070183041A1 (en) | 2006-02-09 | 2007-08-09 | Northern Digital Inc. | Retroreflective marker-tracking systems |
| WO2007090288A1 (en) | 2006-02-09 | 2007-08-16 | Northern Digital Inc. | Retroreflective marker-tracking systems |
| US20070265527A1 (en) | 2006-05-11 | 2007-11-15 | Richard Wohlgemuth | Medical position determination using redundant position detection means and priority weighting for the position detection means |
| US20080140087A1 (en) * | 2006-05-17 | 2008-06-12 | Hansen Medical Inc. | Robotic instrument system |
| US20080077158A1 (en) * | 2006-06-16 | 2008-03-27 | Hani Haider | Method and Apparatus for Computer Aided Surgery |
| US20080132909A1 (en) | 2006-12-01 | 2008-06-05 | Medtronic Navigation, Inc. | Portable electromagnetic navigation system |
| US20080172119A1 (en) | 2007-01-12 | 2008-07-17 | Medtronic Vascular, Inc. | Prosthesis Deployment Apparatus and Methods |
| US20080240551A1 (en) | 2007-03-30 | 2008-10-02 | Microsoft Corporation | Local bi-gram model for object recognition |
| US20080285724A1 (en) | 2007-05-05 | 2008-11-20 | Ziehm Imaging Gmbh | X-ray diagnostic imaging system with a plurality of coded markers |
| US20100168562A1 (en) | 2008-12-31 | 2010-07-01 | Intuitive Surgical, Inc. | Fiducial marker design and detection for locating surgical instrument in images |
| US9867669B2 (en) | 2008-12-31 | 2018-01-16 | Intuitive Surgical Operations, Inc. | Configuration marker design and detection for instrument tracking |
| US10675098B2 (en) | 2008-12-31 | 2020-06-09 | Intuitive Surgical Operations, Inc. | Configuration marker design and detection for instrument tracking |
| US20200305984A1 (en) | 2008-12-31 | 2020-10-01 | Intuitive Surgical Operations, Inc. | Configuration marker design and detection for instrument tracking |
Non-Patent Citations (63)
| Title |
|---|
| Advisory Action mailed May 18, 2012 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009. (ISRG01910/US). |
| Belongie S., et al., "Shape Matching and Object Recognition Using Shape Contexts," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24 (4), Apr. 2002, pp. 509-522. |
| Burschka, D., et al., "Navigating Inner Space: 3-D Assistance for Minimally Invasive Surgery," Robotics and Autonomous Systems, 2005, vol. 52(1), pp. 5-26. |
| Casals, A. et al., "Automatic Guidance of an Assistant Robot in Laparoscopic Surgery," 1996 IEEE International Conference on Robotics and Automation (ICRA '96), Minneapolis, MN, Apr. 1996, pp. 895-900. |
| Claus D., et al., "Reliable Fiducial Detection in Natural Scenes," European Conference on Computer Vision, 2004, pp. 469-480. |
| Climent, Joan and Pere Mares, "Automatic Instrument Localization in Laparoscopic Surgery," Electronic Letters on Computer Vision and Image Analysis, vol. 4, Issue 1, pp. 21-31, 2004. |
| Comport A.I., "Towards a Computer Imagination: Robust Real-time 3D Tracking of Rigid and Articulated Objects for Augmented Reality and Robotics" University of Rennes 1, 2005, 310 pages. http://www.irisa.fr/lagadic/pdf/2005_these_comport.pdf. |
| Doignon C., "An Introduction to Model-Based Pose Estimation and 3-D Tracking Techniques," in: Scene Reconstruction, Pose Estimation and Tracking, Jun. 2007, pp. 359-382. |
| Doignon, C. et al., "Real-time Segmentation of Surgical Instruments Inside the Abdominal Cavity Using a Joint Hue Saturation Color Feature," Real-Time Imaging, vol. 11, pp. 429-442, 2005. |
| Doignon, Christophe et al., "The Role of Insertion Points in the Detection and Positioning of Instruments in Laparoscopy for Robotic Tasks," Proceedings of Medical Image Computing and Computer-Assisted Intervention Conference (MICCAI) 2006, Lecture Notes in Computer Science 4190, Springer, pp. 527-534, 2006. |
| Dutkiewicz, Piotr et al., "Visual Tracking of Surgical Tools for Laparoscopic Surgery," Fourth International Workshop on Robot Motion and Control (RoMoCo '04), Jun. 17-20, 2004, pp. 23-38. |
| Fergus R., et al., "A Sparse Object Category Model for Efficient Learning and Exhaustive Recognition," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, vol. 1. |
| Fergus R., et al., "Weakly Supervised Scale Invariant Learning of Models for Visual Recognition," International Journal of Computer Vision, 2007, vol. 71 (3), pp. 273-303. |
| Final Office Action mailed Apr. 18, 2016 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009, 14 pages (ISRG01910/US). |
| Final Office Action mailed Feb. 29, 2012 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009. (ISRG01910/US). |
| Final Office Action mailed Mar. 21, 2013 for U.S. Appl. No. 12/428,657, filed Apr. 23, 2009. |
| Final Office Action mailed Nov. 23, 2011 for U.S. Appl. No. 12/428,657, filed Apr. 23, 2009. (ISRG01480/US). |
| Final Office Action mailed Sep. 19, 2014 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009. (ISRG01910/US). |
| Fischler, Martin A. and Robert C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Communications of the ACM, vol. 24, No. 6, Jun. 1981, pp. 381-395. |
| Forsyth D., et al., "Computer Vision: A Modern Approach," Prentice Hall, 2003, pp. 234-250. |
| Harris C., et al., "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, 1988, pp. 147-151. |
| Hartley R. et al., "Multiple View Geometry in Computer Vision," Chapter 2, Cambridge University Press, 2000, 56 pages. |
| Hartley R., et al., "Multiple View Geometry in Computer Vision," Chapter 12: Structure Computation, Cambridge University Press, 2000, 32 pages. |
| Hynes P., et al., "Uncalibrated Visual-Servoing of a Dual-Arm Robot for MIS Suturing," Biomedical Robotics and Biomechatronics, 2006, pp. 420-426. |
| International Search Report and Written Opinion for Application No. PCT/US2009/068423, mailed on Mar. 3, 2010, 15 pages. |
| Kato, Hirokazu and Mark Billinghurst, "Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System," 2nd IEEE and ACM Workshop on Augmented Reality, Oct. 20-21, 1999, pp. 85-94. |
| Kim, Min-Seok et al., "Real-Time Visual Tracking for Laparoscopic Surgery," International Journal of Human-Friendly Welfare Robotic Systems, vol. 5, Issue 1, pp. 2-9, 2004. |
| Kosaka, Akio et al., "Augmented Reality System for Surgical Navigation Using Robust Target Vision," IEEE Conference on Computer Vision and Pattern Recognition, 2000, vol. 2, pp. 187-194. |
| Krupa, Alexandre et al., "Autonomous 3-D Positioning of Surgical Instruments in Robotized Laparoscopic Surgery Using Visual Servoing," IEEE Transactions on Robotics and Automation, vol. 19, No. 5, pp. 842-853, Oct. 2003. |
| Lopez De Ipina, Diego et al., "TRIP: A Low-Cost Vision-based Location System for Ubiquitous Computing," Personal and Ubiquitous Computing, vol. 6, pp. 206-219, 2002. |
| Lorusso D., et al., "A Comparison of Four Algorithms for Estimating 3-D Rigid Transformations," Proceedings of the 1995 British Conference on Machine Vision, vol. 1, 1995, pp. 237-246. |
| Lowe, David G., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, No. 2, Nov. 2004, pp. 91-110. |
| Matas J., et al., "Robust Wide Baseline Stereo from Maximally Stable Extremal Regions," Proceedings of the British Machine Vision Conference, BMVA Press, Sep. 2002, pp. 38.4-39.3. |
| McKenna, S.J. et al., "Towards Video Understanding of Laparoscopic Surgery: Instrument Tracking," Image and Vision Computing New Zealand (IVCNZ '05), Dunedin, Nov. 28-29, 2005, 5 pages. |
| Mooser, Jonathan et al., "Triocodes: A Barcode-like Fiducial Design for Augmented Reality Media," IEEE International Conference on Multimedia and Expo (ICME), Jul. 2006, pp. 1301-1304. |
| Murphy K.P., et al., "Loopy Belief Propagation for Approximate Inference: An Empirical Study," Uncertainty in Artificial Intelligence, vol. 15, 1999, pp. 467-475. |
| Naimark, Leonid and Eric Foxlin, "Circular Data Matrix Fiducial System and Robust Image Processing for a Wearable Vision-Inertial Self-Tracker," International Symposium on Mixed and Augmented Reality (ISMAR '02), Sep. 30-Oct. 1, 2002, pp. 27-36. |
| Non-Final Office Action mailed Feb. 8, 2019 for U.S. Appl. No. 15/699,858, filed Sep. 8, 2017, 17 pages (ISRG01910C1/US). |
| Non-Final Office Action mailed Jun. 9, 2011 for U.S. Appl. No. 12/428,657, filed Apr. 23, 2009. (ISRG01480/US). |
| Non-Final Office Action mailed May 22, 2014 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009 (ISRG01910/US). |
| Non-Final Office Action mailed Oct. 26, 2016 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009, 15 pages (ISRG01910/US). |
| Non-Final Office Action mailed Sep. 13, 2012 for U.S. Appl. No. 12/428,657, filed Apr. 23, 2009. (ISRG01480/US). |
| Non-Final Office Action mailed Sep. 14, 2011 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009. (ISRG01910/US). |
| Non-Final Office Action mailed Sep. 2, 2015 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009, 12 pages (ISRG01910/US). |
| Notice of Allowance mailed Jan. 27, 2020 for U.S. Appl. No. 15/699,858, filed Sep. 8, 2017, 8 pages. |
| Office Action mailed Aug. 9, 2016 for Korean Application No. 10-2011-7017601 filed Jul. 27, 2011, 10 pages (ISRG01910/KR). |
| Office Action mailed Feb. 11, 2016 for Korean Application No. 10-2011-7017601 filed Jul. 27, 2011, 10 pages (ISRG01910/KR). |
| Office Action mailed Feb. 13, 2015 for European Application No. 20090775495 filed Dec. 17, 2009, 5 pages (ISRG01910/EP). |
| Office Action mailed Jan. 21, 2014 for Chinese Application No. 20098157767 filed Dec. 17, 2009, 25 pages (ISRG01910/CN). |
| Office Action mailed Jul. 22, 2014 for Chinese Application No. 20098157767 filed Dec. 17, 2009, 21 pages (ISRG01910/CN). |
| Office Action mailed Jun. 18, 2013 for Chinese Application No. 20098157767 filed Dec. 17, 2009, 24 pages (ISRG01910/CN). |
| Office Action mailed Nov. 6, 2015 for European Application No. 09775495.6 filed Dec. 17, 2009, 4 pages (ISRG01910/EP). |
| PCT/US09/68395 International Search Report and Written Opinion of the International Searching Authority, mailed Mar. 29, 2010, 14 pages. |
| Pearl J., "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference," Morgan Kaufmann Series in Representation and Reasoning, Sep. 1988, pp. 143-237. |
| Pre-Brief Appeal Conference Decision mailed Sep. 13, 2012 for U.S. Appl. No. 12/428,691, filed Apr. 23, 2009. |
| Rekimoto, Jun and Yuji Ayatsuka, "CyberCode: Designing Augmented Reality Environments with Visual Tags," Proceedings of DARE 2000 on Designing Augmented Reality Environments, Elsinore, Denmark, Internet: http://ftp.csl.sony.co.jp/person/rekimoto/papers/dare2000.pdf. |
| Setrix, Inc., "Novel Applications and Sunshiny Markers," White paper, 9 pages, 2003, Internet: http://www.setrix.net/pdf/papers/SetrixLogistics.pdf. |
| Shi J. et al., "Good Features to Track," IEEE Conference on Computer Vision and Pattern Recognition, Jan. 1994, pp. 593-600. |
| Uecker, Darrin R. et al., "Automated Instrument Tracking in Robotically-Assisted Laparoscopic Surgery," Journal of Image Guided Surgery, vol. 1, No. 6, pp. 308-325, 1998. |
| Vertut, J. and Coiffet, P., "Robot Technology: Teleoperation and Robotics Evolution and Development," English translation, Prentice-Hall, Inc., Englewood Cliffs, NJ, USA, 1986, vol. 3A, 332 pages. |
| Voros, Sandrine et al., "Automatic Detection of Instruments in Laparoscopic Images: A First Step Towards High Level Command of Robotized Endoscopic Holders," International Journal of Robotics Research, vol. 26, Issue 11-12, pp. 1173-1190, Nov.-Dec. 2007. |
| Wei, Guo-Qing et al., "Real-Time Visual Servoing for Laparoscopic Surgery," IEEE Engineering in Medicine and Biology Magazine, vol. 16, Issue 1, Jan./Feb. 1997, pp. 40-45. |
| Zhang, Xiaoli and Shahram Payandeh, "Application of Visual Tracking for Robotic-Assisted Laparoscopic Surgery," Journal of Robotic Systems, vol. 19, No. 7, pp. 315-328, 2002. |
Also Published As
| Publication number | Publication date |
|---|---|
| US20100168763A1 (en) | 2010-07-01 |
| KR101709277B1 (en) | 2017-02-23 |
| US20200305984A1 (en) | 2020-10-01 |
| EP2391289A1 (en) | 2011-12-07 |
| KR20110118640A (en) | 2011-10-31 |
| CN102341054A (en) | 2012-02-01 |
| US9867669B2 (en) | 2018-01-16 |
| CN102341054B (en) | 2016-03-16 |
| US20180071033A1 (en) | 2018-03-15 |
| US20230000568A1 (en) | 2023-01-05 |
| US11471221B2 (en) | 2022-10-18 |
| EP2391289B1 (en) | 2016-11-23 |
| WO2010078016A1 (en) | 2010-07-08 |
| US10675098B2 (en) | 2020-06-09 |
Similar Documents
| Publication | Title |
|---|---|
| US12251170B2 (en) | Configuration marker design and detection for instrument tracking |
| US9526587B2 (en) | Fiducial marker design and detection for locating surgical instrument in images |
| Doignon et al. | Segmentation and guidance of multiple rigid objects for intra-operative endoscopic vision |
| Bouget et al. | Detecting surgical tools by modelling local appearance and global shape |
| US8184880B2 (en) | Robust sparse image matching for robotic surgery |
| US9402690B2 (en) | Efficient 3-D telestration for local and remote robotic proctoring |
| Reiter et al. | Feature classification for tracking articulated surgical tools |
| EP2687185B1 (en) | Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery |
| US20100331855A1 (en) | Efficient Vision and Kinematic Data Fusion For Robotic Surgical Instruments and Other Applications |
| WO2014093824A1 (en) | Markerless tracking of robotic surgical tools |
| Voros et al. | Automatic localization of laparoscopic instruments for the visual servoing of an endoscopic camera holder |
| WO2015158756A1 (en) | Method and device for estimating an optimal pivot point |
| Docea et al. | Simultaneous localisation and mapping for laparoscopic liver navigation: a comparative evaluation study |
| US20230233272A1 (en) | System and method for determining tool positioning, and fiducial marker therefore |
| Gennaro | Vision-based Approaches for Surgical Tool Pose Estimation in Minimally Invasive Robotic Surgery |
| Penza et al. | Augmented Reality Navigation in Robot-Assisted Surgery |
| Reiter | Assistive visual tools for surgery |
| Allan et al. | Detection and Localization of Instruments in Minimally Invasive Surgery |
| Farag et al. | Computer Assisted Radiology and Surgery (CARS-2004) |
| Sun et al. | Estimation of incision patterns based on visual tracking of surgical tools in minimally invasive surgery |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | AS | Assignment | Owner name: INTUITIVE SURGICAL OPERATIONS, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHAO, TAO;ZHAO, WENYI;NOWLIN, WILLIAM C.;SIGNING DATES FROM 20170930 TO 20171205;REEL/FRAME:061428/0031 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |