WO2021231293A1 - Systems and methods for region-based presentation of augmented content

Systems and methods for region-based presentation of augmented content

Info

Publication number
WO2021231293A1
Authority
WO
WIPO (PCT)
Prior art keywords
augmented content
viewpoint
region
anchor region
user
Prior art date
Application number
PCT/US2021/031568
Other languages
English (en)
Inventor
Simon P. Dimaio
Brandon D. Itkowitz
Terrence B. MCKENNA III
Govinda PAYYAVULA
Original Assignee
Intuitive Surgical Operations, Inc.
Priority date
Filing date
Publication date
Application filed by Intuitive Surgical Operations, Inc.
Priority to CN202180030969.1A (publication CN115461700A)
Priority to US17/919,927 (publication US20230186574A1)
Publication of WO2021231293A1

Classifications

    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G06F 3/013: Eye tracking input arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06T 19/006: Mixed reality
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 2219/2016: Rotation, translation, scaling (indexing scheme for editing of 3D models)
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 20/44: Event detection (scene-specific elements in video content)
    • A61B 2017/00203: Electrical control of surgical instruments with speech control or speech recognition
    • A61B 2017/00216: Electrical control of surgical instruments with eye tracking or head position tracking control
    • A61B 2034/2048: Tracking techniques using an accelerometer or inertia sensor
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2074: Interface software (surgical navigation systems)
    • A61B 2090/365: Augmented reality, i.e. correlating a live optical image with another image
    • A61B 2090/372: Details of monitor hardware (surgical systems with images on a monitor during operation)
    • A61B 2090/502: Headgear, e.g. helmet, spectacles
    • A61B 34/30: Surgical robots

Definitions

  • Augmented reality technologies, certain virtual reality technologies, and other forms of extended reality may all be considered mixed reality technologies as that term is used herein.
  • Various applications and use cases may be served by using mixed reality technology to present augmented content (i.e., one or more virtual elements) onto a view of the real world. However, it may not be the case that a prominent display of such augmented content is equally desirable at all times or for every situation.
  • SUMMARY: Systems and methods for region-based presentation of augmented content to a user are described herein. For instance, one embodiment of such a region-based augmentation system includes a memory storing instructions and a processor communicatively coupled to the memory and configured to execute the instructions to perform certain operations.
  • the operations may include determining that a viewpoint of a user of a display device is directed within an anchor region of a physical world containing the user.
  • the operations may also include directing, in response to the determination that the viewpoint is directed within the anchor region, the display device to present augmented content in an evident manner.
  • the evident manner may include anchoring the augmented content relative to the viewpoint, such that the augmented content follows the viewpoint as the user moves the viewpoint within the anchor region.
  • the operations may further include determining that the viewpoint is directed outside of the anchor region by the user, and, in response to the determination that the viewpoint is directed outside of the anchor region, directing the display device to present the augmented content in a less evident manner.
  • presenting the augmented content in the less evident manner may include presenting the augmented content less visibly than the evident manner or unanchoring the augmented content relative to the viewpoint.
  • An example embodiment of a region-based augmentation method may be performed by a region-based augmentation system.
  • the method may include determining that a viewpoint of a user of a display device is directed within an anchor region of a physical world containing the user.
  • the method may also include directing, in response to the determination that the viewpoint is directed within the anchor region, the display device to present augmented content in an evident manner.
  • the evident manner may include anchoring the augmented content relative to the viewpoint, such that the augmented content follows the viewpoint as the user moves the viewpoint within the anchor region.
  • the method may further include determining that the viewpoint is directed outside of the anchor region by the user, and, in response to the determination that the viewpoint is directed outside of the anchor region, directing the display device to present the augmented content in a less evident manner.
  • presenting the augmented content in the less evident manner may include presenting the augmented content less visibly than the evident manner or unanchoring the augmented content relative to the viewpoint.
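The operations and method summarized above amount to a check that runs each time the tracked viewpoint is re-estimated: if the viewpoint is directed within the anchor region, the content is presented in the evident manner; otherwise it is presented in the less evident manner. The Python sketch below illustrates one possible wiring of that logic; the class and function names (Viewpoint, AnchorRegion, DisplayDevice, update_presentation), the box-shaped region, and the focal-distance test are assumptions made for the sketch, not details taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    origin: tuple      # (x, y, z) position of the user's head or display device
    direction: tuple   # unit vector of the viewing direction

@dataclass
class AnchorRegion:
    min_corner: tuple  # axis-aligned box enclosing, e.g., an operating table and body
    max_corner: tuple

    def contains(self, point):
        return all(lo <= p <= hi
                   for p, lo, hi in zip(point, self.min_corner, self.max_corner))

class DisplayDevice:
    """Hypothetical stand-in for a head-mounted or handheld AR display."""
    def present_evident(self, content):
        print(f"{content}: anchored to viewpoint, fully visible (evident manner)")

    def present_less_evident(self, content):
        print(f"{content}: faded, shrunk, or unanchored (less evident manner)")

def update_presentation(display, region, viewpoint, content, focal_distance=1.0):
    """Run whenever the viewpoint is re-estimated: choose the manner of presentation."""
    focus = tuple(o + d * focal_distance
                  for o, d in zip(viewpoint.origin, viewpoint.direction))
    if region.contains(focus):
        display.present_evident(content)       # viewpoint directed within the anchor region
    else:
        display.present_less_evident(content)  # viewpoint directed outside the anchor region
```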
  • FIG.1 shows an illustrative region-based augmentation system for region-based presentation of augmented content in an evident manner to a user according to principles described herein. By contrast, the augmented content may be presented in a less evident manner (e.g., with a smaller size, presented with a greater degree of transparency, not presented at all, etc.) when the view is not directed in that one region of the world.
  • a region of the physical world where it is generally desirable for augmented content to be presented in the evident manner is referred to herein as an anchor region.
  • Systems and methods described herein for region-based presentation of augmented content may provide useful benefits and advantages in a wide variety of circumstances and use cases.
  • various applications facilitated by extended reality technologies such as recreational or entertainment applications (e.g., video games, television or movie content, exercise programs, etc.), industrial applications (e.g., manufacturing, robotic, and other applications), educational and training applications, consumer and professional applications, communication applications, and so forth may all be served by implementations of region-based augmentation systems and methods described herein.
  • aspects of this disclosure are described in reference to computer-assisted systems and devices, which may include systems and devices that are manually manipulated, remote-controlled, semi-autonomous, autonomous, etc.; and including systems and devices that are robotic or non-robotic, teleoperated or non-teleoperated, etc
  • aspects of this disclosure are described in terms of an implementation using a surgical system, such as the da Vinci® Surgical System commercialized by Intuitive Surgical, Inc. of Sunnyvale, California.
  • inventive aspects disclosed herein may be embodied and implemented in various other ways. Implementations on da Vinci® Surgical Systems are merely exemplary and are not to be considered as limiting the scope of the inventive aspects disclosed herein.
  • techniques described with reference to surgical instruments and surgical methods may be used in other contexts.
  • the instruments, systems, and methods described herein may be used for humans, animals, portions of human or animal anatomy, industrial systems, general robotic, or teleoperational systems.
  • the instruments, systems, and methods described herein may be used for non-medical purposes including industrial uses, general robotic uses, sensing or manipulating non-tissue work pieces, cosmetic improvements, imaging of human or animal anatomy, gathering data from human or animal anatomy, setting up or taking down systems, training medical or non-medical personnel, and/or the like.
  • Additional example applications include use for procedures on tissue removed from human or animal anatomies (without return to a human or animal anatomy) and for procedures on human or animal cadavers. Further, these techniques can also be used for medical treatment or diagnosis procedures that include, or do not include, surgical aspects.
  • region-based augmentation systems and methods described below will often be described and illustrated in a medical context that will be understood to relate to various surgical and non-surgical medical procedures and/or operations (e.g., diagnostic procedures, treatment procedures, training procedures, etc.) for which medical personnel may utilize mixed reality technologies such as, for example, one that involves an augmented reality display device. While such medical contexts are used as examples, it will be appreciated that the principles described herein may find significant applicability to various other types of contexts, scenarios, and applications. To illustrate a particular medical example, a minimally-invasive surgical procedure performed using a computer-assisted (e.g., robotically-controlled) medical system will be considered (a specific example of such a system will be described and illustrated in more detail below in relation to FIG.10).
  • Surgical staff performing intracorporeal tasks at the bedside of such a procedure have conventionally visualized surgical endoscope video and/or other imagery using physical monitors mounted to walls or ceilings of the operating rooms, or to equipment in the operating rooms.
  • the placement of monitors in these types of locations may be non-ergonomic and inconvenient, and may make it difficult for the staff members to visualize intracorporeal events and to efficiently perform their assigned tasks during the procedure.
  • Systems and methods described herein may provide significant benefits in this type of medical context, such as by presenting augmented content (e.g. additional imagery, video, etc.) to the medical staff members’ eyes during the procedure in an evident manner when the content is determined to be desirable, useful, or convenient.
  • FIG.1 shows an illustrative region-based augmentation system 100 (“system 100”) for region-based presentation of augmented content (e.g., for a medical example, for use by a medical staff member assisting or performing a medical procedure, etc.).
  • system 100 may include, without limitation, a memory 102 and a processor 104 selectively and communicatively coupled to one another.
  • memory 102 may represent one or more memory modules of the same or different types (e.g., transient memory, non-transient storage, etc.).
  • processor 104 may represent one or more processing units of the same or different types (e.g., central processing units (“CPUs”), graphics processing units (“GPUs”), embedded devices, customizable devices such as field programmable gate arrays (“FPGAs”), etc.).
  • memory 102 and processor 104 may be integrated into a single device, while, in other examples, either or both of memory 102 and processor 104 may be distributed between multiple devices and/or multiple locations.
  • a display device associated with a particular user may include one or more built-in processing units, memory modules, sensors, communication interfaces, and so forth, all of which may interoperate to implement system 100.
  • some or all of these components may not be integrated into the display device itself but, rather, may be implemented on other computing systems as may serve a particular implementation (e.g., edge servers, cloud servers, computing devices integrated with other components of a computer-assisted medical system such as will be described in relation to FIG.10 below, etc.).
  • Memory 102 may store or otherwise maintain executable data used by processor 104 to perform any of the functionality described herein.
  • memory 102 may store instructions 106 that may be executed by processor 104 to perform any of the functionality described herein. Instructions 106 may be implemented by any suitable application, software, code, and/or other executable data instance. Memory 102 may also maintain any data received, generated, managed, used, and/or transmitted by processor 104. Processor 104 may be configured to perform, such as by being configured to execute instructions 106 stored in memory 102 to perform, various processing functions associated with region-based presenting of augmented content.
  • processor 104 may determine that a viewpoint of a user of a display device is directed within an anchor region of a physical world containing the user, and may direct, in response to the determination that the viewpoint is directed within the anchor region, the display device to present augmented content in an evident manner.
  • the display device may be implemented as a head-mounted augmented reality display that the user utilizes to perform or help with a medical procedure that is being performed on a body within a medical area.
  • Processor 104 may define the anchor region based on regions entered by the user.
  • Processor 104 may also define the anchor region semi-automatically, such as by presenting for selection the anchor region based on operating conditions or other factors.
  • Processor 104 may also define the anchor region automatically, such as by identifying the anchor region by itself, based on various parameters detected or received.
  • Example operating conditions and other factors include: a location, size, or other geometric feature of a site for the medical procedure, a system to be used in the procedure, a tool for performing the procedure, the user or some other person associated with the procedure, a type of the procedure or system or tool, stored user preference information, etc.
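As a concrete illustration of semi-automatic or automatic definition, the anchor region could be derived from the geometry of the procedure site, for example as a padded bounding box around the operating table and the body resting on it. The helper below is a minimal sketch under that assumption; the Box3D type, the margin value, and the table dimensions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    min_corner: tuple  # (x, y, z) in some world frame, e.g., metres
    max_corner: tuple

def define_anchor_region(site: Box3D, margin: float = 0.25) -> Box3D:
    """Pad the detected procedure site (e.g., operating table plus body) so the
    anchor region also encloses a small amount of surrounding space."""
    return Box3D(
        tuple(c - margin for c in site.min_corner),
        tuple(c + margin for c in site.max_corner),
    )

# Example: a table roughly 2.0 m long, 0.7 m wide, and 1.0 m tall near the room origin.
table = Box3D((0.0, 0.0, 0.0), (2.0, 0.7, 1.0))
anchor_region = define_anchor_region(table, margin=0.25)
```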
  • the augmented content may be configured to assist the user in performing the medical procedure.
  • the augmented content may comprise textual information, graphical information (e.g., video, photographs, sensor images, symbols, drawings, graphs, etc.), or both textual and graphical information.
  • the augmented content may feature one or more external or internal views of the body receiving the medical procedure. Such views may be captured preoperatively or intraoperatively, and may be captured by any appropriate imaging device.
  • Example imaging devices include: a camera for capturing visible light or non-visible light, an endoscope, an ultrasound module, a fluorescence imaging module, a fluoroscopic imaging module, etc.
  • the augmented content may depict a model that has been generated based on preoperative data or that is generated and/or updated based on intraoperative data.
  • the user may be a person helping to perform the medical procedure and the anchor region may be associated with the body receiving the medical procedure.
  • the anchor region may comprise a space proximate to a portion of the body or the entire body, a space that surrounds the portion of the body or the entire body, etc.
  • medical procedures include surgical procedures as well as non-surgical procedures. Examples of such medical procedures include those for diagnostics, treatment and therapy, cosmetics, imaging, data gathering, training, and demonstration. Medical procedures may or may not utilize minimally invasive techniques involving a computer-assisted medical system. Examples of bodies on which medical procedures may be performed include live human patients or other suitable bodies that may be living or non-living, biological or non-biological, natural or artificial, and so forth.
  • processor 104 may, in certain implementations, anchor the augmented content relative to the viewpoint of the user such that the augmented content follows the viewpoint as the user moves the viewpoint within the anchor region.
  • the augmented content may remain easily viewable to the user while he or she is looking within the anchor region (e.g., looking in the direction of the anchor region), and ergonomics and efficiency are improved. As will be described below, there may also be situations where this anchoring of the augmented content to the viewpoint is changed or is performed in other ways.
  • system 100 may continuously present the augmented content in the evident manner, where the evident manner includes anchoring the augmented content relative to the viewpoint. However, if the user directs the viewpoint to another part of the physical world away from the anchor region, system 100 may automatically transition to presenting the augmented content in a less evident manner.
  • Presenting in a less evident manner may be helpful to reduce visual distractions, increase convenience, comfort, efficiency, or the like where the user is performing a task not aided by an evident presentation of the augmented content.
  • Presenting in a less evident manner may comprise presenting the augmented content less visibly than the evident manner, or unanchoring the augmented content relative to the viewpoint.
  • Presenting content less visibly than the evident manner may, in some examples, include not presenting the augmented content at all.
  • processor 104 may determine that the viewpoint is directed outside of the anchor region by the user, and, in response to the determination that the viewpoint is directed outside of the anchor region, processor 104 may direct the display device to present the augmented content in the less evident manner.
  • processor 104 may present the augmented content in the less evident manner by presenting the augmented content less visibly than the evident manner, such as by presenting the augmented content with a smaller size or with higher transparency, by ceasing to display the augmented content, or the like.
  • processor 104 may present the augmented content in the less evident manner by unanchoring the augmented content relative to the viewpoint; in some instances, processor 104 may instead anchor the augmented content to the physical world.
  • system 100 (e.g., processor 104) may be configured to perform region-based presenting of augmented content in an evident manner to a user in real time.
  • a function may be said to be performed in real time when the function relates to or is based on dynamic, time-sensitive information and the function is performed while the time-sensitive information remains accurate or otherwise relevant. Due to processing times, communication latency, and other inherent delays in physical systems, certain functions may be considered to be performed in real time when performed immediately and without undue delay, even if performed after a small delay (e.g., a delay up to a few seconds or the like). As one example of real-time functionality, processor 104 may direct the display device to present augmented content in an evident manner, a less evident manner, or to switch between (or begin transitioning between) these different manners of presentation in a manner that is responsive to the user’s movement of the display device.
  • FIG.2 shows certain aspects of an illustrative implementation of system 100 in operation. Specifically, as shown, FIG.2 depicts a top view of a physical world 200, a user 202 contained within physical world 200, and a display device 204 being used by user 202.
  • the physical world 200 is the real world that contains user 202.
  • a viewpoint 206 of user 202 may be an actual or estimated direction and/or location of viewing focus for user 202, or may be determined based on the direction and/or location of the viewing focus.
  • viewpoint 206 is determined based on a gaze direction or gaze location of user 202, based on a direction or location toward which the head of user 202 is directed, based on a direction or location with which one or more eyes of user 202 are determined to be engaged, or based on a field of view associated with display device 204.
  • Viewpoint 206 may be freely movable by user 202 to be directed, at different times, toward or at any of various areas of physical world 200.
  • Three illustrative viewpoints 206 (labeled as viewpoints 206-1 through 206-3) are depicted in FIG.2 using different styles of lines that originate from user 202 and move outward in particular directions.
  • FIG.2 depicts a rectangular portion of physical world 200, and will be understood to represent a portion of the world such as a room or another suitable area that includes at least one anchor region and where system 100 may perform region-based presentation of augmented content according to principles described herein.
  • physical world 200 may represent an operating room or other medical area where a medical procedure is being performed.
  • User 202 may represent any user of display device 204.
  • Display device 204 may be implemented by any suitable device that is configured to be used by user 202 to present augmented content based on a region of physical world 200 to which user 202 directs viewpoint 206.
  • display device 204 may be implemented by a head-mounted display device configured to present augmented content in a field of view of a wearer of the device as the wearer moves the viewpoint within physical world 200 using head motions.
  • head-mounted display devices may be implemented by dedicated augmented reality devices, by general purpose mobile devices (e.g., tablet computers, smartphones, etc.) that are worn in front of the eyes using a head-mounting apparatus, or by other types of display devices.
  • display device 204 may be implemented by devices that are not worn on the head.
  • display device 204 may be implemented by a handheld device (e.g., a mobile device such as a smartphone, tablet, etc.) that may be pointed in different directions and/or focused to different distances within physical world 200, by a projection-based augmented reality device that may project augmented content onto physical surfaces in physical world 200, or by other non-head-mounted devices that are capable of presenting augmented reality content in various other suitable ways.
  • viewpoint 206 of user 202 may be determined as a direction and/or location of view of display device 204, instead of a direction and/or location of view of user 202.
  • display device 204 may include a camera that captures a view of physical world 200 and passes the view in real-time through to a display screen viewable by the user, to present the view of physical world 200 to user 202.
  • Certain general-purpose mobile devices used to implement display device 204 may operate in this manner, for example.
  • display device 204 may include a see-through screen that allows light to pass through from physical world 200 to reach the eyes of user 202, and that allows augmented content to be presented on the screen by being overlaid onto the view of the physical world viewable through the screen.
  • the see-through screen may be positioned in front of the eyes of user 202.
  • the see-through screen may allow user 202 to direct viewpoint 206 at will, and augmented content may be presented to allow user 202 to see physical world 200 together with the augmented content by way of the see-through screen.
  • display device 204 may further include computer hardware and software (e.g., a processor, a memory storing instructions, etc.) sufficient to implement system 100, or to communicate with another computing system that fully or partially implements system 100.
  • system 100 determines viewpoint 206 based on at least a viewing direction defined based on the location and orientation of display device 204, such as illustrated in viewpoints 206-1 through 206-3 in FIG.2. In some examples, system 100 determines viewpoint 206 based on at least a viewing direction defined based on the location and orientation of user 202, such as based on the orientation of a head of user 202. In yet another example, system 100 may determine the direction of viewpoint 206 based on additional or other parameters. For example, system 100 may determine the direction of viewpoint 206 based on at least a gaze direction of the eyes of user 202. Accordingly, in such implementations, system 100 may employ eye tracking technology to determine viewpoint 206.
  • system 100 may determine that the viewpoint is directed within an anchor region (e.g. anchor region 208) by determining that the viewing direction intersects with the anchor region.
  • system 100 determines viewpoint 206 as a location of visual focus.
  • viewpoint 206 may be described by a viewing direction and a viewing distance from display device 204 or user 202.
  • the system 100 may determine viewpoint 206 based on a viewing direction and a focal distance of the eyes of user 202.
  • system 100 may determine that the viewpoint is directed within an anchor region (e.g. anchor region 208) by determining that the viewing direction and distance falls within the anchor region.
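Both determinations described above reduce to simple geometric tests: a viewing-direction ray intersecting the anchor region, or a focus point (viewing direction scaled by a focal distance) falling inside it. The sketch below assumes an axis-aligned box region and uses NumPy; the function names are hypothetical, and a real implementation would use whatever region representation the tracking pipeline provides.

```python
import numpy as np

def point_in_box(point, box_min, box_max):
    point, box_min, box_max = map(np.asarray, (point, box_min, box_max))
    return bool(np.all(point >= box_min) and np.all(point <= box_max))

def ray_intersects_box(origin, direction, box_min, box_max):
    """Slab test: does the viewing ray hit the axis-aligned anchor-region box?"""
    origin, direction, box_min, box_max = map(np.asarray, (origin, direction, box_min, box_max))
    direction = np.where(direction == 0.0, 1e-9, direction)  # avoid division by zero
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return bool(t_far >= max(t_near, 0.0))

def viewpoint_in_anchor_region(origin, direction, box_min, box_max, focal_distance=None):
    if focal_distance is not None:
        # Viewpoint treated as a location of visual focus (direction plus distance).
        focus = np.asarray(origin) + np.asarray(direction) * focal_distance
        return point_in_box(focus, box_min, box_max)
    # Viewpoint treated as a viewing direction only.
    return ray_intersects_box(origin, direction, box_min, box_max)
```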
  • Objects 210 may each represent any object or other imagery that may be present in physical world 200.
  • physical world 200 includes a medical area (e.g., an operating room, etc.) where a medical procedure is being performed
  • object 210-1 may represent an operating table upon which a body rests while the medical procedure is performed.
  • Objects 210-2 through 210-4 may then be other types of objects present in the medical area, such as other personnel, medical equipment (e.g., components of a computer-assisted medical system, a table upon which instruments are held, etc.), or other objects as may serve a particular implementation.
  • anchor region 208 may be defined within physical world 200 as an area associated with object 210-1. In this example, as illustrated, anchor region 208 may be defined to enclose the entire operating table and body being operated on, as well as a small amount of space surrounding the operating table and body.
  • system 100 may be configured to determine when viewpoint 206 is directed within or outside of an anchor region.
  • System 100 may perform this determination in any suitable way, such as by using an appropriate sensor of display device 204 (e.g., an image sensor, an accelerometer, a gyroscopic sensor, a magnetometer, etc.), by using external tracking sensors (e.g., optical tracking sensors mounted in physical world 200 and configured to monitor user 202 and/or display device 204, optical or motion sensors mounted to user 202 or display device 204, etc.), and/or based on any suitable tracking techniques or algorithms (e.g., registration or spatial tracking techniques, simultaneous localization and mapping (“SLAM”) algorithms, etc.) to determine viewpoint 206.
  • such sensors and/or techniques can be used to determine the location of display device 204 or user 202, how display device 204 or user 202 is oriented in space, etc., to determine how viewpoint 206 is directed.
  • system 100 may determine that viewpoint 206 is directed within anchor region 208 when viewpoint 206 intersects with anchor region 208; in this example, when viewpoint 206 is determined not to intersect with anchor region 208, system 100 then determines that viewpoint 206 is not directed within, and is directed outside of, anchor region 208.
  • system 100 may determine that viewpoint 206 is directed within anchor region 208 when the viewing direction and viewing distance falls within anchor region 208; in this example, when the viewing direction and viewing distance is determined to place viewpoint 206 outside of the anchor region 208, system 100 then determines that viewpoint 206 is not directed within, and is directed outside of, anchor region 208.
  • viewpoint 206-1 is directed by user 202 within anchor region 208
  • viewpoint 206-2 is directed toward a boundary of anchor region 208
  • viewpoint 206-3 is directed outside of anchor region 208. Fields of view associated with each of these viewpoints 206 will now be illustrated and described in more detail in relation to FIGS. 3-8.
  • FIG.3 shows an illustrative field of view associated with viewpoint 206-1, which is directed within anchor region 208.
  • viewpoint 206-1 is illustrated as a perspective view, from a vantage point of user 202, of a portion of object 210-1 that is viewable through or on display device 204.
  • display device 204 is shaped roughly like a viewport of an illustrative head-mounted display device or pair of glasses, including a large viewable space with a notch 300 where the device could rest on the nose. This shape is used for illustrative convenience only, and it will be understood that various implementations may employ various types and shapes of viewports, including rectangular viewports.
  • display device 204 may be configured to present monoscopic views, stereoscopic views, 3-D views, and so forth.
  • FIG.3 shows that display device 204 may present augmented content that is not actually present at the scene but is superimposed so as to be viewable by user 202 together with the imagery and objects that are present in physical world 200. More particularly, FIG.3 shows an illustrative presentation of augmented content in an evident manner within the field of view associated with viewpoint 206-1.
  • augmented content 302 is of a first type (e.g., showing real-time imagery, such as an internal view of the body currently being captured by an imaging instrument such as an endoscope inserted within the body, an ultrasonic probe, a fluoroscopic device, etc.). Augmented content 302 is shown to be displayed at a location in the bottom-left portion of the field of view in FIG.3, while augmented content 304 is of a second type (e.g., showing non-real-time imagery, such as preoperative images captured by MRI or CT scans, an anatomical model generated based on preoperative imaging, etc.). Augmented content 302 and 304 may be related to each other, such as by providing complementary information, or by changing together with each other.
  • augmented content 302 may comprise a real-time image of the anatomy captured by an imaging device, and augmented content 304 may comprise a model or preoperative image of the same anatomy registered directly or indirectly to the real-time image; in this example, as the augmented content 302 is updated to show a different part of the anatomy, augmented content 304 is automatically updated to depict the same different part, or vice versa.
  • Augmented content 304 is shown to be displayed at a location in the bottom-right portion of the field of view in FIG.3. At the time period represented in FIG.3 (i.e., when viewpoint 206 is directed within anchor region 208 as shown for viewpoint 206-1), one or both of augmented contents 302 and 304 may be presented in an evident manner.
  • presenting augmented content in the evident manner in accordance with the example of FIG.3 includes anchoring the augmented content relative to viewpoint 206 such that the augmented content follows viewpoint 206 as user 202 moves viewpoint 206 within anchor region 208.
  • the augmented content shown to be presented by display device 204 in FIG. 3 includes two virtual screens in particular locations (i.e., augmented content 302 comprising a first virtual screen and 304 comprising a second virtual screen).
  • augmented content in a particular implementation can comprise any type, number, and locations of elements, with each element having any suitable form or shape appropriate for the implementation.
  • the augmented content may comprise graphical or textual displays not in the form of virtual screens.
  • the augmented content may comprise fewer or more virtual screens.
  • multiple augmented contents may be aligned horizontally or vertically.
  • different inputs to system 100, or motions of display device 204 (e.g., the user pitching the display device up and down, yawing it side to side, etc.), may also be used to affect how such augmented content is arranged or presented.
  • Augmented content may be configured to present any information in any manner as may serve a particular implementation.
  • augmented content 302 is implemented as comprising a virtual screen that presents one type of internal view of the body of object 210-1
  • augmented content 304 is implemented as comprising a virtual screen separate from that of augmented content 302, and that presents a different type of internal view of the body of object 210-1.
  • augmented content 302 and 304 are for illustration purposes only, and that various types of other augmented content may be presented in other medical procedure examples or other non-medical examples.
  • Each of augmented contents 302 and 304 may be anchored to viewpoint 206, anchored to physical world 200, anchored to some other reference, or not anchored, in any suitable way.
  • the augmented content presented by display device 204 may include first augmented content (e.g. a first virtual screen such as that shown in FIG.3 for augmented content 302) anchored to viewpoint 206 (e.g., at the bottom-left portion of the field of view) and second augmented content (e.g. a second virtual screen such as that shown in FIG.3 for augmented content 304) anchored to viewpoint 206 at a different location (e.g., at the bottom-right portion of the field of view).
  • the augmented content presented by display device 204 may include first augmented content (e.g. a first virtual screen such as that shown in FIG.3 for augmented content 302) anchored to viewpoint 206 and second augmented content (e.g. a second virtual screen such as that shown in FIG.3 for augmented content 304) anchored to physical world 200.
  • augmented content such as augmented contents 302 and 304 may be anchored to or unanchored from viewpoint 206, or anchored to or unanchored from physical world 200.
  • FIGS.4A and 4B illustrate the difference between anchoring an augmented content relative to a viewpoint (in a fixed location relative to the viewpoint or aspect of the viewpoint) and anchoring an augmented content relative to the physical world (in a location fixed relative to the physical world or aspect of the physical world, such as to coordinates defined in the physical world, to a real object in the physical world, etc.).
  • FIG.4A shows a viewpoint anchor mode 400-1 in which illustrative augmented content is anchored relative to a viewpoint of a user of display device 204
  • FIG.4B shows a physical-world anchor mode 400-2 in which illustrative augmented content is anchored relative to the physical world as viewed by the user by way of display device 204.
  • Each of FIGS.4A and 4B shows the view at an initial time (on the left sides of FIGS.4A and 4B) and at a subsequent time after display device 204 has been moved to the left (in accordance with a movement arrow displayed above the initial view) and the viewpoint has been redirected to a different position, as shown on the right sides of FIGS.4A and 4B.
  • an object 402 in the physical world is depicted (e.g., object 402 can be one of objects 210 in physical world 200).
  • Object 402 can be viewed when user 202 directs the view associated with display device 204 accordingly.
  • FIG.4A shows a viewpoint-anchored augmented content 404 that is anchored relative to the viewpoint (in this example, fixed in a location relative to the viewpoint).
  • FIG.4B shows a physical-world-anchored augmented content 406 that is anchored to the physical world (in this particular example, fixed in a location relative to object 402).
  • viewpoint-anchored augmented content 404 remains in the same place relative to the viewpoint.
  • viewpoint-anchored augmented content 404 remains the same size and remains at the fixed location in the bottom-left portion of the field of view as the viewpoint is redirected from the initial to the subsequent position over time.
  • As shown in FIG.4B, when user 202 redirects display device 204 in the same way for physical-world-anchored augmented content 406, physical-world-anchored augmented content 406 remains in the same place relative to object 402.
  • In FIG.4B, physical-world-anchored augmented content 406 appears slightly larger and remains at the fixed location near the top-left corner of object 402 as the viewpoint is redirected from the initial to the subsequent position over time.
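In implementation terms, the two anchor modes of FIG.4A and FIG.4B differ only in which reference frame the content's pose is held fixed. The following is a minimal sketch, assuming poses are represented as 4x4 homogeneous transforms; the function and parameter names are not taken from the disclosure.

```python
import numpy as np

def content_pose_in_world(anchor_mode, viewpoint_pose_world, anchored_pose):
    """Return the 4x4 world-frame pose at which the content should be rendered.

    anchor_mode == "viewpoint": the content is held at a fixed offset in the
    viewpoint frame, so it follows the viewpoint as the user moves it
    (the behavior of FIG.4A); anchored_pose is that fixed offset.
    anchor_mode == "world": the content is held at a fixed pose in the world
    frame, so it stays put while the viewpoint moves (the behavior of FIG.4B);
    anchored_pose is that fixed world pose.
    """
    if anchor_mode == "viewpoint":
        return np.asarray(viewpoint_pose_world) @ np.asarray(anchored_pose)
    if anchor_mode == "world":
        return np.asarray(anchored_pose)
    raise ValueError(f"unknown anchor mode: {anchor_mode!r}")
```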
  • each of augmented contents 302 and 304 may be implemented as a viewpoint-anchored augmented content in some instances, a physical-world-anchored augmented content in some instances, or a free-floating or other-anchored augmented content in some instances.
  • one or both of augmented contents 302 and 304 may become anchored, or change their anchor modes, under certain circumstances, such as based on where viewpoint 206 is directed within physical world 200.
  • both augmented contents 302 and 304 may remain anchored relative to viewpoint 206 (e.g., at the bottom-left and bottom-right portions of the field of view, respectively, as shown) as user 202 moves viewpoint 206 within anchor region 208.
  • system 100 may begin presenting augmented content 302 and/or 304 in a less evident manner.
  • system 100 may present the augmented content less visibly than the evident manner shown in FIG.3, and/or may change the anchor mode from the viewpoint anchor mode to another mode.
  • system 100 may present the augmented content in a physical-world anchor mode (e.g., physical-world anchor mode 400-2).
  • system 100 may direct display device 204 to present the augmented content in the less evident manner by unanchoring the augmented content relative to viewpoint 206.
  • This unanchoring may include ceasing to anchor the augmented content relative to viewpoint 206 in at least one spatial direction (i.e., such that the augmented content does not follow viewpoint 206 in at least that one spatial direction).
  • a spatial direction may refer to a translational or a rotational direction of movement in space.
  • the ceasing to anchor the augmented content in the at least one spatial direction may be performed by ceasing to anchor the augmented content to viewpoint 206 in all spatial directions, such that the augmented content is no longer anchored to viewpoint 206 in any spatial direction.
  • the ceasing to anchor the augmented content in the at least one spatial direction comprises unanchoring the augmented content in less than all spatial directions, such as in only one spatial direction, such that the augmented content does not move in the at least one spatial direction relative to anchor region 208 as viewpoint 206 moves, but does still follow viewpoint 206 as user 202 moves viewpoint 206 in other spatial directions.
  • FIG.5 shows an illustrative example for when the viewpoint 206-2 is directed near a boundary 502 of anchor region 208.
  • augmented content 302 is anchored in a first spatial direction relative to a viewpoint of a user and anchored in a second spatial direction relative to the boundary.
  • viewpoint 206-2 represents a moment in time when viewpoint 206 is being moved by user 202 leftward (in accordance with a movement arrow shown above display device 204) so as to move viewpoint 206 across boundary 502 (i.e., so as to move viewpoint 206 from within anchor region 208 to outside of anchor region 208).
  • augmented content 302 may be anchored relative to viewpoint 206 in all spatial directions.
  • system 100 may direct display device 204 to change how augmented content 302 is anchored (e.g., change the anchor mode of augmented content 302).
  • system 100 may direct display device 204 to cease anchoring augmented content 302 to viewpoint 206, and to begin anchoring augmented content 302 to coordinates in physical world 200 (e.g., coordinates in a world reference frame that does not move with the viewpoint 206).
  • a change in how augmented content 302 is anchored may affect all or only some of the spatial directions (e.g., lateral, vertical, and depth spatial directions) with respect to which augmented content 302 is anchored.
  • augmented content 302 is shown to be anchored, with respect to a lateral (e.g., horizontal) spatial direction, to boundary 502; with this approach, as viewpoint 206 is moved left or right over boundary 502, augmented content 302 remains anchored to boundary 502 and does not leave anchor region 208.
  • augmented content 302 may remain anchored, with respect to a vertical spatial direction and/or a depth spatial direction (e.g., distance to boundary 502), to viewpoint 206; with this approach, augmented content 302 is continually displayed near the bottom of the field of view at a constant size as user 202 moves viewpoint 206 over boundary 502.
  • augmented content 302 may be concurrently anchored in accordance with a viewpoint anchor mode with respect to one or more spatial directions (e.g., the vertical and/or depth spatial directions) while being anchored in accordance with a physical- world anchor mode with respect to one or more other spatial directions (e.g., the lateral and/or the depth spatial directions).
  • augmented content that is anchored to boundary 502 will be considered to be using a form of physical-world anchoring even if boundary 502 is not associated with any particular physical object since anchor region 208 is defined as a region of physical world 200. Additionally, it will be understood that augmented content may be anchored to the anchor region in various ways other than the way shown in FIG.5. For example, an augmented content may be anchored on the outside of a boundary, anchored so as to be centered on the boundary, anchored with respect to two or more spatial directions (instead of just to the lateral spatial direction such as described above), anchored to a center of the anchor region, anchored in rotation and not anchored in translation, or the like.
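The FIG.5 behavior, in which the content is pinned to boundary 502 laterally while still following viewpoint 206 vertically and in depth, can be sketched as a per-axis choice between two candidate positions. The axis conventions (x lateral, y vertical, z depth) and the function name below are assumptions made for illustration.

```python
import numpy as np

def hybrid_anchored_position(viewpoint_anchored_pos, boundary_anchored_pos,
                             follows_viewpoint=(False, True, True)):
    """Select, axis by axis, between two candidate content positions.

    follows_viewpoint: per-axis flags (x=lateral, y=vertical, z=depth). An axis
    flagged True keeps following the viewpoint; one flagged False stays with the
    anchor-region boundary, as in FIG.5 where only the lateral axis is pinned to
    boundary 502 while the vertical and depth axes continue to track viewpoint 206.
    """
    mask = np.array(follows_viewpoint, dtype=bool)
    return np.where(mask,
                    np.asarray(viewpoint_anchored_pos, dtype=float),
                    np.asarray(boundary_anchored_pos, dtype=float))
```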
  • the less evident manner of presenting augmented content may involve presenting the augmented content less visibly. Presenting the augmented content less visibly may be instead of, or in addition to, unanchoring the augmented content relative to the viewpoint in the ways described above. For instance, augmented content may be presented less visibly in any of the ways that will now be described in reference to FIGS.6-8.
  • FIGS.6-8 show various illustrative presentations of augmented content in the less evident manner for viewpoint 206-3, which is directed outside of anchor region 208 in the general direction of objects 210-2 through 210-4 (and not in the direction of object 210-1).
  • viewpoint 206-3 may represent a view associated with a task that is not directly related to the body represented by object 210-1 and in which, as a result, it may not be helpful or desirable for user 202 to be presented with augmented content representative of the internal view of the body in a manner as evident as when viewpoint 206 is directed within anchor region 208.
  • objects 210-2 through 210-4 may represent other people with whom user 202 may need to converse, a selection of instrumentation that user 202 is tasked with selecting or manipulating, a physical screen that provides different types of information to user 202 than is presented within the augmented content, or the like.
  • FIGS.6-8 show various illustrative ways that augmented content may be presented in a less evident manner than, for example, the evident manner presented in FIG.3 for viewpoint 206-1.
  • FIG.6 shows that system 100 may direct display device 204 to present augmented content 302 and 304 in the less evident manner by ceasing the presenting of the augmented content altogether (an example of presenting the augmented content less visibly).
  • system 100 may direct display device 204 to cease presenting augmented content 302 or 304 (or any other augmented content) while viewpoint 206 is directed outside of anchor region 208.
  • FIG.7 shows that system 100 may direct display device 204 to display augmented content 302 and 304 in the less evident manner by increasing a transparency or a translucency of augmented content 302 and 304 (another example of presenting the augmented content less visibly).
  • system 100 may direct display device 204 to adjust the augmented content to be more transparent or translucent when viewpoint 206 is directed outside of anchor region 208 than when viewpoint 206 is directed within anchor region 208.
  • the increased transparency or translucency may decrease an opacity of the augmented content.
  • Augmented content 302 and 304 in FIG.3 are completely opaque and block viewing of physical world 200 behind them, while augmented content 702 and 704 in FIG.7 are partially transparent or translucent and allow viewing of objects within the physical world 200 behind them.
  • FIG.8 shows that system 100 may direct display device 204 to display augmented content 302 and 304 in the less evident manner by decreasing a size of augmented content 302 and 304 as presented by display device 204 (yet another example of presenting the augmented content less visibly).
  • system 100 may direct display device 204 to cause the augmented content to be smaller in size when viewpoint 206 is directed outside of anchor region 208 than when viewpoint 206 is directed within anchor region 208.
  • Augmented content 302 and 304 in FIG.3 are relatively larger, and thus more prominent, while augmented content 802 and 804 in FIG.8 are relatively smaller, and thus less prominent and more inconspicuous, by comparison.
  • system 100 may direct display device 204 to present augmented content 302 and 304 in the less evident manner by using a combination of the principles illustrated in FIGS.4-8, or in other manners that make augmented content 302 and 304 less evident.
  • augmented content may be made less visible by decreasing a brightness property of the augmented content, changing an anchoring of the augmented content (e.g., by moving augmented content to be anchored at a less prominent portion of the field of view, etc.), simplifying or reducing the augmented content (e.g. displaying reduced-color images, outlines instead of full images, abbreviated text or shortened messages, simpler graphics, symbols instead of text, etc.), or the like.
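The different ways of presenting content less visibly described above (ceasing presentation, increasing transparency, shrinking, dimming, simplifying) all amount to adjusting a few rendering properties. The sketch below groups them behind one helper; the property and mode names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RenderProps:
    visible: bool = True
    opacity: float = 1.0    # 1.0 fully opaque, 0.0 fully transparent
    scale: float = 1.0      # 1.0 full size
    brightness: float = 1.0

def make_less_evident(props: RenderProps, mode: str = "fade") -> RenderProps:
    """Return adjusted properties for the less evident manner of presentation."""
    if mode == "hide":
        return RenderProps(visible=False)
    if mode == "fade":
        return RenderProps(opacity=0.35, scale=props.scale, brightness=props.brightness)
    if mode == "shrink":
        return RenderProps(opacity=props.opacity, scale=0.5, brightness=props.brightness)
    if mode == "dim":
        return RenderProps(opacity=props.opacity, scale=props.scale, brightness=0.5)
    raise ValueError(f"unknown mode: {mode!r}")
```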
  • System 100 may transition between presenting augmented content in the evident manner and the less evident manner in an abrupt manner, or in a more gradual manner. Examples of more gradual manners include a slower fading from a more visible display of the augmented content to a less visible display of the augmented content.
  • the system may cause the transition between presenting in the evident and the less evident manners based on an elapsed time since an event has occurred (e.g., since viewpoint 206 has exited or entered anchor region 208), and the change in the presentation of the augmented content can be a complete change between evident and less evident, that occurs immediately after the event has occurred, that occurs after a predetermined period of time has passed after the event has occurred, that occurs more gradually based on the passage of time after the event has occurred, etc.
  • alternatively, the change in the presentation of the augmented content can be a complete change between evident and less evident that occurs as soon as the viewpoint direction, location, velocity direction, velocity magnitude, acceleration direction, acceleration magnitude, or the like satisfies an applicable condition.
  • the presenting by system 100 of the augmented content in the less evident manner may include presenting the augmented content less visibly than the evident manner by varying a visual property of the augmented content based on one or more parameters.
  • Such parameters may include, for example, 1) a distance between a boundary of the anchor region and a location outside the anchor region at which the viewpoint is directed, 2) an elapsed time since the determination that the viewpoint is directed outside of the anchor region, 3) a speed at which the viewpoint is moving with respect to the boundary of the anchor region, or any other parameter as may serve a particular implementation.
  • the presentation of the augmented content may become increasingly less evident as the parameter(s) change until, in certain implementations, one or more applicable thresholds are met.
  • the presentation may become increasingly less evident (e.g., the transparency or translucency may gradually increase, the size may gradually decrease, etc.) until a certain distance or speed has been reached, a certain time has elapsed, a certain transparency or size has been achieved, or the like.
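  • The parameter-driven fading described in the preceding items can be pictured with a short sketch. The following Python fragment is only an illustrative sketch and not part of the disclosed system; the function name opacity_for_viewpoint, the parameter names, and the threshold values are hypothetical assumptions.

```python
def opacity_for_viewpoint(distance_outside, elapsed_time, speed,
                          max_distance=0.5, max_time=2.0, max_speed=1.0):
    """Return an opacity value in [0, 1] for the augmented content.

    The content stays fully opaque while the viewpoint is directed within the
    anchor region (distance_outside == 0) and fades as the distance past the
    boundary, the elapsed time since exit, or the viewpoint speed grows,
    until the corresponding threshold is met.
    """
    # Normalize each parameter against its threshold and clamp to [0, 1].
    d = min(distance_outside / max_distance, 1.0)
    t = min(elapsed_time / max_time, 1.0)
    s = min(speed / max_speed, 1.0)
    # Let the strongest of the three drivers determine how "less evident"
    # the presentation becomes; 1.0 means the fade is complete.
    fade = max(d, t, s)
    min_opacity = 0.15  # keep the content faintly visible in this sketch
    return 1.0 - fade * (1.0 - min_opacity)
```

  • The same scale factor could equally drive a size reduction or another visual property instead of opacity, per the alternatives listed above.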
  • the transition between the evident and less evident manners of presentation may occur as viewpoint 206 approaches an anchor region boundary such as boundary 502 (i.e., such that the transition may be underway or complete by the time the boundary is crossed).
  • the transition between the evident and less evident manners may occur after viewpoint 206 has crossed the boundary.
  • system 100 may, in certain implementations, define a transition region near the boundary of an anchor region, and the transition region may be used to facilitate various types of smooth transitions between the evident and less evident manners of augmented content presentation, including smooth transitions between a viewpoint anchor mode and a physical-world anchor mode used for anchoring augmented content.
  • augmented content may be positioned, relative to viewpoint 206, based on a weighted linear combination of the respective positions, poses, motion coordinates, and/or other such characteristics of the viewpoint and an anchor region boundary.
  • when viewpoint 206 enters the transition region, the augmented content is anchored entirely to the viewpoint (e.g., using a viewpoint anchor mode), while, by the time viewpoint 206 exits the transition region, the augmented content is anchored entirely to the boundary of the anchor region (e.g., using a physical-world anchor mode).
  • This type of transitioning may be implemented with respect to one or more spatial directions, and may have the visual effect of slowing down the augmented content or gradually decoupling it from the viewpoint as the content passes through the transition region.
  • system 100 may scale the speed of motion of the augmented content relative to the anchor region boundary as the augmented content moves through the transition region, bringing the augmented content to rest (i.e., the relative speed reaching zero) when viewpoint 206 fully exits the transition region.
  • a virtual spring force may be simulated to tether the augmented content to the anchor region boundary as viewpoint 206 passes through the transition region, or another suitable transition may be employed.
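  • A minimal sketch of the weighted-combination transition described above follows, assuming the two candidate placements (viewpoint-anchored and boundary-anchored) have already been computed; the function blend_anchor_position and its progress parameter are hypothetical illustrations rather than the disclosed implementation.

```python
import numpy as np

def blend_anchor_position(viewpoint_anchored_pos, world_anchored_pos, progress):
    """Blend between viewpoint-anchored and world-anchored placement of the
    augmented content as the viewpoint passes through a transition region.

    progress is 0.0 when the viewpoint enters the transition region (content
    still fully follows the viewpoint) and 1.0 when the viewpoint exits it
    (content fully at rest relative to the anchor region boundary).
    """
    w = float(np.clip(progress, 0.0, 1.0))
    # Weighted linear combination of the two candidate positions.
    return ((1.0 - w) * np.asarray(viewpoint_anchored_pos, dtype=float)
            + w * np.asarray(world_anchored_pos, dtype=float))
```

  • Scaling the relative speed of the content or simulating a spring force, as mentioned above, would replace this linear weighting with a velocity- or force-based update, but the overall structure is similar.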
  • system 100 may determine that viewpoint 206 is again directed within anchor region 208. In response to this determination that viewpoint 206 is again directed within anchor region 208, system 100 may direct display device 204 to present the augmented content in the evident manner or another evident manner. In certain examples, system 100 may return to presenting in the evident manner instantaneously, while, in other examples, the presenting may occur gradually using a transition period or transition region. Such a transition period or region may be similar but inverse to those described above.
  • system 100 may direct display device 204 to present augmented content in either the evident or less evident manner based on various criteria other than the direction of viewpoint 206 with respect to anchor region 208.
  • system 100 may determine that an event occurring or an action performed by user 202 while viewpoint 206 is directed outside anchor region 208 is associated with displaying the augmented content in the evident manner; in response to this determination, system 100 may direct display device 204 to display the augmented content in the evident manner instead of the less evident manner associated with viewpoint 206 being directed outside of anchor region 208. Further, system 100 may determine that an event occurring or an action performed by user 202 while viewpoint 206 is directed within anchor region 208 is associated with displaying the augmented content in the less evident manner; in response to this determination, system 100 may direct display device 204 to display the augmented content in the less evident manner instead of the evident manner associated with viewpoint 206 being directed within anchor region 208.
  • Implementations may have no definition of events or actions that modify the display of the augmented content in the evident or less evident manner. Various implementations that do have such definitions differ in what actions or events cause such modification. Examples of events and actions that may cause the augmented content to be presented in a manner mismatching those described above (i.e., that cause the augmented content to be presented in an evident manner when viewpoint 206 is directed outside the anchor region, or to be presented in a less evident manner when viewpoint 206 is directed within the anchor region) include the following: a fault of equipment used in the procedure taking place in physical world 200, an emergency event in physical world 200, an unexpected movement or action by a system used in the medical procedure, an unexpected event or action, a command issued by user 202 or some other personnel involved in the procedure, a beginning or continuing performance of the procedure or a particular stage of the procedure, manipulating a body or a tool in a work site for the procedure, manipulation of a tool within the body, current usage of a particular tool, or achievement of a particular event or milestone
  • the events or actions may include removing the tool from the robotic manipulator arm, attaching the tool to the manipulator arm, manipulating the tool, powering the tool up, powering the tool down, connecting the tool to a control system or power supply, or another such tool-related action.
  • the events or actions may involve commands issued by user 202, such as a voice, gestural, or input device command associated with the display of the augmented content.
  • Additional examples of events and actions that may cause system 100 to present the augmented content in a manner that mismatches the direction of viewpoint 206 include: a beginning or continuance of a certain phase or stage of the medical procedure (e.g., in an example involving a computer-assisted medical system having a robotic manipulator arm, a positioning of a manipulator assembly or a mounting of the manipulator arm, a docking of the manipulator arm, a physical or electrical coupling of an instrument with the manipulator arm, an electrical coupling of an instrument with the computer-assisted medical system, etc.), a change in a state of the system, an external manipulation (such as by user 202) of the manipulator arm, a placing of the manipulator arm into a clutched mode, an installation or re-installation of an instrument, a connection of cables or accessories to the system, a collision or impending collision between instruments or manipulator arms, or a collision or impending collision of a robotic arm or instrument with a person (e.g., user 202, a patient, or other personnel). A sketch of such override logic follows.
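  • The fragment below is a rough, assumption-laden sketch of how such overrides could be combined with the default viewpoint-based behavior; the event names (equipment_fault, emergency, dismiss_content) and the function choose_presentation are hypothetical and not the system's actual logic.

```python
def choose_presentation(viewpoint_in_anchor_region, events):
    """Select the presentation manner, letting detected events or user
    actions override the default viewpoint-based behavior."""
    # Default: evident inside the anchor region, less evident outside of it.
    manner = "evident" if viewpoint_in_anchor_region else "less_evident"
    # Safety-relevant events surface the content even outside the region.
    if {"equipment_fault", "emergency"} & set(events):
        manner = "evident"
    # An explicit dismissal keeps the content unobtrusive even inside it.
    if "dismiss_content" in events:
        manner = "less_evident"
    return manner
```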
  • system 100 may determine that user 202 moves viewpoint 206 at a rate greater than a threshold rate, and, in response to this determination, direct display device 204 to present the augmented content in the less evident manner even if viewpoint 206 is directed within anchor region 208.
  • viewpoint 206 may quickly pass through part or all of anchor region 208.
  • the direction of viewpoint 206 may be incidental and/or temporary, and user 202 may not intend to view anchor region 208, instead intending to look past anchor region 208 or at something else outside of anchor region 208.
  • system 100 may determine that such viewing is likely incidental. Thus, in some instances, system 100 may direct display device 204 to begin or continue presenting the augmented content in the less evident manner while a speed of movement of display device 204, viewpoint 206, or user 202 is determined to be higher than a threshold speed. This may be the case even if viewpoint 206 is directed within anchor region 208, as in the sketch below.
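  • A minimal sketch of this speed gating, assuming the viewpoint speed is already tracked; the function name and the threshold value are hypothetical.

```python
def presentation_with_speed_gate(viewpoint_in_anchor_region, viewpoint_speed,
                                 speed_threshold=0.8):
    """Present the content less evidently while the viewpoint is moving fast,
    even if it momentarily sweeps through the anchor region."""
    if viewpoint_speed > speed_threshold:
        # Fast motion suggests the user is looking past the region, not at it.
        return "less_evident"
    return "evident" if viewpoint_in_anchor_region else "less_evident"
```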
  • system 100 may also present augmented content based at least in part on manual indications by user 202 or other personnel that directly or indirectly indicate how augmented content is desired to be presented by display device 204.
  • user 202 or other personnel may indicate his or her preference regarding the presentation of augmented content in the evident or less evident manner through any appropriate input technique, and system 100 may present the augmented content in accordance with the input of user 202 or other personnel.
  • Example input techniques include button presses, voice commands, gestural input, and so forth.
  • anchor region 208 may be automatically defined by system 100, or partially or fully user defined (e.g., by user 202 or other personnel) using any type of anchor region definition technique as may serve a particular implementation.
  • a user may define the anchor region using movement of an eye gaze, the head, a hand, or some other body part or tool controlled by the user, to indicate region corners, edges, or entire boundaries.
  • the user may define some (but not necessarily all) of the boundaries of an anchor region using assistance from system 100, and system 100 may define other boundaries of the anchor region automatically.
  • system 100 may receive user inputs such as those described above to define certain boundaries, or to identify regions in the physical space that should not be obscured by the augmented view, and then system 100 may compute an optimal anchor region based on this and other information.
  • a user may select a preset anchor region from a library of preset anchor regions provided by system 100.
  • boundary segments of an anchor region may be defined automatically by system 100 based on operating conditions (e.g., a lower boundary may be defined by system 100 as a surface based on a site of a medical procedure or a body receiving the medical procedure) while other boundary portions of the anchor region may be user-defined (e.g., an upper boundary and one or more lateral boundaries may be surface portions defined based on user input).
  • Example operating conditions that may be used to aid system 100 in defining an anchor region include: the type of medical procedure that is being performed, the amount and/or location of space left free around equipment, and the like.
  • system 100 may automatically redefine the anchor region during the medical procedure based on the current operational stage of the medical procedure. For example, in response to the medical procedure transitioning from a first operational stage to a second operational stage, system 100 may automatically redefine the anchor region from a first region associated with the first operational stage to a second region associated with the second operational stage; the first region and the second region may be the same, or be distinct from each other, depending on various considerations including the operational stages.
  • FIG.2 depicts an anchor region 208 consisting of a single, box-like shape.
  • Anchor regions may comprise any number of symmetric or asymmetric, or convex or concave portions.
  • the anchor region is defined based on the volume of an object in the physical world.
  • an anchor region defined to surround a patient on an operating table may be defined by offsetting the outer surfaces of the patient and operating table outwards, and then connecting them.
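  • As one way to picture such an offset-based definition, the sketch below pads the combined bounding box of sampled object surfaces outward. A true surface offset would be more involved, so this is a simplified assumption, and the names offset_bounding_region and offset are hypothetical.

```python
import numpy as np

def offset_bounding_region(surface_point_sets, offset=0.2):
    """Approximate an anchor region by padding the combined bounding box of
    one or more physical objects (e.g., patient and table surfaces) outward.

    surface_point_sets: iterable of (N, 3) arrays of sampled surface points.
    Returns (min_corner, max_corner) of the padded, box-shaped region.
    """
    points = np.vstack([np.asarray(p, dtype=float) for p in surface_point_sets])
    return points.min(axis=0) - offset, points.max(axis=0) + offset
```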
  • FIGS.9A-9D show illustrative aspects of anchor regions that may be defined within a physical world (e.g., physical world 200) in various implementations. It will be understood that the anchor regions of FIGS.9A-9D represent anchor regions in three-dimensional space, although FIGS.9A-9D depict a cross section of each respective anchor region for illustrative convenience. Each of FIGS.9A-9D will now be described in more detail.
  • FIG.9A illustrates an anchor region 902 that is associated with a medical area at which a medical procedure is being performed on a body 904.
  • Body 904, as illustrated, is resting on an operating table 906.
  • the medical procedure is shown to be performed using an instrument 908 that enters body 904 at an entry location 910 on body 904 to operate internally within body 904.
  • entry location 910 may be an incision into which is inserted a cannula (not shown).
  • anchor region 902 may be defined based on the site for the medical procedure.
  • anchor region 902 may be defined based on entry location 910.
  • anchor region 902 may be automatically defined as the region within a particular distance 912 (i.e., a radius) from entry location 910. Accordingly, based on entry location 910 and distance 912 (e.g., both of which may either be provided by a user or determined automatically using default values or in other ways described herein), system 100 may efficiently and automatically define anchor region 902 to incorporate a suitable area around where the medical procedure is being performed.
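  • A short sketch of such a spherical definition and a corresponding viewpoint test follows; the ray-sphere proximity test and the names used are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def viewpoint_hits_spherical_region(view_origin, view_direction,
                                    entry_location, radius):
    """Return True if the viewpoint ray is directed within a spherical anchor
    region centered on the procedure entry location (cf. entry location 910
    and distance 912)."""
    o = np.asarray(view_origin, dtype=float)
    d = np.asarray(view_direction, dtype=float)
    d = d / np.linalg.norm(d)
    c = np.asarray(entry_location, dtype=float)
    # Closest point on the viewing ray to the sphere center.
    t = max(float(np.dot(c - o, d)), 0.0)
    closest = o + t * d
    return float(np.linalg.norm(c - closest)) <= radius
```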
  • the system (e.g., system 100) may also define an anchor region based on kinematic information of a computer-assisted system used in the procedure.
  • each manipulator arm may have a remote center of motion about which the manipulator arm pivots during operation.
  • the real-time location of a remote center of a manipulator arm may be determined in any appropriate way, including based on the geometric parameters and real-time kinematic information of the manipulator arm, along with registration information that registers the robotic system to the physical world, or the display device (e.g. display device 204), or another reference.
  • the remote center(s) of motion are generally collocated with entry locations (e.g. entry location 910).
  • the system bases the anchor region definition on at least the location(s) of remote center(s) of motion.
  • another use of kinematic information is to position an anchor region relative to the working direction of the procedure (e.g., based on a physical configuration of one or more manipulator arms or tools supported by the manipulator arm(s), the pointing direction or location of visual focus of an image capture device such as an endoscope, etc.).
  • the image capture device direction may be used to define an initial position of the anchor region, and that initial position may be modified or confirmed by the user through user input.
  • a system (e.g., system 100) may generate an anchor region (e.g., anchor region 902) based on other or additional factors.
  • Example factors include: where a user (e.g., user 202) is located, an anticipated shape of the procedure space (e.g., a space around operating table 906), one or more devices around operating table 906, an identification of space left open near operating table 906, a user-preferred region of interest, a machine-learning model based on procedure type, user preferences, other data related to the procedure, and any other suitable factor or combination thereof.
  • FIG.9B illustrates an anchor region 914 (the shaded area) that is defined as a first region of the physical world (i.e., a box-shaped region) that fully encloses a second region of the physical world (i.e., a spherical non-anchor region 916 in FIG.9B).
  • the non-anchor region 916 is not part of anchor region 914 (and is thus not shaded).
  • the anchor region may comprise a region containing any number and shape of internal non-anchor regions.
  • the anchor region 914 may operate in a same or similar manner as described above for anchor region 208 (aside from the shape differences). In other implementations, anchor region 914 may operate in a different manner than anchor region 208.
  • the non-anchor region 916 may be a non-occludable region.
  • a non-occludable region is a region of the physical world that is not to be occluded by certain augmented content, or not to be occluded by any augmented content.
  • regions that user 202 may want to assign as non-occludable regions include: regions containing a physical monitor or a person (e.g., an assistant or nurse for a medical procedure), regions of special import for the procedure (e.g., a region immediately around an entry location such as entry location 910 in FIG.9A), or any other suitable regions where user 202 wants to retain visibility.
  • the user defines the non-occludable regions.
  • the system semi-automatically defines non-occludable regions, such as by identifying, and presenting for selection or confirmation, potential non-occludable regions based on operating conditions or other factors.
  • the system automatically defines the non-occludable regions based on operating conditions or other factors.
  • the system may direct display device 204 not to display the augmented content in the non-occludable region.
  • system 100 may direct display device 204 to temporarily unanchor the augmented content from viewpoint 206 to avoid occluding the non-occludable region with the augmented content.
  • system 100 may cause the augmented content to be anchored just outside the non-occludable region, at the location where the augmented content would have entered the non-occludable region based on the direction of viewpoint 206, whenever the augmented content would otherwise fall within the non-occludable region (see the sketch below).
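  • The sketch below illustrates one such clamping strategy for a spherical non-occludable region; projecting the content radially back to the boundary is a simplifying assumption, and keep_outside_non_occludable is a hypothetical name.

```python
import numpy as np

def keep_outside_non_occludable(content_pos, region_center, region_radius):
    """If the viewpoint-anchored content position would fall inside a
    spherical non-occludable region, push it back to the region boundary;
    otherwise leave it unchanged."""
    p = np.asarray(content_pos, dtype=float)
    c = np.asarray(region_center, dtype=float)
    offset = p - c
    dist = float(np.linalg.norm(offset))
    if dist >= region_radius or dist == 0.0:
        return p
    # Project the content outward onto the non-occludable region's boundary.
    return c + offset / dist * region_radius
```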
  • while the non-occludable region implemented by non-anchor region 916 is shown to be enclosed by anchor region 914, it will be understood that certain implementations of non-occludable regions may be defined otherwise.
  • a non-occludable region may be defined adjacent to an anchor region rather than enclosed by the anchor region, in an area remote from any anchor regions, etc.
  • FIG.9C illustrates an anchor region 918 that is shown at two different moments in time. At a first moment in time, anchor region 918 is labeled as anchor region 918-1, and at a second moment in time, anchor region 918 is labeled as anchor region 918-2. In FIG.9C, anchor region 918 is shown to be referenced to an object 920 (illustrated in this example as a tip of an instrument) in the physical world.
  • anchor region 918 is shown to be a dynamic anchor region that moves, together with object 920, in the physical world. As shown, when object 920 translates and reorients from a first position to a second position at the two different points in time (and as indicated by the arrow), anchor region 918 is redefined from being anchor region 918-1 to being anchor region 918-2. It will be understood that any object in the physical world may be used to provide a reference for an anchor region such as anchor region 918. Examples of such objects include part or all of: a manipulator arm, a body for a procedure, a tool, an operating table, a piece of equipment, a person, etc.
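  • A dynamic anchor region of this kind can be pictured as a region defined in the tracked object's local frame and re-expressed in world coordinates each time the object moves. The sketch below assumes the object's pose is available as a rotation matrix and translation vector; transform_anchor_region is a hypothetical name.

```python
import numpy as np

def transform_anchor_region(region_points_local, object_rotation, object_translation):
    """Re-express an anchor region (sampled here as points in the tracked
    object's local frame) in world coordinates as the object moves, so the
    region follows the object (cf. anchor region 918 and object 920)."""
    R = np.asarray(object_rotation, dtype=float)     # 3x3 rotation matrix
    t = np.asarray(object_translation, dtype=float)  # 3-vector translation
    points = np.asarray(region_points_local, dtype=float)
    return points @ R.T + t
```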
  • FIG.9D illustrates an anchor region 922 that includes a plurality of physically separate sub-regions labeled 922-1 through 922-3 in the physical world.
  • each sub-region of anchor region 922 may be treated similarly to the anchor regions described above (e.g., anchor region 208).
  • as with anchor region 208, the various principles described above for other anchor regions in this disclosure may also apply to the sub-regions of a multi-part anchor region such as anchor region 922.
  • a system (e.g., system 100) may determine that a viewpoint of a user is directed within one sub-region of anchor region 922 and, in response to this determination, direct a display device to present augmented content in an evident manner; if the viewpoint is not directed within any sub-region of anchor region 922, the system may direct the display device to present the augmented content in a less evident manner.
  • different sub-regions in a multi-part anchor region such as anchor region 922 may be of different shapes, and also be spatially separated (see, e.g., sub-regions 922-1 and 922-3) or spatially connected to one another (see, e.g., sub-regions 922-1 and 922-2).
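  • A minimal containment test for such a multi-part anchor region, assuming box-shaped sub-regions for simplicity; the representation and the function name are illustrative assumptions.

```python
def viewpoint_in_multipart_region(viewpoint_point, sub_regions):
    """Return True if the viewpoint is directed within any sub-region of a
    multi-part anchor region (cf. sub-regions 922-1 through 922-3).

    Each sub-region is an axis-aligned box given as (min_corner, max_corner);
    differently shaped sub-regions would use their own containment tests."""
    for lo, hi in sub_regions:
        if all(l <= v <= h for v, l, h in zip(viewpoint_point, lo, hi)):
            return True
    return False
```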
  • one or more anchor regions may be shared between multiple users who are located in different places in the physical world. For example, each user may set up his/her own anchor region(s) and then may share some or all of these anchor region(s) as appropriate, such as sharing an anchor region when assisted by others on the task for which the anchor region is set up.
  • system 100 may detect that a first user of a first display device shares an anchor region with a second user of a second display device. System 100 may determine that a viewpoint of the second user is directed within the anchor region of the first user and direct, in response to that determination, the second display device to present augmented content to the second user in the evident manner.
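  • The sharing behavior could be layered on top of the same containment test, as in the hedged sketch below; the data layout (the user's own regions plus a list of regions shared by others) and the function evident_for_user are assumptions made for illustration.

```python
def evident_for_user(viewpoint_point, own_regions, shared_regions):
    """Decide whether to present augmented content in the evident manner for
    a user, considering both the user's own anchor regions and anchor regions
    shared with that user by other users.

    Regions are axis-aligned boxes (min_corner, max_corner) in this sketch."""
    def inside(box):
        lo, hi = box
        return all(l <= v <= h for v, l, h in zip(viewpoint_point, lo, hi))

    return any(inside(box) for box in list(own_regions) + list(shared_regions))
```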
  • FIG. 10 shows an illustrative computer-assisted medical system 1000 (“medical system 1000”) that may be used to perform various types of medical procedures including surgical and/or non-surgical procedures.
  • medical system 1000 may include a manipulator assembly 1002 (a manipulator cart is shown in FIG.10), a user control system 1004, and an auxiliary system 1006, all of which are communicatively coupled to each other.
  • Medical system 1000 may be utilized by a medical team to perform a computer-assisted medical procedure or other similar operation on a body of a patient 1008 or on any other body as may serve a particular implementation.
  • the medical team may include a first user 1010-1 (such as a surgeon for a surgical procedure), a second user 1010-2 (such as a patient-side assistant), a third user 1010-3 (such as another assistant, a nurse, a trainee, etc.), and a fourth user 1010-4 (such as an anesthesiologist for a surgical procedure), all of whom may be collectively referred to as “users 1010,” and each of whom may control, interact with, or otherwise be a user of medical system 1000 and/or an implementation of system 100.
  • while FIG.10 illustrates an ongoing minimally invasive medical procedure such as a minimally invasive surgical procedure, medical system 1000 may similarly be used to perform open medical procedures or other types of operations. For example, operations such as exploratory imaging operations, mock medical procedures used for training purposes, and/or other operations may also be performed.
  • manipulator assembly 1002 may include one or more manipulator arms 1012 (FIG.10 shows manipulator assembly 1002 as including a plurality of robotic manipulator arms 1012 (e.g., arms 1012-1 through 1012-4)) to which one or more instruments may be coupled (FIG.10 shows a plurality of instruments).
  • the instruments may be used for a computer-assisted medical procedure on patient 1008 (e.g., in a surgical example, by being at least partially inserted into patient 1008 and manipulated within patient 1008).
  • while manipulator assembly 1002 is depicted and described herein as including four manipulator arms 1012, it will be recognized that manipulator assembly 1002 may include a single manipulator arm 1012 or any other number of manipulator arms as may serve a particular implementation. Additionally, it will be understood that, in some examples, one or more instruments may be partially or entirely manually controlled, such as by being handheld and controlled manually by a person. For instance, these partially or entirely manually controlled instruments may be used in conjunction with, or as an alternative to, computer-assisted instrumentation that is coupled to manipulator arms 1012 shown in FIG.10.
  • During the medical operation, user control system 1004 may be configured to facilitate teleoperational control by user 1010-1 of manipulator arms 1012 and instruments attached to manipulator arms 1012.
  • user control system 1004 may provide user 1010-1 with imagery of an operational area associated with patient 1008 as captured by an imaging device.
  • user control system 1004 may include a set of master controls. These master controls may be manipulated by user 1010-1 to control movement of the manipulator arms 1012 or any instruments coupled to manipulator arms 1012.
  • Auxiliary system 1006 may include one or more computing devices configured to perform auxiliary functions in support of the medical procedure, such as providing insufflation, electrocautery energy, illumination or other energy for imaging devices, image processing, or coordinating components of medical system 1000.
  • auxiliary system 1006 may be configured with a display monitor 1014 configured to display one or more user interfaces, or graphical or textual information in support of the medical procedure.
  • display monitor 1014 may be implemented by a touchscreen display and provide user input functionality.
  • Augmented content provided by a region-based augmentation system may be similar to, or differ from, content associated with display monitor 1014 or one or more display devices in the operation area (not shown).
  • system 100 may be implemented within or may operate in conjunction with medical system 1000. For instance, in certain implementations, system 100 may be implemented entirely by one or more display devices associated with individual users 1010.
  • Manipulator assembly 1002, user control system 1004, and auxiliary system 1006 may be communicatively coupled one to another in any suitable manner.
  • manipulator assembly 1002, user control system 1004, and auxiliary system 1006 may be communicatively coupled by way of control lines 1016, which may represent any wired or wireless communication link as may serve a particular implementation.
  • manipulator assembly 1002, user control system 1004, and auxiliary system 1006 may each include one or more wired or wireless communication interfaces, such as one or more local area network interfaces, Wi-Fi network interfaces, cellular interfaces, and so forth.
  • FIG.11 shows an illustrative method 1100 for region-based presentation of augmented content.
  • while FIG.11 shows illustrative operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG.11.
  • One or more of the operations shown in FIG.11 may be performed by a region-based augmentation system such as system 100, any components included therein, and/or any implementation thereof.
  • a region-based augmentation system may determine that a viewpoint of a user of a display device is directed within an anchor region of a physical world containing the user. Operation 1102 may be performed in any of the ways described herein.
  • the region-based augmentation system may direct the display device to present augmented content in an evident manner.
  • the region-based augmentation system may direct the display device to present the augmented content in the evident manner in response to the determination in operation 1102 that the viewpoint is directed within the anchor region.
  • the evident manner in which the augmented content is presented may include anchoring the augmented content relative to the viewpoint such that the augmented content follows the viewpoint as the user moves the viewpoint within the anchor region.
  • Operation 1104 may be performed in any of the ways described herein.
  • the region-based augmentation system may determine that the viewpoint is directed outside of the anchor region by the user. Operation 1106 may be performed in any of the ways described herein.
  • the region-based augmentation system may direct the display device to present the augmented content in a less evident manner than displayed in operation 1104.
  • the region-based augmentation system may present the augmented content in the less evident manner in response to the determination in operation 1106 that the viewpoint is directed outside of the anchor region.
  • the presenting of the augmented content in the less evident manner may include presenting the augmented content less visibly than the evident manner of operation 1104 and/or unanchoring the augmented content relative to the viewpoint.
  • Operation 1108 may be performed in any of the ways described herein.
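  • Taken together, operations 1102-1108 can be summarized as a per-update decision such as the sketch below; the display object and its present method are hypothetical stand-ins for whatever rendering interface a given implementation exposes.

```python
def update_presentation(viewpoint_in_anchor_region, display):
    """One update cycle mirroring operations 1102-1108 of method 1100."""
    if viewpoint_in_anchor_region:
        # Operations 1102 and 1104: the viewpoint is determined to be directed
        # within the anchor region, so the augmented content is presented in
        # the evident manner, anchored so that it follows the viewpoint.
        display.present(manner="evident", anchor="viewpoint")
    else:
        # Operations 1106 and 1108: the viewpoint is determined to be directed
        # outside the anchor region, so the augmented content is presented
        # less visibly and/or unanchored from the viewpoint.
        display.present(manner="less_evident", anchor="world")
```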
  • a non-transitory computer-readable medium storing computer-readable instructions may be provided in accordance with the principles described herein.
  • a non-transitory computer-readable medium as referred to herein may include any non-transitory storage medium that participates in providing data (e.g., instructions) that may be read and/or executed by a computing device (e.g., by a processor of a computing device).
  • a non-transitory computer-readable medium may include, but is not limited to, any combination of non-volatile storage media and/or volatile storage media.
  • examples of non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard disk, a floppy disk, magnetic tape, etc.), ferroelectric random-access memory (“RAM”), and an optical disc (e.g., a compact disc, a digital video disc, a Blu-ray disc, etc.).
  • examples of volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM).
  • memory 102 of system 100 may be implemented by a storage device of the computing device, and processor 104 of system 100 may be implemented by one or more processors of the computing device.
  • the systems and/or other components described herein may be implemented by any suitable non-transitory computer-readable medium storing instructions that, when executed, direct a processor of such a computing device to perform methods and operations described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Architecture (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A region-based augmentation system is configured to determine that a viewpoint of a user of a display device is directed within an anchor region of a physical world containing the user. In response, the system directs the display device to present augmented content in an evident manner. The evident manner includes anchoring the augmented content relative to the viewpoint such that the augmented content follows the viewpoint as the user moves the viewpoint within the anchor region. The system further determines that the viewpoint is directed outside of the anchor region by the user. In response, the system directs the display device to present the augmented content in a less evident manner. Presenting the augmented content in the less evident manner includes presenting the augmented content less visibly than the evident manner and/or not anchoring the augmented content relative to the viewpoint. Associated methods and systems are also disclosed.
PCT/US2021/031568 2020-05-11 2021-05-10 Systèmes et procédés de présentation basée sur une région d'un contenu augmenté WO2021231293A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180030969.1A CN115461700A (zh) 2020-05-11 2021-05-10 用于基于区域的呈现增强内容的系统和方法
US17/919,927 US20230186574A1 (en) 2020-05-11 2021-05-10 Systems and methods for region-based presentation of augmented content

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063023012P 2020-05-11 2020-05-11
US63/023,012 2020-05-11

Publications (1)

Publication Number Publication Date
WO2021231293A1 true WO2021231293A1 (fr) 2021-11-18

Family

ID=76197635

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/031568 WO2021231293A1 (fr) 2020-05-11 2021-05-10 Systèmes et procédés de présentation basée sur une région d'un contenu augmenté

Country Status (3)

Country Link
US (1) US20230186574A1 (fr)
CN (1) CN115461700A (fr)
WO (1) WO2021231293A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024012650A1 (fr) * 2022-07-11 2024-01-18 Brainlab Ag Dispositif d'augmentation par recouvrement

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240028777A1 (en) * 2022-07-22 2024-01-25 Bank Of America Corporation Device for audiovisual conferencing having multi-directional destructive interference technology and visual privacy features

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160049013A1 (en) * 2014-08-18 2016-02-18 Martin Tosas Bautista Systems and Methods for Managing Augmented Reality Overlay Pollution
US20170103583A1 (en) * 2013-05-13 2017-04-13 Microsoft Technology Licensing, Llc Interactions of virtual objects with surfaces
US20190018498A1 (en) * 2017-07-12 2019-01-17 Unity IPR ApS Methods and systems for displaying ui elements in mixed reality environments
US20200085511A1 (en) * 2017-05-05 2020-03-19 Scopis Gmbh Surgical Navigation System And Method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170103583A1 (en) * 2013-05-13 2017-04-13 Microsoft Technology Licensing, Llc Interactions of virtual objects with surfaces
US20160049013A1 (en) * 2014-08-18 2016-02-18 Martin Tosas Bautista Systems and Methods for Managing Augmented Reality Overlay Pollution
US20200085511A1 (en) * 2017-05-05 2020-03-19 Scopis Gmbh Surgical Navigation System And Method
US20190018498A1 (en) * 2017-07-12 2019-01-17 Unity IPR ApS Methods and systems for displaying ui elements in mixed reality environments

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024012650A1 (fr) * 2022-07-11 2024-01-18 Brainlab Ag Dispositif d'augmentation par recouvrement

Also Published As

Publication number Publication date
US20230186574A1 (en) 2023-06-15
CN115461700A (zh) 2022-12-09

Similar Documents

Publication Publication Date Title
CN110800033B (zh) 虚拟现实腹腔镜式工具
CN109791801B (zh) 机器人外科手术系统中的虚拟现实训练、模拟和协作
Sielhorst et al. Advanced medical displays: A literature review of augmented reality
US11819273B2 (en) Augmented and extended reality glasses for use in surgery visualization and telesurgery
ES2736966T3 (es) Manejo -sin contacto táctil- de dispositivos usando sensores de profundidad
US11069146B2 (en) Augmented reality for collaborative interventions
CN110169822A (zh) 用于与机器人外科手术系统一起使用的增强现实导航系统及其使用方法
US20220387128A1 (en) Surgical virtual reality user interface
Gallo et al. A user interface for VR-ready 3D medical imaging by off-the-shelf input devices
CN105078580B (zh) 手术机器人系统及其腹腔镜操作方法以及体感式手术用图像处理装置及其方法
US20230186574A1 (en) Systems and methods for region-based presentation of augmented content
JP2022513013A (ja) 複合現実のための仮想オブジェクトの体系的配置
EP3497600B1 (fr) Système de visualisation médicale interactif distribué ayant des caractéristiques d'interface utilisateur
EP3870021B1 (fr) Systèmes de réalité mixte et procédés pour indiquer une étendue d'un champ de vue d'un dispositif d'imagerie
EP3907585B1 (fr) Systèmes et procédés de commande d'un écran de salle d'opération à l'aide d'un casque de réalité augmentée
US20220215539A1 (en) Composite medical imaging systems and methods
Gsaxner et al. Augmented reality in oral and maxillofacial surgery
US20220117662A1 (en) Systems and methods for facilitating insertion of a surgical instrument into a surgical space
Zinchenko et al. Virtual reality control of a robotic camera holder for minimally invasive surgery
LIU et al. A preliminary study of kinect-based real-time hand gesture interaction systems for touchless visualizations of hepatic structures in surgery
Danciu et al. A survey of augmented reality in health care
EP3871193B1 (fr) Systèmes de réalité mixte et procédés pour indiquer une étendue d'un champ de vision d'un dispositif d'imagerie
Drouin et al. Interaction in augmented reality image-guided surgery
Salb et al. Risk reduction in craniofacial surgery using computer-based modeling and intraoperative immersion
US20230139425A1 (en) Systems and methods for optimizing configurations of a computer-assisted surgical system for reachability of target objects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21729159

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21729159

Country of ref document: EP

Kind code of ref document: A1