CN116528790A - Techniques for adjusting display units of viewing systems - Google Patents


Info

Publication number: CN116528790A
Application number: CN202180079997.2A
Authority: CN (China)
Language: Chinese (zh)
Prior art keywords: input, linear, hand, magnitude, display unit
Legal status: Pending
Inventors: E•努希贝赞贾尼, L•N•维尔纳
Original/Current Assignee: Intuitive Surgical Operations Inc
Application filed by Intuitive Surgical Operations Inc
Priority claimed from PCT/US2021/061234 (published as WO2022115795A1)


Abstract

Techniques for adjusting a display unit (206) of a viewing system include a repositionable structure (204) configured to support the display unit, first and second hand input sensors (240) configured to receive input from an operator, and a control unit. The display unit is configured to display an image viewable by the operator. The control unit is configured to receive a first input from the first hand input sensor, receive a second input from the second hand input sensor, and, in response to a set of criteria being met, determine a commanded motion based on the first input and the second input and command the actuator to move the repositionable structure based on the commanded motion. The set of criteria includes a first magnitude of the first input and a second magnitude of the second input being greater than a threshold.

Description

Techniques for adjusting display units of viewing systems
Cross Reference to Related Applications
The present application claims the benefit of U.S. Provisional Patent Application No. 63/174,754, entitled "Techniques for Adjusting a Display Unit of a Viewing System," filed April 14, 2021, and claims the benefit of U.S. Provisional Patent Application No. 63/119,603, entitled "Techniques for Adjusting a Display Unit of a Viewing System," filed November 30, 2020, each of which is incorporated herein by reference.
Technical Field
The present disclosure relates generally to electronic devices and, more particularly, to techniques for adjusting a display unit of a viewing system.
Background
More and more devices are being replaced with computer-assisted electronic devices. This is especially true in industrial, recreational, educational, and other settings. As a medical example, today's hospitals have large numbers of electronic devices in operating rooms, interventional suites, intensive care units, emergency rooms, and/or the like. Many of these electronic devices may be capable of autonomous or semi-autonomous motion. It is also common for personnel to control the motion and/or operation of electronic devices using one or more input devices located at a user control system. As a specific example, a minimally invasive robotic telesurgical system allows a surgeon to operate on a patient from a bedside or remote location. Telesurgery refers generally to surgery performed using a surgical system in which the surgeon uses some form of remote control (e.g., a servomechanism) to manipulate movement of surgical instruments, rather than holding and moving the instruments directly by hand.
When the electronic device is used to perform a task at a work site, one or more imaging devices (e.g., an endoscope, an optical camera, and/or an ultrasound probe) may capture images of the work site that provide visual feedback to an operator who is monitoring and/or performing the task. The imaging device(s) may be controllable to update the view of the work site provided to the operator via the display unit. The display unit may be a monoscopic or stereoscopic display device with lenses or viewing screens. To use the display unit, the operator positions his or her eyes so as to see an image displayed on a lens or a viewing screen of the display unit.
Since each operator may have a different size or prefer a different pose when using the display unit of a viewing system, the operator may from time to time make ergonomic adjustments to the positioning and orientation of the display unit. However, existing ergonomic adjustment techniques may disrupt the workflow, be cumbersome or unintuitive to use, or be prone to accidental activation.
Accordingly, there is a need for improved techniques for adjusting the display unit of a viewing system.
Disclosure of Invention
Consistent with some embodiments, a computer-assisted device includes: a repositionable structure configured to support a display unit, the repositionable structure including an actuator configured to move the repositionable structure, the display unit configured to display an image viewable by an operator; a first hand input sensor and a second hand input sensor configured to receive input from an operator; and a control unit communicatively coupled to the repositionable structure, the first hand input sensor, and the second hand input sensor, wherein the control unit is configured to: receiving a first input from a first hand input sensor, receiving a second input from a second hand input sensor, and in response to a set of criteria being met: a commanded motion is determined based on the first input and the second input, and the actuator is commanded to move the repositionable structure based on the commanded motion, the set of criteria including a first magnitude of the first input and a second magnitude of the second input being greater than a first threshold.
Consistent with other embodiments, a method includes receiving a first input from a first hand input sensor configured to receive input from an operator, receiving a second input from a second hand input sensor configured to receive input from the operator, and in response to a set of criteria being met: a commanded motion is determined based on the first input and the second input, and the actuator is commanded to move the repositionable structure based on the commanded motion, the set of criteria including a first magnitude of the first input and a second magnitude of the second input being greater than a first threshold, the repositionable structure configured to support a display unit configured to display an image viewable by an operator.
Other embodiments include, but are not limited to, one or more non-transitory machine-readable media comprising a plurality of machine-readable instructions that, when executed by one or more processors, are adapted to cause the one or more processors to perform any of the methods disclosed herein.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory in nature and are intended to provide an understanding of the disclosure, without limiting the scope of the disclosure. In this regard, additional aspects, features and advantages of the present disclosure will be apparent to those skilled in the art from the following detailed description.
Drawings
FIG. 1 is a simplified diagram of an exemplary teleoperated system, according to various embodiments.
FIG. 2 is a perspective view of an exemplary display system according to various embodiments.
FIG. 3 is a side view of an exemplary display system according to various embodiments.
Fig. 4 illustrates an approach for combining linear hand inputs sensed by hand input sensors during adjustment of a display unit, in accordance with various embodiments.
Fig. 5 illustrates an approach for combining linear hand inputs sensed by hand input sensors during adjustment of a display unit, in accordance with various other embodiments.
FIG. 6 illustrates an example in which the tip of the sum of the linear hand inputs extends beyond the acceptance area, in accordance with various embodiments.
Fig. 7 illustrates another example in which the tip of the sum of the linear hand inputs extends beyond the acceptance area, in accordance with various embodiments.
Fig. 8 illustrates an approach for combining rotational inputs sensed by hand input sensors during adjustment of a display unit, in accordance with various embodiments.
Fig. 9 illustrates an approach for combining input sensed by a head input sensor with input sensed by hand input sensors during adjustment of a display unit, in accordance with various embodiments.
Fig. 10 illustrates a simplified diagram of a method for adjusting a display unit of a viewing system, in accordance with various embodiments.
Fig. 11 illustrates one process of the method of fig. 10 in more detail, in accordance with various embodiments.
Fig. 12 illustrates the same process of the method of fig. 10 in more detail, in accordance with various other embodiments.
Fig. 13 illustrates the same process of the method of fig. 10 in more detail, in accordance with various other embodiments.
Fig. 14 illustrates another process of the method of fig. 10 in more detail, in accordance with various embodiments.
Fig. 15 illustrates another process of the method of fig. 10 in more detail, in accordance with various embodiments.
Detailed Description
The present specification and drawings, which illustrate aspects, embodiments, or applications of the invention, should not be considered limiting; the claims define the protected invention. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail in order not to obscure the invention. The same numbers in two or more drawings may identify the same or similar elements.
In this specification, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art, that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are intended to be illustrative, not limiting. Those skilled in the art may implement other elements within the scope and spirit of the present disclosure although not specifically described herein. Furthermore, to avoid unnecessary repetition, one or more features shown and described in connection with one embodiment may be incorporated into other embodiments unless specifically described otherwise, or if one or more features would render the embodiments inoperative.
Furthermore, the terms of the present specification are not intended to limit the present invention. For example, spatially relative terms (e.g., "below," "lower," "upper," "proximal," "distal," and/or the like) may be used to describe one element or feature's relationship to another element or feature as illustrated. These spatially relative terms are intended to encompass different positions (i.e., locations) and orientations (i.e., rotational placement) of the element or its operation in addition to the position and orientation depicted in the figures. For example, if the contents of one of the figures are turned over, elements described as "below" or "beneath" other elements or features would then be "above" or "over" the other elements or features. Thus, the exemplary term "below" may encompass both an upper and lower position and orientation. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Likewise, descriptions of movement along and about various axes include various particular element positions and orientations. Furthermore, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," "including," and/or the like, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups. Components described as coupled may be directly electrically or mechanically coupled, or they may be indirectly coupled via one or more intermediate components.
Elements described in detail with reference to one embodiment, implementation, or application may, whenever practical, be included in other embodiments, implementations, or applications in which they are not specifically shown or described. For example, if an element is described in detail with reference to one embodiment and is not described with reference to a second embodiment, the element may nonetheless be claimed as being included in the second embodiment. Thus, to avoid unnecessary repetition in the following description, one or more elements shown and described in association with one embodiment, implementation, or application may be incorporated into other embodiments, implementations, or aspects unless specifically described otherwise, unless the one or more elements would render an embodiment or implementation non-functional, or unless two or more of the elements provide conflicting functions.
In some instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
The present disclosure describes states of various devices, elements, and portions of computer-aided devices and elements in three-dimensional space. As used herein, the term "positioning" refers to the position of an element or a portion of an element in three dimensions (e.g., three translational degrees of freedom along Cartesian x, y, and z coordinates). As used herein, the term "orientation" refers to the rotational placement (three degrees of rotational freedom, such as roll, pitch, and yaw) of an element or a portion of an element. As used herein, the term "shape" refers to a set of positions or orientations measured along an element. As used herein, and for a device having a repositionable arm, the term "proximal" refers to a direction along its kinematic chain toward the base of the computer-assisted device, and "distal" refers to a direction along the kinematic chain away from the base.
Aspects of the present disclosure are described with reference to computer-assisted systems and devices, which may include teleoperated, remote-controlled, autonomous, semi-autonomous, robotic, and/or similar systems and devices. Further, aspects of the present disclosure are described in terms of an embodiment using a surgical system, such as the da Vinci® Surgical System commercialized by Intuitive Surgical Operations, Inc. of Sunnyvale, California. However, those skilled in the art will appreciate that the inventive aspects disclosed herein may be embodied and practiced in a variety of ways, including robotic and, if applicable, non-robotic embodiments. Embodiments described with reference to the da Vinci® Surgical System are merely exemplary and should not be considered as limiting the scope of the inventive aspects disclosed herein. For example, the techniques described with reference to surgical instruments and surgical methods may be used in other contexts. Accordingly, the instruments, systems, and methods described herein may be used with humans, animals, portions of the human or animal anatomy, industrial systems, general-purpose robots, or teleoperational systems. As further examples, the instruments, systems, and methods described herein may be used for non-medical purposes, including industrial purposes, general-purpose robotic purposes, sensing or manipulating non-tissue workpieces, cosmetic improvements, imaging of human or animal anatomy, collecting data from human or animal anatomy, building or dismantling systems, training medical or non-medical personnel, and/or the like. Additional example applications include performing procedures on tissue removed from human or animal anatomy (without return to the human or animal anatomy) and performing procedures on human or animal cadavers. Furthermore, these techniques may also be used in medical or diagnostic procedures that may or may not include surgical aspects.
Overview of the System
FIG. 1 is a simplified diagram of an exemplary teleoperated system 100, according to various embodiments. In some examples, the teleoperated system 100 may be a teleoperational medical system, such as a surgical system. As shown, the teleoperated system 100 includes a slave device 104. The slave device 104 is controlled by one or more director input devices, as will be described in more detail below. Systems comprising a director device and a slave device are sometimes also referred to as master-slave systems. Also shown in fig. 1 is an input system that includes a workstation 102 (e.g., a console); in various embodiments, the input system may take any suitable form and may or may not include a workstation.
In this example, the workstation 102 includes one or more director input devices 106 that are contacted and manipulated by an operator 108. For example, the workstation 102 may include one or more director input devices 106 for use by the hands of an operator 108. The director input device 106 in this example is supported by the workstation 102 and may be mechanically grounded. In some embodiments, an ergonomic support 110 (e.g., a forearm support) may be provided, and the operator 108 may rest his or her forearm on the ergonomic support 110. In some examples, the operator 108 may perform tasks at a work site near the slave device 104 during a procedure by commanding the slave device 104 using the director input device 106.
A display unit 112 is also included in the workstation 102. The display unit 112 may display images for viewing by the operator 108. The display unit 112 may be movable in various degrees of freedom to accommodate the viewing orientation of the operator 108 and/or optionally provide control functions as another director input device. In the example of the teleoperated system 100, the displayed images may depict the work site at which the operator 108 performs various tasks by manipulating the director input devices 106 and/or the display unit 112. In some examples, the images displayed by the display unit 112 may be received by the workstation 102 from one or more imaging devices disposed at the work site for capturing images. In other examples, the images displayed by the display unit 112 may be generated by the display unit 112 (or by another connected device or system), such as virtual representations of a tool, the work site, or a user interface component.
When using the workstation 102, the operator 108 may sit in a chair or on another support in front of the workstation 102, position his or her eyes in front of the display unit 112, manipulate the director input devices 106, and rest his or her forearms on the ergonomic support 110 as desired. In some embodiments, the operator 108 may stand at the workstation or assume other poses, and the display unit 112 and the director input devices 106 may be adjusted in position (height, depth, etc.) to accommodate the operator 108.
The teleoperated system 100 may also include the slave device 104, which may be commanded by the workstation 102. In a medical example, the slave device 104 may be located near an operating table (e.g., a table, bed, or other support) on which a patient may be positioned. In this case, the work site may be provided on or within the operating table, for example, on or within a patient, a simulated patient, a model, or the like (not shown). The illustrated teleoperated slave device 104 includes a plurality of manipulator arms 120, each manipulator arm configured to be coupled to an instrument assembly 122. The instrument assembly 122 may include, for example, an instrument 126 and an instrument carriage configured to hold the respective instrument 126.
In various embodiments, one or more of the instruments 126 may include an imaging device (e.g., an optical camera, a hyperspectral camera, an ultrasound sensor, etc.) for capturing images. For example, one or more of the instruments 126 may be an endoscopic assembly that includes an imaging device that may provide a captured image of a portion of the working site to be displayed via the display unit 112.
In some embodiments, the slave manipulator arms 120 and/or instrument assemblies 122 may be controlled to move and articulate the instruments 126 in response to manipulation of the director input devices 106 by the operator 108, so that the operator 108 may perform tasks at the work site. The manipulator arms 120 and instrument assemblies 122 are examples of repositionable structures on which instruments and/or imaging devices may be mounted. For the surgical example, the operator may direct the slave manipulator arms 120 to move the instruments 126 to perform a surgical procedure at an internal surgical site through a minimally invasive aperture or natural orifice.
As shown, the control system 140 is provided external to the workstation 102 and communicates with the workstation 102. In other embodiments, the control system 140 may be provided in the workstation 102 or in the slave device 104. As the operator 108 moves the director input device(s) 106, sensed spatial information, including sensed position and/or orientation information, is provided to the control system 140 based on the movement of the director input devices 106. The control system 140 may determine or provide control signals to the slave device 104 to control movement of the manipulator arms 120, instrument assemblies 122, and/or instruments 126 based on the received information and operator input. In one embodiment, the control system 140 supports one or more wired communication protocols (e.g., Ethernet, USB, and/or the like) and/or one or more wireless communication protocols (e.g., Bluetooth, IrDA, HomeRF, IEEE 802.11, DECT, wireless telemetry, and/or the like).
The control system 140 may be implemented on one or more computing systems. One or more computing systems may be used to control slave device 104. Further, one or more computing systems may be used to control movement of components of the workstation 102, such as the display unit 112.
As shown, the control system 140 includes a processor 150 and a memory 160 that stores a control module 170. In an embodiment, control system 140 may include one or more processors, non-persistent storage (e.g., volatile memory (e.g., random Access Memory (RAM)), cache memory), persistent storage (e.g., a hard disk, an optical drive (e.g., a Compact Disc (CD) drive or a Digital Versatile Disc (DVD) drive), flash memory, etc.), a communication interface (e.g., a bluetooth interface, an infrared interface, a network interface, an optical interface, etc.), and many other elements and functions. Furthermore, the functionality of the control module 170 may be implemented in any technically feasible software and/or hardware.
Each of the one or more processors of control system 140 may be an integrated circuit for processing instructions. For example, the one or more processors may be one or more cores or microcores of a processor, a Central Processing Unit (CPU), a microprocessor, a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Graphics Processing Unit (GPU), a Tensor Processing Unit (TPU), and/or the like. The control system 140 may also include one or more input devices, such as a touch screen, keyboard, mouse, microphone, touch pad, electronic pen, or any other type of input device.
The communication interface of the control system 140 may include an integrated circuit for connecting the computing system to a network (not shown) (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) (e.g., the internet), a mobile network, or any other type of network) and/or to another device, such as another computing system.
Further, the control system 140 may include one or more output devices, such as a display device (e.g., a Liquid Crystal Display (LCD), a plasma display, a touch screen, an organic LED display (OLED), a projector, or other display device), a printer, speakers, external storage, or any other output device. One or more of the output devices may be the same as or different from the input device(s). Many different types of computing systems exist, and the input device(s) and output device(s) described above may take other forms.
Software instructions in the form of computer readable program code to perform embodiments of the present disclosure may be stored, in whole or in part, temporarily or permanently on a non-transitory computer readable medium such as a CD, DVD, storage device, magnetic disk, magnetic tape, flash memory, physical memory, or any other computer readable storage medium. In particular, the software instructions may correspond to computer readable program code which, when executed by the processor(s), is configured to perform some embodiments of the invention.
Continuing with fig. 1, control system 140 may be connected to a network or may be part of a network. The network may include a plurality of nodes. The control system 140 may be implemented on one node or on a group of nodes. For example, the control system 140 may be implemented on a node of a distributed system that is connected to other nodes. As another example, the control system 140 may be implemented on a distributed computing system having a plurality of nodes, wherein different functions and/or components of the control system 140 may be located on different nodes within the distributed computing system. In addition, one or more elements of the control system 140 described above may be located at a remote location and connected to other elements via a network.
In some embodiments, one or more of the director input devices may be ungrounded (ungrounded director input devices are not kinematically grounded, such as a director input device held by the hand of the operator 108 without additional physical support). Such an ungrounded director input device may be used in conjunction with the display unit 112. In some embodiments, the operator 108 may use a display unit 112 positioned near the working site such that the operator 108 may manually operate an instrument at the working site, such as a laparoscopic instrument in the surgical example, while viewing the image displayed by the display unit 112.
Some embodiments may include one or more components of a teleoperational medical system, such as the da Vinci® Surgical System commercialized by Intuitive Surgical Operations, Inc. of Sunnyvale, California, USA. Embodiments described with reference to the da Vinci® Surgical System are merely examples and should not be considered as limiting the scope of the features disclosed herein. For example, different types of teleoperational systems having slave devices at a work site, as well as non-teleoperational systems, may utilize the features described herein.
Fig. 2 is a perspective view of an example display system 200, according to various embodiments. Fig. 3 is a side view of the example display system 200, according to various embodiments. In some embodiments, the display system 200 is used in a workstation of a teleoperated system (e.g., in the workstation 102 of the teleoperated system 100 of fig. 1), or the display system 200 may be used in other systems or as a stand-alone system, e.g., to allow an operator to view a work site or other physical site, a displayed virtual environment, etc. Although figs. 2-3 illustrate a particular configuration, other embodiments may use different configurations.
As shown in fig. 2-3, the display system 200 includes a base support 202, an arm support 204, and a display unit 206. The display unit 206 is provided with a plurality of degrees of freedom of movement provided by a support link comprising a base support 202, an arm support 204 coupled to the base support 202, and a tilt member 224 (described below) coupled to the arm support 204, wherein the display unit 206 is coupled to the tilt member 224.
The base support 202 may be a vertical member that is mechanically grounded (e.g., directly or indirectly coupled to the ground, such as by resting or being attached to the floor). For example, the base support 202 may be mechanically coupled to a wheeled support structure 210, the wheeled support structure 210 being coupled to the ground. The base support 202 includes a first base portion 212 and a second base portion 214, the first base portion 212 and the second base portion 214 being coupled such that the second base portion 214 is translatable in a linear degree of freedom relative to the first base portion 212.
The arm support 204 may be a horizontal member mechanically coupled to the base support 202. The arm support 204 includes a first arm portion 218 and a second arm portion 220. The second arm portion 220 is coupled to the first arm portion 218 such that the second arm portion 220 is linearly translatable relative to the first arm portion 218 in a first linear degree of freedom (DOF).
The display unit 206 may be mechanically coupled to the arm support 204. The display unit 206 may be movable in a second linear DOF provided by linear translation of the second base portion 214 and the second arm portion 220.
In some embodiments, the display unit 206 includes a display device, such as one or more display screens, projectors, or other display devices, that can display digital images. The display unit 206 may comprise two view ports 223, wherein the display device is provided behind or comprised in the view ports. In some embodiments, one or more display screens or other display devices may be positioned on the display unit 206 in place of the viewport 223.
In some embodiments, the display unit 206 displays an image of the working site (e.g., the internal anatomy of a patient in a medical example) captured by an imaging device (e.g., an endoscope). Alternatively, the working site may be a virtual representation of the working site. The images may show a captured image or virtual rendering of the instruments 126 of the slave device 104, with one or more of these instruments 126 being controlled by an operator via the director input devices (e.g., the director input device 106 and/or the display unit 206) of the workstation 102.
In some embodiments, the display unit 206 is rotatably coupled to the arm support 204 by a tilt member 224. In the example shown, the tilting member 224 is coupled at a first end to the second arm portion 220 of the arm support 204 by a rotational coupling configured to provide rotational movement of the tilting member 224 and the display unit 206 relative to the second arm portion 220 about the tilt axis 226. In some embodiments, tilt axis 226 is tilt axis 226d above the display device in display unit 206.
Each of the various degrees of freedom discussed herein may be passive and require manual manipulation to move, or may be moved by one or more actuators (e.g., by one or more motors, solenoids, etc.). For example, rotational movement of the tilting member 224 and the display unit 206 about the tilting axis 226 may be driven by one or more actuators (e.g., by a motor coupled to the tilting member at or near the tilting axis 226).
The display unit 206 may be rotationally coupled to the tilt member 224 and may rotate about a yaw axis 230. For example, the rotation may be lateral or left-right from the viewpoint of an operator viewing images of the display unit 206 via the viewports 223. In this example, the display unit 206 is coupled to the tilt member by a rotation mechanism, which may be a track mechanism. For example, in some embodiments, the track mechanism includes a curved track 228 that slidably engages a groove member 229 coupled to the tilt member 224, allowing the display unit 206 to rotate about the yaw axis 230 by moving the curved track 228 through a groove of the groove member 229.
Accordingly, the display system 200 may provide the display unit 206 with a vertical linear degree of freedom 216, a horizontal linear degree of freedom 222, and a rotational (tilt) degree of freedom 227. A combination of coordinated movements of the components of the display system 200 in these degrees of freedom allows the display unit 206 to be positioned at various positions and orientations based on the operator's preferences. Movement of the display unit 206 in the tilt, horizontal, and vertical degrees of freedom allows the display unit 206 to remain close to, or in contact with, the operator's head, such as when the display system 200 is in a steerable viewer mode in which the operator provides head input through head movement. In the steerable viewer mode, the operator may move his or her head to provide input to control the display unit 206 to follow the movement of the head, and the movement of the head may further control the position and/or orientation of one or more imaging devices capturing the images displayed via the display unit 206. Although some embodiments are described herein as including a steerable viewer mode, other embodiments may not include a steerable viewer mode. In embodiments with or without a steerable viewer mode, devices other than the display unit 206 (e.g., the director input devices 106 manipulated by an operator's hands) may be used to control the position and/or orientation of the one or more imaging devices capturing the images displayed via the display unit 206.
The degrees of freedom of the display system allow the display system 200 to provide pivotal movement of the display unit 206 in physical space about a pivot axis that may be positioned in different locations. For example, the display system 200 may provide for movement of the display unit 206 in physical space that corresponds to movement of the operator's head when operating the display system 200. Such movement may include rotation about a defined neck pivot axis that generally corresponds to a neck axis of the operator's head at the operator's neck. This rotation allows the display unit 206 to move according to the head of the operator guiding the movement of the display unit 206. In another example, the movement may include rotation about a defined forehead pivot axis that generally corresponds to a forehead axis that extends through the operator's head at the forehead when the display unit 206 is oriented in a central yaw rotational position about the yaw axis 230 as shown.
The display unit 206 may include one or more input devices that allow an operator to provide input to manipulate the orientation and/or position of the display unit 206 in space, and/or to manipulate other functions or components of the display system 200 and/or a larger system (e.g., a teleoperated system).
Illustratively, the display unit 206 includes a head input sensor 242. In some embodiments, the head input sensor 242 is positioned on a surface of the display unit 206 that faces the head of the operator during operation of the display unit 206.
The head input sensor 242 may include or be coupled to a headrest configured to contact the head of the operator while the operator is providing head input. More specifically, the head input sensor 242 may sense inputs applied to the headrest or display unit 206 in an area above the viewport 223. In some embodiments, the head input sensor is located in the region and the region is configured to contact the forehead of the operator while the operator is viewing the image through the viewport 223. The display unit 206 may include one or more head input sensors (e.g., head input sensor 242) that sense operator head input as commands to cause movement of the imaging device, or otherwise cause updating of views in images presented to the operator (e.g., by graphics rendering, digital zoom or pan, etc.). Further, in some embodiments and some examples of operations, the sensed head movement is used to move the display unit 206 to compensate for the head movement. Thus, even when the operator performs head movement to control the view provided by the imaging device, the positioning of the operator's head may remain stationary relative to the viewport 223. Proper alignment of the operator's eyes with the viewport can be maintained.
In some embodiments, sensing operator head input includes sensing the presence of a portion or the entire head (e.g., forehead) of the operator or contact with the head input sensor 242. More generally, in embodiments, the head input may be sensed by the head input sensor(s) in any technically feasible manner, and the operator's head may or may not contact the head input sensor(s). For example, in some embodiments, the head input sensor may be below the surface of the display unit 206. In this case, the operator's head may not contact the head input sensor, but the force may instead be transmitted through/to the head input sensor. The head input sensor 242 may include any of a variety of types of sensors, such as resistive sensors, capacitive sensors, force sensors, optical sensors, and the like.
Continuing with FIG. 2, in the steerable viewer mode, the orientation and/or position of the display unit 206 may be changed by the display system 200 based on operator head input to the head input sensor 242. For example, the sensed operator input is provided to a control system (e.g., control system 140) that controls actuators of the display system 200 to move the second base portion 214 in the linear degree of freedom 216, to move the second arm portion 220 in the linear degree of freedom 222, to move the tilt member 224 in the rotational degree of freedom 227, and/or to move the display unit 206 in the rotational degree of freedom 231, so as to cause the display unit 206 to move in accordance with the sensed operator head input. The sensed operator head input may also be used to control other functions of the display system 200 and/or a larger system (e.g., the teleoperated system 100 of fig. 1). As described above, in some embodiments, the sensed operator head input may be used to control one or more imaging devices that capture images of the work site displayed via the display unit 206. Thus, in some embodiments, in the steerable viewer mode, the operator may move his or her head to provide input to control the display unit 206 to be moved by the display system 200 in accordance with the movement of the head. This allows the display unit 206 to follow the movement of the operator's head and changes in viewing angle.
Regardless of whether a steerable viewer mode is supported, some embodiments provide an ergonomic adjustment mode (described in more detail below) in which an operator may make ergonomic adjustments to the position and/or orientation of a display unit (e.g., display unit 206). Thus, a system with an ergonomic adjustment mode may or may not support a steerable viewer mode. In the ergonomic adjustment mode of some embodiments, input from both hands of the operator is sensed and used to determine a command to actuate the repositionable structure on which the display unit 206 is mounted. The inputs from the operator's hands may further be used together with input from the operator's head to determine a command for actuating the repositionable structure on which the display unit 206 is mounted. In some embodiments, as described above, head input may be sensed by the head input sensor 242. Further, hand input may be sensed by any technically feasible hand input sensor capable of sensing force, moment, linear or angular displacement, velocity, or acceleration, or another physical parameter. In some embodiments, some physical parameters may be used as indicators of other physical parameters. For example, a simple moving system may be modeled with frictionless Newtonian mechanics, and acceleration may be sensed as an indicator of force, related by the mass. As another example, some systems may be modeled with Hooke's-law springs, and linear or angular displacement may be sensed as an indicator of force or torque, related by the spring constant.
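As a concrete illustration of using one physical parameter as an indicator of another, the following minimal Python sketch converts a sensed displacement or acceleration into an equivalent force under the spring and mass models just described. The constants, function names, and vector representation are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

# Illustrative constants; real values would come from calibration of the sensor hardware.
SPRING_CONSTANT_N_PER_M = 500.0   # Hooke's-law model of a compliant hand input sensor
EFFECTIVE_MASS_KG = 2.5           # Newtonian model of a low-friction moving element

def force_from_displacement(displacement_m: np.ndarray) -> np.ndarray:
    """Treat a sensed linear displacement as an indicator of the applied force (F = k * x)."""
    return SPRING_CONSTANT_N_PER_M * displacement_m

def force_from_acceleration(acceleration_m_s2: np.ndarray) -> np.ndarray:
    """Treat a sensed linear acceleration as an indicator of the applied force (F = m * a)."""
    return EFFECTIVE_MASS_KG * acceleration_m_s2

# Example: a 4 mm deflection of the spring-like sensor reads as a 2 N force in the y-direction.
print(force_from_displacement(np.array([0.0, 0.004, 0.0])))
```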
Illustratively, the hand input sensors 240a-b are disposed on either side of the display unit 206. Although shown as knobs, the hand input sensors may be of any suitable shape and/or type, including a convex or concave polyhedron, a joystick, a touch-sensitive panel, a recessed or otherwise concave feature, or the like. In some examples, the hand input sensors 240a-b may include strain gauges, inductive sensors, linear position sensors, capacitive sensors, resistive sensors, accelerometers, and the like. Although described herein primarily with respect to hand input sensors that sense inputs applied by both hands of an operator, in other embodiments the hand input sensors may be configured to sense inputs applied by the same hand of the operator.
In some embodiments, the display unit 206 is limited in degrees of freedom by a repositionable arm coupled to the display unit 206. In some embodiments, the repositionable arm is configured to allow the display unit 206 to move in a plane corresponding to the vertical linear degrees of freedom 216 and the horizontal linear degrees of freedom 222 and rotate about the tilt axis 226. As shown, hand input or linear displacement, linear velocity, or linear acceleration may also be sensed in the y-z plane, which is defined by y-axis 252 and z-axis 250. Further, torque or angular displacement, angular velocity or angular acceleration may be sensed about a pitch axis 254, the pitch axis 254 passing through the operator rotatable hand input sensors 240a-b. In some embodiments, the hand input sensors 240a-b are disposed to the left and right of where the operator's eyes will be during use such that the pitch axis 254 between the hand input sensors 240a-b is generally aligned with the operator's eyes during use. For example, the hand input sensors 240a-b may be disposed on the left and right sides of the viewport 223 or the display of the display unit 206. Similarly, the head input sensor 242 may be positioned proximate to where the operator's eyes will be during use, and in such a way that the pitch axis 254 is generally aligned with the operator's eyes during use. In some embodiments, rotation of the display unit 206 about a pitch axis that is generally aligned with the eyes of the operator is not accompanied by significant linear movement of the display unit 206, which may be more comfortable for the operator.
It is understood that fig. 2 shows only an example configuration of the display system. Alternative configurations that support movement of the display unit 206 based on input from an operator are also possible. Any linkage that supports the display unit 206 and provides it with a degree of freedom and range of motion suitable for the application may be used in place of the configuration shown in fig. 2. Additional examples of movable display systems are described in U.S. Provisional Patent Application No. 62/890,844, entitled "Moveable Display System," filed August 23, 2019, and International Patent Application Publication No. WO 2021/0410149, entitled "Moveable Display System," filed August 21, 2020, both of which are incorporated herein by reference.
Although described herein primarily with respect to display unit 206 being part of a grounded mechanical structure (e.g., display system 200), in other embodiments, the display unit may be any technically feasible display device or devices. For example, the display unit may be a handheld device, such as a tablet device or a mobile phone. As another example, the display unit may be a head-mounted device (e.g., glasses, goggles, helmets). In all these cases, one or more accelerometers, gyroscopes, inertial measurement units, cameras, and/or other sensors, either internal or external to the display unit, may be used to determine the location and/or orientation of the display unit.
Display element adjustment in viewing system
Regardless of whether the display unit (e.g., display unit 206) supports a steerable viewer mode, the display unit may be adjusted based on operator preferences. From time to time, operators may want to make ergonomic adjustments to the position and/or orientation of the display unit.
The ergonomic adjustment mode may be entered in any technically feasible manner. In some embodiments, the ergonomic adjustment mode is entered in response to a hand input sensed by a hand input sensor (e.g., hand input sensors 240a-b) meeting particular criteria, for example, when a force and/or torque meeting certain criteria is detected. In some embodiments, the operator may make ergonomic adjustments to the position and/or orientation of the display unit 206 in the ergonomic adjustment mode and then switch to the steerable viewer mode. The operator may re-enter the ergonomic adjustment mode from time to time to make further ergonomic adjustments to the position and/or orientation of the display unit 206.
FIG. 4 illustrates an approach for combining linear hand inputs sensed by the hand input sensors 240a-b during adjustment of the display unit 206, in accordance with various embodiments. As shown, after the ergonomic adjustment mode is entered, the hand input sensors 240a-b have sensed two example linear hand inputs 402 and 404. In this example, the linear hand inputs 402 and 404 are shown as linear inputs having a magnitude and a direction, and are represented by vectors.
In some examples, the linear hand inputs 402 and 404 may be inputs applied by the operator's hands (e.g., by one or more fingers, a palm, a wrist, and/or another portion of a hand). The linear hand inputs 402 and 404 may be sensed by the hand input sensors 240a-b described above in connection with figs. 2-3. In various embodiments, the linear hand inputs 402 and 404 may include a linear force, a linear displacement, a linear position, a linear velocity, and/or a linear acceleration. The hand input sensors 240a-b may detect these parameters directly, or may detect one or more indicators from which the parameters can be derived (e.g., detect velocity over time and integrate it to provide a displacement or position). In some embodiments, the linear hand inputs 402 and 404 may be derived using techniques that aggregate, filter, or average sensor signals spatially (e.g., from multiple sensing elements) or temporally.
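Where a sensor reports an indicator such as velocity rather than the parameter of interest, the derivation can be as simple as numerical integration. A minimal sketch, assuming velocity samples at a fixed period, is shown below; the function name and sample period are illustrative assumptions.

```python
import numpy as np

def displacement_from_velocity(velocity_samples: np.ndarray,
                               sample_period_s: float = 0.001) -> np.ndarray:
    """Integrate sensed velocity samples over time (trapezoidal rule) to derive a displacement.

    velocity_samples has shape (num_samples, num_axes), e.g., columns for y and z.
    """
    return 0.5 * sample_period_s * (velocity_samples[:-1] + velocity_samples[1:]).sum(axis=0)
```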
Illustratively, in FIG. 4, the linear hand inputs 402 and 404 lie in one plane (in this case, labeled the y-z plane). As described above, in some embodiments, the display unit 206 is constrained by the repositionable structure to move in such a y-z plane. In other embodiments, the display unit 206 may be allowed to move in any number of degrees of freedom, up to and including six DOFs. Further, linear hand inputs may be sensed in fewer or more DOFs than those in which the display unit 206 is allowed to move. In this case, one or more components of the sensed linear hand inputs may not be used in determining the command to move the display unit 206 or the repositionable structure. For example, the one or more components may be ignored or discarded when determining the command. For instance, in embodiments where the linear hand inputs are sensed as having a component in the x-direction and the display unit 206 is constrained to move in the y-z plane, the sensed x-component of the linear hand inputs is not used to determine the commanded motion of the display unit or repositionable structure.
To help increase the likelihood that movement of the display unit 206 in response to activation of the hand input sensors is desired by the operator, a control module (e.g., control module 170) performs a check to determine whether a set of criteria is met and commands movement of the display unit 206 in response to the set of criteria being met. The set of criteria may include a single criterion or multiple criteria. Various embodiments may use a set of criteria including one, two, or more of the example criteria described below.
In some embodiments, the criteria in the set of criteria include each of the linear hand inputs 402 and 404 having a magnitude that is greater than a threshold magnitude. In this case, when either of the linear hand inputs 402 or 404 has a magnitude smaller than the threshold magnitude, the display unit 206 is not commanded to move. Thus, activation of a hand input sensor involving a hand input with a magnitude below the threshold magnitude (e.g., accidental activation) will not cause movement of the display unit 206. In some embodiments, the criteria include the combination of the linear hand inputs 402 and 404 (e.g., the nonlinear sum described below) having a magnitude that is greater than the threshold magnitude. In this case, when the combination of the linear hand inputs 402 and 404 has a magnitude smaller than the threshold magnitude, the display unit 206 is not commanded to move.
In some embodiments, the criteria include that the directions of the linear hand inputs 402 and 404, or the directions of the components of the linear hand inputs 402 and 404 in a particular plane, differ by less than a threshold angle. The threshold angle may be a constant predetermined angle or a user-definable angle. For example, the threshold angle may be approximately 15 to 30 degrees. In some embodiments, when the display unit 206 is constrained to move in the y-z plane, the directions of the components of the linear hand inputs 402 and 404 in the y-z plane are required to differ by less than the threshold angle. Thus, activation of one of the hand input sensors 240a-b that does not correspond (within the allowed angular difference) to a simultaneous activation of the other of the hand input sensors 240a-b will not cause movement of the display unit 206.
In some embodiments, a criterion of the set of criteria is that the magnitudes of the linear hand inputs 402 and 404 differ by less than a maximum ratio. For example, in some embodiments, the maximum ratio may be approximately five. In some embodiments, the criteria of the set of criteria include the magnitudes of the linear hand inputs 402 and 404 being less than or equal to a maximum magnitude associated with a maximum speed at which the display unit 206 may move. In some examples, the control unit accepts a linear hand input whose magnitude is greater than the maximum magnitude but scales it down, e.g., to the maximum magnitude.
These example criteria may be adjusted based on the DOF(s) supported by the embodiment. For example, in some embodiments, the repositionable structure on which the display unit is mounted may provide only one DOF for the display unit (e.g., inward and outward with respect to the lens/viewing screen of the display unit 206). In this case, the threshold angle criterion may be adjusted to require that the directions of the linear hand inputs (or the directions of the components of the linear hand inputs along the one DOF) be the same, and not opposite to each other, in the one DOF.
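For illustration only, a minimal sketch of such a criteria check might look like the following Python function, which treats the two linear hand inputs as vectors. The function name, threshold values, and the particular combination of criteria are assumptions for the sketch, not requirements of this disclosure.

```python
import numpy as np

def criteria_met(input_a: np.ndarray,
                 input_b: np.ndarray,
                 min_magnitude: float = 1.0,    # threshold below which motion is not commanded
                 max_angle_deg: float = 30.0,   # maximum allowed difference in direction
                 max_ratio: float = 5.0) -> bool:
    """Return True when both hand inputs satisfy the example criteria described above."""
    mag_a, mag_b = np.linalg.norm(input_a), np.linalg.norm(input_b)

    # Both inputs must exceed the threshold magnitude (guards against accidental activation).
    if mag_a <= min_magnitude or mag_b <= min_magnitude:
        return False

    # The input directions must differ by less than the threshold angle.
    cos_angle = np.clip(np.dot(input_a, input_b) / (mag_a * mag_b), -1.0, 1.0)
    if np.degrees(np.arccos(cos_angle)) >= max_angle_deg:
        return False

    # The magnitudes must not differ by more than the maximum ratio.
    if max(mag_a, mag_b) / min(mag_a, mag_b) > max_ratio:
        return False

    return True
```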
In response to the set of criteria being met, the control module determines a composite linear input 410 using the linear hand inputs 402 and 404. In some embodiments, the control module combines the linear hand inputs 402 and 404 using a nonlinear sum to determine the composite linear input 410. As shown in this example, the magnitude of the linear hand input 404 is greater than the magnitude of the linear hand input 402. An example nonlinear sum performs the following operations: (1) Reducing the magnitude of the linear hand input 404 having a larger magnitude (e.g., the magnitude of the hand force) to the magnitude of the linear hand input 402 having a smaller magnitude (e.g., the magnitude of the hand force) while maintaining the same direction as the linear hand input 404, thereby generating a virtual linear hand input 408 (e.g., a virtual hand force), and (2) adding the virtual linear hand input 408 to the linear hand input 402 having a smaller magnitude to obtain a composite linear input 410. In this example of a nonlinear sum, the composite linear input 410 is in a direction that bisects the direction of the linear hand input 402 and the direction of the linear hand input 404.
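A minimal sketch of this example nonlinear sum, assuming the linear hand inputs are represented as NumPy vectors, might look like the following; the function name is illustrative.

```python
import numpy as np

def composite_from_nonlinear_sum(input_a: np.ndarray, input_b: np.ndarray) -> np.ndarray:
    """Shrink the larger-magnitude input to the smaller input's magnitude (keeping its
    direction), then add it to the smaller input, as in the example nonlinear sum."""
    mag_a, mag_b = np.linalg.norm(input_a), np.linalg.norm(input_b)
    if mag_a <= mag_b:
        smaller, smaller_mag, larger, larger_mag = input_a, mag_a, input_b, mag_b
    else:
        smaller, smaller_mag, larger, larger_mag = input_b, mag_b, input_a, mag_a
    # "Virtual" input: same direction as the larger input, magnitude of the smaller input.
    virtual = larger * (smaller_mag / larger_mag)
    # Because both addends now have equal magnitude, the result bisects their directions.
    return smaller + virtual
```

As a usage note, this function assumes the criteria check has already passed, so both magnitudes are nonzero.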
In some other embodiments, the control module uses other linear or nonlinear techniques to determine the composite linear input 410. Another example technique involves performing the following operations: (1) adding the linear hand inputs (e.g., hand forces) 402 and 404 to obtain an intermediate linear hand input (e.g., an intermediate force), and (2) scaling the magnitude of the intermediate linear hand input as a function of the angle between the directions of the linear hand inputs 402 and 404 to obtain the composite linear input 410. Any technically feasible function of the angle between the directions of the linear hand inputs 402 and 404 may be used. For example, the function may be a Gaussian function. As another example, the function may be constant over a range of angle values and taper in a shoulder region outside the range of angle values.
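A sketch of this alternative, assuming a Gaussian function of the angle with an illustrative width, might look like the following; the function name and sigma value are assumptions.

```python
import numpy as np

def composite_from_scaled_sum(input_a: np.ndarray,
                              input_b: np.ndarray,
                              sigma_deg: float = 20.0) -> np.ndarray:
    """Add the two inputs, then scale the sum by a Gaussian function of the angle
    between their directions (an alternative to the nonlinear sum above)."""
    intermediate = input_a + input_b
    cos_angle = np.clip(
        np.dot(input_a, input_b) / (np.linalg.norm(input_a) * np.linalg.norm(input_b)),
        -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    scale = np.exp(-0.5 * (angle_deg / sigma_deg) ** 2)  # 1.0 for aligned inputs, tapering off
    return scale * intermediate
```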
The control module may calculate a commanded motion for moving the display unit 206 based on the compound linear input 410. For example, the control module may determine a commanded position, commanded velocity, commanded acceleration, or other motion-related parameter based on the compound linear input 410 and use such motion-related parameter to calculate the commanded motion. As a specific example, the commanded motion may include a linear velocity proportional to the compound linear input 410.
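As a simple illustration of the last example, a proportional mapping from the composite linear input to a commanded linear velocity might look like the following; the gain value is an arbitrary placeholder, not a value from this disclosure.

```python
# Illustrative gain mapping the composite linear input (e.g., in newtons) to a
# commanded linear velocity (e.g., in metres per second) of the repositionable structure.
VELOCITY_GAIN = 0.002

def commanded_linear_velocity(composite_input):
    """Commanded motion expressed as a linear velocity proportional to the composite input."""
    return VELOCITY_GAIN * composite_input
```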
Fig. 5 illustrates another approach for combining linear hand inputs sensed by hand input sensors 240a-b during adjustment of display unit 206, in accordance with various embodiments. As shown, the example linear hand inputs 402 and 404 described above in connection with FIG. 4 are sensed by the hand input sensors 240a-b upon entering an ergonomic adjustment mode. The control module (e.g., control module 170) then checks to determine if the set of criteria is met and commands movement of the display unit 206 in response to the set of criteria being met.
In some embodiments, the set of criteria includes a criterion that each of the linear hand inputs 402 and 404 has a magnitude that is greater than (not less than) a threshold magnitude in order for the repositionable structure to be commanded to move; in this example, the repositionable structure is not commanded to move the display unit 206 when at least one of the linear hand inputs 402 or 404 has a magnitude that is less than the threshold magnitude. Other criteria in the set of criteria may include any of the criteria described above in connection with FIG. 4, except that the set of criteria does not include the criterion that the directions of the linear hand inputs 402 and 404, or the directions of the components of the linear hand inputs 402 and 404 in a particular plane, differ by less than a threshold angle.
In some embodiments, the control module determines a combination 502 of the linear hand inputs 402 and 404 when both of the linear hand inputs 402 and 404 have a magnitude that is greater than the threshold magnitude. In some embodiments, the combination 502 is calculated using the same nonlinear sum described in connection with FIG. 4 for calculating the composite linear input 410. The control module then scales the magnitude of the combination 502 to generate a gating center 504. Thus, the gating center 504 has the same direction as, and a scaled magnitude compared to, the combination 502. In some embodiments, the control module scales the magnitude of the combination 502 as a function of the angle between the directions of the linear hand inputs measured by the hand input sensors 240a-b. For example, the function may be a Gaussian function, or a function that is constant (e.g., 1) over a range of angle values (e.g., 0-30 degrees) and tapers to zero in a shoulder region outside the range of angle values (e.g., decreases to zero from 30 degrees to 60 degrees). In these examples, linear hand inputs whose directions differ more from one another result in a gating center 504 with a smaller scale factor and a smaller magnitude.
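A sketch of computing the gating center under these assumptions, reusing the composite_from_nonlinear_sum function from the earlier sketch and an illustrative piecewise-linear angle function, might look like the following.

```python
import numpy as np

def angle_scale(angle_deg: float) -> float:
    """Example scale factor: 1.0 up to 30 degrees, tapering linearly to 0.0 at 60 degrees."""
    if angle_deg <= 30.0:
        return 1.0
    if angle_deg >= 60.0:
        return 0.0
    return (60.0 - angle_deg) / 30.0

def gating_center(input_a: np.ndarray, input_b: np.ndarray) -> np.ndarray:
    """Combine the inputs with the nonlinear sum, then scale the result by a function
    of the angle between the input directions to obtain the gating center."""
    combination = composite_from_nonlinear_sum(input_a, input_b)  # from the earlier sketch
    cos_angle = np.clip(
        np.dot(input_a, input_b) / (np.linalg.norm(input_a) * np.linalg.norm(input_b)),
        -1.0, 1.0)
    return angle_scale(np.degrees(np.arccos(cos_angle))) * combination
```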
After the gating center 504 is obtained, the control module checks whether a composite of the linear hand inputs 402 and 404 (shown as composite linear input 510) is located within an acceptance area 506 around the gating center 504. In some embodiments, this composite of the linear hand inputs 402 and 404 is the sum of the linear hand inputs 402 and 404 (e.g., the sum of vectors representing those inputs). In some embodiments, any technically feasible sum may be used, such as a straight sum, a weighted sum, and the like. In some embodiments, the acceptance area 506 is three-dimensional, including, for example, a sphere centered on the tip of a vector representing the gating center 504 and extending out to a given radius. In other embodiments, the acceptance area is two-dimensional (e.g., a circle centered at the gating center 504) or one-dimensional (i.e., a linear range centered at the gating center 504). The dimensionality of the acceptance area 506 may be based on the number or type of DOFs available to the display unit 206. In some embodiments, the radius of the acceptance area 506 is determined as a function of the ratio between the magnitudes of the linear hand inputs. For example, the ratio may have the magnitude of the smaller linear hand input in the numerator and the magnitude of the larger linear hand input in the denominator (i.e., the ratio is at most 1), and the function may output a constant radius over a range of ratio values and taper, in a shoulder region, toward a minimum radius as the ratio approaches zero. In this case, the acceptance area has a constant radius around the gating center when the magnitudes of the linear hand inputs are similar. Alternatively, when the magnitudes of the linear hand inputs differ by a large amount, the acceptance area may have a radius close or equal to the minimum radius. It should be appreciated that if the radius were set to zero instead of the minimum radius, the approach of FIG. 5 for combining the linear hand inputs would reduce to the approach described above in connection with FIG. 4. Although a radius is described herein as a reference example, in some embodiments the size or shape of an acceptance area that is not spherical or circular (and therefore has no radius) may be determined as a function of the ratio between the magnitudes of the linear hand inputs.
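A possible form of the ratio-dependent radius is sketched below; the knee, nominal radius, and minimum radius are assumed example values, not values taken from the disclosure.

```python
def acceptance_radius(m_small, m_large, r_nominal=1.0, r_min=0.05, knee=0.5):
    """Radius of the acceptance area as a function of the ratio of the input magnitudes.

    The ratio (smaller magnitude over larger magnitude, at most 1) maps to a constant
    radius when the magnitudes are similar and tapers toward a minimum radius as the
    ratio approaches zero.
    """
    ratio = m_small / m_large if m_large > 0.0 else 0.0
    if ratio >= knee:
        return r_nominal
    return r_min + (r_nominal - r_min) * (ratio / knee)  # linear shoulder taper
```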
Illustratively, the composite linear input 510 (specifically, the tip of the vector representing the composite linear input 510) is located within the acceptance area 506. In this example, the composite linear input 510 is the sum of the linear hand inputs. In this case, the control module sets the composite linear input used for commanding motion equal to the composite linear input 510. That is, when the sum of the linear hand inputs lies within the acceptance area 506 around the gating center 504 (which is derived from the nonlinear sum calculated using the virtual hand input), the composite linear input is calculated based on the linear hand inputs 402 and 404 themselves, without using the virtual hand input. Using the linear hand inputs 402 and 404 as applied by the operator may result in smoother movement of the display unit 206.
In other cases, when the sum of the linear hand inputs extends outside of the acceptance area 506, the control module sets the composite linear input 510 equal to a linear input (e.g., represented by a vector) within the acceptance area 506; in one example, the composite linear input 510 is set equal to a linear input having a maximum magnitude and a direction forming a minimum angle with the direction of the sum of the linear hand inputs. Such a linear input may be represented by a vector whose tip is located on the boundary of the acceptance area (e.g., on the sphere, circle, or linear range).
FIGS. 6-7 illustrate two example cases in which the sum of the linear hand inputs extends outside of the acceptance area, in accordance with various embodiments. As shown in FIG. 6, the sum 614 of the linear hand inputs sensed by the hand input sensors 240a-b extends outside of the acceptance area 612 around the nonlinear sum 610 of the linear hand inputs. Thus, the control module sets the composite linear input equal to the linear input 616, which is within the acceptance area 612 and has a maximum magnitude and a direction that forms a minimum angle with the direction of the sum 614 of the linear hand inputs. Illustratively, the tip of the vector representing the linear input 616 is located on the boundary of the acceptance area 612, and the vector representing the sum 614 is tangential to the boundary.
As shown in fig. 7, the sum of the linear hand inputs 714 sensed by the hand input sensors 240a-b extends outside of the acceptance area 712 around the nonlinear sum 710 of the linear hand inputs. Thus, the control module sets the composite linear input equal to the linear input 716, the linear input 716 being within the acceptance area 712 and having a maximum magnitude and a direction that forms a minimum angle with the direction of the sum of the linear hand inputs 714. In this example, the minimum angle is zero because the vector representing the sum of the linear hand inputs 714 passes through the acceptance area 712. In other words, the linear input 716 has the same direction and a smaller magnitude than the sum of the linear hand inputs 714. Illustratively, the tip of the vector representing the linear input 716 is located on the far boundary of the acceptance area 712.
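Combining the acceptance-area rule with the two boundary cases of FIGS. 6-7, a geometric sketch (assuming a spherical acceptance area of radius r around the gating center g, with the straight sum s of the hand inputs as a 3-D vector) might look like the following; it is a simplified illustration, not the disclosed control law.

```python
import numpy as np

def composite_from_acceptance_area(s, g, r):
    """Pick the composite linear input given the straight sum s, gating center g, and radius r."""
    if np.linalg.norm(s - g) <= r:
        return s  # the sum lies inside the acceptance area: use it directly (FIG. 5 case)

    u = s / np.linalg.norm(s)                 # direction of the sum
    b = np.dot(u, g)
    disc = b * b - (np.dot(g, g) - r * r)     # ray-sphere intersection discriminant
    if disc >= 0.0 and b + np.sqrt(disc) > 0.0:
        # Ray along the sum pierces the sphere: same direction, magnitude capped at the far boundary (FIG. 7).
        return (b + np.sqrt(disc)) * u

    # Ray misses the sphere: tangent point, i.e., the in-area point whose direction forms
    # the minimum angle with the sum while having the maximum magnitude (FIG. 6).
    # Degenerate case (sum pointing directly away from g) is omitted for brevity.
    g_hat = g / np.linalg.norm(g)
    w = u - np.dot(u, g_hat) * g_hat
    w_hat = w / np.linalg.norm(w)
    reach = np.sqrt(np.dot(g, g) - r * r)     # distance from the origin to the tangent point
    cos_t, sin_t = reach / np.linalg.norm(g), r / np.linalg.norm(g)
    return reach * (cos_t * g_hat + sin_t * w_hat)
```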
Similar to the discussion above in connection with fig. 4, the control module may calculate a commanded motion for moving the display unit 206 based on the compound linear input 510. For example, the control module may determine a commanded position, commanded velocity, commanded acceleration, or other motion-related parameter based on the compound linear input 510 and use such motion-related parameter to calculate the commanded motion. As a specific example, the commanded motion may include a linear velocity proportional to the compound linear input 510.
FIG. 8 illustrates an approach for combining rotational hand inputs sensed by the hand input sensors 240a-b during adjustment of the display unit 206, in accordance with various embodiments. As shown, after entering the ergonomic adjustment mode, the hand input sensors 240a-b have sensed two example rotational/angular hand inputs 802 and 804 (also referred to herein as "rotational inputs 802 and 804"). In this example, the rotational hand inputs 802 and 804 are shown as rotational inputs having a rotational magnitude and a rotational direction, and are represented by arcuate arrows traversing about an axis of rotation. Although often described with respect to torque as an illustrative example, in various embodiments the rotational hand inputs 802 and 804 may include angular displacement, angular positioning, angular velocity, and/or angular acceleration (also referred to herein as "rotational displacement," "rotational positioning," "rotational velocity," and/or "rotational acceleration"). The hand input sensors 240a-b may detect these parameters directly or may detect an indicator from which the parameters may be derived (e.g., detect angular velocity over time and integrate to provide angular displacement or angular positioning). In some embodiments, the rotational hand inputs 802 and 804 may be derived using techniques that sum, filter, or average the sensor signals spatially (e.g., from multiple sensing elements) or temporally.
Illustratively, in FIG. 8, the rotational hand inputs 802 and 804 are about an axis passing through the knob center of the hand input sensors 240a-b (e.g., the pitch axis 254 described above in connection with FIG. 2). In some embodiments, the repositionable structure to which the display unit 206 is mounted may enable the display unit 206 to move in multiple rotational DOFs. In this case, the hand input sensor may sense rotational hand inputs about different axes allowed by the plurality of rotational DOFs.
To help increase the likelihood that movement of the display unit in response to activation of the hand input sensors is desired by the operator, a control module (e.g., control module 170) performs a check to determine whether the set of criteria is met and commands movement of the display unit in response to the set of criteria being met. The set of criteria may include a single criterion or a plurality of criteria. The discussion of FIG. 8 that follows describes example criteria, and various embodiments may use a set of criteria that includes one, two, or more of these example criteria.
In some embodiments, the criteria include each of the rotational hand inputs 802 and 804 having a magnitude greater than a threshold magnitude. In this case, when either of the rotational hand inputs 802 or 804 has a magnitude less than the threshold magnitude, the display unit 206 is not commanded to move. Thus, activation (e.g., accidental activation) of the hand input sensors 240a-b involving rotational hand inputs having a magnitude less than the threshold magnitude will not cause movement of the display unit 206. In some embodiments, the criteria include a combination of the rotational hand inputs 802 and 804 (e.g., the nonlinear sum described below) having a magnitude greater than the threshold magnitude. In this case, when the combination of the rotational hand inputs 802 and 804 has a magnitude less than the threshold magnitude, the display unit 206 is not commanded to move.
In some embodiments, the criteria include that the directions of the rotational hand inputs 802 and 804, or the directions of the components of the rotational hand inputs 802 and 804 about a particular axis, are the same (i.e., the rotational hand inputs 802 and 804 are both clockwise or both counterclockwise about the axis). As a result, activations of the hand input sensors 240a and 240b that are not in the same rotational direction will not cause movement of the display unit 206. In some embodiments, the repositionable structure to which the display unit 206 is mounted may enable the display unit 206 to move in multiple rotational DOFs. In this case, the control module may determine whether the angle between the axes of rotation is less than a threshold angle, rather than whether the rotational hand inputs are in the same direction.
In some embodiments, the set of criteria may include a criterion that the magnitudes of the rotational hand inputs 802 and 804 differ by less than a maximum ratio. For example, in some embodiments, the maximum ratio may be about five. In some embodiments, the set of criteria may further include a criterion that the magnitudes of the rotational hand inputs 802 and 804 are less than or equal to a maximum magnitude associated with a maximum rotational speed that the display unit 206 can achieve. In some examples, a rotational hand input having a magnitude greater than the maximum magnitude may be reduced to the maximum magnitude.
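A compact sketch of such a criteria check for two rotational hand inputs about a shared axis follows; the inputs are modeled as signed scalars, and all threshold values are assumptions chosen only for illustration.

```python
def rotational_criteria_met(t1, t2, min_mag=0.05, max_ratio=5.0):
    """Example criteria check for two rotational hand inputs (signed torques about one axis)."""
    m1, m2 = abs(t1), abs(t2)
    if m1 <= min_mag or m2 <= min_mag:
        return False                      # both magnitudes must exceed the threshold magnitude
    if (t1 > 0.0) != (t2 > 0.0):
        return False                      # both inputs must be in the same rotational direction
    if max(m1, m2) / min(m1, m2) > max_ratio:
        return False                      # magnitudes must not differ by more than the maximum ratio
    return True

def clamp_to_max_magnitude(t, max_mag=2.0):
    """Reduce an input whose magnitude exceeds the maximum magnitude down to that maximum."""
    return max(-max_mag, min(max_mag, t))
```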
In response to the set of criteria being met, the control module determines a composite rotational input 810 using the rotational hand inputs 802 and 804. In some embodiments, the control module determines the composite rotational input 810 by combining the rotational hand inputs 802 and 804 using a nonlinear sum. As shown, the magnitude of the rotational hand input 804 is greater than the magnitude of the rotational hand input 802. The nonlinear sum (1) reduces the magnitude of the larger rotational hand input 804 (e.g., the magnitude of its torque or rotational displacement) to the magnitude of the smaller rotational hand input 802 while maintaining the direction of the rotational hand input 804, thereby generating a virtual rotational hand input 806 (e.g., a virtual torque or virtual rotational displacement), and (2) adds the virtual rotational hand input 806 to the smaller rotational hand input 802 to obtain the composite rotational input 810. In other embodiments where the display unit 206 may move in multiple rotational DOFs, the rotational hand inputs may be added to obtain an intermediate rotational hand input (e.g., an intermediate torque), and the magnitude of the intermediate rotational hand input reduced as a function of the angle between the directions of the rotational hand inputs to generate the composite rotational input 810. The function may be, for example, a Gaussian function, or a function that is constant over a range of angle values and tapers off in a shoulder region outside that range.
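A minimal sketch of this nonlinear sum for rotational inputs, again modeling both inputs as signed scalars about a shared axis, is shown below; it is illustrative only.

```python
def composite_rotational_input(t1, t2):
    """Nonlinear sum of two same-direction rotational hand inputs about a shared axis.

    The larger-magnitude input is reduced to the smaller magnitude while keeping its
    direction (the "virtual" rotational hand input), then added to the smaller input.
    """
    small, large = (t1, t2) if abs(t1) <= abs(t2) else (t2, t1)
    virtual = abs(small) if large > 0.0 else -abs(small)  # virtual rotational hand input
    return small + virtual  # equals twice the smaller input when both inputs share a direction
```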
The control module may calculate a commanded motion for moving the display unit 206 based on the compound rotation input 810. For example, the control module may determine a commanded angular orientation, commanded velocity, commanded acceleration, or other motion-related parameter based on the compound rotational input 810, and calculate a commanded motion using such motion-related parameter. As a specific example, the commanded motion may include a rotational speed proportional to the compound rotational input 810.
Although linear hand inputs and rotational hand inputs are described separately in connection with FIGS. 4-7 and FIG. 8, respectively, some embodiments accept hand inputs that include both a linear component and a rotational component and use such hand inputs to provide commanded motion having both a linear component and a rotational component. For example, some embodiments determine the linear component(s) of the commanded motion using the linear components of the hand inputs and the techniques described in connection with FIGS. 4-7, determine the rotational component(s) of the commanded motion using the rotational components of the hand inputs and the techniques described in connection with FIG. 8, and superimpose them to provide the overall commanded motion.
Fig. 9 illustrates a way to combine head input sensed by head input sensor 242 with hand input sensed by hand input sensors 240a-b during adjustment of display unit 206, in accordance with various embodiments. As shown, the head input 902 has been sensed by the head input sensor 242. For example, the head input 902 may be a force, displacement, orientation, velocity, acceleration, or other input applied by the head of the operator and sensed by the head input sensor 242. In various embodiments, the head input includes a linear head input, such as a linear force, a linear displacement, a linear velocity, a linear acceleration, or some other physical parameter. In some embodiments, the head input also includes a rotational head input, such as torque, rotational displacement, rotational speed, rotational acceleration, or some other physical parameter. In some embodiments, the head input 902 may be derived using techniques that aggregate, filter, or average sensor signals spatially (e.g., from multiple sensing elements) or temporally.
Illustratively, in the example of FIG. 9, the head input 902 is a one-dimensional force in a direction toward and away from the operator and through the viewport 223 of the display unit 206. In some embodiments, the head input sensor (e.g., head input sensor 242) is a 1-DOF sensor that senses head input pushing the display unit 206 in a direction away from the operator, as shown in FIG. 9. In some embodiments, in the ergonomic adjustment mode, a sufficient head force applied to the display unit 206 in a direction away from the operator will cause displacement of the display unit 206 in the direction of the force applied by the operator. However, in some embodiments, if the head force decreases or is removed as the operator moves back away from the display unit 206 in the ergonomic adjustment mode, the display unit 206 does not follow the operator back. Having the display unit follow the operator's head rearward may cause discomfort to the operator as the operator moves back.
In response to the head input 902, a control module (e.g., control module 170) compares the head input 902 with a baseline head input 904 to determine an adjusted head input 906. In some embodiments, the baseline head input 904 is the head input sensed by the head input sensor 242 upon entering the ergonomic adjustment mode. In some embodiments, the control module continuously or periodically updates the baseline head input 904 using head inputs subsequently sensed by the head input sensor 242, and pauses updating the baseline head input 904 (i.e., holds the baseline head input 904) while the system is in the ergonomic adjustment mode. In some embodiments, the control module calculates the adjusted head input 906 by comparing the head input 902 (e.g., a head force) with the baseline head input 904 (e.g., a baseline head force). In some embodiments, the adjusted head input 906 is equal to the difference between the head input 902 and the baseline head input 904 (e.g., an adjusted head force equal to the difference between the head force of the head input 902 and the head force of the baseline head input 904). Using the difference between the sensed head input 902 and the baseline head input 904 may help provide continuity in the force felt by the operator's head when switching to the ergonomic adjustment mode (e.g., when switching from a steerable viewer mode). Although the examples above explicitly discuss forces, in some embodiments the head input 902 includes head displacement, velocity, acceleration, or other head input parameters, and the adjusted head input 906 is based on a difference between the head input 902 and a corresponding baseline head input 904 that includes head displacement, velocity, acceleration, or other head input parameters.
In some embodiments, when the sensed head input 902 is less than the baseline head input 904, the baseline head input 904 is reduced; as an example, the magnitude of the baseline head input 904 is reduced (e.g., to the magnitude of the sensed head input 902). However, when the sensed head input 902 is greater than the baseline head input 904, the baseline head input 904 is not increased. In this way, the baseline head input 904 is "ratcheted" downward whenever the sensed head input 902 is less than the baseline head input 904.
In some embodiments, in response to the sensed head input 902 being greater than the baseline head input 904, the adjusted head input 906 is used to update a linear input (e.g., the composite linear input 410 or 510, or the linear input 616 or 716) determined based on the hand inputs sensed by the hand input sensors 240a-b, resulting in an updated composite linear input 910. In some embodiments, a head input 908, which is a scaled version of the adjusted head input 906, is added to the composite linear input 410 (or the composite linear input 510, or the linear input 616 or 716). The scaling may have a value that makes the adjusted head input 906 the primary contributor (e.g., contributing more than 50% or more than 75%) to the direction or magnitude of the movement of the display unit 206. For example, in some embodiments or modes of the same embodiment, the adjusted head input may be scaled by a factor of about 1.5-2.0 or another factor. Scaling up by a factor greater than one may amplify the head and neck forces exerted by the operator, may improve the responsiveness of the system, and may help reduce fatigue of the operator's head and neck. Illustratively, the scaled head input 908 is added to the composite linear input 410 or 510, or the linear input 616 or 716, resulting in the updated composite linear input 910. In other embodiments or modes of the same embodiment, rather than scaling up the adjusted head input 906, the composite linear input 410 or 510 or the linear input 616 or 716 is scaled down (e.g., by a factor of 0.8-0.9 or another factor) and added to the adjusted head input 906. Scaling down by a factor less than one may improve the precision of the system. In general, an appropriate scaling factor, whether less than one or greater than one, may improve the usability or performance of the system.
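As a one-dimensional sketch (positive values push the display unit away from the operator; the head gain of 1.5 is one of the example factors mentioned above), the baseline ratcheting and the head/hand combination might be implemented as follows; this is an illustration, not the disclosed controller.

```python
def ratchet_baseline(baseline, sensed):
    """Lower the baseline head input toward the sensed head input, but never raise it."""
    return min(baseline, sensed)

def update_with_head_input(compound_linear, sensed, baseline, head_gain=1.5):
    """Combine the hand-derived compound linear input with the adjusted head input.

    Returns the updated compound linear input and the (possibly ratcheted) baseline.
    """
    baseline = ratchet_baseline(baseline, sensed)
    adjusted = max(sensed - baseline, 0.0)       # only pushes beyond the baseline contribute
    return compound_linear + head_gain * adjusted, baseline
```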
Although FIG. 9 is described primarily with respect to an example in which the head input is a linear head input (e.g., a force), in some embodiments the head input may include a rotational head input, such as a torque. In this case, the control module (e.g., control module 170) may also store a baseline rotational head input that is sensed by the head input sensor 242 upon entering the ergonomic adjustment mode and that is ratcheted down when the sensed rotational head input is less than the baseline rotational head input. Further, the control module may update the rotational input (e.g., the composite rotational input 810) determined based on the hand inputs sensed by the hand input sensors 240a-b, similar to the discussion above regarding updating the linear input, as described in more detail below in connection with FIGS. 10 and 14.
The commanded motion may then be calculated from the updated composite input (e.g., the updated composite linear input 910) alone and/or together with other input(s) (e.g., the composite rotational input 810 or an updated composite rotational input), in a manner similar to the determination of commanded motion using the composite linear inputs 410 or 510 and the composite rotational input 810 described above.
In the examples described in connection with fig. 4-7, inverse kinematics may be used to calculate joint velocities or orientations of joints associated with the display unit 206 and/or the repositionable structure on which the display unit 206 is mounted that will move the display unit 206 toward a direction that achieves the commanded motion.
FIG. 10 illustrates a simplified diagram of a method 1000 for adjusting a display unit in a viewing system, in accordance with various embodiments. One or more of the processes 1002-1018 of the method 1000 may be implemented, at least in part, in the form of executable code stored on a non-transitory tangible machine-readable medium that, when executed by one or more processors (e.g., the processor 150 in the control system 140), may cause the one or more processors to perform one or more of the processes 1002-1018. In some embodiments, the method 1000 may be performed by one or more modules (e.g., the control module 170). In some embodiments, the method 1000 may include additional processes not shown. In some embodiments, one or more of the processes 1002-1018 may be performed, at least in part, by one or more of the units of the control system 140.
As shown, the method 1000 begins at process 1002, where linear hand inputs and rotational hand inputs (e.g., linear hand inputs 402 and 404 and rotational hand inputs 802 and 804) are received from the hand input sensors 240a and 240b. In some embodiments, any technically feasible signal processing may be applied. For example, a low-pass filter may be applied to reduce noise in the hand input measurements received from the hand input sensors 240a-b. The method 1000 then processes the linear hand inputs (see process 1004) independently of the rotational hand inputs (see process 1010).
At process 1004, when the magnitudes of the linear hand inputs are greater than a minimum threshold, the method 1000 continues to process 1006.
At process 1006, a composite linear input (e.g., a composite force) is determined. In some embodiments, the composite linear input is determined using one of the techniques described below in connection with FIGS. 11-13. FIG. 11 illustrates process 1006 of the method 1000 in more detail, according to various embodiments. As shown, at process 1102, when the angle (e.g., angle 406) between the directions of the linear hand inputs (e.g., linear hand inputs 402 and 404) is less than a threshold angle, the method 1000 continues to process 1104. Alternatively, when the angle between the directions of the linear hand inputs is not less than the threshold angle, the method 1000 continues to process 1108. In some embodiments where the display unit 206 is constrained to move in one linear DOF, the linear hand inputs should be in the same direction along that one linear DOF in order to proceed to process 1104.
At process 1104, a virtual linear hand input (e.g., virtual linear hand input 408) is determined. In some embodiments, the virtual linear hand input is determined by reducing the magnitude of the linear hand input having a larger magnitude to the magnitude of the linear hand input having a smaller magnitude.
At process 1106, a composite linear input (e.g., composite linear input 410) is determined. In some embodiments, the composite linear input is determined by adding a virtual linear hand input to a linear hand input having a smaller magnitude.
Alternatively, when the angle between the directions of the linear hand inputs is not less than the threshold angle, the method 1000 continues to process 1108, where the composite linear input is set to zero.
Fig. 12 illustrates in more detail a process 1006 of method 1000 in accordance with various other embodiments. As shown, at process 1202, a nonlinear sum (e.g., nonlinear sum 502) of linear hand inputs (e.g., linear hand inputs 402 and 404) is determined. In some embodiments, the nonlinear sum may be determined by adding a virtual linear hand input to a linear hand input having a smaller magnitude, similar to that described above in connection with processes 1104 and 1106.
At process 1204, a gating center (e.g., gating center 504) is determined. In some embodiments, the gating center is determined by scaling the magnitude of the nonlinear sum determined at process 1202 while maintaining its direction. In this case, the nonlinear sum may be scaled according to a function of the difference in direction between the linear hand inputs measured by the hand input sensors 240a-b (e.g., a Gaussian function, or a constant function that tapers to zero in a shoulder region).
At process 1206, when the composite of the linear hand inputs (e.g., composite linear input 510) is located within the acceptance area (e.g., acceptance area 506) around the gating center, the method 1000 continues to process 1208, where the composite linear input is set to the composite of the linear hand inputs. As described above, in some embodiments, the composite of the linear hand inputs may be a straight sum, a weighted sum, or any other technically feasible composite. In some embodiments, the acceptance area is an area centered on the tip of a vector representing the gating center and extending out to a radius determined as a function of the ratio between the magnitudes of the linear hand inputs (e.g., a constant function that tapers to a minimum radius in a shoulder region).
Alternatively, when the composite of the linear hand inputs extends outside of the acceptance area around the gating center, the method 1000 proceeds to process 1210 where the composite linear input is set to a linear input (e.g., linear input 616 or 716) having a maximum magnitude within the acceptance area and a direction forming a minimum angle with the direction of the sum of the linear hand inputs.
FIG. 13 illustrates process 1006 of the method 1000 in more detail, in accordance with various other embodiments. In some embodiments, when process 1006 is implemented using the technique of FIG. 13, processes 1004 and 1008 may in some cases be skipped, ignored, or disabled (e.g., by setting the minimum thresholds to zero or to a minimum detectable level, or by skipping or ignoring the result of the minimum-magnitude check). For example, processes 1004 and 1008 may be skipped to enable one-handed adjustment of the display unit 206, in which case the magnitude of the linear hand input associated with the other hand is zero.
As shown, at process 1302, when the difference between the linear hand inputs (e.g., linear hand inputs 402 and 404) is less than or equal to a threshold, the method 1000 continues to process 1304, where the composite linear input is set to the sum of the linear hand inputs. For example, the linear hand inputs may be multidimensional and represented by two vectors, and the magnitude of the difference vector between the two vectors may be compared to the threshold. When the magnitude of the difference vector is less than the threshold, the two vectors are added together. As another example, the linear hand inputs may be one-dimensional and represented by two scalars, and the difference between the two scalars may be compared to the threshold. When the magnitude of the difference between the two scalars is less than the threshold, the two scalars are added together. In some embodiments, the threshold may be based on the type of the display unit 206 and the hand input sensors 240a and 240b, a mathematical model of the display unit 206, and/or operator preference. For example, in some embodiments, the threshold may be between 1-10 newtons.
Alternatively, when the difference between the linear hand inputs is greater than the threshold, then at process 1306 the composite linear input is set to the sum of the linear hand inputs scaled by a scaling factor. In some embodiments, the composite linear input may be calculated as:
F_merge = α(F_1 + F_2),

wherein F_merge is the composite linear input, F_1 and F_2 are the linear hand inputs, K is the threshold, and α is the scaling factor.
In some embodiments, the technique for determining a composite linear input described above in connection with fig. 13 is particularly useful when the linear hand inputs are relatively small, in which case the difference between the linear hand inputs may be less than a threshold, and the linear hand inputs are added together even when the directions of the linear hand inputs are significantly different. For example, a small linear hand input may correspond to a fine hand movement of the operator, and the operator may make a fine hand movement in a different direction or using only one hand.
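The disclosure does not give the exact form of the scaling factor α, so the sketch below simply assumes one plausible choice (attenuating the sum in proportion to how far the difference exceeds the threshold K); both that choice and the 5-newton threshold are assumptions for illustration.

```python
import numpy as np

def merge_small_inputs(f1, f2, K=5.0):
    """FIG. 13-style merging of two linear hand inputs (K in newtons, example value)."""
    diff = np.linalg.norm(np.asarray(f1) - np.asarray(f2))
    if diff <= K:
        return np.asarray(f1) + np.asarray(f2)   # similar inputs: use the straight sum
    alpha = K / diff                             # assumed form of the scaling factor
    return alpha * (np.asarray(f1) + np.asarray(f2))
```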
Although the techniques of fig. 11-13 are described above as being used in different embodiments to determine a composite linear input at process 1006, some embodiments may allow switching between the techniques described above in connection with fig. 11-13 based on, for example, operator preferences, magnitude of linear hand input, and/or the like.
Returning to FIG. 10, when the magnitudes of the linear hand inputs are not greater than the minimum threshold at process 1004, the method 1000 continues to process 1008, where the composite linear input is set to zero.
At process 1010, when the magnitudes of the rotational hand inputs (e.g., the magnitudes of rotational hand inputs 802 and 804) are greater than a minimum threshold, the method 1000 continues to process 1012. In some embodiments, processes 1010-1014 are performed in parallel with processes 1004-1008, although in other embodiments processes 1004-1008 and 1010-1014 may be performed in series.
At process 1012, a composite rotational input (e.g., composite rotational input 810) is determined. FIG. 14 illustrates process 1012 of the method 1000 in greater detail, according to various embodiments. As shown, at process 1402, when the rotational hand inputs (e.g., rotational hand inputs 802 and 804) are in the same direction (e.g., the rotational hand inputs are both clockwise or both counterclockwise about an axis), the method 1000 continues to process 1404. Alternatively, when the rotational hand inputs are in different directions, the method 1000 continues to process 1408. In some embodiments where the display unit 206 may be rotated about a plurality of different axes, the angle between the axes of rotation of the rotational hand inputs should be less than a threshold angle in order to proceed to process 1404.
At process 1404, a virtual rotational input (e.g., virtual rotational hand input 806) is determined. In some embodiments, the virtual rotary hand input is determined by reducing the magnitude of one of the rotary hand inputs having a larger magnitude to the magnitude of the other rotary hand input having a smaller magnitude.
At process 1406, a composite rotational input (e.g., composite rotational input 810) is determined. In some embodiments, the composite rotational input is determined by adding a virtual rotational hand input to a rotational hand input having a smaller magnitude.
Alternatively, when the rotational hand inputs are not in the same direction, the method 1000 continues to process 1408, where the composite rotational input is set to zero.
Although FIG. 10 and processes 1004-1008 are described above with respect to linear hand inputs interpreted as forces applied to the hand input sensors 240a and 240b, in some embodiments linear hand inputs may be interpreted as torques applied to the display unit 206 that are intended to rotate the display unit 206 (e.g., yaw and/or roll the display unit). In this case, each linear hand input may be converted to a torque (or angular displacement, angular orientation, angular velocity, and/or angular acceleration) about a virtual center. In some embodiments, the virtual center may be a point located between and in front of the eyes of the operator, such as the center point of a line between the hand input sensors 240a and 240b. Thus, in addition to contributing to translational movement of the display unit 206, the linear hand inputs may also contribute a rotational input for rotating the display unit 206. When the torques corresponding to the linear hand inputs are greater than the minimum threshold at process 1004, one or more of the nonlinear summation techniques described above in connection with FIGS. 11-13 may be applied at process 1006 to determine a composite linear input that is a composite torque about the virtual center. When the torques corresponding to the linear hand inputs are not greater than the minimum threshold at process 1004, the composite linear input is set to zero at process 1008. Then, according to processes 1016 and 1018, a multi-dimensional rotational motion (including yaw and roll about the virtual center) of the display unit 206 may be determined based on the composite linear input, as described in more detail below.
Returning to FIG. 10, when the magnitudes of the rotational hand inputs are not greater than the minimum threshold at process 1010, the method 1000 continues to process 1014, where the composite rotational input is set to zero.
At optional process 1016, the composite linear input determined at process 1006 or 1008 and the composite rotational input determined at process 1012 or 1014 are updated based on the head input (e.g., a head force and/or a head torque). FIG. 15 illustrates process 1016 of the method 1000 in more detail, in accordance with various embodiments. As shown, at process 1502, a sensed linear head input and/or rotational head input (e.g., head input 902) is received from a head input sensor (e.g., head input sensor 242). In some embodiments, any technically feasible signal processing may be applied. For example, a low-pass filter may be applied to reduce noise in the sensed linear head input and/or rotational head input received from the head input sensor 242.
At process 1504, when the sensed linear head input is less than the baseline linear head input (e.g., baseline head input 904), method 1000 continues to process 1506 where the magnitude of the baseline linear head input is reduced to the magnitude of the sensed linear head input. It should be noted that when the sensed linear head input is less than the baseline linear head input, the display unit 206 is not commanded to move toward the operator's head based on the sensed linear head input. Moving the display unit 206 toward the head of the operator may be uncomfortable for the operator.
Alternatively, when the sensed linear head input is greater than the baseline linear head input, the method 1000 continues to process 1508, where a scaled version of the difference between the sensed linear head input and the baseline linear head input (e.g., scaled head input 908) is added to the linear input determined at process 1006 or 1008 (e.g., the composite linear input 410 or 510, or the linear input 616 or 716) to update that linear input. In some embodiments, the difference between the sensed linear head input and the baseline linear head input may be scaled up by a factor (e.g., about 1.5) and added to the linear input determined at process 1006 or 1008, which was itself determined based on the linear hand inputs from the hand input sensors 240a-b.
At process 1510, when the sensed rotational head input is less than the baseline rotational head input, the method 1000 continues to process 1512 where the magnitude of the baseline rotational head input is reduced to the magnitude of the sensed rotational head input. In some embodiments, processes 1504-1508 are performed in parallel with processes 1510-1514, although in other embodiments processes 1504-1508 and 1510-1514 may be performed in series. When the sensed rotational head input is less than the baseline rotational head input, the display unit 206 is also not commanded to move based on the sensed rotational head input.
Alternatively, when the sensed rotational head input is greater than the baseline rotational head input, the method 1000 continues to process 1514, where a scaled version of the difference between the sensed rotational head input and the baseline rotational head input is added to the rotational input determined at process 1012 or 1014 (e.g., the composite rotational input 810) to update the rotational input. The difference between the sensed rotational head input and the baseline rotational head input may be scaled by any technically feasible factor and added to the composite rotational input determined at process 1012 or 1014. In other embodiments, the sensed rotational head input may be discarded by, for example, setting the scaling factor to zero, or the rotational head input may not be sensed at all.
Returning to FIG. 10, at process 1018, the repositionable structure on which the display unit is mounted is actuated based on the composite linear input and the composite rotational input. In some embodiments, a commanded linear velocity and a commanded rotational velocity are calculated from the composite linear input determined during processes 1006 and/or 1008 and the composite rotational input determined during processes 1012 and/or 1014. When the composite linear input is generated based on interpreting the linear hand inputs as linear inputs (e.g., forces) applied to the hand input sensors, the composite linear input may be used to determine the commanded linear velocity of the repositionable structure. When the composite linear input is generated based on interpreting the linear hand inputs as rotational inputs (e.g., torques about a virtual center), the composite linear input may be used to determine a contribution to the commanded rotational velocity of the repositionable structure about a point corresponding to the virtual center. In some embodiments, the linear hand inputs may be used to determine both the commanded linear velocity and a contribution to the commanded rotational velocity. The composite rotational input may also be used to determine an additional contribution to the commanded rotational velocity, which is combined with the contribution determined from the composite linear input. Inverse kinematics may then be used to calculate joint velocities of the display unit 206 and/or the repositionable structure on which the display unit 206 is mounted for moving the display unit 206 to achieve the commanded linear velocity and the commanded rotational velocity. The method 1000 then repeats by returning to process 1002.
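As a schematic illustration of process 1018 only, the final mapping might resemble the sketch below; the gains, the particular velocity scaling, and the pseudo-inverse inverse-kinematics step are assumptions rather than the disclosed implementation, and a real controller would add limits, smoothing, and safety checks.

```python
import numpy as np

def commanded_joint_velocities(composite_linear, composite_rotational, jacobian,
                               lin_gain=0.02, rot_gain=0.5):
    """Map composite inputs to joint velocities of the repositionable structure.

    composite_linear: 3-vector (e.g., a composite force) mapped to a commanded linear velocity
    composite_rotational: 3-vector about an axis (e.g., a composite torque) mapped to a commanded rotational velocity
    jacobian: 6 x n manipulator Jacobian (assumed to be available from the kinematic model)
    """
    v_cmd = lin_gain * np.asarray(composite_linear)        # commanded linear velocity
    w_cmd = rot_gain * np.asarray(composite_rotational)    # commanded rotational velocity
    twist = np.concatenate([v_cmd, w_cmd])                  # 6-D commanded motion
    return np.linalg.pinv(jacobian) @ twist                 # generic inverse-kinematics step
```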
As mentioned above and further emphasized here, FIG. 10 is merely an example and should not unduly limit the scope of the claims. Those of ordinary skill in the art will recognize many variations, alternatives, and modifications. As described, in some embodiments, one or more other criteria may alternatively or additionally be included. For example, other criteria may include linear hand input and/or rotational hand input differing by less than a threshold ratio. As another example, other criteria may include a linear hand input and/or a rotational hand input that is less than a respective maximum value.
In some embodiments, in process 1006, the composite linear input may alternatively be determined by adding a linear hand input to obtain an intermediate linear hand input, and reducing the magnitude of the intermediate linear hand input as a function of the angle between the directions of the linear hand input to obtain the composite linear input. Any technically feasible function of the angle between the directions of the linear hand input may be used, such as a gaussian function or a function that is constant over a range of angle values and tapers off in the shoulder region outside the range of angle values.
In some embodiments where the display unit 206 may be rotated about a plurality of different axes, the composite rotational input may be determined at process 1012 by adding the rotational hand inputs to obtain an intermediate rotational hand input and reducing the magnitude of the intermediate rotational hand input as a function of the angle between the directions of the rotational hand inputs. Again, any technically feasible function of the angle between the directions of the rotational hand inputs may be used, such as a Gaussian function or a function that is constant over a range of angle values and tapers off in a shoulder region outside that range. In some embodiments, component(s) of the linear hand inputs and/or the rotational hand inputs that are not in direction(s) corresponding to the DOFs of the display unit 206 may be discarded. In some embodiments, process 1004 of the method 1000 may be replaced with determining whether the magnitude of a combination of the linear inputs, and of the rotational inputs, is greater than a respective threshold.
In some embodiments where the display unit 206 may be rotated about a plurality of different axes, the composite rotational input may be determined in process 1012 in a manner similar to determining the composite linear input based on the acceptance area as described above in connection with fig. 5-7 and 12.
As described in various ones of the disclosed embodiments, when the set of criteria is met, the repositionable structure on which the display unit of the viewing system is mounted is actuated based on inputs measured by the hand input sensors. The set of criteria may include that the angle between the directions of two linear hand inputs (e.g., hand forces) measured by different hand input sensors is less than a threshold angle, and that two rotational hand inputs (e.g., hand moments) measured by the hand input sensors are in the same rotational direction. The set of criteria may also include that the measured linear hand inputs and the measured rotational hand inputs have magnitudes greater than a minimum magnitude, and so on. In some embodiments, the measured linear hand inputs are combined into a nonlinear sum, in which the linear hand input with the larger magnitude is reduced in magnitude to the magnitude of the linear hand input with the smaller magnitude, thereby generating a virtual linear hand input. Similarly, the rotational hand input having the larger magnitude is reduced in magnitude to the magnitude of the rotational hand input having the smaller magnitude, thereby generating a virtual rotational hand input. Thereafter, the virtual linear hand input is added to the linear hand input having the smaller magnitude to obtain a composite linear input, and the virtual rotational hand input is added to the rotational hand input having the smaller magnitude to obtain a composite rotational input. In other embodiments, the composite linear input may be determined by: (i) calculating a nonlinear sum of the linear hand inputs, (ii) scaling the nonlinear sum to obtain a gating center, and (iii) if the sum of the linear hand inputs is within an acceptance area around the gating center, setting the composite linear input to that sum, or otherwise setting the composite linear input to a linear input within the acceptance area such that the composite linear input has the maximum magnitude available within the acceptance area and a direction forming a minimum angle with the direction of the sum of the linear hand inputs. The repositionable structure on which the display unit is mounted may then be actuated based on the composite linear input and the composite rotational input. In some embodiments, the repositionable structure on which the display unit is mounted is further actuated based on an input measured by a head input sensor. In some examples, the head input sensor measures a head input (e.g., a head force), and an adjusted head input is determined as the difference between the sensed head input and a baseline head input measured upon entering the ergonomic adjustment mode. In this case, the composite linear input determined based on the linear hand inputs may be updated based on the head input. Further, when the sensed head input is less than the baseline head input, the baseline head input is ratcheted down.
The disclosed techniques may help improve the usability of ergonomic controls for moving a display unit of a viewing system, help determine whether an operator's input is intentional when a hand input sensor detects the input, reduce unintentional movement of the display unit, and/or the like. In some embodiments that include a head input sensor, input from the hand input sensors may further be used together with input from the head input sensor. In this case, the set of criteria may include one or more criteria that reduce the likelihood of unintentional movement of the display unit toward the operator's head.
Some examples of control systems (e.g., control system 140) may include a non-transitory tangible machine-readable medium including executable code that, when executed by one or more processors (e.g., processor 150), may cause the one or more processors to perform the processes of method 1000 and/or the processes of FIGS. 10, 11, 12, 13, and/or 14. Some common forms of machine-readable media that may include the processes of method 1000 and/or the processes of FIGS. 10, 11, 12, 13, and/or 14 are, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
Although exemplary embodiments have been shown and described, a wide range of modifications, changes, and substitutions is contemplated in the foregoing disclosure and, in some instances, some features of the embodiments may be employed without a corresponding use of the other features. Those of ordinary skill in the art will recognize many variations, alternatives, and modifications. Accordingly, the scope of the invention should be limited only by the following claims, and the claims are to be interpreted broadly and in a manner consistent with the scope of the embodiments disclosed herein.

Claims (43)

1. A computer-assisted device, comprising:
a repositionable structure configured to support a display unit, the repositionable structure including an actuator configured to move the repositionable structure, the display unit configured to display an image viewable by an operator;
a first hand input sensor and a second hand input sensor configured to receive input from the operator; and
a control unit communicatively coupled to the repositionable structure, the first hand input sensor, and the second hand input sensor;
wherein the control unit is configured to:
receive a first input from the first hand input sensor,
receive a second input from the second hand input sensor, and
perform the following in response to a set of criteria being satisfied, the set of criteria including a first magnitude of the first input and a second magnitude of the second input being greater than a first threshold:
determine a commanded motion based on the first input and the second input; and
command the actuator to move the repositionable structure based on the commanded motion.
2. The computer-assisted device of claim 1, wherein the set of criteria further comprises a difference between a first direction of the first input and a second direction of the second input being less than a second threshold.
3. The computer-assisted device of claim 1, wherein the control unit is configured to determine the commanded motion by:
determining a commanded direction of the commanded motion based on a third direction that bisects a first direction of the first input and a second direction of the second input.
4. The computer-assisted device of claim 1, wherein the first amplitude is greater than the second amplitude, and wherein the control unit is configured to determine the commanded motion by:
A commanded amplitude of the commanded motion is determined based on a combination of the second input and a scaled first input that is a scaling of the first input based on at least one of the first amplitude or the second amplitude.
5. The computer-assisted device of claim 4, wherein the scaled first input has a scaled first magnitude, and wherein the scaled first magnitude is equal to the second magnitude.
6. The computer-assisted device of claim 4, wherein the control unit is configured to determine the commanded movement by:
determining a third input based on the first input and the second input;
generating a fourth input by scaling the third input based on the direction of the first input and the second input;
responsive to a combination of the first input and the second input being within a region around the fourth input, determining the commanded amplitude based on the combination; and
responsive to the combination being outside the region, determining the commanded amplitude based on a fifth input, wherein the fifth input is within the region.
7. The computer-assisted device of claim 6, wherein the combination of the first input and the second input is a sum of the first input and the second input.
8. The computer-assisted device of claim 6, wherein the control unit is further configured to determine a size or shape of the area around the fourth input based on a ratio between the first input and the second input.
9. The computer-assisted device of claim 1, wherein the control unit is configured to determine the commanded motion by:
responsive to a difference between the first input and the second input being less than or equal to a second threshold, determining the commanded motion based on a sum of the first input and the second input; and
responsive to a difference between the first input and the second input being greater than the second threshold, determining the commanded motion based on a sum of the first input and the second input and a scaling factor.
10. The computer-assisted device of claim 9, wherein the control unit is further configured to determine the scaling factor based on the first and second inputs and the second threshold.
11. The computer-assisted device of claim 1, wherein each of the first input and the second input comprises at least one input selected from the group consisting of: force input, torque input, and force input interpreted as torque about a virtual center point.
12. The computer-assisted device of claim 1, wherein each of the first input and the second input comprises at least one parameter selected from the group consisting of: force, torque, linear displacement, linear velocity, linear acceleration, angular displacement, angular velocity, and angular acceleration.
13. The computer-assisted device of claim 1, further comprising:
the display unit, wherein the first hand input sensor and the second hand input sensor are physically coupled to opposite sides of the display unit.
14. The computer-assisted device of claim 1, wherein to determine the movement of the command based on the first input and the second input, the control unit is configured to:
discarding components of the first input or the second input, wherein the components are in a direction in which the display unit cannot be moved.
15. The computer-assisted device of claim 1, wherein to determine the movement of the command based on the first input and the second input, the control unit is configured to:
a speed proportional to a combination of the first input and the second input is determined.
16. The computer-assisted device of claim 1, wherein the first hand input sensor and the second hand input sensor are configured to receive inputs from different hands of the operator.
17. The computer-assisted device of any of claims 1-16, further comprising a head input sensor, wherein the control unit is further configured to:
receive a sixth input from the head input sensor; and
determine the commanded motion further based on the sixth input.
18. The computer-assisted device of claim 17, wherein the control unit is further configured to:
in response to determining that the magnitude of the sixth input is less than a baseline magnitude, reduce the baseline magnitude based on the sixth input.
19. The computer-assisted device of claim 17, wherein the control unit is further configured to:
in response to determining that the magnitude of the sixth input is not less than the baseline magnitude:
generate a fourth input by subtracting the magnitude of the sixth input from the baseline magnitude, and
determine the commanded motion based on the fourth input and a scaling of a combination of the first input and the second input.
20. The computer-assisted device of claim 17, wherein the control unit is further configured to: initialize a baseline amplitude based on the sixth input in response to entering an ergonomic adjustment mode of the display unit.
21. The computer-assisted device of any of claims 1-16, wherein the set of criteria further comprises: a ratio between the first amplitude of the first input and the second amplitude of the second input is less than a threshold ratio.
22. The computer-assisted device of any of claims 1-16, wherein the set of criteria further comprises: the first amplitude of the first input is less than a threshold amplitude and the second amplitude of the second input is less than the threshold amplitude.
23. The computer-assisted device of any of claims 1-16, wherein the set of criteria further comprises: the magnitude of the combination of the first input and the second input is greater than a third threshold.
24. The computer-assisted device of any of claims 1-16, wherein the commanded movement is determined based on a function of an angular difference between a first direction of the first input and a second direction of the second input.
25. The computer-assisted device of any of claims 1-16, wherein the control unit is further configured to command the actuator to move the repositionable structure based on the sensed movement of the operator's head.
26. The computer-assisted device of any of claims 1-16, wherein the repositionable structure is configured to move the display unit in a plane, and wherein a first direction of the first input and a second direction of the second input are in the plane.
27. A method, comprising:
receiving a first input from a first hand input sensor configured to receive input from an operator;
receiving a second input from a second hand input sensor configured to receive input from the operator; and
performing the following in response to a set of criteria being satisfied, the set of criteria including a first magnitude of the first input and a second magnitude of the second input being greater than a first threshold:
determining a commanded motion based on the first input and the second input, and
commanding, based on the commanded motion, an actuator to move a repositionable structure configured to support a display unit, the display unit configured to display an image viewable by the operator.
28. The method of claim 27, wherein the set of criteria further comprises a difference between a first direction of the first input and a second direction of the second input being less than a second threshold.
29. The method of claim 27, wherein determining the commanded motion based on the first input and the second input comprises:
discarding components of the first input or the second input, wherein the components are in a direction in which the display unit cannot be moved.
30. The method of claim 27, wherein determining the commanded motion based on the first input and the second input comprises:
a speed proportional to a combination of the first input and the second input is determined.
31. The method of claim 27, wherein the first magnitude is greater than the second magnitude, and wherein determining the commanded motion comprises:
determining a commanded magnitude of the commanded motion based on a combination of the second input and a scaled first input, the scaled first input being a scaling of the first input based on at least one of the first magnitude or the second magnitude.
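By way of illustration only, a Python sketch of the scaling of claim 31; rescaling the larger input down to the magnitude of the smaller one, so that neither hand can dominate the command, is an assumed choice rather than anything taken from the disclosure:

    import numpy as np

    def commanded_magnitude_with_scaled_larger(first_input, second_input):
        f = np.asarray(first_input, dtype=float)
        s = np.asarray(second_input, dtype=float)
        m1, m2 = np.linalg.norm(f), np.linalg.norm(s)   # claim 31 assumes m1 > m2
        scaled_first = f * (m2 / max(m1, 1e-9))         # scaled based on the two magnitudes
        return float(np.linalg.norm(scaled_first + s))  # commanded magnitude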
32. The method of claim 31, wherein determining the commanded motion based on the first input and the second input comprises:
determining a third input based on the first input and the second input;
generating a fourth input by scaling the third input based on directions of the first input and the second input;
in response to a combination of the first input and the second input being within a region around the fourth input, determining the commanded magnitude based on the combination; and
in response to the combination being outside the region, determining the commanded magnitude based on a fifth input, wherein the fifth input is within the region.
33. The method of claim 32, further comprising determining a size or shape of the region around the fourth input based on a ratio between the first input and the second input.
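By way of illustration only, a Python sketch of one possible reading of claims 32 and 33: a reference ("fourth") input is formed by scaling a combined ("third") input by how well the two hand directions agree, and the commanded magnitude is limited to a disc-shaped region around it whose radius grows with the ratio between the inputs. The summation, the alignment-based scaling, and the disc-shaped region are all assumptions:

    import numpy as np

    def limited_commanded_magnitude(first_input, second_input, base_radius=0.1):
        f = np.asarray(first_input, dtype=float)
        s = np.asarray(second_input, dtype=float)
        m1, m2 = np.linalg.norm(f), np.linalg.norm(s)
        third_input = f + s                                  # third input (assumed sum)
        u, v = f / (m1 + 1e-9), s / (m2 + 1e-9)
        alignment = max(0.0, float(np.dot(u, v)))            # agreement of the two directions
        fourth_input = alignment * third_input               # fourth input (claim 32)
        radius = base_radius * max(m1, m2) / max(min(m1, m2), 1e-9)  # region size from ratio (claim 33)
        offset = third_input - fourth_input
        if np.linalg.norm(offset) <= radius:                 # combination inside the region
            limited = third_input
        else:                                                # outside: use a point ("fifth input") in the region
            limited = fourth_input + radius * offset / np.linalg.norm(offset)
        return float(np.linalg.norm(limited))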
34. The method of claim 27, wherein determining the commanded motion based on the first input and the second input comprises:
in response to a difference between the first input and the second input being less than or equal to a second threshold, determining the commanded motion based on a sum of the first input and the second input; and
in response to the difference between the first input and the second input being greater than the second threshold, determining the commanded motion based on the sum of the first input and the second input and a scaling factor.
35. The method of claim 34, further comprising determining the scaling factor based on the first and second inputs and the second threshold.
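By way of illustration only, a Python sketch of claims 34 and 35; the particular scaling factor shown, which shrinks the summed command in proportion to how far the two inputs disagree beyond the threshold, is an assumption:

    import numpy as np

    def commanded_motion_with_mismatch_scaling(first_input, second_input,
                                               second_threshold=0.5):
        f = np.asarray(first_input, dtype=float)
        s = np.asarray(second_input, dtype=float)
        difference = np.linalg.norm(f - s)
        total = f + s
        if difference <= second_threshold:        # claim 34: inputs roughly agree
            return total
        # Claim 35: scaling factor derived from the inputs and the threshold.
        scaling_factor = second_threshold / difference
        return scaling_factor * total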
36. The method of claim 27, wherein each of the first input and the second input comprises at least one input selected from the group consisting of: force input, torque input, and force input interpreted as torque about a virtual center point.
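By way of illustration only, a Python sketch of the last input type listed in claim 36, a force interpreted as a torque about a virtual center point using the standard relation tau = r x F; the choice of virtual center (for example, a point midway between the two hand input sensors) is an assumption:

    import numpy as np

    def force_as_torque(force, application_point, virtual_center):
        # Lever arm from the virtual center to the point where the force is applied.
        r = np.asarray(application_point, dtype=float) - np.asarray(virtual_center, dtype=float)
        return np.cross(r, np.asarray(force, dtype=float))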
37. The method of claim 27, further comprising:
receiving a fifth input from a head input sensor; and
determining the commanded motion further based on the fifth input.
38. The method of claim 37, further comprising:
in response to determining that a magnitude of the fifth input is less than a baseline magnitude, reducing the baseline magnitude based on the fifth input; and
in response to determining that the magnitude of the fifth input is not less than the baseline magnitude, determining the commanded motion based on the baseline magnitude minus the magnitude of the fifth input and a scaling of a combination of the first input and the second input.
39. The method of claim 38, further comprising initializing the baseline magnitude based on the fifth input in response to entering an ergonomic adjustment mode of the display unit.
40. The method of claim 27, wherein the set of criteria further comprises at least one of:
a ratio between the first magnitude of the first input and the second magnitude of the second input being less than a threshold ratio;
the first magnitude of the first input being less than a threshold magnitude and the second magnitude of the second input being less than the threshold magnitude; or
a magnitude of a combination of the first input and the second input being greater than a third threshold.
41. The method of claim 27, wherein the commanded motion is determined based on a function of an angular difference between a first direction of the first input and a second direction of the second input.
42. The method of claim 27, further comprising commanding the actuator to move the repositionable structure based on the sensed movement of the operator's head.
43. One or more non-transitory machine-readable media comprising a plurality of machine-readable instructions that when executed by one or more processors are adapted to cause the one or more processors to perform the method of any of claims 27-42.
CN202180079997.2A 2020-11-30 2021-11-30 Techniques for adjusting display units of viewing systems Pending CN116528790A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/119,603 2020-11-30
US202163174754P 2021-04-14 2021-04-14
US63/174,754 2021-04-14
PCT/US2021/061234 WO2022115795A1 (en) 2020-11-30 2021-11-30 Techniques for adjusting a display unit of a viewing system

Publications (1)

Publication Number Publication Date
CN116528790A true CN116528790A (en) 2023-08-01

Family

ID=87396280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180079997.2A Pending CN116528790A (en) 2020-11-30 2021-11-30 Techniques for adjusting display units of viewing systems

Country Status (1)

Country Link
CN (1) CN116528790A (en)

Similar Documents

Publication Publication Date Title
US11534246B2 (en) User input device for use in robotic surgery
US8473031B2 (en) Medical robotic system with functionality to determine and display a distance indicated by movement of a tool robotically manipulated by an operator
JP6787623B2 (en) Very dexterous system user interface
US20230031641A1 (en) Touchscreen user interface for interacting with a virtual model
US11504200B2 (en) Wearable user interface device
US20230064265A1 (en) Moveable display system
US20240000534A1 (en) Techniques for adjusting a display unit of a viewing system
Bihlmaier et al. Endoscope robots and automated camera guidance
US20240025050A1 (en) Imaging device control in viewing systems
WO2023023186A1 (en) Techniques for following commands of an input device using a constrained proxy
CN116528790A (en) Techniques for adjusting display units of viewing systems
US20220287788A1 (en) Head movement control of a viewing system
US20220296323A1 (en) Moveable display unit on track
US20240024049A1 (en) Imaging device control via multiple input modalities
US20230393544A1 (en) Techniques for adjusting a headrest of a computer-assisted system
WO2023014732A1 (en) Techniques for adjusting a field of view of an imaging device based on head motion of an operator
WO2023069745A1 (en) Controlling a repositionable structure system based on a geometric relationship between an operator and a computer-assisted device
CN118043765A (en) Controlling a repositionable structural system based on a geometric relationship between an operator and a computer-aided device
WO2023177802A1 (en) Temporal non-overlap of teleoperation and headrest adjustment in a computer-assisted teleoperation system
WO2022232170A1 (en) Method and apparatus for providing input device repositioning reminders
WO2024076592A1 (en) Increasing mobility of computer-assisted systems while maintaining a partially constrained field of view
WO2024049942A1 (en) Techniques for updating a registration transform between an extended-reality system and a computer-assisted device
WO2023244636A1 (en) Visual guidance for repositioning a computer-assisted system
WO2023192465A1 (en) User interface interaction elements with associated degrees of freedom of motion
WO2023150449A1 (en) Systems and methods for remote mentoring in a robot assisted medical system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination