EP4330796A1 - Handheld controller with thumb pressure sensing - Google Patents

Handheld controller with thumb pressure sensing

Info

Publication number
EP4330796A1
EP4330796A1 (application EP21727684.9A)
Authority
EP
European Patent Office
Prior art keywords
input
user
handheld controller
surface region
input surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21727684.9A
Other languages
German (de)
French (fr)
Inventor
Derrick Readinger
Amir Norton
Yi-Yaun Chen
Glen Jason Tompkins
Jennifer Lynn Spurlock
Bradley Morris Johnson
Valerie Susan HANSON
Robert Carey Leonard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Technologies LLC
Original Assignee
Meta Platforms Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Meta Platforms Technologies LLC
Publication of EP4330796A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • the present disclosure is generally directed to handheld controllers, systems, and methods that employ thumb pressure sensing.
  • Handheld controllers such as those incorporated in artificial-reality systems, gaming systems, and the like, are convenient devices by which a user of such a system may provide accurate and timely input (e.g., by way of input buttons) to that system.
  • some handheld controllers may include components (e.g., haptic feedback devices) that provide some type of output or feedback for the user.
  • the ability to enable the associated input via a cost-effective handheld controller may be limited. For example, some subtle user actions, such as applying a grasping or pinching action to a virtual object being presented in an artificial-reality system, may be difficult to facilitate with a handheld controller that provides a limited number and type of input components.
  • a handheld controller comprising: a body comprising: a grasping region configured to be grasped by a hand; and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and a pressure sensor coupled with the input surface region, wherein the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input.
  • the handheld controller may further comprise: a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, wherein the trigger button detects whether the trigger button has been activated for interpretation as a second input.
  • the handheld controller may further comprise: a haptic actuator coupled with the trigger button and that provides haptic feedback to the index finger based on an output.
  • the pressure sensor may comprise a static pressure sensor that senses depression of the input surface region.
  • the pressure sensor may comprise a zero-movement sensor.
  • the handheld controller may further comprise: a haptic actuator coupled to the input surface region and that provides haptic feedback to the thumb based on an output.
  • the handheld controller may further comprise a capacitive sensor coupled to the input surface region and configured to detect a touching with the input surface region for interpretation as a second input.
  • the handheld controller may further comprise a capacitive sensor that detects a touched location on the input surface region for interpretation as a second input.
  • the handheld controller may further comprise an input button coupled to the body outside the input surface region and configured to be engaged by the thumb while the hand grasps the grasping region, wherein the input button indicates whether the input button has been engaged for interpretation as a second input.
  • a system comprising: a display that presents a virtual object in an artificial environment; a processor that processes a plurality of inputs for manipulating the virtual object; and a handheld controller comprising: a body comprising: a grasping region configured to be grasped by a hand; and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and a pressure sensor coupled with the input surface region, wherein the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region, and wherein the processor processes the level of pressure as a first input of the plurality of inputs.
  • the handheld controller may further comprise: a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand, wherein: the trigger button indicates whether the trigger button has been activated; and the processor processes whether the trigger button has been activated as a second input of the plurality of inputs.
  • the processor may interpret a combination of the first input and the second input as a manipulation of the virtual object.
  • the manipulation of the virtual object may comprise a pinching action imposed on the virtual object.
  • the handheld controller may further comprise a capacitive sensor coupled to the input surface region, wherein: the capacitive sensor detects a touched location of the input surface region; and the processor processes a representation of the touched location as a third input of the plurality of inputs.
  • the processor may interpret the third input as a navigation of a menu presented by the display.
  • the processor may interpret a combination of the first input, the second input, and the third input as a manipulation of the virtual object.
  • the manipulation of the virtual object may comprise a rotational action imposed on the virtual object.
  • a method comprising: detecting, by a pressure sensor coupled to an input surface region of a body of a handheld controller, a level of pressure imposed on the input surface region, wherein: the body further comprises a grasping region configured to be grasped by a hand; and the input surface region is configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and interpreting, by a processor, the level of pressure as a first input.
  • the method may further comprise: detecting, by a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, whether the trigger button has been activated; and interpreting, by the processor, whether the trigger button has been activated as a second input.
  • the method may further comprise: presenting, by a display, a virtual object in an artificial environment; and interpreting, by the processor, a combination of the first input and the second input as a manipulation of the virtual object.
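To make the claimed input flow concrete, the following is a minimal Python sketch (not part of the patent) of how a controller report carrying the thumb-pressure level (first input) and trigger state (second input) might be interpreted as a manipulation such as a pinch. The report structure, field names, and threshold are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical input report for the described controller; field names are
# illustrative, not taken from the patent.
@dataclass
class ControllerReport:
    thumb_pressure: float   # normalized 0.0-1.0 level from the pressure sensor (first input)
    trigger_active: bool    # whether the trigger button is activated (second input)

def interpret(report: ControllerReport) -> str:
    """Interpret the combination of the first and second inputs."""
    if report.trigger_active and report.thumb_pressure > 0.1:
        # Pressure plus trigger read as a manipulation of a virtual object (e.g., a pinch).
        return f"pinch(force={report.thumb_pressure:.2f})"
    if report.thumb_pressure > 0.1:
        return f"press(level={report.thumb_pressure:.2f})"
    return "idle"

print(interpret(ControllerReport(thumb_pressure=0.6, trigger_active=True)))
```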
  • FIG. 1 is a block diagram of an exemplary handheld controller that incorporates thumb pressure sensing.
  • FIG. 2 is a perspective view of an exemplary handheld controller that incorporates thumb pressure sensing.
  • FIG. 3 is a flow diagram of an exemplary method employed by a handheld controller that includes thumb pressure sensing.
  • FIG. 4 is an illustration of exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
  • FIG. 5 is an illustration of an exemplary virtual-reality headset that may be used in connection with embodiments of this disclosure.
  • FIG. 6 is a block diagram of components implemented in an exemplary handheld controller that employs thumb pressure sensing.
  • FIG. 7 is a block diagram of an exemplary computing architecture for implementation of an artificial-reality system that incorporates a handheld controller employing thumb pressure sensing.
  • FIG. 8 is an illustration of an exemplary system that incorporates an eye-tracking subsystem capable of tracking a user's eye(s).
  • FIG. 9 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 8.
  • an example handheld controller may have a body that includes a grasping region configured to be grasped by a hand, as well as an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region.
  • the handheld controller may further include a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input.
  • the use of such a handheld controller may facilitate more subtle or varied input (e.g., pinching in an artificial environment), especially in conjunction with other input components (e.g., one or more buttons), while remaining cost-effective.
  • With reference to FIGS. 1-9, the following provides detailed descriptions of exemplary handheld controllers, systems, and methods employing thumb pressure sensing.
  • Various embodiments of an exemplary handheld controller are described in conjunction with the block diagram of FIG. 1 and the perspective view of FIG. 2.
  • An exemplary method employing thumb pressure sensing via a handheld controller is discussed in conjunction with FIG. 3.
  • Discussions of exemplary augmented-reality glasses and an exemplary virtual-reality headset that may be used in connection with the various handheld controller embodiments are discussed in connection with the illustrations of FIGS. 4 and 5, respectively.
  • a discussion of some components that may be included in an exemplary handheld controller employing thumb pressure sensing is provided in relation to the block diagram of FIG. 6.
  • A description of an exemplary computing architecture for an artificial-reality system that incorporates a handheld controller, as discussed herein, is presented in connection with FIG. 7. Moreover, in conjunction with FIGS. 8 and 9, an exemplary display system and an exemplary eye-tracking subsystem that may be employed in connection with an artificial-reality system are discussed.
  • the controller is configured to sense pressure applied by a thumb of a user. However, in other embodiments, additionally or alternatively, pressure sensing may be provided for one or more other digits (e.g., the index finger). Additionally, while the discussion below focuses on a single handheld controller, at least some user systems (e.g., artificial-reality systems) may provide two such controllers, one configured for each hand.
  • FIG. 1 is a block diagram of an exemplary handheld controller 100 that incorporates thumb pressure sensing. More specifically, FIG. 1 depicts possible components associated with various embodiments of handheld controller 100. Some of the depicted components may not be included in some embodiments of handheld controller 100, while other components (e.g., a battery, one or more printed circuit boards, and so on) that may be included in handheld controller 100 are not illustrated in FIG. 1 to simplify the following discussion. While the block diagram of FIG. 1 represents a kind of schematic representation of handheld controller 100, FIG. 2, described in greater detail below, presents a perspective view of just one possible embodiment of such a controller.
  • Handheld controller 100 may include a body 102 with a grasping region 104 and an input surface region 106.
  • grasping region 104 may be configured to be grasped by a hand of a user.
  • grasping region 104 may be configured to be engaged by a palm and/or one or more fingers (e.g., fingers other than the thumb and/or index finger) of the hand.
  • grasping region 104 may define one or more physical features (e.g., smooth curves, ridges, indentations, and/or the like) to facilitate engagement by the hand.
  • input surface region 106 may be configured to be engaged with a thumb of the hand while the hand grasps grasping region 104.
  • input surface region 106 may be an area of body 102 just large enough for the user's thumb to engage.
  • input surface region 106 may be an area of body 102 sufficiently large to provide a range of locations in one or two dimensions for the user's thumb to engage.
  • body 102 may provide a thumb rest or similar area in grasping region 104 to comfortably facilitate positioning of the thumb away from input surface region 106.
  • a pressure sensor 132 may be coupled (e.g., mechanically, electrically, or the like) to input surface region 106 such that pressure sensor 132 indicates a level of pressure imposed on input surface region 106 (e.g., by the user's thumb).
  • pressure sensor 132 may provide some kind of output (e.g., an analog voltage, a capacitance, a digital value, or the like) that may serve as an indication of the level of pressure imposed on input surface region 106.
  • pressure sensor 132 may be a sensor that measures an amount or level of pressure imposed on input surface region 106 (e.g., by detecting a position of input surface region 106, or a change in that position, orthogonal to input surface region 106, by detecting a capacitance or change in capacitance of input surface region 106, etc.). In other examples, pressure sensor 132 may measure the pressure imposed on input surface region 106 directly, or may measure some characteristic (e.g., amount of movement, capacitance, etc.) as a proxy for the amount of pressure imposed on input surface region 106. Further, pressure sensor 132 may indicate at least one surface location (e.g., an (x, y) coordinate) on input surface region 106 at which some pressure is being imposed.
  • pressure sensor 132 may be a static pressure sensor that senses and/or distinguishes between small levels of depression, movement, or location of input surface region 106.
  • input surface region 106 may facilitate the use of a static pressure sensor by presenting a surface that flexes or otherwise moves some distance in response to an associated level of pressure imposed on input surface region 106.
  • pressure sensor 132 may be a zero-movement pressure sensor that senses a level of pressure imposed on input surface region 106, such as by way of measuring some characteristic of input surface region 106 corresponding to pressure imposed on input surface region 106 without causing any depression or other movement of input surface region 106.
  • pressure sensor 132 may measure capacitance at one or more locations of input surface region 106 to indicate a level of pressure imposed on input surface region 106.
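As an illustration of how a raw pressure-sensor output (e.g., an ADC count derived from an analog voltage, or a capacitance change) might be turned into a normalized pressure level, the sketch below uses invented calibration constants; it is an assumption for illustration, not taken from the patent.

```python
# Hypothetical conversion of a raw sensor reading into a normalized pressure level.
# The calibration constants below are invented for illustration.

RAW_AT_REST = 512       # assumed reading with no thumb pressure
RAW_AT_FULL = 3900      # assumed reading at the maximum pressure the sensor resolves

def pressure_level(raw_reading: int) -> float:
    """Map a raw pressure-sensor reading to a level in [0.0, 1.0]."""
    span = RAW_AT_FULL - RAW_AT_REST
    level = (raw_reading - RAW_AT_REST) / span
    return max(0.0, min(1.0, level))   # clamp to the valid range

print(pressure_level(2200))  # roughly 0.5 for this mid-range reading
```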
  • a capacitive sensor 134 may also be coupled to input surface region 106 and configured to detect a touching with input surface region 106 (e.g., by the thumb of the user). Further, in some examples, capacitive sensor 134 may detect a surface location (e.g., an (x, y) coordinate) on input surface region 106 at which the touching is occurring. In some embodiments, pressure sensor 132 and capacitive sensor 134 may be the same sensor.
  • input surface region 106 may be employed to detect a touching with input surface region 106, and possibly a location on input surface region 106 at which such a touching is occurring, in other embodiments. Consequently, in some examples, input surface region 106 may be employed as a joystick, touchpad, or other locational or directional input device.
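One way a capacitive sensor could report both a touching and a touched (x, y) location, enabling the touchpad-like use described above, is to take a centroid over a small grid of capacitance deltas. The grid size, threshold, and values below are illustrative assumptions, not details from the patent.

```python
# Hypothetical centroid calculation over a small grid of capacitance deltas,
# giving a normalized (x, y) touch location such as capacitive sensor 134 might report.

def touch_location(grid):
    """Return (x, y) in normalized [0, 1] coordinates, or None if nothing is touched."""
    total = sum(sum(row) for row in grid)
    if total < 1e-6:          # below a touch threshold: no contact detected
        return None
    rows, cols = len(grid), len(grid[0])
    x = sum(c * v for row in grid for c, v in enumerate(row)) / total / (cols - 1)
    y = sum(r * v for r, row in enumerate(grid) for v in row) / total / (rows - 1)
    return x, y

grid = [[0, 0, 0],
        [0, 2, 5],    # strongest response near the right-middle of the pad
        [0, 1, 2]]
print(touch_location(grid))   # -> (0.85, 0.65)
```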
  • Also included in handheld controller 100, and coupled (e.g., mechanically) to body 102, may be a trigger button 140 configured to be engaged by an index finger of the hand while the hand grasps grasping region 104.
  • Trigger button 140 may provide an indication (e.g., a voltage, a Boolean indication, etc.) whether trigger button 140 has been activated (e.g., depressed by the user's index finger).
  • Concurrent manipulation by the user of input surface region 106 (e.g., as indicated by pressure sensor 132 and possibly capacitive sensor 134) and trigger button 140 may be interpreted as a "pinch" or other manipulation of a virtual object being presented to a user.
  • coupled to body 102 may be one or more additional input buttons 150 positioned or configured to be activated by the user (e.g., by the thumb of the user).
  • transmitter 110 may transmit a representation or indication of an output of pressure sensor 132, capacitive sensor 134, trigger button 140, and/or input button 150 to a processor (e.g., a processor not located on handheld controller 100).
  • transmitter 110 may be a wired transmitter (e.g., an electrical signal transmitter, an optical signal transmitter, etc.) or a wireless transmitter (e.g., a radio frequency (RF) transmitter, a Bluetooth ® transmitter, etc.).
  • handheld controller 100 may interpret some combination of the representations of the outputs of two or more of pressure sensor 132, capacitive sensor 134, trigger button 140, and/or input button 150 as a manipulation of a virtual object (e.g., a virtual object of an artificial environment presented to the user by way of a display device).
  • handheld controller 100 may include a receiver 120 that receives outputs (e.g., from a processor) for one or more components of handheld controller 100, such as haptic actuators 160.
  • handheld controller 100 may include one or more haptic actuators 160 coupled to body 102 or other portions of handheld controller 100, where the haptic actuators 160 may provide haptic feedback to one or more locations of handheld controller 100, including, but not limited to, body 102 (e.g., grasping region 104 and/or input surface region 106) and/or trigger button 140.
  • the haptic feedback may indicate contact in the artificial environment of one or more portions of the user's hand with a virtual object.
  • haptic feedback presented by haptic actuators 160 may indicate a level of pressure imposed on the virtual object by the thumb of the user on input surface region 106, as sensed by way of pressure sensor 132.
  • haptic actuators 160 may include one or more linear resonance actuators (LRAs), eccentric rotating masses (ERMs), or the like.
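The sketch below illustrates one plausible way haptic feedback could be scaled with the sensed thumb-pressure level, e.g., as a drive amplitude for an LRA. The gamma curve and function names are assumptions for illustration only, not a mechanism specified by the patent.

```python
# Hypothetical mapping from the sensed thumb-pressure level to an LRA drive
# amplitude, so that haptic feedback reflects how hard the virtual object is squeezed.

def lra_amplitude(pressure_level: float, gamma: float = 0.5) -> float:
    """Return a drive amplitude in [0.0, 1.0] for a linear resonance actuator."""
    p = max(0.0, min(1.0, pressure_level))
    return p ** gamma          # perceptual boost at low pressures (assumed curve)

for p in (0.1, 0.5, 0.9):
    print(f"pressure={p:.1f} -> amplitude={lra_amplitude(p):.2f}")
```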
  • the processor that receives the inputs (e.g., from transmitter 110) and provides the outputs (e.g., to receiver 120) may instead be located in or on handheld controller 100, thus potentially eliminating the use of transmitter 110 and receiver 120 for at least the transmitting and receiving operations noted above.
  • FIG. 2 is a perspective view of an exemplary handheld controller 200 that may serve as one example of handheld controller 100 of FIG. 1.
  • handheld controller 200 may include a body 202 with a grasping region 204 (corresponding to grasping region 104 of FIG. 1) and an input surface region 206 (corresponding to input surface region 106 of FIG. 1).
  • handheld controller 200 may also include a trigger button 240 (serving as trigger button 140 of FIG. 1) and one or more input buttons 250 (corresponding to input button 150 of FIG. 1), both of which may reside external to input surface region 206.
  • input surface region 206 and input buttons 250 may be selectively engaged by the thumb of the hand of the user while the hand of the user engages grasping region 204 and while the index finger of the hand is positioned on trigger button 240.
  • FIG. 2 presents one particular configuration or arrangement of handheld controller 200
  • other configurations for handheld controller 200 are also possible.
  • input surface region 206 may be placed on a substantially similar plane of body 202 as input buttons 250
  • body 202 may be shaped in other ways such that input surface region 206 may lie in a plane substantially different from that portion of body 202 that supports input buttons 250.
  • input surface region 206 may be angled downward away from input buttons 250 (e.g., perpendicular or parallel to grasping region 204, and between zero degrees and 90 degrees downward relative to a surface of body 202 holding input buttons 250).
  • such a configuration for input surface region 206 relative to body 202 may facilitate a comfortable angle of opposition between input surface region 206 and trigger button 240.
  • one or more components of handheld controller 200 may reside inside body 202.
  • Other components not depicted in FIG. 2 may be included in handheld controller 200 in other examples.
  • handheld controller 200 may include one or more cameras 212
  • handheld controller 200 may include optical sensors, acoustic sensors (e.g., sonar), time of flight sensors, structured light emitters/sensors, Global Positioning System (GPS) modules, inertial measurement units (IMUs), and other sensors. Any or all of these sensors may be used alone or in combination to provide input data to handheld controller 200.
  • Cameras 212 may be positioned substantially anywhere on handheld controller 200. In some embodiments, cameras 212 may be positioned at angles offset from each other (e.g., 30 degrees, 40 degrees, 50 degrees, 60 degrees, 70 degrees, 80 degrees, or 90 degrees offset from each other). This offset may allow cameras 212 to capture different portions of the physical environment in which handheld controller 200 is being used. In some cases, cameras 212 may be positioned to avoid occlusions from a user's hand, fingers, or other body parts.
  • cameras 212 may be configured to capture portions of a room including the walls, floor, ceiling, objects within the room, people within the room, or other features of the room.
  • cameras 212 may be configured to capture images of the ground, the sky, or the 360-degree surroundings of the device.
  • the images may be used in isolation or in combination to determine the device's current location in space.
  • the images may be used to determine distances to objects within a room. Movements between sequences of subsequently taken images may be used to calculate which direction handheld controller 200 has moved and how fast handheld controller 200 has moved relative to its surroundings.
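A very rough sketch of estimating motion from successive frames appears below: it averages the displacement of matched feature points and divides by the frame interval. A real controller would use a full visual (and likely inertial) tracking pipeline; the matched points and units here are invented for illustration.

```python
# Illustrative-only motion estimate from two sets of matched image feature points.
def estimate_motion(prev_pts, curr_pts, dt):
    """Return (dx, dy, speed) in image-plane units, with speed per second."""
    dxs = [c[0] - p[0] for p, c in zip(prev_pts, curr_pts)]
    dys = [c[1] - p[1] for p, c in zip(prev_pts, curr_pts)]
    dx, dy = sum(dxs) / len(dxs), sum(dys) / len(dys)
    speed = (dx * dx + dy * dy) ** 0.5 / dt
    return dx, dy, speed

prev = [(100, 120), (200, 80), (150, 160)]   # invented feature locations in frame t
curr = [(104, 121), (203, 82), (154, 161)]   # the same features in frame t+1
print(estimate_motion(prev, curr, dt=1 / 30))
```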
  • the images may be used to determine the location of another peripheral device (e.g., a second handheld controller 200 in the user's other hand).
  • the images may also be used to capture portions of the user who is using handheld controller 200, including the user's hand, fingers, arm, torso, legs, face, head, or other body parts.
  • Handheld controller 200 may use images of these body parts to determine its location relative to the user and relative to other objects in the room including walls, doors, floor, and ceiling, without relying on any other outside cameras or sensors to determine its location.
  • handheld controller 200 may communicate with a headset (e.g., a headset of an artificial-reality system), as described in greater detail below.
  • the headset may include a display and one or more computing components. These computing components may be configured to generate and present a display to the user.
  • the display may include a user interface and content such as video content, web content, video game content, etc.
  • the computing components in the headset may also be configured to generate map data. For example, the computing components in the headset may receive inputs from sensors worn by the user or mounted on the headset and use that sensor data to create a map of the user's environment. This map data may be shared with handheld controller 200, which may use this map data when determining its location.
  • the self-tracking ability of handheld controller 200 within such a system may be employed to determine a relative location and/or orientation of the user's hand relative to a virtual object within an artificial environment, as displayed by the headset, to facilitate the pinching or grasping of the virtual object, as described above.
  • FIG. 3 is a flow diagram of an exemplary method 300 employed by a handheld controller (e.g., handheld controller 100 or 200) that includes thumb pressure sensing.
  • In method 300, a level of pressure imposed on an input surface region (e.g., input surface region 106 or 206) of a handheld controller may be detected (e.g., by a pressure sensor).
  • the level of pressure may be interpreted (e.g., by a processor) as a first input.
  • Whether a trigger button (e.g., trigger button 140 or 240) has been activated may be detected. Whether the trigger button has been activated may be interpreted (e.g., by the processor) as a second input.
  • a combination of the first input and the second input may be interpreted (e.g., by the processor) as a manipulation of a virtual object.
  • the processor may execute software instructions stored in one or more memory devices to perform such an interpretation, as well as other operations ascribed to a processor, as discussed herein.
  • the first input and the second input may be interpreted as a "pinch" or "grasp" of a virtual object, where the amount of virtual force imposed on the virtual object is determined at least in part by the level of pressure imposed on input surface region 106 in conjunction with activation of trigger button 140.
  • activation of trigger button 140 may not be necessary to interpret some level of pressure on input surface region 106 as applying some pressure or force on the virtual object.
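The following sketch shows one way the pinch/grasp interpretation described above might map the pressure level (optionally gated by the trigger) to a virtual force on the object. The full-scale force constant and function names are assumptions for illustration.

```python
# Hypothetical interpretation of a pinch/grasp: the virtual force scales with the
# sensed thumb-pressure level, optionally requiring the trigger to be active.

MAX_PINCH_FORCE_N = 20.0   # assumed full-scale virtual force, in newtons

def pinch_force(pressure_level: float, trigger_active: bool,
                require_trigger: bool = True) -> float:
    """Return the virtual force (N) to apply to the grasped virtual object."""
    if require_trigger and not trigger_active:
        return 0.0
    return max(0.0, min(1.0, pressure_level)) * MAX_PINCH_FORCE_N

print(pinch_force(0.75, trigger_active=True))   # 15.0 N of virtual pinch force
```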
  • a visual representation of the amount of force imposed on the virtual object may be presented to a user (e.g., via a visual display).
  • the virtual object may appear to be increasingly deformed as the level of pressure imposed via input surface region 106 is increased. Additionally or alternatively, some other visual aspect (e.g., color, brightness, or the like) of the virtual object may be altered as the level of pressure changes.
  • the level of pressure imposed on the virtual object may also be reflected by way of a level or type of force provided by one or more haptic actuators 160 on grasping region 104, input surface region 106, trigger button 140, and/or some other region of handheld controller 100, as mentioned above.
  • An aspect (e.g., volume, pitch, and so on) of one or more sounds (e.g., generated by way of an audio speaker) may also reflect the level of pressure imposed on the virtual object.
  • Other types of manipulations, such as rotating or spinning, may also be facilitated via handheld controller 100.
  • applying a level of pressure on, or at least making contact with, input surface region 106 combined with moving the location of the level of pressure on input surface region 106 (e.g., by way of dragging the thumb across input surface region 106), possibly in conjunction with activation (e.g., pressing) of trigger button 140, may be interpreted as a rotating or spinning of a virtual object.
  • the direction or path of contact across input surface region 106 may determine the axis of the imposed rotation of the virtual object.
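As an illustration of the rotation behavior described above, the sketch below derives a rotation axis perpendicular to the thumb-drag direction (in the plane of the input surface) and an angle proportional to the drag length. The scale factor and coordinate conventions are assumptions, not details from the patent.

```python
import math

# Hypothetical mapping from a thumb-drag path on the input surface to a rotation
# of the virtual object: axis perpendicular to the drag, angle scaled by drag length.
def drag_to_rotation(start, end, degrees_per_unit=180.0):
    """Return (axis_x, axis_y, axis_z, angle_deg) for a drag from start to end."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    if length < 1e-6:
        return (0.0, 0.0, 1.0, 0.0)        # no drag: no rotation
    axis = (-dy / length, dx / length, 0.0)  # perpendicular to the drag, in-plane
    return (*axis, length * degrees_per_unit)

# Horizontal drag across half the surface rotates about the in-plane vertical axis.
print(drag_to_rotation((0.2, 0.5), (0.7, 0.5)))
```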
  • Input surface region 106, along with pressure sensor 132 and/or capacitive sensor 134, may be employed to navigate a displayed menu, an input slider, a set of input objects, or another input item by way of contacting different areas of input surface region 106 and/or by applying varying levels of pressure to input surface region 106 that may be interpreted by a processor as different portions or areas of the input item.
  • a particular level of pressure imposed on input surface region 106 may be interpreted by a processor as a different logical level of a multilevel menu or other input item.
  • input surface region 106 in conjunction with pressure sensor 132 and/or capacitive sensor 134, may serve as a replacement for a joystick, touchpad, or other directional or positional input device.
  • A detected two-dimensional location contacted by the user on input surface region 106 (e.g., relative to some reference location of input surface region 106) may be interpreted as a directional command, while a distance of the two-dimensional location from the reference location may be interpreted as a magnitude command (e.g., speed, force, etc.) associated with the directional command.
  • the level of pressure imposed at the detected two-dimensional location on input surface region 106 may be interpreted as a magnitude command associated with the directional command.
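A joystick-style reading of the input surface, as described above, might look like the following sketch: the touched location relative to a reference point gives the direction, and either the distance from that point or the pressure level gives the magnitude. The reference location and scaling are illustrative assumptions.

```python
import math

REFERENCE = (0.5, 0.5)   # assumed reference location (center of the surface)

def directional_command(touch_xy, pressure_level=None):
    """Return (angle_deg, magnitude) for a touch at touch_xy in [0,1]x[0,1]."""
    dx, dy = touch_xy[0] - REFERENCE[0], touch_xy[1] - REFERENCE[1]
    angle = math.degrees(math.atan2(dy, dx))
    distance = math.hypot(dx, dy)
    # Magnitude from pressure if available, otherwise from distance to the reference.
    magnitude = pressure_level if pressure_level is not None else min(1.0, distance / 0.5)
    return angle, magnitude

print(directional_command((0.9, 0.5)))                      # rightward, magnitude from distance
print(directional_command((0.9, 0.5), pressure_level=0.3))  # rightward, magnitude from pressure
```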
  • Other examples of providing input via contact with input surface region 106 are also possible.
  • the interpretation of the level of pressure and/or location on input surface region 106 by pressure sensor 132 and/or capacitive sensor 134 may depend on a state or context of the system in which handheld controller 100 is being used. For example, if a virtual object is currently displayed in an artificial environment (e.g., in an artificial-reality system) to the user of handheld controller 100, user input provided via input surface region 106 may be interpreted as a manipulation of the virtual object. Instead, if the system currently displays a menu of selectable items to the user, user input provided via input surface region 106 may be interpreted as a manipulation of that menu.
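The context-dependent interpretation described above could be organized as a simple dispatch on the current display state, as in the hypothetical sketch below; the context names and menu size are invented for illustration.

```python
# Hypothetical context-dependent dispatch: the same surface event is read as an
# object manipulation or a menu navigation depending on what is currently displayed.

def interpret_surface_input(context: str, touch_xy, pressure_level: float) -> str:
    if context == "virtual_object_displayed":
        return f"manipulate_object(at={touch_xy}, force_level={pressure_level:.2f})"
    if context == "menu_displayed":
        row = int(touch_xy[1] * 5)           # assumed 5-item menu, mapped from the y coordinate
        return f"highlight_menu_item({row})"
    return "ignore"

print(interpret_surface_input("virtual_object_displayed", (0.4, 0.6), 0.8))
print(interpret_surface_input("menu_displayed", (0.4, 0.6), 0.8))
```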
  • Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof.
  • Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content.
  • the artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer).
  • artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
  • Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without near-eye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 400 in FIG. 4) or that visually immerses a user in an artificial reality (such as, e.g., virtual-reality system 500 in FIG. 5). While some artificial-reality devices may be self-contained systems, other artificial-reality devices may communicate and/or coordinate with external devices to provide an artificial-reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.
  • augmented-reality system 400 may include an eyewear device 402 with a frame 410 configured to hold a left display device 415(A) and a right display device 415(B) in front of a user's eyes.
  • Display devices 415(A) and 415(B) may act together or independently to present an image or series of images to a user.
  • Although augmented-reality system 400 includes two displays, embodiments of this disclosure may be implemented in augmented-reality systems with a single NED or more than two NEDs.
  • augmented-reality system 400 may include one or more sensors, such as sensor 440.
  • Sensor 440 may generate measurement signals in response to motion of augmented-reality system 400 and may be located on substantially any portion of frame 410.
  • Sensor 440 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof.
  • augmented-reality system 400 may or may not include sensor 440 or may include more than one sensor.
  • the IMU may generate calibration data based on measurement signals from sensor 440.
  • Examples of sensor 440 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
  • augmented-reality system 400 may also include a microphone array with a plurality of acoustic transducers 420(A)-420(J), referred to collectively as acoustic transducers 420.
  • Acoustic transducers 420 may represent transducers that detect air pressure variations induced by sound waves.
  • Each acoustic transducer 420 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format).
  • Acoustic transducers 420 may include, for example, acoustic transducers 420(A) and 420(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 420(C), 420(D), 420(E), 420(F), 420(G), and 420(H), which may be positioned at various locations on frame 410, and/or acoustic transducers 420(I) and 420(J), which may be positioned on a corresponding neckband 405.
  • one or more of acoustic transducers 420(A)-(F) may be used as output transducers (e.g., speakers).
  • acoustic transducers 420(A) and/or 420(B) may be earbuds or any other suitable type of headphone or speaker.
  • the configuration of acoustic transducers 420 of the microphone array may vary. While augmented-reality system 400 is shown in FIG. 4 as having ten acoustic transducers 420, the number of acoustic transducers 420 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 420 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 420 may decrease the computing power required by an associated controller 450 to process the collected audio information. In addition, the position of each acoustic transducer 420 of the microphone array may vary. For example, the position of an acoustic transducer 420 may include a defined position on the user, a defined coordinate on frame 410, an orientation associated with each acoustic transducer 420, or some combination thereof.
  • Acoustic transducers 420(A) and 420(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 420 on or surrounding the ear in addition to acoustic transducers 420 inside the ear canal. Having an acoustic transducer 420 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal.
  • augmented-reality device 400 may simulate binaural hearing and capture a 3D stereo sound field around a user's head.
  • acoustic transducers 420(A) and 420(B) may be connected to augmented-reality system 400 via a wired connection 430, and in other embodiments acoustic transducers 420(A) and 420(B) may be connected to augmented-reality system 400 via a wireless connection (e.g., a Bluetooth ® connection).
  • acoustic transducers 420(A) and 420(B) may not be used at all in conjunction with augmented-reality system 400.
  • Acoustic transducers 420 on frame 410 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 415(A) and 415(B), or some combination thereof. Acoustic transducers 420 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 400. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 400 to determine relative positioning of each acoustic transducer 420 in the microphone array.
  • augmented-reality system 400 may include or be connected to an external device (e.g., a paired device), such as neckband 405.
  • Neckband 405 generally represents any type or form of paired device.
  • the following discussion of neckband 405 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, handheld controllers, tablet computers, laptop computers, other external compute devices, etc.
  • neckband 405 may be coupled to eyewear device 402 via one or more connectors.
  • the connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components.
  • eyewear device 402 and neckband 405 may operate independently without any wired or wireless connection between them.
  • While FIG. 4 illustrates the components of eyewear device 402 and neckband 405 in example locations on eyewear device 402 and neckband 405, the components may be located elsewhere and/or distributed differently on eyewear device 402 and/or neckband 405.
  • the components of eyewear device 402 and neckband 405 may be located on one or more additional peripheral devices paired with eyewear device 402, neckband 405, or some combination thereof.
  • Pairing external devices, such as neckband 405, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities.
  • Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 400 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality.
  • neckband 405 may allow components that would otherwise be included on an eyewear device to be included in neckband 405 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads.
  • Neckband 405 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 405 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 405 may be less invasive to a user than weight carried in eyewear device 402, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
  • Neckband 405 may be communicatively coupled with eyewear device 402 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 400.
  • neckband 405 may include two acoustic transducers (e.g., 420(I) and 420(J)) that are part of the microphone array (or potentially form their own microphone subarray).
  • Neckband 405 may also include a controller 425 and a power source 435.
  • Acoustic transducers 420(I) and 420(J) of neckband 405 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital).
  • acoustic transducers 420(I) and 420(J) may be positioned on neckband 405, thereby increasing the distance between the neckband acoustic transducers 420(I) and 420(J) and other acoustic transducers 420 positioned on eyewear device 402.
  • increasing the distance between acoustic transducers 420 of the microphone array may improve the accuracy of beamforming performed via the microphone array.
  • the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 420(D) and 420(E).
  • Controller 425 of neckband 405 may process information generated by the sensors on neckband 405 and/or augmented-reality system 400.
  • controller 425 may process information from the microphone array that describes sounds detected by the microphone array.
  • controller 425 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array.
  • controller 425 may populate an audio data set with the information.
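For illustration only, the sketch below shows a simplified direction-of-arrival estimate from the time difference of arrival between two microphones a known distance apart, under a far-field, single-source assumption; the patent does not specify the DOA algorithm used by controller 425, and the values are invented.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature

def doa_angle(tdoa_s: float, mic_spacing_m: float) -> float:
    """Return the arrival angle in degrees relative to the broadside of a mic pair."""
    # tdoa = (d * sin(theta)) / c  ->  theta = asin(c * tdoa / d)
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * tdoa_s / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# A 0.2 ms delay across mics 15 cm apart suggests a source roughly 27 degrees off broadside.
print(doa_angle(tdoa_s=0.0002, mic_spacing_m=0.15))
```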
  • controller 425 may compute all inertial and spatial calculations from the IMU located on eyewear device 402.
  • a connector may convey information between augmented-reality system 400 and neckband 405 and between augmented-reality system 400 and controller 425.
  • the information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 400 to neckband 405 may reduce weight and heat in eyewear device 402, making it more comfortable to the user.
  • Power source 435 in neckband 405 may provide power to eyewear device 402 and/or to neckband 405.
  • Power source 435 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 435 may be a wired power source. Including power source 435 on neckband 405 instead of on eyewear device 402 may help better distribute the weight and heat generated by power source 435.
  • some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience.
  • One example of this type of system is a head-worn display system, such as virtual-reality system 500 in FIG. 5, that mostly or completely covers a user's field of view.
  • Virtual-reality system 500 may include a front rigid body 502 and a band 504 shaped to fit around a user's head.
  • Virtual-reality system 500 may also include output audio transducers 506(A) and 506(B).
  • front rigid body 502 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an artificial-reality experience.
  • Artificial-reality systems may include a variety of types of visual feedback mechanisms.
  • display devices in augmented-reality system 400 and/or virtual-reality system 500 may include one or more liquid crystal displays (LCDs), light-emitting diode (LED) displays, organic LED (OLED) displays, digital light projector (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen.
  • These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error.
  • Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen.
  • optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay (to, e.g., the viewer's eyes) light.
  • optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
  • some artificial-reality systems described herein may include one or more projection systems.
  • display devices in augmented-reality system 400 and/or virtual-reality system 500 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through.
  • the display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial-reality content and the real world.
  • the display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc.
  • Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
  • augmented-reality system 400 and/or virtual-reality system 500 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor.
  • An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
  • the artificial-reality systems described herein may also include one or more input and/or output audio transducers.
  • Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer.
  • input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer.
  • a single transducer may be used for both audio input and audio output.
  • the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system.
  • Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature.
  • Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance.
  • Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms.
  • Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
  • artificial-reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world.
  • Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.).
  • the embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
  • the systems described herein may also include an eye tracking subsystem designed to identify and track various characteristics of a user's eye(s), such as the user's gaze direction.
  • eye-tracking may, in some examples, refer to a process by which the position, orientation, and/or motion of an eye is measured, detected, sensed, determined, and/or monitored.
  • the disclosed systems may measure the position, orientation, and/or motion of an eye in a variety of different ways, including through the use of various optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc.
  • An eye-tracking subsystem may be configured in a number of different ways and may include a variety of different eye-tracking hardware components or other computer-vision components.
  • an eye-tracking subsystem may include a variety of different optical sensors, such as two-dimensional (2D) or 3D cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor.
  • a processing subsystem may process data from one or more of these sensors to measure, detect, determine, and/or otherwise monitor the position, orientation, and/or motion of the user's eye(s).
  • the handheld controller 100 or 200 may also include other electronic components, both on its exterior and on its interior.
  • An exemplary handheld controller 600 (e.g., serving as an embodiment of handheld controller 100 or 200) may include one or more printed circuit boards (PCBs), such as PCBs 602 and 620. PCBs 602 and 620 may be configured to process inputs from trigger button 140, input buttons 150, pressure sensor 132, and capacitive sensor 134.
  • PCBs 602 and 620 or other computing components may be configured to perform some or all of the computing to determine the device's current location and/or interpretation of input provided by the user via handheld controller 600.
  • handheld controller 600 may communicate with other devices (e.g., a virtual-reality system 500) to assist in performing the computing.
  • These electronics may further include a communications module that communicates with a head-mounted display of an artificial-reality system.
  • Antenna 610 may be part of this communications module.
  • the communications module may transmit and receive communications from a corresponding communications module on an artificial-reality device.
  • the internal electronics may further include an imaging module including at least one camera 614 that is configured to acquire images of the environment surrounding handheld controller 600.
  • the internal electronics of the handheld controller 600 may include a tracking module configured to track the location of handheld controller 600 in free space using the images acquired by cameras 614.
  • PCBs 602 and 620 may then analyze the images to track the location of handheld controller 600 without having line of sight between handheld controller 600 and any main processing components of an external artificial-reality system.
  • a pair of handheld controllers 600 may be used simultaneously.
  • cameras 614 may be used to capture images of the current surroundings of each handheld controller 600.
  • Each handheld controller 600 may process the images that it captures with its cameras 614 using local processor 604. Additionally or alternatively, the two handheld controllers 600 may share image data with each other via the communications module, so that each handheld controller 600 has its own images along with images received from the other controller. These images may be pieced together to determine depth, determine relative locations, determine coordinates in space, or otherwise calculate the exact or relative location of handheld controller 600. Each handheld controller 600 may thus determine its location on its own or in relation to the other handheld controller 600 using the shared imaging data.
  • handheld controller 600 may begin tracking its location using two or more cameras 614. Once the handheld controller 600 has established its location in space, the tracking may continue using fewer cameras 614. Thus, if handheld controller 600 started tracking its location using three cameras 614, handheld controller 600 may transition to tracking its location using two cameras 614 or using one camera 614. Similarly, if handheld controller 600 started tracking its location using two cameras 614, once calibrated or once an initial map has been created, handheld controller 600 may continue tracking its location using a single camera 614. If handheld controller 600 loses its position in space or becomes unaware of its exact location (due to loss of signal from a camera, for example), two or more additional cameras 614 may be initiated to assist in re-determining the location in space of handheld controller 600.
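  • For illustration only, the following Python sketch shows one way the camera-count behavior described above could be managed: start tracking with all cameras, drop to fewer once a pose is established, and re-enable cameras if tracking is lost. The class name, confidence values, and thresholds are assumptions made for the sketch, not details taken from this disclosure.

```python
# Illustrative sketch (assumed names and thresholds): manage how many cameras
# a self-tracked controller keeps active based on tracking confidence.
class CameraTrackingManager:
    def __init__(self, total_cameras=3, min_cameras=1):
        self.total_cameras = total_cameras
        self.min_cameras = min_cameras
        self.active_cameras = total_cameras   # begin tracking with all cameras
        self.pose_established = False

    def update(self, pose_confidence, lock_threshold=0.8, loss_threshold=0.3):
        """Return how many cameras should stay active for the next frame."""
        if pose_confidence >= lock_threshold:
            # Location in space is established; power down one extra camera.
            self.pose_established = True
            self.active_cameras = max(self.min_cameras, self.active_cameras - 1)
        elif self.pose_established and pose_confidence < loss_threshold:
            # Tracking lost (e.g., a camera is occluded); bring cameras back online.
            self.pose_established = False
            self.active_cameras = self.total_cameras
        return self.active_cameras
```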
  • each handheld controller 600 may be configured to access the image data taken by its cameras 614 (and perhaps additionally use image data from cameras 614 on the other handheld controller 600) to create a map of the surrounding environment.
  • the map may be created over time as subsequent images are taken and processed.
  • the map may identify objects within an environment and may note the location of those objects within the environment.
  • additional images may be taken and analyzed. These additional images may indicate where the user is in relation to the identified objects, what the distance is between the user and the objects, what the distance is between handheld controller 600 and the user, and what the distance is between handheld controllers 600. Calculated distances between objects may be refined over time as new images are captured and analyzed.
  • the map of the environment around handheld controller 600 may be continually updated and improved.
  • as objects (e.g., people) move within the environment, the updated map may reflect these changes.
  • handheld controller 600 may be truly self-tracked. Handheld controller 600 may not need any outside sensors, cameras, or other devices to determine, by itself, its own location in free space. Implementations using a single camera 614 may be produced and may function using the camera data from that single camera 614. In other implementations, two cameras 614 may be used. By using only one or two cameras 614, the cost, complexity, and weight of handheld controller 600 may be reduced and its battery life increased, as there are fewer components to power. Still further, with fewer cameras 614, it is less likely that one of the cameras will be occluded and provide faulty (or no) information.
  • handheld controller 600 may be held by a user, and because handheld controller 600 may determine its own location independently, handheld controller 600 may also be able to determine the position, location, and pose of the user holding handheld controller 600.
  • Cameras 614 on handheld controller 600 may have wide angle lenses that may capture portions of the user's body. From these images, handheld controller 600 may determine how the user's body is positioned, which direction the user's body is moving, and how far the body part is away from handheld controller 600. Knowing this distance and its own location in free space may allow handheld controller 600 to calculate the location of the user holding handheld controller 600.
  • using cameras 614 (e.g., wide-angle cameras), handheld controller 600 may track the movements of the user's eyes and determine where the user intends to move or what the user is looking at. Knowing where the user is within the environment and where the user is likely to move, along with the knowledge of its own location, handheld controller 600 may generate warning beeps or buzzes to keep the user from running into objects within the environment.
  • because handheld controller 600 may be continuously capturing image data, some portions of that data may be redacted or blurred for privacy reasons. For instance, users within a room in which handheld controllers 600 are being used may not wish to be recorded, or the owner of a property may not wish to have certain portions of their property recorded. In such cases, handheld controller 600 may be configured to identify faces in the images and blur those faces. Additionally or alternatively, the image data may be used for calculations and then immediately discarded. Other privacy protections may be administered via policies.
  • a self-tracked peripheral device embodied as a handheld controller (e.g., handheld controller 100, 200, or 600) is described herein.
  • the self-tracked peripheral device may track itself in free space without any external cameras on other devices or in other parts of the environment. Moreover, the self-tracked peripheral device may determine its current location without needing line of sight to any other sensors or cameras.
  • the self-tracked peripheral device may be lighter and less costly than traditional devices due to the implementation of fewer cameras. Moreover, the reduced number of cameras may reduce the occurrence of occlusions and may also increase battery life in the peripheral device.
  • FIG. 7 illustrates a computing architecture 700 with multiple different modules.
  • the computing architecture 700 may include, for example, a display subsystem 701, which may represent and/or include any of the various display components and/or attributes described herein.
  • display subsystem 701 may interact with a processing subsystem 710, including any of its various subcomponents.
  • display subsystem 701 may interact with a processor 711, memory 712, a communications module 713 (which may include or represent a variety of different wired or wireless connections, such as WiFi, Bluetooth ® , Global Positioning System (GPS) modules, cellular or other radios, etc.), and/or a data store 714 (which may include or represent a variety of different volatile or non-volatile data storage devices).
  • processor 711 may serve as the processor that interprets input data received from handheld controller 100, 200, and 600, as well as generates output data for use by handheld controller 100, 200, and 600, as discussed above.
  • processing subsystem 710 may be embedded within, located on, or coupled to an artificial-reality device. In other cases, processing subsystem 710 may be separate from and/or external to the artificial-reality device (as part of, e.g., a separate computing device, as described in greater detail below). In some examples, processing subsystem 710 may include one or more special-purpose, hardware-based accelerators, such as machine-learning accelerators designed to perform tasks associated with computer-vision processing.
  • computing architecture 700 may also include an authentication subsystem 702.
  • Authentication subsystem 702 may be embedded within and/or coupled to an artificial-reality device.
  • Authentication subsystem 702 may include a variety of different hardware components, such as cameras, microphones, iris scanners, facial scanners, and/or other hardware components (such as the optical sensors and acoustic transducers incorporated into an artificial-reality device), each of which may be used to authenticate a user. In some cases, some or all of the functions of the artificial-reality device may be locked until the user is authenticated.
  • a user may use authentication subsystem 702 to authenticate him or herself and, in turn, transition the artificial-reality device from a "locked” state, in which some or all of the device's functionality is locked, to an "unlocked” state, in which some or all of the device's functionality is available to the user.
  • authentication subsystem 702 may authenticate the user to a network, for example, that provides data to the artificial-reality device.
  • authentication subsystem 702 may authenticate the user based on the user's detected voice patterns, based on an iris scan of the user, based on a facial scan, based on a fingerprint scan, or based on some other form of biometric authentication.
  • Authentication subsystem 702 may be mounted on or embedded within the disclosed artificial-reality devices in a variety of ways. In some examples, authentication subsystem 702 may be part of an external device (described below) to which the artificial-reality device is connected.
  • computing architecture 700 may also include an eye tracking subsystem 703 designed to identify and track various characteristics of a user's eye(s), such as their gaze direction.
  • Eye-tracking subsystem 703 may include a variety of different eye-tracking hardware components or other computer vision components.
  • eye-tracking subsystem 703 may include optical sensors, such as two-dimensional (2D) or 3D cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR (Light Detection And Ranging) sensors, and/or any other suitable type or form of optical sensor.
  • a processing subsystem such as processing subsystem 710 in FIG. 7 may process data from one or more of these sensors to measure, detect, determine, and/or otherwise monitor the position, orientation, and/or motion of the user's eye(s).
  • eye-tracking subsystem 703 may be configured to identify and measure the inter-pupillary distance (IPD) of a user. In some embodiments, eye-tracking subsystem 703 may measure and/or calculate the IPD of the user while the user is wearing the artificial-reality device. In these embodiments, eye-tracking subsystem 703 may detect the positions of a user's eyes and may use this information to calculate the user's IPD.
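  • As a simple illustration of the IPD calculation described above (not code from this disclosure), the distance between the two detected pupil positions can be computed directly; the coordinate frame and sample values below are assumptions.

```python
import numpy as np

def estimate_ipd(left_eye_pos, right_eye_pos):
    """Inter-pupillary distance as the Euclidean distance between the
    detected 3D pupil positions (same units as the inputs)."""
    diff = np.asarray(right_eye_pos, float) - np.asarray(left_eye_pos, float)
    return float(np.linalg.norm(diff))

# Example: pupil positions in millimeters relative to the device frame.
print(estimate_ipd([-31.5, 0.0, 12.0], [31.0, 0.2, 12.1]))  # ~62.5 mm
```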
  • Eye-tracking subsystem 703 may track a user's eye position and/or eye movement in a variety of ways.
  • one or more light sources and/or optical sensors may capture an image of the user's eyes. Eye-tracking subsystem 703 may then use the captured information to determine the user's inter-pupillary distance, interocular distance, and/or a 3D position of each eye relative to the artificial-reality device (e.g., for distortion adjustment purposes), including a magnitude of torsion and rotation (i.e., roll, pitch, and yaw) and/or gaze directions for each eye.
  • infrared light may be emitted by eye-tracking subsystem 703 and reflected from each eye. The reflected light may be received or detected by an optical sensor and analyzed to extract eye rotation data from changes in the infrared light reflected by each eye.
  • Eye-tracking subsystem 703 may use any of a variety of different methods to track the eyes of an artificial-reality device user.
  • in one example, a light source (e.g., infrared light-emitting diodes) may emit a dot pattern onto each eye of the user.
  • Eye-tracking subsystem 703 may then detect (e.g., via an optical sensor coupled to the artificial-reality device) and analyze a reflection of the dot pattern from each eye of the user to identify a location of each pupil of the user.
  • eye-tracking subsystem 703 may track up to six degrees of freedom of each eye (i.e., 3D position, roll, pitch, and yaw) and at least a subset of the tracked quantities may be combined from two eyes of a user to estimate a gaze point (i.e., a 3D location or position in a virtual scene where the user is looking) and/or an IPD.
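  • For illustration, one common way to combine per-eye positions and gaze directions into a gaze point, as described above, is to take the midpoint of the shortest segment between the two gaze rays. The sketch below assumes unit direction vectors and millimeter coordinates and is not taken from this disclosure.

```python
import numpy as np

def estimate_gaze_point(p_left, d_left, p_right, d_right):
    """Estimate a 3D gaze point as the midpoint of the shortest segment
    between the two eyes' gaze rays (positions p_*, unit directions d_*)."""
    p1, d1 = np.asarray(p_left, float), np.asarray(d_left, float)
    p2, d2 = np.asarray(p_right, float), np.asarray(d_right, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays nearly parallel: no stable intersection
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0

# Example: eyes 64 mm apart, both verging on a point roughly 0.5 m ahead.
print(estimate_gaze_point([-32, 0, 0], [0.0638, 0, 0.998],
                          [32, 0, 0], [-0.0638, 0, 0.998]))
```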
  • the varying distance between a pupil and a display as viewing direction changes may be referred to as "pupil swim" and may contribute to distortion perceived by the user as a result of light focusing in different locations as the distance between the pupil and the display changes. Accordingly, measuring distortion at different eye positions and pupil distances relative to displays and generating distortion corrections for different positions and distances may allow mitigation of distortion caused by "pupil swim" by tracking the 3D position of a user's eyes and applying a distortion correction corresponding to the 3D position of each of the user's eyes at a given point in time.
  • knowing the 3D position of each of a user's eyes may allow for the mitigation of distortion caused by changes in the distance between the pupil of the eye and the display by applying a distortion correction for each 3D eye position. Furthermore, as noted above, knowing the position of each of the user's eyes may also enable eye-tracking subsystem 703 to make automated adjustments for a user's IPD.
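  • A minimal sketch of the idea above, assuming a hypothetical table of precomputed distortion corrections indexed by the 3D eye position at which each was characterized; the correction applied at runtime is simply the one measured nearest to the current eye position.

```python
import numpy as np

# Hypothetical precomputed corrections, keyed by the 3D eye position
# (relative to the display) at which each correction was characterized.
CORRECTIONS = {
    (0.0, 0.0, 12.0): "correction_mesh_center",
    (2.0, 0.0, 12.0): "correction_mesh_right",
    (-2.0, 0.0, 12.0): "correction_mesh_left",
}

def select_distortion_correction(eye_pos):
    """Return the correction measured closest to the eye's current 3D position."""
    eye_pos = np.asarray(eye_pos, float)
    nearest = min(CORRECTIONS, key=lambda p: np.linalg.norm(eye_pos - np.asarray(p)))
    return CORRECTIONS[nearest]

print(select_distortion_correction([1.4, 0.1, 12.2]))  # -> "correction_mesh_right"
```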
  • display subsystem 701 discussed above may include a variety of additional subsystems that may work in conjunction with eye-tracking subsystem 703.
  • display subsystem 701 may include a varifocal actuation subsystem, a scene-rendering module, and a vergence processing module.
  • the varifocal subsystem may cause left and right display elements to vary the focal distance of the display device.
  • the varifocal subsystem may physically change the distance between a display and the optics through which it is viewed by moving the display, the optics, or both. Additionally, moving or translating two lenses relative to each other may also be used to change the focal distance of the display.
  • the varifocal subsystem may include actuators or motors that move displays and/or optics to change the distance between them.
  • This varifocal subsystem may be separate from or integrated into display subsystem 701.
  • the varifocal subsystem may also be integrated into or separate from the actuation subsystem and/or eye-tracking subsystem 703.
  • display subsystem 701 may include a vergence processing module configured to determine a vergence depth of a user's gaze based on a gaze point and/or an estimated intersection of the gaze lines determined by eye-tracking subsystem 703.
  • Vergence may refer to the simultaneous movement or rotation of both eyes in opposite directions to maintain single binocular vision, which may be naturally and automatically performed by the human eye.
  • a location where a user's eyes are verged is where the user is looking and is also typically the location where the user's eyes are focused.
  • the vergence processing module may triangulate gaze lines to estimate a distance or depth from the user associated with intersection of the gaze lines.
  • the depth associated with intersection of the gaze lines may then be used as an approximation for the accommodation distance, which may identify a distance from the user where the user's eyes are directed.
  • the vergence distance may allow for the determination of a location where the user's eyes should be focused and a depth from the user's eyes at which the eyes are focused, thereby providing information (such as an object or plane of focus) for rendering adjustments to the virtual scene.
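  • As a back-of-the-envelope illustration of the relationship between vergence and depth discussed above (a simplification, not a formula from this disclosure), assuming the eyes converge symmetrically toward a point on the midline, the vergence depth follows from the IPD and the vergence angle:

```python
import math

def vergence_depth(ipd_mm, vergence_angle_deg):
    """Approximate the distance (mm) at which the gaze lines intersect,
    treating the eyes as converging symmetrically toward a point on the
    midline: depth = (IPD / 2) / tan(vergence_angle / 2)."""
    half_angle = math.radians(vergence_angle_deg) / 2.0
    return (ipd_mm / 2.0) / math.tan(half_angle)

# Example: a 63 mm IPD and a 3.6 degree vergence angle give roughly 1 m.
print(vergence_depth(63.0, 3.6))  # ~1002 mm
```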
  • the vergence processing module may coordinate with eye-tracking subsystem 703 to account for the user's vergence or focus depth.
  • Eye-tracking subsystem 703 may obtain information about the user's vergence or focus depth and may adjust display subsystem 701 to be closer together when the user's eyes focus or verge on something close and to be farther apart when the user's eyes focus or verge on something at a distance.
  • the eye-tracking information generated by eye-tracking subsystem 703 may be used, for example, to modify various aspects of how different computer-generated images are presented.
  • display subsystem 701 may be configured to modify, based on information generated by eye-tracking subsystem 703, at least one aspect of how the computer-generated images are presented. For instance, the computer generated images may be modified based on the user's eye movement, such that if a user is looking up, the computer-generated images may be moved upward on the screen. Similarly, if the user is looking to the side or down, the computer-generated images may be moved to the side or downward on the screen. If the user's eyes are closed, the computer-generated images may be paused or removed from the display and resumed once the user's eyes are back open.
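  • The sketch below illustrates this kind of gaze-driven presentation in simplified form; the gaze-state labels, pixel offset, and the convention that returning None pauses or removes the image are assumptions made for the example.

```python
def place_image(gaze_state, base_position, offset_px=40):
    """Shift a computer-generated image in the direction the user is looking,
    or hide it while the user's eyes are closed (hypothetical gaze states)."""
    x, y = base_position
    if gaze_state == "eyes_closed":
        return None                      # pause/remove the image
    if gaze_state == "up":
        return (x, y - offset_px)        # move image upward on screen
    if gaze_state == "down":
        return (x, y + offset_px)
    if gaze_state == "left":
        return (x - offset_px, y)
    if gaze_state == "right":
        return (x + offset_px, y)
    return (x, y)

print(place_image("up", (960, 540)))     # -> (960, 500)
```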
  • computing architecture 700 may also include a face tracking subsystem 705 and/or a body-tracking subsystem 707 configured to identify and track the movement of, and/or various characteristics of, a user's face and/or other body parts.
  • face-tracking subsystem 705 and/or body-tracking subsystem 707 may include one or more body- and/or face-tracking light sources and/or optical sensors, along with potentially other sensors or hardware components. These components may be positioned or directed toward the user's face and/or body so as to capture movements of the user's mouth, cheeks, lips, chin, etc., as well as potentially movement of the user's body, including their arms, legs, hands, feet, torso, etc.
  • face-tracking subsystem 705 may be configured to identify and track facial expressions of a user. These facial expressions may be identified by tracking movements of individual parts of the user's face, as detailed above. The user's facial expressions may change over time and, as such, face-tracking subsystem 705 may be configured to operate on a continuous or continual basis to track the user's changing facial expressions. Classifications of these facial expressions may be stored in data store 714 of processing subsystem 710.
  • Similarly, body-tracking subsystem 707 may be configured to identify and track a position of substantially any part of the user's body.
  • body-tracking subsystem 707 may log initial positions for a user's arms, hands, legs, or feet and may note how those body parts move over time. In some cases, these body movements may be used as inputs to a processing subsystem of the artificial-reality device. For example, if a user wants to open or close a display, the user may wave their hand or arm in a certain manner or perform a certain gesture (such as a snap or finger-closing motion).
  • face/body-tracking component or other components of body-tracking subsystem 707 may track the user's body movements and use those movements as inputs to interact with an artificial reality generated by the artificial-reality device and/or to interact with software applications running on processing subsystem 710.
  • face-tracking subsystem 705 and/or body-tracking subsystem 707 may be incorporated within and/or coupled to the artificial-reality devices disclosed herein in a variety of ways.
  • all or a portion of face tracking subsystem 705 and/or body-tracking subsystem 707 may be embedded within and/or attached to an outer portion of the artificial-reality device.
  • one or more face/body-tracking components may be embedded within and/or positioned near an outer portion of the artificial-reality device.
  • the face/body-tracking component(s) may be positioned far enough away from the user's face and/or body to have a clear view of the user's facial expressions and/or facial and body movements.
  • computing architecture 700 may also include an imaging subsystem 706 configured to image a local environment of the artificial-reality device.
  • Imaging subsystem 706 may include or incorporate any of a variety of different imaging components and elements, such as light sources and optical sensors.
  • imaging subsystem 706 may include one or more world-facing cameras that are configured to capture images of the user's surroundings. These world-facing cameras may be mounted on or coupled to the artificial-reality device in a variety of different positions and patterns.
  • the images captured by these world-facing cameras may be processed by processing subsystem 710.
  • the images may be stitched together to provide a 360-degree view of the user's local environment.
  • this surrounding view may be presented on the display of the artificial-reality device.
  • the user may be able to see either to the side or behind themselves simply by viewing the surrounding view presented on the display.
  • the artificial-reality device may have substantially any number of world-facing cameras.
  • the artificial-reality device may use the above-described world-facing cameras to map a user's and/or device's environment using techniques referred to as "simultaneous location and mapping" (SLAM).
  • SLAM mapping and location identifying techniques may involve a variety of hardware and software tools that can create or update a map of an environment while simultaneously keeping track of a user's location within the mapped environment.
  • SLAM may use many different types of sensors to create a map and determine a user's position within the map.
  • SLAM techniques used by an artificial-reality device may, for example, use data from optical sensors to determine a user's location. Radios including WiFi, Bluetooth, GPS, cellular, or other communication devices may be also used to determine a user's location relative to a radio transceiver or group of transceivers (e.g., a WiFi router or group of GPS satellites). Acoustic sensors such as microphone arrays or 2D or 3D sonar sensors may also be used to determine a user's location within an environment. The artificial-reality device may incorporate any or all of these types of sensors to perform SLAM operations such as creating and continually updating maps of the user's current environment.
  • SLAM data generated by these sensors may be referred to as "environmental data" and may indicate a user's current environment.
  • This data may be stored in a local or remote data store (e.g., a cloud data store) and may be provided to the artificial-reality device on demand.
  • computing architecture 700 may include a sensor subsystem 709 configured to detect, and generate sensor data that reflects, changes in a local environment of the artificial-reality device.
  • Sensor subsystem 709 may include a variety of different sensors and sensing elements, examples of which include, without limitation, a position sensor, an inertial measurement unit (IMU), a depth camera assembly, an audio sensor, a video sensor, a location sensor (e.g., GPS), a light sensor, and/or any sensor or hardware component from any other subsystem described herein.
  • in embodiments in which the sensors include an IMU, the IMU may generate calibration data based on measurement signals from the sensor(s). Examples of IMUs may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
  • the above-described sensor data may include a change in location (e.g., from a GPS location sensor), a change in audible surroundings (e.g., from an audio sensor), a change in visual surroundings (e.g., from a camera or other light sensor), a change in inertia (e.g., from an IMU), or other changes that may indicate that the user's environment has changed.
  • a change in the amount of ambient light, for example, may be detected by a light sensor. In response, display subsystem 701 (in conjunction with processing subsystem 710) may increase the brightness of the display.
  • An increase in ambient sound may result in an increase in sound amplitude (e.g., in an output audio transducer).
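  • A minimal sketch of such sensor-driven feedback is shown below, assuming normalized brightness and gain outputs and illustrative scaling constants, none of which are specified by this disclosure.

```python
def adjust_outputs(ambient_lux, ambient_db):
    """Map environmental sensor readings to display brightness and audio gain
    (thresholds and scaling are illustrative assumptions, normalized to [0, 1])."""
    # Brighter surroundings -> brighter display (clamped to [0.2, 1.0]).
    brightness = min(1.0, max(0.2, ambient_lux / 1000.0))
    # Louder surroundings -> higher output amplitude (clamped to [0.1, 1.0]).
    audio_gain = min(1.0, max(0.1, ambient_db / 90.0))
    return {"display_brightness": brightness, "audio_gain": audio_gain}

print(adjust_outputs(ambient_lux=250.0, ambient_db=65.0))
```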
  • Other environmental changes may also be detected and implemented as feedback within the artificial-reality device's computing environment.
  • sensor subsystem 709 may generate measurement signals in response to motion of the artificial-reality device.
  • the user may be interacting with other users or other electronic devices that serve as audio sources.
  • the process of determining where the audio sources are located relative to the user may be referred to as "localization,” and the process of rendering playback of the audio source signal to appear as if it is coming from a specific direction may be referred to as "spatialization.”
  • Localizing an audio source may be performed in a variety of different ways.
  • a subsystem of the artificial-reality device may initiate a direction-of-arrival (DOA) analysis to determine the location of a sound source.
  • DOA analysis may include analyzing the intensity, spectra, and/or arrival time of each sound at the artificial-reality device to determine the direction from which the sounds originated.
  • the DOA analysis may include any suitable algorithm for analyzing the surrounding acoustic environment in which the artificial-reality device is located.
  • the DOA analysis may be designed to receive input signals from a microphone and apply digital signal processing algorithms to the input signals to estimate the direction of arrival. These algorithms may include, for example, delay and sum algorithms where the input signal is sampled and the resulting weighted and delayed versions of the sampled signal are averaged together to determine a direction of arrival.
  • a least mean squared (LMS) algorithm may also be implemented to create an adaptive filter. This adaptive filter may then be used to identify differences in signal intensity, for example, or differences in time of arrival. These differences may then be used to estimate the direction of arrival.
  • the DOA may be determined by converting the input signals into the frequency domain and selecting specific bins within the time-frequency (TF) domain to process.
  • Each selected TF bin may be processed to determine whether that bin includes a portion of the audio spectrum with a direct-path audio signal. Those bins having a portion of the direct-path signal may then be analyzed to identify the angle at which a microphone array received the direct-path audio signal. The determined angle may then be used to identify the direction of arrival for the received input signal. Other algorithms not listed above may also be used alone or in combination with the above algorithms to determine DOA.
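  • For illustration, a minimal delay-and-sum DOA scan for a uniform linear microphone array is sketched below; it uses integer-sample delays and a fixed angle grid, which is a simplification rather than the specific algorithm of this disclosure.

```python
import numpy as np

def delay_and_sum_doa(signals, mic_spacing_m, fs, speed_of_sound=343.0):
    """Estimate a direction of arrival (degrees, -90..90) for a uniform linear
    microphone array by steering a delay-and-sum beam over candidate angles
    and returning the angle whose steered sum has the greatest power.

    signals: (num_mics, num_samples) array of time-aligned microphone samples.
    """
    num_mics, num_samples = signals.shape
    best_angle, best_power = 0.0, -np.inf
    for angle in np.linspace(-90, 90, 181):
        # Per-microphone delay (seconds) for a plane wave from this angle.
        delays = np.arange(num_mics) * mic_spacing_m * np.sin(np.radians(angle)) / speed_of_sound
        shifts = np.round(delays * fs).astype(int)
        shifts -= shifts.min()                         # keep shifts non-negative
        length = num_samples - shifts.max()
        if length <= 0:
            continue
        # Align each channel by its shift, then sum coherently.
        steered = sum(signals[m, shifts[m]:shifts[m] + length] for m in range(num_mics))
        power = np.mean(steered ** 2)
        if power > best_power:
            best_angle, best_power = angle, power
    return best_angle
```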
  • different users may perceive the source of a sound as coming from slightly different locations. This may be the result of each user having a unique head-related transfer function (HRTF), which may be dictated by a user's anatomy, including ear canal length and the positioning of the ear drum.
  • the artificial-reality device may provide an alignment and orientation guide, which the user may follow to customize the sound signal presented to the user based on their unique HRTF.
  • the artificial-reality device may use a variety of different array transfer functions (e.g., any of the DOA algorithms identified above) to estimate the direction of arrival for the sounds. Once the direction of arrival has been determined, the artificial-reality device may play back sounds to the user according to the user's unique HRTF. Accordingly, the DOA estimation generated using the array transfer function (ATF) may be used to determine the direction from which the sounds are to be played. The playback sounds may be further refined based on how that specific user hears sounds according to the HRTF.
  • the artificial-reality device may perform localization based on information received from other types of sensors, such as sensor subsystem 709. These sensors may include cameras, IR sensors, heat sensors, motion sensors, GPS receivers, or in some cases, sensors that detect a user's eye movements.
  • artificial-reality device may include eye tracking subsystem 703 that determines where the user is looking. Often, the user's eyes will look at the source of the sound, if only briefly. Such clues provided by the user's eyes may further aid in determining the location of a sound source.
  • sensors such as cameras, heat sensors, and IR sensors may also indicate the location of a user, the location of an electronic device, or the location of another sound source. Any or all of the above methods may be used individually or in combination to determine the location of a sound source and may further be used to update the location of a sound source over time.
  • an "acoustic transfer function" may characterize or define how a sound is received from a given location. More specifically, an acoustic transfer function may define the relationship between parameters of a sound at its source location and the parameters by which the sound signal is detected (e.g., detected by a microphone array or detected by a user's ear).
  • the artificial-reality device may include one or more acoustic sensors that detect sounds within range of the device.
  • a processing subsystem of the artificial-reality device may estimate a DOA for the detected sounds (using, e.g., any of the methods identified above) and, based on the parameters of the detected sounds, may generate an acoustic transfer function that is specific to the location of the device. This customized acoustic transfer function may thus be used to generate a spatialized output audio signal where the sound is perceived as coming from a specific location.
  • the artificial-reality device may re-render (i.e., spatialize) the sound signals to sound as if coming from the direction of that sound source.
  • the artificial-reality device may apply filters or other digital signal processing that alter the intensity, spectra, or arrival time of the sound signal.
  • the digital signal processing may be applied in such a way that the sound signal is perceived as originating from the determined location.
  • the artificial-reality device may amplify or subdue certain frequencies or change the time that the signal arrives at each ear.
  • the artificial-reality device may create an acoustic transfer function that is specific to the location of the device and the detected direction of arrival of the sound signal.
  • the artificial-reality device may re-render the source signal in a stereo device or multi-speaker device (e.g., a surround sound device).
  • separate and distinct audio signals may be sent to each speaker.
  • Each of these audio signals may be altered according to the user's HRTF and according to measurements of the user's location and the location of the sound source to sound as if they are coming from the determined location of the sound source. Accordingly, in this manner, the artificial-reality device (or speakers associated with the device) may re-render an audio signal to sound as if originating from a specific location.
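  • The sketch below is a heavily simplified stand-in for HRTF-based spatialization: it imposes only an interaural time difference (via a Woodworth-style approximation) and a crude interaural level difference to suggest a source azimuth. A real implementation would convolve the signal with measured head-related impulse responses; the head radius and gain values here are assumptions.

```python
import numpy as np

def spatialize_stereo(mono, azimuth_deg, fs, head_radius_m=0.0875,
                      speed_of_sound=343.0):
    """Render a mono source as a stereo pair whose interaural time and level
    differences suggest the given azimuth (positive = to the user's right)."""
    mono = np.asarray(mono, float)
    az = np.radians(azimuth_deg)
    # Woodworth-style interaural time difference, converted to whole samples.
    itd_s = (head_radius_m / speed_of_sound) * (az + np.sin(az))
    itd_samples = int(round(abs(itd_s) * fs))
    # Simple interaural level difference: attenuate the far ear.
    near_gain, far_gain = 1.0, 1.0 - 0.3 * abs(np.sin(az))
    delayed = np.concatenate([np.zeros(itd_samples), mono])
    padded = np.concatenate([mono, np.zeros(itd_samples)])
    if azimuth_deg >= 0:   # source on the right: left ear hears it later and quieter
        left, right = far_gain * delayed, near_gain * padded
    else:
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=0)
```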
  • computing architecture 700 may also include a battery subsystem 708 configured to provide electrical power for the artificial-reality device.
  • Battery subsystem 708 may include a variety of different components and elements, examples of which include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power source or power storage device. Battery subsystem 708 may be incorporated into and/or otherwise associated with the artificial-reality devices disclosed herein in a variety of ways. In some examples, all or a portion of battery subsystem 708 may be embedded or disposed within a back portion or area of the artificial-reality device.
  • the artificial-reality device may include or be connected to an external device (e.g., a paired device), such as a neckband, charging case, smart watch, smartphone, wrist band, other wearable device, handheld controller, tablet computer, laptop computer, and/or other external compute device, etc.
  • This external device generally represents any type or form of paired device.
  • the external device may be coupled to the artificial-reality device via one or more connectors.
  • the connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components.
  • the artificial-reality device and the external device may operate independently without any wired or wireless connection between them.
  • Pairing external devices with the artificial-reality device may enable the artificial-reality device to achieve certain form factors while still providing sufficient battery and computation power for expanded capabilities.
  • Some or all of the battery power, computational resources, and/or additional features of the artificial-reality device may be provided by a paired device or shared between a paired device and the artificial-reality device, thus reducing the weight, heat profile, and form factor of the artificial-reality device overall while still retaining the desired functionality.
  • the external device may allow components that would otherwise be included on a device to be included in the external device since users may tolerate a heavier weight load in their pockets, shoulders, or hands than they would tolerate on their heads.
  • the external device may also have a larger surface area over which to diffuse and disperse heat to the ambient environment.
  • an external device may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone headwear device. Since weight carried in the external device may be less invasive to a user than weight carried in the artificial-reality device, a user may tolerate wearing a lighter artificial-reality device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone artificial-reality device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
  • the external device may be communicatively coupled with the artificial-reality device and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to the artificial-reality device.
  • the external device may include multiple acoustic transducers, such as the acoustic transducers 607 and 608 described above.
  • a processing subsystem on the external device may process information generated by the sensors on the external device and/or the artificial-reality device.
  • the processing subsystem may process information from a microphone array that describes sounds detected by the microphone array. For each detected sound, the processing subsystem may perform a DOA estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, the processing subsystem may populate an audio data set with the information.
  • the processing subsystem may compute all inertial and spatial calculations from the IMU located on the artificial-reality device.
  • a connector may convey information between the artificial-reality device and the external device and between the artificial-reality device and the processing subsystem.
  • computing architecture 700 may also include a notification subsystem 704.
  • Notification subsystem 704 may be configured to generate user notifications that are communicated to the user.
  • the user notifications may include audio-based notifications, haptics-based notifications, visual-based notifications, or other types of notifications.
  • notification subsystem 704 may generate an audio notification (via, e.g., acoustic transducers) when a text message or email is received (by, e.g., the artificial-reality device and/or an external device).
  • various haptic transducers may buzz or vibrate to instruct a user to move the display screen down from its storage position to a viewing position.
  • an IR camera may detect another artificial-reality device within the same room and/or an audio sensor may detect an inaudible frequency emitted by the other artificial-reality device.
  • the artificial-reality device may display a message on the display instructing the user to switch to artificial reality mode so that the artificial-reality device and the detected device may interact.
  • Many other types of notifications are also possible.
  • the artificial-reality device may respond automatically to the notification, while in other cases, the user may perform some type of interaction to respond to the notification.
  • notification subsystem 704 may include one or more haptic components disposed in various locations on the artificial-reality device. These haptic transducers may be configured to generate haptic outputs, such as buzzes or vibrations.
  • the haptic transducers may be positioned within the artificial-reality device in a variety of ways. Users may be able to detect haptic sensations from substantially any location on the artificial-reality device and, as such, the haptic transducers may be disposed throughout the device.
  • the haptic transducers may be disposed on or within the artificial-reality device in patterns. For instance, the haptic transducers may be arranged in rows or circles or lines throughout the artificial-reality device.
  • haptic transducers may be actuated at different times to generate different patterns that may be felt by the user.
  • the haptic transducers may be actuated in a certain manner to correspond to a particular notification. For instance, a short buzz on the right side of the artificial-reality device may indicate that the user has received a text message. A pattern of two short vibrations on the left side of the artificial-reality device may indicate that the user is receiving a phone call or may also indicate who that phone call is from. A string of vibrations from successive haptic transducers 805 arranged in a row may indicate that an interesting artificial reality feature is available in the user's current location and that the user should consider lowering the display. In addition, a pattern of vibrations that moves from right to left may indicate that the user should take a left turn at an intersection. Many other such notifications are possible, and the above-identified list is not intended to be limiting.
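  • A small sketch of how such notification-to-pattern mappings could be driven is shown below; the notification names, transducer labels, and timings are assumptions for the example, and the actuator is a caller-supplied callback rather than any specific haptic API.

```python
import time

# Hypothetical mapping from notification types to haptic patterns; each step is
# (transducer label, pulse duration in seconds) and steps play in sequence.
HAPTIC_PATTERNS = {
    "text_message": [("right", 0.15)],
    "phone_call":   [("left", 0.1), ("left", 0.1)],
    "turn_left":    [("right", 0.1), ("center", 0.1), ("left", 0.1)],
}

def play_pattern(notification_type, actuate):
    """Drive a caller-supplied actuate(transducer, duration) callback with the
    pattern associated with the given notification type."""
    for transducer, duration in HAPTIC_PATTERNS.get(notification_type, []):
        actuate(transducer, duration)
        time.sleep(0.05)   # short gap between pulses

# Example with a stand-in actuator that just prints.
play_pattern("turn_left", lambda t, d: print(f"buzz {t} for {d:.2f}s"))
```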
  • the haptic transducers or other haptic feedback elements may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature.
  • Haptic transducers may also provide various types of kinesthetic feedback, such as motion and compliance.
  • Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms.
  • Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
  • the artificial-reality device may create an entire artificial experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, the artificial-reality device may assist or extend a user's perception, memory, or cognition within a particular environment. The artificial-reality device may also enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world.
  • the artificial-reality device may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.).
  • the embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
  • the user may interact with processing subsystem 710 and/or with any of the various subsystems 701-709 via tactile or motion-based movements. For instance, a user may press a button or knob or dial (either on the artificial-reality device or within an external device) to respond to a notification. In other cases, the user may perform a gesture with their hands, arms, face, eyes, or other body part. This gesture may be interpreted by processing subsystem 710 and associated software as a response to the notification.
  • In some cases, the user may interact with the artificial-reality device just using their brain.
  • an artificial-reality device may include at least one brain-computer-interface.
  • this brain-computer-interface may be positioned within the crown portion of the artificial-reality device.
  • the brain-computer-interface (BCI) may be configured to detect brain signals that are translatable into user inputs.
  • the BCI may use any of a variety of non-invasive brain activity detection techniques, including using functional near-infrared spectroscopy, using electroencephalography (EEG), using dry active electrode arrays, or using other types of brain-computer-interfaces to detect and determine a user's intentions based on their brain waves and brain wave patterns.
  • BCIs may be configured to indicate movement of a hand, a finger, or a limb.
  • BCIs may be configured to detect speech patterns and convert the detected speech patterns into written text. This text may be applied in a reply message, for example, to a text or an email. In some cases, the text may be presented on the display as the user thinks the text and as the think-to-text translator forms the words.
  • a display may be adjusted to swing or slide down in front of the user's face.
  • a positioning mechanism may adjustably position the display between a storage position in which the display is positioned in a location that is intended to be substantially outside of the user's field of view (e.g., closed), and a viewing position in which the display is positioned in a location that is intended to be substantially within the user's field of view (e.g., open).
  • an artificial-reality device may utilize or include a virtual retina display.
  • the virtual retinal display may include virtual retina display projectors that may be configured to draw or trace a display directly onto the user's eye.
  • a virtual retinal display, also referred to herein as a retinal scan display (RSD) or retinal projector (RP), may draw a raster display directly onto the retina of the user's eye. The user may then see what appears to be a conventional display floating in space in front of them.
  • the VRD projectors may be coupled to the artificial-reality device.
  • the virtual retina display projectors may incorporate vertical-cavity surface-emitting lasers or other types of lasers configured to draw images on a user's retina. Such lasers are typically powered using a relatively low power amplitude to avoid damaging the user's eyes.
  • the artificial-reality device may also include a holographic grating that focuses the lasers on the user's retina.
  • FIG. 8 is an illustration of an exemplary system 800 that incorporates an eye-tracking subsystem capable of tracking a user's eye(s).
  • system 800 may include a light source 802, an optical subsystem 804, an eye-tracking subsystem 806, and/or a control subsystem 808.
  • light source 802 may generate light for an image (e.g., to be presented to an eye 801 of the viewer).
  • Light source 802 may represent any of a variety of suitable devices.
  • light source 802 can include a two-dimensional projector (e.g., an LCoS (liquid crystal on silicon) display), a scanning source (e.g., a scanning laser), or other device (e.g., an LCD (liquid crystal display), an LED (light-emitting diode) display, an OLED (organic light-emitting diode) display, an active-matrix OLED display (AMOLED), a transparent OLED display (TOLED), a waveguide, or some other display capable of generating light for presenting an image to the viewer).
  • the image may represent a virtual image, which may refer to an optical image formed from the apparent divergence of light rays from a point in space, as opposed to an image formed from the light ray's actual divergence.
  • optical subsystem 804 may receive the light generated by light source 802 and generate, based on the received light, converging light 820 that includes the image.
  • optical subsystem 804 may include any number of lenses (e.g., Fresnel lenses, convex lenses, concave lenses), apertures, filters, mirrors, prisms, and/or other optical components, possibly in combination with actuators and/or other devices.
  • the actuators and/or other devices may translate and/or rotate one or more of the optical components to alter one or more aspects of converging light 820.
  • various mechanical couplings may serve to maintain the relative spacing and/or the orientation of the optical components in any suitable combination.
  • eye-tracking subsystem 806 may generate tracking information indicating a gaze angle of an eye 801 of the viewer.
  • control subsystem 808 may control aspects of optical subsystem 804 (e.g., the angle of incidence of converging light 820) based at least in part on this tracking information. Additionally, in some examples, control subsystem 808 may store and utilize historical tracking information (e.g., a history of the tracking information over a given duration, such as the previous second or fraction thereof) to anticipate the gaze angle of eye 801 (e.g., an angle between the visual axis and the anatomical axis of eye 801).
  • eye-tracking subsystem 806 may detect radiation emanating from some portion of eye 801 (e.g., the cornea, the iris, the pupil, or the like) to determine the current gaze angle of eye 801.
  • eye tracking subsystem 806 may employ a wavefront sensor to track the current location of the pupil.
  • Any number of techniques can be used to track eye 801. Some techniques may involve illuminating eye 801 with infrared light and measuring reflections with at least one optical sensor that is tuned to be sensitive to the infrared light. Information about how the infrared light is reflected from eye 801 may be analyzed to determine the position(s), orientation(s), and/or motion(s) of one or more eye feature(s), such as the cornea, pupil, iris, and/or retinal blood vessels.
  • the radiation captured by a sensor of eye-tracking subsystem 806 may be digitized (i.e., converted to an electronic signal). Further, the sensor may transmit a digital representation of this electronic signal to one or more processors (for example, processors associated with a device including eye-tracking subsystem 806).
  • Eye-tracking subsystem 806 may include any of a variety of sensors in a variety of different configurations.
  • eye-tracking subsystem 806 may include an infrared detector that reacts to infrared radiation.
  • the infrared detector may be a thermal detector, a photonic detector, and/or any other suitable type of detector.
  • Thermal detectors may include detectors that react to thermal effects of the incident infrared radiation.
  • one or more processors may process the digital representation generated by the sensor(s) of eye-tracking subsystem 806 to track the movement of eye 801.
  • these processors may track the movements of eye 801 by executing algorithms represented by computer-executable instructions stored on non-transitory memory.
  • in some examples, on-chip logic (e.g., an application-specific integrated circuit or ASIC) may be used to perform at least portions of such algorithms.
  • eye-tracking subsystem 806 may be programmed to use an output of the sensor(s) to track movement of eye 801.
  • eye-tracking subsystem 806 may analyze the digital representation generated by the sensors to extract eye rotation information from changes in reflections.
  • eye-tracking subsystem 806 may use corneal reflections or glints (also known as Purkinje images) and/or the center of the eye's pupil 822 as features to track over time.
  • eye-tracking subsystem 806 may use the center of the eye's pupil 822 and infrared or near-infrared, non-collimated light to create corneal reflections. In these embodiments, eye-tracking subsystem 806 may use the vector between the center of the eye's pupil 822 and the corneal reflections to compute the gaze direction of eye 801.
  • the disclosed systems may perform a calibration procedure for an individual (using, e.g., supervised or unsupervised techniques) before tracking the user's eyes.
  • the calibration procedure may include directing users to look at one or more points displayed on a display while the eye-tracking system records the values that correspond to each gaze position associated with each point.
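  • For illustration, a common form of such a calibration fits a mapping from pupil-minus-glint image vectors to known on-screen target positions; the least-squares affine fit below is a generic sketch with made-up sample values, not the calibration procedure of this disclosure.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vectors, screen_points):
    """Fit an affine mapping from pupil-minus-glint image vectors to on-screen
    gaze points, using samples recorded while the user fixates known targets."""
    V = np.asarray(pupil_glint_vectors, dtype=float)     # shape (n, 2)
    S = np.asarray(screen_points, dtype=float)           # shape (n, 2)
    A = np.hstack([V, np.ones((V.shape[0], 1))])         # add bias column
    coeffs, *_ = np.linalg.lstsq(A, S, rcond=None)        # shape (3, 2)
    return coeffs

def estimate_gaze(coeffs, pupil_center, glint):
    """Map a new pupil/glint observation to an estimated screen position."""
    v = np.asarray(pupil_center, dtype=float) - np.asarray(glint, dtype=float)
    return np.hstack([v, 1.0]) @ coeffs

# Calibration: vectors observed while the user fixated five known targets.
vectors = [(0, 0), (10, 0), (-10, 0), (0, 8), (0, -8)]
targets = [(960, 540), (1460, 540), (460, 540), (960, 140), (960, 940)]
coeffs = fit_gaze_mapping(vectors, targets)
print(estimate_gaze(coeffs, pupil_center=(105, 80), glint=(100, 84)))
```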
  • eye-tracking subsystem 806 may use two types of infrared and/or near-infrared (also known as active light) eye-tracking techniques: bright-pupil and dark-pupil eye tracking, which may be differentiated based on the location of an illumination source with respect to the optical elements used. If the illumination is coaxial with the optical path, then eye 801 may act as a retroreflector as the light reflects off the retina, thereby creating a bright pupil effect similar to a red-eye effect in photography. If the illumination source is offset from the optical path, then the eye's pupil 822 may appear dark because the retroreflection from the retina is directed away from the sensor.
  • bright-pupil tracking may create greater iris/pupil contrast, allowing more robust eye tracking with iris pigmentation, and may feature reduced interference (e.g., interference caused by eyelashes and other obscuring features).
  • Bright-pupil tracking may also allow tracking in lighting conditions ranging from total darkness to a very bright environment.
  • control subsystem 808 may control light source 802 and/or optical subsystem 804 to reduce optical aberrations (e.g., chromatic aberrations and/or monochromatic aberrations) of the image that may be caused by or influenced by eye 801.
  • control subsystem 808 may use the tracking information from eye-tracking subsystem 806 to perform such control.
  • control subsystem 808 may alter the light generated by light source 802 (e.g., by way of image rendering) to modify (e.g., pre-distort) the image so that the aberration of the image caused by eye 801 is reduced.
  • the disclosed systems may track both the position and relative size of the pupil (since, e.g., the pupil dilates and/or contracts).
  • the eye-tracking devices and components (e.g., sensors and/or sources) described herein may be adapted to individual users; for example, the frequency range of the sensors may be different (or separately calibrated) for eyes of different colors and/or different pupil types, sizes, and/or the like.
  • the various eye-tracking components (e.g., infrared sources and/or sensors) described herein may need to be calibrated for each individual user and/or eye.
  • the disclosed systems may track both eyes with and without ophthalmic correction, such as that provided by contact lenses worn by the user.
  • in some embodiments, ophthalmic correction elements (e.g., adjustable lenses) may be directly incorporated into the artificial-reality devices described herein.
  • the color of the user's eye may necessitate modification of a corresponding eye-tracking algorithm.
  • eye tracking algorithms may need to be modified based at least in part on the differing color contrast between a brown eye and, for example, a blue eye.
  • FIG. 9 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 8.
  • an eye-tracking subsystem 900 may include at least one source 904 and at least one sensor 906.
  • Source 904 generally represents any type or form of element capable of emitting radiation.
  • source 904 may generate visible, infrared, and/or near-infrared radiation.
  • source 904 may radiate non-collimated infrared and/or near-infrared portions of the electromagnetic spectrum towards an eye 902 of a user.
  • Source 904 may utilize a variety of sampling rates and speeds.
  • the disclosed systems may use sources with higher sampling rates in order to capture fixational eye movements of a user's eye 902 and/or to correctly measure saccade dynamics of the user's eye 902.
  • any type or form of eye-tracking technique may be used to track the user's eye 902, including optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc.
  • Sensor 906 generally represents any type or form of element capable of detecting radiation, such as radiation reflected off the user's eye 902.
  • examples of sensor 906 include, without limitation, a charge-coupled device (CCD), a photodiode array, a complementary metal-oxide-semiconductor (CMOS) based sensor device, and/or the like.
  • sensor 906 may represent a sensor having predetermined parameters, including, but not limited to, a dynamic resolution range, linearity, and/or other characteristic selected and/or designed specifically for eye tracking.
  • eye-tracking subsystem 900 may generate one or more glints.
  • a glint 903 may represent reflections of radiation (e.g., infrared radiation from an infrared source, such as source 904) from the structure of the user's eye.
  • glint 903 and/or the user's pupil may be tracked using an eye-tracking algorithm executed by a processor (either within or external to an artificial reality device).
  • an artificial-reality device may include a processor and/or a memory device in order to perform eye tracking locally and/or a transceiver to send and receive the data necessary to perform eye tracking on an external device (e.g., a mobile phone, cloud server, or other computing device).
  • FIG. 9 shows an example image 905 captured by an eye-tracking subsystem, such as eye-tracking subsystem 900.
  • image 905 may include both the user's pupil 908 and a glint 910 near the same.
  • pupil 908 and/or glint 910 may be identified using an artificial-intelligence-based algorithm, such as a computer-vision-based algorithm.
  • image 905 may represent a single frame in a series of frames that may be analyzed continuously in order to track the eye 902 of the user. Further, pupil 908 and/or glint 910 may be tracked over a period of time to determine a user's gaze.
  • eye-tracking subsystems can be incorporated into one or more of the various artificial reality systems described herein in a variety of ways.
  • one or more of the various components of system 800 and/or eye-tracking subsystem 900 may be incorporated into augmented-reality system 400 in FIG. 4 and/or virtual-reality system 500 in FIG. 5 to enable these systems to perform various eye-tracking tasks (including one or more of the eye-tracking operations described herein).
  • a handheld controller capable of sensing levels or variations of physical pressure on the controller may facilitate more complex and/or nuanced input from a user of the controller.
  • input that includes sensing of pressure imposed by the user on the controller may be interpreted as manipulation (e.g., squeezing, pinching, rotation, or the like) of a virtual object of an artificial environment being presented to the user.
  • the levels or variations of pressure may be employed to navigate menus, alter input slider positions, or provide other types of input that may be more difficult to provide via other types of input device mechanisms.
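  • The sketch below illustrates how a continuous pressure reading from the thumb's input surface region might be bucketed into such inputs; the normalized scale, thresholds, and event names are assumptions for the example, not values from this disclosure.

```python
def interpret_thumb_pressure(pressure, light=0.15, firm=0.6):
    """Translate a normalized pressure reading (0.0-1.0) from the thumb's
    input surface region into an illustrative input event."""
    if pressure < light:
        return {"event": "rest"}                          # thumb merely resting
    if pressure < firm:
        # Light pressure: e.g., scroll a menu or nudge a slider proportionally.
        return {"event": "adjust", "amount": (pressure - light) / (firm - light)}
    # Firm pressure: e.g., grab or squeeze a virtual object with this strength.
    return {"event": "squeeze", "strength": min(1.0, pressure)}

print(interpret_thumb_pressure(0.35))   # {'event': 'adjust', 'amount': 0.44...}
print(interpret_thumb_pressure(0.85))   # {'event': 'squeeze', 'strength': 0.85}
```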
  • Example 1: A handheld controller may include (1) a body including (a) a grasping region configured to be grasped by a hand and (b) an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region and (2) a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input.
  • Example 2 The handheld controller of Example 1, where the handheld controller may further include a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, where the trigger button detects whether the trigger button has been activated for interpretation as a second input.
  • Example 3 The handheld controller of Example 2, where the handheld controller may further include a haptic actuator coupled with the trigger button and that provides haptic feedback to the index finger based on an output.
  • Example 4 The handheld controller of any of Examples 1-3, where the pressure sensor may include a static pressure sensor that senses depression of the input surface region.
  • Example 5 The handheld controller of any of Examples 1-3, where the pressure sensor may include a zero-movement sensor.
  • Example 6 The handheld controller of any of Examples 1-3, where the handheld controller may further include a haptic actuator coupled to the input surface region and that provides haptic feedback to the thumb based on an output.
  • Example 7 The handheld controller of any of Examples 1-3, where the handheld controller may further include a capacitive sensor coupled to the input surface region and configured to detect a touching with the input surface region for interpretation as a second input.
  • Example 8 The handheld controller of any of Examples 1-3, where the handheld controller may further include a capacitive sensor that detects a touched location on the input surface region for interpretation as a second input.
  • Example 9 The handheld controller of any of Examples 1-3, where the handheld controller may further include an input button coupled to the body outside the input surface region and configured to be engaged by the thumb while the hand grasps the grasping region, where the input button indicates whether the input button has been engaged for interpretation as a second input.
  • a system may include (1) a display that presents a virtual object in an artificial environment, (2) a processor that processes a plurality of inputs for manipulating the virtual object, and (3) a body including (a) a grasping region configured to be grasped by a hand, and (b) an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region, and (c) a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region, and where the processor processes the level of pressure as a first input of the plurality of inputs.
  • Example 11 The system of Example 10, where the handheld controller may further include a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand where (1) the trigger button indicates whether the trigger button has been activated and (2) the processor processes whether the trigger button has been activated as a second input of the plurality of inputs.
  • Example 12 The system of either Example 10 or Example 11, where the processor may interpret a combination of the first input and the second input as a manipulation of the virtual object.
  • Example 13 The system of Example 12, where the manipulation of the virtual object may include a pinching action imposed on the virtual object.
  • Example 14 The system of either Example 10 or Example 11, where the handheld controller may further include a capacitive sensor coupled to the input surface region, where (a) the capacitive sensor detects a touched location of the input surface region and (b) the processor processes a representation of the touched location as a third input of the plurality of inputs.
  • Example 15 The system of Example 14, where the processor may interpret the third input as a navigation of a menu presented by the display.
  • Example 16 The system of Example 14, where the processor may interpret a combination of the first input, the second input, and the third input as a manipulation of the virtual object.
  • Example 17 The system of Example 16, where the manipulation of the virtual object may include a rotational action imposed on the virtual object.
  • Example 18 A method may include (1) detecting, by a pressure sensor mechanically coupled to an input surface region of a body of a handheld controller, a level of pressure imposed on the input surface region, where (a) the body further includes a grasping region configured to be grasped by a hand and (b) the input surface region is configured to be engaged by a thumb of the hand while the hand grasps the grasping region and (2) interpreting, by a processor, the level of pressure as a first input.
  • Example 19 The method of Example 18, where the method may further include (1) detecting, by a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, whether the trigger button has been activated and (2) interpreting, by the processor, whether the trigger button has been activated as a second input.
  • Example 20 The method of Example 19, where the method may further include (1) presenting, by a display, a virtual object in an artificial environment and (2) interpreting, by the processor, a combination of the first input and the second input as a manipulation of the virtual object.
  • computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein.
  • these computing device(s) may each include at least one memory device and at least one physical processor.
  • the term "memory device" generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
  • the term "physical processor” generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions.
  • a physical processor may access and/or modify one or more modules stored in the above-described memory device.
  • Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
  • the term "computer-readable medium" generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions.
  • Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
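For illustration only, the following Python sketch shows one way the pupil and glint positions described above might be combined into a gaze estimate. The function names, calibration gains, and coordinate values are assumptions introduced for this example and are not part of the disclosure.

    # Minimal sketch (not the claimed implementation): estimating a gaze direction
    # from a pupil center and a corneal glint detected in an eye-camera frame.
    # All names (GazeEstimate, estimate_gaze) and gains are hypothetical.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class GazeEstimate:
        yaw_deg: float    # horizontal gaze angle
        pitch_deg: float  # vertical gaze angle

    def estimate_gaze(pupil_px: Tuple[float, float],
                      glint_px: Tuple[float, float],
                      gain_x: float = 0.12,
                      gain_y: float = 0.12) -> GazeEstimate:
        # The glint stays roughly fixed relative to the camera, so the
        # pupil-minus-glint vector changes mainly with eye rotation; the
        # per-user gains would normally come from a calibration routine.
        dx = pupil_px[0] - glint_px[0]
        dy = pupil_px[1] - glint_px[1]
        return GazeEstimate(yaw_deg=gain_x * dx, pitch_deg=gain_y * dy)

    # Example: pupil at (412, 305) and glint at (398, 301) in an image such as image 905
    print(estimate_gaze((412.0, 305.0), (398.0, 301.0)))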

Abstract

The disclosed handheld controller may include (1) a body including (a) a grasping region configured to be grasped by a hand and (b) an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region and (2) a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input. Various other handheld controllers, methods, and systems are also disclosed.

Description

HANDHELD CONTROLLER WITH THUMB PRESSURE SENSING
TECHNICAL FIELD
[0001] The present disclosure is generally directed to handheld controllers, systems, and methods that employ thumb pressure sensing.
BACKGROUND OF THE INVENTION
[0002] Handheld controllers, such as those incorporated in artificial-reality systems, gaming systems, and the like, are convenient devices by which a user of such a system may provide accurate and timely input (e.g., by way of input buttons) to that system. In addition, some handheld controllers may include components (e.g., haptic feedback devices) that provide some type of output or feedback for the user. However, as such systems progressively foster more complex and nuanced interaction with their users, the ability to enable the associated input via a cost-effective handheld controller may be limited. For example, some subtle user actions, such as applying a grasping or pinching action to a virtual object being presented in an artificial-reality system, may be difficult to facilitate with a handheld controller that provides a limited number and type of input components.
SUMMARY OF THE INVENTION
[0003] In accordance with a first aspect of the present disclosure, there is provided a handheld controller comprising: a body comprising: a grasping region configured to be grasped by a hand; and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and a pressure sensor coupled with the input surface region, wherein the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input.
[0004] In some embodiments, the handheld controller may further comprise: a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, wherein the trigger button detects whether the trigger button has been activated for interpretation as a second input.
[0005] In some embodiments, the handheld controller may further comprise: a haptic actuator coupled with the trigger button and that provides haptic feedback to the index finger based on an output.
[0006] In some embodiments, the pressure sensor may comprise a static pressure sensor that senses depression of the input surface region.
[0007] In some embodiments, the pressure sensor may comprise a zero-movement sensor.
[0008] In some embodiments, the handheld controller may further comprise: a haptic actuator coupled to the input surface region and that provides haptic feedback to the thumb based on an output.
[0009] In some embodiments, the handheld controller may further comprise a capacitive sensor coupled to the input surface region and configured to detect a touching with the input surface region for interpretation as a second input.
[0010] In some embodiments, the handheld controller may further comprise a capacitive sensor that detects a touched location on the input surface region for interpretation as a second input.
[0011] In some embodiments, the handheld controller may further comprise an input button coupled to the body outside the input surface region and configured to be engaged by the thumb while the hand grasps the grasping region, wherein the input button indicates whether the input button has been engaged for interpretation as a second input.
[0012] In accordance with a second aspect of the present disclosure, there is provided a system comprising: a display that presents a virtual object in an artificial environment; a processor that processes a plurality of inputs for manipulating the virtual object; and a handheld controller comprising: a body comprising: a grasping region configured to be grasped by a hand; and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and a pressure sensor coupled with the input surface region, wherein the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region, and wherein the processor processes the level of pressure as a first input of the plurality of inputs.
[0013] In some embodiments, the handheld controller may further comprise: a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand, wherein: the trigger button indicates whether the trigger button has been activated; and the processor processes whether the trigger button has been activated as a second input of the plurality of inputs.
[0014] In some embodiments, the processor may interpret a combination of the first input and the second input as a manipulation of the virtual object.
[0015] In some embodiments, the manipulation of the virtual object may comprise a pinching action imposed on the virtual object.
[0016] In some embodiments, the handheld controller may further comprise a capacitive sensor coupled to the input surface region, wherein: the capacitive sensor detects a touched location of the input surface region; and the processor processes a representation of the touched location as a third input of the plurality of inputs.
[0017] In some embodiments, the processor may interpret the third input as a navigation of a menu presented by the display.
[0018] In some embodiments, the processor may interpret a combination of the first input, the second input, and the third input as a manipulation of the virtual object.
[0019] In some embodiments, the manipulation of the virtual object may comprise a rotational action imposed on the virtual object.
[0020] In accordance with a third aspect of the present disclosure, there is provided a method comprising: detecting, by a pressure sensor coupled to an input surface region of a body of a handheld controller, a level of pressure imposed on the input surface region, wherein: the body further comprises a grasping region configured to be grasped by a hand; and the input surface region is configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and interpreting, by a processor, the level of pressure as a first input.
[0021] In some embodiments, the method may further comprise: detecting, by a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, whether the trigger button has been activated; and interpreting, by the processor, whether the trigger button has been activated as a second input.
[0022] In some embodiments, the method may further comprise: presenting, by a display, a virtual object in an artificial environment; and interpreting, by the processor, a combination of the first input and the second input as a manipulation of the virtual object.
[0023] It will be appreciated that any features described herein as being suitable for incorporation into one or more aspects or embodiments of the present disclosure are intended to be generalizable across any and all aspects and embodiments of the present disclosure. Other aspects of the present disclosure can be understood by those skilled in the art in light of the description, the claims, and the drawings of the present disclosure. The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
[0025] FIG. 1 is a block diagram of an exemplary handheld controller that incorporates thumb pressure sensing.
[0026] FIG. 2 is a perspective view of an exemplary handheld controller that incorporates thumb pressure sensing.
[0027] FIG. 3 is a flow diagram of an exemplary method employed by a handheld controller that includes thumb pressure sensing.
[0028] FIG. 4 is an illustration of exemplary augmented-reality glasses that may be used in connection with embodiments of this disclosure.
[0029] FIG. 5 is an illustration of an exemplary virtual-reality headset that may be used in connection with embodiments of this disclosure.
[0030] FIG. 6 is a block diagram of components implemented in an exemplary handheld controller that employs thumb pressure sensing.
[0031] FIG. 7 is a block diagram of an exemplary computing architecture for implementation of an artificial-reality system that incorporates a handheld controller employing thumb pressure sensing.
[0032] FIG. 8 is an illustration of an exemplary system that incorporates an eye-tracking subsystem capable of tracking a user's eye(s).
[0033] FIG. 9 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 8.
[0034] Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0035] Handheld controllers, such as those incorporated in artificial-reality systems, gaming systems, and the like, are convenient devices by which a user of such a system may provide accurate and timely input (e.g., by way of input buttons) to that system. In addition, some handheld controllers may include components (e.g., haptic feedback devices) that provide some type of output or feedback for the user. However, as such systems progressively foster more complex and nuanced interaction with their users, the ability to enable the associated input via a cost-effective handheld controller may be limited. For example, some subtle user actions, such as applying a grasping or pinching action to a virtual object being presented in an artificial-reality system, may be difficult to facilitate with a handheld controller that provides a limited number and type of input components.
[0036] The present disclosure is generally directed to handheld controllers, systems, and methods that employ thumb pressure sensing. In some embodiments, an example handheld controller may have a body that includes a grasping region configured to be grasped by a hand, as well as an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region. The handheld controller may further include a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input. In some examples, the use of such a handheld controller may facilitate more subtle or varied input (e.g., pinching in an artificial environment), especially in conjunction with other input components (e.g., one or more buttons), while remaining cost-effective.
[0037] Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
[0038] The following will provide, with reference to FIGS. 1-9, detailed descriptions of exemplary handheld controllers, systems, and methods employing thumb pressure sensing. Various embodiments of an exemplary handheld controller are described in conjunction with the block diagram of FIG. 1 and the perspective view of FIG. 2. An exemplary method employing thumb pressure sensing via a handheld controller is discussed in conjunction with FIG. 3. Discussions of exemplary augmented-reality glasses and an exemplary virtual-reality headset that may be used in connection with the various handheld controller embodiments are discussed in connection with the illustrations of FIGS. 4 and 5, respectively. A discussion of some components that may be included in an exemplary handheld controller employing thumb pressure sensing is provided in relation to the block diagram of FIG. 6. A description of an exemplary computing architecture for an artificial-reality system that incorporates a handheld controller, as discussed herein, is presented in connection with FIG. 7. Moreover, in conjunction with FIGS. 8 and 9, an exemplary display system and an exemplary eye-tracking subsystem that may be employed in connection with an artificial-reality system are discussed.
[0039] In the various handheld controller embodiments described below, the controller is configured to sense pressure applied by a thumb of a user. However, in other embodiments, additionally or alternatively, pressure sensing may be provided for one or more other digits (e.g., the index finger). Additionally, while the discussion below focuses on a single handheld controller, at least some user systems (e.g., artificial-reality systems) may provide two such controllers, one configured for each hand.
[0040] FIG. 1 is a block diagram of an exemplary handheld controller 100 that incorporates thumb pressure sensing. More specifically, FIG. 1 depicts possible components associated with various embodiments of handheld controller 100. Some of the depicted components may not be included in some embodiments of handheld controller 100, while other components (e.g., a battery, one or more printed circuit boards, and so on) that may be included in handheld controller 100 are not illustrated in FIG. 1 to simplify the following discussion. While the block diagram of FIG. 1 represents a kind of schematic representation of handheld controller 100, FIG. 2, described in greater detail below, presents a perspective view of just one possible embodiment of such a controller.
[0041] Handheld controller 100, in at least some embodiments, may include a body 102 that defines a grasping region 104 and an input surface region 106. In some examples, grasping region 104 may be configured to be grasped by a hand of a user. For example, grasping region 104 may be configured to be engaged by a palm and/or one or more fingers (e.g., fingers other than the thumb and/or index finger) of the hand. In some embodiments, grasping region 104 may define one or more physical features (e.g., smooth curves, ridges, indentations, and/or the like) to facilitate engagement by the hand.
[0042] Further, in at least some embodiments, input surface region 106 may be configured to be engaged by a thumb of the hand while the hand grasps grasping region 104. In some examples, input surface region 106 may be an area of body 102 that may be just large enough for the user's thumb to engage. In other embodiments, input surface region 106 may be an area of body 102 sufficiently large to provide a range of locations in one or two dimensions for the user's thumb to engage. In some examples, body 102 may provide a thumb rest or similar area in grasping region 104 to comfortably facilitate positioning of the thumb away from input surface region 106.
[0043] In some embodiments, a pressure sensor 132 may be coupled (e.g., mechanically, electrically, or the like) to input surface region 106 such that pressure sensor 132 indicates a level of pressure imposed on input surface region 106 (e.g., by the user's thumb). Several different types of components may serve as pressure sensor 132 in different embodiments, each of which may provide some kind of output (e.g., an analog voltage, a capacitance, a digital value, or the like) that may serve as an indication of the level of pressure imposed on input surface region 106. In some examples, pressure sensor 132 may be a sensor that measures an amount or level of pressure imposed on input surface region 106 (e.g., by detecting a location or a change thereof of input surface region 106 orthogonal to input surface region 106, by detecting a capacitance or change thereof of input surface region 106, etc.). In other examples, pressure sensor 132 may measure pressure imposed on input surface region 106, or some characteristic (e.g., amount of movement, capacitance, etc.) as a proxy for an amount of pressure imposed on input surface region 106. Further, pressure sensor 132 may indicate at least one surface location (e.g., an (x, y) coordinate) on input surface region 106 at which some pressure is being imposed.
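As an illustrative sketch only (the value ranges, field names, and scaling below are assumptions introduced for this example rather than the disclosed design), the following Python fragment shows how a raw sensor reading of the kind described above might be normalized into a pressure level and paired with a contact location:

    # Hypothetical normalization of a raw pressure reading (e.g., ADC counts derived
    # from an analog voltage) into a 0.0-1.0 level, plus an optional (x, y) location.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ThumbSample:
        level: float                             # normalized pressure, 0.0 (none) to 1.0 (max)
        location: Optional[Tuple[float, float]]  # (x, y) on the input surface, or None

    def normalize_pressure(raw_counts: int,
                           baseline: int = 120,
                           full_scale: int = 4095) -> float:
        # Clamp and scale a raw reading to a unitless pressure level.
        span = max(full_scale - baseline, 1)
        level = (raw_counts - baseline) / span
        return min(max(level, 0.0), 1.0)

    # Example: a mid-range reading with the thumb near the center of the region
    sample = ThumbSample(level=normalize_pressure(2100), location=(0.48, 0.52))
    print(sample)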
[0044] In some embodiments, pressure sensor 132 may be a static pressure sensor that senses and/or distinguishes between small levels of depression, movement, or location of input surface region 106. In such examples, input surface region 106 may facilitate the use of a static pressure sensor by presenting a surface that flexes or otherwise moves some distance in response to an associated level of pressure imposed on input surface region 106.
[0045] In other embodiments, pressure sensor 132 may be a zero-movement pressure sensor that senses a level of pressure imposed on input surface region 106, such as by way of measuring some characteristic of input surface region 106 corresponding to pressure imposed on input surface region 106 without causing any depression or other movement of input surface region 106. For example, pressure sensor 132 may measure capacitance at one or more locations of input surface region 106 to indicate a level of pressure imposed on input surface region 106.
[0046] In some embodiments, a capacitive sensor 134 may also be coupled to input surface region 106 and configured to detect a touching with input surface region 106 (e.g., by the thumb of the user). Further, in some examples, capacitive sensor 134 may detect a surface location (e.g., an (x, y) coordinate) on input surface region 106 at which the touching is occurring. In some embodiments, pressure sensor 132 and capacitive sensor 134 may be the same sensor. Other types of sensors coupled to input surface region 106 may be employed to detect a touching with input surface region 106, and possibly a location on input surface region 106 at which such a touching is occurring, in other embodiments. Consequently, in some examples, input surface region 106 may be employed as a joystick, touchpad, or other locational or directional input device.
[0047] Also included in handheld controller 100, coupled (e.g., mechanically) to body 102, may be a trigger button 140 configured to be engaged by an index finger of the hand while the hand grasps grasping region 104. Trigger button 140 may provide an indication (e.g., a voltage, a Boolean indication, etc.) whether trigger button 140 has been activated (e.g., depressed by the user's index finger). As is described in greater detail below, concurrent manipulation by the user of input surface region 106 (e.g., as indicated by pressure sensor 132 and possibly capacitive sensor 134) and trigger button 140 may be interpreted as a "pinch" or other manipulation of a virtual object being presented to a user. Moreover, coupled to body 102 may be one or more additional input buttons 150 positioned or configured to be activated by the user (e.g., by the thumb of the user).
[0048] As depicted in FIG. 1, one or more of pressure sensor 132, capacitive sensor 134, trigger button 140, and/or input button 150 may be communicatively coupled with a transmitter 110. In some embodiments, transmitter 110 may transmit a representation or indication of an output of pressure sensor 132, capacitive sensor 134, trigger button 140, and/or input button 150 to a processor (e.g., a processor not located on handheld controller 100). In various examples, transmitter 110 may be a wired transmitter (e.g., an electrical signal transmitter, an optical signal transmitter, etc.) or a wireless transmitter (e.g., a radio frequency (RF) transmitter, a Bluetooth® transmitter, etc.). As discussed below, the processor may interpret some combination of the representations of the outputs of two or more of pressure sensor 132, capacitive sensor 134, trigger button 140, and/or input button 150 as a manipulation of a virtual object (e.g., a virtual object of an artificial environment presented to the user by way of a display device).
[0049] Also as depicted in FIG. 1, handheld controller 100 may include a receiver 120 (e.g., a wired or wireless receiver) that may receive a representation of an output (e.g., from the processor mentioned above). Additionally, handheld controller 100 may include one or more haptic actuators 160 coupled to body 102 or other portions of handheld controller 100, where the haptic actuators 160 may provide haptic feedback to one or more locations of handheld controller 100, including, but not limited to, body 102 (e.g., grasping region 104 and/or input surface region 106) and/or trigger button 140. In some embodiments, the haptic feedback may indicate contact in the artificial environment of one or more portions of the user's hand with a virtual object. For example, haptic feedback presented by haptic actuators 160 (e.g., via grasping region 104, input surface region 106, and/or trigger button 140) may indicate a level of pressure imposed on the virtual object by the thumb of the user on input surface region 106, as sensed by way of pressure sensor 132. In some examples, haptic actuators 160 may include one or more linear resonance actuators (LRAs), eccentric rotating masses (ERMs), or the like.
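For illustration, the following Python sketch shows one hypothetical way the outputs of pressure sensor 132, capacitive sensor 134, trigger button 140, and input button 150 could be packed into a compact message for transmission by transmitter 110. The byte layout and field ordering are assumptions made for this example and are not defined by the disclosure.

    # Hypothetical wire format: 3 little-endian floats plus a flags byte per sample.
    import struct

    def pack_controller_state(pressure_level: float,   # 0.0-1.0 from pressure sensor 132
                              touch_x: float,          # 0.0-1.0 from capacitive sensor 134
                              touch_y: float,
                              trigger_active: bool,    # trigger button 140
                              input_button: bool) -> bytes:
        flags = (1 if trigger_active else 0) | ((1 if input_button else 0) << 1)
        return struct.pack("<fffB", pressure_level, touch_x, touch_y, flags)

    def unpack_controller_state(payload: bytes):
        pressure_level, touch_x, touch_y, flags = struct.unpack("<fffB", payload)
        return pressure_level, touch_x, touch_y, bool(flags & 1), bool(flags & 2)

    # Example round trip of one sample
    msg = pack_controller_state(0.62, 0.48, 0.52, True, False)
    print(unpack_controller_state(msg))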
[0050] In other examples, instead of residing external to handheld controller 100, the processor that receives the inputs (e.g., from transmitter 110) and provides the outputs (e.g., to receiver 120) may instead be located in or on handheld controller 100, thus potentially eliminating the use of transmitter 110 and receiver 120 for at least the transmitting and receiving operations noted above.
[0051] FIG. 2 is a perspective view of an exemplary handheld controller 200 that may serve as one example of handheld controller 100 of FIG. 1. As shown, handheld controller 200 may include a body 202 with a grasping region 204 (corresponding to grasping region 104 of FIG. 1) and an input surface region 206 (corresponding to input surface region 106 of FIG. 1). In addition, handheld controller 200 may also include a trigger button 240 (serving as trigger button 140 of FIG. 1) and one or more input buttons 250 (corresponding to input button 150 of FIG. 1), both of which may reside external to input surface region 206. As illustrated in FIG. 2, input surface region 206 and input buttons 250 may be selectively engaged by the thumb of the hand of the user while the hand of the user engages grasping region 204 and while the index finger of the hand is positioned on trigger button 240.
[0052] While FIG. 2 presents one particular configuration or arrangement of handheld controller 200, other configurations for handheld controller 200 are also possible. More specifically, while input surface region 206 may be placed on a substantially similar plane of body 202 as input buttons 250, body 202 may be shaped in other ways such that input surface region 206 may lie in a plane substantially different from that portion of body 202 that supports input buttons 250. For example, input surface region 206 may be angled downward away from input buttons 250 (e.g., perpendicular or parallel to grasping region 204, and between zero degrees and 90 degrees downward relative to a surface of body 202 holding input buttons 250). In some embodiments, such a configuration for input surface region 206 relative to body 202 may facilitate a comfortable angle of opposition between input surface region 206 and trigger button 240.
[0053] In addition, in some embodiments, one or more components of handheld controller 200 (e.g., components serving as pressure sensor 132, capacitive sensor 134, haptic actuator 160, transmitter 110, and/or receiver 120) may reside inside body 202. Other components not depicted in FIG. 2 may be included in handheld controller 200 in other examples.
[0054] Moreover, handheld controller 200 may include one or more cameras 212 (e.g., wide-angle cameras with a large field of view) that may be employed to track a position (e.g., location and/or orientation) of handheld controller 200 relative to the user or the local environment. Additionally or alternatively, handheld controller 200 may include optical sensors, acoustic sensors (e.g., sonar), time of flight sensors, structured light emitters/sensors, Global Positioning System (GPS) modules, inertial measurement units (IMUs), and other sensors. Any or all of these sensors may be used alone or in combination to provide input data to handheld controller 200.
[0055] Cameras 212 may be positioned substantially anywhere on handheld controller 200. In some embodiments, cameras 212 may be positioned at angles offset from each other (e.g., 30 degrees, 40 degrees, 50 degrees, 60 degrees, 70 degrees, 80 degrees, or 90 degrees offset from each other). This offset may allow cameras 212 to capture different portions of the physical environment in which handheld controller 200 is being used. In some cases, cameras 212 may be positioned to avoid occlusions from a user's hand, fingers, or other body parts.
[0056] For example, cameras 212 may be configured to capture portions of a room including the walls, floor, ceiling, objects within the room, people within the room, or other features of the room. Similarly, if handheld controller 200 is being used outdoors, cameras 212 may be configured to capture images of the ground, the sky, or the 360-degree surroundings of the device. The images may be used in isolation or in combination to determine the device's current location in space. For example, the images may be used to determine distances to objects within a room. Movements between sequences of subsequently taken images may be used to calculate which direction handheld controller 200 has moved and how fast handheld controller 200 has moved relative to its surroundings. The images may be used to determine the location of another peripheral device (e.g., a second handheld controller 200 in the user's other hand). The images may also be used to capture portions of the user who is using handheld controller 200, including the user's hand, fingers, arm, torso, legs, face, head, or other body parts. Handheld controller 200 may use images of these body parts to determine its location relative to the user and relative to other objects in the room including walls, doors, floor, and ceiling, without relying on any other outside cameras or sensors to determine its location.
[0057] In some embodiments, handheld controller 200 may communicate with a headset (e.g., a headset of an artificial-reality system), as described in greater detail below. The headset may include a display and one or more computing components. These computing components may be configured to generate and present a display to the user. The display may include a user interface and content such as video content, web content, video game content, etc. The computing components in the headset may also be configured to generate map data. For example, the computing components in the headset may receive inputs from sensors worn by the user or mounted on the headset and use that sensor data to create a map of the user's environment. This map data may be shared with handheld controller 200, which may use this map data when determining its location. Moreover, in some embodiments, the self-tracking ability of handheld controller 200 within such a system may be employed to determine a relative location and/or orientation of the user's hand relative to a virtual object within an artificial environment, as displayed by the headset, to facilitate the pinching or grasping of the virtual object, as described above.
[0058] FIG. 3 is a flow diagram of an exemplary method 300 employed by a handheld controller (e.g., handheld controller 100 or 200) that includes thumb pressure sensing. At step 310, a level of pressure imposed on an input surface region (e.g., input surface region 106 or 206) may be detected (e.g., by way of pressure sensor 132). At step 320, the level of pressure may be interpreted (e.g., by a processor) as a first input. At step 330, whether a trigger button (e.g., trigger button 140 or 240) has been activated may be detected. Further, at step 340, whether the trigger button has been activated may be interpreted (e.g., by the processor) as a second input. At step 350, a combination of the first input and the second input may be interpreted (e.g., by the processor) as a manipulation of a virtual object. In some examples, the processor may execute software instructions stored in one or more memory devices to perform such an interpretation, as well as other operations ascribed to a processor, as discussed herein.
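The following Python sketch, provided for illustration only, shows one way the interpretation steps of method 300 might be combined in software; the function names, threshold value, and data structure are assumptions rather than a required implementation.

    # Sketch of the interpretation flow of method 300, assuming the processor
    # receives a pressure level and a trigger state each frame.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Manipulation:
        kind: str        # e.g., "pinch"
        strength: float  # virtual force derived from the pressure level

    def interpret_inputs(pressure_level: float, trigger_activated: bool,
                         pinch_threshold: float = 0.05) -> Optional[Manipulation]:
        first_input = pressure_level       # step 320: pressure level as the first input
        second_input = trigger_activated   # step 340: trigger state as the second input
        # Step 350: combine the two inputs into a manipulation of a virtual object.
        if second_input and first_input > pinch_threshold:
            return Manipulation(kind="pinch", strength=first_input)
        return None

    print(interpret_inputs(0.7, True))   # -> pinch with strength 0.7
    print(interpret_inputs(0.7, False))  # -> None (no trigger activation)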
[0059] In reference to handheld controller 100 or 200, in some embodiments, the first input and the second input may be interpreted as a "pinch" or "grasp" of a virtual object, where the amount of virtual force imposed on the virtual object is determined at least in part by the level of pressure imposed on input surface region 106 in conjunction with activation of trigger button 140. In some examples, activation of trigger button 140 may not be necessary to interpret some level of pressure on input surface region 106 as applying some pressure or force on the virtual object. Also, in some embodiments, a visual representation of the amount of force imposed on the virtual object may be presented to a user (e.g., via a visual display). For example, the virtual object may appear to be increasingly deformed as the level of pressure imposed via input surface region 106 is increased. Additionally or alternatively, some other visual aspect (e.g., color, brightness, or the like) of the virtual object may be altered as the level of pressure changes.
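As a further illustrative sketch (the specific scaling constants below are assumptions made for this example), the sensed pressure level might be mapped to a visual deformation and a brightness change along the following lines:

    # Hypothetical mapping from pressure level to a deformation scale and brightness shift.
    def deformation_scale(pressure_level: float, max_squash: float = 0.3) -> float:
        # Scale factor along the pinch axis: 1.0 (no squeeze) down to 0.7 (full squeeze).
        return 1.0 - max_squash * min(max(pressure_level, 0.0), 1.0)

    def brightness_shift(pressure_level: float) -> float:
        # Brighten the object slightly (0.0 to 0.2) as pressure increases.
        return 0.2 * min(max(pressure_level, 0.0), 1.0)

    for level in (0.0, 0.5, 1.0):
        print(level, deformation_scale(level), brightness_shift(level))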
[0060] In some embodiments, the level of pressure imposed on the virtual object may also be reflected by way of a level or type of force provided by one or more haptic actuators 160 on grasping region 104, input surface region 106, trigger button 140, and/or some other region of handheld controller 100, as mentioned above. Also, in some examples, an aspect (e.g., volume, pitch, and so on) of one or more sounds (e.g., generated by way of an audio speaker) may be presented to the user that corresponds to the level of pressure imposed on the virtual object.
[0061] Additionally or alternatively, other types of manipulations, such as rotating or spinning, may be imposed on the virtual object by way of input surface region 106, possibly in conjunction with trigger button 140. For example, as indicated above, capacitive sensor 134 or another sensor may be employed to track a location on input surface region 106 at which some pressure is supplied. Accordingly, in some embodiments, applying a level of pressure on, or at least making contact with, input surface region 106, combined with moving the location of the level of pressure on input surface region 106 (e.g., by way of dragging the thumb across input surface region 106), possibly in conjunction with activation (e.g., pressing) of trigger button 140, may be interpreted as a rotating or spinning of a virtual object. Additionally, in some examples, the direction or path of contact across input surface region 106 may determine the axis of the imposed rotation of the virtual object.
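For illustration only, the following Python sketch derives a rotation axis and angle from a thumb drag of the kind described above; the choice of an axis perpendicular to the drag direction in the display plane and the scaling constants are assumptions made for this example.

    # Hypothetical mapping from a thumb drag to a rotation of a virtual object.
    import math
    from typing import Tuple

    def rotation_from_drag(start: Tuple[float, float], end: Tuple[float, float],
                           pressure_level: float,
                           degrees_per_unit: float = 180.0):
        dx, dy = end[0] - start[0], end[1] - start[1]
        length = math.hypot(dx, dy)
        if length < 1e-6:
            return None  # no meaningful drag
        # Axis perpendicular to the drag, lying in the display plane (z = 0).
        axis = (-dy / length, dx / length, 0.0)
        # Angle grows with drag length and, optionally, with the sensed pressure.
        angle_deg = degrees_per_unit * length * (0.5 + 0.5 * pressure_level)
        return axis, angle_deg

    print(rotation_from_drag((0.2, 0.5), (0.7, 0.5), pressure_level=0.8))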
[0062] In some embodiments, input surface region 106, along with pressure sensor 132 and/or capacitive sensor 134, may be employed to perform other operations aside from manipulation of virtual objects. For example, input surface region 106 may be employed to navigate a displayed menu, an input slider, a set of input objects, or another input item by way of contacting different areas of input surface region 106 and/or by applying varying levels of pressure to input surface region 106 that may be interpreted by a processor as different portions or areas of the input item. In some embodiments, a particular level of pressure imposed on input surface region 106 may be interpreted by a processor as a different logical level of a multilevel menu or other input item.
[0063] In yet other examples, input surface region 106, in conjunction with pressure sensor 132 and/or capacitive sensor 134, may serve as a replacement for a joystick, touchpad, or other directional or positional input device. For example, a detected two-dimensional location contacted by the user on input surface region 106 (e.g., relative to some reference location of input surface region 106), as sensed by pressure sensor 132 and/or capacitive sensor 134, may be interpreted as a directional command. Additionally, a distance of the two-dimensional location from the reference location may be interpreted as a magnitude command (e.g., speed, force, etc.) associated with the directional command. In some embodiments, the level of pressure imposed at the detected two-dimensional location on input surface region 106 may be interpreted as a magnitude command associated with the directional command. Other examples of providing input via contact with input surface region 106 are also possible.
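As an illustrative sketch of such a joystick-style interpretation (with the reference location, scaling, and function names assumed for this example), a contacted location and pressure level might be converted into a direction and magnitude as follows:

    # Hypothetical conversion of a contact point into a directional command.
    import math
    from typing import Tuple

    def directional_command(contact: Tuple[float, float],
                            reference: Tuple[float, float] = (0.5, 0.5),
                            pressure_level: float = 0.0,
                            use_pressure_as_magnitude: bool = False):
        dx, dy = contact[0] - reference[0], contact[1] - reference[1]
        direction_deg = math.degrees(math.atan2(dy, dx))
        distance = math.hypot(dx, dy)
        # Magnitude may come from the distance to the reference or from the pressure level.
        magnitude = pressure_level if use_pressure_as_magnitude else min(distance / 0.5, 1.0)
        return direction_deg, magnitude

    print(directional_command((0.9, 0.5)))                      # distance-based magnitude
    print(directional_command((0.9, 0.5), pressure_level=0.3,
                              use_pressure_as_magnitude=True))  # pressure-based magnitude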
[0064] Moreover, in some embodiments, the interpretation of the level of pressure and/or location on input surface region 106 by pressure sensor 132 and/or capacitive sensor 134 may depend on a state or context of the system in which handheld controller 100 is being used. For example, if a virtual object is currently displayed in an artificial environment (e.g., in an artificial-reality system) to the user of handheld controller 100, user input provided via input surface region 106 may be interpreted as a manipulation of the virtual object. Instead, if the system currently displays a menu of selectable items to the user, user input provided via input surface region 106 may be interpreted as a manipulation of that menu.
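For illustration, a context-dependent dispatch of the kind described above might look like the following Python sketch; the state names and menu layout are assumptions introduced for this example.

    # Hypothetical routing of the same thumb input based on system state.
    def route_thumb_input(system_state: str, pressure_level: float, touch_y: float) -> str:
        if system_state == "virtual_object_displayed":
            return f"apply grip force {pressure_level:.2f} to virtual object"
        if system_state == "menu_displayed":
            item = int(touch_y * 4)  # e.g., four vertically stacked menu items
            return f"highlight menu item {item}"
        return "ignore input"

    print(route_thumb_input("virtual_object_displayed", 0.7, 0.2))
    print(route_thumb_input("menu_displayed", 0.1, 0.6))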
[0065] As mentioned above, embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial-reality systems. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, for example, a virtual reality, an augmented reality, a mixed reality, a hybrid reality, or some combination and/or derivative thereof. Artificial-reality content may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The artificial-reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an artificial reality and/or are otherwise used in (e.g., to perform activities in) an artificial reality.
[0066] Artificial-reality systems may be implemented in a variety of different form factors and configurations. Some artificial-reality systems may be designed to work without near-eye displays (NEDs). Other artificial-reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented-reality system 400 in FIG. 4) or that visually immerses a user in an artificial reality (such as, e.g., virtual-reality system 500 in FIG. 5). While some artificial-reality devices may be self-contained systems, other artificial-reality devices may communicate and/or coordinate with external devices to provide an artificial-reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.
[0067] Turning to FIG. 4, augmented-reality system 400 may include an eyewear device 402 with a frame 410 configured to hold a left display device 415(A) and a right display device 415(B) in front of a user's eyes. Display devices 415(A) and 415(B) may act together or independently to present an image or series of images to a user. While augmented-reality system 400 includes two displays, embodiments of this disclosure may be implemented in augmented-reality systems with a single NED or more than two NEDs.
[0068] In some embodiments, augmented-reality system 400 may include one or more sensors, such as sensor 440. Sensor 440 may generate measurement signals in response to motion of augmented-reality system 400 and may be located on substantially any portion of frame 410. Sensor 440 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented-reality system 400 may or may not include sensor 440 or may include more than one sensor. In embodiments in which sensor 440 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 440. Examples of sensor 440 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
[0069] In some examples, augmented-reality system 400 may also include a microphone array with a plurality of acoustic transducers 420(A)-420(J), referred to collectively as acoustic transducers 420. Acoustic transducers 420 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 420 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 4 may include, for example, ten acoustic transducers: 420(A) and 420(B), which may be designed to be placed inside a corresponding ear of the user, acoustic transducers 420(C), 420(D), 420(E), 420(F), 420(G), and 420(H), which may be positioned at various locations on frame 410, and/or acoustic transducers 420(1) and 420(J), which may be positioned on a corresponding neckband 405.
[0070] In some embodiments, one or more of acoustic transducers 420(A)-(F) may be used as output transducers (e.g., speakers). For example, acoustic transducers 420(A) and/or 420(B) may be earbuds or any other suitable type of headphone or speaker.
[0071] The configuration of acoustic transducers 420 of the microphone array may vary. While augmented-reality system 400 is shown in FIG. 4 as having ten acoustic transducers 420, the number of acoustic transducers 420 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 420 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 420 may decrease the computing power required by an associated controller 450 to process the collected audio information. In addition, the position of each acoustic transducer 420 of the microphone array may vary. For example, the position of an acoustic transducer 420 may include a defined position on the user, a defined coordinate on frame 410, an orientation associated with each acoustic transducer 420, or some combination thereof.
[0072] Acoustic transducers 420(A) and 420(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 420 on or surrounding the ear in addition to acoustic transducers 420 inside the ear canal. Having an acoustic transducer 420 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 420 on either side of a user's head (e.g., as binaural microphones), augmented-reality device 400 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 420(A) and 420(B) may be connected to augmented-reality system 400 via a wired connection 430, and in other embodiments acoustic transducers 420(A) and 420(B) may be connected to augmented-reality system 400 via a wireless connection (e.g., a Bluetooth® connection). In still other embodiments, acoustic transducers 420(A) and 420(B) may not be used at all in conjunction with augmented-reality system 400.
[0073] Acoustic transducers 420 on frame 410 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 415(A) and 415(B), or some combination thereof. Acoustic transducers 420 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented-reality system 400. In some embodiments, an optimization process may be performed during manufacturing of augmented-reality system 400 to determine relative positioning of each acoustic transducer 420 in the microphone array.
[0074] In some examples, augmented-reality system 400 may include or be connected to an external device (e.g., a paired device), such as neckband 405. Neckband 405 generally represents any type or form of paired device. Thus, the following discussion of neckband 405 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wrist bands, other wearable devices, handheld controllers, tablet computers, laptop computers, other external compute devices, etc.
[0075] As shown, neckband 405 may be coupled to eyewear device 402 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 402 and neckband 405 may operate independently without any wired or wireless connection between them. While FIG. 4 illustrates the components of eyewear device 402 and neckband 405 in example locations on eyewear device 402 and neckband 405, the components may be located elsewhere and/or distributed differently on eyewear device 402 and/or neckband 405. In some embodiments, the components of eyewear device 402 and neckband 405 may be located on one or more additional peripheral devices paired with eyewear device 402, neckband 405, or some combination thereof.
[0076] Pairing external devices, such as neckband 405, with augmented-reality eyewear devices may enable the eyewear devices to achieve the form factor of a pair of glasses while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of augmented-reality system 400 may be provided by a paired device or shared between a paired device and an eyewear device, thus reducing the weight, heat profile, and form factor of the eyewear device overall while still retaining desired functionality. For example, neckband 405 may allow components that would otherwise be included on an eyewear device to be included in neckband 405 since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. Neckband 405 may also have a larger surface area over which to diffuse and disperse heat to the ambient environment. Thus, neckband 405 may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone eyewear device. Since weight carried in neckband 405 may be less invasive to a user than weight carried in eyewear device 402, a user may tolerate wearing a lighter eyewear device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone eyewear device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
[0077] Neckband 405 may be communicatively coupled with eyewear device 402 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 400. In the embodiment of FIG. 4, neckband 405 may include two acoustic transducers (e.g., 420(1) and 420(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 405 may also include a controller 425 and a power source 435.
[0078] Acoustic transducers 420(1) and 420(J) of neckband 405 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 4, acoustic transducers 420(1) and 420(J) may be positioned on neckband 405, thereby increasing the distance between the neckband acoustic transducers 420(1) and 420(J) and other acoustic transducers 420 positioned on eyewear device 402. In some cases, increasing the distance between acoustic transducers 420 of the microphone array may improve the accuracy of beamforming performed via the microphone array. For example, if a sound is detected by acoustic transducers 420(C) and 420(D) and the distance between acoustic transducers 420(C) and 420(D) is greater than, e.g., the distance between acoustic transducers 420(D) and 420(E), the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 420(D) and 420(E).
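For illustration only, the following Python sketch applies a standard far-field, two-microphone approximation to estimate a direction of arrival from an inter-transducer delay; it is not the beamforming algorithm of the disclosure, and the spacing and delay values are assumptions chosen for this example.

    # Far-field approximation: sin(theta) = c * delay / d for two transducers spaced d apart.
    import math

    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def doa_from_delay(delay_s: float, spacing_m: float) -> float:
        # Return the arrival angle in degrees relative to the array broadside.
        ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / spacing_m))
        return math.degrees(math.asin(ratio))

    # A larger spacing makes the same timing error produce a smaller angle error,
    # which is consistent with the benefit of separating transducers as described above.
    print(doa_from_delay(200e-6, 0.15))  # roughly 27 degrees for a 200-microsecond delay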
[0079] Controller 425 of neckband 405 may process information generated by the sensors on neckband 405 and/or augmented-reality system 400. For example, controller 425 may process information from the microphone array that describes sounds detected by the microphone array. For each detected sound, controller 425 may perform a direction-of-arrival (DOA) estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, controller 425 may populate an audio data set with the information. In embodiments in which augmented-reality system 400 includes an inertial measurement unit, controller 425 may compute all inertial and spatial calculations from the IMU located on eyewear device 402. A connector may convey information between augmented-reality system 400 and neckband 405 and between augmented-reality system 400 and controller 425. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable data form. Moving the processing of information generated by augmented-reality system 400 to neckband 405 may reduce weight and heat in eyewear device 402, making it more comfortable to the user.
[0080] Power source 435 in neckband 405 may provide power to eyewear device 402 and/or to neckband 405. Power source 435 may include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 435 may be a wired power source. Including power source 435 on neckband 405 instead of on eyewear device 402 may help better distribute the weight and heat generated by power source 435.
[0081] As noted, some artificial-reality systems may, instead of blending an artificial reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual-reality system 500 in FIG. 5, that mostly or completely covers a user's field of view. Virtual-reality system 500 may include a front rigid body 502 and a band 504 shaped to fit around a user's head. Virtual-reality system 500 may also include output audio transducers 506(A) and 506(B). Furthermore, while not shown in FIG. 5, front rigid body 502 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an artificial-reality experience.
[0082] Artificial-reality systems may include a variety of types of visual feedback mechanisms. For example, display devices in augmented-reality system 400 and/or virtual-reality system 500 may include one or more liquid crystal displays (LCDs), light-emitting diode (LED) displays, organic LED (OLED) displays, digital light processing (DLP) micro-displays, liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial-reality systems may include a single display screen for both eyes or may provide a display screen for each eye, which may allow for additional flexibility for varifocal adjustments or for correcting a user's refractive error. Some of these artificial-reality systems may also include optical subsystems having one or more lenses (e.g., conventional concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view a display screen. These optical subsystems may serve a variety of purposes, including to collimate (e.g., make an object appear at a greater distance than its physical distance), to magnify (e.g., make an object appear larger than its actual size), and/or to relay light (to, e.g., the viewer's eyes). These optical subsystems may be used in a non-pupil-forming architecture (such as a single lens configuration that directly collimates light but results in so-called pincushion distortion) and/or a pupil-forming architecture (such as a multi-lens configuration that produces so-called barrel distortion to nullify pincushion distortion).
[0083] In addition to or instead of using display screens, some artificial-reality systems described herein may include one or more projection systems. For example, display devices in augmented-reality system 400 and/or virtual-reality system 500 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both artificial-reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Artificial-reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.
[0084] The artificial-reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented-reality system 400 and/or virtual-reality system 500 may include one or more optical sensors, such as two- dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An artificial-reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.
[0085] The artificial-reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.
[0086] In some embodiments, the artificial-reality systems described herein may also include tactile (i.e., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floormats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
[0087] By providing haptic sensations, audible content, and/or visual content, artificial-reality systems may create an entire virtual experience or enhance a user's real- world experience in a variety of contexts and environments. For instance, artificial-reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Artificial-reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
[0088] In some embodiments, the systems described herein may also include an eye tracking subsystem designed to identify and track various characteristics of a user's eye(s), such as the user's gaze direction. The phrase "eye-tracking" may, in some examples, refer to a process by which the position, orientation, and/or motion of an eye is measured, detected, sensed, determined, and/or monitored. The disclosed systems may measure the position, orientation, and/or motion of an eye in a variety of different ways, including through the use of various optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc. An eye-tracking subsystem may be configured in a number of different ways and may include a variety of different eye-tracking hardware components or other computer-vision components. For example, an eye-tracking subsystem may include a variety of different optical sensors, such as two-dimensional (2D) or 3D cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. In this example, a processing subsystem may process data from one or more of these sensors to measure, detect, determine, and/or otherwise monitor the position, orientation, and/or motion of the user's eye(s).
[0089] The handheld controller 100 or 200, as described above, may also include other electronic components, both on its exterior and on its interior. For example, as shown in FIG. 6, an exemplary handheld controller 600 (e.g., serving as an embodiment of handheld controller 100 or 200) may include a main board 602 with a local processor 604 (e.g., serving as transmitter 110 and/or receiver 120, or serving as a processor for interpreting input and generating output) connected to a battery 606, a heat sink 608, an antenna 610, haptic devices 612 (e.g., serving as haptic actuators 160), cameras 614 (e.g., serving as cameras 212), and potentially other printed circuit boards (PCBs) 620. PCBs 602 and 620 may be configured to process inputs from trigger button 140, input buttons 150, pressure sensor 132, and capacitive sensor 134. PCBs 602 and 620 or other computing components may be configured to perform some or all of the computing to determine the device's current location and/or to interpret input provided by the user via handheld controller 600. In other embodiments, handheld controller 600 may communicate with other devices (e.g., a virtual-reality system 500) to assist in performing the computing.
[0090] These electronics may further include a communications module that communicates with a head-mounted display of an artificial-reality system. Antenna 610 may be part of this communications module. The communications module may transmit and receive communications from a corresponding communications module on an artificial-reality device. The internal electronics may further include an imaging module including at least one camera 614 that is configured to acquire images of the environment surrounding handheld controller 600. Moreover, the internal electronics of the handheld controller 600 may include a tracking module configured to track the location of handheld controller 600 in free space using the images acquired by cameras 614. PCBs 602 and 620 may then analyze the images to track the location of handheld controller 600 without having line of sight between handheld controller 600 and any main processing components of an external artificial-reality system.
[0091] In some cases, a pair of handheld controllers 600 may be used simultaneously. Consequently, cameras 614 may be used to capture images of the current surroundings of each handheld controller 600. Each handheld controller 600 may process the images that it captures with its cameras 614 using local processor 604. Additionally or alternatively, the two handheld controllers 600 may share image data with each other, so that each handheld controller 600 has its own images along with other images received via the communications module. These images may be pieced together to determine depth, determine relative locations, determine coordinates in space, or to otherwise calculate the exact or relative location of handheld controller 600. Each handheld controller 600 may thus determine its location on its own or may determine its location in relation to the other handheld controller 600 using imaging data from those devices.
[0092] In some cases, handheld controller 600 may begin tracking its location using two or more cameras 614. Once the handheld controller 600 has established its location in space, the tracking may continue using fewer cameras 614. Thus, if handheld controller 600 started tracking its location using three cameras 614, handheld controller 600 may transition to tracking its location using two cameras 614 or using one camera 614. Similarly, if handheld controller 600 started tracking its location using two cameras 614, once calibrated or once an initial map has been created, handheld controller 600 may continue tracking its location using a single camera 614. If handheld controller 600 loses its position in space or becomes unaware of its exact location (due to loss of signal from a camera, for example), two or more additional cameras 614 may be initiated to assist in re-determining the location in space of handheld controller 600.
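A minimal sketch of the camera-count transition just described, assuming a hypothetical tracking-confidence score in [0, 1]; the thresholds and function name are illustrative, not part of the disclosed embodiments.

```python
def cameras_needed(tracking_confidence: float,
                   currently_active: int,
                   total_cameras: int = 3,
                   min_cameras: int = 1) -> int:
    """Illustrative policy: once tracking is well established, drop to fewer
    cameras to save power; if tracking is lost, bring extra cameras back up."""
    if tracking_confidence < 0.3:          # position lost or highly uncertain
        return total_cameras                # re-acquire with all cameras
    if tracking_confidence > 0.9:          # map established and stable
        return max(min_cameras, currently_active - 1)
    return currently_active                 # otherwise keep the current set

# Example: start with three cameras, converge to one as confidence grows,
# then re-enable all cameras when tracking is lost.
active = 3
for confidence in (0.5, 0.92, 0.95, 0.97, 0.2):
    active = cameras_needed(confidence, active)
    print(confidence, active)
```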
[0093] In some embodiments, each handheld controller 600 may be configured to access the image data taken by its cameras 614 (and perhaps additionally use image data from cameras 614 on the other handheld controller 600) to create a map of the surrounding environment. The map may be created over time as subsequent images are taken and processed. The map may identify objects within an environment and may note the location of those objects within the environment. As time passes, and as handheld controllers 600 change locations, additional images may be taken and analyzed. These additional images may indicate where the user is in relation to the identified objects, what the distance is between the user and the objects, what the distance is between handheld controller 600 and the user, and what the distance is between handheld controllers 600. Calculated distances between objects may be refined over time as new images are captured and analyzed. Thus, the map of the environment around handheld controller 600 may be continually updated and improved. Moreover, if objects (e.g., people) within the environment move to different locations, the updated map may reflect these changes.
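The following toy sketch illustrates one way such distance estimates might be refined as new images are processed, using an exponential moving average; the class name, smoothing factor, and measurement values are illustrative assumptions rather than the disclosed method.

```python
class EnvironmentMap:
    """Toy map that refines per-object distance estimates as new image-derived
    measurements arrive (exponential moving average; one possible choice)."""

    def __init__(self, smoothing: float = 0.2):
        self.smoothing = smoothing
        self.distances = {}  # object id -> refined distance in meters

    def update(self, object_id: str, measured_distance_m: float) -> float:
        previous = self.distances.get(object_id)
        if previous is None:
            refined = measured_distance_m
        else:
            # Blend the new measurement with the running estimate.
            refined = (1 - self.smoothing) * previous + self.smoothing * measured_distance_m
        self.distances[object_id] = refined
        return refined

env_map = EnvironmentMap()
for measurement in (2.10, 2.04, 2.07, 2.05):   # noisy distances to one object
    print(env_map.update("chair", measurement))
```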
[0094] Because the location of handheld controller 600 may be determined solely using the images captured by handheld controller 600 itself, and may not depend on outside cameras or sensors, handheld controller 600 may be truly self-tracked. Handheld controller 600 may not need any outside sensors or cameras or other devices to determine, by itself, its own location in free space. Implementations using a single camera 614 may be produced and may function using the camera data from that single camera 614. In other implementations, two cameras 614 may be used. By using only one or two cameras 614, the cost and complexity of handheld controller 600 may be reduced, its weight may be lowered, and its battery life may be increased, as there are fewer components to power. Still further, with fewer cameras 614, it is less likely that one of the cameras will be occluded and provide faulty (or no) information.
[0095] Because handheld controller 600 may be held by a user, and because handheld controller 600 may determine its own location independently, handheld controller 600 may also be able to determine the position, location, and pose of the user holding handheld controller 600. Cameras 614 on handheld controller 600 may have wide-angle lenses that may capture portions of the user's body. From these images, handheld controller 600 may determine how the user's body is positioned, which direction the user's body is moving, and how far those body parts are from handheld controller 600. Knowing these distances and its own location in free space may allow handheld controller 600 to calculate the location of the user holding handheld controller 600. Moreover, in some embodiments, cameras 614 (e.g., wide-angle cameras) may capture images of the user's face and eyes. These images may allow handheld controller 600 to track the movements of the user's eyes and determine where the user intends to move or determine what the user is looking at. Knowing where the user is within the environment and knowing where the user is likely to move, along with the knowledge of its own location, handheld controller 600 may generate warning beeps or buzzes to keep the user from running into objects within the environment.
[0096] Still further, because cameras 614 of handheld controller 600 may be continuously capturing image data, some portions of that data may be redacted or blurred for privacy reasons. For instance, users within a room in which handheld controllers 600 are being used may not wish to be recorded, or the owner of a property may not wish to have certain portions of their property recorded. In such cases, handheld controller 600 may be configured to identify faces in the images and blur those faces. Additionally or alternatively, the image data may be used for calculations and then immediately discarded. Other privacy concerns may be addressed via policies.
[0097] Accordingly, a self-tracked peripheral device, embodied as a handheld controller (e.g., handheld controller 100, 200, and 600), is described herein. The self-tracked peripheral device may track itself in free space without any external cameras on other devices or in other parts of the environment. Moreover, the self-tracked peripheral device may determine its current location without needing line of sight to any other sensors or cameras. The self-tracked peripheral device may be lighter and less costly than traditional devices due to the implementation of fewer cameras. Moreover, the reduced number of cameras may reduce the occurrence of occlusions and may also increase battery life in the peripheral device.
[0098] As noted above, the self-tracked peripheral devices described herein may be used in conjunction with other artificial-reality systems. Embodiments of these artificial- reality systems are described in conjunction with FIGS. 7-9 below. The self-tracked and/or pressure-sensing peripheral devices described herein (e.g., handheld controller 100, 200, and 600) may be used in conjunction with any of these devices, either alone or in combination. [0099] FIG. 7 illustrates a computing architecture 700 with multiple different modules.
Some or all of these modules may be incorporated in handheld controller 100, 200, or 600 (which may operate as a self-tracking and/or pressure-sensing peripheral device, as described above) or in the artificial-reality device to which the peripheral device is tethered. The computing architecture 700 may include, for example, a display subsystem 701, which may represent and/or include any of the various display components and/or attributes described herein. In this example, display subsystem 701 may interact with a processing subsystem 710, including any of its various subcomponents. For instance, display subsystem 701 may interact with a processor 711, memory 712, a communications module 713 (which may include or represent a variety of different wired or wireless connections, such as WiFi, Bluetooth®, Global Positioning System (GPS) modules, cellular or other radios, etc.), and/or a data store 714 (which may include or represent a variety of different volatile or non-volatile data storage devices). In some examples, processor 711 may serve as the processor that interprets input data received from handheld controller 100, 200, and 600, as well as generates output data for use by handheld controller 100, 200, and 600, as discussed above.
[0100] In some cases, processing subsystem 710 may be embedded within, located on, or coupled to an artificial-reality device. In other cases, processing subsystem 710 may be separate from and/or external to the artificial-reality device (as part of, e.g., a separate computing device, as described in greater detail below). In some examples, processing subsystem 710 may include one or more special-purpose, hardware-based accelerators, such as machine-learning accelerators designed to perform tasks associated with computer-vision processing.
[0101] In one example, computing architecture 700 may also include an authentication subsystem 702. Authentication subsystem 702 may be embedded within and/or coupled to an artificial-reality device. Authentication subsystem 702 may include a variety of different hardware components, such as cameras, microphones, iris scanners, facial scanners, and/or other hardware components (such as the optical sensors and acoustic transducers incorporated into an artificial-reality device), each of which may be used to authenticate a user. In some cases, some or all of the functions of the artificial-reality device may be locked until the user is authenticated. For instance, and as will be explained in greater detail below, a user may use authentication subsystem 702 to authenticate him or herself and, in turn, transition the artificial-reality device from a "locked" state, in which some or all of the device's functionality is locked, to an "unlocked" state, in which some or all of the device's functionality is available to the user. In other cases, authentication subsystem 702 may authenticate the user to a network, for example, that provides data to the artificial- reality device.
[0102] In some examples, authentication subsystem 702 may authenticate the user based on the user's detected voice patterns, based on an iris scan of the user, based on a facial scan, based on a fingerprint scan, or based on some other form of biometric authentication. Authentication subsystem 702 may be mounted on or embedded within the disclosed artificial-reality devices in a variety of ways. In some examples, authentication subsystem 702 may be part of an external device (described below) to which the artificial- reality device is connected.
[0103] In some embodiments, computing architecture 700 may also include an eye tracking subsystem 703 designed to identify and track various characteristics of a user's eye(s), such as their gaze direction. Eye-tracking subsystem 703 may include a variety of different eye-tracking hardware components or other computer vision components. For example, eye-tracking subsystem 703 may include optical sensors, such as two-dimensional (2D) or 3D cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR (Light Detection And Ranging) sensors, and/or any other suitable type or form of optical sensor. In some examples, a processing subsystem (such as processing subsystem 710 in FIG. 7) may process data from one or more of these sensors to measure, detect, determine, and/or otherwise monitor the position, orientation, and/or motion of the user's eye(s). [0104] In one example, eye-tracking subsystem 703 may be configured to identify and measure the inter-pupillary distance (IPD) of a user. In some embodiments, eye-tracking subsystem 703 may measure and/or calculate the IPD of the user while the user is wearing the artificial-reality device. In these embodiments, eye-tracking subsystem 703 may detect the positions of a user's eyes and may use this information to calculate the user's IPD.
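As a simple illustration of the IPD calculation described above, the following sketch computes the distance between the two tracked pupil positions; the coordinate frame and example values are assumptions for illustration.

```python
import numpy as np

def interpupillary_distance_mm(left_pupil_m, right_pupil_m) -> float:
    """Compute IPD from the 3D pupil positions reported by an eye tracker
    (positions in meters, in a common device-fixed coordinate frame)."""
    left = np.asarray(left_pupil_m, dtype=float)
    right = np.asarray(right_pupil_m, dtype=float)
    return float(np.linalg.norm(right - left) * 1000.0)

# Example: pupils detected roughly 63 mm apart in the headset frame.
print(interpupillary_distance_mm((-0.0315, 0.0, 0.012), (0.0315, 0.0, 0.012)))
```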
[0105] Eye-tracking subsystem 703 may track a user's eye position and/or eye movement in a variety of ways. In one example, one or more light sources and/or optical sensors may capture an image of the user's eyes. Eye-tracking subsystem 703 may then use the captured information to determine the user's inter-pupillary distance, interocular distance, and/or a 3D position of each eye relative to the artificial-reality device (e.g., for distortion adjustment purposes), including a magnitude of torsion and rotation (i.e., roll, pitch, and yaw) and/or gaze directions for each eye. In one example, infrared light may be emitted by eye-tracking subsystem 703 and reflected from each eye. The reflected light may be received or detected by an optical sensor and analyzed to extract eye rotation data from changes in the infrared light reflected by each eye.
[0106] Eye-tracking subsystem 703 may use any of a variety of different methods to track the eyes of an artificial-reality device user. For example, a light source (e.g., infrared light-emitting diodes) may emit a dot pattern onto each eye of the user. Eye-tracking subsystem 703 may then detect (e.g., via an optical sensor coupled to the artificial-reality device) and analyze a reflection of the dot pattern from each eye of the user to identify a location of each pupil of the user. Accordingly, eye-tracking subsystem 703 may track up to six degrees of freedom of each eye (i.e., 3D position, roll, pitch, and yaw) and at least a subset of the tracked quantities may be combined from two eyes of a user to estimate a gaze point (i.e., a 3D location or position in a virtual scene where the user is looking) and/or an IPD. [0107] In some cases, the distance between a user's pupil and a display may change as the user's eye moves to look in different directions. The varying distance between a pupil and a display as viewing direction changes may be referred to as "pupil swim" and may contribute to distortion perceived by the user as a result of light focusing in different locations as the distance between the pupil and the display changes. Accordingly, measuring distortion at different eye positions and pupil distances relative to displays and generating distortion corrections for different positions and distances may allow mitigation of distortion caused by "pupil swim" by tracking the 3D position of a user's eyes and applying a distortion correction corresponding to the 3D position of each of the user's eyes at a given point in time. Thus, knowing the 3D position of each of a user's eyes may allow for the mitigation of distortion caused by changes in the distance between the pupil of the eye and the display by applying a distortion correction for each 3D eye position. Furthermore, as noted above, knowing the position of each of the user's eyes may also enable eye-tracking subsystem 703 to make automated adjustments for a user's IPD.
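The following sketch illustrates one possible way to apply a distortion correction keyed to the tracked 3D eye position, as described above, using a nearest-neighbor lookup over a hypothetical table of calibrated positions; the table, names, and lookup strategy are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

# Hypothetical table of distortion-correction profiles measured at a few
# calibrated 3D eye positions (meters, device frame). The string labels stand
# in for whatever correction parameters a renderer would actually apply.
CALIBRATED_POSITIONS = np.array([
    [0.000, 0.000, 0.012],
    [0.004, 0.000, 0.012],
    [-0.004, 0.000, 0.012],
    [0.000, 0.003, 0.012],
])
CORRECTION_IDS = ["center", "right", "left", "up"]

def select_correction(eye_position_m) -> str:
    """Pick the correction profile calibrated nearest to the tracked eye
    position (nearest-neighbor; a real system might interpolate instead)."""
    deltas = CALIBRATED_POSITIONS - np.asarray(eye_position_m, dtype=float)
    nearest = int(np.argmin(np.linalg.norm(deltas, axis=1)))
    return CORRECTION_IDS[nearest]

print(select_correction([0.0035, 0.0005, 0.012]))  # -> "right"
```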
[0108] In some embodiments, display subsystem 701 discussed above may include a variety of additional subsystems that may work in conjunction with eye-tracking subsystem 703. For example, display subsystem 701 may include a varifocal actuation subsystem, a scene-rendering module, and a vergence processing module. The varifocal subsystem may cause left and right display elements to vary the focal distance of the display device. In one embodiment, the varifocal subsystem may physically change the distance between a display and the optics through which it is viewed by moving the display, the optics, or both. Additionally, moving or translating two lenses relative to each other may also be used to change the focal distance of the display. Thus, the varifocal subsystem may include actuators or motors that move displays and/or optics to change the distance between them. This varifocal subsystem may be separate from or integrated into display subsystem 701. The varifocal subsystem may also be integrated into or separate from the actuation subsystem and/or eye-tracking subsystem 703.
[0109] In one example, display subsystem 701 may include a vergence processing module configured to determine a vergence depth of a user's gaze based on a gaze point and/or an estimated intersection of the gaze lines determined by eye-tracking subsystem 703. Vergence may refer to the simultaneous movement or rotation of both eyes in opposite directions to maintain single binocular vision, which may be naturally and automatically performed by the human eye. Thus, a location where a user's eyes are verged is where the user is looking and is also typically the location where the user's eyes are focused. For example, the vergence processing module may triangulate gaze lines to estimate a distance or depth from the user associated with intersection of the gaze lines. The depth associated with intersection of the gaze lines may then be used as an approximation for the accommodation distance, which may identify a distance from the user where the user's eyes are directed. Thus, the vergence distance may allow for the determination of a location where the user's eyes should be focused and a depth from the user's eyes at which the eyes are focused, thereby providing information (such as an object or plane of focus) for rendering adjustments to the virtual scene.
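A minimal sketch of such a triangulation, assuming horizontal gaze angles measured from straight ahead and a fixation point near the midline; the geometry of two gaze lines separated by the IPD gives the depth below, and the function name and example values are illustrative.

```python
import numpy as np

def vergence_depth_m(ipd_m: float, left_gaze_deg: float, right_gaze_deg: float) -> float:
    """Estimate the vergence (fixation) depth from the horizontal gaze angles
    of the two eyes, measured from straight ahead (positive = toward the nose).

    For a fixation point near the midline, the gaze lines of two eyes separated
    by the IPD intersect at a depth of roughly ipd / (tan(left) + tan(right)).
    """
    denom = np.tan(np.radians(left_gaze_deg)) + np.tan(np.radians(right_gaze_deg))
    if denom <= 0:
        return float("inf")  # parallel or diverging eyes: effectively at infinity
    return float(ipd_m / denom)

print(vergence_depth_m(0.063, 1.8, 1.8))   # ~1.0 m fixation distance
print(vergence_depth_m(0.063, 6.0, 6.0))   # ~0.3 m (near work)
```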
[0110] The vergence processing module may coordinate with eye-tracking subsystem 703 to make adjustments to display subsystem 701 to account for a user's vergence depth. When the user is focused on something at a distance, the user's pupils may be slightly farther apart than when the user is focused on something close. Eye-tracking subsystem 703 may obtain information about the user's vergence or focus depth and may adjust display subsystem 701 to be closer together when the user's eyes focus or verge on something close and to be farther apart when the user's eyes focus or verge on something at a distance.
[0111] The eye-tracking information generated by eye-tracking subsystem 703 may be used, for example, to modify various aspects of how different computer-generated images are presented. In some embodiments, for example, display subsystem 701 may be configured to modify, based on information generated by eye-tracking subsystem 703, at least one aspect of how the computer-generated images are presented. For instance, the computer-generated images may be modified based on the user's eye movement, such that if a user is looking up, the computer-generated images may be moved upward on the screen. Similarly, if the user is looking to the side or down, the computer-generated images may be moved to the side or downward on the screen. If the user's eyes are closed, the computer-generated images may be paused or removed from the display and resumed once the user's eyes are back open.
[0112] In some embodiments, computing architecture 700 may also include a face tracking subsystem 705 and/or a body-tracking subsystem 707 configured to identify and track the movement of, and/or various characteristics of, a user's face and/or other body parts. In some examples, face-tracking subsystem 705 and/or body-tracking subsystem 707 may include one or more body- and/or face-tracking light sources and/or optical sensors, along with potentially other sensors or hardware components. These components may be positioned or directed toward the user's face and/or body so as to capture movements of the user's mouth, cheeks, lips, chin, etc., as well as potentially movement of the user's body, including their arms, legs, hands, feet, torso, etc.
[0113] As noted, face-tracking subsystem 705 may be configured to identify and track facial expressions of a user. These facial expressions may be identified by tracking movements of individual parts of the user's face, as detailed above. The user's facial expressions may change over time and, as such, face-tracking subsystem 705 may be configured to operate on a continuous or continual basis to track the user's changing facial expressions. Classifications of these facial expressions may be stored in data store 714 of processing subsystem 710. [0114] Similarly, body-tracking subsystem 707 may be configured to identify and track a position of substantially any part of the user's body. For example, body-tracking subsystem 707 may log initial positions for a user's arms, hands, legs, or feet and may note how those body parts move over time. In some cases, these body movements may be used as inputs to a processing subsystem of the artificial-reality device. For example, if a user wants to open or close a display, the user may wave their hand or arm in a certain manner or perform a certain gesture (such as a snap or finger-closing motion). Or, if the user wants to interact with a virtual element presented on the display, the face/body-tracking component or other components of body-tracking subsystem 707 may track the user's body movements and use those movements as inputs to interact with an artificial reality generated by the artificial-reality device and/or to interact with software applications running on processing subsystem 710. [0115] As with eye-tracking subsystem 703, face-tracking subsystem 705 and/or body-tracking subsystem 707 may be incorporated within and/or coupled to the artificial- reality devices disclosed herein in a variety of ways. In one example, all or a portion of face tracking subsystem 705 and/or body-tracking subsystem 707 may be embedded within and/or attached to an outer portion of the artificial-reality device. For example, one or more face/body-tracking components (which may represent, e.g., one or more light sources and/or optical sensors) may be embedded within and/or positioned near an outer portion of the artificial-reality device. By doing so, the face/body-tracking component(s) may be positioned far enough away from the user's face and/or body to have a clear view of the user's facial expressions and/or facial and body movements.
[0116] In some examples, computing architecture 700 may also include an imaging subsystem 706 configured to image a local environment of the artificial-reality device. Imaging subsystem 706 may include or incorporate any of a variety of different imaging components and elements, such as light sources and optical sensors. For example, imaging subsystem 706 may include one or more world-facing cameras that are configured to capture images of the user's surroundings. These world-facing cameras may be mounted on or coupled to the artificial-reality device in a variety of different positions and patterns. In one example, the images captured by these world-facing cameras may be processed by processing subsystem 710. In some cases, the images may be stitched together to provide a 360-degree view of the user's local environment. In one embodiment, some or all of this surrounding view may be presented on the display of the artificial-reality device. As such, the user may be able to see either to the side or behind themselves simply by viewing the surrounding view presented on the display. In some examples, the artificial-reality device may have substantially any number of world-facing cameras.
[0117] In some embodiments, the artificial-reality device may use the above-described world-facing cameras to map a user's and/or device's environment using techniques referred to as "simultaneous localization and mapping" (SLAM). SLAM mapping and location identifying techniques may involve a variety of hardware and software tools that can create or update a map of an environment while simultaneously keeping track of a user's location within the mapped environment. SLAM may use many different types of sensors to create a map and determine a user's position within the map.
[0118] SLAM techniques used by an artificial-reality device may, for example, use data from optical sensors to determine a user's location. Radios, including WiFi, Bluetooth, GPS, cellular, or other communication devices, may also be used to determine a user's location relative to a radio transceiver or group of transceivers (e.g., a WiFi router or group of GPS satellites). Acoustic sensors such as microphone arrays or 2D or 3D sonar sensors may also be used to determine a user's location within an environment. The artificial-reality device may incorporate any or all of these types of sensors to perform SLAM operations such as creating and continually updating maps of the user's current environment. In at least some of the embodiments described herein, SLAM data generated by these sensors may be referred to as "environmental data" and may indicate a user's current environment. This data may be stored in a local or remote data store (e.g., a cloud data store) and may be provided to the artificial-reality device on demand.
[0119] In some examples, computing architecture 700 may include a sensor subsystem 709 configured to detect, and generate sensor data that reflects, changes in a local environment of the artificial-reality device. Sensor subsystem 709 may include a variety of different sensors and sensing elements, examples of which include, without limitation, a position sensor, an inertial measurement unit (IMU), a depth camera assembly, an audio sensor, a video sensor, a location sensor (e.g., GPS), a light sensor, and/or any sensor or hardware component from any other subsystem described herein. In embodiments in which the sensors include an IMU, the IMU may generate calibration data based on measurement signals from the sensor(s). Examples of IMUs may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.
[0120] The above-described sensor data may include a change in location (e.g., from a GPS location sensor), a change in audible surroundings (e.g., from an audio sensor), a change in visual surroundings (e.g., from a camera or other light sensor), a change in inertia (e.g., from an IMU), or other changes that may indicate that the user's environment has changed. A change in the amount of ambient light, for example, may be detected by a light sensor. In response to a detected increase in ambient light, display subsystem 701 (in conjunction with processing subsystem 710) may increase the brightness of the display. An increase in ambient sound (e.g., as detected by an input audio transducer) may result in an increase in sound amplitude (e.g., in an output audio transducer). Other environmental changes may also be detected and implemented as feedback within the artificial-reality device's computing environment. In some cases, sensor subsystem 709 may generate measurement signals in response to motion of the artificial-reality device.
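As a simple illustration of this kind of sensor-driven feedback, the sketch below maps an ambient-light reading to a display brightness level; the linear mapping, thresholds, and function name are illustrative assumptions rather than the disclosed behavior.

```python
def adjust_brightness(ambient_lux: float,
                      min_brightness: float = 0.1,
                      max_brightness: float = 1.0,
                      lux_at_max: float = 10_000.0) -> float:
    """Map an ambient-light reading to a display brightness setting: brighter
    surroundings yield a brighter display so content stays legible."""
    level = min_brightness + (max_brightness - min_brightness) * min(
        ambient_lux / lux_at_max, 1.0)
    return round(level, 3)

for lux in (50, 500, 5_000, 50_000):       # dim room ... direct sunlight
    print(lux, adjust_brightness(lux))
```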
[0121] When a user is using the artificial-reality device in a given environment, the user may be interacting with other users or other electronic devices that serve as audio sources. In some cases, it may be desirable to determine where the audio sources are located relative to the user and then present audio from the audio sources to the user as if they were coming from the location of the audio source. The process of determining where the audio sources are located relative to the user may be referred to as "localization," and the process of rendering playback of the audio source signal to appear as if it is coming from a specific direction may be referred to as "spatialization."
[0122] Localizing an audio source may be performed in a variety of different ways. In some cases, a subsystem of the artificial-reality device (such as processing subsystem 710) may initiate a direction-of-arrival (DOA) analysis to determine the location of a sound source. The DOA analysis may include analyzing the intensity, spectra, and/or arrival time of each sound at the artificial-reality device to determine the direction from which the sounds originated. The DOA analysis may include any suitable algorithm for analyzing the surrounding acoustic environment in which the artificial-reality device is located.
[0123] For example, the DOA analysis may be designed to receive input signals from a microphone and apply digital signal processing algorithms to the input signals to estimate the direction of arrival. These algorithms may include, for example, delay and sum algorithms where the input signal is sampled and the resulting weighted and delayed versions of the sampled signal are averaged together to determine a direction of arrival. A least mean squared (LMS) algorithm may also be implemented to create an adaptive filter. This adaptive filter may then be used to identify differences in signal intensity, for example, or differences in time of arrival. These differences may then be used to estimate the direction of arrival. In another embodiment, the DOA may be determined by converting the input signals into the frequency domain and selecting specific bins within the time-frequency (TF) domain to process. Each selected TF bin may be processed to determine whether that bin includes a portion of the audio spectrum with a direct-path audio signal. Those bins having a portion of the direct-path signal may then be analyzed to identify the angle at which a microphone array received the direct-path audio signal. The determined angle may then be used to identify the direction of arrival for the received input signal. Other algorithms not listed above may also be used alone or in combination with the above algorithms to determine DOA.
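For illustration, the following sketch implements a basic (unweighted) delay-and-sum scan over candidate arrival angles for a two-microphone case: each candidate delay aligns the channels for a plane wave from that angle, and the angle whose summed output carries the most energy is taken as the DOA. The sample rate, spacing, and synthetic test signal are illustrative assumptions, not parameters of the disclosed system.

```python
import numpy as np

FS = 48_000      # sample rate (Hz)
C = 343.0        # speed of sound (m/s)
SPACING = 0.3    # distance between the two microphones (m)

def delay_and_sum_doa(mic_a: np.ndarray, mic_b: np.ndarray) -> int:
    """Scan candidate arrival angles; for each, delay one channel so that a
    plane wave from that angle would line up, sum the two channels, and keep
    the angle whose summed output carries the most energy."""
    best_angle, best_energy = 0, -np.inf
    for angle_deg in range(-90, 91):
        delay = int(round(SPACING * np.sin(np.radians(angle_deg)) / C * FS))
        aligned = mic_a + np.roll(mic_b, delay)
        energy = float(np.sum(aligned ** 2))
        if energy > best_energy:
            best_angle, best_energy = angle_deg, energy
    return best_angle

# Synthetic check: a broadband burst arriving from 30 degrees off broadside.
rng = np.random.default_rng(0)
burst = rng.standard_normal(4096)
true_delay = int(round(SPACING * np.sin(np.radians(30)) / C * FS))
print(delay_and_sum_doa(burst, np.roll(burst, -true_delay)))  # prints 30
```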
[0124] In some embodiments, different users may perceive the source of a sound as coming from slightly different locations. This may be the result of each user having a unique head-related transfer function (HRTF), which may be dictated by a user's anatomy, including ear canal length and the positioning of the ear drum. In these embodiments, the artificial-reality device may provide an alignment and orientation guide, which the user may follow to customize the sound signal presented to the user based on their unique HRTF. In some embodiments, the artificial-reality device may use a variety of different array transfer functions (e.g., any of the DOA algorithms identified above) to estimate the direction of arrival for the sounds. Once the direction of arrival has been determined, the artificial-reality device may play back sounds to the user according to the user's unique HRTF. Accordingly, the DOA estimation generated using the array transfer function (ATF) may be used to determine the direction from which the sounds are to be played. The playback sounds may be further refined based on how that specific user hears sounds according to the HRTF.
[0125] In addition to or as an alternative to performing a DOA estimation, the artificial-reality device may perform localization based on information received from other types of sensors, such as sensor subsystem 709. These sensors may include cameras, IR sensors, heat sensors, motion sensors, GPS receivers, or in some cases, sensors that detect a user's eye movements. For example, as noted above, artificial-reality device may include eye tracking subsystem 703 that determines where the user is looking. Often, the user's eyes will look at the source of the sound, if only briefly. Such clues provided by the user's eyes may further aid in determining the location of a sound source. Other sensors such as cameras, heat sensors, and IR sensors may also indicate the location of a user, the location of an electronic device, or the location of another sound source. Any or all of the above methods may be used individually or in combination to determine the location of a sound source and may further be used to update the location of a sound source over time.
[0126] Some embodiments may implement the determined DOA to generate a more customized output audio signal for the user. For instance, an "acoustic transfer function" may characterize or define how a sound is received from a given location. More specifically, an acoustic transfer function may define the relationship between parameters of a sound at its source location and the parameters by which the sound signal is detected (e.g., detected by a microphone array or detected by a user's ear). The artificial-reality device may include one or more acoustic sensors that detect sounds within range of the device. A processing subsystem of the artificial-reality device (such as processing subsystem 710) may estimate a DOA for the detected sounds (using, e.g., any of the methods identified above) and, based on the parameters of the detected sounds, may generate an acoustic transfer function that is specific to the location of the device. This customized acoustic transfer function may thus be used to generate a spatialized output audio signal where the sound is perceived as coming from a specific location.
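A minimal sketch of such spatialized rendering, substituting a crude interaural time difference (Woodworth approximation) and level difference for a measured HRTF; the constants and function name are illustrative assumptions, and a production system would apply the listener's actual HRTF as described above.

```python
import numpy as np

FS = 48_000           # sample rate (Hz)
C = 343.0             # speed of sound (m/s)
HEAD_RADIUS = 0.0875  # approximate head radius (m)

def spatialize(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Render a mono signal as a stereo pair perceived toward azimuth_deg
    (positive = listener's right) by delaying and attenuating the far ear
    relative to the near ear."""
    az = np.radians(azimuth_deg)
    itd_s = (HEAD_RADIUS / C) * (abs(az) + abs(np.sin(az)))  # interaural time difference
    delay = int(round(itd_s * FS))                           # far ear lags by this many samples
    far_gain = 10 ** (-(abs(azimuth_deg) / 90) * 6 / 20)     # far ear up to ~6 dB quieter

    near = np.concatenate([mono, np.zeros(delay)])            # near ear: undelayed
    far = far_gain * np.concatenate([np.zeros(delay), mono])  # far ear: delayed, attenuated
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

tone = np.sin(2 * np.pi * 440 * np.arange(0, 0.5, 1 / FS))
stereo = spatialize(tone, 45.0)         # source ~45 degrees to the listener's right
print(stereo.shape)                     # (number of samples, 2)
```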
[0127] Once the location of the sound source or sources is known, the artificial-reality device may re-render (i.e., spatialize) the sound signals to sound as if coming from the direction of that sound source. The artificial-reality device may apply filters or other digital signal processing that alter the intensity, spectra, or arrival time of the sound signal. The digital signal processing may be applied in such a way that the sound signal is perceived as originating from the determined location. The artificial-reality device may amplify or subdue certain frequencies or change the time that the signal arrives at each ear. In some cases, the artificial-reality device may create an acoustic transfer function that is specific to the location of the device and the detected direction of arrival of the sound signal. In some embodiments, the artificial-reality device may re-render the source signal in a stereo device or multi-speaker device (e.g., a surround sound device). In such cases, separate and distinct audio signals may be sent to each speaker. Each of these audio signals may be altered according to the user's HRTF and according to measurements of the user's location and the location of the sound source to sound as if they are coming from the determined location of the sound source. Accordingly, in this manner, the artificial-reality device (or speakers associated with the device) may re-render an audio signal to sound as if originating from a specific location.
[0128] In some examples, computing architecture 700 may also include a battery subsystem 708 configured to provide electrical power for the artificial-reality device. Battery subsystem 708 may include a variety of different components and elements, examples of which include, without limitation, lithium ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power source or power storage device. Battery subsystem 708 may be incorporated into and/or otherwise associated with the artificial-reality devices disclosed herein in a variety of ways. In some examples, all or a portion of battery subsystem 708 may be embedded or disposed within a back portion or area of the artificial-reality device.
[0129] In some examples, the artificial-reality device may include or be connected to an external device (e.g., a paired device), such as a neckband, charging case, smart watch, smartphone, wrist band, other wearable device, handheld controller, tablet computer, laptop computer, and/or other external compute device, etc. This external device generally represents any type or form of paired device.
[0130] The external device may be coupled to the artificial-reality device via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, the artificial-reality device and the external device may operate independently without any wired or wireless connection between them.
[0131] Pairing external devices with the artificial-reality device may enable the artificial-reality device to achieve certain form factors while still providing sufficient battery and computation power for expanded capabilities. Some or all of the battery power, computational resources, and/or additional features of the artificial-reality device may be provided by a paired device or shared between a paired device and the artificial-reality device, thus reducing the weight, heat profile, and form factor of the artificial-reality device overall while still retaining the desired functionality. For example, the external device may allow components that would otherwise be included on a device to be included in the external device since users may tolerate a heavier weight load in their pockets, shoulders, or hands than they would tolerate on their heads. The external device may also have a larger surface area over which to diffuse and disperse heat to the ambient environment.
[0132] Thus, an external device may allow for greater battery and computation capacity than might otherwise have been possible on a stand-alone headwear device. Since weight carried in the external device may be less invasive to a user than weight carried in the artificial-reality device, a user may tolerate wearing a lighter artificial-reality device and carrying or wearing the paired device for greater lengths of time than a user would tolerate wearing a heavy standalone artificial-reality device, thereby enabling users to more fully incorporate artificial-reality environments into their day-to-day activities.
[0133] The external device may be communicatively coupled with the artificial-reality device and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to the artificial-reality device. For example, the external device may include multiple acoustic transducers, such as the acoustic transducers 607 and 608 described above.
[0134] A processing subsystem on the external device may process information generated by the sensors on the external device and/or the artificial-reality device. For example, the processing subsystem may process information from a microphone array that describes sounds detected by the microphone array. For each detected sound, the processing subsystem may perform a DOA estimation to estimate a direction from which the detected sound arrived at the microphone array. As the microphone array detects sounds, the processing subsystem may populate an audio data set with the information. In embodiments in which the artificial-reality device includes an inertial measurement unit, the processing subsystem may compute all inertial and spatial calculations from the IMU located on the artificial-reality device. A connector may convey information between the artificial-reality device and the external device and between the artificial-reality device and the processing subsystem. The information may be in the form of optical data, electrical data, wireless data, or any other transmittable form. As noted, moving the processing of information generated by the artificial-reality device to the external device may reduce weight and heat in the artificial-reality device, making it more comfortable to the user. [0135] In some examples, computing architecture 700 may also include a notification subsystem 704. Notification subsystem 704 may be configured to generate user notifications that are communicated to the user. The user notifications may include audio-based notifications, haptics-based notifications, visual-based notifications, or other types of notifications. For example, notification subsystem 704 may generate an audio notification (via, e.g., acoustic transducers) when a text message or email is received (by, e.g., the artificial-reality device and/or an external device). In other cases, various haptic transducers may buzz or vibrate to instruct a user to move the display screen down from its storage position to a viewing position.
[0136] In another example, an IR camera may detect another artificial-reality device within the same room and/or an audio sensor may detect an inaudible frequency emitted by the other artificial-reality device. In this example, the artificial-reality device may display a message on the display instructing the user to switch to artificial reality mode so that the artificial-reality device and the detected device may interact. Many other types of notifications are also possible. In some cases, the artificial-reality device may respond automatically to the notification, while in other cases, the user may perform some type of interaction to respond to the notification.
[0137] In some examples, notification subsystem 704 may include one or more haptic components disposed in various locations on the artificial-reality device. These haptic transducers may be configured to generate haptic outputs, such as buzzes or vibrations. The haptic transducers may be positioned within the artificial-reality device in a variety of ways. Users may be able to detect haptic sensations from substantially any location on the artificial- reality device and, as such, the haptic transducers may be disposed throughout the device. [0138] In some cases, the haptic transducers may be disposed on or within the artificial-reality device in patterns. For instance, the haptic transducers may be arranged in rows or circles or lines throughout the artificial-reality device. These haptic transducers may be actuated at different times to generate different patterns that may be felt by the user. In some examples, the haptic transducers may be actuated in a certain manner to correspond to a particular notification. For instance, a short buzz on the right side of the artificial-reality device may indicate that the user has received a text message. A pattern of two short vibrations on the left side of the artificial-reality device may indicate that the user is receiving a phone call or may also indicate who that phone call is from. A string of vibrations from successive haptic transducers 805 arranged in a row may indicate that an interesting artificial reality feature is available in the user's current location and that the user should consider lowering the display. In addition, a pattern of vibrations that moves from right to left may indicate that the user should take a left turn at an intersection. Many other such notifications are possible, and the above-identified list is not intended to be limiting.
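For illustration, the sketch below encodes a few notification-to-pattern mappings in the spirit of the examples above; the transducer indices, durations, and notification names are hypothetical rather than taken from the embodiments described herein.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HapticPulse:
    transducer_id: int      # which transducer on the device to actuate
    duration_ms: int        # how long it vibrates
    delay_ms: int           # wait before this pulse starts

# Illustrative mapping from notification type to a spatial/temporal pattern
# (e.g., a short buzz on the right for a text message, a right-to-left sweep
# for a left turn).
PATTERNS = {
    "text_message": [HapticPulse(transducer_id=3, duration_ms=80, delay_ms=0)],
    "phone_call":   [HapticPulse(0, 60, 0), HapticPulse(0, 60, 120)],
    "turn_left":    [HapticPulse(3, 50, 0), HapticPulse(2, 50, 100),
                     HapticPulse(1, 50, 200), HapticPulse(0, 50, 300)],
}

def play_pattern(notification: str) -> List[Tuple[int, int, int]]:
    """Return the (transducer, duration, delay) schedule a driver would send
    to the haptic hardware for the given notification type."""
    return [(p.transducer_id, p.duration_ms, p.delay_ms)
            for p in PATTERNS.get(notification, [])]

print(play_pattern("turn_left"))
```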
[0139] The haptic transducers or other haptic feedback elements may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic transducers may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other artificial-reality devices, within other artificial-reality devices, and/or in conjunction with other artificial-reality devices.
[0140] By providing haptic sensations, audible content, and/or visual content, the artificial-reality device may create an entire artificial experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, the artificial-reality device may assist or extend a user's perception, memory, or cognition within a particular environment. The artificial-reality device may also enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. The artificial-reality device may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's artificial-reality experience in one or more of these contexts and environments and/or in other contexts and environments.
[0141] In some cases, the user may interact with processing subsystem 710 and/or with any of the various subsystems 701-709 via tactile or motion-based movements. For instance, a user may press a button or knob or dial (either on the artificial-reality device or within an external device) to respond to a notification. In other cases, the user may perform a gesture with their hands, arms, face, eyes, or other body part. This gesture may be interpreted by processing subsystem 710 and associated software as a response to the notification. [0142] In some cases, the user may interact with the artificial-reality device just using their brain. For example, an artificial-reality device may include at least one brain-computer- interface. In one example, this brain-computer-interface may be positioned within the crown portion of the artificial-reality device. The brain-computer-interface (BCI) may be configured to detect brain signals that are translatable into user inputs. The BCI may use any of a variety of non-invasive brain activity detection techniques, including using functional near-infrared spectroscopy, using electroencephalography (EEG), using dry active electrode arrays, or using other types of brain-computer-interfaces to detect and determine a user's intentions based on their brain waves and brain wave patterns. In some cases, BCIs may be configured to indicate movement of a hand, a finger, or a limb. In other cases, BCIs may be configured to detect speech patterns and convert the detected speech patterns into written text. This text may be applied in a reply message, for example, to a text or an email. In some cases, the text may be presented on the display as the user thinks the text and as the think-to-text translator forms the words.
[0143] As detailed above, the various components of display subsystem 701 may be coupled to the artificial-reality devices described herein in a variety of ways. In some embodiments, a display may be adjusted to swing or slide down in front of the user's face. In this example, a positioning mechanism may adjustably position the display between a storage position in which the display is positioned in a location that is intended to be substantially outside of the user's field of view (e.g., closed), and a viewing position in which the display is positioned in a location that is intended to be substantially within the user's field of view (e.g., open).
[0144] In one example, an artificial-reality device may utilize or include a virtual retina display. The virtual retinal display may include virtual retina display projectors that may be configured to draw or trace a display directly onto the user's eye. A virtual retinal display (VRD), also referred to herein as a retinal scan display (RSD) or retinal projector (RP), may draw a raster display directly onto the retina of the user's eye. The user may then see what appears to be a conventional display floating in space in front of them. As with the other display devices and components described herein, the VRD projectors may be coupled to the artificial-reality device. In some embodiments, the virtual retina display projectors may incorporate vertical-cavity surface-emitting lasers or other types of lasers configured to draw images on a user's retina. Such lasers are typically powered using a relatively low power amplitude to avoid damaging the user's eyes. In cases where vertical-cavity surface-emitting lasers are incorporated, the artificial-reality device may also include a holographic grating that focuses the lasers on the user's retina.
[0145] As detailed above, display subsystem 701 and eye-tracking subsystem 703 described herein may be configured in a number of different ways and may include a variety of elements and components. FIG. 8 is an illustration of an exemplary system 800 that incorporates an eye-tracking subsystem capable of tracking a user's eye(s). As depicted in FIG. 8, system 800 may include a light source 802, an optical subsystem 804, an eye-tracking subsystem 806, and/or a control subsystem 808. In some examples, light source 802 may generate light for an image (e.g., to be presented to an eye 801 of the viewer). Light source 802 may represent any of a variety of suitable devices. For example, light source 802 can include a two-dimensional projector (e.g., an LCoS (liquid crystal on silicon) display), a scanning source (e.g., a scanning laser), or other device (e.g., an LCD (liquid crystal display), an LED (light-emitting diode) display, an OLED (organic light-emitting diode) display, an active-matrix OLED display (AMOLED), a transparent OLED display (TOLED), a waveguide, or some other display capable of generating light for presenting an image to the viewer). In some examples, the image may represent a virtual image, which may refer to an optical image formed from the apparent divergence of light rays from a point in space, as opposed to an image formed from the light rays' actual divergence.
[0146] In some embodiments, optical subsystem 804 may receive the light generated by light source 802 and generate, based on the received light, converging light 820 that includes the image. In some examples, optical subsystem 804 may include any number of lenses (e.g., Fresnel lenses, convex lenses, concave lenses), apertures, filters, mirrors, prisms, and/or other optical components, possibly in combination with actuators and/or other devices. In particular, the actuators and/or other devices may translate and/or rotate one or more of the optical components to alter one or more aspects of converging light 820. Further, various mechanical couplings may serve to maintain the relative spacing and/or the orientation of the optical components in any suitable combination.
[0147] In one embodiment, eye-tracking subsystem 806 may generate tracking information indicating a gaze angle of an eye 801 of the viewer. In one example, control subsystem 808 may control aspects of optical subsystem 804 (e.g., the angle of incidence of converging light 820) based at least in part on this tracking information. Additionally, in some examples, control subsystem 808 may store and utilize historical tracking information (e.g., a history of the tracking information over a given duration, such as the previous second or fraction thereof) to anticipate the gaze angle of eye 801 (e.g., an angle between the visual axis and the anatomical axis of eye 801). In some embodiments, eye-tracking subsystem 806 may detect radiation emanating from some portion of eye 801 (e.g., the cornea, the iris, the pupil, or the like) to determine the current gaze angle of eye 801. In other examples, eye-tracking subsystem 806 may employ a wavefront sensor to track the current location of the pupil.
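As a minimal sketch of how historical tracking information might be used to anticipate the gaze angle of eye 801, the following Python fragment extrapolates the next angle from the two most recent samples under a constant-angular-velocity assumption. The class name, history length, and extrapolation rule are illustrative choices made here, not the prediction scheme used by control subsystem 808.

from collections import deque

class GazePredictor:
    """Keep a short history of (timestamp_s, gaze_angle_deg) samples and
    extrapolate the next gaze angle assuming constant angular velocity."""

    def __init__(self, history_len: int = 30):
        self.history = deque(maxlen=history_len)

    def add_sample(self, t: float, angle_deg: float) -> None:
        self.history.append((t, angle_deg))

    def predict(self, t_future: float) -> float:
        if len(self.history) < 2:
            raise ValueError("need at least two samples to extrapolate")
        (t0, a0), (t1, a1) = self.history[-2], self.history[-1]
        velocity = (a1 - a0) / (t1 - t0)        # degrees per second
        return a1 + velocity * (t_future - t1)  # constant-velocity extrapolation

predictor = GazePredictor()
predictor.add_sample(0.000, 2.0)
predictor.add_sample(0.008, 2.4)                # ~125 Hz tracking update
print(predictor.predict(0.016))                 # anticipated angle one frame ahead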
[0148] Any number of techniques can be used to track eye 801. Some techniques may involve illuminating eye 801 with infrared light and measuring reflections with at least one optical sensor that is tuned to be sensitive to the infrared light. Information about how the infrared light is reflected from eye 801 may be analyzed to determine the position(s), orientation(s), and/or motion(s) of one or more eye feature(s), such as the cornea, pupil, iris, and/or retinal blood vessels.
[0149] In some examples, the radiation captured by a sensor of eye-tracking subsystem 806 may be digitized (i.e., converted to an electronic signal). Further, the sensor may transmit a digital representation of this electronic signal to one or more processors (for example, processors associated with a device including eye-tracking subsystem 806). Eye-tracking subsystem 806 may include any of a variety of sensors in a variety of different configurations. For example, eye-tracking subsystem 806 may include an infrared detector that reacts to infrared radiation. The infrared detector may be a thermal detector, a photonic detector, and/or any other suitable type of detector. Thermal detectors may include detectors that react to thermal effects of the incident infrared radiation.
[0150] In some examples, one or more processors may process the digital representation generated by the sensor(s) of eye-tracking subsystem 806 to track the movement of eye 801. In another example, these processors may track the movements of eye 801 by executing algorithms represented by computer-executable instructions stored on non-transitory memory. In some examples, on-chip logic (e.g., an application-specific integrated circuit or ASIC) may be used to perform at least portions of such algorithms. As noted, eye-tracking subsystem 806 may be programmed to use an output of the sensor(s) to track movement of eye 801. In some embodiments, eye-tracking subsystem 806 may analyze the digital representation generated by the sensors to extract eye rotation information from changes in reflections. In one embodiment, eye-tracking subsystem 806 may use corneal reflections or glints (also known as Purkinje images) and/or the center of the eye's pupil 822 as features to track over time.
[0151] In some embodiments, eye-tracking subsystem 806 may use the center of the eye's pupil 822 and infrared or near-infrared, non-collimated light to create corneal reflections. In these embodiments, eye-tracking subsystem 806 may use the vector between the center of the eye's pupil 822 and the corneal reflections to compute the gaze direction of eye 801. In some embodiments, the disclosed systems may perform a calibration procedure for an individual (using, e.g., supervised or unsupervised techniques) before tracking the user's eyes. For example, the calibration procedure may include directing users to look at one or more points displayed on a display while the eye-tracking system records the values that correspond to each gaze position associated with each point.
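The pupil-center/corneal-reflection approach described above can be illustrated with a short Python sketch: the vector from the glint to the pupil center is mapped to on-screen gaze coordinates through an affine calibration fitted by least squares from a few fixation targets. The affine (rather than higher-order polynomial) mapping, the function names, and the toy numbers are assumptions for illustration, not the calibration procedure defined by this disclosure.

import numpy as np

def pccr_vector(pupil_center: np.ndarray, glint_center: np.ndarray) -> np.ndarray:
    """Vector from the corneal reflection (glint) to the pupil center, in pixels."""
    return pupil_center - glint_center

def fit_calibration(vectors: np.ndarray, gaze_points: np.ndarray) -> np.ndarray:
    """Least-squares affine map from pupil-glint vectors to screen coordinates.

    vectors:     (N, 2) pupil-glint vectors recorded while the user fixates targets
    gaze_points: (N, 2) known screen coordinates of those calibration targets
    """
    design = np.hstack([vectors, np.ones((len(vectors), 1))])   # rows are [vx, vy, 1]
    mapping, *_ = np.linalg.lstsq(design, gaze_points, rcond=None)
    return mapping                                              # (3, 2) matrix

def estimate_gaze(vector: np.ndarray, mapping: np.ndarray) -> np.ndarray:
    return np.append(vector, 1.0) @ mapping

# Toy calibration with three fixation targets.
vecs = np.array([[-10.0, -5.0], [0.0, 0.0], [12.0, 6.0]])
targets = np.array([[200.0, 300.0], [640.0, 400.0], [1100.0, 520.0]])
M = fit_calibration(vecs, targets)
print(estimate_gaze(pccr_vector(np.array([45.0, 30.0]), np.array([40.0, 28.0])), M))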
[0152] In some embodiments, eye-tracking subsystem 806 may use two types of infrared and/or near-infrared (also known as active light) eye-tracking techniques: bright-pupil and dark-pupil eye tracking, which may be differentiated based on the location of an illumination source with respect to the optical elements used. If the illumination is coaxial with the optical path, then eye 801 may act as a retroreflector as the light reflects off the retina, thereby creating a bright pupil effect similar to a red-eye effect in photography. If the illumination source is offset from the optical path, then the eye's pupil 822 may appear dark because the retroreflection from the retina is directed away from the sensor. In some embodiments, bright-pupil tracking may create greater iris/pupil contrast, allowing more robust eye tracking with iris pigmentation, and may feature reduced interference (e.g., interference caused by eyelashes and other obscuring features). Bright-pupil tracking may also allow tracking in lighting conditions ranging from total darkness to a very bright environment.
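Purely for illustration, the sketch below distinguishes bright-pupil from dark-pupil imaging by comparing the mean intensity inside the pupil region with that of the surrounding iris annulus. The thresholding approach, pixel values, and function name are assumptions made here rather than the technique used by eye-tracking subsystem 806, which (as noted above) depends on the geometry of the illumination source relative to the optical path.

import numpy as np

def pupil_mode(frame: np.ndarray, center: tuple[int, int], radius: int) -> str:
    """Classify an eye image as 'bright-pupil' or 'dark-pupil' by comparing the
    mean intensity inside the pupil to that of the surrounding iris annulus."""
    ys, xs = np.ogrid[:frame.shape[0], :frame.shape[1]]
    dist2 = (ys - center[1]) ** 2 + (xs - center[0]) ** 2
    pupil = frame[dist2 <= radius ** 2]
    iris = frame[(dist2 > radius ** 2) & (dist2 <= (2 * radius) ** 2)]
    return "bright-pupil" if pupil.mean() > iris.mean() else "dark-pupil"

# Synthetic 64x64 frame with a bright disk at the center (coaxial illumination).
frame = np.full((64, 64), 80, dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
frame[(yy - 32) ** 2 + (xx - 32) ** 2 <= 8 ** 2] = 220
print(pupil_mode(frame, center=(32, 32), radius=8))   # -> 'bright-pupil'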
[0153] In some embodiments, control subsystem 808 may control light source 802 and/or optical subsystem 804 to reduce optical aberrations (e.g., chromatic aberrations and/or monochromatic aberrations) of the image that may be caused by or influenced by eye 801. In some examples, as mentioned above, control subsystem 808 may use the tracking information from eye-tracking subsystem 806 to perform such control. For example, in controlling light source 802, control subsystem 808 may alter the light generated by light source 802 (e.g., by way of image rendering) to modify (e.g., pre-distort) the image so that the aberration of the image caused by eye 801 is reduced.
[0154] The disclosed systems may track both the position and relative size of the pupil (since, e.g., the pupil dilates and/or contracts). In some examples, the eye-tracking devices and components (e.g., sensors and/or sources) used for detecting and/or tracking the pupil may be different (or calibrated differently) for different types of eyes. For example, the frequency range of the sensors may be different (or separately calibrated) for eyes of different colors and/or different pupil types, sizes, and/or the like. As such, the various eye-tracking components (e.g., infrared sources and/or sensors) described herein may need to be calibrated for each individual user and/or eye.
[0155] The disclosed systems may track both eyes with and without ophthalmic correction, such as that provided by contact lenses worn by the user. In some embodiments, ophthalmic correction elements (e.g., adjustable lenses) may be directly incorporated into the artificial-reality systems described herein. In some examples, the color of the user's eye may necessitate modification of a corresponding eye-tracking algorithm. For example, eye-tracking algorithms may need to be modified based at least in part on the differing color contrast between a brown eye and, for example, a blue eye.
[0156] FIG. 9 is a more detailed illustration of various aspects of the eye-tracking subsystem illustrated in FIG. 8. As shown in this figure, an eye-tracking subsystem 900 may include at least one source 904 and at least one sensor 906. Source 904 generally represents any type or form of element capable of emitting radiation. In one example, source 904 may generate visible, infrared, and/or near-infrared radiation. In some examples, source 904 may radiate non-collimated infrared and/or near-infrared portions of the electromagnetic spectrum towards an eye 902 of a user. Source 904 may utilize a variety of sampling rates and speeds. For example, the disclosed systems may use sources with higher sampling rates in order to capture fixational eye movements of a user's eye 902 and/or to correctly measure saccade dynamics of the user's eye 902. As noted above, any type or form of eye-tracking technique may be used to track the user's eye 902, including optical-based eye-tracking techniques, ultrasound-based eye-tracking techniques, etc.
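One conventional way to measure saccade dynamics from high-sample-rate gaze data is a velocity-threshold classifier, sketched below in Python. The 30 deg/s threshold, the synthetic 500 Hz trace, and the function name are illustrative assumptions and not values taken from this disclosure.

import numpy as np

def detect_saccades(gaze_deg: np.ndarray, fs: float, velocity_threshold: float = 30.0) -> np.ndarray:
    """Boolean mask marking samples whose angular velocity exceeds the threshold.

    gaze_deg: (N, 2) horizontal/vertical gaze angles in degrees
    fs:       sampling rate in Hz (high rates are needed to resolve saccades)
    """
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs   # deg/s
    return np.concatenate([[False], velocity > velocity_threshold])

# 500 Hz toy trace: fixation, an abrupt 10-degree shift, then fixation again.
rng = np.random.default_rng(0)
t = np.arange(0, 0.2, 1 / 500.0)
horizontal = np.where(t < 0.1, 0.0, 10.0) + 0.005 * rng.standard_normal(t.size)
gaze = np.column_stack([horizontal, np.zeros_like(horizontal)])
print(np.where(detect_saccades(gaze, fs=500.0))[0])   # index of the saccadic sample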
[0157] Sensor 906 generally represents any type or form of element capable of detecting radiation, such as radiation reflected off the user's eye 902. Examples of sensor 906 include, without limitation, a charge coupled device (CCD), a photodiode array, a complementary metal-oxide-semiconductor (CMOS) based sensor device, and/or the like. In one example, sensor 906 may represent a sensor having predetermined parameters, including, but not limited to, a dynamic resolution range, linearity, and/or other characteristic selected and/or designed specifically for eye tracking.
[0158] As detailed above, eye-tracking subsystem 900 may generate one or more glints. As detailed above, a glint 903 may represent reflections of radiation (e.g., infrared radiation from an infrared source, such as source 904) from the structure of the user's eye. In various embodiments, glint 903 and/or the user's pupil may be tracked using an eye-tracking algorithm executed by a processor (either within or external to an artificial reality device). For example, an artificial-reality device may include a processor and/or a memory device in order to perform eye tracking locally and/or a transceiver to send and receive the data necessary to perform eye tracking on an external device (e.g., a mobile phone, cloud server, or other computing device).
[0159] FIG. 9 shows an example image 905 captured by an eye-tracking subsystem, such as eye-tracking subsystem 900. In this example, image 905 may include both the user's pupil 908 and a glint 910 near the same. In some examples, pupil 908 and/or glint 910 may be identified using an artificial-intelligence-based algorithm, such as a computer-vision-based algorithm. In one embodiment, image 905 may represent a single frame in a series of frames that may be analyzed continuously in order to track the eye 902 of the user. Further, pupil 908 and/or glint 910 may be tracked over a period of time to determine a user's gaze.
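As a toy example of the kind of computer-vision processing described above, the following sketch locates a dark pupil and a bright corneal glint in a synthetic grayscale frame by intensity thresholding and centroiding. A deployed system would use far more robust (possibly learned) detectors; the thresholds, image values, and function names here are assumptions for illustration only.

import numpy as np

def centroid(mask: np.ndarray) -> tuple[float, float]:
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())            # (x, y) in pixels

def find_pupil_and_glint(frame: np.ndarray) -> tuple[tuple, tuple]:
    """Locate the dark pupil and the bright corneal glint by simple thresholding."""
    pupil_mask = frame < 40    # pupil: darkest pixels (dark-pupil imaging assumed)
    glint_mask = frame > 230   # glint: near-saturated specular reflection
    return centroid(pupil_mask), centroid(glint_mask)

# Synthetic frame: mid-gray iris, dark pupil disk, one small bright glint patch.
frame = np.full((120, 160), 120, dtype=np.uint8)
yy, xx = np.ogrid[:120, :160]
frame[(yy - 60) ** 2 + (xx - 80) ** 2 <= 15 ** 2] = 20    # pupil 908
frame[55:58, 90:93] = 250                                 # glint 910
pupil_xy, glint_xy = find_pupil_and_glint(frame)
print(pupil_xy, glint_xy)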
[0160] The above-described eye-tracking subsystems can be incorporated into one or more of the various artificial reality systems described herein in a variety of ways. For example, one or more of the various components of system 800 and/or eye-tracking subsystem 900 may be incorporated into augmented-reality system 400 in FIG. 4 and/or virtual-reality system 500 in FIG. 5 to enable these systems to perform various eye-tracking tasks (including one or more of the eye-tracking operations described herein).
[0161] In view of the discussion above in conjunction with FIGS. 1-9, a handheld controller capable of sensing levels or variations of physical pressure on the controller may facilitate more complex and/or nuanced input from a user of the controller. In some examples, such as in artificial-reality systems, input that includes sensing of pressure imposed by the user on the controller may be interpreted as manipulation (e.g., squeezing, pinching, rotation, or the like) of a virtual object of an artificial environment being presented to the user. In other embodiments, the levels or variations of pressure may be employed to navigate menus, alter input slider positions, or provide other types of input that may be more difficult to provide via other types of input device mechanisms.
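As a hedged sketch (not the mapping defined by this disclosure) of how a host application might consume the reported pressure level together with the trigger state, the Python fragment below normalizes the pressure reading and routes it either to a squeeze applied to a grabbed virtual object or to a slider value. The scaling constant, the combination rule, and the function name are assumptions chosen only to make the idea concrete.

def interpret_thumb_pressure(pressure: float, trigger_pressed: bool,
                             max_pressure: float = 10.0) -> dict:
    """Map a raw thumb-pressure reading to a higher-level input.

    pressure:        level reported by the pressure sensor (arbitrary units)
    trigger_pressed: state of the index-finger trigger (the second input)
    """
    level = max(0.0, min(pressure / max_pressure, 1.0))   # normalize to [0, 1]
    if trigger_pressed:
        # Trigger held: treat graded pressure as squeezing/pinching the grabbed object.
        return {"action": "squeeze_virtual_object", "amount": level}
    # Trigger released: treat graded pressure as driving a slider or menu value.
    return {"action": "set_slider", "value": level}

print(interpret_thumb_pressure(3.5, trigger_pressed=True))    # graded pinch input
print(interpret_thumb_pressure(7.0, trigger_pressed=False))   # slider at 0.7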
[0162] Example Embodiments
[0163] Example 1: A handheld controller may include (1) a body including (a) a grasping region configured to be grasped by a hand and (b) an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region and (2) a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input.
[0164] Example 2: The handheld controller of Example 1, where the handheld controller may further include a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, where the trigger button detects whether the trigger button has been activated for interpretation as a second input.
[0165] Example 3: The handheld controller of Example 2, where the handheld controller may further include a haptic actuator coupled with the trigger button and that provides haptic feedback to the index finger based on an output.
[0166] Example 4: The handheld controller of any of Examples 1-3, where the pressure sensor may include a static pressure sensor that senses depression of the input surface region.
[0167] Example 5: The handheld controller of any of Examples 1-3, where the pressure sensor may include a zero-movement sensor.
[0168] Example 6: The handheld controller of any of Examples 1-3, where the handheld controller may further include a haptic actuator coupled to the input surface region and that provides haptic feedback to the thumb based on an output.
[0169] Example 7: The handheld controller of any of Examples 1-3, where the handheld controller may further include a capacitive sensor coupled to the input surface region and configured to detect a touching with the input surface region for interpretation as a second input.
[0170] Example 8: The handheld controller of any of Examples 1-3, where the handheld controller may further include a capacitive sensor that detects a touched location on the input surface region for interpretation as a second input.
[0171] Example 9: The handheld controller of any of Examples 1-3, where the handheld controller may further include an input button coupled to the body outside the input surface region and configured to be engaged by the thumb while the hand grasps the grasping region, where the input button indicates whether the input button has been engaged for interpretation as a second input.
[0172] Example 10: A system may include (1) a display that presents a virtual object in an artificial environment, (2) a processor that processes a plurality of inputs for manipulating the virtual object, and (3) a handheld controller including (a) a body including a grasping region configured to be grasped by a hand and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region and (b) a pressure sensor mechanically coupled with the input surface region, where the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region, and where the processor processes the level of pressure as a first input of the plurality of inputs.
[0173] Example 11: The system of Example 10, where the handheld controller may further include a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand where (1) the trigger button indicates whether the trigger button has been activated and (2) the processor processes whether the trigger button has been activated as a second input of the plurality of inputs.
[0174] Example 12: The system of either Example 10 or Example 11, where the processor may interpret a combination of the first input and the second input as a manipulation of the virtual object.
[0175] Example 13: The system of Example 12, where the manipulation of the virtual object may include a pinching action imposed on the virtual object.
[0176] Example 14: The system of either Example 10 or Example 11, where the handheld controller may further include a capacitive sensor coupled to the input surface region, where (a) the capacitive sensor detects a touched location of the input surface region and (b) the processor processes a representation of the touched location as a third input of the plurality of inputs.
[0177] Example 15: The system of Example 14, where the processor may interpret the third input as a navigation of a menu presented by the display.
[0178] Example 16: The system of Example 14, where the processor may interpret a combination of the first input, the second input, and the third input as a manipulation of the virtual object.
[0179] Example 17: The system of Example 16, where the manipulation of the virtual object may include a rotational action imposed on the virtual object.
[0180] Example 18: A method may include (1) detecting, by a pressure sensor mechanically coupled to an input surface region of a body of a handheld controller, a level of pressure imposed on the input surface region, where (a) the body further includes a grasping region configured to be grasped by a hand and (b) the input surface region is configured to be engaged by a thumb of the hand while the hand grasps the grasping region and (2) interpreting, by a processor, the level of pressure as a first input.
[0181] Example 19: The method of Example 18, where the method may further include (1) detecting, by a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, whether the trigger button has been activated and (2) interpreting, by the processor, whether the trigger button has been activated as a second input.
[0182] Example 20: The method of Example 19, where the method may further include (1) presenting, by a display, a virtual object in an artificial environment and (2) interpreting, by the processor, a combination of the first input and the second input as a manipulation of the virtual object.
[0183] As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each include at least one memory device and at least one physical processor.
[0184] In some examples, the term "memory device" generally refers to any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. Examples of memory devices include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
[0185] In some examples, the term "physical processor" generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors include, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
[0186] In some embodiments, the term "computer-readable medium" generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media include, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
[0187] The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
[0188] The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the scope of the claims. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the present disclosure.
[0189] Unless otherwise noted, the terms "connected to" and "coupled to" (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms "a" or "an," as used in the specification and claims, are to be construed as meaning "at least one of." Finally, for ease of use, the terms "including" and "having" (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word "comprising."

Claims

CLAIMS:
1. A handheld controller comprising: a body comprising: a grasping region configured to be grasped by a hand; and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and a pressure sensor coupled with the input surface region, wherein the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region for interpretation as a first input.
2. The handheld controller of claim 1, further comprising: a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, wherein the trigger button detects whether the trigger button has been activated for interpretation as a second input.
3. The handheld controller of claim 2, further comprising: a haptic actuator coupled with the trigger button and that provides haptic feedback to the index finger based on an output.
4. The handheld controller of claim 1, claim 2 or claim 3, wherein the pressure sensor comprises a static pressure sensor that senses depression of the input surface region.
5. The handheld controller of any one of claims 1 to 4, wherein the pressure sensor comprises a zero-movement sensor.
6. The handheld controller of any one of the preceding claims, further comprising: a haptic actuator coupled to the input surface region and that provides haptic feedback to the thumb based on an output.
7. The handheld controller of any one of the preceding claims, further comprising a capacitive sensor coupled to the input surface region and configured to detect a touching with the input surface region for interpretation as a second input.
8. The handheld controller of any one of the preceding claims, further comprising a capacitive sensor that detects a touched location on the input surface region for interpretation as a second input.
9. The handheld controller of any one of the preceding claims, further comprising an input button coupled to the body outside the input surface region and configured to be engaged by the thumb while the hand grasps the grasping region, wherein the input button indicates whether the input button has been engaged for interpretation as a second input.
10. A system comprising: a display that presents a virtual object in an artificial environment; a processor that processes a plurality of inputs for manipulating the virtual object; and a handheld controller comprising: a body comprising: a grasping region configured to be grasped by a hand; and an input surface region configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and a pressure sensor coupled with the input surface region, wherein the pressure sensor indicates a level of pressure imposed by the thumb on the input surface region, and wherein the processor processes the level of pressure as a first input of the plurality of inputs.
11. The system of claim 10, wherein the handheld controller further comprises: a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand, wherein: the trigger button indicates whether the trigger button has been activated; and the processor processes whether the trigger button has been activated as a second input of the plurality of inputs.
12. The system of claim 11, wherein the processor interprets a combination of the first input and the second input as a manipulation of the virtual object; and preferably wherein the manipulation of the virtual object comprises a pinching action imposed on the virtual object.
13. The system of claim 11 or claim 12, wherein the handheld controller further comprises a capacitive sensor coupled to the input surface region, wherein: the capacitive sensor detects a touched location of the input surface region; and the processor processes a representation of the touched location as a third input of the plurality of inputs; and preferably: i. wherein the processor interprets the third input as a navigation of a menu presented by the display; or ii. wherein the processor interprets a combination of the first input, the second input, and the third input as a manipulation of the virtual object; and preferably wherein the manipulation of the virtual object comprises a rotational action imposed on the virtual object.
14. A method comprising: detecting, by a pressure sensor coupled to an input surface region of a body of a handheld controller, a level of pressure imposed on the input surface region, wherein: the body further comprises a grasping region configured to be grasped by a hand; and the input surface region is configured to be engaged by a thumb of the hand while the hand grasps the grasping region; and interpreting, by a processor, the level of pressure as a first input.
15. The method of claim 14, further comprising: detecting, by a trigger button coupled to the grasping region and configured to be engaged by an index finger of the hand while the hand grasps the grasping region, whether the trigger button has been activated; and interpreting, by the processor, whether the trigger button has been activated as a second input; and preferably further comprising: presenting, by a display, a virtual object in an artificial environment; and interpreting, by the processor, a combination of the first input and the second input as a manipulation of the virtual object.
EP21727684.9A 2020-05-04 2021-05-01 Handheld controller with thumb pressure sensing Pending EP4330796A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202016866322A 2020-05-04 2020-05-04
PCT/US2021/030373 WO2022235250A1 (en) 2020-05-04 2021-05-01 Handheld controller with thumb pressure sensing

Publications (1)

Publication Number Publication Date
EP4330796A1 true EP4330796A1 (en) 2024-03-06

Family

ID=76076481

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21727684.9A Pending EP4330796A1 (en) 2020-05-04 2021-05-01 Handheld controller with thumb pressure sensing

Country Status (3)

Country Link
EP (1) EP4330796A1 (en)
CN (1) CN117377927A (en)
WO (1) WO2022235250A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3686686B2 (en) * 1993-05-11 2005-08-24 Matsushita Electric Industrial Co., Ltd. Haptic device, data input device, and data input device device
US10691233B2 (en) * 2016-10-11 2020-06-23 Valve Corporation Sensor fusion algorithms for a handheld controller that includes a force sensing resistor (FSR)

Also Published As

Publication number Publication date
CN117377927A (en) 2024-01-09
WO2022235250A1 (en) 2022-11-10

Similar Documents

Publication Publication Date Title
US11042221B2 (en) Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures
US11039651B1 (en) Artificial reality hat
US11467670B2 (en) Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures
US11176367B1 (en) Apparatuses, systems, and methods for mapping a surface of an eye via an event camera
EP4121838A1 (en) Apparatus, system, and method for wrist tracking and gesture detection via time of flight sensors
US20240095948A1 (en) Self-tracked controller
US11715331B1 (en) Apparatuses, systems, and methods for mapping corneal curvature
US11120258B1 (en) Apparatuses, systems, and methods for scanning an eye via a folding mirror
US20230037329A1 (en) Optical systems and methods for predicting fixation distance
WO2023147038A1 (en) Systems and methods for predictively downloading volumetric data
US20230053497A1 (en) Systems and methods for performing eye-tracking
US11579704B2 (en) Systems and methods for adaptive input thresholding
US20230043585A1 (en) Ultrasound devices for making eye measurements
US11334157B1 (en) Wearable device and user input system with active sensing for computing devices and artificial reality environments
EP4330796A1 (en) Handheld controller with thumb pressure sensing
US11789544B2 (en) Systems and methods for communicating recognition-model uncertainty to users
JP2024516755A (en) HANDHELD CONTROLLER WITH THUMB PRESSURE SENSING - Patent application
US11722137B1 (en) Variable-distance proximity detector
US11726552B1 (en) Systems and methods for rendering a trigger finger
WO2023023299A1 (en) Systems and methods for communicating model uncertainty to users
US20230067343A1 (en) Tunable transparent antennas implemented on lenses of augmented-reality devices
US20240130681A1 (en) Electrode placement calibration
US20220236795A1 (en) Systems and methods for signaling the onset of a user's intent to interact
US20230411932A1 (en) Tunable laser array
WO2023023206A1 (en) Systems and methods for performing eye-tracking

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230628

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR