CN118056176A - Apparatus, system, and method for detecting user input via hand gestures and arm motions - Google Patents

Apparatus, system, and method for detecting user input via hand gestures and arm motions

Info

Publication number
CN118056176A
Authority
CN
China
Legal status
Pending
Application number
CN202280067672.7A
Other languages
Chinese (zh)
Inventor
孔祥宇
弗尔·桑德·林
Current Assignee
Meta Platforms Technologies LLC
Original Assignee
Meta Platforms Technologies LLC
Priority date
Filing date
Publication date
Priority claimed from US 17/705,899 (US 11662815 B2)
Application filed by Meta Platforms Technologies LLC
Priority claimed from PCT/US2022/044648 (WO 2023/059458 A1)
Publication of CN118056176A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

An artificial reality system, comprising: (1) a wearable device sized to be worn on a body part of a user, wherein the wearable device comprises: (A) a set of electrodes that detect one or more neuromuscular signals via the body part of the user; and (B) a transmitter that transmits an electromagnetic signal; (2) a head mounted display communicatively coupled to the wearable device, wherein the head mounted display includes a set of receivers that receive the electromagnetic signal; and (3) one or more processing devices that: (A) determine that the user made a particular gesture based at least in part on the one or more neuromuscular signals, and (B) determine a position of the body part of the user at the time the user made the particular gesture based at least in part on the electromagnetic signal. Various other devices, systems, and methods are also disclosed.

Description

Apparatus, system, and method for detecting user input via hand gestures and arm motions
Cross Reference to Related Applications
The present application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 63/253,667, filed October 8, 2021, and to U.S. Application No. 17/705,899, filed March 28, 2022, the contents of both of which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to devices, systems, and methods for detecting user input via hand gestures and arm motions. As will be explained in more detail below, these devices, systems, and methods may provide a number of functions and benefits.
Background
Artificial reality generally provides a rich, immersive experience in which a user can interact with virtual objects and/or virtual environments in a variety of ways. In this context, artificial reality may constitute a form of reality that has been altered by virtual objects presented to a user. Such artificial reality may include and/or represent virtual reality, augmented reality, mixed reality, hybrid reality, or some combination and/or variation of one or more of the above.
While artificial reality systems are typically implemented for gaming and other entertainment purposes, such systems are also implemented for purposes other than entertainment. For example, governments may use them for military training simulations, doctors may use them to practice surgery, engineers may use them as visualization aids, and colleagues may use them to facilitate interpersonal interaction and collaboration from around the globe.
Some artificial reality systems may incorporate manually operated controls that enable a user to enter input capable of modifying their artificial reality experience. Unfortunately, these manually operated controls may limit the mobility and/or movement of the user, particularly when the user attempts to provide input via hand movements and/or gestures. To address these limitations, some artificial reality systems may include wearable devices capable of sensing certain movements, actions, and/or gestures made by a user. However, sensing other movements, actions, and/or gestures via such wearable devices may prove challenging and/or impractical.
For example, some wearable devices may not accurately detect and/or track the distance and/or location of a body part located in the vicinity of the wearable device. Additionally or alternatively, some wearable devices may not be able to translate the distance and/or location of such body parts into virtual components that are presented to the user via a head-mounted display (e.g., augmented reality glasses), let alone control the head-mounted display via such body parts. Accordingly, the present disclosure recognizes and addresses the need for additional systems and methods for detecting user input via hand gestures and arm motions.
Disclosure of Invention
As will be described in more detail below, the artificial reality system may include and/or represent wearable devices (e.g., wrist bands and/or watches) and/or head mounted displays (e.g., augmented reality glasses) communicatively coupled to each other. In one example, the wearable device may be sized to be worn on a body part (e.g., wrist) of a user of the artificial reality system. In this example, the wearable device may include and/or represent: (1) a set of electrodes (e.g., electromyography (EMG) sensors) that detect one or more neuromuscular signals via the body part of the user; and (2) a transmitter (e.g., an ultra-wideband radio) that transmits an electromagnetic signal.
In some examples, the head mounted display may include and/or represent a set of receivers that receive the electromagnetic signals transmitted by the transmitter included on the wearable device. In such examples, the artificial reality system may include and/or represent one or more processing devices that: (1) determine that the user made a particular gesture based at least in part on the one or more neuromuscular signals detected via the body part of the user; and (2) determine a position of the body part of the user at the time the user makes the particular gesture based at least in part on the electromagnetic signals received by the set of receivers included on the head mounted display.
In some examples, at least one of the one or more processing devices may be incorporated into a wearable device. Additionally or alternatively, at least one of the one or more processing devices may be incorporated into a head mounted display.
In some examples, the user's hand gestures and/or arm motions may act as and/or serve as a user interface for the artificial reality system. For example, the user's hand gestures and/or arm motions may generate control signals that are translated into commands for the artificial reality system.
In some examples, the wearable device may include and/or represent EMG sensors that detect and/or measure muscle activity and/or patterns. In one example, the artificial reality system may include and/or represent Bluetooth radios that facilitate configuring and/or pairing the wearable device and/or the head mounted display. Additionally or alternatively, the Bluetooth radio may send EMG data from the wearable device to the head-mounted display.
In some examples, the wearable device may include and/or represent one or more ultra-wideband pulse radios that provide and/or transmit accurately time-stamped pulse signals to the head-mounted display for angle-of-arrival calculation. In such examples, the head mounted display may include and/or represent an ultra-wideband antenna array (e.g., 2 antennas, 3 antennas, 4 antennas, or more) that receives the pulse signals. The head mounted display may identify and/or detect the arrival times of the pulse signals received by the ultra-wideband antennas. The head mounted display may then calculate and/or estimate the different path times of each pulse signal relative to the ultra-wideband antenna array based at least in part on the arrival times and the time stamps. In one example, the head-mounted display may then convert the arrival times and/or path times of the pulse signals into an angle of arrival that corresponds to and/or represents the position of the wearable device within the defined field of view of the head-mounted display (e.g., as a 2-dimensional representation and/or a 3-dimensional representation). In this example, the accuracy and/or precision of the angle of arrival may increase with the number of antennas included in the array. The head mounted display may then generate a pointer and/or superimpose the pointer on top of the augmented reality presentation provided to and/or viewed by the user.
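For illustration only, the following Python sketch shows one way such an angle-of-arrival estimate could be derived from timestamped pulse arrival times at a two-antenna array. The antenna spacing, timing values, and function names are assumptions made for this sketch rather than details taken from this disclosure.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def angle_of_arrival(tx_timestamp: float,
                     arrival_rx1: float,
                     arrival_rx2: float,
                     antenna_spacing: float) -> float:
    """Estimate the angle of arrival (in radians) of a pulse at a two-antenna array.

    The transmitter embeds tx_timestamp in the pulse; each receiving antenna
    records its own arrival time. The difference between the two path times
    corresponds to a path-length difference of c * dt, which maps to an angle
    of arrival via arcsin.
    """
    path_time_1 = arrival_rx1 - tx_timestamp   # travel time to antenna 1
    path_time_2 = arrival_rx2 - tx_timestamp   # travel time to antenna 2
    path_difference = SPEED_OF_LIGHT * (path_time_1 - path_time_2)
    # Clamp to the valid arcsin domain to guard against timing noise.
    ratio = max(-1.0, min(1.0, path_difference / antenna_spacing))
    return math.asin(ratio)


# Example: antennas 5 cm apart; the pulse reaches antenna 1 roughly 83 ps later
# than antenna 2, which corresponds to an angle of arrival near 30 degrees.
theta = angle_of_arrival(tx_timestamp=0.0,
                         arrival_rx1=3.3333e-9 + 8.33e-11,
                         arrival_rx2=3.3333e-9,
                         antenna_spacing=0.05)
print(f"estimated angle of arrival: {math.degrees(theta):.1f} degrees")
```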
In some examples, an ultra-wideband pulse radio incorporated into a wearable device may wirelessly transmit accurate time stamp data and/or EMG signal data to an ultra-wideband antenna array incorporated into a head mounted display. In these examples, if the wearable device is located within the field of view of the head mounted display, the head mounted display may activate and/or generate a pointer and/or cursor for display and/or presentation to the user. In one example, the head mounted display may determine a hand gesture performed by the user based at least in part on the EMG signal data. In this example, the hand gestures may correspond to and/or represent commands and/or computer readable instructions for an artificial reality system.
In some examples, the head mounted display may determine an appropriate 2-dimensional and/or 3-dimensional location or position of the pointer and/or cursor within the field of view based at least in part on the angle of arrival. By combining the EMG signal data and the angle-of-arrival data, the head mounted display may be able to create a control mechanism and/or user interface that enables the user to control and/or interact with virtual features displayed to the user without touching the head mounted display or even the wearable device.
In one aspect of the present invention, there is provided an artificial reality system including: a wearable device sized to be worn on a body part of a user, wherein the wearable device comprises: a set of electrodes that detect one or more neuromuscular signals via the body part of the user; and a transmitter that transmits an electromagnetic signal; a head mounted display communicatively coupled to the wearable device, wherein the head mounted display includes a set of receivers that receive the electromagnetic signals transmitted by the transmitter included on the wearable device; and one or more processing devices that: determine that the user made a particular gesture based at least in part on the one or more neuromuscular signals detected via the body part of the user; and determine a position of the body part of the user at the time the user makes the particular gesture based at least in part on the electromagnetic signals received by the set of receivers included on the head mounted display.
At least one of the one or more processing devices may be incorporated into a wearable device.
At least one of the one or more processing devices may be incorporated into a head mounted display.
The wearable device may include a first Bluetooth radio, and the head mounted display may include a second Bluetooth radio communicatively coupled to the first Bluetooth radio. The first Bluetooth radio and the second Bluetooth radio may be configured to exchange configuration data between the wearable device and the head mounted display.
The first Bluetooth radio and the second Bluetooth radio may be further configured to exchange data about the one or more neuromuscular signals between the wearable device and the head-mounted display.
At least one of the one or more processing devices may generate an input command based at least in part on the data regarding the one or more neuromuscular signals, the input command causing the head mounted display to modify at least one virtual component to address a particular gesture.
The transmitter may incorporate a time stamp into the electromagnetic signal prior to transmitting the electromagnetic signal to the set of receivers.
At least one of the one or more processing devices may: determining a first time of arrival of an electromagnetic signal received by a first receiver included in the set of receivers; determining a second time of arrival of the electromagnetic signal received by a second receiver included in the set of receivers; and calculating an angle of arrival of the electromagnetic signal relative to the set of receivers based at least in part on the first and second times of arrival of the electromagnetic signal and the time stamp.
At least one of the one or more processing devices may: calculating at least one dimension of a position of the virtual component within a field of view of the head mounted display based at least in part on the angle of arrival; and rendering the virtual component at a location within a field of view of the head mounted display based at least in part on the at least one dimension.
The at least one dimension of the calculated position of the virtual component may include at least one of: an azimuth of a virtual component to be presented within a field of view of the head mounted display; elevation angle of a virtual component to be presented within a field of view of a head mounted display; or the depth of the virtual component to be rendered within the field of view of the head mounted display.
At least one of the one or more processing devices may: determining a first phase of an electromagnetic signal received by a first receiver included in the set of receivers; determining a second phase of the electromagnetic signal received by a second receiver included in the set of receivers; and calculating an angle of arrival of the electromagnetic signal relative to the set of receivers based at least in part on a difference between the first phase and the second phase of the electromagnetic signal and the timestamp.
At least one of the one or more processing devices may: calculating a two-dimensional position of the virtual component within a field of view of the head mounted display based at least in part on the angle of arrival; and presenting the virtual component at the two-dimensional location within the field of view of the head mounted display.
At least one of the one or more processing devices may: calculating a three-dimensional position of the virtual component within a field of view of the head-mounted display based at least in part on the angle of arrival; and presenting the virtual component at the three-dimensional location within the field of view of the head mounted display.
The virtual component presented at the location may include a pointer presented at the location; and at least one of the one or more processing devices may superimpose the pointer on a screen of the head mounted display.
At least one of the one or more processing devices may generate an input command based at least in part on the data regarding the one or more neuromuscular signals, the input command causing the head mounted display to modify at least one additional virtual component presented near a pointer within a field of view of the head mounted display to address a particular gesture.
At least one of the one or more processing devices may: determining, based at least in part on the angle of arrival, that the wearable device is no longer visible within a field of view of the head-mounted display; and removing the virtual component from the field of view of the head-mounted display in response to determining that the wearable device is no longer visible within the field of view of the head-mounted display.
The artificial reality system may further include an additional wearable device sized to be worn on an additional body part of the user, wherein the additional wearable device may include: an additional set of electrodes that can detect one or more additional neuromuscular signals via the additional body part of the user; and at least one additional transmitter that can transmit additional electromagnetic signals; wherein: the head mounted display may also be communicatively coupled to the additional wearable device, wherein the set of receivers receives the additional electromagnetic signals sent by the additional transmitter included on the additional wearable device; and at least one of the one or more processing devices may: determine that the user made an additional gesture based at least in part on the one or more additional neuromuscular signals detected via the additional body part of the user; and determine a position of the additional body part of the user at the time the user makes the additional gesture based at least in part on the additional electromagnetic signals received by the set of receivers included on the head mounted display.
At least one of the one or more processing devices may: calculating at least one dimension of a position of the virtual component within a field of view of the head mounted display based at least in part on the electromagnetic signals; calculating at least one additional dimension of an additional position of the additional virtual component within the field of view of the head mounted display based at least in part on the additional electromagnetic signals; and rendering the virtual component at the location and the additional virtual component at the additional location within the field of view of the head mounted display based at least in part on the at least one dimension and the at least one additional dimension.
In one aspect of the present invention, there is provided a head mounted display comprising: a set of receivers configured to receive electromagnetic signals transmitted by a transmitter included on a wearable device sized to be worn on a body part of a user; a radio configured to receive data regarding one or more neuromuscular signals detected by the wearable device via the body part of the user; and at least one processing device communicatively coupled to the set of receivers and the radio, wherein the at least one processing device: determines that the user made a particular gesture based at least in part on the data regarding the one or more neuromuscular signals detected via the body part of the user; and determines a position of the body part of the user at the time the user makes the particular gesture based at least in part on the electromagnetic signals received by the set of receivers included on the head mounted display.
In one aspect of the invention, a method is provided, the method comprising: detecting, by a wearable device worn on a body part of a user, one or more neuromuscular signals at the body part of the user; transmitting, by a transmitter included on the wearable device, an electromagnetic signal; receiving, by a set of receivers included on a head mounted display worn by a user, electromagnetic signals sent by a transmitter included on a wearable device; determining, by the one or more processing devices, that a particular gesture was made by the user based at least in part on the one or more neuromuscular signals; and determining, by the one or more processing devices, a position of the body part of the user at the time the user makes the particular gesture based at least in part on the electromagnetic signal.
Drawings
The accompanying drawings illustrate various exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the present disclosure.
Fig. 1 is a diagram of an artificial reality system for detecting user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
FIG. 2 is an illustration of an example wearable device that facilitates detection of user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
FIG. 3 is an illustration of an exemplary head-mounted display that facilitates detecting user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
Fig. 4 is an illustration of an example implementation of an artificial reality system for detecting user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
Fig. 5 is an illustration of an example implementation of an artificial reality system for detecting user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
Fig. 6 is an illustration of an example implementation of a wearable device that facilitates detecting user input via hand gestures and arm motions, in accordance with one or more embodiments of the present disclosure.
Fig. 7 is a diagram of exemplary neuromuscular signals detected by a wearable device in connection with user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
Fig. 8 is an illustration of an exemplary angle-of-arrival calculation for determining user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
Fig. 9 is an illustration of an exemplary viewpoint implementation of an artificial reality system for detecting user input via hand gestures and arm motions, in accordance with one or more embodiments of the present disclosure.
FIG. 10 is an illustration of an exemplary spherical coordinate system for converting user input to a display screen of a head-mounted display in accordance with one or more embodiments of the present disclosure.
FIG. 11 is a flowchart of an exemplary method for detecting user input via hand gestures and arm motions in accordance with one or more embodiments of the present disclosure.
Fig. 12 is an illustration of exemplary augmented reality glasses that may be used in connection with embodiments of the present disclosure.
Fig. 13 is an illustration of an exemplary virtual reality headset that may be used in connection with various embodiments of the present disclosure.
FIG. 14 is an illustration of an exemplary haptic device that can be used in connection with various embodiments of the present disclosure.
Fig. 15 is an illustration of an exemplary virtual reality environment, according to various embodiments of the disclosure.
Fig. 16 is an illustration of an exemplary augmented reality environment according to various embodiments of the present disclosure.
Fig. 17A and 17B are illustrations of an exemplary human-machine interface configured to be worn around a user's forearm or wrist.
Fig. 18A and 18B are illustrations of exemplary schematic diagrams of various internal components of a wearable system.
While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the present disclosure covers all modifications, combinations, equivalents, and alternatives falling within the scope of this disclosure.
Detailed Description
Detailed descriptions of exemplary devices, systems, components, and corresponding embodiments for detecting user input via hand gestures and arm motions are provided below with reference to fig. 1-10. In addition, a method for detecting user input via hand gestures and arm motions is described in detail in connection with fig. 11. The discussion corresponding to fig. 12-18 will provide a detailed description of various exemplary artificial reality devices, wearable devices, and/or associated systems that may support and/or facilitate detection of user input via hand gestures and arm motions.
Fig. 1 illustrates an example artificial reality system 100 that includes and/or represents wearable devices 102 and/or head mounted displays 104 capable of communicating with each other. As shown in fig. 1, wearable device 102 may include and/or represent processor 120 (1), radio 112 (1), a set of electrodes 116, and/or transmitter 114. In some examples, the head mounted display 104 may include and/or represent the processor 120 (2), the radio 112 (2), the set of receivers 118, the camera 128, the display screen 110, and/or the power supply 122.
In some examples, wearable device 102 may refer to and/or represent any type or form of computing device worn as part of a piece of apparel, accessory, and/or implant. In one example, wearable device 102 may include and/or represent a wristband secured to and/or worn by a user's wrist. Additional examples of wearable device 102 include, but are not limited to, an arm band, a pendant, a bracelet, a ring, jewelry, an ankle band, apparel, a smart watch, an electronic textile, shoes, clips, headbands, gloves, variations or combinations of one or more of the foregoing, and/or any other suitable wearable device.
In some examples, head mounted display 104 may refer to and/or represent any type of display and/or visual device that is worn on and/or mounted to a user's head or face. In one example, the head-mounted display 104 may include and/or represent a pair of augmented reality (AR) glasses designed to be worn on and/or secured to a user's head or face. As shown in fig. 3, the head mounted display 104 may include and/or contain a display screen 110 serving as a lens and/or corresponding partially see-through component of such AR glasses. In this example, the head mounted display 104 may include and/or contain cameras 128 (1) and 128 (2) directed toward and/or aligned with the user's line of sight and/or field of view. In another example, the head mounted display 104 may include and/or represent a virtual reality head mounted device and/or any other suitable type or form of artificial reality head mounted device.
In some examples, wearable device 102 and/or head mounted display 104 may implement and/or establish one or more links, connections, and/or channels for communicating with each other. For example, wearable device 102 and head mounted display 104 may be capable of communicating with each other via transmitter 114 and receiver 118, respectively. In this example, the transmitter 114 and the receiver 118 may enable, support, facilitate, and/or establish ultra-wideband impulse radio (UWB-IR) communications 140 between the wearable device 102 and the head-mounted display 104. Additionally or alternatively, wearable device 102 and head mounted display 104 may be capable of communicating with each other via radio 112 (1) and radio 112 (2), respectively. In this example, radio 112 (1) and radio 112 (2) may enable, support, facilitate, and/or establish radio communications 138 (e.g., Bluetooth communications) between wearable device 102 and head-mounted display 104.
In some examples, each electrode 116 may constitute and/or represent any type or form of electrical conductor capable of detecting and/or sensing neuromuscular signals via the body of a user. In one example, the electrodes 116 may include and/or represent neuromuscular and/or electromyography (EMG) sensors arranged, configured, and/or disposed circumferentially around the wearable device 102 as shown in fig. 2. Additional examples of electrodes 116 include, but are not limited to, mechanomyography (MMG) sensors, sonomyography (SMG) sensors, combinations or variations of one or more of the above, and/or any other suitable electrodes. Any suitable number and/or arrangement of electrodes 116 may be applied to wearable device 102.
In some examples, the electrodes 116 may be communicatively coupled to each other and/or to the processor 120 (1) through flexible electronics, connectors, traces, and/or wiring. Additionally or alternatively, the electrode 116 may be integrated with and/or into the elastic band and/or wristband of the wearable device 102.
In some examples, the electrodes 116 may be arranged on the wearable device 102 in a particular and/or contemplated configuration. In one example, the electrodes 116 may be separated from one another and/or spaced apart from one another along the wearable device 102 by one or more known distances.
In some embodiments, the output of one or more of the electrodes 116 may be processed, amplified, rectified, and/or filtered via hardware signal processing circuitry. Additionally or alternatively, the output of one or more of the electrodes 116 may be processed, amplified, rectified, and/or filtered via signal processing software or firmware. Thus, the processing of neuromuscular signals may be performed in hardware, software and/or firmware.
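As a rough, non-limiting illustration of this kind of signal conditioning, the sketch below band-pass filters, rectifies, and smooths a single electrode channel. The band edges, sampling rate, and use of NumPy/SciPy are assumptions made for the sketch, not details taken from this disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def condition_emg(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Band-pass filter, rectify, and smooth one raw EMG channel.

    raw: 1-D array of electrode samples; fs: sampling rate in Hz (assumed).
    """
    # Much of the surface-EMG energy lies roughly between 20 Hz and 450 Hz (assumed band).
    b, a = butter(4, [20.0 / (fs / 2.0), 450.0 / (fs / 2.0)], btype="band")
    filtered = filtfilt(b, a, raw)
    rectified = np.abs(filtered)                        # full-wave rectification
    window_len = int(0.05 * fs)                         # 50 ms moving-average envelope
    window = np.ones(window_len) / window_len
    return np.convolve(rectified, window, mode="same")


# Example with synthetic samples standing in for one electrode channel.
rng = np.random.default_rng(0)
t = np.arange(2000) / 1000.0
samples = rng.normal(scale=0.1, size=2000) + 0.5 * np.sin(2 * np.pi * 80.0 * t)
envelope = condition_emg(samples)
print(envelope.shape, round(float(envelope.max()), 3))
```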
In some examples, one or more of processors 120 (1) and 120 (2) may include and/or represent any type or form of hardware-implemented processing device capable of interpreting and/or executing computer-readable instructions. In one example, the processor 120 (1) or 120 (2) may access and/or modify certain software modules to facilitate and/or support detection of user input via hand gestures and/or arm motions. Examples of processors 120 (1) and 120 (2) include, but are not limited to, physical processors, central processing units (CPUs), microprocessors, microcontrollers, field-programmable gate arrays (FPGAs) implementing soft-core processors, application-specific integrated circuits (ASICs), portions of one or more of the above, variations or combinations of one or more of the above, and/or any other suitable processing device.
In some examples, wearable device 102 may include and/or incorporate a wearable band. For example, wearable device 102 may include and/or represent a strap and/or band that is designed and/or sized to at least partially enclose a wrist and/or arm of a user. The strap and/or band may comprise and/or contain a variety of different materials. Examples of such materials include, but are not limited to, cotton, polyester, nylon, elastomer, plastic, neoprene, rubber, metal, wood, composites, combinations or variations of one or more of the foregoing, and/or any other suitable material. The strap and/or band may be formed and/or shaped in various shapes and/or sizes for the purpose of securing the wearable device 102 to the wrist and/or arm of the user. In one example, the strap and/or band may include and/or represent one or more segments, chains, and/or portions. Additionally or alternatively, the strap and/or band may be adjustable to provide a proper fit.
In some examples, wearable device 102 and/or head mounted display 104 may include and/or represent one or more additional components, devices, and/or mechanisms that are not necessarily shown and/or labeled in fig. 1. For example, wearable device 102 and/or head mounted display 104 may include and/or represent one or more storage devices that are not necessarily shown and/or labeled in fig. 1. Such a storage device may include and/or store computer-executable instructions that, when executed by the processor 120 (1) or 120 (2), cause the processor 120 (1) or 120 (2) to perform one or more tasks related to detecting user input via hand gestures and arm motions. Additionally or alternatively, although not necessarily shown and/or labeled in fig. 1 in this manner, wearable device 102 and/or head mounted display 104 may include and/or represent circuitry, transistors, resistors, capacitors, diodes, transceivers, sockets, wiring, circuit boards, additional processors, additional storage devices, batteries, cables, and/or connectors, among other components.
In some examples, such a storage device may include and/or represent any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, such a storage device may store, load, and/or maintain one or more modules and/or trained inference models that perform certain tasks, classifications, and/or determinations related to locating motor unit action potentials to facilitate spike decomposition and stable representations. Examples of such a storage device include, but are not limited to, random access memory (RAM), read only memory (ROM), flash memory, hard disk drives (HDDs), solid-state drives (SSDs), optical disk drives, cache memory, variations or combinations of one or more of the above, and/or any other suitable storage memory.
In some examples, wearable device 102 and/or head mounted display 104 may exclude and/or omit one or more of the components, devices, and/or mechanisms shown and/or labeled in fig. 1. For example, wearable device 102 and/or head mounted display 104 may exclude and/or omit radio 112 (1) or radio 112 (2), respectively. In this example, UWB-IR communication 140 between wearable device 102 and head mounted display 104 may still be available via and/or provided via transmitter 114 and receiver 118.
In some examples, the electrodes 116 may engage and/or make physical contact with the skin of the user while the user is wearing the wearable device 102. In one example, the wearable device 102 may be communicatively coupled to a computing system (e.g., a virtual reality headset, an augmented reality headset, a laptop, a desktop, a smart television, a monitor, etc.). In this example, the user may place and/or position their body in a particular state and/or condition (e.g., a hand gesture) to control and/or modify the presentation or execution of the computing system. When the user places and/or positions their body in that state and/or condition, the user's body may generate and/or produce neuromuscular signals representing, indicating, and/or suggestive of the state or condition.
In some examples, the neuromuscular signals may traverse and/or travel through the body of the user. For example, the user may make a gesture and/or pose that generates neuromuscular signals that travel along the user's arm to the hand. In one example, one or more of the electrodes 116 may detect and/or sense the neuromuscular signals as they travel along the arm to the hand. Electrical conductors (e.g., wires and/or traces) coupled between the electrodes and the processor 120 (1) may carry such signals and/or derivatives thereof and/or communicate such signals and/or derivatives thereof to the processor 120 (1). Processor 120 (1) may then generate and/or produce data representing these signals.
In some examples, data representing these signals may undergo some processing and/or conversion. Examples of such data include, but are not limited to, raw data generated and/or output by the electrodes, digital conversions and/or representations of analog signals output by the electrodes, processed digital representations of signals output by the electrodes, combinations or variations of one or more of the above, and/or any other suitable version of data representing neuromuscular signals.
In this example, the processor 120 (1) or 120 (2) may analyze and/or evaluate the data representing the neuromuscular signals to locate motor unit action potentials and/or facilitate spike decomposition or stable representations. For example, the processor 120 (1) or 120 (2) may execute and/or implement one or more software modules and/or trained inference models or classifiers. The processor 120 (1) or 120 (2) may input and/or feed the data representing the neuromuscular signals to one or more of these software modules and/or inference models. From this data, such software modules and/or inference models may be able to output and/or generate a classification that identifies and/or indicates one or more motor units that cause certain spikes in the neuromuscular signals. Additionally or alternatively, such software modules and/or inference models may be capable of determining, based at least in part on the motor units, that the user has made a particular gesture with at least one portion of the user's body (e.g., using a K-nearest neighbor (KNN) classifier).
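A minimal sketch of how a KNN classifier could map per-window electrode features to gesture labels is shown below. The per-channel root-mean-square feature choice, the training values, and the use of scikit-learn are illustrative assumptions only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is a feature vector for one detection window, e.g. the root-mean-square
# amplitude of each of 8 electrode channels (an assumed feature choice).
training_features = np.array([
    [0.9, 0.8, 0.1, 0.1, 0.1, 0.1, 0.7, 0.9],   # "pinch" examples
    [0.8, 0.9, 0.2, 0.1, 0.2, 0.1, 0.8, 0.8],
    [0.1, 0.2, 0.9, 0.8, 0.9, 0.8, 0.1, 0.2],   # "fist" examples
    [0.2, 0.1, 0.8, 0.9, 0.8, 0.9, 0.2, 0.1],
])
training_labels = ["pinch", "pinch", "fist", "fist"]

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(training_features, training_labels)

# Classify the features extracted from a new window of neuromuscular data.
new_window = np.array([[0.85, 0.9, 0.15, 0.1, 0.1, 0.2, 0.75, 0.85]])
print(classifier.predict(new_window)[0])  # -> "pinch"
```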
In some examples, radios 112 (1) and 112 (2) may each include and/or represent a Bluetooth radio and/or a Bluetooth Low Energy radio. Additionally or alternatively, the transmitter 114 and/or the receiver 118 may each include and/or represent one or more UWB-IR devices. In one example, the transmitter 114 and/or the receiver 118 may each be included in and/or represent a portion of a transceiver that facilitates and/or supports UWB-IR communications, UWB-IR links, and/or UWB-IR channels. Additional examples of the transmitter 114, the receiver 118, and/or the radios 112 (1) and 112 (2) include, but are not limited to, a Wi-Fi device, a cellular communication device, a Bluetooth radio, a Bluetooth Low Energy radio, a UWB device, a pulse radio, combinations or variations of one or more of the foregoing, and/or any other suitable wireless communication device.
In some examples, wearable device 102 and head mounted display 104 may exchange configuration data and/or synchronization data with each other via radios 112 (1) and 112 (2). For example, wearable device 102 may send and/or transmit configuration data and/or synchronization data to head mounted display 104 via radios 112 (1) and 112 (2). Additionally or alternatively, the head mounted display 104 may send and/or transmit configuration data and/or synchronization data to the wearable device 102 via the radios 112 (2) and 112 (1). In these examples, processor 120 (1) and/or processor 120 (2) may use the configuration data and/or synchronization data, respectively, to configure wearable device 102 and/or head mounted display 104, and/or to synchronize wearable device 102 and/or head mounted display 104 with each other. In another example, wearable device 102 may send and/or transmit data regarding neuromuscular signals detected via a body part of a user to head mounted display 104 via radios 112 (1) and 112 (2).
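One simple, hypothetical way to represent the configuration and neuromuscular-signal payloads exchanged over such a radio link is sketched below; the field names and JSON serialization are assumptions made for illustration and are not specified by this disclosure.

```python
import json
import time
from dataclasses import asdict, dataclass
from typing import List


@dataclass
class ConfigMessage:
    """Configuration/synchronization data exchanged when the devices pair (assumed fields)."""
    device_id: str
    sample_rate_hz: int
    clock_offset_s: float  # offset used to align the wearable and display clocks


@dataclass
class EmgMessage:
    """A window of neuromuscular-signal data sent from the wearable to the display."""
    device_id: str
    timestamp_s: float
    channel_rms: List[float]  # one value per electrode channel


def serialize(message) -> bytes:
    """Encode a message for transmission over the radio link."""
    return json.dumps(asdict(message)).encode("utf-8")


config = ConfigMessage(device_id="wristband-01", sample_rate_hz=1000, clock_offset_s=0.0042)
emg = EmgMessage(device_id="wristband-01", timestamp_s=time.time(),
                 channel_rms=[0.85, 0.9, 0.15, 0.1, 0.1, 0.2, 0.75, 0.85])
print(serialize(config))
print(serialize(emg))
```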
In some examples, wearable device 102 may be sized to be worn on a body part of a user. In such examples, the electrodes 116 included on the wearable device 102 may detect, sense, and/or conduct one or more neuromuscular signals via a body part of the user. In one example, the transmitter 114 incorporated into the wearable device 102 may transmit, send, and/or transmit electromagnetic signals (e.g., UWB-IR signals) to the receiver 118 incorporated into the head mounted display 104. In this example, a receiver 118 incorporated into the head mounted display 104 may receive and/or detect electromagnetic signals transmitted by the transmitter 114.
In some examples, the processor 120 (1) or 120 (2) may determine that the user made a particular gesture and/or a particular arm motion based at least in part on the neuromuscular signals detected via the body part of the user. In one example, the particular gesture made by the user may include and/or represent a pinch action and/or pinch gesture. In this example, the pinch action and/or pinch gesture may involve one of the user's digits (e.g., the index finger, middle finger, ring finger, and/or little finger) pressing against and/or touching the user's thumb. Additional examples of such gestures include, but are not limited to, fist motions or fist gestures, wrist motions or wrist gestures, open-hand motions or open-hand gestures, alternative pinch motions or pinch gestures, hand motions or arm motions, combinations or variations of one or more of the above, and/or any other suitable gestures.
In some examples, certain gestures and/or motions may be mapped to different input commands for the head mounted display 104. In one example, the gesture may be mapped to a particular input command such that when the user makes and/or executes the gesture, wearable device 102 or head mounted display 104 causes an application running on head mounted display 104 to click, select, and/or modify one or more features (e.g., virtual components presented by head mounted display 104). Additionally or alternatively, the input command may be triggered and/or initiated in response to the user holding and/or performing the gesture for a predetermined duration.
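Such a mapping could be implemented in many ways. The sketch below pairs a hypothetical gesture-to-command table with a hold-time check so that a command fires only after a gesture has been held for a predetermined duration; the gesture names, command names, and threshold are assumptions made for illustration.

```python
from typing import Optional

# Hypothetical mapping from recognized gestures to input commands for the display.
GESTURE_COMMANDS = {
    "pinch": "select",
    "fist": "open_menu",
    "open_hand": "dismiss",
}

HOLD_SECONDS = 0.5  # assumed hold duration required before a command fires


class GestureDispatcher:
    """Fires an input command once a recognized gesture has been held long enough."""

    def __init__(self) -> None:
        self._current: Optional[str] = None
        self._since = 0.0

    def update(self, gesture: Optional[str], now: float) -> Optional[str]:
        """Feed the latest recognized gesture and the current time; return a command or None."""
        if gesture != self._current:
            self._current, self._since = gesture, now
            return None
        if gesture in GESTURE_COMMANDS and now - self._since >= HOLD_SECONDS:
            self._since = now  # re-arm so the command does not repeat on every update
            return GESTURE_COMMANDS[gesture]
        return None


dispatcher = GestureDispatcher()
print(dispatcher.update("pinch", now=0.0))   # None: gesture just started
print(dispatcher.update("pinch", now=0.6))   # "select": held past the threshold
```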
In some examples, the processor 120 (2) may determine a position of the user's body part at the time the user made the particular gesture based at least in part on the electromagnetic signals received by the receivers 118. For example, as a UWB-IR device, the transmitter 114 may apply and/or incorporate a time stamp into the UWB-IR signal transmitted to the receivers 118. In this example, the UWB-IR signal may arrive at and/or reach each of the receivers 118 at a different time relative to the others.
In some examples, the processor 120 (2) may identify and/or determine a first time of arrival at which the UWB-IR signal arrives at and/or reaches a first receiver included in the set of receivers 118. In such an example, the processor 120 (2) may identify and/or determine a second time of arrival at which the UWB-IR signal arrives at and/or reaches a second receiver included in the set of receivers 118. Thus, the first arrival time and the second arrival time may relate to and/or correspond to the first receiver and the second receiver, respectively.
In one example, the processor 120 (2) may calculate and/or estimate an angle of arrival of the UWB-IR signal relative to the receivers 118 based at least in part on the first and second times of arrival and the time stamp. For example, the processor 120 (2) may subtract the time identified in the time stamp from the different times of arrival of the UWB-IR signal as received by the respective receivers 118. The resolution and/or accuracy of the calculation may increase and/or improve with the number of receivers involved. Thus, the processor 120 (2) may calculate and/or estimate a more accurate and/or more precise angle of arrival of the UWB-IR signal relative to the receivers 118 by considering the respective times of arrival relative to 3 or 4 different receivers incorporated into the head mounted display 104.
In some examples, the processor 120 (2) may calculate and/or estimate at least one dimension of the position of the virtual component within the field of view of the head mounted display 104 based at least in part on the angle of arrival. In one example, the processor 120 (2) can present and/or display the virtual component at the location within the field of view of the head mounted display 104 based at least in part on the at least one dimension. For example, the processor 120 (2) may cause and/or direct the head mounted display 104 to present a cursor and/or pointer at a particular location and/or position within the field of view of the display screen 110 based at least in part on the dimension. Additionally or alternatively, the processor 120 (2) may cause and/or direct the head mounted display 104 to superimpose and/or overlay the cursor and/or pointer on or on top of the display screen 110 in accordance with the dimension. In some examples, processor 120 (2) may also cause and/or control head mounted display 104 to select, open, and/or modify another virtual component in the vicinity of a cursor and/or pointer presented on display screen 110 to address one or more gestures made by the user.
In some examples, the dimensions of the calculated and/or estimated position of the virtual component may constitute and/or represent an azimuth, an elevation, and/or a depth of the virtual component to be presented within the field of view of the head mounted display. Additionally or alternatively, the processor 120 (2) may determine the size, orientation, and/or shape of the virtual component based at least in part on the dimension and/or position.
In some examples, the processor 120 (2) may detect, sense, and/or determine a first phase of the UWB-IR signal as it arrives at and/or reaches a first receiver included in the set of receivers 118. In such examples, the processor 120 (2) may detect, sense, and/or determine a second phase of the UWB-IR signal as it arrives at and/or reaches a second receiver included in the set of receivers 118. Thus, the first phase and the second phase of the UWB-IR signal may relate to and/or correspond to the first receiver and the second receiver, respectively. In one example, the first phase and the second phase of the UWB-IR signal may be related and/or correlated to each other.
In one example, the processor 120 (2) may calculate and/or estimate an angle of arrival of the UWB-IR signal relative to the receivers 118 based at least in part on the difference between the first phase and the second phase of the UWB-IR signal and/or the timestamp. The resolution and/or accuracy of the calculation may increase and/or improve with the number of receivers involved. Thus, the processor 120 (2) may calculate and/or estimate a more accurate and/or more precise angle of arrival of the UWB-IR signal relative to the receivers 118 by considering the respective phases of the UWB-IR signal relative to 3 or 4 different receivers incorporated into the head mounted display 104.
In some examples, the processor 120 (2) may calculate and/or estimate a two-dimensional (2D) location and/or a three-dimensional (3D) location of the virtual component within the field of view of the head-mounted display 104 based at least in part on the angle of arrival and/or the phase of the UWB-IR signal. In one example, the processor 120 (2) may present and/or display virtual components at the 2D location and/or 3D location within the field of view of the head mounted display 104. For example, the processor 120 (2) may cause and/or direct the head mounted display 104 to present a cursor and/or pointer at the 2D location and/or 3D location within the field of view of the display screen 110. Additionally or alternatively, the processor 120 (2) may cause and/or direct the head mounted display 104 to superimpose and/or overlay a cursor and/or pointer on or on top of the display screen 110 at the 2D and/or 3D position.
In some examples, the 2D position and/or 3D position of the virtual component may include, relate to, and/or account for azimuth, elevation, and/or depth of the virtual component presented within the field of view of the head mounted display 104. Additionally or alternatively, the processor 120 (2) may determine a size, orientation, and/or shape of the virtual component based at least in part on the 2D location and/or the 3D location.
In some examples, the user may remove the wearable device 102 and/or the corresponding body part from the field of view of the head-mounted display 104. In one example, the processor 120 (2) may detect, sense, and/or determine that the wearable device 102 is no longer visible within the field of view of the head mounted display 104. The processor 120 (2) may remove the representation and/or virtual component corresponding to the wearable device 102 or body part from the field of view of the head-mounted display 104 in response to the determination. For example, the processor 120 (2) may cause and/or direct the head mounted display 104 to cause the cursor and/or pointer to disappear from the field of view of the display screen 110.
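One simple way to implement that visibility check is sketched below; the field-of-view half-angles and function names are assumptions made for illustration.

```python
# Assumed half-angles of the head mounted display's field of view, in degrees.
FOV_HALF_AZIMUTH_DEG = 35.0
FOV_HALF_ELEVATION_DEG = 25.0


def wearable_in_view(azimuth_deg: float, elevation_deg: float) -> bool:
    """Return True while the angle-of-arrival estimate falls inside the field of view."""
    return (abs(azimuth_deg) <= FOV_HALF_AZIMUTH_DEG
            and abs(elevation_deg) <= FOV_HALF_ELEVATION_DEG)


def update_pointer(azimuth_deg: float, elevation_deg: float, pointer_visible: bool) -> bool:
    """Keep the pointer visible only while the wearable remains within the field of view."""
    visible = wearable_in_view(azimuth_deg, elevation_deg)
    if pointer_visible and not visible:
        print("wearable left the field of view: removing pointer")
    return visible


print(update_pointer(12.0, -8.0, pointer_visible=True))   # True: still visible
print(update_pointer(60.0, -8.0, pointer_visible=True))   # False: pointer removed
```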
In some examples, a user may wear and/or don multiple instances of wearable device 102, and all of these instances of wearable device 102 may be communicatively coupled to head-mounted display 104. For example, a user may wear one wristband on the right wrist and/or another wristband on the left wrist. In this example, both the right wristband and the left wristband may transmit UWB-IR signals to the head mounted display 104.
In some examples, all such instances of wearable device 102 may perform any of the operations and/or functions described above in connection with fig. 1. For example, the right wristband may detect and/or sense neuromuscular signals via the right wrist, while the left wristband may detect and/or sense neuromuscular signals via the left wrist. In this example, the head mounted display 104 may determine that the user made a gesture with the right hand based on the neuromuscular signals detected via the right wrist. Similarly, the head mounted display 104 may determine that the user made another gesture with the left hand based on the neuromuscular signals detected via the left wrist.
In some examples, the head mounted display 104 may determine the position of the right wristband and/or the corresponding body part when the user performs a gesture with the right hand based at least in part on the UWB-IR signal transmitted by the right wristband. Similarly, the head mounted display 104 may determine the position of the left wristband and/or the corresponding body part when the user performs a gesture with the left hand based at least in part on the UWB-IR signal transmitted by the left wristband.
In some examples, the head mounted display 104 may calculate and/or estimate the positions of virtual components within the field of view of the display screen 110 based at least in part on the angles of arrival of the UWB-IR signals transmitted by the right and left wristbands. In one example, the head mounted display 104 may present and/or display the virtual components at those positions within the field of view of the display screen 110. For example, the head mounted display 104 may cause and/or direct the display screen 110 to superimpose and/or overlay a cursor and/or pointer corresponding to the right hand and another cursor and/or pointer corresponding to the left hand. In some examples, the head mounted display 104 may also cause and/or direct one or more of these cursors or pointers to select, open, and/or modify another virtual component presented on the display screen 110 to address one or more gestures made by the right and/or left hand of the user. These cursors and/or pointers may be presented simultaneously and/or concurrently on the display screen 110.
Fig. 4 is an illustration of an exemplary embodiment 400 of the artificial reality system 100 for detecting user input via hand gestures and arm motions. In some examples, implementation 400 may include and/or involve user 410 wearing and/or operating wearable device 102. In such an example, wearable device 102 may locate motor unit action potentials to facilitate spike decomposition and stable representation. In one example, wearable device 102 may detect and/or sense neuromuscular signals 440 through the body of user 410 via electrodes 116. Wearable device 102 may then convert these neuromuscular signals from a time domain representation to a frequency domain representation and/or a spatial domain representation for further processing.
In some examples, wearable device 102 may identify, within a portion of the body of user 410, a motor unit that caused a spike in neuromuscular signals 440 by decomposing the spikes in neuromuscular signals 440. In one example, the motor unit may include and/or represent a motor neuron and/or the skeletal muscle fibers innervated by axon terminals of the motor neuron. For example, a motor unit may include and/or represent a motor neuron and all muscle fibers stimulated by that motor neuron.
In some examples, wearable device 102 may determine that the user made a particular gesture with at least one body part based at least in part on the motor unit that caused the spike in neuromuscular signals 440. For example, the wearable device 102 may process the neuromuscular signals 440 converted to the frequency domain representation via a machine learning classifier (e.g., a KNN classifier). In this example, wearable device 102 may detect, via the machine learning classifier, a spike pattern indicative of a particular gesture, and then determine that the particular gesture was made by the user based at least in part on the spike pattern. Additionally or alternatively, wearable device 102 may then direct head mounted display 104 to manipulate and/or alter one or more audio elements and/or visual elements presented via head mounted display 104 to address the particular gesture made by the user.
Fig. 5 illustrates an exemplary embodiment 500 of the artificial reality system 100 for detecting user input via hand gestures and arm motions. In some examples, the implementation 500 may include and/or involve the user wearing and/or operating the wearable device 102. In such an example, the wearable device 102 may detect and/or sense neuromuscular signals 440 through the body of the user via the electrodes 116. In one example, wearable device 102 may then convert the neuromuscular signals 440 from a time domain representation to a frequency domain representation to facilitate and/or support detection and/or recognition of gestures 502 and/or motions 504.
In some examples, wearable device 102 may detect and/or identify one or more gestures 502 and/or motions 504 based at least in part on neuromuscular signals 440. In such examples, wearable device 102 may send and/or transmit information or data to head-mounted display 104 via radio communication 138 and/or UWB-IR communication 140 indicating that the user performed one or more of gesture 502 and/or motion 504.
In other examples, the wearable device 102 may send and/or transmit information or data representing the neuromuscular signal 440 to the head-mounted display 104 via the radio communication 138 or UWB-IR communication 140. In such examples, the head mounted display 104 may detect and/or identify one or more of the gestures 502 and/or movements 504 based at least in part on the information or data representing the neuromuscular signals 440.
In some examples, the processor 120 (1) and/or the transmitter 114 incorporated into the wearable device 102 may tag the UWB-IR signal with a timestamp. In such an example, the transmitter 114 may send and/or transmit the tagged UWB-IR signals via UWB-IR communication 140 to the set of receivers 118 incorporated into the head mounted display 104. For example, the tagged UWB-IR signal may reach and/or arrive at both receivers 118 (1) and 118 (2), which are positioned a distance 510 from each other. In this example, the head mounted display 104 may calculate and/or estimate an angle of arrival and/or a phase difference of arrival of the tagged UWB-IR signals relative to the receivers 118 (1) and 118 (2) based at least in part on the time stamps.
In some examples, the head mounted display 104 may perform the angle of arrival calculation 800 in fig. 8 to determine and/or estimate the position of virtual components (e.g., pointers and/or cursors) superimposed on the display screen 110. As a specific example, the head mounted display 104 may calculate and/or estimate a radio frequency (RF) carrier phase difference by applying the following equation: $\Delta\varphi = \varphi_1 - \varphi_2 = 2\pi f\,\Delta t = \frac{2\pi f\,\Delta d}{c} = \frac{2\pi\,\Delta d}{\lambda} = \frac{2\pi D \sin\theta}{\lambda}$, wherein $\varphi_1$ represents the phase of the carrier signal relative to receiver 118 (1), $\varphi_2$ represents the phase of the carrier signal relative to receiver 118 (2), $f$ represents the frequency of the carrier signal, $\Delta t$ represents the difference in path time or arrival time of the carrier signal between the transmitter 114 and the respective receivers, $\Delta d$ represents the difference in distance between the transmitter 114 and the respective receivers, $c$ represents the speed of light, $D$ represents the known distance between the receivers 118 (1) and 118 (2), $\theta$ represents the angle of arrival of the carrier signal relative to the receivers 118 (1) and 118 (2), and $\lambda$ represents the wavelength of the carrier signal. Additionally or alternatively, the head mounted display 104 may calculate and/or estimate the angle of arrival of the carrier signal relative to the receivers 118 (1) and 118 (2) by applying the following formula: $\theta = \arcsin\left(\frac{\lambda\,\Delta\varphi}{2\pi D}\right)$. In one example, the angle of arrival of the carrier signal may constitute and/or represent the direction in which the carrier signal is received relative to receivers 118 (1) and 118 (2).
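To make the relationship concrete, the short sketch below evaluates these formulas for assumed values of carrier frequency, antenna spacing, and measured phase difference; the numbers are illustrative only.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second


def angle_from_phase(delta_phi: float, frequency_hz: float, spacing_m: float) -> float:
    """Solve theta = arcsin(lambda * delta_phi / (2 * pi * D)) for the angle of arrival."""
    wavelength = SPEED_OF_LIGHT / frequency_hz
    ratio = wavelength * delta_phi / (2.0 * math.pi * spacing_m)
    return math.asin(max(-1.0, min(1.0, ratio)))


# Assumed values: a 6.5 GHz UWB carrier, antennas 2.3 cm apart (about half a
# wavelength), and a measured phase difference of pi/2 radians between receivers.
theta = angle_from_phase(delta_phi=math.pi / 2.0, frequency_hz=6.5e9, spacing_m=0.023)
print(f"angle of arrival: {math.degrees(theta):.1f} degrees")  # roughly 30 degrees
```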
In some examples, Δφ and θ may correspond to and/or represent the azimuth and elevation angle of wearable device 102, and/or the azimuth and elevation angle of a virtual component within the field of view, respectively. For example, the Δφ calculation may be translated and/or converted into an azimuth angle of the virtual component (e.g., a pointer and/or cursor) to be presented within the spherical coordinate system of the head mounted display 104. In this example, the θ calculation may be translated and/or converted into an elevation angle of the virtual component to be presented within the spherical coordinate system of the head mounted display 104.
In some examples, head mounted display 104 may calculate and/or estimate a path time and/or an arrival time of a carrier signal between transmitter 114 and receivers 118 (1) and 118 (2). For example, the processor 120 (2) incorporated into the head mounted display 104 may identify and/or detect a timestamp tagged to the UWB-IR signal. In this example, processor 120 (2) may also identify and/or detect the arrival time of the carrier signal received by receiver 118 (1) and/or receiver 118 (2). In one example, processor 120 (2) may determine the travel time and/or time delta of the carrier signal by subtracting the time identified in the timestamp from the arrival time. Processor 120 (2) may then calculate and/or estimate an angle of arrival of the carrier signal relative to receivers 118 (1) and 118 (2) based at least in part on the travel time and/or time delta.
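As a rough sketch of the timestamp-based procedure just described, the following Python code subtracts the transmit timestamp tagged to the signal from each receiver's arrival time to obtain travel times, then derives an angle of arrival from the resulting time difference. The arrival times, receiver spacing, and helper names are hypothetical and chosen only for illustration.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def travel_time(tx_timestamp_s: float, rx_arrival_s: float) -> float:
    """Travel time of the carrier signal: arrival time minus the tagged timestamp."""
    return rx_arrival_s - tx_timestamp_s

def aoa_from_time_delta(delta_t_s: float, d_receivers_m: float) -> float:
    """Angle of arrival from the differential travel time between two receivers:
    sin(theta) = c * delta_t / D (clamped to a valid arcsin argument)."""
    return math.asin(max(-1.0, min(1.0, C * delta_t_s / d_receivers_m)))

# Hypothetical measurements: the UWB-IR pulse is tagged at t = 0 and arrives at
# receivers 118(1) and 118(2) a little over a nanosecond later.
tx_timestamp = 0.0
arrival_rx1 = 1.2000e-9        # s
arrival_rx2 = 1.2300e-9        # s, slightly later at the second receiver
d_receivers = 0.02             # m, known spacing D

t1 = travel_time(tx_timestamp, arrival_rx1)
t2 = travel_time(tx_timestamp, arrival_rx2)
theta = aoa_from_time_delta(t2 - t1, d_receivers)
print(f"travel times: {t1:.3e} s, {t2:.3e} s, angle of arrival: {math.degrees(theta):.1f} deg")
```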
In some examples, the angle of arrival of the carrier signal may correspond to and/or represent a position (e.g., a 2D representation and/or 3D representation) of the wearable device and/or associated body part within the field of view of the head-mounted display 104. In such examples, the head mounted display 104 may generate virtual components (e.g., pointers and/or cursors) representing the wearable device and/or associated body part for presentation and/or overlaying on the display screen 110 of the head mounted display 104. The head mounted display 104 may convert and/or translate the angle of arrival of the carrier signal into a corresponding location within the coordinate system and/or grid of the display screen 110 (e.g., expressed as azimuth and/or elevation). In one example, the virtual component may be visually combined (e.g., in an augmented reality environment or scene) with one or more real components visible through the display screen 110.
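To illustrate how an azimuth/elevation pair derived from an angle of arrival might be translated into a pointer location on the display screen, the sketch below maps the angles into pixel coordinates for a display with an assumed field of view and resolution. The field-of-view values, resolution, and function name are assumptions made for this example and are not part of the disclosure.

```python
def aoa_to_pixel(azimuth_deg: float, elevation_deg: float,
                 fov_h_deg: float = 52.0, fov_v_deg: float = 38.0,
                 width_px: int = 1920, height_px: int = 1080) -> tuple[int, int]:
    """Map an azimuth/elevation angle of arrival into pixel coordinates on the
    display screen, assuming (0, 0) degrees corresponds to the optical center."""
    # Normalize each angle to [-0.5, 0.5] of the field of view, then scale to pixels.
    x = (azimuth_deg / fov_h_deg + 0.5) * width_px
    y = (0.5 - elevation_deg / fov_v_deg) * height_px   # screen y grows downward
    # Clamp so the pointer stays on screen when the wearable nears the edge of the field of view.
    return (int(min(max(x, 0), width_px - 1)), int(min(max(y, 0), height_px - 1)))

# Hypothetical angles derived from the UWB-IR angle-of-arrival calculation.
print(aoa_to_pixel(azimuth_deg=10.0, elevation_deg=-5.0))   # pointer right of center, slightly low
print(aoa_to_pixel(azimuth_deg=0.0, elevation_deg=0.0))     # pointer at the optical center
```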
Fig. 6 illustrates an exemplary embodiment 600 of the artificial reality system 100 for detecting user input via hand gestures and arm motions. In some examples, embodiment 600 may include and/or involve a user wearing wearable device 102 on skin 602 and/or operating the wearable device on the skin. In such examples, wearable device 102 may include and/or represent electrode 606 coupled to skin 602. In one example, the EMG signal 612 may traverse and/or travel through the muscle 604 in the user's body.
In some examples, the electrodes 606 may detect and/or sense an EMG signal 612 that is traversing the muscle 604 in the user's body via the skin 602. In one example, wearable device 102 and/or head mounted display 104 may then transform and/or convert EMG signals 612 from a time-domain representation to a frequency-domain representation and/or a spatial-domain representation to facilitate and/or support detection and/or recognition of one or more of gesture 502 and/or motion 504.
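The time-domain to frequency-domain conversion mentioned above is commonly implemented as a windowed Fourier transform. Below is a minimal NumPy sketch of that step; the sampling rate, window length, and synthetic signal are assumptions for illustration only and are not taken from the disclosure.

```python
import numpy as np

FS = 2000  # Hz, assumed EMG sampling rate

def emg_to_frequency_domain(emg_window: np.ndarray, fs: int = FS):
    """Convert a window of time-domain EMG samples into a one-sided magnitude spectrum."""
    windowed = emg_window * np.hanning(len(emg_window))   # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(emg_window), d=1.0 / fs)
    return freqs, spectrum

# Synthetic stand-in for a 0.25 s EMG window: band-limited noise plus a 60 Hz component.
t = np.arange(0, 0.25, 1.0 / FS)
emg = 0.05 * np.random.randn(len(t)) + 0.02 * np.sin(2 * np.pi * 60 * t)

freqs, spectrum = emg_to_frequency_domain(emg)
print(f"dominant frequency bin: {freqs[np.argmax(spectrum)]:.1f} Hz")
```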
In some examples, wearable device 102 and/or head mounted display 104 may parse and/or decompose such EMG signals as represented in the time domain. For example, wearable device 102 and/or head mounted display 104 may perform a decomposition 614 of EMG signal 612 into a sequence of motor unit action potentials 616 representing individual motor neurons 610 that innervate muscle 604 in the user's body. Wearable device 102 and/or head mounted display 104 may determine that the user performed one or more of gesture 502 and/or motion 504 based at least in part on certain patterns and/or spikes identified and/or detected in motor unit action potential sequence 616.
Fig. 7 illustrates an exemplary representation of neuromuscular signals 700 detected and/or identified via a wearable device 102 worn by a user operating the artificial reality system 100. In some examples, neuromuscular signal 700 may include and/or represent raw time domain EMG signal 702 detected via electrodes 116 disposed and/or arranged on wearable device 102. In one example, the wearable device 102 and/or the head mounted display 104 may process, transform, and/or convert the raw time domain EMG signal 702 into a processed frequency domain EMG signal 704. In this example, wearable device 102 and/or head mounted display 104 may determine that the user performed one or more of gesture 502 and/or motion 504 based at least in part on certain patterns and/or spikes identified and/or detected in raw time domain EMG signal 702 and/or processed frequency domain EMG signal 704.
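One simple way to flag the "patterns and/or spikes" mentioned above is to compare band-limited spectral energy against a per-channel threshold. The sketch below is an intentionally naive illustration of that idea, not the classifier contemplated by the disclosure; the band edges, threshold, and synthetic spectra are arbitrary assumptions.

```python
import numpy as np

def band_power(freqs: np.ndarray, spectrum: np.ndarray,
               low_hz: float = 20.0, high_hz: float = 450.0) -> float:
    """Sum spectral magnitude inside the band typically associated with surface EMG."""
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(np.sum(spectrum[mask]))

def detect_gesture(freqs: np.ndarray, spectrum: np.ndarray, threshold: float = 1.0) -> bool:
    """Declare that a gesture occurred when band power exceeds a fixed threshold."""
    return band_power(freqs, spectrum) > threshold

# Synthetic processed spectra standing in for one EMG channel.
freqs = np.linspace(0, 1000, 501)
quiet = np.full_like(freqs, 0.001)                          # resting muscle
active = quiet + 0.05 * np.exp(-((freqs - 120) / 60) ** 2)  # burst of activity near 120 Hz

print(detect_gesture(freqs, quiet))    # False: no gesture detected
print(detect_gesture(freqs, active))   # True: pattern consistent with a gesture
```

In practice a trained classifier over many channels would replace the fixed threshold, but the data flow (frequency-domain features in, gesture decision out) is the same.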
Fig. 9 illustrates an exemplary viewpoint implementation 900 of the artificial reality system 100. As shown in fig. 9, an exemplary viewpoint implementation 900 may include and/or represent a user wearing a head mounted display 104 and a wearable device 102. In some examples, the display screen 110 of the head mounted display 104 may include and/or represent lenses that facilitate and/or support see-through visibility with superimposed virtual overlays. For example, wearable device 102 may transmit accurately time-stamped UWB-IR signals to various UWB antennas incorporated into head mounted display 104. In this example, the head mounted display 104 may be able to determine the angle of arrival of the time-stamped signals. The head mounted display 104 may be capable of triangulating and/or tracking the relative position of the hand 902 proximate to the wearable device 102 based on the angle of arrival. The head mounted display 104 may then display a virtual component 904 corresponding to the position of the hand 902 in the field of view of the display screen 110.
In some examples, the head mounted display 104 may superimpose the virtual component 906 on some real components visible via the display screen 110. For example, virtual component 904 may include and/or represent a pointer controlled by movements and/or gestures of wearable device 102 and/or hand 902, and virtual component 906 may include and/or represent messages, dialog boxes, and/or modal windows. In one example, a user may perform one or more gestures and/or motions related to virtual component 906 to interact with virtual component 906, open the virtual component, control the virtual component, modify the virtual component, and/or otherwise manipulate the virtual component. In this example, wearable device 102 may detect neuromuscular signals indicative of such gestures and/or movements, and head mounted display 104 may receive data from wearable device 102 indicative of such gestures and/or movements. Head mounted display 104 may then determine that the user performed such gestures and/or movements while virtual component 904 and virtual component 906 overlap. Accordingly, the head mounted display 104 can perform one or more actions (e.g., click, rotate, drag, drop, zoom, etc.) related to the virtual component 906 that map to such gestures and/or motions.
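The pointer/target interaction described above amounts to a hit test followed by a gesture-to-action lookup. The following sketch is a hypothetical illustration of that flow; the gesture names, action names, and rectangle representation are assumptions and do not come from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

# Hypothetical mapping from recognized hand gestures to UI actions.
GESTURE_ACTIONS = {
    "pinch": "click",
    "fist": "drag",
    "spread": "zoom",
}

def handle_gesture(gesture: str, pointer_xy: tuple[float, float], target: Rect):
    """Apply the action mapped to a gesture only if the pointer overlaps the target."""
    if gesture in GESTURE_ACTIONS and target.contains(*pointer_xy):
        return GESTURE_ACTIONS[gesture]
    return None

dialog = Rect(x=800, y=400, w=300, h=200)           # e.g., a dialog like virtual component 906
print(handle_gesture("pinch", (900, 450), dialog))  # "click": pointer is over the dialog
print(handle_gesture("pinch", (100, 100), dialog))  # None: pointer is elsewhere
```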
Fig. 10 illustrates an exemplary spherical coordinate system 1000 for converting user input to a display screen 110 of the head mounted display 104. As shown in fig. 10, an exemplary spherical coordinate system 1000 may include and/or represent an azimuth 1010 and an elevation 1020. In some examples, the head mounted display 104 may present virtual components (e.g., a pointer and/or cursor) at locations and/or positions within the field of view of the display screen 110 defined by the azimuth 1010 and elevation 1020 of the spherical coordinate system 1000.
FIG. 11 is a flow chart of an exemplary method 1100 for detecting user input via hand gestures and arm motions. In one example, the steps shown in fig. 11 may be performed during operation of the artificial reality system. Additionally or alternatively, the steps illustrated in fig. 11 may also include and/or involve various sub-steps and/or variations consistent with the description provided above in connection with fig. 1-10.
As shown in fig. 11, method 1100 may include and/or involve the steps of: one or more neuromuscular signals are detected at a body part of a user by a wearable device (1110). Step 1110 may be performed in a variety of ways, including any of those described above in connection with fig. 1-10. For example, a wearable device worn by a user of an artificial reality system may detect one or more neuromuscular signals at a body part of the user.
The method 1100 may further include and/or involve the steps of: an electromagnetic signal is transmitted by a transmitter included on the wearable device (1120). Step 1120 may be performed in a variety of ways, including any of those described above in connection with fig. 1-10. For example, the wearable device may include a transmitter that transmits and/or transmits UWB-IR signals to a UWB antenna array disposed on the head mounted display.
The method 1100 may further include and/or involve the steps of: electromagnetic signals transmitted by the transmitter are received at a set of receivers included on the head mounted display (1130). Step 1130 may be performed in a variety of ways including any of those described above in connection with fig. 1-10. For example, a set of UWB antennas disposed on a head mounted display may detect and/or receive UWB-IR signals transmitted and/or sent by a transmitter included on a wearable device.
The method 1100 may further include and/or involve the steps of: a determination is made that a particular gesture was made by the user based at least in part on the one or more neuromuscular signals (1140). Step 1140 may be performed in a variety of ways, including any of those described above in connection with fig. 1-10. For example, a processing device incorporated into a wearable device or head mounted display may determine and/or identify a particular gesture and/or a particular motion made by a user based at least in part on the one or more neuromuscular signals.
The method 1100 may further include and/or involve the steps of: a position of a body part of the user at the time the user makes the particular gesture is determined based at least in part on the electromagnetic signals (1150). Step 1150 may be performed in a variety of ways, including any of those described above in connection with fig. 1-10. For example, a processing device incorporated into the head mounted display may determine and/or identify a location of a body part of the user at the time the user makes a particular gesture based at least in part on the received electromagnetic signals from the wearable device.
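Putting the steps of method 1100 together, the following sketch shows one possible control loop in which a wearable supplies neuromuscular data and a timestamped signal, and a head-mounted display resolves both a gesture and a position. All class names, method names, and numeric values here are hypothetical scaffolding under assumed behavior, not an implementation of the disclosed system.

```python
import random

class Wearable:
    """Stand-in for a wearable device such as wearable device 102 (behavior is hypothetical)."""
    def read_neuromuscular_signals(self):
        return [random.gauss(0, 0.05) for _ in range(500)]   # fake EMG window
    def transmit_tagged_signal(self):
        return 0.0                                            # timestamp of transmission (s)

class Headset:
    """Stand-in for a head-mounted display such as head mounted display 104."""
    def receive_signal(self):
        return {"rx1": 1.20e-9, "rx2": 1.23e-9}               # fake per-receiver arrival times (s)
    def classify_gesture(self, emg):
        rms = (sum(s * s for s in emg) / len(emg)) ** 0.5
        return "pinch" if rms > 0.04 else None                # toy threshold classifier
    def locate_body_part(self, tx_timestamp, arrivals):
        return arrivals["rx2"] - arrivals["rx1"]              # toy proxy for position
    def apply_input(self, gesture, position):
        print(f"apply {gesture} at {position}")

def run_input_cycle(wearable, headset):
    """One illustrative pass through steps 1110-1150 of method 1100."""
    emg = wearable.read_neuromuscular_signals()        # step 1110: detect neuromuscular signals
    ts = wearable.transmit_tagged_signal()             # step 1120: transmit electromagnetic signal
    arrivals = headset.receive_signal()                # step 1130: receive at the set of receivers
    gesture = headset.classify_gesture(emg)            # step 1140: determine the gesture
    position = headset.locate_body_part(ts, arrivals)  # step 1150: determine the position
    if gesture:
        headset.apply_input(gesture, position)

run_input_cycle(Wearable(), Headset())
```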
As described above in connection with fig. 1-11, AR glasses and/or corresponding systems may facilitate and/or support multi-dimensional virtual cursor movement and object selection through arm movement and hand gestures. In some examples, the AR glasses may contain a set of UWB antennas that receive various information and/or data from a wristband worn by the user. In one example, the wristband may include a UWB impulse radio that transmits accurately time-stamped signals to the set of UWB antennas incorporated into the AR glasses. In this example, the AR glasses may be able to determine the angle of arrival of the time-stamped signals. The AR glasses may be capable of triangulating and/or tracking the relative position of the user's hand near the wristband based on the angle of arrival. The AR glasses may then display a virtual representation of the user's hand and/or a cursor in the field of view of the user.
In addition, the wristband may include a set of EMG sensors that measure EMG activity at the user's wrist. In this example, the wristband and/or AR glasses may be able to interpret certain hand gestures performed by the user when operating the AR glasses based at least in part on the EMG activity measured by the EMG sensors. The wristband and/or AR glasses may then determine whether any of these hand gestures correspond to certain commands and/or user inputs to be applied to the user's AR experience. For example, the user may point to a real component and/or a virtual component displayed in the user's AR experience, and the AR glasses may determine which real or virtual component the user is pointing to based at least in part on the angle of arrival of the transmitted UWB signal. The user may then select such a component in the AR experience by performing, for example, a specific gesture with his or her hand, the specific gesture being detected via the EMG activity measured by the EMG sensors.
Example embodiment
Example 1: an artificial reality system, comprising: (1) A wearable device sized to be worn on a body part of a user, wherein the wearable device comprises: (A) A set of electrodes that detect one or more neuromuscular signals via a body part of a user, and (B) a transmitter that transmits electromagnetic signals; (2) A head mounted display communicatively coupled to the wearable device, wherein the head mounted display includes a set of receivers that receive electromagnetic signals transmitted by a transmitter included on the wearable device; and (3) one or more processing devices that: (1) Determining that the user made the particular gesture based at least in part on the one or more neuromuscular signals detected via the body part of the user, and (2) determining a position of the body part of the user at the time the user made the particular gesture based at least in part on electromagnetic signals received by the set of receivers included on the head-mounted display.
Example 2: the artificial reality system of example 1, wherein at least one of the one or more processing devices is incorporated into a wearable device.
Example 3: the artificial reality system of example 1 or 2, wherein at least one of the one or more processing devices is incorporated into a head mounted display.
Example 4: the artificial reality system of any of examples 1-3, wherein (1) the wearable device comprises a first bluetooth radio, and (2) the head mounted display comprises a second bluetooth radio communicatively coupled to the first bluetooth radio, the first bluetooth radio and the second bluetooth radio configured to exchange configuration data between the wearable device and the head mounted display.
Example 5: the artificial reality system of any of examples 1-4, wherein the first bluetooth radio and the second bluetooth radio are further configured to exchange data about the one or more neuromuscular signals between the wearable device and the head-mounted display.
Example 6: the artificial reality system of any of examples 1-5, wherein at least one of the one or more processing devices generates an input command based at least in part on data regarding the one or more neuromuscular signals, the input command causing the head mounted display to modify at least one virtual component to address a particular gesture.
Example 7: the artificial reality system according to any one of examples 1-6, wherein the transmitter incorporates a timestamp into the electromagnetic signal before sending the electromagnetic signal to the set of receivers.
Example 8: the artificial reality system of any of examples 1-7, wherein at least one of the one or more processing devices: (1) Determining a first time of arrival of an electromagnetic signal received by a first receiver included in the set of receivers; (2) Determining a second time of arrival of the electromagnetic signal received by a second receiver included in the set of receivers; and (3) calculating an angle of arrival of the electromagnetic signal relative to the set of receivers based at least in part on the first and second times of arrival of the electromagnetic signal and the time stamp.
Example 9: the artificial reality system of any of examples 1-8, wherein at least one of the one or more processing devices: (1) Calculating at least one dimension of a position of the virtual component within a field of view of the head mounted display based at least in part on the angle of arrival; and (2) presenting the virtual component at the location within the field of view of the head mounted display based at least in part on the at least one dimension.
Example 10: the artificial reality system of any of examples 1-9, wherein the at least one dimension of the calculated position of the virtual component comprises at least one of: (1) An azimuth of a virtual component to be presented within a field of view of the head mounted display; (2) Elevation angle of a virtual component to be presented within a field of view of a head mounted display; or (3) the depth of the virtual component to be rendered within the field of view of the head mounted display.
Example 11: the artificial reality system of any of examples 1 to 10, wherein at least one of the one or more processing devices: (1) Determining a first phase of an electromagnetic signal received by a first receiver included in the set of receivers; (2) Determining a second phase of the electromagnetic signal received by a second receiver included in the set of receivers; and (3) calculating an angle of arrival of the electromagnetic signal relative to the set of receivers based at least in part on a difference between the first phase and the second phase of the electromagnetic signal and the timestamp.
Example 12: the artificial reality system of any of examples 1 to 11, wherein at least one of the one or more processing devices: (1) Calculating a two-dimensional position of the virtual component within a field of view of the head mounted display based at least in part on the angle of arrival; and (2) presenting the virtual component at the two-dimensional location within the field of view of the head mounted display.
Example 13: the artificial reality system of any of examples 1 to 12, wherein at least one of the one or more processing devices: (1) Calculating a three-dimensional position of the virtual component within a field of view of the head mounted display based at least in part on the angle of arrival; and (2) presenting the virtual component at the three-dimensional location within the field of view of the head-mounted display.
Example 14: the artificial reality system according to any one of examples 1-13, wherein: (1) The virtual part presented at the location includes a pointer presented at the location; and (2) at least one of the one or more processing devices superimposes the pointer on a screen of the head mounted display.
Example 15: the artificial reality system of any of examples 1-14, wherein at least one of the one or more processing devices generates an input command based at least in part on data regarding the one or more neuromuscular signals, the input command causing the head mounted display to modify at least one additional virtual component presented near a pointer within a field of view of the head mounted display to correspond to a particular gesture.
Example 16: the artificial reality system of any of examples 1 to 15, wherein at least one of the one or more processing devices: (1) Determining, based at least in part on the angle of arrival, that the wearable device is no longer visible within the field of view of the head-mounted display; and in response to determining that the wearable device is no longer visible within the field of view of the head-mounted display, (2) removing the virtual component from the field of view of the head-mounted display.
Example 17: the artificial reality system of any of examples 1-16, further comprising an additional wearable device sized to be worn on an additional body part of a user, wherein the additional wearable device comprises: (1) An additional set of electrodes that detect one or more additional neuromuscular signals via the additional body part of the user; and (2) at least one additional transmitter that transmits an additional electromagnetic signal; wherein: the head mounted display is also communicatively coupled to the additional wearable device, the set of receivers receiving additional electromagnetic signals transmitted by the additional transmitter included on the additional wearable device; and at least one of the one or more processing devices: (1) Determining that the user made an additional gesture based at least in part on the one or more additional neuromuscular signals detected via the additional body part of the user; and (2) determining a position of the additional body part of the user at the time the user makes the additional gesture based at least in part on the additional electromagnetic signals received by the set of receivers included on the head mounted display.
Example 18: the artificial reality system of any of examples 1 to 17, wherein at least one of the one or more processing devices: (1) Calculating at least one dimension of a position of the virtual component within a field of view of the head mounted display based at least in part on the electromagnetic signals; (2) Calculating at least one additional dimension of an additional position of the additional virtual component within the field of view of the head mounted display based at least in part on the additional electromagnetic signals; and (3) presenting the virtual component at the location and presenting the additional virtual component at the additional location within the field of view of the head mounted display based at least in part on the at least one dimension and the at least one additional dimension.
Example 19: a head mounted display, comprising: (1) A set of receivers configured to receive electromagnetic signals transmitted by a transmitter included on a wearable device sized to be worn on a body part of a user; (2) A radio configured to receive data regarding one or more neuromuscular signals detected by the wearable device via a body part of a user; and (3) at least one processing device communicatively coupled to the set of receivers and radios, wherein the processing device: (A) Determining that the user made the particular gesture based at least in part on data regarding the one or more neuromuscular signals detected via the body part of the user; and (B) determining a position of the body part of the user at the time the user makes the particular gesture based at least in part on the electromagnetic signals received by the set of receivers included on the head mounted display.
Example 20: a method, comprising: (1) Detecting, by a wearable device worn on a body part of a user, one or more neuromuscular signals at the body part of the user; (2) Transmitting, by a transmitter included on the wearable device, an electromagnetic signal; (3) Receiving, by a set of receivers included on a head mounted display worn by a user, electromagnetic signals sent by a transmitter included on a wearable device; (4) Determining, by the one or more processing devices, that a particular gesture was made by the user based at least in part on the one or more neuromuscular signals; and (5) determining, by the one or more processing devices, a position of the body part of the user at the time the user makes the particular gesture based at least in part on the electromagnetic signal.
Embodiments of the present disclosure may include or be implemented in conjunction with various types of artificial reality systems. An artificial reality is a form of reality that has been adjusted in some manner before being presented to a user, which may include, for example, virtual reality, augmented reality, mixed reality (mixed reality), mixed reality (hybrid reality), or some combination and/or derivative thereof. The artificial reality content may include entirely computer-generated content, or computer-generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or multiple channels (e.g., stereoscopic video that brings three-dimensional (3D) effects to the viewer). Additionally, in some embodiments, the artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, which are used, for example, to create content in the artificial reality and/or are otherwise used in the artificial reality (e.g., to perform an activity in the artificial reality).
The artificial reality system may be implemented in a variety of different form factors and configurations. Some artificial reality systems may be designed to operate without a near-eye display (NED). Other artificial reality systems may include NEDs that also provide visibility to the real world (e.g., augmented reality system 1200 in fig. 12) or NEDs that visually immerse the user in artificial reality (e.g., virtual reality system 1300 in fig. 13). While some artificial reality devices may be stand-alone systems, other artificial reality devices may communicate with and/or coordinate with external devices to provide an artificial reality experience to a user. Examples of such external devices include a handheld controller, a mobile device, a desktop computer, a device worn by a user, a device worn by one or more other users, and/or any other suitable external system.
Turning to fig. 12, the augmented reality system 1200 may include an eyeglass device 1202 having a frame 1210 configured to hold a left display device 1215 (A) and a right display device 1215 (B) in front of a user's eyes. The display device 1215 (A) and the display device 1215 (B) may act together or independently to present an image or series of images to a user. Although the augmented reality system 1200 includes two displays, embodiments of the present disclosure may be implemented in an augmented reality system having a single NED or more than two NEDs.
In some embodiments, the augmented reality system 1200 may include one or more sensors, such as sensor 1240. The sensor 1240 may generate measurement signals in response to the motion of the augmented reality system 1200 and may be located substantially on any portion of the frame 1210. The sensor 1240 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (inertial measurement unit, IMU), a depth camera assembly, structured light emitters and/or detectors, or any combination thereof. In some embodiments, the augmented reality system 1200 may or may not include a sensor 1240, or may include more than one sensor. In embodiments where the sensor 1240 includes an IMU, the IMU may generate calibration data based on measurement signals from the sensor 1240. Examples of sensors 1240 may include, but are not limited to, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors for error correction of an IMU, or some combination thereof.
In some examples, the augmented reality system 1200 may also include a microphone array having a plurality of acoustic transducers 1220 (A) through 1220 (J), collectively referred to as acoustic transducers 1220. The acoustic transducer 1220 may represent a transducer that detects changes in air pressure caused by sound waves. Each acoustic transducer 1220 may be configured to detect sound and convert the detected sound to an electronic format (e.g., analog format or digital format). The microphone array in fig. 12 may include, for example, ten acoustic transducers: 1220 (A) and 1220 (B), which may be designed to be placed within respective ears of a user; acoustic transducers 1220 (C), 1220 (D), 1220 (E), 1220 (F), 1220 (G), and 1220 (H), which may be positioned at various locations on frame 1210; and/or acoustic transducers 1220 (I) and 1220 (J), which may be positioned on a corresponding neck strap 1205.
In some embodiments, one or more of the acoustic transducers 1220 (A) through 1220 (J) may be used as output transducers (e.g., speakers). For example, acoustic transducer 1220 (A) and/or acoustic transducer 1220 (B) may be an earbud or any other suitable type of headphone or speaker.
The configuration of the individual acoustic transducers 1220 in the microphone array may vary. Although the augmented reality system 1200 is shown in fig. 12 as having ten acoustic transducers 1220, the number of acoustic transducers 1220 may be more or less than ten. In some embodiments, using a greater number of acoustic transducers 1220 may increase the amount of audio information collected and/or improve the sensitivity and accuracy of the audio information. In contrast, using a smaller number of acoustic transducers 1220 may reduce the computational power required by the associated controller 1250 to process the collected audio information. In addition, the location of each acoustic transducer 1220 in the microphone array may vary. For example, the location of the acoustic transducers 1220 may include defined locations on the user, defined coordinates on the frame 1210, a position associated with each acoustic transducer 1220, or some combination thereof.
Acoustic transducers 1220 (A) and 1220 (B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or the ear socket. Alternatively or additionally, there may be additional acoustic transducers 1220 on or around the ear in addition to the acoustic transducers 1220 in the ear canal. Positioning an acoustic transducer 1220 near the ear canal of the user may enable the microphone array to collect information about how sound reaches the ear canal. By positioning at least two of the plurality of acoustic transducers 1220 on both sides of the user's head (e.g., as binaural microphones), the augmented reality system 1200 may simulate binaural hearing and capture a 3D stereo sound field around the user's head. In some embodiments, acoustic transducers 1220 (A) and 1220 (B) may be connected to the augmented reality system 1200 via a wired connection 1230, while in other embodiments acoustic transducers 1220 (A) and 1220 (B) may be connected to the augmented reality system 1200 via a wireless connection (e.g., a bluetooth connection). In other embodiments, acoustic transducers 1220 (A) and 1220 (B) may not be used at all in conjunction with the augmented reality system 1200.
The plurality of acoustic transducers 1220 on the frame 1210 may be positioned in a variety of different ways, including along the length of the earpieces, across the bridge, above or below the display devices 1215 (A) and 1215 (B), or some combination thereof. The plurality of acoustic transducers 1220 may also be oriented such that the microphone array is capable of detecting sound over a wide range of directions around a user wearing the augmented reality system 1200. In some embodiments, an optimization process may be performed during manufacture of the augmented reality system 1200 to determine the relative positioning of the individual acoustic transducers 1220 in the microphone array.
In some examples, the augmented reality system 1200 may include or be connected to an external device (e.g., a pairing device), such as a neck strap 1205. Neck strap 1205 generally represents any type or form of mating device. Accordingly, the following discussion of neck strap 1205 may also apply to various other paired devices, such as charging boxes, smartwatches, smartphones, bracelets, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external computing devices, and the like.
As shown, the neck strap 1205 may be coupled to the eyeglass apparatus 1202 via one or more connectors. These connectors may be wired or wireless and may include electronic components and/or non-electronic components (e.g., structural components). In some cases, the eyeglass apparatus 1202 and the neck strap 1205 can operate independently without any wired or wireless connection between them. Although fig. 12 shows the various components of the eyeglass apparatus 1202 and the neck strap 1205 in example locations on the eyeglass apparatus 1202 and the neck strap 1205, the components may be located elsewhere on the eyeglass apparatus 1202 and/or the neck strap 1205 and/or distributed differently on the eyeglass apparatus and/or the neck strap. In some embodiments, the various components in the eyeglass apparatus 1202 and the neck strap 1205 can be located on one or more additional peripheral devices paired with the eyeglass apparatus 1202, the neck strap 1205, or some combination thereof.
Pairing an external device (e.g., neck strap 1205) with an augmented reality eyewear device may enable the eyewear device to achieve the form factor of a pair of eyeglasses while still providing sufficient battery power and computing power for the extended capabilities. Some or all of the battery power, computing resources, and/or additional features of the augmented reality system 1200 may be provided by, or shared between, the paired device and the eyeglass device, thereby generally reducing the weight, heat distribution, and form factor of the eyeglass device while still maintaining the desired functionality. For example, the neck strap 1205 may allow components that would otherwise be included in the eyeglass device to be included in the neck strap 1205 instead, since users may tolerate a heavier weight load on their shoulders than they would tolerate on their heads. The neck strap 1205 may also have a large surface area through which heat may be diffused and dissipated to the surrounding environment. Thus, the neck strap 1205 may allow for greater battery power and greater computing power than would otherwise be possible on a stand-alone eyeglass device. Because the weight carried in the neck strap 1205 may be less invasive to the user than the weight carried in the eyeglass device 1202, the user may tolerate wearing a lighter eyeglass device and carrying or wearing a paired device for a longer period of time than the user would tolerate wearing a heavy, stand-alone eyeglass device, thereby enabling the user to more fully integrate the artificial reality environment into his or her daily activities.
The neck strap 1205 can be communicatively coupled with the eyeglass device 1202 and/or communicatively coupled to other devices. These other devices may provide certain functionality (e.g., tracking, positioning, depth map construction, processing, storage, etc.) to the augmented reality system 1200. In the embodiment of fig. 12, the neck strap 1205 may include two acoustic transducers (e.g., 1220 (I) and 1220 (J)) that are part of the microphone array (or potentially form their own microphone sub-arrays). The neck strap 1205 can also include a controller 1225 and a power source 1235.
The acoustic transducers 1220 (I) and 1220 (J) in the neck strap 1205 may be configured to detect sound and convert the detected sound to an electronic format (analog or digital). In the embodiment of fig. 12, acoustic transducers 1220 (I) and 1220 (J) may be positioned on the neck strap 1205, increasing the distance between the neck strap's acoustic transducers 1220 (I) and 1220 (J) and the other acoustic transducers 1220 positioned on the eyeglass device 1202. In some cases, increasing the distance between the plurality of acoustic transducers 1220 in the microphone array may increase the accuracy of beamforming performed via the microphone array. For example, if acoustic transducers 1220 (C) and 1220 (D) detect sound, and the distance between acoustic transducers 1220 (C) and 1220 (D) is greater than the distance between acoustic transducers 1220 (D) and 1220 (E), for example, the determined source location of the detected sound may be more accurate than when the sound is detected by acoustic transducers 1220 (D) and 1220 (E).
The controller 1225 in the neck strap 1205 may process information generated by a plurality of sensors on the neck strap 1205 and/or the augmented reality system 1200. For example, the controller 1225 may process information from the microphone array describing sounds detected by the microphone array. For each detected sound, the controller 1225 may perform a direction-of-arrival (DOA) estimation to estimate from which direction the detected sound arrived at the microphone array. When the microphone array detects sound, the controller 1225 may populate the audio data set with this information. In embodiments where the augmented reality system 1200 includes an inertial measurement unit, the controller 1225 may perform all inertial and spatial calculations based on the IMU located on the eyeglass device 1202. The connector may communicate information between the augmented reality system 1200 and the neck strap 1205, as well as between the augmented reality system 1200 and the controller 1225. The information may be in the form of optical data, electronic data, wireless data, or any other transmissible data. Moving the processing of information generated by the augmented reality system 1200 to the neck strap 1205 may reduce the weight and heat of the eyeglass device 1202, making the eyeglass device more comfortable for the user.
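Direction-of-arrival estimation for a pair of microphones is often performed by cross-correlating the two channels to find the inter-microphone delay. The sketch below shows one common approach (unweighted cross-correlation over a short frame); the sampling rate, microphone spacing, and signals are assumptions made for illustration and are not part of the disclosure.

```python
import numpy as np

C_SOUND = 343.0      # speed of sound (m/s)
FS = 48_000          # Hz, assumed audio sampling rate
MIC_SPACING = 0.15   # m, assumed distance between two acoustic transducers

def estimate_doa(ch_a: np.ndarray, ch_b: np.ndarray) -> float:
    """Estimate direction of arrival (degrees) from the lag that maximizes
    the cross-correlation between two microphone channels."""
    corr = np.correlate(ch_a, ch_b, mode="full")
    lag = np.argmax(corr) - (len(ch_b) - 1)           # lag in samples; sign encodes direction
    delay = lag / FS                                   # seconds
    sin_theta = np.clip(C_SOUND * delay / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic test: a short noise burst reaching microphone B four samples after microphone A.
burst = np.random.randn(256)
ch_a = np.concatenate([burst, np.zeros(16)])
ch_b = np.concatenate([np.zeros(4), burst, np.zeros(12)])
print(f"estimated DOA: {estimate_doa(ch_a, ch_b):.1f} degrees")
```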
A power source 1235 in the neck strap 1205 can provide power to the eyeglass device 1202 and/or the neck strap 1205. The power source 1235 may include, but is not limited to, a lithium ion battery, a lithium-polymer battery, a primary lithium battery, an alkaline battery, or any other form of power storage. In some cases, power supply 1235 may be a wired power supply. The inclusion of the power source 1235 on the neck strap 1205 instead of on the eyeglass device 1202 may help better disperse the weight and heat generated by the power source 1235.
As mentioned, some artificial reality systems may use a virtual experience to substantially replace one or more of the user's sensory perceptions of the real world, rather than blending artificial reality with actual reality. One example of this type of system is a head mounted display system that covers a majority or all of a user's field of view, such as virtual reality system 1300 in fig. 13. The virtual reality system 1300 may include a front rigid body 1302 and a strap 1304 shaped to fit around the user's head. The virtual reality system 1300 may also include output audio transducers 1306 (A) and 1306 (B). Further, although not shown in fig. 13, the front rigid body 1302 may include one or more electronic components including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an artificial reality experience.
The artificial reality system may include various types of visual feedback mechanisms. For example, a display device in the augmented reality system 1200 and/or in the virtual reality system 1300 may include: one or more liquid crystal displays (LCDs), one or more light emitting diode (LED) displays, one or more micro-LED displays, one or more organic LED (OLED) displays, one or more digital light projection micro-displays, one or more liquid crystal on silicon (LCoS) micro-displays, and/or any other suitable type of display screen. These artificial reality systems may include a single display screen for both eyes, or one display screen may be provided for each eye, which may provide additional flexibility for zoom adjustment or for correction of the user's refractive errors. Some of these artificial reality systems may also include optical subsystems having one or more lenses (e.g., concave or convex lenses, Fresnel lenses, adjustable liquid lenses, etc.) through which a user may view the display screen. These optical subsystems may be used for a variety of purposes, including collimating light (e.g., causing an object to appear at a greater distance than its physical distance), magnifying light (e.g., causing an object to appear larger than its physical size), and/or relaying light (e.g., delivering light to an eye of a viewer). These optical subsystems may be used in a direct-view architecture (i.e., a non-pupil-forming architecture, such as a single-lens configuration that directly collimates light but produces so-called pincushion distortion) and/or a non-direct-view architecture (i.e., a pupil-forming architecture, such as a multi-lens configuration that produces so-called barrel distortion to cancel pincushion distortion).
Some of the plurality of artificial reality systems described herein may include one or more projection systems in addition to, or instead of, using a display screen. For example, the display devices in the augmented reality system 1200 and/or the virtual reality system 1300 may include micro LED projectors that project light into the display devices (e.g., using waveguides), such as transparent combiner lenses that allow ambient light to pass through. The display device may refract the projected light toward the pupil of the user, and may enable the user to view both the artificial reality content and the real world simultaneously. The display device may use any of a variety of different optical components to achieve this end, including waveguide components (e.g., holographic waveguide elements, planar waveguide elements, diffractive waveguide elements, polarizing waveguide elements, and/or reflective waveguide elements), light manipulating surfaces and elements (e.g., diffractive elements and gratings, reflective elements and gratings, and refractive elements and gratings), coupling elements, and the like. The artificial reality system may also be configured with any other suitable type or form of image projection system, such as a retinal projector for a virtual retinal display.
The artificial reality systems described herein may also include various types of computer vision components and subsystems. For example, the augmented reality system 1200 and/or the virtual reality system 1300 may include one or more optical sensors, such as a 2D camera or 3D camera, structured light emitters and detectors, time-of-flight depth sensors, single beam rangefinders or scanning laser rangefinders, 3D laser radar (LiDAR) sensors, and/or any other suitable type or form of optical sensor. The artificial reality system may process data from one or more of these sensors to identify the user's location, map the real world, provide the user with a background related to the real world surroundings, and/or perform various other functions.
The artificial reality system described herein may also include one or more input and/or output audio transducers. The output audio transducer may include a voice coil speaker, a ribbon speaker, an electrostatic speaker, a piezoelectric speaker, a bone conduction transducer, a cartilage conduction transducer, a tragus vibration transducer, and/or any other suitable type or form of audio transducer. Similarly, the input audio transducer may include a condenser microphone, a dynamic microphone, a ribbon microphone, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both the audio input and the audio output.
In some embodiments, the artificial reality systems described herein may also include a tactile (i.e., haptic) feedback system, which may be incorporated into headwear, gloves, clothing, hand-held controllers, environmental devices (e.g., chairs, floor mats, etc.), and/or any other type of device or system. The haptic feedback system may provide various types of skin feedback including vibration, thrust, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluid systems, and/or various other types of feedback mechanisms. The haptic feedback system may be implemented independently of, within, and/or in combination with other artificial reality devices.
By providing haptic sensations, auditory content, and/or visual content, the artificial reality system can create a complete virtual experience or enhance the user's real-world experience in a variety of contexts and environments. For example, an artificial reality system may assist or extend a user's perception, memory, or cognition in a particular environment. Some systems may enhance user interaction with others in the real world or may enable more immersive interaction with others in the virtual world. The artificial reality system may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government institutions, military institutions, businesses, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). Embodiments disclosed herein may implement or enhance the user's artificial reality experience in one or more of these contexts and environments, and/or in other contexts and environments.
Some augmented reality systems may use a technique known as "simultaneous localization and mapping" (SLAM) to map the user's environment and/or the device's environment. SLAM mapping and location identification techniques may involve various hardware tools and software tools that may create or update a map of an environment while maintaining tracking of a user's location within the mapped environment. SLAM may use many different types of sensors to create a map and determine the location of a user within the map.
SLAM technology may implement, for example, optical sensors to determine the location of a user. Radios (including WiFi, bluetooth, global positioning system (GPS), cellular, or other communication devices) may also be used to determine the location of a user relative to a radio transceiver or group of transceivers (e.g., a WiFi router or a group of GPS satellites). Acoustic sensors (e.g., sensor arrays, or 2D or 3D sonar sensors) may also be used to determine the location of a user within an environment. The augmented reality device and the virtual reality device (e.g., system 1200 in fig. 12 and system 1300 in fig. 13, respectively) may contain any or all of these types of sensors to perform SLAM operations, such as creating and continuously updating a map of the user's current environment. In at least some of the embodiments described herein, SLAM data generated by these sensors may be referred to as "environmental data" and may be indicative of the user's current environment. This data may be stored in a local data store or a remote data store (e.g., a cloud data store) and may be provided to the user's AR/VR device as desired.
As mentioned, the artificial reality systems 1200 and 1300 may be used with various other types of devices to provide a more engaging artificial reality experience. These devices may be haptic interfaces with multiple transducers that provide haptic feedback and/or collect haptic information about user interactions with the environment. Each of the artificial reality systems disclosed herein may include various types of haptic interfaces that detect or communicate various types of haptic information, including haptic feedback (e.g., feedback perceived by a user via nerves in the skin, which feedback may also be referred to as skin feedback) and/or kinesthetic feedback (e.g., feedback perceived by a user via receptors located in muscles, joints, and/or tendons).
The haptic feedback may be provided through an interface positioned within the user's environment (e.g., chair, table, floor, etc.) and/or an interface on an item (e.g., glove, wristband, etc.) that the user may wear or carry. As an example, fig. 14 shows a vibrotactile system 1400 in the form of a wearable glove (tactile device 1410) and wristband (tactile device 1420). Haptic devices 1410 and 1420 are shown as examples of wearable devices that include flexible wearable textile material 1430 shaped and configured to be positioned against a user's hand and wrist, respectively. The present disclosure also includes vibrotactile systems that can be shaped and configured to be positioned against other body parts (e.g., fingers, arms, head, torso, feet, or legs). By way of example and not limitation, vibrotactile systems according to various embodiments of the present disclosure may also be in the form of gloves, headbands, armbands, sleeves, headcaps, socks, shirts, or pants, among other possible forms. In some examples, the term "textile" may include any flexible wearable material, including wovens, nonwovens, leathers, fabrics, flexible polymeric materials, composites, and the like.
The one or more vibrotactile devices 1440 can be positioned at least partially within one or more corresponding pockets formed in the textile material 1430 of the vibrotactile system 1400. The vibrotactile device 1440 may be positioned in a location that provides a perception of vibration (e.g., haptic feedback) to a user of the vibrotactile system 1400. For example, as shown in fig. 14, the vibrotactile device 1440 may be positioned against one or more fingers, thumbs, or wrists of the user. In some examples, vibrotactile device 1440 may be flexible enough to conform to, or bend with, one or more corresponding body parts of a user.
A power source 1450 (e.g., a battery) for applying a voltage to the plurality of vibrotactile devices 1440 to activate them may be electrically coupled to the vibrotactile devices 1440 (e.g., via wires 1452). In some examples, each of the plurality of vibrotactile devices 1440 may be independently electrically coupled to the power source 1450 for individual activation. In some embodiments, the processor 1460 may be operably coupled to the power source 1450 and configured (e.g., programmed) to control activation of the plurality of vibrotactile devices 1440.
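Because each vibrotactile device can be activated individually, the controlling processor effectively maintains a per-actuator drive table. The snippet below sketches that idea in Python; the actuator identifiers, voltage range, and class interface are hypothetical and are not part of the disclosure.

```python
class VibrotactileController:
    """Toy model of a processor that individually drives vibrotactile devices such as devices 1440."""

    def __init__(self, actuator_ids, max_voltage=3.3):
        self.max_voltage = max_voltage
        self.levels = {aid: 0.0 for aid in actuator_ids}   # 0.0 = off, 1.0 = full drive

    def set_intensity(self, actuator_id, intensity):
        """Clamp intensity to [0, 1] and record the drive level for one actuator."""
        self.levels[actuator_id] = min(max(intensity, 0.0), 1.0)

    def drive_voltages(self):
        """Voltage each actuator should receive from the power source."""
        return {aid: level * self.max_voltage for aid, level in self.levels.items()}

# Hypothetical layout: one actuator per fingertip plus one at the wrist.
controller = VibrotactileController(["thumb", "index", "middle", "ring", "pinky", "wrist"])
controller.set_intensity("index", 0.8)   # e.g., feedback for touching a virtual object
controller.set_intensity("wrist", 0.3)
print(controller.drive_voltages())
```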
The vibrotactile system 1400 can be implemented in a variety of ways. In some examples, the vibrotactile system 1400 may be a stand-alone system having multiple integrated subsystems and multiple components to operate independently of other devices and systems. As another example, the vibrotactile system 1400 can be configured to interact with another device or system 1470. For example, in some examples, the vibrotactile system 1400 can include a communication interface 1480 that is used to receive signals and/or transmit signals to another device or system 1470. The other device or system 1470 may be a mobile device, a game console, an artificial reality (e.g., virtual reality, augmented reality, mixed reality) device, a personal computer, a tablet computer, a network device (e.g., modem, router, etc.), a handheld controller, etc. The communication interface 1480 may enable communication between the vibrotactile system 1400 and the other device or system 1470 via a wireless (e.g., wi-Fi, bluetooth, cellular, radio, etc.) link or a wired link. If present, communication interface 1480 may communicate with processor 1460, for example, to provide signals to processor 1460 to activate or deactivate one or more of the plurality of vibrotactile devices 1440.
The vibrotactile system 1400 may optionally include other subsystems and components, such as touch-sensitive pads 1490, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., on/off buttons, vibration control elements, etc.). During use, the vibrotactile device 1440 may be configured to be activated for a variety of different reasons, such as in response to: user interaction with a user interface element, signals from a motion sensor or position sensor, signals from a touch-sensitive pad 1490, signals from a pressure sensor, signals from another device or system 1470, and so forth.
Although the power supply 1450, processor 1460, and communication interface 1480 are shown in fig. 14 as being located in the haptic device 1420, the present disclosure is not so limited. For example, one or more of the power source 1450, the processor 1460, or the communication interface 1480 may be positioned within the haptic device 1410 or within another wearable textile.
Haptic wearable devices (e.g., those shown in fig. 14 and described in connection with fig. 14) may be implemented in various types of artificial reality systems and environments. Fig. 15 illustrates an example artificial reality environment 1500 that includes one head mounted virtual reality display and two haptic devices (i.e., gloves). In other embodiments, any number and/or combination of these and other components may be included in an artificial reality system. For example, in some embodiments, there may be multiple head mounted displays, each head mounted display having an associated haptic device, where each head mounted display and each haptic device communicates with the same console, portable computing device, or other computing system.
The head mounted display 1502 generally represents any type or form of virtual reality system, such as the virtual reality system 1300 in fig. 13. Haptic device 1504 generally represents any type or form of wearable device worn by a user of an artificial reality system that provides haptic feedback to the user to give the user the perception that he or she is in physical contact with a virtual object. In some embodiments, the haptic device 1504 may provide haptic feedback by applying vibrations, motions, and/or thrust to a user. For example, the haptic device 1504 may limit or enhance the movement of the user. To give a particular example, the haptic device 1504 may limit forward movement of a user's hand such that the user perceives that his or her hand has been in physical contact with the virtual wall. In this particular example, one or more actuators within the haptic device may achieve physical motion restriction by pumping fluid into an inflatable bladder of the haptic device. In some examples, the user may also use the haptic device 1504 to send an action request to the console. Examples of action requests include, but are not limited to: a request to launch and/or end an application; and/or a request to perform a particular action within the application.
Although the haptic interface may be used with a virtual reality system (as shown in fig. 15), the haptic interface may also be used with an augmented reality system (as shown in fig. 16). Fig. 16 is a perspective view of a user 1610 interacting with an augmented reality system 1600. In this example, the user 1610 can wear a pair of augmented reality glasses 1620, which can have one or more displays 1622 and pair with haptic devices 1630. In this example, the haptic device 1630 may be a wristband that includes a plurality of band elements 1632 and a stretching mechanism 1634 that connects the band elements 1632 to one another.
One or more of the plurality of band elements 1632 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of the plurality of band elements 1632 may be configured to provide one or more of a plurality of types of skin feedback, including vibration, thrust, traction, texture, and/or temperature. To provide such feedback, the plurality of band elements 1632 may include one or more of a plurality of types of actuators. In one example, each of the plurality of band elements 1632 may include a vibrotactile device (e.g., a vibrotactile actuator) configured to vibrate jointly or independently to provide one or more of a plurality of types of haptic sensations to a user. Alternatively, only a single band element or a subset of the plurality of band elements may include a vibrotactile device.
Haptic devices 1410, 1420, 1504, and 1630 may include any suitable number and/or type of haptic transducers, sensors, and/or feedback mechanisms. For example, haptic devices 1410, 1420, 1504, and 1630 may include one or more mechanical transducers, one or more piezoelectric transducers, and/or one or more fluid transducers. Haptic devices 1410, 1420, 1504, and 1630 may also include various combinations of different types and forms of transducers working together or independently to enhance the user's artificial reality experience.
Fig. 17A illustrates an exemplary human-machine interface (also referred to herein as an EMG control interface) configured to be worn as a wearable system 1700 around a user's lower arm or wrist. In this example, the wearable system 1700 may include sixteen neuromuscular sensors 1710 (e.g., EMG sensors) arranged circumferentially around an elastic band 1720 having an inner surface 1730 configured to contact the skin of the user. However, any suitable number of neuromuscular sensors may be used. The number and arrangement of neuromuscular sensors may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband may be used to generate control information for controlling an augmented reality system, controlling a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or any other suitable control task. As shown, the sensors may be coupled together using flexible electronics incorporated into the wearable device. Fig. 17B illustrates a cross-sectional view through one of the plurality of sensors of the wearable device illustrated in fig. 17A. In some embodiments, hardware signal processing circuitry may optionally be used to process the output of one or more of the plurality of sensing components (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some of the signal processing of the outputs of the sensing components may be performed in software. Thus, signal processing of the signals sampled by the sensors may be performed in hardware, in software, or by any suitable combination of hardware and software, as aspects of the techniques described herein are not limited in this respect. A non-limiting example of a signal processing chain for processing recorded data from the sensors 1710 is discussed in more detail below with reference to fig. 18A and 18B.
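To make the software signal-processing option concrete, the following minimal Python sketch performs the amplification, band-pass filtering, and rectification mentioned above using SciPy; the sampling rate, passband, and gain are assumed values chosen for illustration rather than values taken from this disclosure.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def condition_emg(raw, fs=1000.0, gain=100.0):
        """Amplify, band-pass filter (20-450 Hz), and rectify one raw EMG channel."""
        amplified = gain * np.asarray(raw, dtype=float)
        b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, amplified)            # zero-phase band-pass filtering
        return np.abs(filtered)                         # full-wave rectification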
Fig. 18A and 18B show exemplary schematic diagrams of various internal components of a wearable system with EMG sensors. As shown, the wearable system may include a wearable portion 1810 (fig. 18A) and an adapter portion 1820 (fig. 18B) that communicates with the wearable portion 1810 (e.g., via Bluetooth or another suitable wireless communication technology). As shown in fig. 18A, the wearable portion 1810 may include a plurality of skin-contact electrodes 1811, examples of which are described in connection with fig. 17A and 17B. The outputs of the plurality of skin-contact electrodes 1811 may be provided to an analog front end 1830, which may be configured to perform analog processing (e.g., amplification, noise reduction, filtering, etc.) on the recorded signals. The processed analog signals may then be provided to an analog-to-digital converter 1832, which may convert the analog signals to digital signals that can be processed by one or more computer processors. An example of a computer processor that may be used in accordance with some embodiments is a microcontroller (MCU) 1834, as shown in fig. 18A. As shown, the MCU 1834 may also receive inputs from other sensors (e.g., an IMU sensor 1840) as well as from a power and battery module 1842. The output of the processing performed by the MCU 1834 may be provided to the antenna 1850 for transmission to the adapter portion 1820 shown in fig. 18B.
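A highly simplified rendering of this data path might look like the loop below; the objects and method names (front_end.read, adc.convert, imu.read, antenna.transmit) are assumptions made for illustration and are not part of the disclosed hardware.

    # Hypothetical sketch of the wearable-portion data path in fig. 18A:
    # electrodes -> analog front end -> ADC -> MCU packetization (with IMU data) -> antenna.
    def mcu_loop(front_end, adc, imu, antenna, num_frames):
        for _ in range(num_frames):
            analog_frame = front_end.read()      # amplified, filtered electrode signals
            digital_frame = adc.convert(analog_frame)
            packet = {"emg": digital_frame, "imu": imu.read()}
            antenna.transmit(packet)             # sent wirelessly to the adapter portion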
The adapter portion 1820 may include an antenna 1852, which may be configured to communicate with the antenna 1850 included as part of the wearable portion 1810. Communication between the antenna 1850 and the antenna 1852 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radio frequency signaling and Bluetooth. As shown, signals received by the antenna 1852 of the adapter portion 1820 may be provided to a host computer for further processing, display, and/or to enable control of one or more particular physical or virtual objects.
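On the adapter side, the corresponding receive-and-forward step could be sketched as follows; radio.receive and host_link.send are assumed names, and this disclosure does not prescribe any particular programming interface.

    # Hypothetical sketch: the adapter portion relays packets from the wearable to a host.
    def relay_to_host(radio, host_link, num_packets):
        for _ in range(num_packets):
            packet = radio.receive()     # packet received from the wearable's antenna 1850
            host_link.send(packet)       # forwarded to the host for display and/or control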
Although the examples provided with reference to fig. 17A and 17B, and fig. 18A and 18B, are discussed in the context of an interface with EMG sensors, the techniques described herein for reducing electromagnetic interference may also be implemented in wearable interfaces with other types of sensors, including, but not limited to, mechanomyography (MMG) sensors, sonomyography (SMG) sensors, and electrical impedance tomography (EIT) sensors. The techniques described herein for reducing electromagnetic interference may also be implemented in wearable interfaces that communicate with a host computer through wires and cables (e.g., Universal Serial Bus (USB) cables, fiber optic cables, etc.).
The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and may be varied as desired. For example, although steps illustrated and/or described herein may be shown or discussed in a particular order, the steps need not be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
The previous description has been provided to enable other persons skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. The exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the disclosure. The embodiments disclosed herein are to be considered in all respects as illustrative and not restrictive. In determining the scope of the present disclosure, reference should be made to any claims appended hereto and their equivalents.
The terms "connected to" and "coupled to" (and derivatives thereof) as used in the specification and/or claims, are to be interpreted as allowing both direct connection and indirect (i.e., indirect connection via other elements or components) unless otherwise stated. In addition, the terms "a" or "an" as used in the specification and/or claims will be construed to mean at least one of "… …. Finally, for ease of use, the terms "comprising" and "having" (and their derivatives) as used in the specification and/or claims are interchangeable with, and have the same meaning as, the word "comprising".

Claims (15)

1. An artificial reality system, comprising:
A wearable device sized to be worn on a body part of a user, wherein the wearable device comprises:
A set of electrodes that detect one or more neuromuscular signals via the body part of the user; and
A transmitter that transmits an electromagnetic signal;
A head mounted display communicatively coupled to the wearable device, wherein the head mounted display comprises: a set of receivers that receive the electromagnetic signals transmitted by the transmitter included on the wearable device; and
One or more processing devices, the one or more processing devices:
determining that the user made a particular gesture based at least in part on the one or more neuromuscular signals detected via the body part of the user; and
determining a position of the body part of the user at the time the user made the particular gesture based at least in part on the electromagnetic signals received by the set of receivers included on the head mounted display.
2. The artificial reality system of claim 1, wherein at least one of the one or more processing devices is incorporated into the wearable device and/or the head mounted display.
3. The artificial reality system of claim 1, wherein:
the wearable device includes a first Bluetooth radio;
the head mounted display includes a second Bluetooth radio communicatively coupled to the first Bluetooth radio, the first and second Bluetooth radios being configured to exchange configuration data between the wearable device and the head mounted display; and optionally,
the first Bluetooth radio and the second Bluetooth radio are further configured to exchange data about the one or more neuromuscular signals between the wearable device and the head mounted display.
4. The artificial reality system of claim 3, wherein at least one of the one or more processing devices generates an input command based at least in part on data regarding the one or more neuromuscular signals, the input command causing the head mounted display to modify at least one virtual component to correspond to the particular gesture.
5. The artificial reality system of claim 1, wherein the transmitter incorporates a timestamp into the electromagnetic signal before sending the electromagnetic signal to the set of receivers.
6. The artificial reality system of claim 5, wherein at least one of the one or more processing devices:
Determining a first time of arrival of the electromagnetic signal received by a first receiver included in the set of receivers;
Determining a second time of arrival of the electromagnetic signal received by a second receiver included in the set of receivers;
Calculating an angle of arrival of the electromagnetic signal relative to the set of receivers based at least in part on the first and second times of arrival of the electromagnetic signal and the timestamp; optionally,
Calculating at least one dimension of a position of a virtual component within a field of view of the head mounted display based at least in part on the angle of arrival; and
presenting the virtual component at the location within the field of view of the head mounted display based at least in part on the at least one dimension.
7. The artificial reality system of claim 6, wherein the at least one dimension of the calculated position of the virtual component comprises at least one of:
An azimuth of the virtual component to be presented within the field of view of the head mounted display;
An elevation angle of the virtual component to be presented within the field of view of the head mounted display; or (b)
The depth of the virtual component to be presented within the field of view of the head mounted display.
8. The artificial reality system of claim 6, wherein at least one of the one or more processing devices:
determining a first phase of the electromagnetic signal received by a first receiver included in the set of receivers;
determining a second phase of the electromagnetic signal received by a second receiver included in the set of receivers; and
calculating the angle of arrival of the electromagnetic signal relative to the set of receivers based at least in part on a difference between the first phase and the second phase of the electromagnetic signal and the timestamp.
9. The artificial reality system of claim 6, wherein at least one of the one or more processing devices:
Calculating a two-dimensional position of the virtual component within the field of view of the head mounted display based at least in part on the angle of arrival;
presenting the virtual component at the two-dimensional location within the field of view of the head mounted display; and/or
Calculating a three-dimensional position of the virtual component within the field of view of the head mounted display based at least in part on the angle of arrival; and
presenting the virtual component at the three-dimensional location within the field of view of the head mounted display.
10. The artificial reality system of claim 6, wherein:
the virtual part presented at the location includes a pointer presented at the location;
at least one of the one or more processing devices superimposes the pointer on a screen of the head mounted display; and optionally,
At least one of the one or more processing devices generates an input command based at least in part on data regarding the one or more neuromuscular signals, the input command causing the head mounted display to modify at least one additional virtual component presented near the pointer within the field of view of the head mounted display in response to the particular gesture.
11. The artificial reality system of claim 6, wherein at least one of the one or more processing devices:
Determining, based at least in part on the angle of arrival, that the wearable device is no longer visible within the field of view of the head-mounted display; and
In response to determining that the wearable device is no longer visible within the field of view of the head-mounted display, the virtual component is removed from the field of view of the head-mounted display.
12. The artificial reality system of claim 1, further comprising an additional wearable device sized to be worn on an additional body part of the user, wherein the additional wearable device comprises:
an additional set of electrodes that detect one or more additional neuromuscular signals via the additional body part of the user; and
At least one additional transmitter that transmits an additional electromagnetic signal;
Wherein:
The head mounted display is also communicatively coupled to the additional wearable device, wherein the set of receivers receives the additional electromagnetic signals sent by the additional transmitter included on the additional wearable device; and
At least one of the one or more processing devices:
Determining that the user made an additional gesture based at least in part on the one or more additional neuromuscular signals detected via the additional body part of the user; and
determining a location of the additional body part of the user at the time the user made the additional gesture based at least in part on the additional electromagnetic signals received by the set of receivers included on the head mounted display.
13. The artificial reality system of claim 12, wherein at least one of the one or more processing devices:
Calculating at least one dimension of a position of a virtual component within the field of view of the head mounted display based at least in part on the electromagnetic signals;
calculating at least one additional dimension of an additional position of an additional virtual component within the field of view of the head mounted display based at least in part on the additional electromagnetic signals; and
presenting the virtual component at the location and the additional virtual component at the additional location within the field of view of the head mounted display based at least in part on the at least one dimension and the at least one additional dimension.
14. A head mounted display, comprising:
A set of receivers configured to receive electromagnetic signals transmitted by a transmitter included on a wearable device sized to be worn on a body part of a user;
A radio configured to receive data regarding one or more neuromuscular signals detected by the wearable device via the body part of the user; and
At least one processing device communicatively coupled to the set of receivers and the radio, wherein the at least one processing device:
Determining that the user made a particular gesture based at least in part on data regarding the one or more neuromuscular signals detected via the body part of the user; and
determining a position of the body part of the user at the time the user made the particular gesture based at least in part on the electromagnetic signals received by the set of receivers included on the head mounted display.
15. A method, comprising:
Detecting, by a wearable device worn on a body part of a user, one or more neuromuscular signals at the body part of the user;
transmitting an electromagnetic signal by a transmitter included on the wearable device;
Receiving, by a set of receivers included on a head mounted display worn by the user, the electromagnetic signals sent by the transmitter included on the wearable device;
Determining, by one or more processing devices, that a particular gesture was made by the user based at least in part on the one or more neuromuscular signals; and
determining, by the one or more processing devices, a location of the body part of the user at the time the user made the particular gesture based at least in part on the electromagnetic signals.
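As a worked illustration of the angle-of-arrival determination recited in claims 6 and 8 above, the Python sketch below applies the standard two-receiver relationships for the time-difference and phase-difference forms; the receiver spacing and carrier frequency used in the example are assumed values, not values taken from this disclosure.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def aoa_from_time_difference(delta_t, spacing):
        """Angle of arrival (radians) from the arrival-time difference at two receivers."""
        return math.asin(max(-1.0, min(1.0, C * delta_t / spacing)))

    def aoa_from_phase_difference(delta_phase, wavelength, spacing):
        """Angle of arrival (radians) from the phase difference at two receivers."""
        return math.asin(max(-1.0, min(1.0, delta_phase * wavelength / (2 * math.pi * spacing))))

    # Example with assumed values: a 2.4 GHz signal and receivers spaced 0.12 m apart.
    wavelength = C / 2.4e9
    angle = aoa_from_phase_difference(math.pi / 3, wavelength, 0.12)   # about 10 degrees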
CN202280067672.7A 2021-10-08 2022-09-26 Apparatus, system, and method for detecting user input via hand gestures and arm motions Pending CN118056176A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/253,667 2021-10-08
US17/705,899 US11662815B2 (en) 2021-10-08 2022-03-28 Apparatus, system, and method for detecting user input via hand gestures and arm movements
US17/705,899 2022-03-28
PCT/US2022/044648 WO2023059458A1 (en) 2021-10-08 2022-09-26 Apparatus, system, and method for detecting user input via hand gestures and arm movements

Publications (1)

Publication Number Publication Date
CN118056176A true CN118056176A (en) 2024-05-17

Family

ID=91047033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280067672.7A Pending CN118056176A (en) 2021-10-08 2022-09-26 Apparatus, system, and method for detecting user input via hand gestures and arm motions

Country Status (1)

Country Link
CN (1) CN118056176A (en)

Similar Documents

Publication Publication Date Title
US11042221B2 (en) Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures
US11474227B1 (en) Devices, systems, and methods for radar-based artificial reality tracking
US11467670B2 (en) Methods, devices, and systems for displaying a user interface on a user and detecting touch gestures
US11086392B1 (en) Devices, systems, and methods for virtual representation of user interface devices
US20230259207A1 (en) Apparatus, system, and method for detecting user input via hand gestures and arm movements
US11132058B1 (en) Spatially offset haptic feedback
US11531389B1 (en) Systems and methods for electric discharge-based sensing via wearables donned by users of artificial reality systems
US11941174B1 (en) Finger pinch detection
US11366527B1 (en) Systems and methods for sensing gestures via vibration-sensitive wearables donned by users of artificial reality systems
US11175731B1 (en) Apparatus, system, and method for directional acoustic sensing via wearables donned by users of artificial reality systems
US11150737B2 (en) Apparatus, system, and method for wrist tracking and gesture detection via time of flight sensors
US11579704B2 (en) Systems and methods for adaptive input thresholding
US11550397B1 (en) Systems and methods for simulating a sensation of expending effort in a virtual environment
TW202315217A (en) Antenna system for wearable devices
US11662815B2 (en) Apparatus, system, and method for detecting user input via hand gestures and arm movements
CN118056176A (en) Apparatus, system, and method for detecting user input via hand gestures and arm motions
US11836828B2 (en) Controlling interactions with virtual objects
US11571159B1 (en) Floating biopotential samplings
US20240130681A1 (en) Electrode placement calibration
US11722137B1 (en) Variable-distance proximity detector
US11961494B1 (en) Electromagnetic interference reduction in extended reality environments
US20220015663A1 (en) Right leg drive through conductive chassis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination