WO2020061451A1 - Neuromuscular text entry, writing and drawing in augmented reality systems - Google Patents

Neuromuscular text entry, writing and drawing in augmented reality systems

Info

Publication number
WO2020061451A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
user
mode
identifying
computerized system
Application number
PCT/US2019/052151
Other languages
French (fr)
Inventor
Adam Berenzweig
Daniel WETMORE
Original Assignee
Ctrl-Labs Corporation
Application filed by Ctrl-Labs Corporation
Priority to EP19861903.3A (EP3853698A4)
Priority to CN201980062920.7A (CN112789577B)
Publication of WO2020061451A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/002 Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F 3/005 Input arrangements through a video camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014 Hand-worn input/output arrangements, e.g. data gloves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0179 Display position adjusting means not related to the information to be displayed
    • G02B 2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • Augmented reality (AR) systems provide users with an interactive experience of a real-world environment supplemented with virtual information by overlaying computer generated perceptual or virtual information on aspects of the real-world environment.
  • one or more input devices such as a controller, a keyboard, a mouse, a camera, a microphone, and the like, may be used to control operations of the AR system.
  • a user may manipulate a number of buttons on an input device, such as a controller or a keyboard, to effectuate control of the AR system.
  • a user may use voice commands to control operations of the AR system.
  • current techniques for controlling operations of an AR system have a number of drawbacks, so improved techniques are needed.
  • Some embodiments are directed to coupling a system that senses neuromuscular signals with a system that performs extended reality (XR) functions. XR functions may include augmented reality (AR) functions, virtual reality (VR) functions, mixed reality (MR) functions, and the like.
  • a system that senses neuromuscular signals may be used in conjunction with an XR system to provide an improved XR experience for a user.
  • Neuromuscular signals may be used directly as an input to an XR system (e.g., by using motor unit action potentials as an input signal) and/or the neuromuscular signals may be processed (including by using an inference model as described herein) for the purpose of determining a movement, force, and/or position of a part of the user’s body (e.g., the fingers, hand, and wrist). For instance, information gained within both systems may be used to improve the overall XR experience.
  • a camera in an XR system may capture data that is used to improve the accuracy of a model of the musculoskeletal representation and/or used to calibrate the model.
  • muscle activation data may be visualized and displayed to a user in an XR environment.
  • display information in the XR environment may be used as feedback to the user to permit the user to more accurately control their musculoskeletal input to the system.
  • control features may be provided that permit neuromuscular signals to control XR system elements including operation of the XR system itself.
  • various forms of input (e.g., text, writing, and/or drawing) identified based on the neuromuscular signals may be provided as input to the XR system, as well as inputs to the XR system based on specific gestures.
  • a computerized system for providing input to an extended reality system may comprise one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; and at least one computer processor.
  • the at least one computer processor may be programmed to: determine that the computerized system is in a mode configured to provide an input to the extended reality system; identify the input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals; and provide the identified input to the extended reality system.
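  • As a purely illustrative sketch (not part of the disclosure) of the determine-mode / identify-input / provide-input flow described in the preceding item, the following Python outline uses hypothetical names such as InputMode, classify_mode, identify_input, and send_to_xr:
```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Sequence


class InputMode(Enum):
    TYPING = auto()
    WRITING = auto()
    DRAWING = auto()
    ONE_HANDED = auto()


@dataclass
class IdentifiedInput:
    mode: InputMode
    payload: str  # e.g., recognized text or serialized stroke data


def process_frame(signals: Sequence[float],
                  current_mode: Optional[InputMode],
                  classify_mode,
                  identify_input,
                  send_to_xr) -> None:
    """One pass of a determine-mode / identify-input / provide-input loop."""
    # 1. Determine that the system is in a mode configured to provide input.
    mode = current_mode or classify_mode(signals)
    if mode is None:
        return  # not currently providing input to the XR system
    # 2. Identify the input from the neuromuscular signals (and/or derived info).
    identified = identify_input(signals, mode)
    # 3. Provide the identified input to the extended reality system.
    if identified is not None:
        send_to_xr(IdentifiedInput(mode=mode, payload=identified))
```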
  • the mode is determined based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • the mode is determined based on a gesture detected from the user, wherein the gesture is identified based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • the mode is determined by receiving a selection of the mode from a user interface displayed in an extended reality environment provided by the extended reality system, wherein the user interface is configured to identify a plurality of modes from which the user may select.
  • the mode is determined based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
  • the mode is determined in response to receiving a signal from the extended reality system.
  • the signal is generated by the extended reality system in response to detection of an event for which input within an extended reality environment provided by the extended reality system is desired.
  • the mode comprises a typing mode and identifying the input comprises identifying one or more tapping or typing actions based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • the computerized system further comprises a physical keyboard or a virtual keyboard.
  • identifying one or more tapping or typing actions comprises detecting the user tapping or typing on the physical keyboard or the virtual keyboard.
  • the mode comprises a writing mode and identifying the input comprises identifying one or more writing actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • the computerized system further comprises a physical stylus, physical writing implement, virtual stylus and/or virtual writing implement.
  • identifying the one or more writing actions comprises detecting the user using the physical stylus, physical writing implement, virtual stylus and/or virtual writing implement.
  • the one or more writing actions are identified as detected in mid-air.
  • the mode comprises a drawing mode and identifying the input comprises identifying one or more drawing actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • the one or more drawing actions detected from the user comprise a plurality of line segments and/or curves.
  • identifying the input for the drawing mode comprises identifying input as drawings and/or text.
  • the at least one computer processor is further programmed to combine drawing and text inputs such that the text overlays or annotates the drawing.
  • the mode comprises a one-handed mode and identifying the input comprises identifying one or more one-handed actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • the extended reality system is configured to display an indication of the identified input to the user.
  • the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
  • the computerized system further comprises a visual display or interface configured to present one or more suggested or predicted words or phrases for text input.
  • the computerized system further comprises a visual display or interface configured to present one or more virtual ink marks associated with one or more strokes as detected from the user.
  • the computerized system further comprises a visual display or interface configured to present a drawing as identified based on one or more drawing actions detected from the user.
  • the computerized system further comprises at least one inertial measurement unit (IMU) sensor, wherein identifying the input comprises identifying the input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
  • the computerized system further comprises at least one camera, wherein identifying the input comprises identifying the input based, at least in part, on at least one output signal associated with the at least one camera.
  • the mode comprises a first mode, and the at least one computer processor is further programmed to: identify a second input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, wherein the second input is further identified based, at least in part, on a second mode; and provide the identified second input to the extended reality system.
  • the one or more wearable devices comprises a first wearable device configured to detect neuromuscular signals from a first arm of the user and a second wearable device configured to detect neuromuscular signals from a second arm of the user.
  • the extended reality system is an augmented reality system.
  • a method performed by a computerized system for providing input to an extended reality system comprises detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, a plurality of neuromuscular signals from a user; determining that the computerized system is in a mode configured to provide input to the extended reality system; identifying the input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals; and providing the identified input to the extended reality system.
  • a system for providing one or more inputs to an extended reality (XR) system comprises one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; one or more auxiliary sensors configured to detect information regarding a physiological state and/or behavior from the user; and at least one computer processor.
  • the at least one computer processor is programmed to: determine that the system is in a mode configured to provide one or more inputs to the XR system, associate the neuromuscular signals with the information detected from the one or more auxiliary sensors, process the neuromuscular signals and/or the information detected from the one or more auxiliary sensors using one or more inference models; identify the one or more inputs based on the processed neuromuscular signals and/or the processed information detected from the one or more auxiliary sensors; and provide the identified one or more inputs to the XR system.
  • a kit for use with an extended reality (XR) system comprises one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; one or more auxiliary sensors configured to detect information regarding a physiological state and/or behavior from the user; and at least one storage medium storing instructions.
  • the instructions, when executed by at least one computer processor, cause the at least one computer processor to: process the neuromuscular signals from the neuromuscular sensors, process the information detected from the one or more auxiliary sensors, identify one or more user inputs based on the processed neuromuscular signals and/or the processed information detected from the one or more auxiliary sensors, and communicate the identified one or more user inputs to the XR system.
  • a computerized system for providing input to an augmented reality system based, at least in part, on neuromuscular signals.
  • the system comprises a plurality of neuromuscular sensors configured to record a plurality of neuromuscular signals from a user, wherein the plurality of neuromuscular sensors are arranged on one or more wearable devices; and at least one computer processor.
  • the at least one computer processor is programmed to: determine that the computerized system is in a mode configured to provide input including text to the augmented reality system; identify, based at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and provide the identified input to the augmented reality system.
  • the mode comprises a typing mode and identifying the input comprises identifying one or more tapping or typing actions based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying text input for the typing mode based on the one or more tapping or typing actions.
  • identifying the one or more tapping or typing actions comprises identifying the one or more tapping or typing actions on a surface of a physical keyboard.
  • identifying the one or more tapping or typing actions comprises identifying the one or more tapping or typing actions on a surface that has a virtual keyboard projected thereon by the augmented reality system.
  • the mode comprises a writing mode and identifying the input comprises identifying one or more writing actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying text input for the writing mode based on the one or more writing actions.
  • identifying the one or more writing actions performed by the user comprises identifying the one or more writing actions performed on a surface with a physical stylus, a physical writing implement, or fingertip or fingertips of the user.
  • identifying the one or more writing actions performed by the user comprises identifying the one or more writing actions performed on a surface with a virtual stylus or virtual writing implement.
  • identifying the one or more writing actions performed by the user comprises identifying the one or more writing actions performed in mid-air.
  • the mode comprises a drawing mode and identifying the input comprises identifying one or more drawing actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying the input for the drawing mode based on the one or more drawing actions.
  • the input for the drawing mode comprises a plurality of line segments and/or curves.
  • the input for the drawing mode comprises input determined based on a sequence of pixel positions controlled by the one or more drawing actions performed by the user.
  • identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions performed by the user; and identifying the text based on the one or more drawing actions performed by the user.
  • the at least one computer processor is further programmed to combine the drawing and the text such that the text overlays or annotates the drawing.
  • identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions performed by the user; and identifying the text from the drawing.
  • the mode comprises a one-handed mode and identifying the input comprises identifying one or more one-handed actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying the input for the one-handed mode based on the one or more one-handed actions.
  • determining that the computerized system is in the mode configured to provide the input comprises receiving a user selection of the mode.
  • receiving the user selection of the mode comprises receiving the user selection from a user interface displayed in an augmented reality environment provided by the augmented reality system, wherein the user interface is configured to identify a plurality of modes from which the user may select.
  • determining that the computerized system is in the mode configured to provide the input comprises determining the mode from the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • determining the mode comprises identifying at least one gesture performed by the user based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
  • determining the mode comprises determining the mode based on one or more typing, writing, drawing actions, or one-handed actions performed by the user.
  • determining that the computerized system is in the mode configured to provide the input comprises determining the mode in response to receiving a signal from the augmented reality system.
  • the signal is generated at the augmented reality system in response to detection of an event for which input within an augmented reality environment provided by the augmented reality system is desired.
  • the augmented reality system is configured to display an indication of the identified input to the user.
  • the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions performed by the user.
  • the indication of the identified input comprises a listing of one or more suggested or predicted words or phrases for the text input.
  • the indication of the identified input comprises one or more virtual ink marks associated with one or more strokes made by a writing implement.
  • the indication of the identified input comprises a drawing identified based on one or more drawing actions performed by the user.
  • the indication is displayed via a user interface presented within an augmented reality environment provided by the augmented reality system.
  • the indication is rendered onto a surface that the user is interacting with by the augmented reality system.
  • the computerized system further comprises at least one inertial measurement unit (IMU) sensor, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
  • the computerized system further comprises at least one camera, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one camera.
  • the mode comprises a first mode
  • the at least one computer processor is further programmed to: identify, based at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, a second input, wherein the second input is further identified based, at least in part, on a second mode; and provide the identified second input to the augmented reality system.
  • the identified input provided to the augmented reality system comprises input identified from a plurality of sources, wherein the plurality of sources include the plurality of neuromuscular signals and at least one source other than the plurality of neuromuscular signals.
  • a method performed by a computerized system for providing input to an augmented reality system based, at least in part, on neuromuscular signals comprises recording, using a plurality of neuromuscular sensors arranged on one or more wearable devices, a plurality of neuromuscular signals from a user; determining that the computerized system is in a mode configured to provide input including text to the augmented reality system; identifying based, at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.
  • a computerized system for providing input to an augmented reality system.
  • the computerized system comprises one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; and at least one computer processor.
  • the at least one computer processor is programmed to: determine that the computerized system is in a mode configured to provide input including text to the augmented reality system, wherein the mode is determined based on the neuromuscular signals and/or information based on the neuromuscular signals, and the mode is selected from the group consisting of a typing mode, a writing mode, a drawing mode, and a one-handed mode; identify, based at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and provide the identified input to the augmented reality system.
  • the mode is determined based on a gesture as detected from the user based on the neuromuscular signals and/or information based on the neuromuscular signals.
  • the mode comprises a typing mode and identifying the input comprises identifying one or more tapping or typing actions based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying text input for the typing mode based on the one or more tapping or typing actions.
  • identifying the one or more tapping or typing actions comprises identifying the one or more tapping or typing actions on a surface of a physical keyboard or a surface that has a representation of a keyboard projected thereon.
  • the mode comprises a writing mode and identifying the input comprises identifying one or more writing actions detected from the user based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying text input for the writing mode based on the one or more writing actions.
  • identifying the one or more writing actions detected from the user comprises identifying the one or more writing actions performed on a surface with a physical stylus, physical writing implement, virtual stylus, or virtual writing implement.
  • identifying the one or more writing actions detected from the user comprises identifying the one or more writing actions as detected in mid-air.
  • the mode comprises a drawing mode and identifying the input comprises identifying one or more drawing actions detected from the user based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying the input for the drawing mode based on the one or more drawing actions.
  • the one or more drawing actions comprises a plurality of line segments and/or curves.
  • identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions detected from the user; and identifying the text based on the one or more drawing actions detected from the user.
  • the at least one computer processor is further programmed to combine the drawing and the text such that the text overlays or annotates the drawing.
  • identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions detected from the user; and identifying the text from the drawing.
  • the mode comprises a one-handed mode and identifying the input comprises identifying one or more one-handed actions detected from the user based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying the input for the one-handed mode based on the one or more one-handed actions.
  • the augmented reality system is configured to display an indication of the identified input to the user.
  • the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
  • the indication of the identified input comprises one or more suggested or predicted words or phrases for the text input.
  • the indication of the identified input comprises one or more virtual ink marks associated with one or more strokes detected from the user.
  • the indication of the identified input comprises a drawing identified based on one or more drawing actions detected from the user.
  • the indication is displayed via a user interface presented within an augmented reality environment provided by the augmented reality system.
  • the indication is rendered onto a surface that the user is interacting with by the augmented reality system.
  • the computerized system further comprises at least one inertial measurement unit (IMU) sensor, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
  • the computerized system further comprises at least one camera, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one camera.
  • the mode comprises a first mode
  • the at least one computer processor is further programmed to: identify, based at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, a second input, wherein the second input is further identified based, at least in part, on a second mode; determine that the computerized system is to be switched from the first mode to the second mode; switch the computerized system from the first mode to the second mode in response to determining that the computerized system is to be switched from the first mode to the second mode; and provide the identified second input to the augmented reality system.
  • the identified input provided to the augmented reality system comprises input identified from a plurality of sources, wherein the plurality of sources include the neuromuscular signals and at least one source other than the neuromuscular signals.
  • the at least one source other than the neuromuscular signals comprises at least one physical input device, and the identified input provided to the augmented reality system comprises a combination of the input identified from the plurality of sources.
  • the one or more wearable devices comprises a first wearable device configured to detect neuromuscular signals from a first arm of the user and a second wearable device configured to detect neuromuscular signals from a second arm of the user.
  • a method performed by a computerized system for providing input to an augmented reality system comprises detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that the computerized system is in a mode configured to provide input including text to the augmented reality system, wherein the mode is determined based on the neuromuscular signals and/or information based on the neuromuscular signals, and the mode is selected from the group consisting of a typing mode, a writing mode, a drawing mode, and a one-handed mode; identifying based, at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.
  • the mode is determined based on a gesture detected from the user, wherein the gesture is detected based on the neuromuscular signals and/or the information based on the neuromuscular signals.
  • a non-transitory computer-readable medium encoded with instructions that, when executed by at least one computer processor, perform a method of: detecting, using a plurality of neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that a computerized system is in a mode configured to provide input including text to an augmented reality system, wherein the mode is determined based on the neuromuscular signals and/or information based on the neuromuscular signals, and the mode is selected from the group consisting of a typing mode, a writing mode, a drawing mode, and a one-handed mode; identifying based, at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.
  • FIG. 1 is a schematic diagram of a computer-based system for processing neuromuscular sensor data in accordance with some embodiments of the technology described herein;
  • FIG. 2 is a schematic diagram of a distributed computer-based system that integrates an augmented reality (AR) system with a neuromuscular activity system in accordance with some embodiments of the technology described herein;
  • FIG. 3 is a flowchart of a process for providing input to an AR system in accordance with some embodiments of the technology described herein;
  • FIG. 4 is a flowchart of a process for providing input to an AR system based on one or more neuromuscular signals in accordance with some embodiments of the technology described herein;
  • FIG. 5 illustrates a wristband having EMG sensors arranged circumferentially thereon, in accordance with some embodiments of the technology described herein;
  • FIG. 6 illustrates a user wearing the wristband of FIG. 5 while typing on a keyboard in accordance with some embodiments of the technology described herein.
  • FIG. 7A illustrates a wearable system with sixteen EMG sensors arranged circumferentially around an elastic band configured to be worn around a user’s lower arm or wrist, in accordance with some embodiments of the technology described herein.
  • FIG. 7B is a cross-sectional view through one of the sixteen EMG sensors illustrated in FIG. 7A.
  • FIGS. 8A and 8B schematically illustrate components of a computer-based system on which some embodiments are implemented.
  • FIG. 8A illustrates a wearable portion of the computer-based system
  • FIG. 8B illustrates a dongle portion connected to a computer, wherein the dongle portion is configured to communicate with the wearable portion.
  • FIGS. 9A-9C depict exemplary scenarios in which user input may be provided to an XR system in accordance with some embodiments of the technology described herein.
  • Various embodiments described herein offer certain advantages, including, but not limited to: avoiding the use of an undesirable or burdensome physical keyboard, joystick, or other controller; overcoming issues associated with time-consuming and high-latency processing of low-quality images of the user captured by a camera; allowing for the capture and detection of subtle, small, or fast movements and/or variations in force exerted by a user (e.g., varying amounts of force exerted through a stylus, writing instrument, or finger being pressed against a surface) that can be important for resolving text input and other control signals; collecting and analyzing various physiological and/or behavioral information detected from the user that enhances the identification process and is not readily obtained by conventional input devices; and allowing input to be identified in instances where the user’s hand is obscured or outside the camera’s field of view.
  • signals recorded or detected from wearable sensors are used to identify and provide input to an XR system.
  • Various forms of input, for example, discrete control signals, continuous (e.g., 2D) control signals, text entry via a keyboard or other mode of text entry, writing, and/or drawing, may be identified from the recorded or detected signals and/or information based on the recorded or detected signals to enable improved techniques for providing input (such as text) to the XR system.
  • various forms of input may be identified based on a mode of the system that senses signals via the wearable sensors and provides input to the XR system.
  • the user can manually, or the system can automatically, switch between input modes based, at least in part, on neuromuscular data detected from the user.
  • the system can enter a typing mode and can identify text from the user to be provided to the XR system based on one or more tapping or typing actions performed by the user (e.g., tapping on a surface of a physical keyboard, tapping on a surface that has a virtual keyboard projected thereon by the XR system, tapping on a surface that does not have a keyboard projected on it, or performing gestures in mid-air that correspond to typing-style movements).
  • the systems and methods described herein can identify text input from the user based on the recorded or detected signals and/or information based on the recorded or detected signals.
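  • As a deliberately simplified illustration of detecting discrete tapping or typing events, the sketch below finds threshold crossings of a summed activation envelope; it is a stand-in for the inference-model-based identification described herein, and the function name, threshold, and refractory period are illustrative assumptions:
```python
import numpy as np


def detect_tap_events(envelope: np.ndarray, fs: float,
                      threshold: float, refractory_s: float = 0.15):
    """Return tap/typing event times (seconds) from an EMG activation envelope.

    `envelope` is (channels x samples) or (samples,); events are onsets where the
    summed activity rises above `threshold`, ignoring re-triggers within the
    refractory period.
    """
    activity = envelope.sum(axis=0) if envelope.ndim > 1 else envelope
    above = activity > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    events, last_t = [], -np.inf
    for idx in onsets:
        t = idx / fs
        if t - last_t >= refractory_s:
            events.append(t)
            last_t = t
    return events
```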
  • the system can enter a writing mode and text input can be provided to the XR system by identifying one or more writing actions performed by the user (e.g., writing on a surface with a physical or virtual writing implement) based on the recorded or detected signals and/or information based on the recorded or detected signals.
  • the system can enter a drawing mode and input can be provided to the XR system by identifying one or more drawing actions (e.g., drawing one or more line segments and/or curves on a surface) performed by the user based on the recorded or detected signals and/or information based on the recorded or detected signals.
  • the system can enter a one-handed mode (i.e., a mode where the user uses only one hand to provide input), and input can be provided to the XR system by identifying one or more one- handed actions (for example, gestures such as squeezing, pinching, and/or tapping of various fingers and combinations of fingers) performed by the user based on the recorded or detected signals and/or information based on the recorded or detected signals.
  • the XR system may provide visual feedback by displaying an indication of the identified input to the user, which may facilitate text entry or other information provided as input to the XR system.
  • the indication can be displayed via a user interface presented within an XR environment provided by the XR system.
  • a display associated with the XR system can overlay a visual representation of the identified input in the user interface or provide audio feedback to the user about the identified input.
  • the indication may be rendered by the AR system onto a surface with which the user is interacting.
  • the system described herein senses signals via the wearable sensors and provides input to the XR system such that the system smoothly transitions from a first input mode to a second input mode without requiring an explicit mode switch instruction from the user.
  • This provides for a flexible approach to providing input to the XR system.
  • the system described herein may be operating in a typing mode where the user is providing text input to the system by typing on a physical keyboard. The user may stop typing on the physical keyboard and resume providing text input by writing with a stylus. In response, the system may automatically detect the change in input mode and seamlessly switch from the typing mode to a writing mode. In some embodiments, the user may switch to different forms of text entry while the system is in the same mode.
  • the user may begin by typing on a physical keyboard, and resume text entry by typing on a virtual keyboard or using typing motions without any virtual representation of a keyboard.
  • the manner in which the user is providing text input has changed even though the system remains in the typing mode.
  • the visual feedback provided by the XR system may continue uninterrupted regardless of the mode or the form of text entry.
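  • A minimal sketch of how such an automatic mode switch might be gated on classifier confidence, assuming a mode_classifier callable that maps a window of neuromuscular data to per-mode probabilities (the callable, mode labels, and threshold are illustrative, not part of the disclosure):
```python
import numpy as np


def switch_mode_if_needed(window: np.ndarray,
                          current_mode: str,
                          mode_classifier,
                          confidence_threshold: float = 0.8) -> str:
    """Return the (possibly updated) input mode for the latest signal window.

    `mode_classifier` is assumed to map a (channels x samples) window of
    neuromuscular data to probabilities for modes such as
    {'typing', 'writing', 'drawing', 'one_handed'}.
    """
    probs = mode_classifier(window)          # dict: mode name -> probability
    best_mode = max(probs, key=probs.get)
    # Switch only when the classifier is confident, so brief pauses or
    # ambiguous activity do not cause spurious mode changes.
    if best_mode != current_mode and probs[best_mode] >= confidence_threshold:
        return best_mode
    return current_mode
```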
  • the input to be provided to the XR system may be identified, at least in part, from raw (e.g., unprocessed) sensor signals collected by one or more of the wearable sensors.
  • the input to be provided to the XR system may be identified, at least in part, from information based on the raw sensor signals (e.g., processed sensor signals), where the raw sensor signals collected by one or more of the wearable sensors are processed to perform amplification, filtering, rectification, and/or other form of signal processing, examples of which are described in more detail below.
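  • For illustration of the kind of signal processing mentioned here (filtering and rectification of raw sensor signals), a minimal sketch follows; the filter orders, cutoff frequencies, and sampling rate are assumptions, not values from the disclosure:
```python
import numpy as np
from scipy.signal import butter, filtfilt


def preprocess_emg(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Band-pass filter and rectify raw EMG (channels x samples) into an envelope."""
    # Band-pass 20-450 Hz: removes motion artifact and high-frequency noise.
    b, a = butter(4, [20.0 / (fs / 2), 450.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=-1)
    rectified = np.abs(filtered)  # full-wave rectification
    # Low-pass the rectified signal to obtain a smooth activation envelope.
    b_env, a_env = butter(2, 5.0 / (fs / 2), btype="low")
    return filtfilt(b_env, a_env, rectified, axis=-1)
```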
  • the input to be provided to the XR system may be identified, at least in part, from an output of one or more trained inference models that receive the sensor signals (or processed versions of the sensor signals) as input.
  • various muscular activation states may be identified directly from recorded or detected sensor data.
  • handstates, gestures, postures, and the like may be identified based, at least in part, on the output of a trained inference model.
  • various forms of input can be provided to the AR system and may be identified directly from recorded sensor data.
  • the input can be provided to the AR system and may be identified based, at least in part, on the output of one or more trained inference models.
  • a trained inference model may output motor unit or muscle activations and/or position, orientation, and/or force estimates for segments of a computer-generated musculoskeletal model.
  • all or portions of a human musculoskeletal system can be modeled as a multi-segment articulated rigid body system, with joints forming the interfaces between the different segments and joint angles defining the spatial relationships between connected segments in the model. Constraints on the movement at the joints are governed by the type of joint connecting the segments and the biological structures (e.g., muscles, tendons, ligaments) that restrict the range of movement at the joint.
  • the shoulder joint connecting the upper arm to the torso and the hip joint connecting the upper leg to the torso are ball and socket joints that permit extension and flexion movements as well as rotational movements.
  • a multi-segment articulated rigid body system is used to model portions of the human musculoskeletal system.
  • some segments of the human musculoskeletal system (e.g., the forearm) may include multiple rigid structures (e.g., the ulna and radius bones of the forearm); accordingly, a model of an articulated rigid body system for use with some embodiments of the technology described herein may include segments that represent a combination of body parts that are not strictly rigid bodies. It will be appreciated that physical models other than the multi-segment articulated rigid body system may be used to model portions of the human musculoskeletal system without departing from the scope of this disclosure.
  • rigid bodies are objects that exhibit various attributes of motion (e.g., position, orientation, angular velocity, acceleration). Knowing the motion attributes of one segment of the rigid body enables the motion attributes for other segments of the rigid body to be determined based on constraints in how the segments are connected.
  • the hand may be modeled as a multi- segment articulated body with the joints in the wrist and each finger forming the interfaces between the multiple segments in the model.
  • movements of the segments in the rigid body model can be simulated as an articulated rigid body system in which position (e.g., actual position, relative position, or orientation) information of a segment relative to other segments in the model are predicted using a trained inference model, as described in more detail below.
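  • A small worked example of the rigid-body idea above: once joint angles and segment connectivity are known, segment positions follow. The sketch below computes a fingertip position for a planar single-finger chain; the angles and segment lengths are illustrative:
```python
import numpy as np


def finger_tip_position(joint_angles_deg, segment_lengths):
    """Planar forward kinematics for one finger of an articulated rigid-body model.

    `joint_angles_deg` are flexion angles at successive joints (e.g., MCP, PIP, DIP);
    `segment_lengths` are the corresponding segment lengths in centimeters.
    Returns the fingertip (x, y) position relative to the knuckle.
    """
    x, y, cumulative_angle = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles_deg, segment_lengths):
        cumulative_angle += np.deg2rad(angle)
        x += length * np.cos(cumulative_angle)
        y += length * np.sin(cumulative_angle)
    return float(x), float(y)


# A fully extended finger lies along the x-axis: fingertip 8.5 cm from the knuckle.
print(finger_tip_position([0, 0, 0], [4.0, 2.5, 2.0]))
print(finger_tip_position([30, 45, 20], [4.0, 2.5, 2.0]))  # a flexed posture
```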
  • the portion of the human body approximated by a musculoskeletal representation is a hand or a combination of a hand with one or more arm segments.
  • The current state of the musculoskeletal representation is referred to herein as the “handstate” of the musculoskeletal representation. It should be appreciated, however, that the techniques described herein are also applicable to musculoskeletal representations of portions of the body other than the hand including, but not limited to, an arm, a leg, a foot, a torso, a neck, or any combination of the foregoing.
  • some embodiments are configured to predict force information associated with one or more segments of the musculoskeletal representation. For example, linear forces or rotational (torque) forces exerted by one or more segments may be estimated. Examples of linear forces include, but are not limited to, the force of a finger or hand pressing on a solid object such as a table, and a force exerted when two segments (e.g., two fingers) are pinched together. Examples of rotational forces include, but are not limited to, rotational forces created when segments in the wrist or fingers are twisted or flexed.
  • the force information determined as a portion of a current handstate estimate includes one or more of pinching force information, grasping force information, or information about co-contraction forces between muscles represented by the musculoskeletal representation.
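  • For illustration only, a handstate estimate of this kind might be held in a simple data structure such as the following; the field names and units are assumptions, not the disclosure's API:
```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class HandState:
    """Illustrative container for a handstate estimate.

    Joint angles describe spatial relationships between connected segments;
    the force fields capture estimates such as pinch, grasp, and co-contraction.
    """
    joint_angles: Dict[str, float] = field(default_factory=dict)  # radians per joint
    pinch_force: float = 0.0        # e.g., thumb-index pinch, arbitrary units
    grasp_force: float = 0.0        # whole-hand grasp
    cocontraction: float = 0.0      # co-contraction between opposing muscles


state = HandState(joint_angles={"index_mcp": 0.4, "index_pip": 0.7}, pinch_force=0.8)
```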
  • FIG. 1 illustrates an exemplary system 100, which comprises a computer-based system for processing neuromuscular sensor data in accordance with some embodiments of the technology described herein.
  • the system includes one or more sensors 102 (e.g., neuromuscular sensors) configured to record signals arising from neuromuscular activity in skeletal muscle of a human body.
  • the term “neuromuscular activity” as used herein refers to neural activation of spinal motor neurons that innervate a muscle, muscle activation, muscle contraction, or any combination of the neural activation, muscle activation, and/or muscle contraction.
  • Neuromuscular sensors may include one or more electromyography (EMG) sensors, one or more mechanomyography (MMG) sensors, one or more sonomyography (SMG) sensors, a combination of two or more types of EMG sensors, MMG sensors, and SMG sensors, and/or one or more sensors of any suitable type that are configured to detect neuromuscular signals.
  • neuromuscular sensor(s) may be used to sense muscular activity related to a movement of the part of the body controlled by muscles from which the neuromuscular sensors are arranged to sense the muscle activity. Spatial information (e.g., position and/or orientation information) and force information describing the movement may be predicted based on the sensed neuromuscular signals as the user moves over time. In some embodiments, the neuromuscular sensor(s) may be used to sense muscular activity related to movement caused by external objects, for example, movement of a hand being pushed by an external object.
  • As the tension of a muscle increases during performance of a motor task, the firing rates of active neurons increase and additional neurons may become active, which is a process referred to as motor unit recruitment.
  • the pattern by which neurons become active and increase their firing rate is stereotyped, such that the expected motor unit recruitment patterns define an activity manifold associated with standard or normal movement.
  • Some embodiments record activation of a single motor unit or a group of motor units that are “off-manifold,” in that the pattern of motor unit activation is different than an expected or typical motor unit recruitment pattern.
  • off-manifold activation is referred to herein as “sub-muscular activation” or “activation of a sub-muscular structure,” where a sub-muscular structure refers to the single motor unit or the group of motor units associated with the off-manifold activation.
  • Examples of off-manifold motor unit recruitment patterns include, but are not limited to, selectively activating a high-threshold motor unit without activating a lower-threshold motor unit that would normally be activated earlier in the recruitment order and modulating the firing rate of a motor unit across a substantial range without modulating the activity of other neurons that would normally be co-modulated in typical motor recruitment patterns.
  • the neuromuscular sensor(s) may be used to sense sub-muscular activation(s) without observable movement.
  • Sub-muscular activation(s) may be used, at least in part, to identify and provide input to an augmented reality system in accordance with some embodiments of the technology described herein.
  • sensors or sensing components 102 include one or more neuromuscular sensors (e.g., EMG sensors).
  • sensors 102 include one or more auxiliary sensors such as Inertial Measurement Units (IMUs), which measure a combination of physical aspects of motion, using, for example, an accelerometer, a gyroscope, a magnetometer, or any combination of one or more accelerometers, gyroscopes and magnetometers, or any other components or devices capable of detecting spatiotemporal positioning, motion, force, or other aspects of a user’s physiological state and/or behavior.
  • IMUs may be used to sense information about the movement of the part of the body on which the IMU is attached and information derived from the sensed data (e.g., position and/or orientation information) may be tracked as the user moves over time.
  • one or more IMUs may be used to track movements of portions of a user’s body proximal to the user’s torso relative to the sensor (e.g., arms, legs) as the user moves over time.
  • sensors 102 include a plurality of neuromuscular sensors and at least one auxiliary sensor configured to continuously record a plurality of auxiliary signals.
  • auxiliary sensors include, but are not limited to, microphones, imaging devices (e.g., a camera), radiation-based sensors for use with a radiation-generation device (e.g., a laser-scanning device), or other types of sensors such as thermal sensors, infrared sensors, heart-rate or blood pressure monitors, and/or video eye trackers.
  • the IMU(s) and neuromuscular sensors may be arranged to detect movement of the same or different parts of the human body.
  • the IMU(s) may be arranged to detect movements of one or more body segments proximal to the torso (e.g., an upper arm), whereas the neuromuscular sensors may be arranged to detect movements of one or more body segments distal to the torso (e.g., a forearm or wrist).
  • the sensors may be arranged in any suitable way, and embodiments of the technology described herein are not limited based on the particular sensor arrangement.
  • At least one IMU and a plurality of neuromuscular sensors may be co-located on a body segment to track movements of the body segment using different types of measurements.
  • an IMU sensor and a plurality of EMG sensors are arranged on a wearable device configured to be worn around the lower arm or wrist of a user. In such an arrangement, the IMU sensor may be configured to track movement information (e.g., positioning and/or orientation over time) associated with one or more arm segments, to determine, for example, whether the user has raised or lowered their arm, whereas the EMG sensors may be configured to determine movement information associated with wrist or hand segments to determine, for example, whether the user has an open or closed hand configuration or sub-muscular information associated with activation of sub-muscular structures in muscles of the wrist or hand.
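  • A minimal sketch of how the two sensor streams might be combined in such an arrangement, assuming an IMU-derived pitch angle and an EMG activation envelope as inputs; the thresholds and state labels are illustrative placeholders:
```python
import numpy as np


def classify_arm_and_hand(imu_pitch_deg: float, emg_envelope: np.ndarray,
                          raise_threshold_deg: float = 30.0,
                          grip_threshold: float = 0.5):
    """Combine IMU and EMG information from a single wrist-worn device.

    The IMU pitch angle decides whether the arm is raised or lowered, while the
    mean EMG envelope across channels serves as a crude proxy for whether the
    hand is in an open or closed configuration.
    """
    arm_state = "raised" if imu_pitch_deg > raise_threshold_deg else "lowered"
    hand_state = "closed" if float(np.mean(emg_envelope)) > grip_threshold else "open"
    return arm_state, hand_state
```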
  • Each of the sensors 102 includes one or more sensing components configured to sense information about a user.
  • the sensing components may include one or more accelerometers, gyroscopes, magnetometers, or any combination thereof to measure characteristics of body motion and/or characteristics related to body motion, examples of which include, but are not limited to, acceleration, angular velocity, and sensed magnetic field around the body.
  • the sensing components may include, but are not limited to, electrodes configured to detect electric potentials on the surface of the body (e.g., for EMG sensors), vibration sensors configured to measure skin surface vibrations (e.g., for MMG sensors), and acoustic sensing components configured to measure ultrasound signals (e.g., for SMG sensors) arising from muscle activity.
  • At least some of the plurality of sensors 102 are arranged as a portion of a wearable device configured to be worn on or around part of a user’s body.
  • an IMU sensor and a plurality of neuromuscular sensors are arranged circumferentially around an adjustable and/or elastic band such as a wristband or armband configured to be worn around a user’s wrist or arm.
  • at least some of the sensors may be arranged on a wearable patch configured to be affixed to a portion of the user’s body.
  • multiple wearable devices each having one or more IMUs and/or neuromuscular sensors included thereon may be used to detect neuromuscular data and generate control information based on activation of muscular and sub-muscular structures and/or movement(s) that involve(s) multiple parts of the body.
  • sixteen EMG sensors are arranged circumferentially around an elastic band configured to be worn around a user’s lower arm.
  • FIG. 5 shows EMG sensors 504 arranged circumferentially around elastic band 502.
  • a wearable armband or wristband may be used to detect neuromuscular data and generate control information for controlling an augmented reality system, controlling a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or performing any other suitable control task.
  • a user 506 can wear elastic band 502 on hand 508.
  • EMG sensors 504 may be configured to record EMG signals as a user controls keyboard 512 using fingers 510.
  • elastic band 502 may also include one or more IMUs (not shown), configured to record movement information, as discussed above.
  • FIG. 5 depicts the user wearing one wearable device on the hand, it will be appreciated that some embodiments include multiple wearable devices (having one or more neuromuscular sensors integrated therewith) configured to be worn on one or both hands/arms of the user.
  • the output of one or more of the sensing components may be optionally processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification).
  • at least some signal processing of the output of the sensing components may be performed in software.
  • signal processing of signals recorded by the sensors may be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect.
  • the sensor data recorded by the sensors 102 may be optionally processed to compute additional derived measurements that are then provided as input to one or more inference models, as described in more detail below.
  • recorded signals from an IMU sensor may be processed to derive an orientation signal that specifies the orientation of a rigid body segment over time.
  • Sensors 102 may implement signal processing using integrated components, or at least a portion of the signal processing may be performed by one or more components in communication with, but not directly integrated with the sensors.
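As a hedged illustration of the kind of derived measurement described above, the sketch below integrates gyroscope angular velocity into a quaternion orientation estimate for a rigid body segment. The sampling rate, array shapes, and function names are assumptions, not part of the disclosure; a practical implementation would also fuse accelerometer and magnetometer data to correct drift.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(gyro_rad_s, fs=200.0):
    """Integrate angular velocity samples (N x 3, rad/s) into a unit-quaternion
    orientation trace for a rigid body segment (assumed sampling rate fs)."""
    dt = 1.0 / fs
    q = np.array([1.0, 0.0, 0.0, 0.0])              # identity orientation (w, x, y, z)
    orientations = np.empty((len(gyro_rad_s), 4))
    for i, w in enumerate(gyro_rad_s):
        q_dot = 0.5 * quat_mul(q, np.array([0.0, *w]))   # quaternion kinematics
        q = q + q_dot * dt
        q = q / np.linalg.norm(q)                   # re-normalize to unit length
        orientations[i] = q
    return orientations
```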
  • System 100 can also include one or more computer processors 104 programmed to communicate with sensors 102 through either a one-way or two-way communication pathway. For example, signals recorded by one or more of the sensors may be provided to the processor(s), which may be programmed to execute one or more machine learning algorithms that process signals output by the sensors 102 to train (or retrain) one or more inference models 106, and the resulting trained (or retrained) inference model(s) 106 may be stored for later use in identifying and providing input to an XR system, as described in more detail below.
  • an inference model may be a model that utilizes a statistical inference based on a probability distribution to deduce a result; in this regard, an inference model may comprise a statistical model.
  • signals recorded by sensors arranged on a first wearable device worn on one hand/arm and signals recorded by sensors arranged on a second wearable device worn on the other hand/arm may be processed using the same inference model(s) or separate inference model(s).
  • System 100 also optionally includes one or more controllers 108.
  • controller 108 may be a display controller configured to display a visual representation (e.g., a representation of a hand).
  • one or more computer processors may implement one or more trained inference models that receive as input signals recorded by sensors 102 and provide as output information (e.g., predicted handstate information) that may be used to identify and provide input to an XR system.
  • a trained inference model interprets neuromuscular signals recorded by wearable neuromuscular sensors into position and force estimates (e.g., handstate information) that are used to update the musculoskeletal representation.
  • the musculoskeletal representation is updated in real time and a visual representation of a hand (e.g., within an XR environment) may be rendered based on the current handstate estimates.
  • an estimate of a user’s handstate may be used to determine a gesture being performed by the user and/or to predict a gesture that the user will perform.
  • musculoskeletal representations may include actual visual representations of biomimetic (realistic) hands, synthetic (robotic) hands, as well as abstract "internal representations" that serve as input for gesture control (e.g., to other applications, systems, etc.). That is, the position and/or force of the hand may be provided to downstream algorithms (e.g., control algorithms in an XR system) but may not be directly rendered.
  • the system 100 optionally includes a computer application 110 that is configured to simulate a virtual reality (VR), augmented reality (AR), and/or a mixed reality (MR) environment (collectively, extended reality, “X Reality” or “XR” systems or environments), and the computer application 110 can display a visual character such as an avatar (e.g., via controller 108) in an XR environment. Positioning, movement, and/or forces applied by portions of the visual character within the virtual reality environment may be displayed in the XR environment based on the output of the trained inference model(s).
  • the visual representation in the XR environment may be dynamically updated as continuous signals are recorded by the sensors 102, processed by computer processor(s), and sent to the inference model(s) 106 for trained or inferred outputs, so that the system can provide a computer-generated representation of the visual character’s movement that is updated in real-time in the XR environment.
  • Information generated in either system can be used to improve the user experience, accuracy, feedback, inference models, calibration functions, and other aspects in the overall system.
  • system 100 may include an XR system that includes one or more of the following: processors, a camera (e.g., one or more camera(s) contained in a head-mounted display), a display (e.g., via XR glasses or other viewing device), or any other auxiliary sensor(s) that provides XR information within a view of the user or provides XR information to the user.
  • information from a camera contained in the head-mounted display in the XR system may be used in combination with information from the neuromuscular sensors to interpret movement, gestures, and/or actions performed by the user.
  • System 100 may also include system elements that couple the XR system with a computer-based system that generates the musculoskeletal representation based on sensor data.
  • the systems may be coupled via a special-purpose or other type of computer system that receives inputs from the XR system and the system that generates the computer-based musculoskeletal representation.
  • Such a system may include a gaming system, robotic control system, personal computer, or other system that is capable of interpreting XR and musculoskeletal information.
  • the XR system and the system that generates the computer-based musculoskeletal representation may also be programmed to communicate directly. Such information may be communicated using any number of interfaces, protocols, or media.
  • inference model 106 may be a neural network and, for example, may be a recurrent neural network.
  • the recurrent neural network may be a long short-term memory (LSTM) neural network. It should be appreciated, however, that the recurrent neural network is not limited to being an LSTM neural network and may have any other suitable architecture.
  • the recurrent neural network may be a fully recurrent neural network, a gated recurrent neural network, a recursive neural network, a Hopfield neural network, an associative memory neural network, an Elman neural network, a Jordan neural network, an echo state neural network, a second order recurrent neural network, and/or any other suitable type of recurrent neural network.
  • neural networks that are not recurrent neural networks may be used.
  • deep neural networks, convolutional neural networks, and/or feedforward neural networks may be used.
  • the output of one or more inference models comprises one or more discrete outputs.
  • Discrete outputs (e.g., classification labels) may be used, for example, when the desired output is an indication of whether the user is currently performing a particular pattern of muscle or motor-unit activation.
  • a model may be trained to estimate or infer whether the user is activating a particular motor unit, activating a particular motor unit with a particular timing, activating a particular motor unit with a particular firing pattern, or activating a particular combination of motor units.
  • discrete classification is used in some embodiments to estimate whether a particular motor unit fired an action potential within a given amount of time. In such embodiments, these estimates may then be accumulated to obtain an estimated firing rate for that motor unit.
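As a minimal sketch of accumulating such discrete per-window estimates into a firing rate, the Python below assumes a fixed, hypothetical window length; neither the window length nor the function name comes from the disclosure.

```python
import numpy as np

def firing_rate_hz(spike_estimates, window_s=0.05):
    """Turn per-window binary estimates (1 = the motor unit was estimated to fire
    an action potential in that window) into an average firing rate in Hz.
    The 50 ms window length is an assumed value."""
    spikes = np.asarray(spike_estimates, dtype=float)
    return spikes.sum() / (len(spikes) * window_s)

# 20 windows of 50 ms (1 s total) containing 7 detected spikes -> 7.0 Hz
rate = firing_rate_hz([1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0])
```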
  • the neural network may include a softmax layer such that the outputs add up to one and may be interpreted as probabilities.
  • the output of the softmax layer may be a set of values corresponding to a respective set of control signals, with each value indicating a probability that the user wants to perform a particular control action.
  • the output of the softmax layer may be a set of three probabilities (e.g., 0.92, 0.05, and 0.03) indicating the respective probabilities that the detected pattern of activity is one of three known patterns.
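A minimal numerical illustration of a softmax output layer over three candidate control actions is shown below; the logit values are invented purely so that the output approximates the probabilities (0.92, 0.05, 0.03) used in the example above.

```python
import numpy as np

def softmax(logits):
    """Map raw network outputs (logits) to probabilities that sum to one."""
    z = logits - np.max(logits)      # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Three candidate control actions; the logits are illustrative only.
probs = softmax(np.array([3.2, 0.3, -0.2]))   # roughly [0.92, 0.05, 0.03]
```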
  • when the inference model is a neural network configured to output a discrete output (e.g., a discrete signal), the neural network is not required to produce outputs that add up to one.
  • the output layer of the neural network may be a sigmoid layer (which does not restrict the outputs to probabilities that add up to one).
  • the neural network may be trained with a sigmoid cross-entropy cost.
  • Such an implementation may be advantageous in the case when multiple different control actions may occur within a threshold amount of time and it is not important to distinguish the order in which these control actions occur (e.g., a user may activate two patterns of neural activity within the threshold amount of time).
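By contrast with the softmax case, a sigmoid output layer assigns each control action an independent probability, so several actions can be flagged within the same window, as described above. The values below are illustrative only.

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic function; each output is an independent probability."""
    return 1.0 / (1.0 + np.exp(-x))

# Two control actions can both be likely, and the outputs need not sum to one.
logits = np.array([2.1, 1.7, -3.0])
independent_probs = sigmoid(logits)   # roughly [0.89, 0.85, 0.05]
```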
  • any other suitable non-probabilistic multi-class classifier may be used, as aspects of the technology described herein are not limited in this respect.
  • the output(s) of the inference model(s) may be continuous signals rather than discrete signals.
  • the model may output an estimate of the firing rate of each motor unit or the model may output a time-series electrical signal corresponding to each motor unit or sub-muscular structure.
  • the inference model may comprise a hidden Markov model (HMM), a switching HMM with the switching allowing for toggling among different dynamic systems, dynamic Bayesian networks, and/or any other suitable graphical model having a temporal component. Any such inference model may be trained using recorded sensor signals.
  • the inference model is a classifier taking as input features derived from the recorded sensor signals.
  • the classifier may be trained using features extracted from the sensor data.
  • the classifier may be a support vector machine, a Gaussian mixture model, a regression based classifier, a decision tree classifier, a Bayesian classifier, and/or any other suitable classifier, as aspects of the technology described herein are not limited in this respect.
  • Input features to be provided to the classifier may be derived from the sensor data in any suitable way.
  • the sensor data may be analyzed as time series data using wavelet analysis techniques (e.g., continuous wavelet transform, discrete-time wavelet transform, etc.), Fourier-analytic techniques (e.g., short-time Fourier transform, Fourier transform, etc.), and/or any other suitable type of time-frequency analysis technique.
  • the sensor data may be transformed using a wavelet transform and the resulting wavelet coefficients may be provided as inputs to the classifier.
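The sketch below shows one plausible realization of this feature-then-classifier pipeline: band-power features computed from a windowed Fourier spectrum (a simplified stand-in for the wavelet or short-time Fourier features mentioned above) feeding a support vector machine. The sampling assumptions, band count, and kernel choice are illustrative, not values from the disclosure.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def band_power_features(emg_window, n_bands=8):
    """Spectral band-power features for one EMG window (channels x samples):
    mean power in n_bands equal-width frequency bands per channel."""
    power = np.abs(np.fft.rfft(emg_window, axis=-1)) ** 2
    bands = np.array_split(power, n_bands, axis=-1)
    return np.concatenate([b.mean(axis=-1) for b in bands]).ravel()

def train_gesture_classifier(windows, labels):
    """Fit an SVM (one of the classifier types listed above) on features
    extracted from labeled windows of recorded sensor signals."""
    X = np.stack([band_power_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```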
  • values for parameters of the inference model may be estimated from training data.
  • when the inference model is a neural network, parameters of the neural network (e.g., weights) may be estimated from the training data.
  • parameters of the inference model may be estimated using gradient descent, stochastic gradient descent, and/or any other suitable iterative optimization technique.
  • the inference model may be trained using stochastic gradient descent and backpropagation through time.
  • the training may employ a cross-entropy loss function and/or any other suitable loss function, as aspects of the technology described herein are not limited in this respect.
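A minimal PyTorch sketch of a recurrent (LSTM) model trained with stochastic gradient descent and a cross-entropy loss is shown below. The channel count, hidden size, number of classes, and learning rate are assumptions for illustration; the disclosure does not prescribe a particular architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    """Minimal LSTM classifier over windows of neuromuscular samples.
    Channel count, hidden size, and class count are illustrative assumptions."""
    def __init__(self, n_channels=16, hidden_size=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # class logits from the final time step

def train_step(model, optimizer, batch_x, batch_y):
    """One step of stochastic gradient descent with a cross-entropy loss;
    backpropagation through time is handled by autograd."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = GestureLSTM()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
```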
  • some embodiments are directed to using an inference model for predicting musculoskeletal information based on signals recorded from wearable sensors. As discussed briefly above, in the example where portions of the human musculoskeletal system can be modeled as a multi-segment articulated rigid body system, the types of joints between segments in a multi-segment articulated rigid body model constrain movement of the rigid body. Additionally, different human individuals tend to move in characteristic ways when performing a task, which can be captured in inference patterns of individual user behavior. At least some of these constraints on human body movement may be explicitly incorporated into inference models used for prediction in accordance with some embodiments. Additionally or alternatively, the constraints may be learned by the inference model through training based on recorded sensor data, as discussed briefly above.
  • some embodiments are directed to using an inference model for predicting handstate information to enable the generation and/or real-time update of a computer-based musculoskeletal representation.
  • the inference model may be used to predict the handstate information based on IMU signals, neuromuscular signals (e.g., EMG, MMG, and/or SMG signals), external device signals (e.g., camera or laser-scanning signals), or a combination of IMU signals, neuromuscular signals, and external device signals detected as a user performs one or more movements.
  • a camera associated with an XR system may be used to capture actual position data relating to a human subject of the computer-based musculoskeletal representation and such actual position information may be used to improve the accuracy of the representation.
  • outputs of the inference model may be used to generate a visual representation of the computer-based musculoskeletal representation in an XR environment. For example, a visual representation of muscle groups firing, force being applied, text being entered via movement, or other information produced by the computer-based musculoskeletal representation may be rendered in a visual display of an XR system.
  • Some embodiments of the technology described herein are directed to using an inference model, at least in part, to map one or more actions identified from the neuromuscular signals (e.g., map muscular activation state information identified from the neuromuscular sensors) to input signals including text.
  • the inference model may receive as input IMU signals, neuromuscular signals (e.g., EMG, MMG, and/or SMG signals), external device signals (e.g., camera or laser-scanning signals), or a combination of IMU signals, neuromuscular signals, and external device signals detected as a user performs one or more sub-muscular activations, one or more movements, and/or one or more gestures.
  • the inference model may be used to predict the input to be provided to the AR system without the user having to make perceptible movements.
  • FIG. 2 illustrates a schematic diagram of a distributed computer-based system that integrates an augmented reality (AR) system 201 with a neuromuscular activity system 202, in accordance with some embodiments.
  • Neuromuscular activity system 202 is similar to system 100 described above with respect to FIG. 1.
  • an augmented reality (AR) system 201 may take the form of a pair of goggles, glasses, or other type(s) of device that shows display elements to the user that may be superimposed on “reality,” which in some cases could be a user’s view of the environment (e.g., as viewed through the user’s eyes), or a captured (e.g., by cameras) version of a user’s view of the environment.
  • System 201 may include one or more cameras (e.g., camera(s) 204) mounted within a device worn by a user that captures one or more views experienced by the user in their environment.
  • System 201 may have one or more processors 205 operating within the user device and/or within a peripheral device or computer system, and such processor(s) may be capable of transmitting and receiving video information and other types of sensor data.
  • AR system 201 may also include one or more sensors 207 such as microphones, GPS elements, accelerometers, infrared detectors, haptic feedback elements or any other type of sensor, or any combination thereof.
  • AR system 201 may be an audio-based AR system and the one or more sensors 207 may also include one or more headphones or speakers. Further, AR system 201 may also have one or more displays 208 that permit the AR system to overlay and/or display information to the user in addition to the view of the user’s environment presented by the AR system. AR system 201 may also include one or more communication interfaces (e.g., interfaces 206) for the purpose of communicating information to one or more computer systems (e.g., a gaming system or other systems capable of rendering or receiving AR data). AR systems can take many forms and are provided by a number of different manufacturers. For example, various embodiments may be implemented in association with one or more types of AR systems.
  • various embodiments may be implemented with the HoloLens holographic reality glasses available from the Microsoft Corporation, the Lightwear AR headset from Magic Leap, the Google Glass AR glasses available from Alphabet, the R-7 Smartglasses System available from ODG, or any other type of AR and/or VR device.
  • AR system 201 may be operatively coupled to the neuromuscular activity system 202 through one or more communication methods, including but not limited to, the Bluetooth protocol, Wi-Fi, Ethernet-like protocols, or any number of connection types, wireless and/or wired. It should be appreciated that, for example, systems 201 and 202 may be directly connected or coupled through one or more intermediate computer systems or network elements. The double-headed arrow in FIG. 2 represents the communicative coupling between the systems 201 and 202.
  • Neuromuscular activity system 202 may be similar in structure and function to system 100 described above with reference to FIG. 1.
  • system 202 may include one or more neuromuscular sensors 209 and/or auxiliary sensors described in connection with FIG. 1, one or more inference models 210, and may create, maintain, and store a musculoskeletal representation 211.
  • system 202 may include a device such as a band that can be worn by a user in order to collect and analyze neuromuscular signals.
  • system 202 may include one or more communication interfaces 212 that permit system 202 to communicate with AR system 201, such as by Bluetooth, Wi-Fi, or other communication method.
  • AR system 201 and neuromuscular activity system 202 may communicate information which can be used to enhance the user experience and/or allow the AR system to function more accurately and effectively.
  • while FIG. 2 describes a distributed computer-based system that integrates the AR system 201 with the neuromuscular activity system 202, it will be understood that the integration may be non-distributed in nature.
  • the neuromuscular activity system 202 may be integrated into the AR system 201 such that the various components of the neuromuscular activity system 202 may be considered as part of the AR system.
  • neuromuscular signals recorded by the neuromuscular sensors 209 may be treated as any other inputs (e.g., camera(s) 204, sensors 207) to the AR system 201.
  • the processing of the sensor signals obtained from neuromuscular sensors 209 may be integrated into the AR system 201.
  • FIG. 3 illustrates a process 300 for identifying and providing input to an XR system.
  • process 300 is described with respect to identifying and providing input to an AR system, such as AR system 201, in accordance with some embodiments.
  • the process 300 may be performed by the neuromuscular activity system 202.
  • sensor signals may be recorded by one or more sensors 102 (also referred to herein as “raw sensor signals”) of the neuromuscular activity system 202.
  • the sensors include a plurality of neuromuscular sensors (e.g., EMG sensors) arranged on a wearable device worn by a user.
  • EMG sensors may be arranged on an elastic band configured to be worn around a wrist or forearm of the user to record neuromuscular signals from the user as the user performs various movements or gestures.
  • the EMG sensors may be the sensors 504 arranged on the band 502, as shown in FIG. 5; in some embodiments, the EMG sensors may be the sensors 710 arranged on the elastic band 720, as shown in FIG. 7A.
  • as used herein, “gestures” refers to a static or dynamic configuration of one or more body parts, including the position of the one or more body parts and forces associated with the configuration.
  • gestures performed by the user include static/discrete gestures (also referred to as “poses”) that indicate a static configuration of one or more body parts.
  • a pose can include a fist, an open hand, statically placing or pressing the palm of the hand down on a solid surface or grasping a ball.
  • a pose can indicate the static configuration by providing positional information (e.g., segment coordinates, joint angles, or similar information) for the pose, or by providing an identifier corresponding to a pose (e.g., a parameter, function argument, or variable value).
  • the gestures performed by the user may include dynamic/continuous gestures that indicate a dynamic configuration of one or more body parts.
  • the dynamic configuration can describe the position of the one or more body parts, the movement of the one or more body parts, and forces associated with the dynamic configuration.
  • a dynamic gesture can include waving a finger back and forth, throwing a ball or grasping and throwing a ball.
  • Gestures may include covert gestures that are imperceptible to another person, such as slightly tensing a joint by co-contracting opposing muscles, or using sub-muscular activations. Gestures may be defined by an application configured to prompt a user to perform the gestures or, alternatively, gestures may be arbitrarily defined by a user.
  • the gestures performed by the user may include symbolic gestures (e.g., gestures mapped to other gestures, interactions, or commands, for example, based on a gesture vocabulary that specifies the mapping).
  • hand and arm gestures may be symbolic and used to communicate according to cultural standards.
  • the movements or gestures performed by the user may include tapping or typing actions, such as tapping or typing actions on a surface of a physical keyboard, tapping or typing actions on a surface that has a virtual keyboard projected thereon by the AR system 201, tapping or typing actions without any virtual representation of a keyboard, and/or typing actions or other gestures performed in mid-air (e.g., not on a surface).
  • the movements or gestures performed by the user may include writing actions, such as writing actions performed on a surface with a physical stylus, a physical writing implement, or fingertip(s) of the user (e.g., a user might imagine that he is holding a pen or stylus by holding his fingertips together in a writing position), writing actions performed on a surface with a virtual stylus or virtual writing implement, and/or writing actions performed with a physical writing implement, a virtual writing implement, or fingertip(s) of the user in mid-air and not on a particular surface.
  • the movements or gestures performed by the user may include drawing actions, such as drawing actions performed on a surface, including drawing one or more line segments and/or curves, and/or swiping through a virtual keyboard (e.g., a virtual swipe keyboard) projected by the AR system 201.
  • the movements or gestures performed by the user may include one-handed actions such as one-handed chord gestures including squeezes, taps or pinches with various fingers or combinations of fingers of one hand.
  • the sensors may also include one or more auxiliary sensors configured to record auxiliary signals that may also be provided as input to the one or more trained inference models.
  • auxiliary sensors include IMU sensors, imaging devices, radiation detection devices (e.g., laser scanning devices), heart rate monitors, or any other type of biosensors configured to record biophysical information from the user during performance of one or more movements or gestures mentioned above.
  • the neuromuscular signals may be associated or correlated with information detected from the auxiliary sensors (e.g., auxiliary signals providing information indicative of a user’s physiological state and/or behavior).
  • the auxiliary signals may be used together with the neuromuscular signals to interpret the user’s movements, gestures, actions or otherwise augment and enhance the neuromuscular signals or the input identification process described in detail below.
  • Process 300 then proceeds to act 304, where the raw sensor signals recorded by the sensors 102 are optionally processed.
  • the raw sensor signals may be processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification).
  • at least some signal processing of the raw sensor signals may be performed in software.
  • signal processing of the raw sensor signals recorded by the sensors may be performed in hardware, software, or by any suitable combination of hardware and software.
  • the raw sensor signals may be processed to derive other signal data. For example, accelerometer data recorded by one or more IMU sensors may be integrated and/or filtered to determine derived signal data associated with one or more muscles during activation of a muscle or performance of a gesture.
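As a hedged example of this kind of derived signal, the Python below low-pass filters a one-dimensional accelerometer trace and integrates it once; the smoothing factor and sampling rate are assumptions, and a real system would need drift correction.

```python
import numpy as np

def exp_lowpass(signal, alpha=0.1):
    """Exponential low-pass filter; alpha is an assumed smoothing factor."""
    out = np.empty(len(signal))
    acc = float(signal[0])
    for i, s in enumerate(signal):
        acc = alpha * float(s) + (1.0 - alpha) * acc
        out[i] = acc
    return out

def integrate_accel(accel_m_s2, fs=200.0):
    """Integrate a filtered 1-D acceleration trace once to estimate velocity.
    Drift-prone in practice; shown only to illustrate a derived signal."""
    return np.cumsum(exp_lowpass(accel_m_s2)) / fs
```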
  • Process 300 then proceeds to act 306, where the raw sensor signals or the processed sensor signals are optionally provided as input to a trained inference model(s) configured to output information representing user activity, such as handstate information and/or muscular activation state information (e.g., a gesture or pose), as described above.
  • Process 300 then proceeds to act 308, where input to be provided to the AR system 201 is identified based on the raw sensor signals, the processed sensor signals, and/or the outputs of the trained inference model(s) (e.g., the handstate information).
  • input to be provided to the AR system 201 may be identified based on the movements, gestures, or actions identified from the raw sensor signals, the processed sensor signals, and/or the outputs of the trained inference model(s).
  • text input to be provided to the AR system 201 may be identified based on the tapping or typing actions, writing actions, drawing actions, and/or one-handed actions.
  • Input other than or in addition to text input may be identified, for example, a drawing may be identified based on the drawing actions.
  • the one or more computer processors 104 of system 100 may be programmed to identify the input to be provided to the AR system 201 from signals recorded by sensors 102 (e.g., the raw sensor signals) and/or information based on these signals.
  • the information based on the signals recorded by sensors 102 may include information associated with processed sensor signals (e.g., processed EMG signals) and/or information associated with outputs of the trained inference model (e.g., handstate information).
  • input to be provided to the AR system 201 may be identified based on signals output from the auxiliary sensors (e.g., one or more IMU sensors, one or more cameras or imaging devices associated with neuromuscular activity system 202 or augmented reality system 201) in addition to the signals recorded by the neuromuscular sensors.
  • Such auxiliary sensors can provide additional information regarding the movement of the pen, stylus, or fingertip(s) when the user performs the various movements, gestures, and/or actions described above.
  • the identified input may be provided to the AR system 201.
  • the AR system 201 may provide visual feedback by displaying an indication of the identified input to the user (and/or may provide other forms of feedback such as audio or haptic feedback).
  • the visual feedback may facilitate text entry, for example, by prompting the user to adjust the way various movements, gestures, and/or actions are performed.
  • the visual feedback may be useful in situations where the user provides input using an object or the user’s hand/fingertip, which does not leave physical marks when writing or drawing on a surface, for example.
  • the indication of the identified input includes text input identified based on the tapping or typing actions, writing actions, drawing actions, and/or one-handed actions performed by the user.
  • the indication of the identified input includes a listing of one or more suggested or predicted words or phrases for text input. For example, multiple options, guesses or alternative words may be presented to the user. The user may select from among the presented items by, for example, performing certain movements or gestures (that are identified based on neuromuscular signals) or using alternative control schemes (e.g., a cursor/pointer).
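One possible, purely hypothetical way to let the user pick among presented suggestions with a recognized gesture is sketched below; the gesture names are not from the disclosure.

```python
def select_suggestion(suggestions, gesture):
    """Pick one of the displayed suggestions using a hypothetical
    finger-count gesture; returns None if the gesture selects nothing."""
    index = {"one_finger": 0, "two_fingers": 1, "three_fingers": 2}.get(gesture)
    if index is None or index >= len(suggestions):
        return None
    return suggestions[index]

# e.g. select_suggestion(["their", "there", "they're"], "two_fingers") -> "there"
```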
  • the indication of the identified input includes one or more virtual ink marks associated with one or more strokes made by a writing implement.
  • the indication of the identified input includes a drawing identified based on drawing actions performed by the user.
  • the indication may be displayed via a user interface presented with an augmented reality environment provided by the AR system 201.
  • the indication may be provided on a virtual document in the user interface or as a representation shown in the AR environment to be floating in space.
  • the indication may be rendered onto a surface that the user is interacting with by the AR system 201.
  • the indication may be rendered onto the surface where the user is typing, for example, as a scrolling tickertape or a line-oriented typewriter.
  • the indication may be rendered onto the surface where the user is writing, for example, as virtual ink on the surface.
  • FIG. 4 illustrates a process 400 for identifying and providing input to an XR system.
  • process 400 is described with respect to identifying and providing input to an AR system, such as AR system 201, in accordance with some embodiments.
  • the process 400 may be performed by the neuromuscular activity system 202.
  • sensor signals are recorded by one or more sensors such as neuromuscular sensors (e.g., EMG sensors) and/or auxiliary sensors (e.g., IMU sensors, imaging devices, radiation detection devices, heart rate monitors, or any other type of biosensors) of the neuromuscular activity system 202.
  • a determination may be made that the neuromuscular activity system 202 is in a mode configured to provide input including text to the AR system 201.
  • the mode may include a typing mode in which a user may perform tapping or typing actions on a physical or virtual keyboard to provide text input, a writing mode in which a user may perform writing actions with a physical or virtual writing implement (e.g., pen, stylus, etc.) and/or fingertip(s) to provide text input, a drawing mode in which a user may perform drawing actions with a physical or virtual writing implement (e.g., pen, stylus, etc.) and/or fingertip(s) to provide text and/or drawing input, a one-handed mode in which a user may perform one-handed actions to provide text input, and/or a mode in which discrete and/or continuous control signals may be provided as input to the AR system 201.
  • the mode determination may be made based on a user selection of the mode.
  • the mode that the neuromuscular activity system 202 is in may be determined in response to receiving a user selection of the mode.
  • the user selection may be received from a user interface displayed in an AR environment provided by the AR system 201.
  • the user interface may identify and display a number of modes from which the user may select a particular mode. For example, a list of available modes, such as, typing mode, writing mode, drawing mode, and/or one-handed mode may be provided and the user may select a mode from the list.
  • the mode determination may be made based on the sensor signals and/or information based on the sensor signals.
  • the mode that the neuromuscular activity system 202 is in may be determined based on the sensor signals and/or information based on the sensor signals.
  • a particular gesture performed by the user may be identified based on the sensor signals and/or information based on the sensor signals, and the mode may be determined by identifying the mode corresponding to the particular gesture. For example, different gestures may be mapped to different modes and a particular mode may be determined based on a corresponding gesture performed by the user.
  • the mode entered based on a particular gesture or muscular activation state may depend on the state of the system (e.g., a current mode of the system) and/or may be personalized according to a user’s preferred settings.
  • the mode may be determined as the user performs one or more actions associated with the corresponding mode. For example, when the user starts performing typing actions, the neuromuscular activity system 202 may be configured to recognize that the input mode is a typing mode and when the user starts performing writing actions, the neuromuscular activity system 202 may be configured to recognize that the input mode is a writing mode.
  • the neuromuscular activity system 202 may switch from one mode to another mode based on detection of different actions performed by the user. For example, the user may switch between performing typing actions and writing actions and the system may determine that the input mode should switch between the typing mode and the writing mode accordingly without interrupting text entry.
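The sketch below captures the mode-selection logic described above: a mapping from recognized gestures to modes, plus automatic switching when a different kind of action is detected. All gesture and mode names are illustrative assumptions rather than values from the disclosure.

```python
# Hypothetical gesture-to-mode mapping; all names are illustrative.
GESTURE_TO_MODE = {
    "fist_hold": "typing",
    "pinch_index_thumb": "writing",
    "open_palm": "drawing",
    "finger_snap": "one_handed",
}

class ModeManager:
    """Tracks the current input mode; the mode can change either from an
    explicit gesture or from the kind of action the user starts performing."""
    def __init__(self, default_mode="typing"):
        self.mode = default_mode

    def on_gesture(self, gesture):
        self.mode = GESTURE_TO_MODE.get(gesture, self.mode)

    def on_detected_action(self, action_kind):
        # e.g. detecting writing actions switches to writing mode
        # without interrupting text entry
        if action_kind in ("typing", "writing", "drawing", "one_handed"):
            self.mode = action_kind
```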
  • the mode determination may be made based on a signal received from the AR system 201.
  • the neuromuscular activity system 202 may be configured to operate in a mode determined in response to receiving a signal from the AR system.
  • the AR system 201 may generate the signal in response to detection of an event for which input within an AR environment provided by the AR system is desired. For example, text input may be desired to complete a portion of a form presented in a user interface displayed in the AR environment. Presentation of the form may trigger a signal to be generated by the AR system indicating that text input is desired.
  • the signal may identify the various modes that are available for providing the input.
  • the AR system 201 may communicate the signal to the neuromuscular activity system 202 and the neuromuscular activity system 202 may switch to a particular available mode to provide the text input.
  • the input to be provided to the AR system 201 may be identified based on the raw or processed signals and/or information based on the recorded signals (e.g., handstate and/or muscular activation state information).
  • the one or more computer processors of system 100 may be programmed to identify the input based on the sensor signals, the handstate information, detection of a gesture or muscular activation state, and/or a combination of any of the foregoing.
  • the input to be provided to the AR system 201 may be further identified based on the current mode of the neuromuscular activity system 202.
  • input to be provided to the AR system 201 for the typing mode may be identified by identifying one or more tapping or typing actions performed by a user based on the sensor signals and/or information based on the sensor signals. For example, tapping or typing actions performed on a surface of a physical keyboard or a surface that has a virtual keyboard projected thereon by the AR system may be identified based on the sensor signals and text input for the typing mode may be identified based on the tapping/typing actions.
  • FIG. 9A depicts a user performing typing actions on a physical keyboard 902 placed on a table.
  • Input to be provided to the AR system may be identified based on the neuromuscular signals and/or muscular activation state(s) associated with these typing actions (as detected by wearable portion 810) and indications of the identified input may be displayed to the user via the virtual headset 904.
  • input to be provided to the AR system 201 for the writing mode may be identified by identifying one or more writing actions performed by the user based on the sensor signals and/or information based on the sensor signals. For example, writing actions performed on a surface with a physical writing implement, a virtual writing implement and/or fingertip(s) of the user may be identified based on the sensor signals and text input for the writing mode may be identified based on the writing actions.
  • FIG. 9B depicts a user performing writing actions on an optional tablet device 912 using an optional stylus 910.
  • Input to be provided to the AR system may be identified based on the neuromuscular signals and/or muscular activation state(s) associated with these writing actions (as detected by the wearable portion 810) and indications of the identified input may be displayed to the user via the virtual headset 904.
  • input to be provided to the AR system for the drawing mode may be identified by identifying one or more drawing actions (e.g., drawing a number of line segments and/or curves on a surface) performed by the user based on the sensor signals and/or information based on the sensor signals.
  • input for the drawing mode may include text input and/or drawing input.
  • the input for the drawing mode may include one or more line segments and/or curves.
  • the input for the drawing mode may include input determined based on a sequence of pixel positions controlled by the drawing actions performed by the user.
  • FIG. 9C depicts a user performing drawing actions in mid-air (i.e., without using any writing instruments).
  • Input to be provided to the AR system may be identified based on the neuromuscular signals and/or muscular activation state(s) associated with these drawing actions (as detected by wearable portion 810) and indications of the identified input may be displayed to the user via the virtual headset 904.
  • an auxiliary sensor (e.g., a camera) may capture additional information (e.g., the position of the hand) that can be used in combination with the neuromuscular signals to identify the input.
  • both the text input and the drawing may be identified based on the drawing actions performed by the user.
  • processing of the sensor signals may be performed by multiple processors.
  • the neuromuscular sensors may be configured to communicate at least some of the sensor signals to a first computer processor and a second computer processor, where drawings may be identified by the first computer processor and text input (e.g., handwriting) may be identified by the second computer processor.
  • the text input and the drawing from the first and second computer processors may be combined such that the text overlays or annotates the drawing, or is stored as metadata for later processing (e.g., search and filtering).
  • the drawing may be identified based on the drawing actions performed by the user and the text input may be identified from the drawing.
  • the drawing may be identified from the sensor signals and text may be identified from the drawing by running a handwriting recognition process on the drawing.
  • input to be provided to the AR system for the one-handed mode may be identified by identifying one or more one-handed actions (for example, squeezing, pinching, and/or tapping of various fingers and combinations of fingers) performed by the user based on the sensor signals and/or information based on the sensor signals.
  • Text input for the one-handed mode may be identified based on the one-handed actions.
  • one or more gestures may be identified in addition to the typing/tapping, writing, drawing, and/or one-handed actions to allow editing and/or correction of identified text.
  • one or more delete gestures may be recognized in addition to writing actions (based on which text input is identified) that allow deletion of identified letters or words in the text input.
  • the one or more delete gestures may include a gesture to delete a single letter, a gesture to delete a previous word, and/or a gesture to delete a selected word.
  • the selection of the word to be deleted may be accomplished using neuromuscular controls, for example, cursor navigation.
  • the one or more delete gestures may involve manipulating an object being held by a user (e.g., a stylus or pencil).
  • the one or more delete gestures may include flipping the object, such as a pencil, to an eraser position and then swiping or pressing an imaginary button on the object with a particular finger to initiate deletion of one or more letters or words.
  • one or more gestures may be identified and combined with recognizing text input to allow the user to compose longer sequences of text without having to physically move his hand (e.g., to the right or down a virtual page in a virtual document). For example, a swipe or flick in a particular direction may be used as a newline gesture and a “pen up” motion may be used for space or word breaks.
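A minimal sketch of applying such editing gestures to the text identified so far is given below; the gesture names and their effects are assumptions chosen to match the examples above.

```python
class TextBuffer:
    """Applies editing gestures recognized from neuromuscular signals to the
    text identified so far. Gesture names are illustrative assumptions."""
    def __init__(self):
        self.text = ""

    def append(self, chars):
        self.text += chars

    def apply_gesture(self, gesture):
        if gesture == "delete_letter":
            self.text = self.text[:-1]
        elif gesture == "delete_word":
            self.text = self.text.rsplit(" ", 1)[0] if " " in self.text else ""
        elif gesture == "pen_up":             # word break / space
            self.text += " "
        elif gesture == "newline_swipe":      # new line without moving the hand
            self.text += "\n"
```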
  • the input identified in act 406 may be provided to the AR system 201.
  • Text input and/or drawing input identified based on the sensor signals and/or information based on the sensor signals may be provided to the AR system 201.
  • the one or more computer processors of system 100 may identify and provide the input to the AR system.
  • the neuromuscular activity system 202 may switch between different modes, for example, typing, writing, drawing, and/or one-handed modes, for providing input. For example, a user may provide text-based input by tapping on a surface of a physical keyboard, writing on a surface with a stylus, swiping through a virtual swipe keyboard projected in the AR environment, or using a custom movement-free mapping from neuromuscular signals to text.
  • the different forms of text input may be identified by the neuromuscular activity system 202 and provided to the AR system 201, where the AR system receives the different forms of text input via a common application programming interface (API), such as a common text API.
  • the input to be provided to the AR system 201 may be identified from multiple sources, where the sources may include the neuromuscular signals and at least one source other than the neuromuscular signals.
  • the at least one source may include a physical input device such as a physical keyboard or stylus.
  • Input received from the multiple sources may be combined and the combined input may be provided to the AR system 201.
  • the common API may receive input from the multiple sources.
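The idea of a common text API that accepts input from multiple sources could look roughly like the following sketch; the class and method names are hypothetical.

```python
class CommonTextAPI:
    """Single entry point through which an AR application could receive text,
    whether it was identified from neuromuscular signals, a physical keyboard,
    or a stylus. The class and method names are hypothetical."""
    def __init__(self, on_text):
        self.on_text = on_text            # callback into the AR application

    def submit(self, text, source):
        # `source` might be "emg", "physical_keyboard", "stylus", ...
        self.on_text(text, source)

# Both the neuromuscular system and a physical keyboard driver call the same
# API, so downstream handling and visual feedback stay uniform across sources.
api = CommonTextAPI(on_text=lambda text, source: print(f"[{source}] {text}"))
api.submit("hello", source="emg")
api.submit("world", source="physical_keyboard")
```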
  • visual feedback provided by the AR system may continue regardless of the source, the mode or the form of text entry.
  • the neuromuscular activity system 202 may learn to emulate the physical input devices using the neuromuscular signals, thereby allowing seamless switching between the physical input devices and their virtual emulations.
  • FIG. 7A illustrates a wearable system with sixteen neuromuscular sensors 710 (e.g., EMG sensors) arranged circumferentially around an elastic band 720 configured to be worn around a user’s lower arm or wrist. As shown, EMG sensors 710 are arranged circumferentially around elastic band 720. It should be appreciated that any suitable number of neuromuscular sensors may be used. The number and arrangement of neuromuscular sensors may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband can be used to generate control information for controlling an augmented reality system, controlling a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or performing any other suitable control task.
  • sensors 710 include a set of neuromuscular sensors (e.g., EMG sensors). In other embodiments, sensors 710 can include a set of neuromuscular sensors and at least one “auxiliary” sensor configured to continuously record auxiliary signals. Examples of auxiliary sensors include, but are not limited to, other sensors such as IMU sensors, microphones, imaging sensors (e.g., a camera), radiation-based sensors for use with a radiation-generation device (e.g., a laser-scanning device), or other types of sensors such as a heart-rate monitor. As shown, the sensors 710 may be coupled together using flexible electronics 730 incorporated into the wearable device. FIG. 7B illustrates a cross-sectional view through one of the sensors 710 of the wearable device shown in FIG. 7A.
  • the output of one or more of the sensing components can be optionally processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification).
  • at least some signal processing of the output of the sensing components can be performed in software.
  • signal processing of signals sampled by the sensors can be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect.
  • a non-limiting example of a signal processing chain used to process recorded data from sensors 710 is discussed in more detail below in connection with FIGS. 8A and 8B.
  • FIGS. 8A and 8B illustrate a schematic diagram with internal components of a wearable system with sixteen EMG sensors, in accordance with some embodiments of the technology described herein.
  • the wearable system includes a wearable portion 810 (FIG. 8A) and a dongle portion 820 (FIG. 8B) in communication with the wearable portion 810 (e.g., via Bluetooth or another suitable short-range wireless communication technology).
  • the wearable portion 810 includes the sensors 710, examples of which are described in connection with FIGS. 7A and 7B.
  • the output of the sensors 710 is provided to analog front end 830, which is configured to perform analog processing (e.g., noise reduction, filtering, etc.) on the recorded signals. The processed analog signals are then digitized and provided to a microcontroller (MCU), which may also receive inputs from other sensors (e.g., an IMU sensor) and from a power and battery module 842. The output of the processing performed by the MCU may be provided to antenna 850 for transmission to dongle portion 820 shown in FIG. 8B.
  • Dongle portion 820 includes antenna 852 configured to communicate with antenna 850 included as part of wearable portion 810. Communication between antennas 850 and 852 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radiofrequency signaling and Bluetooth. As shown, the signals received by antenna 852 of dongle portion 820 may be provided to a host computer for further processing, display, and/or for effecting control of a particular physical or virtual object or objects.
  • while FIGS. 8A and 8B are discussed in the context of interfaces with EMG sensors, it is understood that the techniques described herein for reducing electromagnetic interference can also be implemented in wearable interfaces with other types of sensors including, but not limited to, mechanomyography (MMG) sensors, sonomyography (SMG) sensors, and electrical impedance tomography (EIT) sensors.
  • techniques described herein for providing input to an AR system can also be implemented within VR, MR or XR systems.
  • the disclosure is not limited to the use of typing, writing, drawing, and/or one-handed modes or identifying input based on tapping/typing actions, writing actions, drawing actions, and/or one-handed actions, and other modes or actions can be used.
  • two-handed actions other than typing, tapping, writing, or drawing on a surface, such as combinations of fingertip squeezes, hand gestures, or finger movements on both hands, may be used without departing from the scope of this disclosure.
  • the above-described embodiments can be implemented in any of numerous ways.
  • the embodiments may be implemented using hardware, software or a combination thereof.
  • the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
  • any component or collection of components that perform the functions described above can be generically considered as one or more controllers that control the above-discussed functions.
  • the one or more controllers can be implemented in numerous ways, such as with dedicated hardware or with one or more processors programmed using microcode or software to perform the functions recited above.
  • one implementation of the embodiments of the present invention comprises at least one non-transitory computer-readable storage medium (e.g., a computer memory, a portable memory, a compact disk, etc.) encoded with a computer program (i.e., a plurality of instructions), which, when executed on a processor, performs the above-discussed functions of the embodiments of the present invention.
  • the computer-readable storage medium can be transportable such that the program stored thereon can be loaded onto any computer resource to implement the aspects of the present invention discussed herein.
  • the reference to a computer program which, when executed, performs the above-discussed functions is not limited to an application program running on a host computer. Rather, the term computer program is used herein in a generic sense to reference any type of computer code (e.g., software or microcode) that can be employed to program a processor to implement the above-discussed aspects of the present invention.
  • embodiments of the invention may be implemented as one or more methods, of which an example has been provided.
  • the acts performed as part of the method(s) may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • any use of the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
  • any use of the phrase “equal” or “the same” in reference to two values means that the two values are the same within manufacturing tolerances. Thus, two values being equal, or the same, may mean that the two values are different from one another by ±5%.
  • references to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • “or” should be understood to have the same meaning as “and/or” as defined above.
  • “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements.

Abstract

Methods, systems, and kits for providing input to an augmented reality system or extended reality system based, at least in part, on neuromuscular signals. The systems, methods, and kits comprise detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that a computerized system is in a mode configured to provide input to the augmented reality system; identifying, based at least in part on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.

Description

NEUROMUSCULAR TEXT ENTRY, WRITING AND DRAWING IN AUGMENTED
REALITY SYSTEMS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit under 35 U.S.C. § 119(e) of U.S.
Provisional Patent Application Serial No. 62/734,138, filed September 20, 2018, entitled “NEUROMUSCULAR TEXT ENTRY, WRITING AND DRAWING IN AUGMENTED REALITY SYSTEMS,” the entire contents of which is incorporated by reference herein.
BACKGROUND
[0002] Augmented reality (AR) systems provide users with an interactive experience of a real-world environment supplemented with virtual information by overlaying computer generated perceptual or virtual information on aspects of the real-world environment.
Various techniques exist for controlling operations of an AR system. Typically, one or more input devices, such as a controller, a keyboard, a mouse, a camera, a microphone, and the like, may be used to control operations of the AR system. For example, a user may manipulate a number of buttons on an input device, such as a controller or a keyboard, to effectuate control of the AR system. In another example, a user may use voice commands to control operations of the AR system. Current techniques for controlling operations of an AR system are limited in a number of respects, and improved techniques are needed.
SUMMARY
[0003] Some embodiments are directed to coupling a system that senses
neuromuscular signals via neuromuscular sensors with a system that performs extended reality (XR) functions. As will be appreciated, XR functions may include augmented reality (AR) functions, virtual reality (VR) functions, mixed reality (MR) functions, and the like. In particular, a system that senses neuromuscular signals may be used in conjunction with an XR system to provide an improved XR experience for a user. Neuromuscular signals may be used directly as an input to an XR system (e.g., by using motor unit action potentials as an input signal) and/or the neuromuscular signals may be processed (including by using an inference model as described herein) for the purpose of determining a movement, force, and/or position of a part of the user’s body (e.g., the fingers, hand, and wrist). For instance, information gained within both systems may be used to improve the overall XR experience. In embodiments where a musculoskeletal representation associated with the body part is generated based on sensor data, a camera in an XR system may capture data that is used to improve the accuracy of a model of the musculoskeletal representation and/or used to calibrate the model. Further, in another implementation, muscle activation data may be visualized and displayed to a user in an XR environment. In yet another example, display information in the XR environment may be used as feedback to the user to permit the user to more accurately control their musculoskeletal input to the system. Further, control features may be provided that permit neuromuscular signals to control XR system elements including operation of the XR system itself. In addition, various forms of input (e.g., text, writing, and/or drawing) identified based on the neuromuscular signals may be provided as input to the XR system, as well as inputs to the XR system based on specific gestures.
[0004] According to aspects of the technology described herein, a computerized system for providing input to an extended reality system is provided. The system may comprise one or more neuromuscular sensors configured to detect a plurality of neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; and at least one computer processor. The at least one computer processor may be programmed to: determine that the computerized system is in a mode configured to provide an input to the extended reality system; identify the input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals; and provide the identified input to the extended reality system.
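By way of non-limiting illustration only, the following Python sketch outlines the three programmed steps recited above (determining the mode, identifying the input, and providing the identified input to the extended reality system); the class, method, and attribute names are hypothetical and do not correspond to any particular implementation described in this disclosure.

```python
# Illustrative sketch only; the sensor and XR-system interfaces are hypothetical placeholders.
class NeuromuscularInputProvider:
    def __init__(self, sensors, xr_system):
        self.sensors = sensors      # neuromuscular sensors on one or more wearable devices
        self.xr_system = xr_system  # extended reality system that receives the input

    def step(self):
        # Detect a plurality of neuromuscular signals from the user.
        signals = self.sensors.read()
        # Determine that the system is in a mode configured to provide input.
        mode = self.determine_mode(signals)
        # Identify the input based on the signals and/or information derived from them.
        user_input = self.identify_input(signals, mode)
        # Provide the identified input to the extended reality system.
        if user_input is not None:
            self.xr_system.provide_input(user_input)

    def determine_mode(self, signals):
        ...  # e.g., from a detected gesture, a user-interface selection, or an XR-system signal

    def identify_input(self, signals, mode):
        ...  # mode-dependent identification, as described in the detailed description below
```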
[0005] In an aspect, the mode is determined based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals. [0006] In an aspect, the mode is determined based on a gesture detected from the user, wherein the gesture is identified based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
[0007] In an aspect, the mode is determined by receiving a selection of the mode from a user interface displayed in an extended reality environment provided by the extended reality system, wherein the user interface is configured to identify a plurality of modes from which the user may select.
[0008] In an aspect, the mode is determined based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
[0009] In an aspect, the mode is determined in response to receiving a signal from the extended reality system. In an aspect, the signal is generated by the extended reality system in response to detection of an event for which input within an extended reality environment provided by the extended reality system is desired.
[0010] In an aspect, the mode comprises a typing mode and identifying the input comprises identifying one or more tapping or typing actions based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
[0011] In an aspect, the computerized system further comprises a physical keyboard or a virtual keyboard.
[0012] In an aspect, identifying one or more tapping or typing actions comprises detecting the user tapping or typing on the physical keyboard or the virtual keyboard.
[0013] In an aspect, the mode comprises a writing mode and identifying the input comprises identifying one or more writing actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
[0014] In an aspect, the computerized system further comprises a physical stylus, physical writing implement, virtual stylus and/or virtual writing implement. [0015] In an aspect, identifying the one or more writing actions comprises detecting the user using the physical stylus, physical writing implement, virtual stylus and/or virtual writing implement.
[0016] In an aspect, the one or more writing actions are identified as detected in mid-air.
[0017] In an aspect, the mode comprises a drawing mode and identifying the input comprises identifying one or more drawing actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
[0018] In an aspect, the one or more drawing actions detected from the user comprise a plurality of line segments and/or curves.
[0019] In an aspect, identifying the input for the drawing mode comprises identifying input as drawings and/or text.
[0020] In an aspect, the at least one computer processor is further programmed to combine drawing and text inputs such that the text overlays or annotates the drawing.
[0021] In an aspect, the mode comprises a one-handed mode and identifying the input comprises identifying one or more one-handed actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
[0022] In an aspect, the extended reality system is configured to display an indication of the identified input to the user. In an aspect, the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
[0023] In an aspect, the computerized system further comprises a visual display or interface configured to present one or more suggested or predicted words or phrases for text input. [0024] In an aspect, the computerized system further comprises a visual display or interface configured to present one or more virtual ink marks associated with one or more strokes as detected from the user.
[0025] In an aspect, the computerized system further comprises a visual display or interface configured to present a drawing as identified based on one or more drawing actions detected from the user.
[0026] In an aspect, the computerized system further comprises at least one inertial measurement unit (IMU) sensor, wherein identifying the input comprises identifying the input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
[0027] In an aspect, the computerized system further comprises at least one camera, wherein identifying the input comprises identifying the input based, at least in part, on at least one output signal associated with the at least one camera.
[0028] In an aspect, the mode comprises a first mode, and the at least one computer processor is further programmed to: identify a second input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of
neuromuscular signals, wherein the second input is further identified based, at least in part, on a second mode; and provide the identified second input to the extended reality system.
[0029] In an aspect, the one or more wearable devices comprises a first wearable device configured to detect neuromuscular signals from a first arm of the user and a second wearable device configured to detect neuromuscular signals from a second arm of the user.
[0030] In an aspect, the extended reality system is an augmented reality system.
[0031] According to aspects of the technology described herein, a method performed by a computerized system for providing input to an extended reality system is provided. The method comprises detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, a plurality of neuromuscular signals from a user; determining that the computerized system is in a mode configured to provide input to the extended reality system; identifying the input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals; and providing the identified input to the extended reality system.
[0032] According to aspects of the technology described herein, a system for providing one or more inputs to an extended reality (XR) system is provided. The system comprises one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; one or more auxiliary sensors configured to detect information regarding a physiological state and/or behavior from the user; and at least one computer processor. The at least one computer processor is programmed to: determine that the system is in a mode configured to provide one or more inputs to the XR system; associate the neuromuscular signals with the information detected from the one or more auxiliary sensors; process the neuromuscular signals and/or the information detected from the one or more auxiliary sensors using one or more inference models; identify the one or more inputs based on the processed neuromuscular signals and/or the processed information detected from the one or more auxiliary sensors; and provide the identified one or more inputs to the XR system.
[0033] According to aspects of the technology described herein, a kit for use with an extended reality (XR) system is provided. The kit comprises one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; one or more auxiliary sensors configured to detect information regarding a physiological state and/or behavior from the user; and at least one storage medium storing instructions. The instructions, when executed by at least one computer processor, cause the at least one computer processor to: process the neuromuscular signals from the neuromuscular sensors, process the information detected from the one or more auxiliary sensors, identify one or more user inputs based on the processed neuromuscular signals and/or the processed information detected from the one or more auxiliary sensors, and communicate the identified one or more user inputs to the XR system.
[0034] According to aspects of the technology described herein, a computerized system for providing input to an augmented reality system based, at least in part, on neuromuscular signals is provided. The system comprises a plurality of neuromuscular sensors configured to record a plurality of neuromuscular signals from a user, wherein the plurality of neuromuscular sensors are arranged on one or more wearable devices; and at least one computer processor. The at least one computer processor is programmed to:
determine that the computerized system is in a mode configured to provide input including text to the augmented reality system; identify, based at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and provide the identified input to the augmented reality system.
[0035] In an aspect, the mode comprises a typing mode and identifying the input comprises identifying one or more tapping or typing actions based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying text input for the typing mode based on the one or more tapping or typing actions.
[0036] In an aspect, identifying the one or more tapping or typing actions comprises identifying the one or more tapping or typing actions on a surface of a physical keyboard.
[0037] In an aspect, identifying the one or more tapping or typing actions comprises identifying the one or more tapping or typing actions on a surface that has a virtual keyboard projected thereon by the augmented reality system.
[0038] In an aspect, the mode comprises a writing mode and identifying the input comprises identifying one or more writing actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying text input for the writing mode based on the one or more writing actions.
[0039] In an aspect, identifying the one or more writing actions performed by the user comprises identifying the one or more writing actions performed on a surface with a physical stylus, a physical writing implement, or fingertip or fingertips of the user.
[0040] In an aspect, identifying the one or more writing actions performed by the user comprises identifying the one or more writing actions performed on a surface with a virtual stylus or virtual writing implement. [0041] In an aspect, identifying the one or more writing actions performed by the user comprises identifying the one or more writing actions performed in mid-air.
[0042] In an aspect, the mode comprises a drawing mode and identifying the input comprises identifying one or more drawing actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying the input for the drawing mode based on the one or more drawing actions.
[0043] In an aspect, the input for the drawing mode comprises a plurality of line segments and/or curves.
[0044] In an aspect, the input for the drawing mode comprises input determined based on a sequence of pixel positions controlled by the one or more drawing actions performed by the user.
[0045] In an aspect, identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions performed by the user; and identifying the text based on the one or more drawing actions performed by the user.
[0046] In an aspect, the at least one computer processor is further programmed to combine the drawing and the text such that the text overlays or annotates the drawing.
[0047] In an aspect, identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions performed by the user; and identifying the text from the drawing.
[0048] In an aspect, the mode comprises a one-handed mode and identifying the input comprises identifying one or more one-handed actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and identifying the input for the one-handed mode based on the one or more one-handed actions.
[0049] In an aspect, determining that the computerized system is in the mode configured to provide the input comprises receiving a user selection of the mode. [0050] In an aspect, receiving the user selection of the mode comprises receiving the user selection from a user interface displayed in an augmented reality environment provided by the augmented reality system, wherein the user interface is configured to identify a plurality of modes from which the user may select.
[0051] In an aspect, determining that the computerized system is in the mode configured to provide the input comprises determining the mode from the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
[0052] In an aspect, determining the mode comprises identifying at least one gesture performed by the user based on the plurality of neuromuscular signals and/or the
information based on the plurality of neuromuscular signals; and determining the mode corresponding to the at least one gesture.
[0053] In an aspect, determining the mode comprises determining the mode based on one or more typing, writing, drawing actions, or one-handed actions performed by the user.
[0054] In an aspect, determining that the computerized system is in the mode configured to provide the input comprises determining the mode in response to receiving a signal from the augmented reality system. In an aspect, the signal is generated at the augmented reality system in response to detection of an event for which input within an augmented reality environment provided by the augmented reality system is desired.
[0055] In an aspect, the augmented reality system is configured to display an indication of the identified input to the user.
[0056] In an aspect, the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions performed by the user.
[0057] In an aspect, the indication of the identified input comprises a listing of one or more suggested or predicted words or phrases for the text input.
[0058] In an aspect, the indication of the identified input comprises one or more virtual ink marks associated with one or more strokes made by a writing implement. [0059] In an aspect, the indication of the identified input comprises a drawing identified based on one or more drawing actions performed by the user.
[0060] In an aspect, the indication is displayed via a user interface presented within an augmented reality environment provided by the augmented reality system.
[0061] In an aspect, the indication is rendered onto a surface that the user is interacting with by the augmented reality system.
[0062] In an aspect, the computerized system further comprises at least one inertial measurement unit (IMU) sensor, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
[0063] In an aspect, the computerized system further comprises at least one camera, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one camera.
[0064] In an aspect, the mode comprises a first mode, and wherein the at least one computer processor is further programmed to: identify, based at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, a second input, wherein the second input is further identified based, at least in part, on a second mode; and provide the identified second input to the augmented reality system.
[0065] In an aspect, the identified input provided to the augmented reality system comprises input identified from a plurality of sources, wherein the plurality of sources include the plurality of neuromuscular signals and at least one source other than the plurality of neuromuscular signals.
[0066] According to aspects of the technology described herein, a method performed by a computerized system for providing input to an augmented reality system based, at least in part, on neuromuscular signals is provided. The method comprises recording, using a plurality of neuromuscular sensors arranged on one or more wearable devices, a plurality of neuromuscular signals from a user; determining that the computerized system is in a mode configured to provide input including text to the augmented reality system; identifying based, at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.
[0067] According to aspects of the technology described herein, a computerized system for providing input to an augmented reality system is provided. The computerized system comprises one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; and at least one computer processor. The at least one computer processor is programmed to: determine that the computerized system is in a mode configured to provide input including text to the augmented reality system, wherein the mode is determined based on the neuromuscular signals and/or information based on the neuromuscular signals, and the mode is selected from the group consisting of a typing mode, a writing mode, a drawing mode, and a one-handed mode; identify, based at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and provide the identified input to the augmented reality system.
[0068] In an aspect, the mode is determined based on a gesture as detected from the user based on the neuromuscular signals and/or information based on the neuromuscular signals.
[0069] In an aspect, the mode comprises a typing mode and identifying the input comprises identifying one or more tapping or typing actions based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying text input for the typing mode based on the one or more tapping or typing actions.
[0070] In an aspect, identifying the one or more tapping or typing actions comprises identifying the one or more tapping or typing actions on a surface of a physical keyboard or a surface that has a representation of a keyboard projected thereon. [0071] In an aspect, the mode comprises a writing mode and identifying the input comprises identifying one or more writing actions detected from the user based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying text input for the writing mode based on the one or more writing actions.
[0072] In an aspect, identifying the one or more writing actions detected from the user comprises identifying the one or more writing actions performed on a surface with a physical stylus, physical writing implement, virtual stylus, or virtual writing implement.
[0073] In an aspect, identifying the one or more writing actions detected from the user comprises identifying the one or more writing actions as detected in mid-air.
[0074] In an aspect, the mode comprises a drawing mode and identifying the input comprises identifying one or more drawing actions detected from the user based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying the input for the drawing mode based on the one or more drawing actions.
[0075] In an aspect, the one or more drawing actions comprises a plurality of line segments and/or curves.
[0076] In an aspect, identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions detected from the user; and identifying the text based on the one or more drawing actions detected from the user.
[0077] In an aspect, the at least one computer processor is further programmed to combine the drawing and the text such that the text overlays or annotates the drawing.
[0078] In an aspect, identifying the input for the drawing mode comprises identifying a drawing based on the one or more drawing actions detected from the user; and identifying the text from the drawing.
[0079] In an aspect, the mode comprises a one-handed mode and identifying the input comprises identifying one or more one-handed actions detected from the user based, at least in part, on the neuromuscular signals and/or the information based on the neuromuscular signals; and identifying the input for the one-handed mode based on the one or more one-handed actions.
[0080] In an aspect, the augmented reality system is configured to display an indication of the identified input to the user.
[0081] In an aspect, the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
[0082] In an aspect, the indication of the identified input comprises one or more suggested or predicted words or phrases for the text input.
[0083] In an aspect, the indication of the identified input comprises one or more virtual ink marks associated with one or more strokes detected from the user.
[0084] In an aspect, the indication of the identified input comprises a drawing identified based on one or more drawing actions detected from the user.
[0085] In an aspect, the indication is displayed via a user interface presented within an augmented reality environment provided by the augmented reality system.
[0086] In an aspect, the indication is rendered onto a surface that the user is interacting with by the augmented reality system.
[0087] In an aspect, the computerized system further comprises at least one inertial measurement unit (IMU) sensor, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
[0088] In an aspect, the computerized system further comprises at least one camera, wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one camera.
[0089] In an aspect, the mode comprises a first mode, and wherein the at least one computer processor is further programmed to: identify, based at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, a second input, wherein the second input is further identified based, at least in part, on a second mode; determine that the computerized system is to be switched from the first mode to the second mode; switch the computerized system from the first mode to the second mode in response to determining that the computerized system is to be switched from the first mode to the second mode; and provide the identified second input to the augmented reality system.
[0090] In an aspect, the identified input provided to the augmented reality system comprises input identified from a plurality of sources, wherein the plurality of sources include the neuromuscular signals and at least one source other than the neuromuscular signals.
[0091] In an aspect, the at least one source other than the neuromuscular signals comprises at least one physical input device, and the identified input provided to the augmented reality system comprises a combination of the input identified from the plurality of sources.
[0092] In an aspect, the one or more wearable devices comprises a first wearable device configured to detect neuromuscular signals from a first arm of the user and a second wearable device configured to detect neuromuscular signals from a second arm of the user.
[0093] According to aspects of the technology described herein, a method performed by a computerized system for providing input to an augmented reality system is provided. The method comprises detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that the computerized system is in a mode configured to provide input including text to the augmented reality system, wherein the mode is determined based on the neuromuscular signals and/or information based on the neuromuscular signals, and the mode is selected from the group consisting of a typing mode, a writing mode, a drawing mode, and a one-handed mode; identifying based, at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system. [0094] In an aspect, the mode is determined based on a gesture detected from the user, wherein the gesture is detected based on the neuromuscular signals and/or information based on the neuromuscular signals.
[0095] According to aspects of the technology described herein, a non-transitory computer-readable medium encoded with instructions that, when executed by at least one computer processor, cause the at least one computer processor to perform a method comprising: detecting, using a plurality of neuromuscular sensors arranged on one or more wearable devices, neuromuscular signals from a user; determining that a computerized system is in a mode configured to provide input including text to an augmented reality system, wherein the mode is determined based on the neuromuscular signals and/or information based on the neuromuscular signals, and the mode is selected from the group consisting of a typing mode, a writing mode, a drawing mode, and a one-handed mode; identifying based, at least in part, on the neuromuscular signals and/or information based on the neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and providing the identified input to the augmented reality system.
[0096] It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein.
BRIEF DESCRIPTION OF DRAWINGS
[0097] Various non-limiting embodiments of the technology will be described with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale.
[0098] FIG. 1 is a schematic diagram of a computer-based system for processing neuromuscular sensor data in accordance with some embodiments of the technology described herein; [0099] FIG. 2 is a schematic diagram of a distributed computer-based system that integrates an augmented reality (AR) system with a neuromuscular activity system in accordance with some embodiments of the technology described herein;
[0100] FIG. 3 is a flowchart of a process for providing input to an AR system in accordance with some embodiments of the technology described herein;
[0101] FIG. 4 is a flowchart of a process for providing input to an AR system based on one or more neuromuscular signals in accordance with some embodiments of the technology described herein;
[0102] FIG. 5 illustrates a wristband having EMG sensors arranged circumferentially thereon, in accordance with some embodiments of the technology described herein; and
[0103] FIG. 6 illustrates a user wearing the wristband of FIG. 5 while typing on a keyboard in accordance with some embodiments of the technology described herein.
[0104] FIG. 7A illustrates a wearable system with sixteen EMG sensors arranged circumferentially around an elastic band configured to be worn around a user’s lower arm or wrist, in accordance with some embodiments of the technology described herein.
[0105] FIG. 7B is a cross-sectional view through one of the sixteen EMG sensors illustrated in FIG. 7A.
[0106] FIGS. 8 A and 8B schematically illustrate components of a computer-based system on which some embodiments are implemented. FIG. 8A illustrates a wearable portion of the computer-based system and FIG. 8B illustrates a dongle portion connected to a computer, wherein the dongle portion is configured to communicate with the wearable portion.
[0107] FIGS. 9A-9C depict exemplary scenarios in which user input may be provided to an XR system in accordance with some embodiments of the technology described herein.
DETAILED DESCRIPTION
[0108] The inventors have developed novel techniques for providing input to extended reality (XR) systems, which include, inter alia, augmented reality (AR), virtual reality (VR), and mixed reality (MR) systems. Various embodiments described herein offer certain advantages, including, but not limited to, avoiding the use of an undesirable or burdensome physical keyboard, joystick, or other controller; overcoming issues associated with time-consuming and high-latency processing of low-quality images of the user captured by a camera; allowing for the capture and detection of subtle, small, or fast movements and/or variations in force exerted by a user (e.g., varying amounts of force exerted through a stylus, writing instrument, or finger being pressed against a surface) that can be important for resolving text input and other control signals; collecting and analyzing various physiological and/or behavioral information detected from the user that enhances the identification process and is not readily obtained by conventional input devices; allowing instances where the user’s hand is obscured or outside the camera’s field of view, e.g., in the user’s pocket, or while the user is wearing a glove; and allowing better user operability and navigability within the XR environment.
[0109] Other embodiments account for scenarios in which an individual either does not have access to input devices or may otherwise want to provide input to the XR system without the use of input devices. For example, an individual may want to provide input to the XR system in a covert manner without being noticed by other individuals.
[0110] In accordance with some embodiments, signals recorded or detected from wearable sensors are used to identify and provide input to an XR system. Various forms of input, for example, discrete control signals, continuous (e.g., 2D) control signals, text entry via a keyboard or other mode of text entry, writing, and/or drawing, may be identified from the recorded or detected signals and/or information based on the recorded or detected signals to enable improved techniques for providing input (such as text) to the XR system. In some embodiments, various forms of input may be identified based on a mode of the system that senses signals via the wearable sensors and provides input to the XR system. The user can manually, or the system can automatically, switch between input modes based, at least in part, on neuromuscular data detected from the user. In one embodiment, the system can enter a typing mode and can identify text from the user to be provided to the XR system based on one or more tapping or typing actions performed by the user (e.g., tapping on a surface of a physical keyboard, tapping on a surface that has a virtual keyboard projected thereon by the XR system, tapping on a surface that does not have a keyboard projected on it, or performing gestures in mid-air that correspond to typing-style movements). The systems and methods described herein can identify text input from the user based on the recorded or detected signals and/or information based on the recorded or detected signals. In another embodiment, the system can enter a writing mode and text input can be provided to the XR system by identifying one or more writing actions performed by the user (e.g., writing on a surface with a physical or virtual writing implement) based on the recorded or detected signals and/or information based on the recorded or detected signals. In yet another embodiment, the system can enter a drawing mode and input can be provided to the XR system by identifying one or more drawing actions (e.g., drawing one or more line segments and/or curves on a surface) performed by the user based on the recorded or detected signals and/or information based on the recorded or detected signals. In another embodiment, the system can enter a one-handed mode (i.e., a mode where the user uses only one hand to provide input), and input can be provided to the XR system by identifying one or more one-handed actions (for example, gestures such as squeezing, pinching, and/or tapping of various fingers and combinations of fingers) performed by the user based on the recorded or detected signals and/or information based on the recorded or detected signals.
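For illustration only, a minimal Python sketch of the mode-dependent identification described in the preceding paragraph is shown below; the mode names mirror the typing, writing, drawing, and one-handed modes discussed above, while the per-mode decoder objects and their methods are hypothetical stand-ins for whichever recognition components an implementation may use.

```python
from enum import Enum, auto

class InputMode(Enum):
    TYPING = auto()      # taps/typing motions on a physical, projected, or absent keyboard
    WRITING = auto()     # writing motions on a surface or in mid-air
    DRAWING = auto()     # strokes forming line segments and/or curves
    ONE_HANDED = auto()  # single-hand gestures such as pinches, squeezes, and finger taps

def identify_input(signals, mode, decoders):
    """Dispatch neuromuscular signals (or information derived from them) to a
    mode-specific decoder; the decoder objects and methods are hypothetical."""
    if mode is InputMode.TYPING:
        return decoders["typing"].decode_keystrokes(signals)
    if mode is InputMode.WRITING:
        return decoders["writing"].decode_handwriting(signals)
    if mode is InputMode.DRAWING:
        return decoders["drawing"].decode_strokes(signals)
    if mode is InputMode.ONE_HANDED:
        return decoders["one_handed"].decode_gesture(signals)
    return None
```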
[0111] In some embodiments, the XR system may provide visual feedback by displaying an indication of the identified input to the user, which may facilitate text entry or other information provided as input to the XR system. The indication can be displayed via a user interface presented within an XR environment provided by the XR system. For example, a display associated with the XR system can overlay a visual representation of the identified input in the user interface or provide audio feedback to the user about the identified input. In some embodiments, the indication may be rendered by the AR system onto a surface with which the user is interacting.
[0112] In some embodiments, the system described herein senses signals via the wearable sensors and provides input to the XR system such that the system smoothly transitions from a first input mode to a second input mode without requiring an explicit mode switch instruction from the user. This provides for a flexible approach to providing input to the XR system. For example, the system described herein may be operating in a typing mode where the user is providing text input to the system by typing on a physical keyboard. The user may stop typing on the physical keyboard and resume providing text input by writing with a stylus. In response, the system may automatically detect the change in input mode and seamlessly switch from the typing mode to a writing mode. In some embodiments, the user may switch to different forms of text entry while the system is in the same mode. For example, the user may begin by typing on a physical keyboard, and resume text entry by typing on a virtual keyboard or using typing motions without any virtual representation of a keyboard. In this scenario, the manner in which the user is providing text input has changed even though the system remains in the typing mode. In some
embodiments, the visual feedback provided by the XR system may continue uninterrupted regardless of the mode or the form of text entry.
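As a non-limiting sketch of the automatic mode switching described above, the snippet below assumes a hypothetical action classifier that labels short windows of neuromuscular data with the kind of action being performed (e.g., typing versus writing with a stylus) and switches modes only when a different action is detected with sufficient confidence.

```python
# The classifier, its action labels, and the confidence threshold are illustrative assumptions.
ACTION_TO_MODE = {
    "tap_or_type": "typing",
    "write": "writing",
    "draw": "drawing",
    "one_handed_gesture": "one_handed",
}

def update_mode(current_mode, signal_window, action_classifier, min_confidence=0.8):
    """Switch input modes without an explicit mode-switch instruction from the user."""
    action, confidence = action_classifier.predict(signal_window)
    new_mode = ACTION_TO_MODE.get(action)
    if new_mode is not None and new_mode != current_mode and confidence >= min_confidence:
        return new_mode  # e.g., the user stops typing on a keyboard and starts writing
    return current_mode
```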
[0113] According to some embodiments, the input to be provided to the XR system may be identified, at least in part, from raw (e.g., unprocessed) sensor signals collected by one or more of the wearable sensors. In some embodiments, the input to be provided to the XR system may be identified, at least in part, from information based on the raw sensor signals (e.g., processed sensor signals), where the raw sensor signals collected by one or more of the wearable sensors are processed to perform amplification, filtering, rectification, and/or other forms of signal processing, examples of which are described in more detail below. In some embodiments, the input to be provided to the XR system may be identified, at least in part, from an output of one or more trained inference models that receive the sensor signals (or processed versions of the sensor signals) as input.
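As one non-limiting example of the signal processing mentioned in the preceding paragraph, the following sketch band-pass filters, rectifies, and smooths raw surface-EMG data to produce an amplitude envelope that could serve as the processed sensor signals; the filter orders, cut-off frequencies, and sampling rate are illustrative choices rather than values prescribed by this disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_emg(raw, fs=1000.0):
    """Raw EMG (channels x samples) -> band-pass filtered, rectified, smoothed envelope."""
    # Band-pass filter to a typical surface-EMG band (illustrative values).
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw, axis=-1)
    # Full-wave rectification.
    rectified = np.abs(filtered)
    # Low-pass filter the rectified signal to obtain a smooth amplitude envelope.
    b_env, a_env = butter(2, 5.0, btype="lowpass", fs=fs)
    return filtfilt(b_env, a_env, rectified, axis=-1)
```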
[0114] As described herein, in some embodiments, various muscular activation states may be identified directly from recorded or detected sensor data. In other
embodiments, handstates, gestures, postures, and the like (collectively or individually referred to as muscular activation states) may be identified based, at least in part, on the output of a trained inference model. In some embodiments, various forms of input can be provided to the AR system and may be identified directly from recorded sensor data. In other embodiments, the input can be provided to the AR system and may be identified based, at least in part, on the output of one or more trained inference models. In some embodiments, a trained inference model may output motor unit or muscle activations and/or position, orientation, and/or force estimates for segments of a computer-generated musculoskeletal model. In one example, all or portions of a human musculoskeletal system can be modeled as a multi-segment articulated rigid body system, with joints forming the interfaces between the different segments and joint angles defining the spatial relationships between connected segments in the model. Constraints on the movement at the joints are governed by the type of joint connecting the segments and the biological structures (e.g., muscles, tendons, ligaments) that restrict the range of movement at the joint. For example, the shoulder joint connecting the upper arm to the torso and the hip joint connecting the upper leg to the torso are ball and socket joints that permit extension and flexion movements as well as rotational movements. By contrast, the elbow joint connecting the upper arm and the forearm and the knee joint connecting the upper leg and the lower leg allow for a more limited range of motion. In this example, a multi-segment articulated rigid body system is used to model portions of the human musculoskeletal system. However, it should be appreciated that some segments of the human musculoskeletal system (e.g., the forearm), though approximated as a rigid body in the articulated rigid body system, may include multiple rigid structures (e.g., the ulna and radius bones of the forearm) that provide for more complex movement within the segment that is not explicitly considered by the rigid body model. Accordingly, a model of an articulated rigid body system for use with some embodiments of the technology described herein may include segments that represent a combination of body parts that are not strictly rigid bodies. It will be appreciated that physical models other than the multi-segment articulated rigid body system may be used to model portions of the human musculoskeletal system without departing from the scope of this disclosure.
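A minimal data-structure sketch of the multi-segment articulated rigid body model described above is given below; the segment and joint names, and the joint-angle limits, are hypothetical examples used only to illustrate how joint constraints might be encoded.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    length_m: float

@dataclass
class Joint:
    name: str
    parent_segment: str
    child_segment: str
    # Per-axis joint-angle limits (radians) encoding constraints imposed by the joint
    # type and the biological structures that restrict the range of movement.
    angle_limits: dict = field(default_factory=dict)

# Example: an elbow modeled as a hinge-like joint with a limited flexion range,
# in contrast to a ball-and-socket shoulder joint that also permits rotation.
elbow = Joint(
    name="elbow",
    parent_segment="upper_arm",
    child_segment="forearm",
    angle_limits={"flexion": (0.0, 2.6)},  # roughly 0-150 degrees; illustrative only
)
```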
[0115] Continuing with the example above, in kinematics, rigid bodies are objects that exhibit various attributes of motion (e.g., position, orientation, angular velocity, acceleration). Knowing the motion attributes of one segment of the rigid body enables the motion attributes for other segments of the rigid body to be determined based on constraints in how the segments are connected. For example, the hand may be modeled as a multi- segment articulated body with the joints in the wrist and each finger forming the interfaces between the multiple segments in the model. In some embodiments, movements of the segments in the rigid body model can be simulated as an articulated rigid body system in which position (e.g., actual position, relative position, or orientation) information of a segment relative to other segments in the model are predicted using a trained inference model, as described in more detail below.
[0116] In one non-limiting example, the portion of the human body approximated by a musculoskeletal representation is a hand or a combination of a hand with one or more arm segments. The information used to describe a current state of the positional relationships between segments, force relationships for individual segments or combinations of segments, and muscle and motor unit activation relationships between segments, in the
musculoskeletal representation is referred to herein as the “handstate” of the musculoskeletal representation. It should be appreciated, however, that the techniques described herein are also applicable to musculoskeletal representations of portions of the body other than the hand including, but not limited to, an arm, a leg, a foot, a torso, a neck, or any combination of the foregoing.
[0117] In addition to spatial (e.g., position and/or orientation) information, some embodiments are configured to predict force information associated with one or more segments of the musculoskeletal representation. For example, linear forces or rotational (torque) forces exerted by one or more segments may be estimated. Examples of linear forces include, but are not limited to, the force of a finger or hand pressing on a solid object such as a table, and a force exerted when two segments (e.g., two fingers) are pinched together. Examples of rotational forces include, but are not limited to, rotational forces created when segments in the wrist or fingers are twisted or flexed. In some embodiments, the force information determined as a portion of a current handstate estimate includes one or more of pinching force information, grasping force information, or information about co-contraction forces between muscles represented by the musculoskeletal representation.
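For illustration, one possible handstate representation combining the spatial and force information described in the preceding paragraphs is sketched below; the field names and units are hypothetical, and a given implementation may parameterize the handstate differently.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class HandState:
    # Spatial information: an angle (radians) for each modeled wrist/finger joint.
    joint_angles: Dict[str, float] = field(default_factory=dict)
    # Force information associated with segments or combinations of segments.
    pinch_force_n: float = 0.0   # e.g., thumb-index pinching force (newtons)
    grasp_force_n: float = 0.0   # whole-hand grasping force
    # Linear force of individual fingertips pressing against a solid object.
    fingertip_forces_n: Dict[str, float] = field(default_factory=dict)
```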
[0118] FIG. 1 illustrates an exemplary system 100, which comprises a
neuromuscular activity system, in accordance with some embodiments. The system includes one or more sensors 102 (e.g., neuromuscular sensors) configured to record signals arising from neuromuscular activity in skeletal muscle of a human body. The term “neuromuscular activity” as used herein refers to neural activation of spinal motor neurons that innervate a muscle, muscle activation, muscle contraction, or any combination of the neural activation, muscle activation, and/or muscle contraction. Neuromuscular sensors may include one or more electromyography (EMG) sensors, one or more mechanomyography (MMG) sensors, one or more sonomyography (SMG) sensors, a combination of two or more types of EMG sensors, MMG sensors, and SMG sensors, and/or one or more sensors of any suitable type that are configured to detect neuromuscular signals. In some embodiments, the
neuromuscular sensor(s) may be used to sense muscular activity related to a movement of the part of the body controlled by muscles from which the neuromuscular sensors are arranged to sense the muscle activity. Spatial information (e.g., position and/or orientation information) and force information describing the movement may be predicted based on the sensed neuromuscular signals as the user moves over time. In some embodiments, the neuromuscular sensor(s) may be used to sense muscular activity related to movement caused by external objects, for example, movement of a hand being pushed by an external object.
[0119] As the tension of a muscle increases during performance of a motor task, the firing rates of active neurons increase and additional neurons may become active, which is a process referred to as motor unit recruitment. The pattern by which neurons become active and increase their firing rate is stereotyped, such that the expected motor unit recruitment patterns define an activity manifold associated with standard or normal movement. Some embodiments record activation of a single motor unit or a group of motor units that are “off-manifold,” in that the pattern of motor unit activation is different than an expected or typical motor unit recruitment pattern. Such off-manifold activation is referred to herein as “sub-muscular activation” or “activation of a sub-muscular structure,” where a sub-muscular structure refers to the single motor unit or the group of motor units associated with the off-manifold activation. Examples of off-manifold motor unit recruitment patterns include, but are not limited to, selectively activating a high-threshold motor unit without activating a lower-threshold motor unit that would normally be activated earlier in the recruitment order and modulating the firing rate of a motor unit across a substantial range without modulating the activity of other neurons that would normally be co-modulated in typical motor recruitment patterns. In some embodiments, the neuromuscular sensor(s) may be used to sense sub-muscular activation(s) without observable movement. Sub-muscular activation(s) may be used, at least in part, to identify and provide input to an augmented reality system in accordance with some embodiments of the technology described herein.
[0120] In some embodiments, sensors or sensing components 102 include one or more neuromuscular sensors (e.g., EMG sensors). In other embodiments, sensors 102 include one or more auxiliary sensors such as Inertial Measurement Units (IMUs), which measure a combination of physical aspects of motion, using, for example, an accelerometer, a gyroscope, a magnetometer, or any combination of one or more accelerometers, gyroscopes and magnetometers, or any other components or devices capable of detecting spatiotemporal positioning, motion, force, or other aspects of a user’s physiological state and/or behavior. In some embodiments, IMUs may be used to sense information about the movement of the part of the body on which the IMU is attached and information derived from the sensed data (e.g., position and/or orientation information) may be tracked as the user moves over time. For example, one or more IMUs may be used to track movements of portions of a user’s body proximal to the user’s torso relative to the sensor (e.g., arms, legs) as the user moves over time. In other embodiments, sensors 102 include a plurality of neuromuscular sensors and at least one auxiliary sensor configured to continuously record a plurality of auxiliary signals. Examples of other auxiliary sensors include, but are not limited to, microphones, imaging devices (e.g., a camera), radiation-based sensors for use with a radiation-generation device (e.g., a laser-scanning device), or other types of sensors such as thermal sensors, infrared sensors, heart-rate or blood pressure monitors, and/or video eye trackers.
[0121] In embodiments that include at least one IMU and one or more
neuromuscular sensors, the IMU(s) and neuromuscular sensors may be arranged to detect movement of the same or different parts of the human body. For example, the IMU(s) may be arranged to detect movements of one or more body segments proximal to the torso (e.g., an upper arm), whereas the neuromuscular sensors may be arranged to detect movements of one or more body segments distal to the torso (e.g., a forearm or wrist). It should be appreciated, however, that the sensors may be arranged in any suitable way, and embodiments of the technology described herein are not limited based on the particular sensor arrangement. For example, in some embodiments, at least one IMU and a plurality of neuromuscular sensors may be co-located on a body segment to track movements of the body segment using different types of measurements. In one implementation described in more detail below, an IMU sensor and a plurality of EMG sensors are arranged on a wearable device configured to be worn around the lower arm or wrist of a user. In such an arrangement, the IMU sensor may be configured to track movement information (e.g., positioning and/or orientation over time) associated with one or more arm segments, to determine, for example, whether the user has raised or lowered their arm, whereas the EMG sensors may be configured to determine movement information associated with wrist or hand segments to determine, for example, whether the user has an open or closed hand configuration or sub-muscular information associated with activation of sub-muscular structures in muscles of the wrist or hand.
[0122] Each of the sensors 102 includes one or more sensing components configured to sense information about a user. In the case of IMUs, the sensing components may include one or more accelerometers, gyroscopes, magnetometers, or any combination thereof to measure characteristics of body motion and/or characteristics related to body motion, examples of which include, but are not limited to, acceleration, angular velocity, and sensed magnetic field around the body. In the case of neuromuscular sensors, the sensing components may include, but are not limited to, electrodes configured to detect electric potentials on the surface of the body (e.g., for EMG sensors), vibration sensors configured to measure skin surface vibrations (e.g., for MMG sensors), and acoustic sensing components configured to measure ultrasound signals (e.g., for SMG sensors) arising from muscle activity.
[0123] In some embodiments, at least some of the plurality of sensors 102 are arranged as a portion of a wearable device configured to be worn on or around part of a user’s body. In one non-limiting example, an IMU sensor and a plurality of neuromuscular sensors are arranged circumferentially around an adjustable and/or elastic band such as a wristband or armband configured to be worn around a user’s wrist or arm. Alternatively, at least some of the sensors may be arranged on a wearable patch configured to be affixed to a portion of the user’s body. In some embodiments, multiple wearable devices, each having one or more IMUs and/or neuromuscular sensors included thereon, may be used to detect neuromuscular data and generate control information based on activation of muscular and sub-muscular structures and/or movement(s) that involve(s) multiple parts of the body.
[0124] In one embodiment, sixteen EMG sensors are arranged circumferentially around an elastic band configured to be worn around a user’s lower arm. For example, FIG. 5 shows EMG sensors 504 arranged circumferentially around elastic band 502. It should be appreciated that any suitable number of neuromuscular sensors may be used and the number and arrangement of neuromuscular sensors used may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband may be used to detect neuromuscular data and generate control information for controlling an augmented reality system, controlling a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or performing any other suitable control task. For example, as shown in FIG. 6, a user 506 can wear elastic band 502 on hand 508. In this way, EMG sensors 504 may be configured to record EMG signals as a user controls keyboard 512 using fingers 510. In some
embodiments, elastic band 502 may also include one or more IMUs (not shown), configured to record movement information, as discussed above. Although FIG. 6 depicts the user wearing one wearable device on the hand, it will be appreciated that some embodiments include multiple wearable devices (having one or more neuromuscular sensors integrated therewith) configured to be worn on one or both hands/arms of the user.
[0125] In some embodiments, the output of one or more of the sensing components may be optionally processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some signal processing of the output of the sensing components may be performed in software. Thus, signal processing of signals recorded by the sensors may be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect.
[0126] In some embodiments, the sensor data recorded by the sensors 102 may be optionally processed to compute additional derived measurements that are then provided as input to one or more inference models, as described in more detail below. For example, recorded signals from an IMU sensor may be processed to derive an orientation signal that specifies the orientation of a rigid body segment over time. Sensors 102 may implement signal processing using integrated components, or at least a portion of the signal processing may be performed by one or more components in communication with, but not directly integrated with the sensors.
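As a non-limiting example of the derived orientation signal mentioned above, the sketch below blends integrated gyroscope rates with accelerometer-based gravity estimates using a simple complementary filter; the filter gain and axis conventions are illustrative assumptions, and other orientation-estimation algorithms could equally be used.

```python
import numpy as np

def update_orientation(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One complementary-filter update.
    gyro: angular rates [rad/s] about x (roll) and y (pitch);
    accel: acceleration [m/s^2] along x, y, z."""
    # Integrate gyroscope rates: accurate over short intervals but drifts over time.
    roll_gyro = roll + gyro[0] * dt
    pitch_gyro = pitch + gyro[1] * dt
    # Estimate tilt from the gravity direction: noisy but drift-free.
    roll_acc = np.arctan2(accel[1], accel[2])
    pitch_acc = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Blend the two estimates to track orientation of the rigid body segment over time.
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    return roll, pitch
```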
[0127] System 100 can also include one or more computer processors 104 programmed to communicate with sensors 102 through either a one-way or two-way communication pathway. For example, signals recorded by one or more of the sensors may be provided to the processor(s), which may be programmed to execute one or more machine learning algorithms that process signals output by the sensors 102 to train (or retrain) one or more inference models 106, and the resulting trained (or retrained) inference model(s) 106 may be stored for later use in identifying and providing input to an XR system, as described in more detail below. As will be appreciated, an inference model may be a model that utilizes a statistical inference based on a probability distribution to deduce a result; in this regard, an inference model may comprise a statistical model.
[0128] In some embodiments, signals recorded by sensors arranged on a first wearable device worn on one hand/arm and signals recorded by sensors arranged on a second wearable device worn on the other hand/arm may be processed using the same inference model(s) or separate inference model(s).
[0129] System 100 also optionally includes one or more controllers 108. For example, controller 108 may be a display controller configured to display a visual representation (e.g., a representation of a hand). As discussed in more detail herein, one or more computer processors may implement one or more trained inference models that receive as input signals recorded by sensors 102 and provide as output information (e.g., predicted handstate information) that may be used to identify and provide input to an XR system. In some embodiments, as a user performs different movements, a trained inference model translates neuromuscular signals recorded by wearable neuromuscular sensors into position and force estimates (e.g., handstate information) that are used to update the musculoskeletal representation. Because the neuromuscular signals are continuously sensed and recorded, the musculoskeletal representation is updated in real time, and a visual representation of a hand (e.g., within an XR environment) may be rendered based on the current handstate estimates. As will be appreciated, an estimate of a user’s handstate may be used to determine a gesture being performed by the user and/or to predict a gesture that the user will perform.
[0130] According to some embodiments, musculoskeletal representations (e.g., hand-rendering) may include actual visual representations of biomimetic (realistic) hands, synthetic (robotic) hands, as well as abstract "internal representations" that serve as input for gesture control (e.g., to other applications, systems, etc.). That is, the position and/or force of the hand may be provided to downstream algorithms (e.g., control algorithms in an XR system) but may not be directly rendered.
[0131] In some embodiments, as shown in FIG. 1, the system 100 optionally includes a computer application 110 that is configured to simulate a virtual reality (VR), augmented reality (AR), and/or a mixed reality (MR) environment (collectively, extended reality, “X Reality” or “XR” systems or environments), and the computer application 110 can display a visual character such as an avatar (e.g., via controller 108) in an XR environment. Positioning, movement, and/or forces applied by portions of the visual character within the virtual reality environment may be displayed in the XR environment based on the output of the trained inference model(s). The visual representation in the XR environment may be dynamically updated as continuous signals are recorded by the sensors 102, processed by computer processor(s), and sent to the inference model(s) 106 for trained or inferred outputs, so that the system can provide a computer-generated representation of the visual character’s movement that is updated in real-time in the XR environment.
[0132] Information generated in either system (e.g., XR camera inputs from an XR system, neuromuscular sensor inputs from a computer-based system that generates the musculoskeletal representation based on sensor data) can be used to improve the user experience, accuracy, feedback, inference models, calibration functions, and other aspects in the overall system. To this end, in an XR environment for example, system 100 may include an XR system that includes one or more of the following: processors, a camera (e.g., one or more camera(s) contained in a head-mounted display), a display (e.g., via XR glasses or other viewing device), or any other auxiliary sensor(s) that provides XR information within a view of the user or provides XR information to the user. In some embodiments, information from a camera contained in the head-mounted display in the XR system may be used in combination with information from the neuromuscular sensors to interpret movement, gestures, and/or actions performed by the user. System 100 may also include system elements that couple the XR system with a computer-based system that generates the musculoskeletal representation based on sensor data. For example, the systems may be coupled via a special-purpose or other type of computer system that receives inputs from the XR system and the system that generates the computer-based musculoskeletal
representation. Such a system may include a gaming system, robotic control system, personal computer, or other system that is capable of interpreting XR and musculoskeletal information. The XR system and the system that generates the computer-based
musculoskeletal representation may also be programmed to communicate directly. Such information may be communicated using any number of interfaces, protocols, or media.
[0133] In some embodiments, inference model 106 may be a neural network and, for example, may be a recurrent neural network. In some embodiments, the recurrent neural network may be a long short-term memory (LSTM) neural network. It should be appreciated, however, that the recurrent neural network is not limited to being an LSTM neural network and may have any other suitable architecture. For example, in some embodiments, the recurrent neural network may be a fully recurrent neural network, a gated recurrent neural network, a recursive neural network, a Hopfield neural network, an associative memory neural network, an Elman neural network, a Jordan neural network, an echo state neural network, a second order recurrent neural network, and/or any other suitable type of recurrent neural network. In other embodiments, neural networks that are not recurrent neural networks may be used. For example, deep neural networks, convolutional neural networks, and/or feedforward neural networks, may be used.
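As a non-authoritative sketch of the kind of recurrent architecture mentioned above, the following Python/PyTorch fragment defines an LSTM that maps a window of multi-channel neuromuscular samples to a discrete output. The channel count, hidden size, class count, and window length are illustrative assumptions, not values taken from this disclosure.

    import torch
    import torch.nn as nn

    class NeuromuscularLSTM(nn.Module):
        """Illustrative LSTM mapping a window of multi-channel EMG samples to a
        discrete output (e.g., a gesture class); all sizes are assumed values."""
        def __init__(self, n_channels=16, hidden_size=128, n_classes=10):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, n_classes)

        def forward(self, x):             # x: (batch, time, channels)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])  # logits from the final time step

    model = NeuromuscularLSTM()
    window = torch.randn(1, 400, 16)      # e.g., 400 samples of 16-channel EMG
    logits = model(window)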
[0134] In some embodiments, the output of one or more inference models comprises one or more discrete outputs. Discrete outputs (e.g., classification labels) may be used, for example, when a user performs a particular gesture or causes a particular pattern of activation (including individual neural spiking events) to occur. For example, a model may be trained to estimate or infer whether the user is activating a particular motor unit, activating a particular motor unit with a particular timing, activating a particular motor unit with a particular firing pattern, or activating a particular combination of motor units. On a shorter timescale, discrete classification is used in some embodiments to estimate whether a particular motor unit fired an action potential within a given amount of time. In such embodiments, these estimates may then be accumulated to obtain an estimated firing rate for that motor unit.
[0135] In embodiments in which an inference model is implemented as a neural network configured to output a discrete signal, the neural network may include a softmax layer such that the outputs add up to one and may be interpreted as probabilities. The output of the softmax layer may be a set of values corresponding to a respective set of control signals, with each value indicating a probability that the user wants to perform a particular control action. As one non-limiting example, the output of the softmax layer may be a set of three probabilities (e.g., 0.92, 0.05, and 0.03) indicating the respective probabilities that the detected pattern of activity is one of three known patterns.
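A short numerical sketch (assuming NumPy) of how a softmax layer converts raw scores into probabilities that add up to one; the three scores below are chosen so the result roughly reproduces the 0.92/0.05/0.03 example given above.

    import numpy as np

    def softmax(scores):
        exp = np.exp(scores - np.max(scores))  # subtract max for numerical stability
        return exp / exp.sum()

    # Three raw scores for three known activity patterns (illustrative values)
    probs = softmax(np.array([3.1, 0.2, -0.3]))
    print(probs, probs.sum())  # approximately [0.92, 0.05, 0.03], summing to 1.0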
[0136] It should be appreciated that when the inference model is a neural network configured to output a discrete output (e.g., a discrete signal), the neural network is not required to produce outputs that add up to one. For example, instead of a softmax layer, the output layer of the neural network may be a sigmoid layer (which does not restrict the outputs to probabilities that add up to one). In such embodiments, the neural network may be trained with a sigmoid cross-entropy cost. Such an implementation may be advantageous in the case when multiple different control actions may occur within a threshold amount of time and it is not important to distinguish the order in which these control actions occur (e.g., a user may activate two patterns of neural activity within the threshold amount of time). In some embodiments, any other suitable non-probabilistic multi-class classifier may be used, as aspects of the technology described herein are not limited in this respect.
[0137] In some embodiments, the output(s) of the inference model(s) may be continuous signals rather than discrete signals. For example, the model may output an estimate of the firing rate of each motor unit or the model may output a time-series electrical signal corresponding to each motor unit or sub-muscular structure.

[0138] It should be appreciated that aspects of the technology described herein are not limited to using neural networks, as other types of inference models may be employed in some embodiments. For example, in some embodiments, the inference model may comprise a hidden Markov model (HMM), a switching HMM with the switching allowing for toggling among different dynamic systems, dynamic Bayesian networks, and/or any other suitable graphical model having a temporal component. Any such inference model may be trained using recorded sensor signals.
[0139] As another example, in some embodiments, the inference model is a classifier taking as input features derived from the recorded sensor signals. In such embodiments, the classifier may be trained using features extracted from the sensor data. The classifier may be a support vector machine, a Gaussian mixture model, a regression-based classifier, a decision tree classifier, a Bayesian classifier, and/or any other suitable classifier, as aspects of the technology described herein are not limited in this respect. Input features to be provided to the classifier may be derived from the sensor data in any suitable way. For example, the sensor data may be analyzed as time-series data using wavelet analysis techniques (e.g., continuous wavelet transform, discrete-time wavelet transform, etc.), Fourier-analytic techniques (e.g., short-time Fourier transform, Fourier transform, etc.), and/or any other suitable type of time-frequency analysis technique. As one non-limiting example, the sensor data may be transformed using a wavelet transform, and the resulting wavelet coefficients may be provided as inputs to the classifier.
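The following sketch illustrates one possible feature pipeline of the kind described above: a discrete wavelet transform of each channel (via the PyWavelets package, assumed available) whose sub-band energies feed a support vector machine (scikit-learn). The wavelet family, decomposition level, window size, and labels are all assumptions for illustration.

    import numpy as np
    import pywt                      # PyWavelets, assumed available
    from sklearn.svm import SVC

    def wavelet_features(window, wavelet="db4", level=3):
        """One possible feature choice: per-channel wavelet sub-band energies.
        window: array of shape (n_samples, n_channels)."""
        feats = []
        for channel in window.T:
            coeffs = pywt.wavedec(channel, wavelet, level=level)
            feats.extend(float(np.sum(c ** 2)) for c in coeffs)
        return np.asarray(feats)

    # Hypothetical training data: EMG windows paired with gesture labels
    windows = [np.random.randn(256, 16) for _ in range(40)]
    labels = np.random.randint(0, 3, size=40)
    X = np.stack([wavelet_features(w) for w in windows])
    clf = SVC(kernel="rbf").fit(X, labels)
    prediction = clf.predict(X[:1])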
[0140] In some embodiments, values for parameters of the inference model may be estimated from training data. For example, when the inference model is a neural network, parameters of the neural network (e.g., weights) may be estimated from the training data. In some embodiments, parameters of the inference model may be estimated using gradient descent, stochastic gradient descent, and/or any other suitable iterative optimization technique. In embodiments where the inference model is a recurrent neural network (e.g., an LSTM), the inference model may be trained using stochastic gradient descent and backpropagation through time. The training may employ a cross-entropy loss function and/or any other suitable loss function, as aspects of the technology described herein are not limited in this respect.

[0141] As discussed above, some embodiments are directed to using an inference model for predicting musculoskeletal information based on signals recorded from wearable sensors. As discussed briefly above in the example where portions of the human
musculoskeletal system can be modeled as a multi-segment articulated rigid body system, the types of joints between segments in a multi-segment articulated rigid body model constrain movement of the rigid body. Additionally, different human individuals tend to move in characteristic ways when performing a task, which can be captured in inference patterns of individual user behavior. At least some of these constraints on human body movement may be explicitly incorporated into inference models used for prediction in accordance with some embodiments. Additionally or alternatively, the constraints may be learned by the inference model through training based on recorded sensor data, as discussed briefly above.
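Returning to the parameter-estimation step described in paragraph [0140], the fragment below sketches training a small recurrent model with stochastic gradient descent and a cross-entropy loss in PyTorch; backpropagation through time is handled by the framework’s autograd. The model sizes, learning rate, synthetic batch of data, and number of iterations are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class SmallLSTMClassifier(nn.Module):
        def __init__(self, n_channels=16, hidden_size=64, n_classes=10):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, n_classes)
        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1])

    model = SmallLSTMClassifier()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    signals = torch.randn(32, 400, 16)     # batch of recorded sensor windows (synthetic)
    targets = torch.randint(0, 10, (32,))  # class labels for each window (synthetic)

    for step in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(signals), targets)
        loss.backward()                     # backpropagation through time
        optimizer.step()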
[0142] As discussed above, some embodiments are directed to using an inference model for predicting handstate information to enable the generation and/or real-time update of a computer-based musculoskeletal representation. The inference model may be used to predict the handstate information based on IMU signals, neuromuscular signals (e.g., EMG, MMG, and/or SMG signals), external device signals (e.g., camera or laser-scanning signals), or a combination of IMU signals, neuromuscular signals, and external device signals detected as a user performs one or more movements. For instance, as discussed above, a camera associated with an XR system may be used to capture actual position data relating to a human subject of the computer-based musculoskeletal representation and such actual position information may be used to improve the accuracy of the representation. Further, outputs of the inference model may be used to generate a visual representation of the computer-based musculoskeletal representation in an XR environment. For example, a visual representation of muscle groups firing, force being applied, text being entered via movement, or other information produced by the computer-based musculoskeletal representation may be rendered in a visual display of an XR system. In some embodiments, other input/output devices (e.g., auditory inputs/outputs, haptic devices, etc.) may be used to further improve the accuracy of the overall system and/or user experience.

[0143] Some embodiments of the technology described herein are directed to using an inference model, at least in part, to map one or more actions identified from the neuromuscular signals (e.g., map muscular activation state information identified from the neuromuscular sensors) to input signals including text. The inference model may receive as input IMU signals, neuromuscular signals (e.g., EMG, MMG, and/or SMG signals), external device signals (e.g., camera or laser-scanning signals), or a combination of IMU signals, neuromuscular signals, and external device signals detected as a user performs one or more sub-muscular activations, one or more movements, and/or one or more gestures. The inference model may be used to predict the input to be provided to the AR system without the user having to make perceptible movements.
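As a purely illustrative sketch of the final mapping stage described in paragraph [0143], the fragment below converts a sequence of discrete inference-model outputs (per-window class probabilities) into text characters that could then be provided as input. The class-to-character table and the confidence threshold are hypothetical and not taken from this disclosure.

    # Hypothetical mapping from discrete model outputs to text symbols
    CLASS_TO_CHAR = {0: "a", 1: "b", 2: "c", 3: " ", 4: "<backspace>"}

    def decode_text(class_probabilities, threshold=0.8):
        """Turn per-window probability vectors into text, keeping only
        confident predictions (the threshold is an assumed value)."""
        chars = []
        for probs in class_probabilities:
            best = max(range(len(probs)), key=probs.__getitem__)
            if probs[best] < threshold or best not in CLASS_TO_CHAR:
                continue
            symbol = CLASS_TO_CHAR[best]
            if symbol == "<backspace>":
                if chars:
                    chars.pop()
            else:
                chars.append(symbol)
        return "".join(chars)

    # Example: three confident windows spelling "ab" followed by a space
    print(decode_text([[0.90, 0.05, 0.03, 0.01, 0.01],
                       [0.05, 0.90, 0.03, 0.01, 0.01],
                       [0.02, 0.02, 0.03, 0.90, 0.03]]))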
[0144] FIG. 2 illustrates a schematic diagram of a distributed computer-based system
200 that integrates an augmented reality (AR) system 201 with a neuromuscular activity system 202 in accordance with some embodiments. Neuromuscular activity system 202 is similar to system 100 described above with respect to FIG. 1.
[0145] Generally, an augmented reality (AR) system 201 may take the form of a pair of goggles, glasses, or other type(s) of device that shows display elements to the user that may be superimposed on “reality,” which in some cases could be a user’s view of the environment (e.g., as viewed through the user’s eyes), or a captured (e.g., by cameras) version of a user’s view of the environment. In some embodiments, AR system
201 may include one or more cameras (e.g., camera(s) 204) mounted within a device worn by a user that captures one or more views experienced by the user in their environment. System 201 may have one or more processors 205 operating within the user device and/or within a peripheral device or computer system, and such processor(s) may be capable of transmitting and receiving video information and other types of sensor data.
[0146] AR system 201 may also include one or more sensors 207 such as microphones, GPS elements, accelerometers, infrared detectors, haptic feedback elements or any other type of sensor, or any combination thereof. In some embodiments, the AR system
201 may be an audio-based AR system, and the one or more sensors 207 may also include one or more headphones or speakers. Further, AR system 201 may also have one or more displays 208 that permit the AR system to overlay and/or display information to the user in addition to the view of the user’s environment presented by the AR system. AR system 201 may also include one or more communication interfaces (e.g., interfaces 206) for the purpose of communicating information to one or more computer systems (e.g., a gaming system or other systems capable of rendering or receiving AR data). AR systems can take many forms and are provided by a number of different manufacturers. For example, various
embodiments may be implemented in association with one or more types of AR systems.
For example, various embodiments may be implemented with the HoloLens holographic reality glasses available from the Microsoft Corporation, the Lightwear AR headset from Magic Leap, the Google Glass AR glasses available from Alphabet, the R-7 Smartglasses System available from ODG, or any other type of AR and/or VR device. Although discussed by way of example, it should be appreciated that one or more embodiments may be implemented within VR or XR systems.
[0147] AR system 201 may be operatively coupled to the neuromuscular activity system 202 through one or more communication methods, including, but not limited to, the Bluetooth protocol, Wi-Fi, Ethernet-like protocols, or any number of connection types, wireless and/or wired. It should be appreciated that, for example, systems 201 and 202 may be directly connected or coupled through one or more intermediate computer systems or network elements. The double-headed arrow in FIG. 2 represents the communicative coupling between the systems 201 and 202.
[0148] Neuromuscular activity system 202 may be similar in structure and function to system 100 described above with reference to FIG. 1. In particular, system 202 may include one or more neuromuscular sensors 209 and/or auxiliary sensors described in connection with FIG. 1, one or more inference models 210, and may create, maintain, and store a musculoskeletal representation 211. In an example embodiment discussed above, system 202 may include a device such as a band that can be worn by a user in order to collect and analyze neuromuscular signals. Further, system 202 may include one or more communication interfaces 212 that permit system 202 to communicate with AR system 201, such as by Bluetooth, Wi-Fi, or other communication method. Notably, AR system 201 and neuromuscular activity system 202 may communicate information which can be used to enhance the user experience and/or allow the AR system to function more accurately and effectively.
[0149] Although FIG. 2 describes a distributed computer-based system that integrates the AR system 201 with the neuromuscular activity system 202, it will be understood that the integration may be non-distributed in nature. In some embodiments, the neuromuscular activity system 202 may be integrated into the AR system 201 such that the various components of the neuromuscular activity system 202 may be considered as part of the AR system. For example, neuromuscular signals recorded by the neuromuscular sensors 209 may be treated like any other inputs (e.g., camera(s) 204, sensors 207) to the AR system 201. In addition, the processing of the sensor signals obtained from neuromuscular sensors 209 may be integrated into the AR system 201.
[0150] FIG. 3 illustrates a process 300 for identifying and providing input to an XR system. In particular, process 300 is described with respect to identifying and providing input to an AR system, such as AR system 201, in accordance with some embodiments. The process 300 may be performed by the neuromuscular activity system 202. In act 302, sensor signals may be recorded by one or more sensors 102 (also referred to herein as “raw sensor signals”) of the neuromuscular activity system 202. In some embodiments, the sensors include a plurality of neuromuscular sensors (e.g., EMG sensors) arranged on a wearable device worn by a user. For example, EMG sensors may be arranged on an elastic band configured to be worn around a wrist or forearm of the user to record neuromuscular signals from the user as the user performs various movements or gestures. In some embodiments, the EMG sensors may be the sensors 504 arranged on the band 502, as shown in FIG. 5; in some embodiments, the EMG sensors may be the sensors 710 arranged on the elastic band 720, as shown in FIG. 7A.
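A minimal sketch of act 302 as a rolling buffer of raw multi-channel samples, assuming Python. The channel count matches the sixteen-sensor band described above; the sampling rate and window length are assumed values, and the incoming frames here are synthetic.

    from collections import deque
    import numpy as np

    N_CHANNELS = 16          # matches the sixteen-sensor band described above
    FS_HZ = 1000             # assumed EMG sampling rate
    WINDOW_S = 0.25          # assumed analysis-window length in seconds

    buffer = deque(maxlen=int(FS_HZ * WINDOW_S))

    def on_sample(frame):
        """Called for each incoming frame of raw sensor values (one value per channel)."""
        buffer.append(frame)

    def current_window():
        """Most recent window as an array of shape (n_samples, n_channels)."""
        return np.asarray(buffer)

    for _ in range(250):                      # simulate a quarter second of raw frames
        on_sample(np.random.randn(N_CHANNELS))
    window = current_window()                 # ready for optional processing in act 304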
[0151] As used herein, the term “gestures” refers to a static or dynamic configuration of one or more body parts including the position of the one or more body parts and forces associated with the configuration. For example, gestures performed by the user include static/discrete gestures (also referred to as a “pose”) that indicate a static configuration of one or more body parts. For example, a pose can include a fist, an open hand, statically placing or pressing the palm of the hand down on a solid surface, or grasping a ball. A pose can indicate the static configuration by providing positional information (e.g., segment coordinates, joint angles, or similar information) for the pose, or by providing an identifier corresponding to a pose (e.g., a parameter, function argument, or variable value). The gestures performed by the user may include dynamic/continuous gestures that indicate a dynamic configuration of one or more body parts. The dynamic configuration can describe the position of the one or more body parts, the movement of the one or more body parts, and forces associated with the dynamic configuration. For example, a dynamic gesture can include waving a finger back and forth, throwing a ball, or grasping and throwing a ball. Gestures may include covert gestures that are imperceptible to another person, such as slightly tensing a joint by co-contracting opposing muscles, or using sub-muscular activations. Gestures may be defined by an application configured to prompt a user to perform the gestures or, alternatively, gestures may be arbitrarily defined by a user. The gestures performed by the user may include symbolic gestures (e.g., gestures mapped to other gestures, interactions, or commands, for example, based on a gesture vocabulary that specifies the mapping). In some cases, hand and arm gestures may be symbolic and used to communicate according to cultural standards.
[0152] In some embodiments, the movements or gestures performed by the user may include tapping or typing actions such as tapping or typing actions on a surface of a physical keyboard, tapping or typing actions on a surface that has a virtual keyboard projected thereon by the AR system 201, tapping or typing actions without any virtual representation of a keyboard, and/or typing actions or other gestures performed in mid-air (e.g., not on a surface).
[0153] In some embodiments, the movements or gestures performed by the user may include writing actions such as writing actions performed on a surface with a physical stylus, a physical writing implement, or a fingertip or fingertips of the user (e.g., a user might be imagining that he is holding a pen or stylus by holding his fingertips together in a writing position), writing actions performed on a surface with a virtual stylus or virtual writing implement, and/or writing actions performed with a physical writing implement, a virtual writing implement, or fingertip(s) of the user in mid-air and not on a particular surface.

[0154] In some embodiments, the movements or gestures performed by the user may include drawing actions such as drawing actions performed on a surface, including drawing one or more line segments and/or curves, and/or swiping through a virtual keyboard (e.g., a virtual swipe keyboard) projected by the AR system 201.
[0155] In some embodiments, the movements or gestures performed by the user may include one-handed actions such as one-handed chord gestures including squeezes, taps or pinches with various fingers or combinations of fingers of one hand.
[0156] In addition to a plurality of neuromuscular sensors, some embodiments include one or more auxiliary sensors configured to record auxiliary signals that may also be provided as input to the one or more trained inference models. Examples of auxiliary sensors include IMU sensors, imaging devices, radiation detection devices (e.g., laser scanning devices), heart rate monitors, or any other type of biosensor configured to record biophysical information from the user during performance of one or more movements or gestures mentioned above. In some embodiments, the neuromuscular signals may be associated or correlated with information detected from the auxiliary sensors (e.g., auxiliary signals providing information indicative of a user’s physiological state and/or behavior). For example, the auxiliary signals may be used together with the neuromuscular signals to interpret the user’s movements, gestures, or actions, or to otherwise augment and enhance the neuromuscular signals or the input identification process described in detail below.
[0157] Process 300 then proceeds to act 304, where the raw sensor signals recorded by the sensors 102 are optionally processed. In some embodiments, the raw sensor signals may be processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some signal processing of the raw sensor signals may be performed in software. Accordingly, signal processing of the raw sensor signals recorded by the sensors may be performed in hardware, software, or by any suitable combination of hardware and software. In some implementations, the raw sensor signals may be processed to derive other signal data. For example, accelerometer data recorded by one or more IMU sensors may be integrated and/or filtered to determine derived signal data associated with one or more muscles during activation of a muscle or
performance of a gesture.

[0158] Process 300 then proceeds to act 306, where the raw sensor signals or the processed sensor signals are optionally provided as input to a trained inference model(s) configured to output information representing user activity, such as handstate information and/or muscular activation state information (e.g., a gesture or pose), as described above.
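A hedged sketch of the optional processing in act 304: band-pass filtering and rectification of an EMG window using SciPy. The pass band, filter order, and sampling rate below are conventional choices for surface EMG stated as assumptions, not values from this disclosure.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess_emg(window, fs_hz=1000, band=(20.0, 450.0), order=4):
        """Band-pass filter then rectify a multi-channel EMG window.
        window: array of shape (n_samples, n_channels)."""
        nyquist = fs_hz / 2.0
        b, a = butter(order, [band[0] / nyquist, band[1] / nyquist], btype="band")
        filtered = filtfilt(b, a, window, axis=0)   # zero-phase filtering
        return np.abs(filtered)                     # rectification

    processed = preprocess_emg(np.random.randn(250, 16))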
[0159] Process 300 then proceeds to act 308, where input to be provided to the AR system 201 is identified based on the raw sensor signals, the processed sensor signals, and/or the outputs of the trained inference model(s) (e.g., the handstate information). In some embodiments, input to be provided to the AR system 201 may be identified based on the movements, gestures, or actions identified from the raw sensor signals, the processed sensor signals, and/or the outputs of the trained inference model(s). For example, text input to be provided to the AR system 201 may be identified based on the tapping or typing actions, writing actions, drawing actions, and/or one-handed actions. Input other than or in addition to text input may be identified, for example, a drawing may be identified based on the drawing actions.
[0160] According to some embodiments, the one or more computer processors 104 of system 100 may be programmed to identify the input to be provided to the AR system 201 from signals recorded by sensors 102 (e.g., the raw sensor signals) and/or information based on these signals. The information based on the signals recorded by sensors 102 may include information associated with processed sensor signals (e.g., processed EMG signals) and/or information associated with outputs of the trained inference model (e.g., handstate information).
[0161] According to some embodiments, input to be provided to the AR system 201 may be identified based on signals output from the auxiliary sensors (e.g., one or more IMU sensors, one or more cameras or imaging devices associated with neuromuscular activity system 202 or augmented reality system 201) in addition to the signals recorded by the neuromuscular sensors. Such auxiliary sensors can provide additional information regarding the movement of the pen, stylus, or fingertip(s) when the user performs the various
movements, gestures, and/or actions. The additional information can be used to improve the accuracy of the identification process.

[0162] In some embodiments, the identified input may be provided to the AR system 201. The AR system 201 may provide visual feedback by displaying an indication of the identified input to the user (and/or may provide other forms of feedback such as audio or haptic feedback). The visual feedback may facilitate text entry, for example, by prompting the user to adjust the way various movements, gestures, and/or actions are performed. The visual feedback may be useful in situations where the user provides input using an object or the user’s hand/fingertip, which does not leave physical marks when writing or drawing on a surface, for example. In some embodiments, the indication of the identified input includes text input identified based on the tapping or typing actions, writing actions, drawing actions, and/or one-handed actions performed by the user. In some embodiments, the indication of the identified input includes a listing of one or more suggested or predicted words or phrases for text input. For example, multiple options, guesses, or alternative words may be presented to the user. The user may select from among the presented items by, for example, performing certain movements or gestures (that are identified based on neuromuscular signals) or using alternative control schemes (e.g., a cursor/pointer). In some embodiments, the indication of the identified input includes one or more virtual ink marks associated with one or more strokes made by a writing implement. In some embodiments, the indication of the identified input includes a drawing identified based on drawing actions performed by the user. In some embodiments, the indication may be displayed via a user interface presented with an augmented reality environment provided by the AR system 201. For example, the indication may be provided on a virtual document in the user interface or as a representation shown in the AR environment to be floating in space. In some embodiments, the indication may be rendered by the AR system 201 onto a surface that the user is interacting with. The indication may be rendered onto the surface where the user is typing, for example, as a scrolling tickertape or a line-oriented typewriter. The indication may be rendered onto the surface where the user is writing, for example, as virtual ink on the surface.
[0163] FIG. 4 illustrates a process 400 for identifying and providing input to an XR system. In particular, process 400 is described with respect to identifying and providing input to an AR system, such as AR system 201, in accordance with some embodiments. The process 400 may be performed by the neuromuscular activity system 202. In act 402, sensor signals are recorded by one or more sensors such as neuromuscular sensors (e.g., EMG sensors) and/or auxiliary sensors (e.g., IMU sensors, imaging devices, radiation detection devices, heart rate monitors, or any other type of biosensors) of the neuromuscular activity system 202.
[0164] In act 404, a determination may be made that the neuromuscular activity system 202 is in a mode configured to provide input including text to the AR system 201. The mode may include a typing mode in which a user may perform tapping or typing actions on a physical or virtual keyboard to provide text input, a writing mode in which a user may perform writing actions with a physical or virtual writing implement (e.g., pen, stylus, etc.) and/or fingertip(s) to provide text input, a drawing mode in which a user may perform drawing actions with a physical or virtual writing implement (e.g., pen, stylus, etc.) and/or fingertip(s) to provide text and/or drawing input, a one-handed mode in which a user may perform one-handed actions to provide text input, and/or a mode in which discrete and/or continuous control signals may be provided as input to the AR system 201.
[0165] In some embodiments, the mode determination may be made based on a user selection of the mode. In other words, the mode that the neuromuscular activity system 202 is in may be determined in response to receiving a user selection of the mode. The user selection may be received from a user interface displayed in an AR environment provided by the AR system 201. The user interface may identify and display a number of modes from which the user may select a particular mode. For example, a list of available modes, such as typing mode, writing mode, drawing mode, and/or one-handed mode, may be provided, and the user may select a mode from the list.
[0166] In some embodiments, the mode determination may be made based on the sensor signals and/or information based on the sensor signals. In other words, the mode that the neuromuscular activity system 202 is in may be determined based on the sensor signals and/or information based on the sensor signals. In one embodiment, a particular gesture performed by the user may be identified based on the sensor signals and/or information based on the sensor signals, and the mode may be determined by identifying the mode corresponding to the particular gesture. For example, different gestures may be mapped to different modes and a particular mode may be determined based on a corresponding gesture performed by the user. The mode entered based on a particular gesture or muscular activation state may depend on the state of the system (e.g., a current mode of the system) and/or may be personalized according to a user’s preferred settings. In some embodiments, the mode may be determined as the user performs one or more actions associated with the corresponding mode. For example, when the user starts performing typing actions, the neuromuscular activity system 202 may be configured to recognize that the input mode is a typing mode and when the user starts performing writing actions, the neuromuscular activity system 202 may be configured to recognize that the input mode is a writing mode. The neuromuscular activity system 202 may switch from one mode to another mode based on detection of different actions performed by the user. For example, the user may switch between performing typing actions and writing actions and the system may determine that the input mode should switch between the typing mode and the writing mode accordingly without interrupting text entry.
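The fragment below sketches the gesture-to-mode mapping described above: an identified gesture selects or switches the current input mode without interrupting text entry. The gesture names and the mode set are illustrative assumptions; as noted, a real mapping could depend on the current mode and on per-user settings.

    from enum import Enum, auto

    class InputMode(Enum):
        TYPING = auto()
        WRITING = auto()
        DRAWING = auto()
        ONE_HANDED = auto()

    # Hypothetical mapping from identified gestures to input modes
    GESTURE_TO_MODE = {
        "fist": InputMode.TYPING,
        "pinch": InputMode.WRITING,
        "open_hand": InputMode.DRAWING,
        "thumb_tap": InputMode.ONE_HANDED,
    }

    class ModeManager:
        def __init__(self, initial=InputMode.TYPING):
            self.mode = initial

        def on_gesture(self, gesture_name):
            """Switch modes when an identified gesture maps to a different mode."""
            new_mode = GESTURE_TO_MODE.get(gesture_name)
            if new_mode is not None and new_mode != self.mode:
                self.mode = new_mode
            return self.mode

    manager = ModeManager()
    manager.on_gesture("pinch")   # switches to WRITING without interrupting text entry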
[0167] In some embodiments, the mode determination may be made based on a signal received from the AR system 201. In other words, the neuromuscular activity system 202 may be configured to operate in a mode determined in response to receiving a signal from the AR system. The AR system 201 may generate the signal in response to detection of an event for which input within an AR environment provided by the AR system is desired. For example, text input may be desired to complete a portion of a form presented in a user interface displayed in the AR environment. Presentation of the form may trigger a signal to be generated by the AR system indicating that text input is desired. The signal may identify the various modes that are available for providing the input. The AR system 201 may communicate the signal to the neuromuscular activity system 202, and the neuromuscular activity system 202 may switch to a particular available mode to provide the text input.
[0168] In act 406, the input to be provided to the AR system 201 may be identified based on the raw or processed signals and/or information based on the recorded signals (e.g., handstate and/or muscular activation state information). In some embodiments, the one or more computer processors of system 100 may be programmed to identify the input based on the sensor signals, the handstate information, detection of a gesture or muscular activation state, and/or a combination of any of the foregoing.

[0169] In some embodiments, the input to be provided to the AR system 201 may be further identified based on the current mode of the neuromuscular activity system 202.
When the neuromuscular activity system 202 is in a typing mode, input to be provided to the AR system 201 for the typing mode may be identified by identifying one or more tapping or typing actions performed by a user based on the sensor signals and/or information based on the sensor signals. For example, tapping or typing actions performed on a surface of a physical keyboard or a surface that has a virtual keyboard projected thereon by the AR system may be identified based on the sensor signals and text input for the typing mode may be identified based on the tapping/typing actions. FIG. 9A depicts a user performing typing actions on a physical keyboard 902 placed on a table. Input to be provided to the AR system may be identified based on the neuromuscular signals and/or muscular activation state(s) associated with these typing actions (as detected by wearable portion 810) and indications of the identified input may be displayed to the user via the virtual headset 904.
[0170] When the neuromuscular activity system 202 is in a writing mode, input to be provided to the AR system 201 for the writing mode may be identified by identifying one or more writing actions performed by the user based on the sensor signals and/or information based on the sensor signals. For example, writing actions performed on a surface with a physical writing implement, a virtual writing implement and/or fingertip(s) of the user may be identified based on the sensor signals and text input for the writing mode may be identified based on the writing actions. FIG. 9B depicts a user performing writing actions on an optional tablet device 912 using an optional stylus 910. Input to be provided to the AR system may be identified based on the neuromuscular signals and/or muscular activation state(s) associated with these writing actions (as detected by the wearable portion 810) and indications of the identified input may be displayed to the user via the virtual headset 904.
[0171] When the neuromuscular activity system 202 is in a drawing mode, input to be provided to the AR system for the drawing mode may be identified by identifying one or more drawing actions (e.g., drawing a number of line segments and/or curves on a surface) performed by the user based on the sensor signals and/or information based on the sensor signals. Input (e.g., text input and/or drawing input) for the drawing mode may be identified based on the drawing actions. In some embodiments, the input for the drawing mode may include one or more line segments and/or curves. In some embodiments, the input for the drawing mode may include input determined based on a sequence of pixel positions controlled by the drawing actions performed by the user. FIG. 9C depicts a user performing drawing actions mid-air (i.e., without using any writing instruments). Input to be provided to the AR system may be identified based on the neuromuscular signals and/or muscular activation state(s) associated with these drawing actions (as detected by wearable portion 810), and indications of the identified input may be displayed to the user via the virtual headset 904. In this scenario and other scenarios described herein, an auxiliary sensor (e.g., a camera) (not shown) may be provided as part of the virtual headset or as a separate component and may provide additional information (e.g., position of the hand) that may be used to further interpret the user actions and associated neuromuscular signals and/or muscular activation state(s).
[0172] In some embodiments, both the text input and the drawing may be identified based on the drawing actions performed by the user. In some implementations, processing of the sensor signals may be performed by multiple processors. The neuromuscular sensors may be configured to communicate at least some of the sensor signals to a first computer processor and a second computer processor, where drawings may be identified by the first computer processor and text input (e.g., handwriting) may be identified by the second computer processor. The text input and the drawing from the first and second computer processors may be combined such that the text overlays or annotates the drawing, or is stored as metadata for later processing (e.g., search and filtering). In other implementations, the drawing may be identified based on the drawing actions performed by the user and the text input may be identified from the drawing. For example, the drawing may be identified from the sensor signals and text may be identified from the drawing by running a handwriting recognition process on the drawing.
[0173] When the neuromuscular activity system 202 is in a one-handed mode (i.e., a mode where the user uses only one hand to provide input), input to be provided to the AR system for the one-handed mode may be identified by identifying one or more one-handed actions (for example, squeezing, pinching, and/or tapping of various fingers and
combinations of fingers) performed by the user based on the sensor signals and/or information based on the sensor signals. Text input for the one-handed mode may be identified based on the one-handed actions.
[0174] In some embodiments, one or more gestures may be identified in addition to the typing/tapping, writing, drawing, and/or one-handed actions to allow editing and/or correction of identified text. For example, one or more delete gestures may be recognized in addition to writing actions (based on which text input is identified) that allow deletion of identified letters or words in the text input. The one or more delete gestures may include a gesture to delete a single letter, a gesture to delete a previous word, and/or a gesture to delete a selected word. In some embodiments, the selection of the word to be deleted may be accomplished using neuromuscular controls, for example, cursor navigation. The one or more delete gestures may involve manipulating an object being held by a user (e.g., a stylus or pencil). For example, the one or more delete gestures may include flipping the object, such as a pencil, to an eraser position and then swiping or pressing an imaginary button on the object with a particular finger to initiate deletion of one or more letters or words.
[0175] In some embodiments, one or more gestures (such as newline gestures that indicate the end of a line of text and start of a new line of text, space gestures that indicate a space break in text, and/or other gestures) may be identified and combined with recognizing text input to allow the user to compose longer sequences of text without having to physically move his hand (e.g., to the right or down a virtual page in a virtual document). For example, a swipe or flick in a particular direction may be used as a newline gesture, and a “pen up” motion may be used for space or word breaks.
[0176] In act 408, the input identified in act 406 may be provided to the AR system 201. Text input and/or drawing input identified based on the sensor signals and/or information based on the sensor signals may be provided to the AR system 201. The one or more computer processors of system 100 may identify and provide the input to the AR system.
[0177] In some embodiments, the neuromuscular activity system 202 may switch between different modes, for example typing, writing, drawing, and/or one-handed modes, for providing input. For example, a user may provide text-based input by tapping on a surface of a physical keyboard, writing on a surface with a stylus, swiping through a virtual swipe keyboard projected in the AR environment, or using a custom movement-free mapping from neuromuscular signals to text. These different approaches may all be integrated with the AR system 201 through a common application programming interface (API). In other words, the different forms of text input may be identified by the
neuromuscular activity system 202 and provided to the AR system 201, where the AR system receives the different forms of text input via a common text API.
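A sketch of what such a common text API might look like, assuming Python; the interface and method names here are hypothetical and simply illustrate that typing, writing, swipe, and movement-free pipelines can all hand text to the AR system through one entry point.

    from typing import Protocol

    class TextInputAPI(Protocol):
        """Hypothetical common text interface exposed by the AR system."""
        def submit_text(self, text: str, source_mode: str) -> None: ...

    class ARTextReceiver:
        def __init__(self):
            self.log = []

        def submit_text(self, text, source_mode):
            # The AR system treats all sources uniformly once text arrives here.
            self.log.append((source_mode, text))

    def provide_identified_text(api: TextInputAPI, text: str, mode: str) -> None:
        """Called by the neuromuscular activity system however the text was identified."""
        api.submit_text(text, source_mode=mode)

    receiver = ARTextReceiver()
    provide_identified_text(receiver, "hello", mode="typing")
    provide_identified_text(receiver, "world", mode="writing")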
[0178] In some embodiments, the input to be provided to the AR system 201 may be identified from multiple sources, where the sources may include the neuromuscular signals and at least one source other than the neuromuscular signals. For example, the at least one source may include a physical input device such as a physical keyboard or stylus. Input received from the multiple sources may be combined and the combined input may be provided to the AR system 201. In some implementations, the common API may receive input from the multiple sources. In some embodiments, visual feedback provided by the AR system may continue regardless of the source, the mode or the form of text entry.
[0179] In some embodiments, when used in combination with physical input devices, the neuromuscular activity system 202 may learn to emulate the physical input devices using the neuromuscular signals, thereby allowing seamless switching between the physical input devices and their virtual emulations.
[0180] FIG. 7A illustrates a wearable system with sixteen neuromuscular sensors 710 (e.g., EMG sensors) arranged circumferentially around an elastic band 720 configured to be worn around a user’s lower arm or wrist. As shown, EMG sensors 710 are arranged circumferentially around elastic band 720. It should be appreciated that any suitable number of neuromuscular sensors may be used. The number and arrangement of neuromuscular sensors may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband can be used to generate control information for controlling an augmented reality system, controlling a robot, controlling a vehicle, scrolling through text, controlling a virtual avatar, or performing any other suitable control task.

[0181] In some embodiments, sensors 710 include a set of neuromuscular sensors (e.g., EMG sensors). In other embodiments, sensors 710 can include a set of neuromuscular sensors and at least one “auxiliary” sensor configured to continuously record auxiliary signals. Examples of auxiliary sensors include, but are not limited to, other sensors such as IMU sensors, microphones, imaging sensors (e.g., a camera), radiation-based sensors for use with a radiation-generation device (e.g., a laser-scanning device), or other types of sensors such as a heart-rate monitor. As shown, the sensors 710 may be coupled together using flexible electronics 730 incorporated into the wearable device. FIG. 7B illustrates a cross-sectional view through one of the sensors 710 of the wearable device shown in FIG. 7A.
[0182] In some embodiments, the output of one or more of the sensing components can be optionally processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some signal processing of the output of the sensing components can be performed in software. Thus, signal processing of signals sampled by the sensors can be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect. A non-limiting example of a signal processing chain used to process recorded data from sensors 710 is discussed in more detail below in connection with FIGS. 8A and 8B.
[0183] FIGS. 8A and 8B illustrate a schematic diagram with internal components of a wearable system with sixteen EMG sensors, in accordance with some embodiments of the technology described herein. As shown, the wearable system includes a wearable portion 810 (FIG. 8A) and a dongle portion 820 (FIG. 8B) in communication with the wearable portion 810 (e.g., via Bluetooth or another suitable short-range wireless communication technology). As shown in FIG. 8A, the wearable portion 810 includes the sensors 710, examples of which are described in connection with FIGS. 7A and 7B. The output of the sensors 710 is provided to analog front end 830 configured to perform analog processing (e.g., noise reduction, filtering, etc.) on the recorded signals. The processed analog signals are then provided to analog-to-digital converter 832, which converts the analog signals to digital signals that can be processed by one or more computer processors. An example of a computer processor that may be used in accordance with some embodiments is microcontroller (MCU) 834 illustrated in FIG. 8A. As shown, MCU 834 may also include inputs from other sensors (e.g., IMU sensor 840), and power and battery module 842. The output of the processing performed by MCU 834 may be provided to antenna 850 for transmission to dongle portion 820 shown in FIG. 8B.
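As a small numerical illustration of the analog-to-digital conversion step, the fragment below scales raw ADC counts to an approximate electrode potential given the converter resolution, reference voltage, and front-end gain. All three values are assumptions chosen for the example, not specifications of the hardware described above.

    def adc_counts_to_microvolts(counts, bits=16, v_ref=2.5, gain=1000.0):
        """Convert signed ADC counts to an approximate electrode potential in microvolts.
        Resolution, reference voltage, and analog front-end gain are assumed values."""
        volts_at_adc = counts * (v_ref / (2 ** (bits - 1)))
        return (volts_at_adc / gain) * 1e6

    print(adc_counts_to_microvolts(1024))  # about 78 microvolts under these assumptions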
[0184] Dongle portion 820 includes antenna 852 configured to communicate with antenna 850 included as part of wearable portion 810. Communication between antennas 850 and 852 may occur using any suitable wireless technology and protocol, non-limiting examples of which include radiofrequency signaling and Bluetooth. As shown, the signals received by antenna 852 of dongle portion 820 may be provided to a host computer for further processing, display, and/or for effecting control of a particular physical or virtual object or objects.
[0185] Although the examples provided with reference to FIGS. 7A, 7B and
FIGS. 8A and 8B are discussed in the context of interfaces with EMG sensors, it is understood that the techniques described herein can also be implemented in wearable interfaces with other types of sensors including, but not limited to, mechanomyography (MMG) sensors, sonomyography (SMG) sensors, and electrical impedance tomography (EIT) sensors. In addition, it will be understood that the techniques described herein for providing input to an AR system can also be implemented within VR, MR, or XR systems.
[0186] It will be appreciated that the disclosure is not limited to the use of typing, writing, drawing, and/or one-handed modes or identifying input based on tapping/typing actions, writing actions, drawing actions, and/or one-handed actions, and other modes or actions can be used. For example, two-handed actions other than typing, tapping, writing, or drawing on a surface, such as combinations of fingertip squeezes, hand gestures, or finger movements on both hands, may be used without departing from the scope of this disclosure.
[0187] The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. It should be appreciated that any component or collection of components that perform the functions described above can be generically considered as one or more controllers that control the above-discussed functions. The one or more controllers can be implemented in numerous ways, such as with dedicated hardware or with one or more processors programmed using microcode or software to perform the functions recited above.
[0188] In this respect, it should be appreciated that one implementation of the embodiments of the present invention comprises at least one non-transitory computer-readable storage medium (e.g., a computer memory, a portable memory, a compact disk, etc.) encoded with a computer program (i.e., a plurality of instructions), which, when executed on a processor, performs the above-discussed functions of the embodiments of the present invention. The computer-readable storage medium can be transportable such that the program stored thereon can be loaded onto any computer resource to implement the aspects of the present invention discussed herein. In addition, it should be appreciated that the reference to a computer program which, when executed, performs the above-discussed functions, is not limited to an application program running on a host computer. Rather, the term computer program is used herein in a generic sense to reference any type of computer code (e.g., software or microcode) that can be employed to program a processor to implement the above-discussed aspects of the present invention.
[0189] Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and are therefore not limited in their application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
[0190] Also, embodiments of the invention may be implemented as one or more methods, of which an example has been provided. The acts performed as part of the method(s) may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
[0191] The foregoing features may be used, separately or together in any
combination, in any of the embodiments discussed herein.
[0192] Further, although advantages of the present invention may be indicated, it should be appreciated that not every embodiment of the invention will include every described advantage. Some embodiments may not implement any features described as advantageous herein. Accordingly, the foregoing description and attached drawings are by way of example only.
[0193] Variations on the disclosed embodiment are possible. For example, various aspects of the present technology may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing, and therefore they are not limited in application to the details and arrangements of components set forth in the foregoing description or illustrated in the drawings. Aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
[0194] Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term).
[0195] The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
[0196] Any use of the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
[0197] Any use of the phrase “equal” or “the same” in reference to two values (e.g., distances, widths, etc.) means that two values are the same within manufacturing tolerances. Thus, two values being equal, or the same, may mean that the two values are different from one another by ±5%.
[0198] The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B,” when used in conjunction with open-ended language such as “comprising,” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[0199] As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e., “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
[0200] The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof, is meant to encompass the items listed thereafter and additional items.
[0201] Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The invention is limited only as defined by the following claims and the equivalents thereto.

Claims

What is claimed is:
1. A computerized system for providing input to an extended reality system, the computerized system comprising:
one or more neuromuscular sensors configured to detect a plurality of neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices; and
at least one computer processor programmed to:
determine that the computerized system is in a mode configured to provide an input to the extended reality system;
identify the input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals; and
provide the identified input to the extended reality system.
2. The computerized system of claim 1, wherein the mode is determined based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
3. The computerized system of claim 1, wherein the mode is determined based on a gesture detected from the user, wherein the gesture is identified based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
4. The computerized system of claim 1, wherein the mode is determined by:
receiving a selection of the mode from a user interface displayed in an extended reality environment provided by the extended reality system, wherein the user interface is configured to identify a plurality of modes from which the user may select.
5. The computerized system of claim 1, wherein the mode is determined based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
6. The computerized system of claim 1, wherein the mode is determined in response to receiving a signal from the extended reality system.
7. The computerized system of claim 6, wherein the signal is generated by the extended reality system in response to detection of an event for which input within an extended reality environment provided by the extended reality system is desired.
8. The computerized system of claim 1, wherein the mode comprises a typing mode and wherein identifying the input comprises:
identifying one or more tapping or typing actions based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
9. The computerized system of claim 8, further comprising a physical keyboard or a virtual keyboard.
10. The computerized system of claim 1, wherein the mode comprises a writing mode and wherein identifying the input comprises:
identifying one or more writing actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
11. The computerized system of claim 10, further comprising a physical stylus, physical writing implement, virtual stylus and/or virtual writing implement.
12. The computerized system of claim 10, wherein the one or more writing actions are identified as detected in mid-air.
13. The computerized system of claim 1, wherein the mode comprises a drawing mode and wherein identifying the input comprises:
identifying one or more drawing actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
14. The computerized system of claim 13, wherein the one or more drawing actions detected from the user comprise a plurality of line segments and/or curves.
15. The computerized system of claim 13, wherein identifying the input for the drawing mode comprises identifying input as drawings and/or text.
16. The computerized system of claim 15, wherein the at least one computer processor is further programmed to combine drawing and text inputs such that the text overlays or annotates the drawing.
17. The computerized system of claim 1, wherein the mode comprises a one-handed mode and wherein identifying the input comprises:
identifying one or more one-handed actions as detected from the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
18. The computerized system of claim 1, wherein the extended reality system is configured to display an indication of the identified input to the user.
19. The computerized system of claim 18, wherein the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions detected from the user.
20. The computerized system of claim 1, further comprising a visual display or interface configured to present one or more suggested or predicted words or phrases for text input.
21. The computerized system of claim 1, further comprising a visual display or interface configured to present one or more virtual ink marks associated with one or more strokes as detected from the user.
22. The computerized system of claim 1, further comprising a visual display or interface configured to present a drawing as identified based on one or more drawing actions detected from the user.
23. The computerized system of claim 1, further comprising:
at least one inertial measurement unit (IMU) sensor,
wherein identifying the input comprises identifying the input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
24. The computerized system of claim 1, further comprising:
at least one camera,
wherein identifying the input comprises identifying the input based, at least in part, on at least one output signal associated with the at least one camera.
25. The computerized system of claim 1, wherein the mode comprises a first mode, and wherein the at least one computer processor is further programmed to:
identify a second input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, wherein the second input is further identified based, at least in part, on a second mode; and
provide the identified second input to the extended reality system.
26. The computerized system of claim 1, wherein the one or more wearable devices comprises a first wearable device configured to detect neuromuscular signals from a first arm of the user and a second wearable device configured to detect neuromuscular signals from a second arm of the user.
27. The computerized system of claim 1, wherein the extended reality system is an augmented reality system.
28. A method performed by a computerized system for providing input to an extended reality system, the method comprising:
detecting, using one or more neuromuscular sensors arranged on one or more wearable devices, a plurality of neuromuscular signals from a user;
determining that the computerized system is in a mode configured to provide input to the extended reality system;
identifying the input based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals; and
providing the identified input to the extended reality system.
29. A system for providing one or more inputs to an extended reality (XR) system, the system comprising:
one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices;
one or more auxiliary sensors configured to detect information regarding a physiological state and/or behavior from the user; and
at least one computer processor programmed to:
determine that the system is in a mode configured to provide one or more inputs to the XR system,
associate the neuromuscular signals with the information detected from the one or more auxiliary sensors,
process the neuromuscular signals and/or the information detected from the one or more auxiliary sensors using one or more inference models;
identify the one or more inputs based on the processed neuromuscular signals and/or the processed information detected from the one or more auxiliary sensors; and
provide the identified one or more inputs to the XR system.
30. A kit for use with an extended reality (XR) system, the kit comprising:
one or more neuromuscular sensors configured to detect neuromuscular signals from a user, wherein the one or more neuromuscular sensors are arranged on one or more wearable devices;
one or more auxiliary sensors configured to detect information regarding a physiological state and/or behavior from the user; and
at least one storage medium storing instructions that, when executed by at least one computer processor, cause the at least one computer processor to:
process the neuromuscular signals from the neuromuscular sensors,
process the information detected from the one or more auxiliary sensors,
identify one or more user inputs based on the processed neuromuscular signals and/or the processed information detected from the one or more auxiliary sensors, and
communicate the identified one or more user inputs to the XR system.
31. A computerized system for providing input to an augmented reality system based, at least in part, on neuromuscular signals, the system comprising:
a plurality of neuromuscular sensors configured to record a plurality of neuromuscular signals from a user, wherein the plurality of neuromuscular sensors are arranged on one or more wearable devices; and
at least one computer processor programmed to:
determine that the computerized system is in a mode configured to provide input including text to the augmented reality system;
identify, based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and
provide the identified input to the augmented reality system.
32. The computerized system of claim 31, wherein the mode comprises a typing mode and wherein identifying the input comprises:
identifying one or more tapping or typing actions based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and
identifying text input for the typing mode based on the one or more tapping or typing actions.
33. The computerized system of claim 32, wherein identifying the one or more tapping or typing actions comprises:
identifying the one or more tapping or typing actions on a surface of a physical keyboard.
34. The computerized system of claim 32, wherein identifying the one or more tapping or typing actions comprises:
identifying the one or more tapping or typing actions on a surface that has a virtual keyboard projected thereon by the augmented reality system.
35. The computerized system of claim 31, wherein the mode comprises a writing mode and wherein identifying the input comprises:
identifying one or more writing actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and
identifying text input for the writing mode based on the one or more writing actions.
36. The computerized system of claim 35, wherein identifying the one or more writing actions performed by the user comprises:
identifying the one or more writing actions performed on a surface with a physical stylus, a physical writing implement, or fingertip or fingertips of the user.
37. The computerized system of claim 35, wherein identifying the one or more writing actions performed by the user comprises:
identifying the one or more writing actions performed on a surface with a virtual stylus or virtual writing implement.
38. The computerized system of claim 35, wherein identifying the one or more writing actions performed by the user comprises:
identifying the one or more writing actions performed in mid-air.
39. The computerized system of claim 31, wherein the mode comprises a drawing mode and wherein identifying the input comprises:
identifying one or more drawing actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and
identifying the input for the drawing mode based on the one or more drawing actions.
40. The computerized system of claim 39, wherein the input for the drawing mode comprises a plurality of line segments and/or curves.
41. The computerized system of claim 39, wherein the input for the drawing mode comprises input determined based on a sequence of pixel positions controlled by the one or more drawing actions performed by the user.
42. The computerized system of claim 39, wherein identifying the input for the drawing mode comprises:
identifying a drawing based on the one or more drawing actions performed by the user; and
identifying the text based on the one or more drawing actions performed by the user.
43. The computerized system of claim 42, wherein the at least one computer processor is further programmed to:
combine the drawing and the text such that the text overlays or annotates the drawing.
44. The computerized system of claim 39, wherein identifying the input for the drawing mode comprises:
identifying a drawing based on the one or more drawing actions performed by the user; and
identifying the text from the drawing.
45. The computerized system of claim 31, wherein the mode comprises a one-handed mode and wherein identifying the input comprises:
identifying one or more one-handed actions performed by the user based, at least in part, on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and
identifying the input for the one-handed mode based on the one or more one-handed actions.
46. The computerized system of claim 31, wherein determining that the computerized system is in the mode configured to provide the input comprises:
receiving a user selection of the mode.
47. The computerized system of claim 46, wherein receiving the user selection of the mode comprises:
receiving the user selection from a user interface displayed in an augmented reality environment provided by the augmented reality system, wherein the user interface is configured to identify a plurality of modes from which the user may select.
48. The computerized system of claim 31, wherein determining that the computerized system is in the mode configured to provide the input comprises:
determining the mode from the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals.
49. The computerized system of claim 48, wherein determining the mode comprises:
identifying at least one gesture performed by the user based on the plurality of neuromuscular signals and/or the information based on the plurality of neuromuscular signals; and
determining the mode corresponding to the at least one gesture.
50. The computerized system of claim 48, wherein determining the mode comprises:
determining the mode based on one or more typing, writing, drawing actions, or one-handed actions performed by the user.
51. The computerized system of claim 31, wherein determining that the computerized system is in the mode configured to provide the input comprises:
determining the mode in response to receiving a signal from the augmented reality system.
52. The computerized system of claim 51, wherein the signal is generated at the augmented reality system in response to detection of an event for which input within an augmented reality environment provided by the augmented reality system is desired.
53. The computerized system of claim 31, wherein the augmented reality system is configured to display an indication of the identified input to the user.
54. The computerized system of claim 53, wherein the indication of the identified input comprises text input identified based on one or more typing, writing, drawing actions, or one-handed actions performed by the user.
55. The computerized system of claim 54, wherein the indication of the identified input comprises a listing of one or more suggested or predicted words or phrases for the text input.
56. The computerized system of claim 53, wherein the indication of the identified input comprises one or more virtual ink marks associated with one or more strokes made by a writing implement.
57. The computerized system of claim 53, wherein the indication of the identified input comprises a drawing identified based on one or more drawing actions performed by the user.
58. The computerized system of claim 53, wherein the indication is displayed via a user interface presented within an augmented reality environment provided by the augmented reality system.
59. The computerized system of claim 53, wherein the indication is rendered onto a surface that the user is interacting with by the augmented reality system.
60. The computerized system of claim 31, further comprising:
at least one inertial measurement unit (IMU) sensor,
wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one IMU sensor.
61. The computerized system of claim 31, further comprising:
at least one camera,
wherein identifying the text input comprises identifying the text input based, at least in part, on at least one output signal associated with the at least one camera.
62. The computerized system of claim 31, wherein the mode comprises a first mode, and wherein the at least one computer processor is further programmed to:
identify, based at least in part, on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, a second input, wherein the second input is further identified based, at least in part, on a second mode; and
provide the identified second input to the augmented reality system.
63. The computerized system of claim 31, wherein the identified input provided to the augmented reality system comprises input identified from a plurality of sources, wherein the plurality of sources include the plurality of neuromuscular signals and at least one source other than the plurality of neuromuscular signals.
64. A method performed by a computerized system for providing input to an augmented reality system based, at least in part, on neuromuscular signals, the method comprising:
recording, using a plurality of neuromuscular sensors arranged on one or more wearable devices, a plurality of neuromuscular signals from a user;
determining that the computerized system is in a mode configured to provide input including text to the augmented reality system;
identifying, based at least in part on the plurality of neuromuscular signals and/or information based on the plurality of neuromuscular signals, the input, wherein the input is further identified based, at least in part, on the mode; and
providing the identified input to the augmented reality system.
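
For illustration only, and not as a description of any disclosed or claimed implementation: the control flow recited in the independent claims above (determining that the system is in an input mode, identifying input from detected neuromuscular signals, and providing the identified input to an extended reality system) can be pictured with a minimal Python sketch. Every name below (InputMode, identify_input, provide_to_xr, process_frame) is hypothetical, and the decoding step is a stub standing in for a trained inference model.

```python
# Hypothetical sketch of the claimed control flow; all names are illustrative.
from enum import Enum, auto
from typing import Sequence


class InputMode(Enum):
    TYPING = auto()
    WRITING = auto()
    DRAWING = auto()
    ONE_HANDED = auto()


def identify_input(mode: InputMode, emg_samples: Sequence[float]) -> str:
    """Placeholder for an inference step that maps neuromuscular signals,
    interpreted under the active mode, to text or stroke input."""
    # A real system would run a trained model here; this stub only reports
    # what it would have decoded.
    return f"<{mode.name.lower()} input decoded from {len(emg_samples)} samples>"


def provide_to_xr(identified_input: str) -> None:
    """Placeholder for forwarding the identified input to the XR system."""
    print("XR system received:", identified_input)


def process_frame(mode: InputMode, emg_samples: Sequence[float]) -> None:
    # Claimed sequence: the system is in a mode configured to provide input,
    # the input is identified from the neuromuscular signals, and the
    # identified input is provided to the extended reality system.
    provide_to_xr(identify_input(mode, emg_samples))


if __name__ == "__main__":
    process_frame(InputMode.TYPING, [0.01, -0.02, 0.04])
```

The sketch fixes only the order of operations; how the mode is entered and how the signals are decoded are left open, as in the claims.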
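Similarly hypothetical, the gesture-based mode determination of claims 3 and 48-50 can be sketched as a lookup from a classified gesture to an input mode; the gesture labels, the toy threshold classifier, and the mapping table are assumptions made purely for illustration.

```python
# Hypothetical mapping from a detected gesture to an input mode.
from typing import Optional

GESTURE_TO_MODE = {
    "pinch": "typing",
    "pen_grip": "writing",
    "point": "drawing",
    "fist": "one_handed",
}


def detect_gesture(emg_window: list[float]) -> Optional[str]:
    """Stand-in for a gesture classifier operating on a window of
    neuromuscular samples; returns a gesture label or None."""
    if not emg_window:
        return None
    # Toy rule purely for illustration: threshold the mean activation.
    mean = sum(emg_window) / len(emg_window)
    return "pen_grip" if mean > 0.5 else "pinch"


def determine_mode(emg_window: list[float], current_mode: str) -> str:
    gesture = detect_gesture(emg_window)
    # Only switch modes when a recognized mode-selection gesture is seen.
    return GESTURE_TO_MODE.get(gesture, current_mode)


if __name__ == "__main__":
    print(determine_mode([0.7, 0.8, 0.6], current_mode="typing"))  # prints "writing"
```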
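Finally, a hedged sketch of the sensor-fusion arrangement suggested by claims 23, 24, 29, and 30, in which neuromuscular signals are associated with auxiliary IMU or camera information before a single inference step; the concatenation-based fusion and the linear scoring stub are illustrative assumptions, not the claimed inference models.

```python
# Hypothetical fusion of neuromuscular samples with auxiliary IMU/camera features.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class SensorFrame:
    emg: Sequence[float]      # neuromuscular channel activations
    imu: Sequence[float]      # e.g., orientation or acceleration features
    camera: Sequence[float]   # e.g., hand-keypoint features


def fuse(frame: SensorFrame) -> list[float]:
    # Associate the neuromuscular signals with the auxiliary information by
    # concatenating them into one feature vector for the inference model.
    return [*frame.emg, *frame.imu, *frame.camera]


def infer_input(features: list[float], weights: list[float]) -> float:
    # Stand-in for "one or more inference models": a single dot product
    # producing a score that a real system would decode into an input event.
    return sum(f * w for f, w in zip(features, weights))


if __name__ == "__main__":
    frame = SensorFrame(emg=[0.2, 0.4], imu=[0.1], camera=[0.9, 0.3])
    features = fuse(frame)
    print(infer_input(features, weights=[1.0] * len(features)))
```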
PCT/US2019/052151 2018-09-20 2019-09-20 Neuromuscular text entry, writing and drawing in augmented reality systems WO2020061451A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19861903.3A EP3853698A4 (en) 2018-09-20 2019-09-20 Neuromuscular text entry, writing and drawing in augmented reality systems
CN201980062920.7A CN112789577B (en) 2018-09-20 2019-09-20 Neuromuscular text input, writing and drawing in augmented reality systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862734138P 2018-09-20 2018-09-20
US62/734,138 2018-09-20

Publications (1)

Publication Number Publication Date
WO2020061451A1 (en) 2020-03-26

Family

ID=69884192

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/052151 WO2020061451A1 (en) 2018-09-20 2019-09-20 Neuromuscular text entry, writing and drawing in augmented reality systems

Country Status (4)

Country Link
US (1) US11567573B2 (en)
EP (1) EP3853698A4 (en)
CN (1) CN112789577B (en)
WO (1) WO2020061451A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11481030B2 (en) 2019-03-29 2022-10-25 Meta Platforms Technologies, Llc Methods and apparatus for gesture detection and classification
US11481031B1 (en) 2019-04-30 2022-10-25 Meta Platforms Technologies, Llc Devices, systems, and methods for controlling computing devices via neuromuscular signals of users
US11493993B2 (en) 2019-09-04 2022-11-08 Meta Platforms Technologies, Llc Systems, methods, and interfaces for performing inputs based on neuromuscular control
US11567573B2 (en) 2018-09-20 2023-01-31 Meta Platforms Technologies, Llc Neuromuscular text entry, writing and drawing in augmented reality systems
US11635736B2 (en) 2017-10-19 2023-04-25 Meta Platforms Technologies, Llc Systems and methods for identifying biological structures associated with neuromuscular source signals
US11644799B2 (en) 2013-10-04 2023-05-09 Meta Platforms Technologies, Llc Systems, articles and methods for wearable electronic devices employing contact sensors
US11666264B1 (en) 2013-11-27 2023-06-06 Meta Platforms Technologies, Llc Systems, articles, and methods for electromyography sensors
US11797087B2 (en) 2018-11-27 2023-10-24 Meta Platforms Technologies, Llc Methods and apparatus for autocalibration of a wearable electrode sensor system
US11868531B1 (en) 2021-04-08 2024-01-09 Meta Platforms Technologies, Llc Wearable device providing for thumb-to-finger-based input gestures detected based on neuromuscular signals, and systems and methods of use thereof
CN117389441A (en) * 2023-11-23 2024-01-12 首都医科大学附属北京天坛医院 Writing imagination Chinese character track determining method and system based on visual following assistance
US11907423B2 (en) 2019-11-25 2024-02-20 Meta Platforms Technologies, Llc Systems and methods for contextualized interactions with an environment
US11921471B2 (en) 2013-08-16 2024-03-05 Meta Platforms Technologies, Llc Systems, articles, and methods for wearable devices having secondary power sources in links of a band for providing secondary power in addition to a primary power source
US11961494B1 (en) 2019-03-29 2024-04-16 Meta Platforms Technologies, Llc Electromagnetic interference reduction in extended reality environments

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10970936B2 (en) * 2018-10-05 2021-04-06 Facebook Technologies, Llc Use of neuromuscular signals to provide enhanced interactions with physical objects in an augmented reality environment
US20220374085A1 (en) * 2021-05-19 2022-11-24 Apple Inc. Navigating user interfaces using hand gestures
CN115857706B (en) * 2023-03-03 2023-06-06 浙江强脑科技有限公司 Character input method and device based on facial muscle state and terminal equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170067873A (en) * 2014-10-16 2017-06-16 후아웨이 테크놀러지 컴퍼니 리미티드 Method, device, and system for processing touch interaction
US20180024635A1 (en) * 2016-07-25 2018-01-25 Patrick Kaifosh Methods and apparatus for predicting musculo-skeletal position information using wearable autonomous sensors
US20180093181A1 (en) * 2016-09-30 2018-04-05 Disney Enterprises, Inc. Virtual blaster
US20180153430A1 (en) 2016-12-02 2018-06-07 Pison Technology, Inc. Detecting and Using Body Tissue Electrical Signals
US20180247443A1 (en) * 2017-02-28 2018-08-30 International Business Machines Corporation Emotional analysis and depiction in virtual reality

Family Cites Families (629)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1411995A (en) 1919-06-20 1922-04-04 Raymond W Dull Link belt
US3580243A (en) 1968-10-21 1971-05-25 Marquette Electronics Inc Means and method for subtracting dc noise from electrocardiographic signals
US3620208A (en) 1969-11-03 1971-11-16 Atomic Energy Commission Ekg amplifying electrode pickup
US3735425A (en) 1971-02-10 1973-05-29 Us Of America The Secretary Of Myoelectrically controlled prothesis
US3880146A (en) 1973-06-04 1975-04-29 Donald B Everett Noise compensation techniques for bioelectric potential sensing
US4055168A (en) 1976-09-21 1977-10-25 The Rockefeller University Posture training device
LU84250A1 (en) 1982-07-01 1984-03-22 Mardice Holding METHOD AND DEVICE FOR THE CONTACTLESS MEASUREMENT OF VOLTAGE DIFFERENCES IN LIVING ORGANISMS
EP0210199A1 (en) 1985-02-05 1987-02-04 MILLES, Victor Alexander Method for fabricating jewellery parts or wrist watches
US5003978A (en) 1985-08-21 1991-04-02 Technology 21, Inc. Non-polarizable dry biomedical electrode
IL78244A0 (en) 1986-03-24 1986-07-31 Zvi Kamil Instrumentation amplifier arrangement
DE3661161D1 (en) 1986-04-01 1988-12-15 Altop Sa Watch mounted on a clip
JPH01126692A (en) 1987-07-24 1989-05-18 Univ Leland Stanford Jr Biopotential digital controller for music and video
USD322227S (en) 1989-03-23 1991-12-10 North American Watch Company Watch
US5625577A (en) 1990-12-25 1997-04-29 Shukyohojin, Kongo Zen Sohonzan Shorinji Computer-implemented motion analysis method using dynamics
US5081852A (en) 1991-02-14 1992-01-21 Cox Michael F Display bracelet
JP3103427B2 (en) 1992-04-01 2000-10-30 ダイヤメディカルシステム株式会社 Bioelectricity detector
EP0648090A4 (en) 1992-07-06 1995-11-02 James F Kramer Determination of kinematically constrained multi-articulated structures.
US5251189A (en) 1992-10-16 1993-10-05 Timex Corporation Wristwatch radiotelephone
JPH06216475A (en) 1993-01-21 1994-08-05 Matsushita Electric Ind Co Ltd Flexible board
USD348660S (en) 1993-04-29 1994-07-12 Micro-Integration Corp. Hand-held computer input device
US5482051A (en) 1994-03-10 1996-01-09 The University Of Akron Electromyographic virtual reality system
WO1995027341A1 (en) 1994-04-04 1995-10-12 Motorola Inc. Shielded circuit assembly and method for forming same
DE4412278A1 (en) 1994-04-09 1995-10-12 Bosch Gmbh Robert Circuit board with both rigid and flexible regions
US6032530A (en) 1994-04-29 2000-03-07 Advantedge Systems Inc. Biofeedback system for sensing body motion and flexure
US5462065A (en) 1994-08-17 1995-10-31 Cusimano; Maryrose Integrated movement analyziing system
US5605059A (en) 1995-03-09 1997-02-25 Woodward; Robin Sleeved bangle bracelet
US6238338B1 (en) 1999-07-19 2001-05-29 Altec, Inc. Biosignal monitoring system and method
JP3141737B2 (en) 1995-08-10 2001-03-05 株式会社セガ Virtual image generation apparatus and method
US6066794A (en) 1997-01-21 2000-05-23 Longo; Nicholas C. Gesture synthesizer for electronic sound device
US5683404A (en) 1996-06-05 1997-11-04 Metagen, Llc Clamp and method for its use
EP0959444A4 (en) 1996-08-14 2005-12-07 Nurakhmed Nurislamovic Latypov Method for following and imaging a subject's three-dimensional position and orientation, method for presenting a virtual space to a subject, and systems for implementing said methods
US6009210A (en) 1997-03-05 1999-12-28 Digital Equipment Corporation Hands-free interface to a virtual reality environment using head tracking
US6880364B1 (en) 1998-04-23 2005-04-19 Michael F. Vidolin Friendship band with exchangeable closed loop members
AU5900299A (en) 1998-08-24 2000-03-14 Emory University Method and apparatus for predicting the onset of seizures based on features derived from signals indicative of brain activity
WO2000017848A1 (en) 1998-09-22 2000-03-30 Vega Vista, Inc. Intuitive control of portable data displays
US6745062B1 (en) 1998-10-05 2004-06-01 Advanced Imaging Systems, Inc. Emg electrode apparatus and positioning system
US6244873B1 (en) 1998-10-16 2001-06-12 At&T Corp. Wireless myoelectric control apparatus and methods
US6774885B1 (en) 1999-01-20 2004-08-10 Motek B.V. System for dynamic registration, evaluation, and correction of functional human behavior
US7640007B2 (en) 1999-02-12 2009-12-29 Fisher-Rosemount Systems, Inc. Wireless handheld communicator in a process control environment
US6411843B1 (en) 1999-05-28 2002-06-25 Respironics, Inc. Method and apparatus for producing a model EMG signal from a measured EMG signal
US6972734B1 (en) 1999-06-11 2005-12-06 Canon Kabushiki Kaisha Mixed reality apparatus and mixed reality presentation method
CA2276962A1 (en) 1999-07-07 2001-01-07 Universite De Montreal Electromyogram signal analysis method and system for use with electrode array
US6807438B1 (en) 1999-08-26 2004-10-19 Riccardo Brun Del Re Electric field sensor
JP4168221B2 (en) 1999-09-06 2008-10-22 株式会社島津製作所 Body-mounted display system
US6527711B1 (en) 1999-10-18 2003-03-04 Bodymedia, Inc. Wearable human physiological data sensors and reporting system therefor
WO2001035173A1 (en) 1999-11-11 2001-05-17 The Swatch Group Management Services Ag Electronic wrist watch comprising an integrated circuit incorporated in a flexible band
US6771294B1 (en) 1999-12-29 2004-08-03 Petri Pulli User interface
GB0004688D0 (en) 2000-02-28 2000-04-19 Radley Smith Philip J Bracelet
EP2324761A3 (en) 2000-04-17 2014-06-18 Adidas AG Systems and methods for ambulatory monitoring of physiological signals
US6683600B1 (en) * 2000-04-19 2004-01-27 Microsoft Corporation Adaptive input pen mode selection
US6510333B1 (en) 2000-05-16 2003-01-21 Mark J. Licata Sensor for biopotential measurements
US6720984B1 (en) 2000-06-13 2004-04-13 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Characterization of bioelectric potentials
AU2001278318A1 (en) 2000-07-24 2002-02-05 Jean Nicholson Prudent Modeling human beings by symbol manipulation
US20030036691A1 (en) 2000-08-10 2003-02-20 Stanaland Thomas G. Capacitively coupled electrode system with variable capacitance for sensing potentials at the surface of tissue
US6487906B1 (en) 2000-09-18 2002-12-03 Advantedge Systems Inc Flexible film sensor system for monitoring body motion
WO2002037827A2 (en) 2000-10-30 2002-05-10 Naval Postgraduate School Method and apparatus for motion tracking of an articulated rigid body
US6743982B2 (en) 2000-11-29 2004-06-01 Xerox Corporation Stretchable interconnects using stress gradient films
JP2004527815A (en) 2000-12-18 2004-09-09 ヒューマン バイオニクス エルエルシー、 Activity initiation method and system based on sensed electrophysiological data
WO2002065904A1 (en) 2001-02-23 2002-08-29 Cordless Antistatic Research Inc. Enhanced pickup bio-electrode
JP2002287869A (en) 2001-03-26 2002-10-04 System Lsi Kk Discrimination system for myogenic potential signal, and input device using myogenic potential signal
JP2002358149A (en) 2001-06-01 2002-12-13 Sony Corp User inputting device
USD459352S1 (en) 2001-08-10 2002-06-25 Michael C. Giovanniello Wireless mouse wristband
US20030051505A1 (en) 2001-09-14 2003-03-20 Robertson Catrina M. Selectively self-adjustable jewelry item and method of making same
US6755795B2 (en) 2001-10-26 2004-06-29 Koninklijke Philips Electronics N.V. Selectively applied wearable medical sensors
US6865409B2 (en) 2001-11-07 2005-03-08 Kinesense, Inc. Surface electromyographic electrode assembly
JP4391717B2 (en) 2002-01-09 2009-12-24 富士通マイクロエレクトロニクス株式会社 Contactor, manufacturing method thereof and contact method
AU2003217253A1 (en) 2002-01-25 2003-09-02 Intellipatch, Inc. Evaluation of a patient and prediction of chronic symptoms
JP2003220040A (en) 2002-01-31 2003-08-05 Seiko Instruments Inc Biological information-observing device
JP2003255993A (en) 2002-03-04 2003-09-10 Ntt Docomo Inc System, method, and program for speech recognition, and system, method, and program for speech synthesis
CA2379268A1 (en) 2002-03-26 2003-09-26 Hans Kolpin Skin impedance matched biopotential electrode
ATE384474T1 (en) 2002-03-29 2008-02-15 Koninkl Philips Electronics Nv A PORTABLE MONITORING SYSTEM AND MANUFACTURING METHOD FOR A PORTABLE MONITORING SYSTEM
US6942621B2 (en) 2002-07-11 2005-09-13 Ge Medical Systems Information Technologies, Inc. Method and apparatus for detecting weak physiological signals
US6984208B2 (en) 2002-08-01 2006-01-10 The Hong Kong Polytechnic University Method and apparatus for sensing body gesture, posture and movement
US7409242B2 (en) 2002-09-11 2008-08-05 National Institute Of Information And Communications Technology Active muscle display device
EP1408443B1 (en) 2002-10-07 2006-10-18 Sony France S.A. Method and apparatus for analysing gestures produced by a human, e.g. for commanding apparatus by gesture recognition
KR100506084B1 (en) 2002-10-24 2005-08-05 삼성전자주식회사 Apparatus and method for searching acupuncture point
CA2415173A1 (en) 2002-12-09 2004-06-09 Thomas Hemmerling Neuromuscular monitoring using phonomyography
US7491892B2 (en) 2003-03-28 2009-02-17 Princeton University Stretchable and elastic interconnects
US7028507B2 (en) 2003-04-03 2006-04-18 Broadway Entertainment, Inc. Article of jewelry
US20040194500A1 (en) 2003-04-03 2004-10-07 Broadway Entertainment, Inc. Article of jewelry
US7265298B2 (en) 2003-05-30 2007-09-04 The Regents Of The University Of California Serpentine and corduroy circuits to enhance the stretchability of a stretchable electronic device
KR20060023149A (en) 2003-06-12 2006-03-13 컨트롤 바이오닉스 Method, system, and software for interactive communication and analysis
US7022919B2 (en) 2003-06-30 2006-04-04 Intel Corporation Printed circuit board trace routing method
WO2005006956A2 (en) 2003-07-09 2005-01-27 Medical Technologies Unlimited, Inc. Comprehensive neuromuscular profiler
ATE413902T1 (en) 2003-08-18 2008-11-15 Cardiac Pacemakers Inc PATIENT MONITORING SYSTEM
TWI240819B (en) 2003-08-21 2005-10-01 Toppoly Optoelectronics Corp Flexible printed circuit board (FPC) for liquid crystal display (LCD) module
JP4178186B2 (en) 2003-08-21 2008-11-12 国立大学法人 筑波大学 Wearable motion assist device, control method for wearable motion assist device, and control program
CN1838933B (en) 2003-08-21 2010-12-08 国立大学法人筑波大学 Wearable action-assist device, and method and program for controlling wearable action-assist device
US7559902B2 (en) 2003-08-22 2009-07-14 Foster-Miller, Inc. Physiological monitoring garment
US7565295B1 (en) 2003-08-28 2009-07-21 The George Washington University Method and apparatus for translating hand gestures
US7574253B2 (en) 2003-09-26 2009-08-11 Northwestern University Signal processing using non-linear regression with a sinusoidal model
US20050070227A1 (en) 2003-09-30 2005-03-31 Chih-Hsiang Shen Detecting and actuating method of bluetooth devices and a control system thereof
US7961909B2 (en) 2006-03-08 2011-06-14 Electronic Scripting Products, Inc. Computer interface employing a manipulated object with absolute pose detection component and a display
USD502661S1 (en) 2004-02-23 2005-03-08 Broadway Entertainment, Inc. Bracelet
USD503646S1 (en) 2004-02-23 2005-04-05 Broadway Entertainment, Inc. Bracelet
USD502662S1 (en) 2004-02-23 2005-03-08 Broadway Entertainment, Inc. Bracelet
WO2005083546A1 (en) 2004-02-27 2005-09-09 Simon Richard Daniel Wearable modular interface strap
KR100541958B1 (en) 2004-04-21 2006-01-10 삼성전자주식회사 Flexible printed circuit board
US7173437B2 (en) 2004-06-10 2007-02-06 Quantum Applied Science And Research, Inc. Garment incorporating embedded physiological sensors
WO2005122900A1 (en) 2004-06-16 2005-12-29 The University Of Tokyo Mascular strength acquiring method and device based on musculo skeletal model
US8061160B2 (en) 2004-08-17 2011-11-22 Carissa Stinespring Adjustable fashion mechanism
KR100594117B1 (en) 2004-09-20 2006-06-28 삼성전자주식회사 Apparatus and method for inputting key using biosignal in HMD information terminal
KR100680023B1 (en) 2004-12-06 2007-02-07 한국전자통신연구원 Cellular phone input device using electromyography and control method thereof
US7254516B2 (en) 2004-12-17 2007-08-07 Nike, Inc. Multi-sensor monitoring of athletic performance
US7901368B2 (en) 2005-01-06 2011-03-08 Braingate Co., Llc Neurally controlled patient ambulation system
WO2006086504A2 (en) 2005-02-09 2006-08-17 Alfred E. Mann Institute For Biomedical Engineering At The University Of Southern California Method and system for training adaptive control of limb movement
US8702629B2 (en) 2005-03-17 2014-04-22 Great Lakes Neuro Technologies Inc. Movement disorder recovery system and method for continuous monitoring
WO2006105094A2 (en) 2005-03-29 2006-10-05 Duke University Sensor system for identifying and tracking movements of multiple sources
US20070132785A1 (en) 2005-03-29 2007-06-14 Ebersole John F Jr Platform for immersive gaming
USD535401S1 (en) 2005-06-21 2007-01-16 Biocare Systems, Inc. Heat and light therapy device
US7428516B2 (en) 2005-06-23 2008-09-23 Microsoft Corporation Handwriting recognition using neural networks
US8190249B1 (en) 2005-08-01 2012-05-29 Infinite Biomedical Technologies, Llc Multi-parametric quantitative analysis of bioelectrical signals
US7086218B1 (en) 2005-08-18 2006-08-08 M & J - R & R Grosbard, Inc. Linked ring structures
US7725147B2 (en) 2005-09-29 2010-05-25 Nellcor Puritan Bennett Llc System and method for removing artifacts from waveforms
US7733224B2 (en) 2006-06-30 2010-06-08 Bao Tran Mesh network personal emergency response appliance
US7271774B2 (en) 2005-10-21 2007-09-18 Suunto Oy Electronic wearable device
US7517725B2 (en) 2005-11-28 2009-04-14 Xci, Inc. System and method for separating and packaging integrated circuits
US8280503B2 (en) 2008-10-27 2012-10-02 Michael Linderman EMG measured during controlled hand movement for biometric analysis, medical diagnosis and related analysis
US8280169B2 (en) * 2005-12-21 2012-10-02 Michael Linderman Recordation of handwriting and hand movement using electromyography
US7365647B2 (en) 2005-12-23 2008-04-29 Avinoam Nativ Kinesthetic training system with composite feedback
USD543212S1 (en) 2006-01-04 2007-05-22 Sony Computer Entertainment Inc. Object for interfacing with a computer program
US7809435B1 (en) 2006-01-11 2010-10-05 Iq Biolabs, Inc. Adjustable wireless electromyography sensor and system
JP4826459B2 (en) 2006-01-12 2011-11-30 株式会社豊田中央研究所 Musculoskeletal model creation method, human stress / strain estimation method, program, and recording medium
US8762733B2 (en) 2006-01-30 2014-06-24 Adidas Ag System and method for identity confirmation using physiologic biometrics to determine a physiologic fingerprint
US7580742B2 (en) 2006-02-07 2009-08-25 Microsoft Corporation Using electroencephalograph signals for task classification and activity recognition
US7827000B2 (en) 2006-03-03 2010-11-02 Garmin Switzerland Gmbh Method and apparatus for estimating a motion parameter
JP4937341B2 (en) 2006-04-04 2012-05-23 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Expression pen
US8311623B2 (en) 2006-04-15 2012-11-13 The Board Of Trustees Of The Leland Stanford Junior University Systems and methods for estimating surface electromyography
US7636549B2 (en) 2006-04-21 2009-12-22 Abbott Medical Optics Inc. Automated bonding for wireless devices
US20090112080A1 (en) 2006-04-21 2009-04-30 Quantum Applied Science & Research, Inc. System for Measuring Electric Signals
US7558622B2 (en) 2006-05-24 2009-07-07 Bao Tran Mesh network stroke monitoring appliance
US7539533B2 (en) 2006-05-16 2009-05-26 Bao Tran Mesh network monitoring appliance
WO2007137047A2 (en) 2006-05-16 2007-11-29 Greer Douglas S Modeling the neocortex
US7661068B2 (en) 2006-06-12 2010-02-09 Microsoft Corporation Extended eraser functions
US9405372B2 (en) 2006-07-14 2016-08-02 Ailive, Inc. Self-contained inertial navigation system for interactive control using movable controllers
GB0614261D0 (en) 2006-07-18 2006-08-30 Univ Sussex The Electric Potential Sensor
US7844310B2 (en) 2006-07-20 2010-11-30 L3 Communications Corporation Wearable communication device with contoured back
US7848797B2 (en) 2006-08-17 2010-12-07 Neurometrix, Inc. Motor unit number estimation (MUNE) for the assessment of neuromuscular function
US8437844B2 (en) 2006-08-21 2013-05-07 Holland Bloorview Kids Rehabilitation Hospital Method, system and apparatus for real-time classification of muscle signals from self-selected intentional movements
JP4267648B2 (en) 2006-08-25 2009-05-27 株式会社東芝 Interface device and method thereof
US8212859B2 (en) 2006-10-13 2012-07-03 Apple Inc. Peripheral treatment for head-mounted displays
US7885732B2 (en) 2006-10-25 2011-02-08 The Boeing Company Systems and methods for haptics-enabled teleoperation of vehicles and other devices
US8082149B2 (en) 2006-10-26 2011-12-20 Biosensic, Llc Methods and apparatuses for myoelectric-based speech processing
US20080136775A1 (en) 2006-12-08 2008-06-12 Conant Carson V Virtual input device for computing
US9311528B2 (en) * 2007-01-03 2016-04-12 Apple Inc. Gesture learning
US20090265671A1 (en) 2008-04-21 2009-10-22 Invensense Mobile devices with motion gesture recognition
US20080221487A1 (en) 2007-03-07 2008-09-11 Motek Bv Method for real time interactive visualization of muscle forces and joint torques in the human body
EP1970005B1 (en) 2007-03-15 2012-10-03 Xsens Holding B.V. A system and a method for motion tracking using a calibration unit
US8583206B2 (en) 2007-04-24 2013-11-12 Koninklijke Philips N.V. Sensor arrangement and method for monitoring physiological parameters
FR2916069B1 (en) 2007-05-11 2009-07-31 Commissariat Energie Atomique PROCESSING METHOD FOR MOTION CAPTURE OF ARTICULATED STRUCTURE
US8718742B2 (en) 2007-05-24 2014-05-06 Hmicro, Inc. Integrated wireless patch for physiological monitoring
US8504146B2 (en) 2007-06-29 2013-08-06 The Regents Of The University Of California Multi-channel myoelectrical control using single muscle
US20090007597A1 (en) 2007-07-02 2009-01-08 Hanevold Gail F Body attached band with removal visual image pockets
CA2693193A1 (en) 2007-07-16 2009-01-22 Sunrise Medical Hhg Inc. Physiological data collection system
DE102007044554B3 (en) 2007-07-18 2009-07-16 Siemens Ag Sensor band with optical sensor fiber, sensor with this sensor band and method for calibrating an optical sensor fiber
US8726194B2 (en) * 2007-07-27 2014-05-13 Qualcomm Incorporated Item selection using enhanced control
JP5057070B2 (en) 2007-07-31 2012-10-24 株式会社エクォス・リサーチ ECG sensor
US7925100B2 (en) 2007-07-31 2011-04-12 Microsoft Corporation Tiled packaging of vector image data
US20090031757A1 (en) 2007-08-01 2009-02-05 Funki Llc Modular toy bracelet
JP4434247B2 (en) 2007-08-10 2010-03-17 ソニー株式会社 Remote controller, remote control system, and remote control method
EP2587345A3 (en) 2007-08-19 2013-06-26 Ringbow Ltd. Finger-worn devices and related methods of use
WO2009026289A2 (en) 2007-08-20 2009-02-26 Hmicro, Inc. Wearable user interface device, system, and method of use
WO2009042579A1 (en) * 2007-09-24 2009-04-02 Gesturetek, Inc. Enhanced interface for voice and video communications
US20090082692A1 (en) 2007-09-25 2009-03-26 Hale Kelly S System And Method For The Real-Time Evaluation Of Time-Locked Physiological Measures
US7714757B2 (en) 2007-09-26 2010-05-11 Medtronic, Inc. Chopper-stabilized analog-to-digital converter
US8343079B2 (en) 2007-10-18 2013-01-01 Innovative Surgical Solutions, Llc Neural monitoring sensor
US8159313B2 (en) 2007-10-22 2012-04-17 D-Wave Systems Inc. Systems, methods, and apparatus for electrical filters and input/output systems
FI20075798A0 (en) 2007-11-12 2007-11-12 Polar Electro Oy The electrode structure
GB0800144D0 (en) 2008-01-04 2008-02-13 Fitzpatrick Adam P Electrocardiographic device and method
US8355671B2 (en) 2008-01-04 2013-01-15 Kopin Corporation Method and apparatus for transporting video signal over Bluetooth wireless interface
US8456425B2 (en) 2008-01-30 2013-06-04 International Business Machines Corporation Self-adapting keypad
US10969917B2 (en) 2008-01-30 2021-04-06 Apple Inc. Auto scanning for multiple frequency stimulation multi-touch sensor panels
US8344998B2 (en) 2008-02-01 2013-01-01 Wimm Labs, Inc. Gesture-based power management of a wearable portable electronic device with display
US9597015B2 (en) 2008-02-12 2017-03-21 Portland State University Joint angle tracking with inertial sensors
US20110018754A1 (en) 2008-03-28 2011-01-27 Akira Tojima Remote operation apparatus, operation target apparatus, method for controlling remote operation apparatus, method for controlling operation target apparatus, and remote operation system
US20090251407A1 (en) 2008-04-03 2009-10-08 Microsoft Corporation Device interaction with combination of rings
US8469741B2 (en) 2008-05-01 2013-06-25 3M Innovative Properties Company Stretchable conductive connector
US20100030532A1 (en) 2008-06-12 2010-02-04 Jasbir Arora System and methods for digital human model prediction and simulation
JP2010000283A (en) 2008-06-23 2010-01-07 Shimadzu Corp Real-time simultaneous measuring system, real-time simultaneous measuring instrument, real-time simultaneous measuring method and program
US8207473B2 (en) 2008-06-24 2012-06-26 Imec Method for manufacturing a stretchable electronic device
US9037530B2 (en) 2008-06-26 2015-05-19 Microsoft Technology Licensing, Llc Wearable electromyography-based human-computer interface
US8170656B2 (en) 2008-06-26 2012-05-01 Microsoft Corporation Wearable electromyography-based controllers for human-computer interface
US8447704B2 (en) 2008-06-26 2013-05-21 Microsoft Corporation Recognizing gestures from forearm EMG signals
EP2296544A4 (en) 2008-07-07 2012-12-05 Heard Systems Pty Ltd A system for sensing electrophysiological signals
EP2321013A4 (en) 2008-07-29 2011-11-23 James B Klassen Balance training system
US8389862B2 (en) 2008-10-07 2013-03-05 Mc10, Inc. Extremely stretchable electronics
US9393418B2 (en) 2011-06-03 2016-07-19 Great Lakes Neuro Technologies Inc. Movement disorder therapy system, devices and methods of tuning
US8647287B2 (en) 2008-12-07 2014-02-11 Andrew Greenberg Wireless synchronized movement monitoring apparatus and system
US9439566B2 (en) 2008-12-15 2016-09-13 Proteus Digital Health, Inc. Re-wearable wireless device
US7870211B2 (en) 2008-12-23 2011-01-11 At&T Mobility Ii Llc Conversation message routing supporting dynamic class transitions
WO2010086033A1 (en) 2009-01-30 2010-08-05 Interuniversitair Microelektronica Centrum Vzw Stretchable electronic device
PL2392196T3 (en) 2009-01-30 2019-05-31 Imec Vzw Stretchable electronic device
US8444564B2 (en) 2009-02-02 2013-05-21 Jointvue, Llc Noninvasive diagnostic system
USD661613S1 (en) 2009-02-19 2012-06-12 1922 Manifatture Preziose Torino SpA Jewelry
US20100228487A1 (en) 2009-03-05 2010-09-09 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Postural information system and method
US20100249635A1 (en) 2009-03-26 2010-09-30 Cordial Medical Europe B.V. Hearing screening system for a subject or a patient, and a method for hearing screening
EP2236078A1 (en) 2009-04-02 2010-10-06 Koninklijke Philips Electronics N.V. Processing a bio-physiological signal
US8764676B2 (en) 2009-05-07 2014-07-01 Massachusetts Eye & Ear Infirmary Signal processing in physiological noise
WO2010131267A1 (en) 2009-05-15 2010-11-18 Nox Medical System and methods using flexible capacitive electrodes for measuring biosignals
US8376968B2 (en) 2009-05-15 2013-02-19 The Hong Kong Polytechnic University Method and system for quantifying an intention of movement of a user
US8856691B2 (en) 2009-05-29 2014-10-07 Microsoft Corporation Gesture tool
US20100315266A1 (en) 2009-06-15 2010-12-16 Microsoft Corporation Predictive interfaces with usability constraints
US8427977B2 (en) 2009-06-23 2013-04-23 CSC Holdings, LLC Wireless network polling and data warehousing
SG176603A1 (en) 2009-06-26 2012-01-30 Widex As Eeg monitoring apparatus and method for presenting messages therein
US8883287B2 (en) 2009-06-29 2014-11-11 Infinite Corridor Technology, Llc Structured material substrates for flexible, stretchable electronics
WO2011007569A1 (en) 2009-07-15 2011-01-20 国立大学法人筑波大学 Classification estimating system and classification estimating program
US9687168B2 (en) 2009-07-30 2017-06-27 University Of Cape Town Non-invasive deep muscle electromyography
US8718980B2 (en) 2009-09-11 2014-05-06 Qualcomm Incorporated Method and apparatus for artifacts mitigation with multiple wireless sensors
FR2950713A1 (en) 2009-09-29 2011-04-01 Movea Sa SYSTEM AND METHOD FOR RECOGNIZING GESTURES
US20110077484A1 (en) 2009-09-30 2011-03-31 Nellcor Puritan Bennett Ireland Systems And Methods For Identifying Non-Corrupted Signal Segments For Use In Determining Physiological Parameters
US9779267B2 (en) 2009-10-07 2017-10-03 F-Secure Oyj Computer security method and apparatus
TWI496558B (en) 2009-10-20 2015-08-21 Tatung Co System and method for measuring ekg and breath signals by using two polar electrodes
US9615767B2 (en) 2009-10-26 2017-04-11 Impedimed Limited Fluid level indicator determination
US20110119216A1 (en) 2009-11-16 2011-05-19 Microsoft Corporation Natural input trainer for gestural instruction
US20110270135A1 (en) 2009-11-30 2011-11-03 Christopher John Dooley Augmented reality for testing and training of human performance
US8421634B2 (en) 2009-12-04 2013-04-16 Microsoft Corporation Sensing mechanical energy to appropriate the body for data input
KR101708682B1 (en) 2010-03-03 2017-02-21 엘지전자 주식회사 Apparatus for displaying image and and method for operationg the same
US8620361B2 (en) 2009-12-08 2013-12-31 At&T Mobility Ii Llc Intelligent routing of SMS and MMS messages to short codes
WO2011070554A2 (en) 2009-12-13 2011-06-16 Ringbow Ltd. Finger-worn input devices and methods of use
EP2512331A4 (en) 2009-12-16 2015-01-14 Ictalcare As A system for the prediction of epileptic seizures
US20110151974A1 (en) 2009-12-18 2011-06-23 Microsoft Corporation Gesture style recognition and reward
US8631355B2 (en) 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US9268404B2 (en) 2010-01-08 2016-02-23 Microsoft Technology Licensing, Llc Application gesture interpretation
JP5471490B2 (en) 2010-01-20 2014-04-16 オムロンヘルスケア株式会社 Body motion detection device
SG182687A1 (en) 2010-02-01 2012-08-30 Widex As Portable eeg monitor system with wireless communication
US8947455B2 (en) 2010-02-22 2015-02-03 Nike, Inc. Augmented reality design system
WO2011104933A1 (en) 2010-02-25 2011-09-01 シャープ株式会社 Bias circuit, lna, lnb, receiver for communication, transmitter for communication, and sensor system
US20110213278A1 (en) 2010-02-26 2011-09-01 Apdm, Inc. Movement monitoring system and apparatus for objective assessment of movement disorders
WO2011106797A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
US8591411B2 (en) 2010-03-10 2013-11-26 Sotera Wireless, Inc. Body-worn vital sign monitor
US9268990B2 (en) 2010-03-16 2016-02-23 Carlo Trugenberger Apparatus and method for producing an identification device
KR101114870B1 (en) 2010-03-22 2012-03-06 (주)헤리트 Intelligent led lighting control system and control method thereof
US9341659B2 (en) 2010-04-08 2016-05-17 Disney Enterprises, Inc. User interactive living organisms
US20110248914A1 (en) 2010-04-11 2011-10-13 Sherr Alan B System and Method for Virtual Touch Typing
US9110505B2 (en) 2010-04-16 2015-08-18 Innovative Devices Inc. Wearable motion sensing computing interface
JP5702947B2 (en) 2010-04-22 2015-04-15 矢崎総業株式会社 Wiring material
US8384683B2 (en) 2010-04-23 2013-02-26 Tong Luo Method for user input from the back panel of a handheld computerized device
US8351651B2 (en) 2010-04-26 2013-01-08 Microsoft Corporation Hand-location post-process refinement in a tracking system
KR101130697B1 (en) 2010-05-07 2012-04-02 삼성전자주식회사 Multilayer stretchable cable
US8588884B2 (en) 2010-05-28 2013-11-19 Emkinetics, Inc. Microneedle electrode
WO2011150407A2 (en) 2010-05-28 2011-12-01 The Regents Of The University Of California Cell-phone based wireless and mobile brain-machine interface
US20110313762A1 (en) 2010-06-20 2011-12-22 International Business Machines Corporation Speech output with confidence indication
USD643428S1 (en) 2010-07-07 2011-08-16 Iota, Inc. Wearable mobile accessory
US8754862B2 (en) 2010-07-11 2014-06-17 Lester F. Ludwig Sequential classification recognition of gesture primitives and window-based parameter smoothing for high dimensional touchpad (HDTP) user interfaces
FR2962821B1 (en) 2010-07-13 2013-02-22 Commissariat Energie Atomique METHOD AND SYSTEM FOR CLASSIFYING NEURAL SIGNALS, AND METHOD FOR SELECTING ELECTRODES FOR DIRECT NEURONAL CONTROL.
WO2012018821A2 (en) 2010-08-02 2012-02-09 The Johns Hopkins University Method for presenting force sensor information using cooperative robot control and audio feedback
US20120066163A1 (en) 2010-09-13 2012-03-15 Nottingham Trent University Time to event data analysis method and system
WO2012040390A2 (en) 2010-09-21 2012-03-29 Somaxis Incorporated Methods for assessing and optimizing muscular performance
US10216893B2 (en) 2010-09-30 2019-02-26 Fitbit, Inc. Multimode sensor devices
US20120117514A1 (en) 2010-11-04 2012-05-10 Microsoft Corporation Three-Dimensional User Interaction
US20130123656A1 (en) 2010-11-15 2013-05-16 Sandy L. Heck Control System and Apparatus Utilizing Signals Originating in the Periauricular Neuromuscular System
US10244988B2 (en) 2010-12-16 2019-04-02 Nokia Technologies Oy Method, apparatus and computer program of using a bio-signal profile
US20120203076A1 (en) 2011-02-08 2012-08-09 Jean Pierre Fatta Portable Physiological Data Monitoring Device
KR101448106B1 (en) 2011-02-17 2014-10-08 주식회사 라이프사이언스테크놀로지 Analisys Method of Rehabilitation status using Electromyogram
KR101206280B1 (en) 2011-02-28 2012-11-29 (주)락싸 Electric contactless electric potential sensor circuit
USD646192S1 (en) 2011-03-11 2011-10-04 Jamie Renae Woode Bracelet
US20120265090A1 (en) 2011-04-13 2012-10-18 Fink Rainer J System and method of acquiring uterine emg signals and wirelessly transmitting the same
WO2012155157A1 (en) 2011-05-06 2012-11-15 Azoteq (Pty) Ltd Multiple media capacitive sensor
CN103764021B (en) 2011-05-20 2015-11-25 南洋理工大学 A kind ofly to repair for collaborative neuro-physiological and/or system, instrument, the apparatus and method of functional promotion
US9330499B2 (en) 2011-05-20 2016-05-03 Microsoft Technology Licensing, Llc Event augmentation with real-time information
US8203502B1 (en) 2011-05-25 2012-06-19 Google Inc. Wearable heads-up display with integrated finger-tracking input sensor
US9018532B2 (en) 2011-06-09 2015-04-28 Multi-Fineline Electronix, Inc. Stretchable circuit assemblies
US20130198694A1 (en) 2011-06-10 2013-08-01 Aliphcom Determinative processes for wearable devices
US9921641B1 (en) 2011-06-10 2018-03-20 Amazon Technologies, Inc. User/object interactions in an augmented reality environment
US8879276B2 (en) 2011-06-15 2014-11-04 Power Gold LLC Flexible circuit assembly and method thereof
EP2721582A4 (en) 2011-06-20 2015-03-25 Nokia Corp Methods, apparatuses and computer program products for performing accurate pose estimation of objects
US9089270B2 (en) 2011-06-29 2015-07-28 Lg Electronics Inc. Terminal and control method thereof
US9128521B2 (en) 2011-07-13 2015-09-08 Lumo Bodytech, Inc. System and method of biomechanical posture detection and feedback including sensor normalization
US8179604B1 (en) 2011-07-13 2012-05-15 Google Inc. Wearable marker for passive interaction
US8832233B1 (en) 2011-07-20 2014-09-09 Google Inc. Experience sharing for conveying communication status
US9123155B2 (en) 2011-08-09 2015-09-01 Covidien Lp Apparatus and method for using augmented reality vision system in surgical procedures
WO2013029196A1 (en) 2011-08-26 2013-03-07 国立云林科技大学 Feedback-control wearable upper-limb electrical stimulation device
USD654622S1 (en) 2011-09-15 2012-02-21 Shih-Ling Hsu Hair band
JP2014531662A (en) * 2011-09-19 2014-11-27 アイサイト モバイル テクノロジーズ リミテッド Touch-free interface for augmented reality systems
US20130080794A1 (en) 2011-09-24 2013-03-28 Jawbone Industrial Co., Ltd. Wireless controller with universal serial bus and system having the same
US20130077820A1 (en) 2011-09-26 2013-03-28 Microsoft Corporation Machine learning gesture detection
US20130271292A1 (en) 2011-10-09 2013-10-17 James Andrew McDermott Driver Alert and Monitoring System
FR2981561B1 (en) 2011-10-21 2015-03-20 Commissariat Energie Atomique Method for detecting motion sensor activity, corresponding device and computer program
US8467270B2 (en) 2011-10-26 2013-06-18 Google Inc. Smart-watch with user interface features
US20130106686A1 (en) 2011-10-31 2013-05-02 Broadcom Corporation Gesture processing framework
ITTO20111024A1 (en) 2011-11-08 2013-05-09 Bitron Spa Measurement device for high-resolution electromyographic signals and high number of channels
WO2013071285A1 (en) 2011-11-11 2013-05-16 Rutgers, The State University Of New Jersey Methods for the diagnosis and treatment of neurological disorders
US8704882B2 (en) 2011-11-18 2014-04-22 L-3 Communications Corporation Simulated head mounted display system and method
US9152376B2 (en) 2011-12-01 2015-10-06 At&T Intellectual Property I, L.P. System and method for continuous multimodal speech and gesture interaction
US10430066B2 (en) 2011-12-06 2019-10-01 Nri R&D Patent Licensing, Llc Gesteme (gesture primitive) recognition for advanced touch user interfaces
TWI446896B (en) 2011-12-23 2014-08-01 Ind Tech Res Inst Sensor for acquiring muscle parameters
EP2613223A1 (en) * 2012-01-09 2013-07-10 Softkinetic Software System and method for enhanced gesture-based interaction
US8971023B2 (en) 2012-03-21 2015-03-03 Google Inc. Wearable computing device frame
US20130191741A1 (en) 2012-01-24 2013-07-25 Motorola Mobility, Inc. Methods and Apparatus for Providing Feedback from an Electronic Device
WO2013126798A2 (en) 2012-02-23 2013-08-29 Bio-Signal Group Corp. Shielded multi-channel eeg headset systems and methods
US8970571B1 (en) 2012-03-13 2015-03-03 Google Inc. Apparatus and method for display lighting adjustment
US8922481B1 (en) 2012-03-16 2014-12-30 Google Inc. Content annotation
USD682728S1 (en) 2012-03-27 2013-05-21 Bulgari S.P.A. Ring
ITMI20120494A1 (en) 2012-03-27 2013-09-28 B10Nix S R L Apparatus and method for the acquisition and analysis of muscular activity
JP2013206273A (en) 2012-03-29 2013-10-07 Sony Corp Information processing apparatus, information processing method, and information processing system
US10448161B2 (en) 2012-04-02 2019-10-15 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for gestural manipulation of a sound field
US10130298B2 (en) 2012-04-03 2018-11-20 Carnegie Mellon University Musculoskeletal activity recognition system and method
US9170674B2 (en) 2012-04-09 2015-10-27 Qualcomm Incorporated Gesture-based device control using pressure-sensitive sensors
US8994672B2 (en) 2012-04-09 2015-03-31 Sony Corporation Content transfer via skin input
US9221177B2 (en) 2012-04-18 2015-12-29 Massachusetts Institute Of Technology Neuromuscular model-based sensing and control paradigm for a robotic leg
US20130285916A1 (en) 2012-04-30 2013-10-31 Research In Motion Limited Touchscreen keyboard providing word predictions at locations in association with candidate letters
US20130297460A1 (en) 2012-05-01 2013-11-07 Zambala Lllp System and method for facilitating transactions of a physical product or real life service via an augmented reality environment
US9538940B2 (en) 2012-05-03 2017-01-10 University of Pittsburgh—of the Commonwealth System of Higher Education Intelligent algorithms for tracking three-dimensional skeletal movement from radiographic image sequences
USD716457S1 (en) 2012-05-23 2014-10-28 Neurometrix, Inc. Transcutaneous electrical nerve stimulation device
US9278453B2 (en) 2012-05-25 2016-03-08 California Institute Of Technology Biosleeve human-machine interface
WO2013177592A2 (en) 2012-05-25 2013-11-28 Emotiv Lifesciences, Inc. System and method for providing and aggregating biosignals and action data
US20130332196A1 (en) 2012-06-07 2013-12-12 The Government Of The United States As Represented By The Secretary Of The Army Diabetes Monitoring Using Smart Device
US20150366504A1 (en) 2014-06-20 2015-12-24 Medibotics Llc Electromyographic Clothing
US10921886B2 (en) * 2012-06-14 2021-02-16 Medibotics Llc Circumferential array of electromyographic (EMG) sensors
US9891718B2 (en) 2015-04-22 2018-02-13 Medibotics Llc Devices for measuring finger motion and recognizing hand gestures
US9536449B2 (en) 2013-05-23 2017-01-03 Medibotics Llc Smart watch and food utensil for monitoring food consumption
US9582072B2 (en) 2013-09-17 2017-02-28 Medibotics Llc Motion recognition clothing [TM] with flexible electromagnetic, light, or sonic energy pathways
US9814426B2 (en) 2012-06-14 2017-11-14 Medibotics Llc Mobile wearable electromagnetic brain activity monitor
US8954135B2 (en) 2012-06-22 2015-02-10 Fitbit, Inc. Portable biometric monitoring devices and methods of operating same
US9615447B2 (en) 2012-07-23 2017-04-04 Zhuhai Advanced Chip Carriers & Electronic Substrate Solutions Technologies Co. Ltd. Multilayer electronic support structure with integral constructional elements
US8484022B1 (en) 2012-07-27 2013-07-09 Google Inc. Adaptive auto-encoders
EP2698686B1 (en) 2012-07-27 2018-10-10 LG Electronics Inc. Wrist-wearable terminal and control method thereof
US20150182165A1 (en) 2012-08-03 2015-07-02 Neurotopia, Inc. Neurophysiological training headset
US20140045547A1 (en) 2012-08-10 2014-02-13 Silverplus, Inc. Wearable Communication Device and User Interface
USD689862S1 (en) 2012-08-15 2013-09-17 Kye Systems Corp. Ring mouse
US20140049417A1 (en) 2012-08-20 2014-02-20 Playtabase, LLC Wireless motion activated command transfer device, system, and method
EP2893388B1 (en) 2012-09-03 2016-08-03 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted system
US8895865B2 (en) 2012-09-07 2014-11-25 Conor P. Lenahan Conductive connections allowing XYZ translation
US9211417B2 (en) 2012-09-10 2015-12-15 Great Lakes Neurotechnologies Inc Movement disorder therapy system, devices and methods, and intelligent methods of tuning
US20170277282A1 (en) 2012-09-14 2017-09-28 Widevantage Inc. Input device for transmitting user input
US10606353B2 (en) 2012-09-14 2020-03-31 Interaxon Inc. Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data
US10824310B2 (en) 2012-12-20 2020-11-03 Sri International Augmented reality virtual personal assistant for external representation
CN203252647U (en) 2012-09-29 2013-10-30 艾利佛公司 Wearable device for judging physiological features
US10234941B2 (en) 2012-10-04 2019-03-19 Microsoft Technology Licensing, Llc Wearable sensor for tracking articulated body-parts
US10413251B2 (en) 2012-10-07 2019-09-17 Rhythm Diagnostic Systems, Inc. Wearable cardiac monitor
US9563740B2 (en) 2012-10-16 2017-02-07 The Florida International University Board Of Trustees Neural interface activity simulator
USD695454S1 (en) 2012-10-24 2013-12-10 Patricia Lynn Moore Hair holding device
EP2911576B1 (en) 2012-10-26 2021-12-22 NIKE Innovate C.V. Athletic performance monitoring system utilizing heart rate information
KR102043703B1 (en) 2012-11-12 2019-11-12 한국전자통신연구원 method for manufacturing stretchable thin film transistor
US9477313B2 (en) 2012-11-20 2016-10-25 Samsung Electronics Co., Ltd. User gesture input to wearable electronic device involving outward-facing sensor of device
US9182826B2 (en) 2012-11-21 2015-11-10 Intel Corporation Gesture-augmented speech recognition
US8743052B1 (en) 2012-11-24 2014-06-03 Eric Jeffrey Keller Computing interface system
US9892655B2 (en) 2012-11-28 2018-02-13 Judy Sibille SNOW Method to provide feedback to a physical therapy patient or athlete
US9351653B1 (en) 2012-11-29 2016-05-31 Intan Technologies, LLC Multi-channel reconfigurable systems and methods for sensing biopotential signals
US10009644B2 (en) 2012-12-04 2018-06-26 Interaxon Inc System and method for enhancing content using brain-state data
US9042829B2 (en) 2013-01-04 2015-05-26 Nokia Corporation Method, apparatus, and computer program product for wireless short-range communication
US20140196131A1 (en) 2013-01-07 2014-07-10 Salutron, Inc. User authentication based on a wrist vein pattern
US10528135B2 (en) 2013-01-14 2020-01-07 Ctrl-Labs Corporation Wearable muscle interface systems, devices and methods that interact with content displayed on an electronic display
US9092664B2 (en) 2013-01-14 2015-07-28 Qualcomm Incorporated Use of EMG for subtle gesture recognition on surfaces
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
US10610737B1 (en) 2013-01-22 2020-04-07 Bruce Scott Crawford System and method for using video-synchronized electromyography to improve neuromuscular performance of a target muscle
US9791921B2 (en) 2013-02-19 2017-10-17 Microsoft Technology Licensing, Llc Context-aware augmented reality object commands
US9211073B2 (en) 2013-02-20 2015-12-15 Tosense, Inc. Necklace-shaped physiological monitor
US9299248B2 (en) * 2013-02-22 2016-03-29 Thalmic Labs Inc. Method and apparatus for analyzing capacitive EMG and IMU sensor signals for gesture control
US20140245200A1 (en) 2013-02-25 2014-08-28 Leap Motion, Inc. Display control with gesture-selectable control paradigms
US20140249397A1 (en) 2013-03-01 2014-09-04 Thalmic Labs Inc. Differential non-contact biopotential sensor
US20140257141A1 (en) 2013-03-05 2014-09-11 Great Lakes Neurotechnologies Inc. Movement disorder monitoring and symptom quantification system and method
US20150182113A1 (en) 2013-12-31 2015-07-02 Aliphcom Real-time fatigue, personal effectiveness, injury risk device(s)
US20140334653A1 (en) 2013-03-14 2014-11-13 Aliphcom Combination speaker and light source responsive to state(s) of an organism based on sensor data
US20140285326A1 (en) 2013-03-15 2014-09-25 Aliphcom Combination speaker and light source responsive to state(s) of an organism based on sensor data
US9436287B2 (en) 2013-03-15 2016-09-06 Qualcomm Incorporated Systems and methods for switching processing modes using gestures
US9495389B2 (en) 2013-03-15 2016-11-15 Qualcomm Incorporated Client-server based dynamic search
US20140277622A1 (en) * 2013-03-15 2014-09-18 First Principles, Inc. System and method for bio-signal control of an electronic device
US9766709B2 (en) 2013-03-15 2017-09-19 Leap Motion, Inc. Dynamic user interactions for display control
US9146730B2 (en) 2013-03-15 2015-09-29 Netgear, Inc. System and method for remotely updating cable modem software
US9361411B2 (en) 2013-03-15 2016-06-07 Honeywell International, Inc. System and method for selecting a respirator
JP5900393B2 (en) 2013-03-21 2016-04-06 ソニー株式会社 Information processing apparatus, operation control method, and program
IN2013MU01148A (en) 2013-03-26 2015-04-24 Tata Consultancy Services Ltd
US20140299362A1 (en) 2013-04-04 2014-10-09 Electronics And Telecommunications Research Institute Stretchable electric device and manufacturing method thereof
US10620709B2 (en) 2013-04-05 2020-04-14 Ultrahaptics IP Two Limited Customized gesture interpretation
US9717440B2 (en) 2013-05-03 2017-08-01 The Florida International University Board Of Trustees Systems and methods for decoding intended motor commands from recorded neural signals for the control of external devices or to interact in virtual environments
KR102043200B1 (en) 2013-05-07 2019-11-11 엘지전자 주식회사 Smart watch and method for controlling thereof
US9582317B2 (en) 2013-05-10 2017-02-28 Samsung Electronics Co., Ltd. Method of using use log of portable terminal and apparatus using the same
US10152082B2 (en) 2013-05-13 2018-12-11 North Inc. Systems, articles and methods for wearable electronic devices that accommodate different user forms
US9706647B2 (en) 2013-05-14 2017-07-11 Mc10, Inc. Conformal electronics including nested serpentine interconnects
US10314506B2 (en) 2013-05-15 2019-06-11 Polar Electro Oy Heart activity sensor structure
USD741855S1 (en) 2013-05-16 2015-10-27 Samsung Electronics Co., Ltd. Smart watch
USD750623S1 (en) 2013-05-16 2016-03-01 Samsung Electronics Co., Ltd. Smart watch
US10620775B2 (en) 2013-05-17 2020-04-14 Ultrahaptics IP Two Limited Dynamic interactive objects
US9904356B2 (en) 2013-05-28 2018-02-27 The Boeing Company Tracking a user to support tasks performed on complex-system components
US9645652B2 (en) 2013-05-28 2017-05-09 The Boeing Company Ubiquitous natural user system for human-machine interaction
US9395810B2 (en) 2013-05-28 2016-07-19 The Boeing Company Ubiquitous natural user system
US9218574B2 (en) 2013-05-29 2015-12-22 Purepredictive, Inc. User interface for machine learning
KR20160016925A (en) 2013-05-31 2016-02-15 프레지던트 앤드 펠로우즈 오브 하바드 칼리지 Soft exosuit for assistance with human motion
US9383819B2 (en) 2013-06-03 2016-07-05 Daqri, Llc Manipulation of virtual object in augmented reality via intent
US20150019135A1 (en) 2013-06-03 2015-01-15 Mc10, Inc. Motion sensor and analysis
KR101933921B1 (en) 2013-06-03 2018-12-31 삼성전자주식회사 Method and apparatus for estimating pose
WO2015047462A2 (en) 2013-06-03 2015-04-02 The Regents Of The University Of California Artifact removal techniques with signal reconstruction
US11083402B2 (en) 2013-06-04 2021-08-10 Medtronic, Inc. Patient state determination based on one or more spectral characteristics of a bioelectrical brain signal
AU2014274726B2 (en) * 2013-06-06 2018-07-19 Tricord Holdings, L.L.C. Modular physiologic monitoring systems, kits, and methods
KR101501661B1 (en) 2013-06-10 2015-03-12 한국과학기술연구원 Wearable electromyogram sensor system
WO2014204330A1 (en) 2013-06-17 2014-12-24 3Divi Company Methods and systems for determining 6DOF location and orientation of head-mounted display and associated user movements
KR102131358B1 (en) 2013-06-17 2020-07-07 삼성전자주식회사 User interface device and method of operation of user interface device
KR102170321B1 (en) * 2013-06-17 2020-10-26 삼성전자주식회사 System, method and device to recognize motion using gripped object
CN105357997A (en) 2013-06-21 2016-02-24 Mc10股份有限公司 Band with conformable electronics
US20140376773A1 (en) 2013-06-21 2014-12-25 Leap Motion, Inc. Tunable operational parameters in motion-capture and touchless interface operation
US10402517B2 (en) 2013-06-26 2019-09-03 Dassault Systémes Simulia Corp. Musculo-skeletal modeling using finite element analysis, process integration, and design optimization
US9146618B2 (en) 2013-06-28 2015-09-29 Google Inc. Unlocking a head mounted device
US10295338B2 (en) 2013-07-12 2019-05-21 Magic Leap, Inc. Method and system for generating map data from an image
US9408316B2 (en) 2013-07-22 2016-08-02 Thalmic Labs Inc. Systems, articles and methods for strain mitigation in wearable electronic devices
US20150029092A1 (en) 2013-07-23 2015-01-29 Leap Motion, Inc. Systems and methods of interpreting complex gestures
US11426123B2 (en) 2013-08-16 2022-08-30 Meta Platforms Technologies, Llc Systems, articles and methods for signal routing in wearable electronic devices that detect muscle activity of a user using a set of discrete and separately enclosed pod structures
US20150124566A1 (en) 2013-10-04 2015-05-07 Thalmic Labs Inc. Systems, articles and methods for wearable electronic devices employing contact sensors
US10042422B2 (en) 2013-11-12 2018-08-07 Thalmic Labs Inc. Systems, articles, and methods for capacitive electromyography sensors
US10188309B2 (en) 2013-11-27 2019-01-29 North Inc. Systems, articles, and methods for electromyography sensors
CA2921954A1 (en) 2013-08-23 2015-02-26 Thalmic Labs Inc. Systems, articles, and methods for human-electronics interfaces
US9788789B2 (en) 2013-08-30 2017-10-17 Thalmic Labs Inc. Systems, articles, and methods for stretchable printed circuit boards
KR102263496B1 (en) 2013-09-04 2021-06-10 에씰로 앙터나시오날 Navigation method based on a see-through head-mounted device
US9372535B2 (en) 2013-09-06 2016-06-21 Thalmic Labs Inc. Systems, articles, and methods for electromyography-based human-electronics interfaces
CN105309056B (en) 2013-09-06 2018-04-03 株式会社村田制作所 Multilayer substrate
US9483123B2 (en) 2013-09-23 2016-11-01 Thalmic Labs Inc. Systems, articles, and methods for gesture identification in wearable electromyography devices
WO2015044851A2 (en) 2013-09-25 2015-04-02 Mindmaze Sa Physiological parameter measurement and feedback system
US10048761B2 (en) 2013-09-30 2018-08-14 Qualcomm Incorporated Classification of gesture detection systems through use of known and yet to be worn sensors
US10405786B2 (en) 2013-10-09 2019-09-10 Nedim T. SAHIN Systems, environment and methods for evaluation and management of autism spectrum disorder using a wearable data collection device
US10422810B2 (en) 2013-10-14 2019-09-24 Nike, Inc. Calculating pace and energy expenditure from athletic movement attributes
US9389694B2 (en) 2013-10-22 2016-07-12 Thalmic Labs Inc. Systems, articles, and methods for gesture identification in wearable electromyography devices
CN103777752A (en) 2013-11-02 2014-05-07 上海威璞电子科技有限公司 Gesture recognition device based on arm muscle current detection and motion sensor
GB2519987B (en) 2013-11-04 2021-03-03 Imperial College Innovations Ltd Biomechanical activity monitoring
US9594433B2 (en) 2013-11-05 2017-03-14 At&T Intellectual Property I, L.P. Gesture-based controls via bone conduction
EP3068349A4 (en) 2013-11-13 2017-12-13 Hrl Laboratories, Llc System for controlling brain machine interfaces and neural prosthetic systems
US10676083B1 (en) 2013-11-13 2020-06-09 Hrl Laboratories, Llc System and method for prediction of occupant motor response in accidents
US20150157944A1 (en) 2013-12-06 2015-06-11 Glenn I. Gottlieb Software Application for Generating a Virtual Simulation for a Sport-Related Activity
US9367086B2 (en) 2013-12-10 2016-06-14 Atmel Corporation Smart watch with adaptive touch screen
US9367139B2 (en) 2013-12-12 2016-06-14 Thalmic Labs Inc. Systems, articles, and methods for gesture identification in wearable electromyography devices
US9971412B2 (en) 2013-12-20 2018-05-15 Lenovo (Singapore) Pte. Ltd. Enabling device features according to gesture input
WO2015100172A1 (en) 2013-12-27 2015-07-02 Kopin Corporation Text editing with gesture control and natural speech
USD751065S1 (en) 2013-12-28 2016-03-08 Intel Corporation Wearable computing device
KR20150077684A (en) 2013-12-30 2015-07-08 삼성전자주식회사 Function Operating Method based on Biological Signals and Electronic Device supporting the same
US20150182130A1 (en) 2013-12-31 2015-07-02 Aliphcom True resting heart rate
US9524580B2 (en) 2014-01-06 2016-12-20 Oculus Vr, Llc Calibration of virtual reality systems
US9659403B1 (en) 2014-01-06 2017-05-23 Leap Motion, Inc. Initializing orientation in space for predictive information for free space gesture control and communication
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
CN106102504A (en) 2014-02-14 2016-11-09 赛尔米克实验室公司 Systems, articles and methods for elastic electrical cables, and wearable electronic devices employing elastic electrical cables
TWI596994B (en) 2014-02-20 2017-08-21 Tear-resistant structure of the flexible circuit board
US20150242009A1 (en) 2014-02-26 2015-08-27 Qeexo, Co. Using Capacitive Images for Touch Type Classification
US10254843B2 (en) 2014-02-28 2019-04-09 Vikas Gupta Gesture operated wrist mounted camera system
USD751068S1 (en) 2014-03-07 2016-03-08 Sony Mobile Communications Ab Display portion of watch shaped communications equipment
US10613642B2 (en) 2014-03-12 2020-04-07 Microsoft Technology Licensing, Llc Gesture parameter tuning
US9649558B2 (en) 2014-03-14 2017-05-16 Sony Interactive Entertainment Inc. Gaming device with rotatably placed cameras
US20150261306A1 (en) 2014-03-17 2015-09-17 Thalmic Labs Inc. Systems, devices, and methods for selecting between multiple wireless connections
USD736664S1 (en) 2014-03-18 2015-08-18 Google Technology Holdings LLC Wrist band for an electronic device
US10520378B1 (en) 2014-03-20 2019-12-31 Invent.Ly, Llc Wearable user input device and sensors system to detect injury
US10575760B2 (en) 2014-03-26 2020-03-03 GestureLogic Inc. Systems, methods and devices for activity recognition
US10199008B2 (en) 2014-03-27 2019-02-05 North Inc. Systems, devices, and methods for wearable electronic devices as state machines
US10409382B2 (en) 2014-04-03 2019-09-10 Honda Motor Co., Ltd. Smart tutorial for gesture control system
TWD165563S (en) 2014-04-08 2015-01-21 緯創資通股份有限公司 Portion of wearable electronic device
US20150296553A1 (en) 2014-04-11 2015-10-15 Thalmic Labs Inc. Systems, devices, and methods that establish proximity-based wireless connections
US9149938B1 (en) 2014-04-11 2015-10-06 Harris Corporation Robotic exoskeleton with adaptive viscous user coupling
US9858391B2 (en) 2014-04-17 2018-01-02 The Boeing Company Method and system for tuning a musculoskeletal model
US9402582B1 (en) 2014-04-21 2016-08-02 Verily Life Sciences Llc Smart surgical glove
US10845982B2 (en) 2014-04-28 2020-11-24 Facebook, Inc. Providing intelligent transcriptions of sound messages in a messaging application
WO2015164951A1 (en) 2014-05-01 2015-11-05 Abbas Mohamad Methods and systems relating to personalized evolving avatars
US20150323998A1 (en) * 2014-05-06 2015-11-12 Qualcomm Incorporated Enhanced user interface for a wearable electronic device
US20150325202A1 (en) 2014-05-07 2015-11-12 Thalmic Labs Inc. Systems, devices, and methods for wearable computers with heads-up displays
AU2015259540B2 (en) 2014-05-12 2018-11-08 Commscope Technologies Llc Remote radio heads having wireless jumper connections and related equipment, systems and methods
US9785247B1 (en) 2014-05-14 2017-10-10 Leap Motion, Inc. Systems and methods of tracking moving hands and recognizing gestural interactions
USD717685S1 (en) 2014-05-15 2014-11-18 Thalmic Labs Inc. Expandable armband
USD756359S1 (en) 2014-05-15 2016-05-17 Thalmic Labs Inc. Expandable armband device
KR101666399B1 (en) 2014-05-15 2016-10-14 한국과학기술연구원 Human joint kinematics information extraction method from multi-channel surface electromyogram signals, recording medium and device for performing the method
US9741169B1 (en) 2014-05-20 2017-08-22 Leap Motion, Inc. Wearable augmented reality devices with object detection and tracking
US10782657B2 (en) 2014-05-27 2020-09-22 Ultrahaptics IP Two Limited Systems and methods of gestural interaction in a pervasive computing environment
CN105320248B (en) 2014-06-03 2018-12-07 深圳Tcl新技术有限公司 Aerial gesture input method and device
US9329694B2 (en) 2014-06-06 2016-05-03 Google Technology Holdings LLC Preemptive machine learning-based gesture recognition
US9977505B2 (en) 2014-06-06 2018-05-22 International Business Machines Corporation Controlling inadvertent inputs to a mobile device
CN112651288B (en) 2014-06-14 2022-09-20 奇跃公司 Method and system for generating virtual and augmented reality
JP6362444B2 (en) 2014-06-16 2018-07-25 日本メクトロン株式会社 Flexible printed circuit board and method for manufacturing flexible printed circuit board
US9880632B2 (en) 2014-06-19 2018-01-30 Thalmic Labs Inc. Systems, devices, and methods for gesture identification
WO2015199747A1 (en) 2014-06-23 2015-12-30 Thalmic Labs Inc. Systems, articles, and methods for wearable human-electronics interface devices
US10216274B2 (en) 2014-06-23 2019-02-26 North Inc. Systems, articles, and methods for wearable human-electronics interface devices
US20150379770A1 (en) 2014-06-27 2015-12-31 David C. Haley, JR. Digital action in response to object interaction
US9552069B2 (en) 2014-07-11 2017-01-24 Microsoft Technology Licensing, Llc 3D gesture recognition
US10609267B2 (en) 2014-07-23 2020-03-31 Orcam Technologies Ltd. Systems and methods for analyzing advertisement effectiveness using wearable camera systems
US20160050037A1 (en) 2014-08-12 2016-02-18 Valcom, Inc. Emergency alert notification device, system, and method
US9734704B2 (en) 2014-08-12 2017-08-15 Dominick S. LEE Wireless gauntlet for electronic control
US20160071319A1 (en) 2014-09-09 2016-03-10 Schneider Electric It Corporation Method to use augmented reality to function as HMI display
WO2016041088A1 (en) 2014-09-19 2016-03-24 Sulon Technologies Inc. System and method for tracking wearable peripherals in augmented reality and virtual reality applications
US9811555B2 (en) 2014-09-27 2017-11-07 Intel Corporation Recognition of free-form gestures from orientation tracking of a handheld or wearable device
US10783900B2 (en) 2014-10-03 2020-09-22 Google Llc Convolutional, long short-term memory, fully connected deep neural networks
EP3210096B1 (en) 2014-10-21 2019-05-15 Robert Bosch GmbH Method and system for automation of response selection and composition in dialog systems
US10274992B2 (en) 2014-11-12 2019-04-30 Kyocera Corporation Wearable device with muscle activity detector
US9807221B2 (en) 2014-11-28 2017-10-31 Thalmic Labs Inc. Systems, devices, and methods effected in response to establishing and/or terminating a physical communications link
KR20160071732A (en) 2014-12-12 2016-06-22 삼성전자주식회사 Method and apparatus for processing voice input
WO2016100368A1 (en) 2014-12-16 2016-06-23 Somatix, Inc. Methods and systems for monitoring and influencing gesture-based behaviors
USD743963S1 (en) 2014-12-22 2015-11-24 Osterhout Group, Inc. Air mouse
US9612661B2 (en) 2015-01-02 2017-04-04 Wearable Devices Ltd. Closed loop feedback interface for wearable devices
US9720515B2 (en) 2015-01-02 2017-08-01 Wearable Devices Ltd. Method and apparatus for a gesture controlled interface for wearable devices
US11119565B2 (en) 2015-01-19 2021-09-14 Samsung Electronics Company, Ltd. Optical detection and analysis of bone
US9852546B2 (en) 2015-01-28 2017-12-26 CCP hf. Method and system for receiving gesture input via virtual control objects
US9696795B2 (en) 2015-02-13 2017-07-04 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
US10606341B2 (en) 2015-02-22 2020-03-31 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
US10599216B2 (en) 2015-03-02 2020-03-24 Tap Systems Inc. Arbitrary surface and finger position keyboard
EP3064130A1 (en) 2015-03-02 2016-09-07 MindMaze SA Brain activity measurement and feedback system
US10124210B2 (en) 2015-03-13 2018-11-13 KO Luxembourg SARL Systems and methods for qualitative assessment of sports performance
GB201504362D0 (en) * 2015-03-16 2015-04-29 Elliptic Laboratories As Touchless user interfaces for electronic devices
US20160274758A1 (en) 2015-03-20 2016-09-22 Thalmic Labs Inc. Systems, devices, and methods for mitigating false positives in human-electronics interfaces
US20160282947A1 (en) 2015-03-26 2016-09-29 Lenovo (Singapore) Pte. Ltd. Controlling a wearable device using gestures
KR101927323B1 (en) 2015-04-03 2018-12-10 엘지전자 주식회사 Mobile terminal and method for controlling the same
US10432842B2 (en) 2015-04-06 2019-10-01 The Texas A&M University System Fusion of inertial and depth sensors for movement measurements and recognition
US20170095178A1 (en) 2015-04-09 2017-04-06 Marco Schoen EMG Home Trainer
US20180107275A1 (en) * 2015-04-13 2018-04-19 Empire Technology Development Llc Detecting facial expressions
CN108883335A (en) 2015-04-14 2018-11-23 约翰·詹姆斯·丹尼尔斯 Wearable electronic multi-sensory interfaces for human-to-machine or human-to-human interaction
US20160309249A1 (en) 2015-04-16 2016-10-20 Hong Fu Jin Precision Industry (Wuhan) Co., Ltd. Wearable electronic device
KR102063895B1 (en) 2015-04-20 2020-01-08 삼성전자주식회사 Master device, slave device and control method thereof
US9804733B2 (en) * 2015-04-21 2017-10-31 Dell Products L.P. Dynamic cursor focus in a multi-display information handling system environment
US10175492B2 (en) 2015-04-24 2019-01-08 Eon Reality, Inc. Systems and methods for transition between augmented reality and virtual reality
US10078435B2 (en) 2015-04-24 2018-09-18 Thalmic Labs Inc. Systems, methods, and computer program products for interacting with electronically displayed presentation materials
US9972133B2 (en) 2015-04-24 2018-05-15 Jpw Industries Inc. Wearable display for use with tool
GB2537899B (en) 2015-04-30 2018-02-21 Hy5Pro As Control of digits for artificial hand
US9654477B1 (en) 2015-05-05 2017-05-16 Wells Fargo Bank, N. A. Adaptive authentication
US9740310B2 (en) 2015-05-22 2017-08-22 Adobe Systems Incorporated Intuitive control of pressure-sensitive stroke attributes
US9898864B2 (en) 2015-05-28 2018-02-20 Microsoft Technology Licensing, Llc Shared tactile interaction and user safety in shared space multi-person immersive virtual reality
ES2940824T3 (en) 2015-06-02 2023-05-11 Battelle Memorial Institute Systems for the formation of neural bridges of the central nervous system
EP4134125A1 (en) 2015-06-02 2023-02-15 Battelle Memorial Institute Neural sleeve for neuromuscular stimulation, sensing and recording
EP3302691B1 (en) 2015-06-02 2019-07-24 Battelle Memorial Institute Non-invasive motor impairment rehabilitation system
US10813559B2 (en) 2015-06-14 2020-10-27 Facense Ltd. Detecting respiratory tract infection based on changes in coughing sounds
US11589814B2 (en) 2015-06-26 2023-02-28 Carnegie Mellon University System for wearable, low-cost electrical impedance tomography for non-invasive gesture recognition
US9240069B1 (en) 2015-06-30 2016-01-19 Ariadne's Thread (Usa), Inc. Low-latency virtual reality display system
WO2017007518A1 (en) 2015-07-07 2017-01-12 Obma Padraic R Noninvasive medical monitoring device, system and method
EP3329484A4 (en) 2015-07-29 2019-06-05 Sensel Inc. Systems and methods for manipulating a virtual environment
KR101626748B1 (en) 2015-08-03 2016-06-14 숭실대학교산학협력단 Apparatus for measuring movement pattern using brainwave and electromyogram and Method thereof
US10854104B2 (en) 2015-08-28 2020-12-01 Icuemotion Llc System for movement skill analysis and skill augmentation and cueing
US10387034B2 (en) * 2015-09-03 2019-08-20 Microsoft Technology Licensing, Llc Modifying captured stroke information into an actionable form
US10348355B2 (en) 2015-09-16 2019-07-09 Intel Corporation Techniques for gesture recognition using photoplethysmographic (PPMG) sensor and low-power wearable gesture recognition device using the same
US20170079828A1 (en) 2015-09-22 2017-03-23 Lim Innovations, Inc. Scoliosis treatment system and method
US9824287B2 (en) 2015-09-29 2017-11-21 Huami Inc. Method, apparatus and system for biometric identification
US10459537B2 (en) 2015-09-30 2019-10-29 Stmicroelectronics, Inc. Encapsulated pressure sensor
US11026628B1 (en) 2015-09-30 2021-06-08 Apple Inc. Systems and methods of spatial filtering for measuring electrical signals
WO2017062544A1 (en) 2015-10-06 2017-04-13 University Of Pittsburgh-Of The Commonwealth System Of Higher Education Method, device and system for sensing neuromuscular, physiological, biomechanical, and musculoskeletal activity
US9881273B2 (en) 2015-10-28 2018-01-30 Disney Enterprises, Inc. Automatic object detection and state estimation via electronic emissions sensing
US11106273B2 (en) 2015-10-30 2021-08-31 Ostendo Technologies, Inc. System and methods for on-body gestural interfaces and projection displays
US10595941B2 (en) 2015-10-30 2020-03-24 Orthosensor Inc. Spine measurement system and method therefor
US10552752B2 (en) 2015-11-02 2020-02-04 Microsoft Technology Licensing, Llc Predictive controller for applications
GB201520367D0 (en) 2015-11-19 2016-01-06 Bespoke Vr Ltd Editing interactive motion capture data for creating the interaction characteristics of non player characters
KR102570068B1 (en) 2015-11-20 2023-08-23 삼성전자주식회사 Gesture recognition method, gesture recognition apparatus, wearable device
KR101739656B1 (en) 2015-11-23 2017-05-24 이경호 Handy-type Breast Cancer Diagnosis Device
US10776712B2 (en) 2015-12-02 2020-09-15 Preferred Networks, Inc. Generative machine learning systems for drug design
CN105511615B (en) 2015-12-04 2019-03-05 深圳大学 Wearable text input system and method based on EMG
US20170188980A1 (en) 2016-01-06 2017-07-06 Empire Technology Development Llc Wearable sensor based body modeling
US11402402B2 (en) 2016-01-12 2022-08-02 Bigmotion Technologies Inc. Systems and methods for human body motion capture
US10973422B2 (en) 2016-01-22 2021-04-13 Fitbit, Inc. Photoplethysmography-based pulse wave analysis using a wearable device
KR102619981B1 (en) 2016-02-02 2024-01-02 삼성전자주식회사 Gesture classification apparatus and method using electromyogram signals
KR102576908B1 (en) 2016-02-16 2023-09-12 삼성전자주식회사 Method and Apparatus for Providing Dynamic Panorama
WO2017143303A1 (en) 2016-02-17 2017-08-24 Meta Company Apparatuses, methods and systems for sharing virtual elements
US20170259167A1 (en) 2016-03-14 2017-09-14 Nathan Sterling Cook Brainwave virtual reality apparatus and method
KR102158743B1 (en) 2016-03-15 2020-09-22 한국전자통신연구원 Data augmentation method for spontaneous speech recognition
US9864434B2 (en) 2016-03-30 2018-01-09 Huami Inc. Gesture control of interactive events using multiple wearable devices
WO2017173386A1 (en) 2016-03-31 2017-10-05 Sensel Inc. Human-computer interface system
US10338686B2 (en) 2016-03-31 2019-07-02 Disney Enterprises, Inc. Control system using aesthetically guided gesture recognition
US10503253B2 (en) 2016-03-31 2019-12-10 Intel Corporation Sensor signal processing to determine finger and/or hand position
US10852835B2 (en) 2016-04-15 2020-12-01 Board Of Regents, The University Of Texas System Systems, apparatuses and methods for controlling prosthetic devices by gestures and other modalities
CN109155303B (en) 2016-04-19 2020-05-22 天工方案公司 Selective shielding of radio frequency modules
US10046229B2 (en) 2016-05-02 2018-08-14 Bao Tran Smart device
US10203751B2 (en) 2016-05-11 2019-02-12 Microsoft Technology Licensing, Llc Continuous motion controls operable using neurological data
US9864431B2 (en) 2016-05-11 2018-01-09 Microsoft Technology Licensing, Llc Changing an application state using neurological data
US10555865B2 (en) 2016-05-25 2020-02-11 Carnegie Mellon University Torque control methods for an exoskeleton device
EP3463550B1 (en) 2016-05-31 2024-03-27 Lab Schöpfergeist AG Nerve stimulation apparatus and method
US10426371B2 (en) 2016-06-07 2019-10-01 Smk Corporation Muscle condition measurement sheet
KR101790147B1 (en) 2016-06-22 2017-10-25 재단법인 실감교류인체감응솔루션연구단 Virtual object control system and method
US10154791B2 (en) 2016-07-01 2018-12-18 L.I.F.E. Corporation S.A. Biometric identification by garments having a plurality of sensors
US10943398B2 (en) 2016-07-15 2021-03-09 Samsung Electronics Co., Ltd. Augmented reality device and operation thereof
US20180018986A1 (en) 2016-07-16 2018-01-18 Ron Zass System and method for measuring length of utterance
WO2018017399A1 (en) 2016-07-20 2018-01-25 Usens, Inc. Method and system for 3d hand skeleton tracking
KR102655669B1 (en) 2016-07-20 2024-04-05 삼성전자주식회사 Apparatus and method for extracting bio-signal feature, apparatus for detecting bio-information
US11635736B2 (en) 2017-10-19 2023-04-25 Meta Platforms Technologies, Llc Systems and methods for identifying biological structures associated with neuromuscular source signals
EP3487595A4 (en) 2016-07-25 2019-12-25 CTRL-Labs Corporation System and method for measuring the movements of articulated rigid bodies
US10687759B2 (en) 2018-05-29 2020-06-23 Facebook Technologies, Llc Shielding techniques for noise reduction in surface electromyography signal measurement and related systems and methods
US20190223748A1 (en) 2018-01-25 2019-07-25 Ctrl-Labs Corporation Methods and apparatus for mitigating neuromuscular signal artifacts
CN110312471B (en) 2016-07-25 2022-04-29 脸谱科技有限责任公司 Adaptive system for deriving control signals from neuromuscular activity measurements
US11216069B2 (en) 2018-05-08 2022-01-04 Facebook Technologies, Llc Systems and methods for improved speech recognition using neuromuscular information
EP3487402B1 (en) 2016-07-25 2021-05-05 Facebook Technologies, LLC Methods and apparatus for inferring user intent based on neuromuscular signals
US20180088675A1 (en) 2016-09-29 2018-03-29 Brian K. Vogel Coordinate system for gesture control
US10765363B2 (en) 2016-09-30 2020-09-08 Cognionics, Inc. Headgear for dry electroencephalogram sensors
US20180095542A1 (en) 2016-09-30 2018-04-05 Sony Interactive Entertainment Inc. Object Holder for Virtual Reality Interaction
US10162422B2 (en) 2016-10-10 2018-12-25 Deere & Company Control of machines through detection of gestures by optical and muscle sensors
WO2018094011A1 (en) 2016-11-16 2018-05-24 Lumo Bodytech, Inc. System and method for personalized exercise training and coaching
WO2018098046A2 (en) 2016-11-25 2018-05-31 Kinaptic, LLC Haptic human machine interface and wearable electronics methods and apparatus
US10646139B2 (en) 2016-12-05 2020-05-12 Intel Corporation Body movement tracking
US10736564B2 (en) 2016-12-16 2020-08-11 Elwha Llc System and method for enhancing learning of a motor task
US20190025919A1 (en) 2017-01-19 2019-01-24 Mindmaze Holding Sa System, method and apparatus for detecting facial expression in an augmented reality system
US10488510B2 (en) 2017-02-02 2019-11-26 Leah La Salla Predictive probable cause system and unmanned vehicles using the same
US10854194B2 (en) 2017-02-10 2020-12-01 Johnson Controls Technology Company Building system with digital twin based data ingestion and processing
GB2561537B (en) 2017-02-27 2022-10-12 Emteq Ltd Optical expression detection
EP3609402A4 (en) 2017-04-14 2020-12-16 Rehabilitation Institute Of Chicago D/B/A Shirley Prosthetic virtual reality training interface and related methods
US10939833B2 (en) 2017-05-01 2021-03-09 Samsung Electronics Company, Ltd. Determining artery location using camera-based sensing
GB201709227D0 (en) 2017-06-09 2017-07-26 Microsoft Technology Licensing Llc A wearable device
CN107203272A (en) * 2017-06-23 2017-09-26 山东万腾电子科技有限公司 Wearable augmented reality task instruction system and method based on myoelectricity cognition technology
US11259746B2 (en) 2017-07-10 2022-03-01 General Electric Company Method and system for neuromuscular transmission measurement
US20190027141A1 (en) 2017-07-21 2019-01-24 Pearson Education, Inc. Systems and methods for virtual reality-based interaction evaluation
US10481699B2 (en) 2017-07-27 2019-11-19 Facebook Technologies, Llc Armband for tracking hand motion using electrical impedance measurement
US11231781B2 (en) 2017-08-03 2022-01-25 Intel Corporation Haptic gloves for virtual reality systems and methods of controlling the same
US11561806B2 (en) 2017-08-04 2023-01-24 Hannes Bendfeldt Adaptive interface for screen-based interactions
KR102408359B1 (en) 2017-08-23 2022-06-14 삼성전자주식회사 Electronic device and method for controlling using the electronic device
RU2678494C1 (en) 2017-08-24 2019-01-29 Самсунг Электроникс Ко., Лтд. Device and method for biometric user identification with RF (radio frequency) radar
US11762474B2 (en) 2017-09-06 2023-09-19 Georgia Tech Research Corporation Systems, methods and devices for gesture recognition
WO2019050651A1 (en) 2017-09-08 2019-03-14 Intel Corporation Low noise front-end for a heart rate monitor using photo-plethysmography
US20190076716A1 (en) 2017-09-12 2019-03-14 Intel Corporation Activity training system
KR102434402B1 (en) 2017-09-19 2022-08-22 한국전자통신연구원 Apparatus and method for providing mixed reality content
US10606620B2 (en) 2017-11-16 2020-03-31 International Business Machines Corporation Notification interaction in a touchscreen user interface
US20190150777A1 (en) 2017-11-17 2019-05-23 Ctrl-Labs Corporation Dual-supply analog circuitry for sensing surface emg signals
WO2019123463A1 (en) 2017-12-20 2019-06-27 The Elegant Monkeys Ltd. Method and system of modelling a mental/ emotional state of a user
US10275689B1 (en) 2017-12-21 2019-04-30 Luminar Technologies, Inc. Object identification and labeling tool for training autonomous vehicle controllers
US20190192024A1 (en) 2017-12-27 2019-06-27 X Development Llc Electroencephalogram system with reconfigurable network of redundant electrodes
US10671164B2 (en) 2017-12-27 2020-06-02 X Development Llc Interface for electroencephalogram for computer control
US10827942B2 (en) 2018-01-03 2020-11-10 Intel Corporation Detecting fatigue based on electroencephalogram (EEG) data
JP6999041B2 (en) 2018-01-09 2022-01-18 セント・ジュード・メディカル,カーディオロジー・ディヴィジョン,インコーポレイテッド Systems and methods for sorting electrophysiological signals on virtual catheters
US10729564B2 (en) 2018-01-12 2020-08-04 Ripple Llc Sensor system
EP3742961A4 (en) 2018-01-25 2021-03-31 Facebook Technologies, Inc. Calibration techniques for handstate representation modeling using neuromuscular signals
US11150730B1 (en) 2019-04-30 2021-10-19 Facebook Technologies, Llc Devices, systems, and methods for controlling computing devices via neuromuscular signals of users
WO2019147949A1 (en) 2018-01-25 2019-08-01 Ctrl-Labs Corporation Real-time processing of handstate representation model estimates
US20200097081A1 (en) 2018-09-20 2020-03-26 Jasmine Stone Neuromuscular control of an augmented reality system
US10817795B2 (en) 2018-01-25 2020-10-27 Facebook Technologies, Llc Handstate reconstruction based on multiple inputs
WO2019147958A1 (en) 2018-01-25 2019-08-01 Ctrl-Labs Corporation User-controlled tuning of handstate representation model parameters
US10970936B2 (en) 2018-10-05 2021-04-06 Facebook Technologies, Llc Use of neuromuscular signals to provide enhanced interactions with physical objects in an augmented reality environment
US10504286B2 (en) 2018-01-25 2019-12-10 Ctrl-Labs Corporation Techniques for anonymizing neuromuscular signal data
EP3743892A4 (en) 2018-01-25 2021-03-24 Facebook Technologies, Inc. Visualization of reconstructed handstate information
US10937414B2 (en) 2018-05-08 2021-03-02 Facebook Technologies, Llc Systems and methods for text input using neuromuscular information
US20190247650A1 (en) 2018-02-14 2019-08-15 Bao Tran Systems and methods for augmenting human muscle controls
US11145096B2 (en) 2018-03-07 2021-10-12 Samsung Electronics Co., Ltd. System and method for augmented reality interaction
US10901508B2 (en) 2018-03-20 2021-01-26 X Development Llc Fused electroencephalogram and machine learning for precognitive brain-computer interface for computer control
WO2019199924A1 (en) 2018-04-10 2019-10-17 Rhythmlink International, Llc Virtual electrode template system for neurological monitoring
US20190324549A1 (en) 2018-04-20 2019-10-24 Immersion Corporation Systems, devices, and methods for providing immersive reality interface modes
CN112424859A (en) 2018-05-08 2021-02-26 脸谱科技有限责任公司 System and method for improving speech recognition using neuromuscular information
US10592001B2 (en) 2018-05-08 2020-03-17 Facebook Technologies, Llc Systems and methods for improved speech recognition using neuromuscular information
WO2019226691A1 (en) 2018-05-22 2019-11-28 Magic Leap, Inc. Transmodal input fusion for a wearable system
EP3801743A4 (en) 2018-05-25 2021-08-04 Facebook Technologies, LLC Methods and apparatus for providing sub-muscular control
US11099647B2 (en) 2018-08-05 2021-08-24 Pison Technology, Inc. User interface control of responsive devices
WO2020047429A1 (en) 2018-08-31 2020-03-05 Ctrl-Labs Corporation Camera-guided interpretation of neuromuscular signals
US11484267B2 (en) 2018-09-11 2022-11-01 Apple Inc. Contact detection for physiological sensor
WO2020061451A1 (en) 2018-09-20 2020-03-26 Ctrl-Labs Corporation Neuromuscular text entry, writing and drawing in augmented reality systems
US11457995B2 (en) 2018-12-27 2022-10-04 Biosense Webster (Israel) Ltd. Accurate balloon computation and visualization
US10905383B2 (en) 2019-02-28 2021-02-02 Facebook Technologies, Llc Methods and apparatus for unsupervised one-shot machine learning for classification of human gestures and estimation of applied forces
TWI689859B (en) 2019-03-19 2020-04-01 國立臺灣科技大學 System for recognizing user gestures according to mechanomyogram detected from user's wrist and method thereof
KR102278069B1 (en) 2019-10-22 2021-07-14 조선대학교산학협력단 EMG-based user authentication device and authentication method
CN211090137U (en) 2020-02-24 2020-07-24 京东方科技集团股份有限公司 Display device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170067873A (en) * 2014-10-16 2017-06-16 후아웨이 테크놀러지 컴퍼니 리미티드 Method, device, and system for processing touch interaction
US20180024635A1 (en) * 2016-07-25 2018-01-25 Patrick Kaifosh Methods and apparatus for predicting musculo-skeletal position information using wearable autonomous sensors
US20180093181A1 (en) * 2016-09-30 2018-04-05 Disney Enterprises, Inc. Virtual blaster
US20180153430A1 (en) 2016-12-02 2018-06-07 Pison Technology, Inc. Detecting and Using Body Tissue Electrical Signals
US20180247443A1 (en) * 2017-02-28 2018-08-30 International Business Machines Corporation Emotional analysis and depiction in virtual reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3853698A4

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11921471B2 (en) 2013-08-16 2024-03-05 Meta Platforms Technologies, Llc Systems, articles, and methods for wearable devices having secondary power sources in links of a band for providing secondary power in addition to a primary power source
US11644799B2 (en) 2013-10-04 2023-05-09 Meta Platforms Technologies, Llc Systems, articles and methods for wearable electronic devices employing contact sensors
US11666264B1 (en) 2013-11-27 2023-06-06 Meta Platforms Technologies, Llc Systems, articles, and methods for electromyography sensors
US11635736B2 (en) 2017-10-19 2023-04-25 Meta Platforms Technologies, Llc Systems and methods for identifying biological structures associated with neuromuscular source signals
US11567573B2 (en) 2018-09-20 2023-01-31 Meta Platforms Technologies, Llc Neuromuscular text entry, writing and drawing in augmented reality systems
US11797087B2 (en) 2018-11-27 2023-10-24 Meta Platforms Technologies, Llc Methods and apparatus for autocalibration of a wearable electrode sensor system
US11941176B1 (en) 2018-11-27 2024-03-26 Meta Platforms Technologies, Llc Methods and apparatus for autocalibration of a wearable electrode sensor system
US11481030B2 (en) 2019-03-29 2022-10-25 Meta Platforms Technologies, Llc Methods and apparatus for gesture detection and classification
US11961494B1 (en) 2019-03-29 2024-04-16 Meta Platforms Technologies, Llc Electromagnetic interference reduction in extended reality environments
US11481031B1 (en) 2019-04-30 2022-10-25 Meta Platforms Technologies, Llc Devices, systems, and methods for controlling computing devices via neuromuscular signals of users
US11493993B2 (en) 2019-09-04 2022-11-08 Meta Platforms Technologies, Llc Systems, methods, and interfaces for performing inputs based on neuromuscular control
US11907423B2 (en) 2019-11-25 2024-02-20 Meta Platforms Technologies, Llc Systems and methods for contextualized interactions with an environment
US11868531B1 (en) 2021-04-08 2024-01-09 Meta Platforms Technologies, Llc Wearable device providing for thumb-to-finger-based input gestures detected based on neuromuscular signals, and systems and methods of use thereof
CN117389441A (en) * 2023-11-23 2024-01-12 首都医科大学附属北京天坛医院 Method and system for determining imagined-handwriting Chinese character trajectories based on visual-following assistance
CN117389441B (en) * 2023-11-23 2024-03-15 首都医科大学附属北京天坛医院 Method and system for determining imagined-handwriting Chinese character trajectories based on visual-following assistance

Also Published As

Publication number Publication date
CN112789577B (en) 2024-04-05
CN112789577A (en) 2021-05-11
EP3853698A4 (en) 2021-11-17
US20200097082A1 (en) 2020-03-26
US11567573B2 (en) 2023-01-31
EP3853698A1 (en) 2021-07-28

Similar Documents

Publication Publication Date Title
US11567573B2 (en) Neuromuscular text entry, writing and drawing in augmented reality systems
US10970936B2 (en) Use of neuromuscular signals to provide enhanced interactions with physical objects in an augmented reality environment
US10905350B2 (en) Camera-guided interpretation of neuromuscular signals
US20200097081A1 (en) Neuromuscular control of an augmented reality system
CN114341779B (en) Systems, methods, and interfaces for performing input based on neuromuscular control
US11163361B2 (en) Calibration techniques for handstate representation modeling using neuromuscular signals
US11069148B2 (en) Visualization of reconstructed handstate information
Luzhnica et al. A sliding window approach to natural hand gesture recognition using a custom data glove
KR20140146346A (en) System, method and device to recognize motion using gripped object
EP3951564A1 (en) Methods and apparatus for simultaneous detection of discrete and continuous gestures
US20220291753A1 (en) Spatial Gesture Recognition using Inputs from Different Devices to Control a Computing Device
US20220253146A1 (en) Combine Inputs from Different Devices to Control a Computing Device
Kwon et al. MyoKey: Surface electromyography and inertial motion sensing-based text entry in AR

Legal Events

Date Code Title Description

121 EP: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 19861903
    Country of ref document: EP
    Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

ENP Entry into the national phase
    Ref document number: 2019861903
    Country of ref document: EP
    Effective date: 20210420