WO2016097841A2 - Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" user interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback


Info

Publication number
WO2016097841A2
WO2016097841A2 (application PCT/IB2015/002356)
Authority
WO
WIPO (PCT)
Prior art keywords
user
wearable
finger
movement
human
Application number
PCT/IB2015/002356
Other languages
French (fr)
Other versions
WO2016097841A3 (en)
Inventor
Quan Xiao
Original Assignee
Quan Xiao
Application filed by Quan Xiao filed Critical Quan Xiao
Priority to CN201580069328.1A priority Critical patent/CN107209582A/en
Priority to EP15869412.5A priority patent/EP3234742A4/en
Publication of WO2016097841A2 publication Critical patent/WO2016097841A2/en
Publication of WO2016097841A3 publication Critical patent/WO2016097841A3/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 Hand-worn input/output arrangements, e.g. data gloves
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • This invention relates to intuitive human-computer interfaces, and more specifically to methods of multi-mode UI input and gesture recognition, general human-computer interface input and output (such as, but not limited to, tactile signals), and the integration/simulation of input methods.
  • Plug-in/switchable gesture input/recognition methods allow easy single-handed navigation and selection on mobile (wearable) displays that might be located on the same wearable, and can generate images/patterns (in response to events such as, but not limited to, scroll, highlight, and cursor/indicator events interpreted from gesture recognition) for the corresponding control being selected, displayed on a wearable screen such as the hand, wrist, or a glass display/HMD/VR/MR/augmented display.
  • A user interface lets the user control the machine/device and interact with the functions, "objects", or "controls" provided by the machine/device.
  • GUI (graphical user interface): visualizations such as icons, windows, and cubes; these elements provide mechanisms for accessing and/or activating the system objects corresponding to the displayed GUI representations.
  • A user can generally interact with the "desktop" and other windows/views/objects using a mouse, trackball, trackpad, or other known pointing device. If the GUI is touch sensitive, a stylus or one or more fingers can be used to interact with the desktop.
  • A desktop GUI can be two-dimensional ("2D") or three-dimensional ("3D").
  • IoT: Internet of Things.
  • A smartphone or "universal" remote might provide some integration.
  • A wearable could provide intuitive control (such as context-aware gesture inputs) and might be able to communicate with and control device(s), with UI and feedback provided to the user locally (without the need to go to the devices). It could possibly integrate information from different devices, and possibly information from wearable sensors such as temperature, humidity, electro/magnetic field strength, illumination, etc., into one or more virtual dashboards, which could be shown on wearable displays (or an external display currently visible to the user), and control the different devices accordingly.
  • The current invention includes a lightweight wearable dynamic pattern display system design that provides physics effects (such as tactile pressure/vibration, airflow/wind, temperature change/humidity change) to the wearer's skin. It is flexible, compact, and high resolution, allowing multiple parts of a fingertip to feel different forces/effects. It can be controlled to present different feedback according to the finger's movement, pose, position (note this might have issues), or gesture.
  • A gesture-controlled wearable can have a visual display located on the back of the hand (such as in Fig. 2).
  • It can be carried by people and communicate with a computer wirelessly.
  • It contains a lightweight CO2 (or other liquefied gas) tank/source.
  • IMU here refers to an inertial measurement unit; this is usually an integrated-circuit MEMS device/sensor that can provide multiple degrees of freedom (DOF) of inertial and navigational signals, such as acceleration (x, y, z), rotation (angular speed around the x, y, z axes), 3D magnetic compass (x, y, z direction), etc.
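  • The 9-DOF signal set described above can be illustrated with a small data structure. A minimal sketch (not part of the original disclosure; field names are hypothetical):

        from dataclasses import dataclass

        @dataclass
        class ImuSample:
            """One reading from a 9-DOF MEMS IMU (hypothetical field layout)."""
            t: float       # timestamp in seconds
            accel: tuple   # linear acceleration (x, y, z)
            gyro: tuple    # angular speed around the x, y, z axes
            mag: tuple     # 3D magnetic compass reading (x, y, z)

        sample = ImuSample(t=0.01, accel=(0.0, 0.1, 9.8),
                           gyro=(0.0, 0.02, 0.0), mag=(22.0, 5.0, -40.0))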
  • DOF: degree of freedom.
  • An "Ergonometric simulated pose" is a pose the user can comfortably take when using the proposed input device/input method; in some cases the pose can be held for an extended period of time (such as more than several minutes) without fatigue. Such a pose is similar to the pose the user would take when manipulating the tool/device being simulated by the proposed input device/input method, so it can also be considered a "hint".
  • The "Ergonometric simulated pose" (or "hint pose") for simulating use of a mouse is "palm face down" (perhaps together with the forearm considered "level" within some tolerance, such as within plus or minus 30 degrees; such a tolerance range might be adjustable/configurable), just like the way people would normally use a mouse on a flat surface.
  • The "Ergonometric simulated pose" for simulating use of a joystick or jet "steering stick" is palm facing sideways, just like the way people would normally use a joystick/steering stick.
  • The "Ergonometric simulated pose" or "hint" for simulating use of a remote control is palm up, just like the way people would normally use a remote control.
  • The "Ergonometric simulated pose" for simulating swipe/touch of a smartphone/tablet is like the way people would normally use a smartphone/tablet, such as palm forward or down or at some angle (which might be configurable, and possibly with 1 or 2 fingers kept on the surface).
  • The "Ergonometric simulated pose" ("hint") for aiming a weapon could be either exactly the way the user would hold such a weapon (for example, for a rifle, using two hands, with the right palm close to sideways and the index finger on the "trigger" if the user is right-handed, and vice versa for left-handed), or a "symbolized" pose that uses one pose of holding one kind of weapon (such as a pistol, which needs just one hand) to represent all weapons. The pose might also be optimized for comfort, such as allowing the elbow to be supported (on a table).
  • Feel-able feedback (or physics effect patterns): physical feedback that users can feel with their skin, such as, but not limited to, tactile/pressure patterns, vibration, temperature change, wind/airflow, humidity change, etc., presented on multiple points of a surface.
  • A (dynamic) pressure pattern consists of a time sequence of frames of static pressure patterns "presented" to the user (for example, it can be used to indicate the direction or movement of something on the displaying surface).
  • "Human-focus" centered user input: a traditional user interface is "device-centered", which means it is usually located on the device itself or has a corresponding remote control for the device. "Human-focus" centered user input is a "human centered" interface in which sensors/controls, some possibly in wearable form, detect/sense the user's movement/position (or get user input from wearable controls) and generate commands for device(s) from the user input or from an interpretation of the user's movement (such as, but not limited to, acceleration or deceleration, rotation/flexing, and 3D position change or relative position change of one or more body part segments, such as segments of the finger, palm, wrist, etc.).
  • A proposed method of providing user INPUT to a machine includes: 1) using sensors to (reliably) detect the user's finger, palm, and arm (forearm, upper arm) movement (not static position), optionally calculating (by forward kinematics) or "inferring" the position information, and using the movement info (acceleration, angular acceleration/turn) to provide "delta" or "incremental" information for events/callbacks to a software system such as an OS, application, or middleware.
  • The accumulation or other "stateful" calculation can be provided to the client in a "per session" (or per app) way. Such accumulated values (such as static position calculated from incremental movement events or from acceleration data) can be calibrated with static pose/position/direction/flex sensor info (such as tilt sensors, bend/flex sensors, and magnetic sensors/compass), thus providing more accurate/robust real-time position and movement info simultaneously, which can be passed to the client (such as in the form of events/callbacks).
  • The speed/movement/position/rotation information could be, for example (but not limited to), in 3D or 4D vector format, possibly together with time delta/timestamp information for further (digital) signal processing.
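  • A minimal sketch (illustrative only; class and method names are hypothetical) of how the "delta"/"incremental" movement events and the per-session accumulation with static-sensor calibration described above might be organized:

        import time

        class SessionTracker:
            """Accumulates incremental movement into a per-session position estimate."""
            def __init__(self):
                self.position = [0.0, 0.0, 0.0]   # accumulated ("stateful") estimate

            def on_delta(self, delta_xyz, dt):
                # "delta"/"incremental" info passed to the client as an event/callback
                for i in range(3):
                    self.position[i] += delta_xyz[i]
                return {"type": "move_delta", "delta": tuple(delta_xyz),
                        "position": tuple(self.position), "dt": dt,
                        "timestamp": time.time()}

            def calibrate(self, reference_xyz, weight=0.2):
                # Blend the drifting accumulated estimate toward a reference derived
                # from static pose/tilt/flex/compass sensors (complementary-style correction).
                for i in range(3):
                    self.position[i] += weight * (reference_xyz[i] - self.position[i])

        tracker = SessionTracker()
        event = tracker.on_delta((0.01, 0.0, -0.005), dt=0.01)
        tracker.calibrate((0.012, 0.001, -0.004))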
  • A proposed method of providing user INPUT to a machine includes: using the movement info or the position info (such as pitch, direction) of the limb/moving body part (such as upper arm, elbow, forearm, wrist, palm) that is higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, i.e., how to interpret the movement of the limb/body part (such as palm, finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
  • A mode of operation such as, but not limited to, mouse, joystick, or simulated "gun aiming pose", based on the corresponding Ergonometric simulated pose.
  • A mouse input simulation mode is triggered by the user taking the Ergonometric simulated pose of mouse operation.
  • The "automatic" trigger can be turned off, to allow a client application to "capture" the device and interpret the full information, or when the client needs continuous/uninterrupted tracking of movements that does not allow a mode switch.
  • A related apparatus embodiment for input has one or more sensor(s) to determine higher-level limb pose, such as that of the forearm and palm; has sensors to determine finger movements; and provides interpretation of the finger movement/input device according to the mode determined by the higher-level limb pose data.
  • An (independent) embodiment of an apparatus for UI input comprises:
  • A wearable device that accommodates the size of a user's hand and has multiple IMU (MEMS) sensors (which may include, but are not limited to, inertial/gyroscope sensors, magnetic sensors, pressure/temperature sensors, etc.) located on the joints or movable parts of the human limb/hand/fingers for tracking movements; using the movement info or the position info (such as pitch, direction) of the limb/moving body part (such as upper arm, elbow, forearm, wrist, palm) that is higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, i.e., how to interpret the movement of the limb/body part (such as palm, finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
  • IMU signals from different moving parts of the body/limb (such as those from fingers and those from the hand) can be used together to "filter" out environmental noise and improve measurement accuracy, and can provide a "difference" signal (by subtracting in the same (Euler) coordinate system on the same scale) so that local "relative" movement (such as a finger relative to the palm, one finger relative to another finger, or one section of a finger relative to other sections, etc.) can be accurately measured.
  • Other kinds of sensors, such as flex sensors (whose output value changes according to the degree of curve/flex), are used to provide additional info for detecting the movement of body parts (such as fingers and palm), and such information can be integrated to improve the accuracy of the judgment made from the IMU sensor information.
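  • A minimal sketch (illustrative; thresholds are hypothetical) of the "difference" signal idea: the palm IMU reading is subtracted from a finger IMU reading expressed in the same coordinate frame, and a flex-sensor value can corroborate the result:

        def relative_motion(finger_accel, palm_accel, flex_value=None, flex_threshold=0.4):
            """Local movement of a finger relative to the palm (common coordinate frame assumed)."""
            diff = tuple(f - p for f, p in zip(finger_accel, palm_accel))  # common-mode noise cancels
            moving = any(abs(component) > 0.05 for component in diff)
            if flex_value is not None:
                # The flex/bend sensor output corroborates the IMU-based judgment.
                moving = moving or flex_value > flex_threshold
            return {"relative_accel": diff, "finger_moving": moving}

        print(relative_motion((0.3, 9.8, 0.1), (0.05, 9.8, 0.12), flex_value=0.6))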
  • Buttons or touch sensors can be placed on the side of the index finger for the thumb to operate; these can simulate the function buttons of a game control pad (such as, but not limited to, the direction (L/R/U/D), A, B, Attack, and Select buttons of the corresponding hand), or be used for shortcut functions such as menu, hold, capture (inhibit mode change), etc.
  • A wearable apparatus related to embodiment 7 could have pressure sensors on the fingertips, finger sections, and palm to sense the pressure the user actually feels on the surface/object.
  • Those IMU sensors can simply be attached/detached ("patch on/patch off").
  • A wearable device having an ergonometric shape to accommodate the user's hand (and fingers, for example, but not limited to, glove-like), and possibly also including parts wearable on the limb and torso, provides mechanical/tactile pressure patterns and/or other kinds of physics feedback, such as temperature change or airflow/wind, to the user, using actuators (or micro actuators) such as, but not limited to, pneumatic actuators, solenoids, piezo components, memory alloy, etc. Such mechanical/tactile pressure patterns/physics feedback serve to prompt the user about certain events, a certain position or range of positions in space, or a certain pose, or to simulate certain physical characteristics/properties of one or more virtual objects that interact with the user, so that the user has a more tangible/direct way of interacting with the virtual environment/objects/user interface.
  • The virtual objects can be simultaneously displayed on a 2D display or 3D display (such as, but not limited to, a 3D stereoscopic display, 3D holographic display, etc.).
  • A software system, such as, but not limited to, an operating system, UI widget, graphical system, 3D system, or game system, uses (drives the output of) the mechanical/tactile pressure pattern/physics feedback, possibly (but not mandatorily) together with visual/audio representations, to prompt the user about certain events or the result/feedback of actions related to the movement of the user's limb(s)/finger(s)/palm/other body parts, such as, but not limited to: touching some object (which could be a virtual object defined in the space in which the corresponding moving body part of the user resides) or leaving/detaching from an object/virtual object, pressing/releasing some control (which might also be a UI control, virtual 3D control, etc.), or catching/releasing/dragging and dropping certain objects/virtual objects or their representatives.
  • A wearable apparatus uses tactile/pressure patterns to simulate the touch of an object (at certain places or for a certain gesture), possibly in accordance with or in sync with the object presented to the user visually, to prompt the user that a certain finger has reached a certain position in space.
  • A related method for intuitive input is to provide a "snap in" effect when the user's finger/palm/arm is used for pointing at a specific location: when the indicating finger(s), hand, or arm is close to a special location (such as a location of (geometric) significance, like the center, a middle point, alignment with some edge, or on an extension at a certain angle to a line or surface), and the user is within such a range/nearby area, the user feels a certain pressure pattern prompting that a "snapping point" is close (possibly also together with a visual clue).
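  • An illustrative sketch (hypothetical names and thresholds) of the "snap in" prompt: when the pointing fingertip comes close to a geometrically significant location, a pressure pattern is presented:

        import math

        SNAP_POINTS = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0)]   # e.g. centers, midpoints, edge alignments

        def snap_feedback(fingertip_xyz, radius=0.05):
            """Return a pressure-pattern prompt when the fingertip is near a snapping point."""
            for point in SNAP_POINTS:
                dist = math.dist(fingertip_xyz, point)
                if dist <= radius:
                    strength = 1.0 - dist / radius    # stronger as the finger gets closer
                    return {"pattern": "snap_hint", "strength": round(strength, 2), "point": point}
            return None   # no snapping point nearby, no prompt

        print(snap_feedback((0.49, 0.52, 0.01)))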
  • Some example embodiments/scenarios: with 6-DOF or 9-DOF IMUs on the palm and forearm, an IMU on at least one section of the thumb and at least one section of the index finger, flex sensors on all 5 fingers, and an IMU on the upper arm.
  • IMUs and flex sensors can be used to track movements of the finger/palm/arm and establish a "skeleton" model, similar to that of the Leap Motion and Kinect sensors, tracking each individual moving part. Since acceleration and rotation info are measured by more than one IMU, it is convenient to use related mathematical/signal-processing methods to filter out the noise and provide more accurate output.
  • The signals might be sent to a distributed or central computer/microcontroller for filtering and integration/signal processing.
  • A button at the shoulder that must be touched by the same-side hand will automatically align (calibrate) the sensors (magnetometer and gyro).
  • The user's elbow is supported and the forearm is relatively vertical (+45/-20 degrees) to the table/chair-arm surface; the palm can face either down (within +/-40 degrees) as the "touch" interface, or vertical (+/-15 degrees) as the "gun/trigger" interface. If the palm faces up, it is interpreted as "remote" mode.
  • In gun/trigger mode, if the index finger is straight, the view moves (turns) in the direction it is pointing (using incremental mode/gyro rather than absolute direction, because of magnetometer sensor issues). If the index finger is flexed/curved, it is shooting mode.
  • The software system, such as, but not limited to, an operating system, UI widget, graphical system, 3D system, or game system, uses (drives the output of) the mechanical/tactile pressure pattern/physics feedback, and also uses the movement info or position info (such as pitch, direction) of the limb/moving body part (such as upper arm, elbow, forearm, wrist, palm) that is higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, i.e., how to interpret the movement of the limb/body part (such as palm, finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
  • The software system, such as, but not limited to, an operating system, UI widget, graphical system, 3D system, or game system, uses (drives the output of) the mechanical/tactile pressure pattern/physics feedback, also including triggering/switching to a mode of operation, such as, but not limited to, mouse, joystick, or simulated "gun aiming pose", based on the corresponding Ergonometric simulated pose.
  • A mouse input simulation mode is triggered by the user taking an Ergonometric simulated pose of mouse operation.
  • The wearable might have computing/signal-processing capabilities (such as, but not limited to, gesture recognition, 3D modeling, collision detection, etc.) besides the communication/control ability, and could also have other sensors on board, such as temperature, humidity, electro/magnetic field strength, illumination, etc.
  • A related embodiment also includes means to integrate information from different devices (and possibly also information from wearable sensors such as temperature, humidity, electro/magnetic field strength, illumination, etc.) into one or more "virtual dashboard"(s), which could be displayed on wearable displays (or an external display currently visible to the user), allowing control of the different devices (either independently or collectively with some tool/integrator/script).
  • The feedback for the control could be a tactile/pressure pattern or other physics effect (such as, but not limited to, temperature, airflow/wind, humidity changes, etc.) provided by the tactile displays and/or lightweight effectors located on the wearable (such as tactile displays on the tip and other sections of the finger/palm).
  • Such a Fixel is controlled by an interface control system that generates output based on the user's gesture (such as hand, finger, palm, wrist, or arm movement), and Fixels can be actuated (as feedback) to enhance the user experience. A Fixel might actuate when certain criteria are met or certain events happen, such as when a collision with an object is detected, as determined by the control system.
  • One or more "masks" with an "ergonometric" shape, such as a concave curved shape sized similarly to the "fingerprint" side of a finger (so that it fits the finger), with a plurality of Fixels located on at least one such surface accommodating a fingertip (the "fingerprint" side), and at least one Fixel on other sections of the hand.
  • A fixel is a pneumatic actuator made of flexible materials that can change physical shape and exert pressure on the user's skin. Such a fixel may have one input and from zero to 2 outputs, with one or more flexible thin pneumatic hoses attached to/coupled with it (or it could be printed/flat air hose circuits/slots made directly from the glove material); the hose is linked to the output of one or more air switch(es).
  • The mask may be made of flexible materials such as, but not limited to, plastic or rubber, with a certain rigidity so that one feels force only at the fixel where pressure is applied and other places have no feel-able force, to maintain a good signal-to-noise ratio (S/N ratio).
  • The effector(s) that provide physics feedback on the wearable (as mentioned in the above embodiments) can be other micro actuators, such as ones specialized in vibration (inducers, piezo), micro solenoids, memory alloy, or a combination of these.
  • One or more (a plurality of) lightweight servo-operated air switches are placed on the glove, preferably on the back of the hand, possibly together with lightweight solenoids, that control the distribution of (or direct) the compressed air to the end effectors.
  • The switches/solenoids are connected to one or more (different pressure) air sources.
  • The air/gas source is a compact CO2 tank/bottle (similar to those used in paintball) with one or more pressure-control valve(s)/regulator(s) to provide air/gas, possibly at different pressure levels. Such levels might be adjustable.
  • The tactile/physics feedback could be provided on the front and back of all fingers (each section has at least one effector, although several effectors might link to the same output to save switches), on the thumb and palm, and also on the forearm and upper arm.
  • the "pneumatic type” effector/cell or fixel can have 2 types— one is
  • The controller for these effectors can use different methods, such as pressure division/modulation (possibly using 2 or more air sources with different pressures), time-division modulation (one air source modulated in time), or an adjustable pressure regulator, to maintain the desired pressure of a given fixel/cell/effector so that a pressure pattern can be formed, possibly dynamically.
  • The control/switch system might use micro solenoids located close to the effector/cell/fixel, such as, but not limited to, on the back of the hand on the wearable, on the forearm accessories mentioned above, etc.
  • The control/switch system might include a servo-based rotational pneumatic switch, for example to control the inflation/deflation of fixel(s)/cell(s)/effector(s) by controlling the servo turning angle.
  • The switch/solenoid bank is on board (palm/back of hand) or on the glove, and can be connected to a pneumatic source on the user (such as on the forearm) via a (single) hose which has a quick connect/detach mechanism.
  • Providing physics (effect) feedback might include heating/cooling devices, such as but not limited to:
  • Physics (effect) feedback could include vibration provided by piezo actuators/components, inducers, and other possible low-profile vibration generators.
  • Providing physics (effect) feedback could include providing an airflow/wind effect by using a nozzle or "selective/controlled bleeding" from the pneumatic effectors or hoses.
  • Physics (effect) feedback could also include force generated by actuators/servos with an exoskeleton attached to the wearable device.
  • These different physics effectors might be on the same "fixel", for example at different layers of the wearable at the same location, or very close by (and some of them could be at the same layer).
  • The tactile/pressure pattern signifies/hints at or simulates different spatial relationships with the object/key/button, from no touch to fully pressed: no fixel actuated for not touched; one center point for barely touched but not "pressed"; all 5 points actuated to provide feedback that the button/control is fully pressed. There could also be a "half pressed" position in the middle, or a situation in the "seam" between 2 keys where only the 2 sides of the finger are pressed (and the corresponding fixels actuated) but not the center. Such patterns might be triggered by corresponding events.
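  • An illustrative mapping (the 5-fixel layout of center/left/right/top/bottom is an assumption) from the spatial relationships described above to actuation patterns:

        # Fixel order: [center, left, right, top, bottom]
        PATTERNS = {
            "not_touched":    [0, 0, 0, 0, 0],   # no fixel actuated
            "barely_touched": [1, 0, 0, 0, 0],   # only the center point
            "half_pressed":   [1, 1, 1, 0, 0],   # an intermediate pattern (illustrative choice)
            "fully_pressed":  [1, 1, 1, 1, 1],   # all 5 points actuated
            "seam":           [0, 1, 1, 0, 0],   # finger between 2 keys: both sides, not the center
        }

        def pattern_for(state):
            return PATTERNS.get(state, PATTERNS["not_touched"])

        print(pattern_for("seam"))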
  • Pressure/force sensor(s) on parts of the hand, such as the fingers, palm, etc., provide sensing of the force/pressure felt by that part of the hand.
  • Each cell/effector/fixel could have a pressure sensor to sense the current pressure of that fixel.
  • When doing "time division" style "inflation", the pressure sensor can determine whether a specific cell is charged or not and how much pressure is (still) needed, or provide feedback to the control system to close/shut down the valve when the designated pressure is reached.
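  • An illustrative control loop (valve/sensor interfaces are hypothetical and simulated here) for "time division" style inflation: fixels are charged in turn from one source, and each valve is treated as closed once its pressure sensor reports the designated pressure:

        class Fixel:
            """One pneumatic cell with a valve and a pressure sensor (simulated)."""
            def __init__(self, name, target_kpa):
                self.name = name
                self.target_kpa = target_kpa
                self.pressure_kpa = 0.0

            def charge_step(self, step_kpa=5.0):
                # One time slot with the valve open; never exceed the designated pressure.
                self.pressure_kpa = min(self.pressure_kpa + step_kpa, self.target_kpa)

        def time_division_inflate(fixels, max_cycles=100):
            """Round-robin charging: one air source modulated in time across the cells."""
            for _ in range(max_cycles):
                pending = [f for f in fixels if f.pressure_kpa < f.target_kpa]
                if not pending:
                    break   # every cell reports its designated pressure; all valves stay closed
                for fixel in pending:
                    fixel.charge_step()
            return {f.name: round(f.pressure_kpa, 1) for f in fixels}

        print(time_division_inflate([Fixel("center", 20.0), Fixel("left", 10.0)]))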
  • An (independent) embodiment of an apparatus for human-centric UI input comprises: a wearable device that accommodates the size of a user's hand and has multiple IMU (MEMS) sensors (which may include, but are not limited to, inertial/gyroscope sensors, magnetic sensors, pressure/temperature sensors, etc.) located on the joints or movable parts of the human limb/hand/fingers for tracking movements;
  • IMU signals from different moving parts of the body/limb (such as those from fingers and those from the hand) can be used together to "filter" out environmental noise and improve measurement accuracy, and can provide a "difference" signal (by subtracting in the same Euler coordinate system on the same scale) so that local "relative" movement (such as a finger relative to the palm, one finger relative to another finger, or one section of a finger relative to other sections, etc.) can be accurately measured.
  • Such info can be used to drive events/messages about local movement/location of the fingers (regardless of "global" position relative to the ground).
  • Wearable human-centric UI gesture recognition (possibly context aware) is used, and a "local" wearable display, such as a "hand-back" display shown in Fig. 5 or an HMD/glass display/augmented reality display, can be used to display the UI, current operation mode, status, and other related (feedback) information about the current user action/selection; such feedback can also be provided audibly via an earphone/speaker.
  • "local" wearable display such as a "handback” display shown in Fig.5 or a HMD/Glass display/ Augmented reality display can be used to display UI, current operation mode, status and etc. related (feedback) information about current user action/selection; such feedback can be provided audibly via earphone/speaker
  • The glove-like wearable could be multi-layered, with an exchangeable, disposable, or washable inner surface.
  • The wearable system might be extensible in software and could have (managed) environment(s) for running 3rd-party apps or plugins, for purposes such as, but not limited to: gesture recognition, collision detection, UI control, tactile output pattern control, messaging, communication, virtualization, etc.
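  • An illustrative sketch (hypothetical API, not the actual wearable software) of such a managed plugin environment, where 3rd-party plugins register for purposes like gesture recognition or tactile output control:

        class PluginHost:
            """Minimal managed environment for 3rd-party plugins (illustrative only)."""
            def __init__(self):
                self.plugins = {"gesture_recognition": [], "collision_detection": [],
                                "tactile_output": [], "ui_control": []}

            def register(self, purpose, plugin):
                self.plugins.setdefault(purpose, []).append(plugin)

            def dispatch(self, purpose, data):
                # Each registered plugin sees the same input data and returns its result.
                return [plugin(data) for plugin in self.plugins.get(purpose, [])]

        host = PluginHost()
        host.register("gesture_recognition", lambda frame: "swipe_up" if frame["dy"] > 0.2 else None)
        print(host.dispatch("gesture_recognition", {"dy": 0.3}))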
  • A gesture command based on (or relying on) a single gesture of the user might be "cross-device", or might effect a switch of the device the wearable system is "currently engaged to".
  • Criteria such as: with at least the index finger straight and parallel to, or within 30 degrees (or a configurable value no more than 40 degrees) of, the palm surface, and the direction of the finger within +/-20 degrees (or a configurable value no more than 30 degrees) of the direction from that fingertip to the device; when the criteria are satisfied, that device is considered "selected" for interaction (or engaged with).
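  • An illustrative check (vector-math helpers are hypothetical) of the selection criteria above: the index finger roughly parallel to the palm surface and roughly aimed at the device:

        import math

        def angle_deg(v1, v2):
            dot = sum(a * b for a, b in zip(v1, v2))
            norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
            return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

        def device_selected(finger_dir, palm_dir, finger_tip, device_pos,
                            palm_tol_deg=30.0, aim_tol_deg=20.0):
            """True when the pointing criteria for 'engaging' a device are satisfied."""
            parallel_to_palm = angle_deg(finger_dir, palm_dir) <= palm_tol_deg
            to_device = tuple(d - f for d, f in zip(device_pos, finger_tip))
            aimed_at_device = angle_deg(finger_dir, to_device) <= aim_tol_deg
            return parallel_to_palm and aimed_at_device

        print(device_selected((1, 0, 0), (1, 0.1, 0), (0, 0, 0), (2.0, 0.2, 0.0)))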
  • a "narrow beam” detector/sensor on the index finger such as optical/ infra red camera, focused infra red/RF transmitter, high directional ultra sound or RF receiver/ transceiver, etc
  • corresponding locating mechanisms deployed on the device such as maker /pattern/beacon/emitter for cameras or optical/infra red/ultrasound/RF receivers/transceivers on the finger, or receiver/ transceiver for the emitter from the finger
  • the detection method/elimination method can be various and usually no more than 1 beacon/marker/speaker/microphone is need per device.
  • device selection might be using index and middle finer together.
  • index finger as "pointer” (indicating the device) and interpret "left click"/"left(button) down” like events (similar to the meaning and effects of mouse left click/mouse left button down) as (detection of) the flexing of thumb or middle finger over a certain (configurable) threshold so that system could know the action is intentional
  • "left click” can be assign meanings such as selection or other meanings like corresponding mouse events
  • interpret "right click” like events similar to the meaning and effects of mouse right click/mouse right button down) as (detection of) the flexing of pinky finger over a certain (configurable) threshold (so that system could determine the action is intentional
  • right click can be assigned meaning for (or triggering) context menu/commands/stafus information.
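  • An illustrative sketch (flex values are assumed to be normalized to 0..1; the threshold is configurable) of interpreting thumb/middle-finger flex as a "left click" and pinky flex as a "right click", as described above:

        def interpret_clicks(flex, threshold=0.5):
            """flex: per-finger flex amounts (0 = straight, 1 = fully flexed)."""
            events = []
            if flex.get("thumb", 0.0) > threshold or flex.get("middle", 0.0) > threshold:
                events.append("left_click")    # selection, like a mouse left click
            if flex.get("pinky", 0.0) > threshold:
                events.append("right_click")   # context menu/status, like a mouse right click
            return events

        print(interpret_clicks({"thumb": 0.1, "middle": 0.7, "pinky": 0.6}))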
  • The wearable might allow extension by providing ports/interfaces (such as, but not limited to, I2C, SPI, USB, Bluetooth, etc., and hardware ports/interfaces like, but not limited to, those depicted in Fig. 6) and allow components to be attached/detached, such as in Fig. 6.
  • A finger part of a glove can be exchanged for another "finger part" with a different function using the same port/interface, or attached with other components to enhance/extend the functionality of the wearable. For example, as in Fig. 6B, a "narrow beam" detector component (such as, but not limited to, a camera, RF antenna, or IR beacon) can be attached to the index finger to perform "point and select" of devices; such a sensor provides or enhances the reliability of device selection/de-selection/communication.
  • a "narrow bean” detector component such as but not limited to a camera, R antenna or IR beacon
  • The wearable, together with a virtual "dashboard" integrating multiple devices (such as via an IoT network) running on a computing system (such as on board the wearable, or a server computer connected to the wearable), can generate the synthesized/integrated multi-device information for display (GUI), possibly shown on a wearable screen.
  • The human-centric wearable could provide all the key features mentioned above, such as physics feedback (as in the related embodiments), the ability to handle multiple devices (such as by automatic context switching and/or a visual dashboard), and an easy-to-use (intuitive), all-in-one, mobile platform.
  • A human-centered cross-platform/device user interface and management system comprises: communication channel(s) with devices the user can interact with through this system (such as, but not limited to, smart appliances, IoT (Internet of Things) devices, TVs/set-top boxes/AV systems, and air conditioning/temperature controls), such as, but not limited to, via infrared, ultrasound, RF, and networks (wired or wireless);
  • A module/means for task/device management to determine which device the user wants to interact with/control ("engage with"), based on "human-focus" centered user input, such as controls from the wearable, or (the interpretation of) hand/arm movement;
  • The "task/device management" module may provide device status feedback to the user (for example, but not limited to, visually, in audio, tactile, or other physics "feel-able" feedback, etc.).
  • The "user input" used to determine which device the user wants to interact with/control is based on a gesture command which has a beginning and an end based on the user's movement, such as, but not limited to, acceleration or deceleration, or rotation/flexing of one or more body part segments, such as segments of the finger, palm, wrist, etc.
  • The gesture command could be based on interpreting one or more gesture(s) of the user.
  • A gesture command based on (or relying on) a single gesture of the user might be "cross-device", or might effect a switch of the device the wearable system is "engaged to".
  • Criteria such as: with at least the index finger straight and parallel to, or within 30 degrees (or a configurable value no more than 40 degrees) of, the palm surface, and the direction of the finger within +/-20 degrees (or a configurable value no more than 30 degrees) of the direction from that fingertip to the device; when the criteria are satisfied, that device is considered "selected" for interaction (or engaged with).
  • The detection/elimination methods can vary, and usually no more than 1 beacon/marker/speaker/microphone is needed per device.
  • UI events are only fed to the "current" app/device; which one is the "current" might be related to the user's focus/intention, which is related to the gesture, and that is also the "context" for interpretation.
  • Each "device" might have a receiver for receiving commands (for example wireless or infrared); the commands/events are sent to a device only when that device/"context" is the "current" one.
  • A system may do all the translation/interpretation of commands on board with one or more processor(s) of the system, or in some simple situations (like a TV, microwave, or temperature control) handle all the interpretation on board and output the "command" only.
  • The system might have an output module that simulates commands/events for the corresponding devices it controls (which might work in a way such as, but not limited to, a universal remote simulating other remote controls).
  • The selection of devices is by hint/emulation instead of pointing:
  • A device can be selected, and the mode corresponding to the current device being controlled might be entered automatically, for example by "hint".
  • A hint is the pose that the user usually takes when operating such a device. For example, palm down with fingers also facing down hints at using a mouse, and palm sideways with fingers "half closed" hints at using a joystick/vertical handle, etc.; the corresponding movement of body parts/limbs is then interpreted as simulated device movement.
  • Such a system supports a wearable "feel-able" feedback mechanism for providing the user physics effect feedback (such as, but not limited to, tactile patterns) about the operation of the device.
  • The TV/set-top box can receive commands via infrared or IoT. When the device is selected, the wearable interprets gestures into commands, wherein waving up (with palm facing upwards) means sound volume up (whole palm movement), waving down means volume down, a "remote" hint with plus and minus touch means change channel, or the user can scroll up/down/left/right by finger. The user can use the screen keypad to enter a channel number or do a search, and a "kill" gesture means mute. Recording: left/right for fast forward or rewind; stop (palm facing forward).
  • Swipe up and down can be interpreted as album selection, swipe left and right means soundtrack selection, waving up (with palm facing upwards) means sound volume up (whole palm movement), waving down means volume down, etc.
  • Home environment control (no GUI): lights, blinds/curtains, temperature/AC: waving up or swiping up (depending on configuration) might mean turning the control up, waving down or swiping down (depending on configuration) might mean turning the control down, etc.
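  • An illustrative dispatch table (gesture and command names are hypothetical) showing how gestures map to different commands depending on which device context is "current":

        COMMAND_MAP = {
            "tv":    {"wave_up": "volume_up", "wave_down": "volume_down", "kill_gesture": "mute",
                      "swipe_left": "rewind", "swipe_right": "fast_forward"},
            "music": {"swipe_up": "next_album", "swipe_down": "previous_album",
                      "wave_up": "volume_up", "wave_down": "volume_down"},
            "home":  {"wave_up": "control_up", "wave_down": "control_down"},
        }

        def to_command(current_device, gesture):
            """Commands are only generated for the device that is currently 'engaged'."""
            return COMMAND_MAP.get(current_device, {}).get(gesture)

        print(to_command("tv", "wave_up"))      # volume_up
        print(to_command("home", "wave_down"))  # control_down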
  • A feel-able user interface comprised of:
  • Sensors detecting user gesture information: at least one finger of a user's hand's movement, together with at least one piece of movement info from another part of the hand (such as another finger, the palm, or the wrist);
  • A wearable that can accommodate at least one finger of a user's hand and has an articulated part on the back of the user's hand (and optionally a detectable part on the user's forearm) that can provide physics ("feel-able") feedback through the wearable;
  • Detecting user gesture input data, such as, but not limited to, finger and palm movements (such as acceleration, rotation, and direction via magnetic sensing), with the sensors on the wearable;
  • The physics ("feel-able") feedback includes (dynamic) patterns "presented" to the user at multiple points (or "Fixels") on at least one fingertip;
  • Detecting user gesture input data, such as, but not limited to, finger and palm movements (such as acceleration, rotation, and direction via magnetic sensing), with markers (such as, but not limited to, passive and/or active infrared beacons/markers, RF, magnetic, or ultrasound) on the wearable, optionally together with sensors on the wearable (such as, but not limited to, MEMS IMU sensors, flex/bend sensors, etc.), to provide "unified" finger/hand movement/position model data (such as, but not limited to, "skeleton/bone" data).
  • The UI system employs "hint based" ergonometric pose "detection" to determine the "mode" of input and switch seamlessly (automatic switch); the hint-based system is explained in detail above.
  • Mode of operation of the user's gesture, meaning the user's gesture might be interpreted with different meanings in different modes.
  • A mode can be entered, exited, changed, or switched; however, at any given time a single hand can be in at most 1 mode/state.
  • The mode can be entered, exited, changed, or switched in ways such as (but not limited to) a manual switch (such as a hint, or pressing a button) or an automatic switch, such as when selecting an object or "environment" which has "context" implications (in which case the system might switch to the only meaningful way of interaction, much like an OS input method (IME) window or dialog).
  • Some example scenarios include:
  • A general "release control" gesture will be palm down with all fingers spread straight.
  • The "current" control might also be context sensitive; for example, when the user selects a text box that only allows numbers, a specific control such as a "num pad", instead of a full keyboard or mouse, might "pop up" or be made current.
  • The system has an architecture that allows plugins/different interpreters (some of them possibly context sensitive) to be used, and has mechanisms to switch those interpreters.
  • A user interface system having at least one mode that renders something (a virtual object, system object, or control) being selected, and at least one mode that allows the selected item to be manipulated, in which the mode/interpretation of the gesture of the same body part might change according to context/mode.
  • The left hand is the keyboard;
  • The right hand is the mouse (like the normal configuration for PC games).
  • The input is tactile/relative-position/gesture oriented, rather than absolute-location or visually oriented interaction.
  • A UI system using gestures and the relative position/movement of segments of body parts (such as finger segments), so that the control is "tolerant" of slight movement of the whole hand/environmental vibration, filters out/reduces noise from the sensors, and focuses on the "relative" and "intentional" movement of fingers. It could feel (to the user) like the key/dialog box moves together with the hand (which could make it more stable and easier to control).
  • A (computer) keyboard or musical instrument:
  • A musical instrument can be simulated by detecting the "signature of finger movement for a given key or note" (or the "typical hand/finger keystroke action for a given key/note").
  • The signature movement can be detected, for example, by detecting the relative movement/position of the fingers with respect to the palm, wrist/forearm, or other fingers.
  • For instruments requiring hand/wrist/forearm movement, their relative movement/position (with respect to the body, or the "virtual" instrument) might also be detected.
  • Since an experienced user might rely on muscle memory more than vision, the display of the keyboard/instrument might be optional in some cases; however, it is still better to display the instrument in a way that gives the user a comfortable sense of space, possibly with visual/audio feedback. Since tactile feedback plays an important role in this kind of "blind typing/performance", it is also desirable that appropriate tactile feedback (such as, but not limited to, a pressure pattern or vibration) be provided to the user (for example via the wearable device on the user's hand/finger) at the time the system detects such a "signature of finger movement for a given key or note" (or "typical hand/finger keystroke action for a given key/note") and determines a key/note is struck, so that the user can directly "feel" the keyboard/instrument.
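  • An illustrative sketch (threshold and axis convention are assumptions) of detecting such a keystroke "signature": a rapid downward motion of the fingertip relative to the palm triggers a key-strike event and the corresponding tactile feedback:

        def detect_key_strike(finger_velocity, palm_velocity, strike_threshold=0.15):
            """Relative downward fingertip velocity (m/s) past a threshold counts as a key strike."""
            relative_down = finger_velocity[2] - palm_velocity[2]   # z axis: negative = downward
            if relative_down < -strike_threshold:
                return {"event": "key_strike",
                        "feedback": {"pattern": "fully_pressed", "vibration_ms": 20}}
            return None

        print(detect_key_strike((0.0, 0.0, -0.3), (0.0, 0.0, -0.02)))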
  • An OS/GUI/middleware system including at least one level of "high level" events/messages that can be passed to 3rd-party software, and, for systems providing different levels of events/messages, allowing custom filtering for different applications.
  • A graphical UI system or operating system having finger-specific EVENTS/messages (this can easily be surpassed by providing events for all fingers and letting the user filter by finger ID); such messages may contain detailed 3D movement/location (relative or absolute) information, a timestamp, etc.
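  • An illustrative sketch (field names are hypothetical) of finger-specific events carrying 3D movement and a timestamp, with an event level so applications can filter the stream they care about:

        from dataclasses import dataclass

        @dataclass
        class FingerEvent:
            finger_id: str     # e.g. "index", "middle", "pinky"
            level: str         # "raw" (low level) or "gesture" (high level)
            movement: tuple    # relative or absolute 3D movement
            timestamp: float

        def filter_events(events, wanted_fingers=None, wanted_level=None):
            """Custom filtering so each application receives only the events it subscribed to."""
            return [e for e in events
                    if (wanted_fingers is None or e.finger_id in wanted_fingers)
                    and (wanted_level is None or e.level == wanted_level)]

        events = [FingerEvent("index", "gesture", (0.01, 0.0, 0.0), 1.0),
                  FingerEvent("pinky", "raw", (0.0, 0.002, 0.0), 1.0)]
        print(filter_events(events, wanted_fingers={"index"}, wanted_level="gesture"))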
  • Any 3D gesture requires just one finger (possibly with palm/wrist movement).
  • Examples are dealing cards, grabbing things, or tossing something.
  • Finger-specific gesture input: for example, the pinky finger designated for the context menu.
  • Keyboard/musical instrument simulation generally includes sensors for both hands (all fingers), such as 2 gloves (each able to detect movement of all fingers on the hand and the palm movements).
  • Example applications capable of using multiple tools simultaneously (or without switching), with tools assigned to multiple different fingers (such as 2D document/page editing, drawing/picture editing, 3D modeling, editing, and animation creation).
  • The application has more than one "focus" or "selection/manipulation tool" assigned to different fingers, which might also be displayed on the screen to represent the different tools/functions.
  • Representations such as a cursor, tool icon, "focus area or other special effect area" (such as, but not limited to, magnifying, highlighting, or "see-thru" rendering), etc.; these icons, focus areas, or special effect areas might be displayed simultaneously, based on the corresponding finger location/movement.
  • One finger, such as the index finger, is assigned an "additive" tool such as a pen/brush in a 2D workspace (or a volume-adding tool/brush/extruder in a 3D workspace), and might be displayed as a pen/brush icon (or another icon corresponding to the specific tool assigned) on the screen, while another finger (such as the middle finger) is assigned a "subtractive" tool such as a rubber in a 2D workspace (or a chisel/cutter/eraser in 3D space), possibly (optionally) also represented visually on the screen, so that the 2 tools can both work at the same time, or be used without the need to switch tools; yet another finger (such as the thumb) is assigned another function such as smudge/smooth/color, possibly (optionally) visually represented as a related cursor/tool icon on the screen, to also be used at the same time, or without switching tools.
  • an "additive” tool such as a pen/brush in 2D workspace (or a volume adding tool/brush/ extruder in 3D workspace)
  • an other finger such
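  • An illustrative sketch (tool names are hypothetical) of assigning different tools to different fingers so they can be used without switching, as in the bullet above:

        TOOL_ASSIGNMENT = {
            "index":  "brush",    # additive tool (pen/brush in 2D, extruder in 3D)
            "middle": "eraser",   # subtractive tool (rubber in 2D, chisel/cutter in 3D)
            "thumb":  "smudge",   # smoothing/color tool
        }

        def apply_finger_actions(touching_fingers, positions):
            """Each finger touching the work area applies its own tool at its own position."""
            return [{"tool": TOOL_ASSIGNMENT[f], "finger": f, "at": positions[f]}
                    for f in touching_fingers if f in TOOL_ASSIGNMENT]

        print(apply_finger_actions(["index", "thumb"], {"index": (10, 20), "thumb": (12, 18)}))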
  • Tactile/vibration as well as other "physics feedback" such as temperature or wind/airflow might also be provided to the finger assigned a specific tool, so that the user can directly feel whether the tool is working/effective/touching the work piece, in addition to visual feedback (which might be more difficult in 3D).
  • Word/document processing or page editing: there could be multiple modes of navigating. The 1st one simply simulates the mouse and keyboard;
  • The 2nd is like a video game configuration, recognizing "ergonometric" gestures such as the forearm vertical or close to vertical (with the elbow supported), and finger/hand movement to indicate caret/cursor location or page up/down (swipe with multiple fingers, ignoring the index finger since this moves the background, while the index finger moves the cursor/foreground). Gesture interpretation could include "grab the current page/object with the left hand" to create a screen copy so that the user can see different portions/viewports/perspectives of the article being edited.
  • 3D modeling: grab and rotate, place an object, feel if it fits into other parts. It can be 2-handed: the left hand navigating/selecting (background/foreground), the right hand selecting/manipulating. Hit/press the model (to re-shape it) with resistance/tactile feedback.
  • We can set one finger (index) for selection and another (middle) for un-selection, while the thumb can be
  • GUI controls, such as a knob, which can be pushed/pulled and rotated.
  • The content can be made into another screen copy by detecting the left hand moving/grabbing (so that the user can compare left/right and see different portions/viewports of the content).
  • The "right simulated click" by the pinky (little finger) or ring finger can bring up the context menu or trigger activities/events equivalent to those of right-clicking a mouse.
  • A fingertip "Fixel" array hints at or simulates different spatial relationships/physics effects with the object/control (such as a key/button/slider, etc.): from not touched (blank pressure pattern, no Fixel actuated), to merely touched (one fixel actuated in the pressure pattern, such as at the center point if the finger is lined up with the button/control), to fully pressed (all points/fixels in the pressure pattern are actuated), to one side of the key (only the fixels corresponding to the side of the finger touching the object are actuated). When the finger presses between 2 adjacent objects/buttons, the user can feel the "seam" between the 2 objects: only the 2 sides corresponding to the areas where the fingertip touches the keys are actuated, but not the center "seam" area.
  • Pressure patterns/physics ("feel-able") effects can be provided together with "force feedback" provided by exoskeletons (such as at the finger/arm level).
  • Simulation of relative movement: this "indicating dynamic pattern" is a time sequence of frames (of patterns) in which the "movement" of fixels indicates/hints at the direction/movement we want to prompt/hint to the user.
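  • An illustrative sketch (a 1x5 fixel row is assumed) of such an "indicating dynamic pattern": a time sequence of frames in which the actuated fixel moves across the array to hint a direction to the user:

        def direction_hint_frames(num_fixels=5, direction="right"):
            """Frames for a 1-D fixel row; the moving actuated point hints the direction."""
            order = range(num_fixels) if direction == "right" else range(num_fixels - 1, -1, -1)
            frames = []
            for active in order:
                frames.append([1 if i == active else 0 for i in range(num_fixels)])
            return frames   # play back one frame per time slot on the tactile display

        for frame in direction_hint_frames(direction="left"):
            print(frame)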
  • A GUI system with objects/controls/widgets having such feedback properties (such as, but not limited to, as a "material" or part of the "material" properties of an object, as defined in many 3D editing environments).
  • A user interface system for interacting with a "model" in a computer system which has virtual objects/controls/widgets, in 3D, for the user to interact with; such a virtual object might have feel-able physics properties associated with it, such as "tactile material, temperature property, vibration/frequency, wind/airflow/humidity", etc. (which might be dynamic or changeable); gesture or finger position information is used to determine whether a finger (or fingers) intersects/collides with, or is within or outside the range of, a virtual object/component/widget/control (one that is displayed, or that is not hidden; or better, not even shown, because we can use the case of "blind typing"/"blind keystrokes on a piano").
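  • An illustrative sketch (property names and the spherical bound are assumptions) of a virtual object carrying feel-able "material" properties and a simple intersection test that drives the physics feedback:

        from dataclasses import dataclass, field

        @dataclass
        class VirtualObject:
            center: tuple
            radius: float
            material: dict = field(default_factory=lambda: {
                "tactile_pattern": "fully_pressed", "temperature_c": 25.0, "vibration_hz": 0.0})

        def fingertip_feedback(fingertip, obj):
            """If the fingertip is inside the object's bounds, return its feel-able properties."""
            dist_sq = sum((f - c) ** 2 for f, c in zip(fingertip, obj.center))
            if dist_sq <= obj.radius ** 2:
                return obj.material   # drive tactile/temperature/vibration effectors from this
            return None

        knob = VirtualObject(center=(0.0, 0.0, 0.0), radius=0.03)
        print(fingertip_feedback((0.01, 0.0, 0.02), knob))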
  • An OS having different "patterns" of hit effects, such as a single point (fading in/out with strength, frequency, or duty-cycle change), moving, ripple, and other effects, to give feedback to the user ("embellish" the feedback).
  • A UI system utilizes physics feedback pattern mechanisms located on the user, such as (multiple Fixels) on the user's fingertip, to perform hinting, indication, and feedback (based on physics such as pressure or vibration patterns) about virtual objects displayed to the user (interacting with the corresponding part of the user's body/limb).
  • The index finger is a pencil;
  • The middle finger is a rubber (eraser).
  • The UI system includes at least 2 levels of events, and allows custom filtering for different applications.
  • The UI system/middleware system/OS allows 3rd-party apps or plugins (that directly handle 3D models and "physical" events such as collision detection and output force feedback, while bubbling "business events" up to the client PC system).
  • If the client system reacts slowly, the on-board processing will keep the user experience "still ok"; this is a bit like when a user presses a button on a system with slow reaction: the user will still feel the button (although the result of that press from the client system comes late). It would not be appropriate if the user tried to press a button but didn't feel anything until much later.
  • A wearable display located, for example but not limited to, on the back of the user's hand (and possibly flexible), to display information such as (but not limited to) the current operation mode and, optionally, feedback/device status information visually; such information might also be provided audibly via an earphone/speaker.
  • Fig. 1 shows an intuitive way of 3D manipulation.
  • Sensors detect movement 101 of the index finger, and depending on whether the user is trying to "catch" or "grab" the virtual object 102 (which is displayed on a 2D or 3D display), the interaction with the object will be different.
  • Fig. 1 shows the user's thumb 105 not yet reaching/touching the virtual object 102, while the index finger (and its on-screen representation 103) has been determined to be touching the virtual object 102. If the user is wearing a force feedback wearable device as described in the specification, then the user's index finger will have the touch/tactile feeling/prompt, and the user can rely on this feedback to manipulate the object. (Visual clues might also be displayed on the screen as shown.)
  • The movement 101 of the finger acts as pushing one side of the virtual object and thus makes it rotate, so that the user can examine the other side of the object. If the user also uses the thumb 105 to help "grab" the object with 2 fingers, the movement can be interpreted as moving the object from the current location to a new one, without rotation. This is an example of intuitive 3D manipulation.
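  • An illustrative sketch (touch flags are hypothetical inputs) of the Fig. 1 behaviour: pushing with the index finger alone rotates the object, while grabbing with thumb and index together translates it:

        def interpret_object_interaction(index_touching, thumb_touching, index_movement):
            """One finger pushes a side of the object (rotate); two fingers grab it (move)."""
            if index_touching and thumb_touching:
                return {"action": "translate", "delta": index_movement}   # move without rotation
            if index_touching:
                return {"action": "rotate", "push": index_movement}       # examine the other side
            return {"action": "none"}

        print(interpret_object_interaction(True, False, (0.02, 0.0, 0.0)))
        print(interpret_object_interaction(True, True,  (0.02, 0.0, 0.0)))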
  • Fig. 2 shows a hint or "Ergonometric simulated pose" for a gun aiming position in an FPS game (or similar VR environment/simulation) shown on screen 201; sensors detecting movement/rotation/position of the moving limb/palm/fingers can be used or "mapped" to regular FPS action commands such as turn, movement, pitch, yaw, etc. that are normally provided by a keyboard/mouse or joystick.
  • The forearm acts like a joystick: its movement forward (direction 203A) and backwards (direction 203C) maps to joystick (or keyboard) forward and backwards, or the corresponding direct XInput commands to the FPS game engine.
  • Side movement (203B and 203D) of the forearm means moving (without turning) sideways, just like using a joystick or keyboard.
  • The horizontal twist action of the palm/wrist can be mapped to yaw (turning horizontally) control, CW or CCW (mainly used for direction and aim), and the up-down twist of the index finger/wrist movement can be interpreted as pitch (up, down) control, which also controls the direction and aim.
  • If the user flexes (bends) the index finger to a (configurable) extent, it can be interpreted as firing the weapon 202 in the picture.
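  • An illustrative sketch (units and the fire threshold are assumptions) of the Fig. 2 mapping: forearm displacement maps to move commands, wrist twist to yaw, index-finger/wrist up-down movement to pitch, and index-finger flex to fire:

        def fps_commands(forearm_delta, wrist_twist_rate, index_pitch_rate, index_flex,
                         fire_threshold=0.6):
            """Map limb movement to regular FPS commands (normally given by keyboard/mouse/joystick)."""
            return {
                "move_forward_backward": forearm_delta[0],   # directions 203A / 203C
                "strafe": forearm_delta[1],                  # side movement 203B / 203D
                "yaw": wrist_twist_rate,                     # horizontal palm/wrist twist (CW/CCW)
                "pitch": index_pitch_rate,                   # up-down index finger/wrist movement
                "fire": index_flex > fire_threshold,         # flexing the index finger fires weapon 202
            }

        print(fps_commands((0.1, -0.02), 0.3, -0.1, 0.7))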
  • Fig.3A shows a dynamic tactile feedback pressure pattern is provided to user according to the relative position (and "touch” status) of the finger tip and the button/control displayed in 3D.
  • Fig 3B and Figure 3C shows an pneumatic actuator (connecting hose not shown) with 5 "fixels" to finger tip and its “mask” (can be such as but not limited to: relative resilient(but more rigid than the "membrane” or film on top of the openings on the mask in Fig 3C), or at a rigid at some degree that is comfortable to user, and etc.) that fits to a user's finger tip.
  • Fig3C shows the mask without a covering "membrane”/film(s) or other kind of resilient material, which cover the 5 openings shown in Fig 3C.
  • Air/gas from the hoses (not shown) through the 5 connecters (as shown in Fig 3B) will (selectively) enter the chamber that is formed by the 5 openings and the resilient "membrane”/ film(s), and inflate the "membrane'Vfilm(s) to form pressure at corresponding position of the finger tip.
  • Both the pressure pattern formed by the 5 "fixels" and the pressure of a single fixel can be controlled.
  • Fig. 5 shows a gesture-based UI with a local wearable display.
  • the image displayed on the wearable display can help the user use only their fingers, without any actual physical keyboard, to implement an "Air keyboard" function, without the need for any other GUI or "remote"/external display that is not wearable (with or without tactile feedback).
  • the user can use this the same way they use any remote control: use the index finger to select and "press" a button, while using the thumb with the other fingers or the palm to ...
  • the user can use 2 fingers (such as the THUMB and the middle/4th or pinky) to indicate "zoom in" and "zoom out", which is a movement similar to what he/she could do on a smartphone touch screen; the wearable system can capture such finger movement, possibly filtering out noise and "drifting" by referring to sensors on other parts of the hand/arm or body. Similarly, the wearable system could allow/detect the user's use of the index finger (or middle finger, or both together) for navigating (Up/Down/Left/Right), and the index finger for click (the middle finger can be dedicated to scroll up/down, similar to the way a user uses a mouse).
  • Such movement of UI (controls), focus, and resulting status changes can be shown on a wearable display, such as an orientation-changeable display on the wearable itself, or an HMD/glasses display/(see-through type) augmented-reality display worn before the user's eyes. The user thus gets visual feedback while navigating/typing with the UI, without the need for an external display (such as that of a device, or a fixed display device connected to one), enabling the user to control the device via the wearable in a way similar to using a remote (for example, finding a key and pressing it).
  • a glove-like wearable device equipped with multiple IMUs (701) on the key moving sections of the fingers, which can detect accelerations, possibly together with gyro (angular speed/acceleration) and magnetic data, and which could have other types of sensors, such as components detecting bend/flex of the joint 703 (resistive, fiber-optic cable, magnetic, etc.).
  • IMUs, in forms such as MEMS devices, can communicate with a (central) processing unit 702 (for purposes such as but not limited to DSP, noise reduction, gesture recognition, machine learning, communication, event/message generation and handling, task management, 3rd-party plugin/app management and running, etc.) via links such as I2C, SPI, etc.
  • the information is gathered and processed (not necessarily by 702) to form events at different levels; some movement-based (not collision-based) events can already be generated at this stage, and possibly skeleton info. And if there is a VR (close-to-body-part) model, collisions can be detected (for example on board).
  • Tactile/pressure display arrays are located on key sections of the fingers and possibly also other parts of the hand. As magnified in Fig. 1A, they are like one or more "masks" with an "ergonometric" shape (such as a concave curved shape sized to fit the "fingerprint" side of that finger), with a plurality of Fixels located on at least one such mask/surface that accommodates a fingertip (the "fingerprint" side);
  • a fixel is a pneumatic actuator made of flexible materials that can change physical shape and exert pressure on the user's skin; such a fixel may have one input and from zero to 2 outputs, with one or more flexible thin pneumatic hoses attached to/coupled with it (or it could be printed/flat air-hose circuits/slots made directly from the glove material); the hose is linked to the output of one or more air switch(es)/multiplexer(s)/solenoids which output multiple states and which control the inflation/deflation/pressure of the cell/fixel.
  • the effector can also be other micro actuators, such as ones specialized in vibration (inducer, piezo), micro solenoids, memory alloy, or a combination of these.
  • the switches/solenoids are connected to one or more air sources (with different pressures).
  • the air source might be located in accessories on the arm and linked with controllers 707 via a detachable connector 706.
  • air source 705 is a small CO2 tank, such as those used in paintball, with one or more pressure-control valves/devices (in 708) to provide air or gas, possibly at different pressure levels; such levels might be adjustable (dynamically).
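As a non-limiting illustration of the Fig. 2 mapping described in the list above, the following minimal Python sketch translates sampled forearm/wrist/finger readings into generic FPS-style commands. The field names, threshold values and the string command vocabulary are illustrative assumptions, not part of the disclosure; a real system would emit XInput/keyboard/mouse events instead.

```python
# Hypothetical sketch: mapping forearm/wrist/finger readings to FPS-style commands.
from dataclasses import dataclass
from typing import List

@dataclass
class HandSample:
    forearm_forward: float   # + forward (203A), - backward (203C), arbitrary units
    forearm_side: float      # + right (203B), - left (203D)
    wrist_yaw_rate: float    # deg/s, horizontal twist of palm/wrist (CW positive)
    wrist_pitch_rate: float  # deg/s, up/down twist of index finger/wrist
    index_flex: float        # 0.0 = straight, 1.0 = fully bent

def map_to_fps_commands(s: HandSample,
                        move_deadzone: float = 0.15,
                        fire_threshold: float = 0.6) -> List[str]:
    """Translate one sample into zero or more FPS commands."""
    cmds: List[str] = []
    if s.forearm_forward > move_deadzone:
        cmds.append("MOVE_FORWARD")
    elif s.forearm_forward < -move_deadzone:
        cmds.append("MOVE_BACKWARD")
    if s.forearm_side > move_deadzone:
        cmds.append("STRAFE_RIGHT")
    elif s.forearm_side < -move_deadzone:
        cmds.append("STRAFE_LEFT")
    # Yaw/pitch are incremental aim adjustments proportional to the twist rate.
    if abs(s.wrist_yaw_rate) > 1.0:
        cmds.append(f"YAW {s.wrist_yaw_rate:+.1f}")
    if abs(s.wrist_pitch_rate) > 1.0:
        cmds.append(f"PITCH {s.wrist_pitch_rate:+.1f}")
    # Flexing the index finger past a configurable extent fires the weapon.
    if s.index_flex >= fire_threshold:
        cmds.append("FIRE")
    return cmds

if __name__ == "__main__":
    sample = HandSample(forearm_forward=0.4, forearm_side=0.0,
                        wrist_yaw_rate=12.0, wrist_pitch_rate=-3.5, index_flex=0.8)
    print(map_to_fps_commands(sample))
```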

Abstract

This invention relates to methods of multi-mode (intuitive) UI input or gesture recognition, to general human-computer interfacing relating to input and output/feedback (such as but not limited to tactile signals), and to the integration/simulation of input methods. It also relates to providing a human-centered hyper UI that might be cross-platform or cross-device and that uses a "feel-able" method to improve user interaction with computer systems, especially for objects/widgets/controls that have a visual representation (with a graphical user interface). Part of the invention also relates to a wearable human-computer interface: a human-centric wearable interface that can use possibly context-aware gesture input, and optionally a wearable dynamic pattern display system that provides physics effects (such as tactile pressure/vibration, airflow/wind, temperature change/humidity change) to the wearer's skin. Optionally, it has other ways of providing feedback, such as a visual channel on the wearable, etc.

Description

Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" User Interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback
Cross reference
This application claims priority to US provisional applications 62092710, 62110403 and 62099915.
Field of invention
This invention is about intuitive human computer interface, more specifically related to methods of multi-mode UI input or gesture recognition, general human-computer interface relating to input and output (such as but not limited to tactile signal), and integration/simulation of input methods.
This is also related to providing human centered hyper UI that might be cross-platform or cross-device and using a "feel-able" method to improve user interaction with computer systems, especially for objects/widgets/controls that have a visual representation (with graphical user interface).
This is also related to wearable human-computer interface:
It is a human-centric wearable interface that can use possibly context-aware gesture input, and optionally a wearable dynamic pattern display system that provides physics effects (such as tactile pressure/vibration, airflow/wind, temperature change/humidity change) to the wearer's skin. It is flexible, compact and high resolution, allowing multiple parts of a fingertip to feel different forces/effects. It can be controlled to present different feedback according to the finger's movement, pose, position (note this might have issues), or gesture. Or, optionally, it has other ways of providing feedback, such as a visual one: for example, a round OLED display on the wearable, such as on the back of the user's hand, can be used to provide feedback on the user's gesture input (such as status, current focus/control, etc.).
(And optionally,) with plugin/switchable gesture input/recognition methods, it allows easy single-hand navigation and selection on mobile (wearable) displays that might be located on the same wearable, and could generate images/patterns (in response to events such as but not limited to scroll, highlight, cursor/indicator, etc., interpreted from gesture recognition) for the corresponding control (being selected), and display them on a wearable screen, such as on the hand, the wrist, or a glasses (display)/HMD/VR/MR/augmented display.
Backgrounds, concepts and terms used in describing solutions/claims
A user interface is for the user to control the machine/device and interact with the functions, "objects" or "controls" provided by the machine/device. In an example, a graphical user interface (GUI) allows a large number of graphical objects or items to be displayed on a display screen at the same time; such visualizations (such as icons, windows, cubes) can be used for the user to interact with the computer/device (the elements provide mechanisms for accessing and/or activating the system objects corresponding to the displayed representations). In a GUI system, a user can generally interact with the "desktop" and other windows/views/objects using a mouse, trackball, track pad or other known pointing device. If the GUI is touch sensitive, then a stylus or one or more fingers can be used to interact with the desktop. A desktop GUI can be two-dimensional ("2D") or three-dimensional ("3D").
Today, due to the inaccuracy of the "depth sense" of many human users (a normal user is more comfortable working in 2D), a 3D GUI can be difficult or unintuitive to navigate using conventional means, such as a finger or stylus. Thus a more intuitive solution for HCI (human-computer interface) is proposed in this application.
Also, as the number of controls/devices a user can interact with has increased rapidly (computers, smart phones, TVs, remote controls), there are several challenges for the current "non-human-centric" UI: the user needs to remember how to interact (such as a password, or the location of the remote control), and the user might need to physically go to different devices to interact with them directly (such as with a touch screen). While IoT (internet of things) might provide some solutions for connectivity, and a smart phone or "universal" remote might provide some integration, they all have their own drawbacks and do not provide a whole/unified solution. For example, the current "universal remote" is by itself a quite unintuitive interface that requires the user to remember many things (such as mode switching and key definitions), so it is still difficult to use, and it can also easily be misplaced and appear "lost" to the user; the smart phone, on the other hand, is difficult to operate with one hand for sophisticated (such as gesture-related) interactions, etc.
So in this application a much better, yet non-obvious, solution is proposed, which is related to shifting from a "device-centric" UI/way of interaction to a "human-centric" UI/way of interaction: by making the UI or controls from devices accessible at the user's fingertips without the need for the user to move to the device, and by determining the context and input from the user's gesture input, which is more natural and intuitive to the user (very little learning is needed), a wearable design (such as a glove-like device) with possible accessories could provide a centralized and reliable gesture detection architecture while allowing feedback such as tactile and other displays (such as visual wearable displays) to be integrated, and allow sophisticated single-hand operations, possibly across multiple devices.
So one focus of this invention is a human-centric approach, more specifically a wearable that could provide intuitive control (such as context-aware gesture inputs), that might be able to communicate with/control device(s), with UI and feedback provided to the user locally (without the need to go to the devices), possibly with the ability to integrate information from different devices, and possibly with information from the wearable(s) such as temperature, humidity, electric/magnetic field strength, illumination, etc., into one or more virtual dashboards, which could be displayed on wearable displays (or an external display currently visible to the user), and to control different devices accordingly.
With "surface-less" or spatal/3D gesture input/recognition and interaction, proper feedback such as from tactile (or visual) is preferred to greatly improve user interface. Previously there are some efforts to make output from computer tagileble/tactile, for example the device for the blind, and MIT's tangible media lab, however they are heavy, cumbersome and can not be made wearable.
The current invention includes a lightweight wearable dynamic pattern display system design that provides physics effects (such as tactile pressure/vibration, airflow/wind, temperature change/humidity change) to the wearer's skin. It is flexible, compact and high resolution, allowing multiple parts of a fingertip to feel different forces/effects. It can be controlled to present different feedback according to the finger's movement, pose, position (note this might have issues), or gesture.
Optionally, a gesture-controlled wearable can have a visual display located on the back of the hand (such as in Fig 2).
It can be carried by people and communicate with a computer wirelessly. Optionally it contains a lightweight CO2 (or other liquid) gas tank/source.
Optionally it is combined with IMU sensors, flex sensors and other (local) pose sensors to provide pose/position-related info.
IMU: this refers to an inertial measurement unit, usually an integrated-circuit MEMS device/sensor that can provide multiple degrees of freedom (DOF) of inertial and navigational signals, such as acceleration (x, y, z), rotation (angular speed around the x, y, z axes), 3D magnetic compass (x, y, z direction), etc.
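As a minimal sketch (not part of the disclosure) of how such raw IMU channels are commonly fused, the following Python snippet implements a standard complementary filter: gyro rates are integrated for smooth short-term orientation, and the accelerometer's gravity direction slowly corrects the drift. The parameter names and the 0.98 blending factor are illustrative assumptions.

```python
import math

def complementary_filter(pitch, roll, accel, pitch_rate, roll_rate, dt, alpha=0.98):
    """One update step: blend gyro integration (smooth, but drifts) with the
    accelerometer tilt (noisy, but drift-free). Angles in radians.
    accel = (ax, ay, az) in g; pitch_rate/roll_rate in rad/s."""
    ax, ay, az = accel
    # Tilt implied by the measured gravity direction.
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    accel_roll = math.atan2(ay, az)
    # Mostly integrated gyro, corrected slowly by the accelerometer estimate.
    pitch = alpha * (pitch + pitch_rate * dt) + (1.0 - alpha) * accel_pitch
    roll = alpha * (roll + roll_rate * dt) + (1.0 - alpha) * accel_roll
    return pitch, roll

# Example: with a constant accelerometer reading and no rotation, the estimate
# converges toward the accelerometer tilt (about -9.8 degrees of pitch here).
pitch = roll = 0.0
for _ in range(200):
    pitch, roll = complementary_filter(pitch, roll,
                                       accel=(0.17, 0.0, 0.985),
                                       pitch_rate=0.0, roll_rate=0.0, dt=0.01)
print(round(math.degrees(pitch), 1), round(math.degrees(roll), 1))
```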
Ergonometric simulated pose or "hint pose": This is a pose (of the limbs, possibly including the palm) that the user can comfortably take when using the proposed input device/input method; in some cases the pose can be kept for an extended period of time (such as more than several minutes) without fatigue, and such a pose is similar to the pose the user would take when manipulating the tool/device being simulated by the proposed input device/input method, so it can also be considered a "hint". For example: the "Ergonometric simulated pose" (or "hint pose") for simulating the use of a mouse is "palm face down" (perhaps together with the forearm considered to be "level" within some tolerance, such as within plus or minus 30 degrees, where such a tolerance range might be adjustable/configurable), just like the way people would normally use a mouse on a flat surface. The "Ergonometric simulated pose" for simulating a joystick or jet "steering stick" is palm facing sideways, just like the way people would normally use a joystick/steering stick. The "Ergonometric simulated pose" or "hint" for simulating a remote control is palm up, just like the way people would normally use a remote control. The "Ergonometric simulated pose" for simulating swipe/touch on a smart phone/tablet is like the way people would normally use a smart phone/tablet, such as palm forward or down or at some angle, which might be configurable (and possibly with 1 or 2 fingers sticking to the surface). In yet another example, the "Ergonometric simulated pose" ("hint") for aiming a weapon (such as a gun) could be either exactly the way the user would hold such a weapon (for example, for a rifle, using two hands, with the right palm close to sideways and the index finger on the "trigger" if the user is right-handed, and vice versa for left-handed), or a "symbolized" pose that uses the pose of holding one kind of weapon (such as a pistol, which needs just one hand) to represent all weapons; the pose might also be optimized for comfort, such as allowing the elbow to be supported (on the table or on the arm of a chair, etc.). For a related activity such as changing the clip of the weapon, the "Ergonometric simulated pose" could be an action that is similar to or symbolizes that activity, such as rotating the palm from vertical (facing sideways) to facing up, so that it is the same as or close to the actual activity and can be easily understood or memorized, with almost no learning needed in most cases. The above are just some examples; in general, if the tool being simulated has a handle, then the default "Ergonometric simulated pose" is the pose the user would normally take to manipulate such a handle.
"Feelable" , such as mechanical pressure, and temp change and etc. on multiple points on a surface.
Feel-able (feedback) (or physics effect patterns): physical feedback that the user can feel with their skin, such as but not limited to: tactile/pressure patterns, vibration, temperature, wind/airflow, humidity change, etc.
A (dynamic) pressure pattern consists of a time sequence of frames of static pressure patterns "presented" to the user (for example, it can be used to indicate the direction or movement of something on the displaying surface).
"human-focus" centered user input: Traditional User Interface is "device-centered" which means they usually located on the device themselves or have a corresponding remote control for the device, "human-focus" centered user input is "human centered" interface in which sensors/controls, some maybe in wearable form, detecting/sensing user's movement/position (or get user input from wearable controls) and generate commands for device(s) from user input or interpretation based on user's movement (such as but not limited to acceleration or deceleration, rotation/flexing, 3D position change or relative position change of one or more body part segments such as segments of finger, palm, wrist and etc.)
"Fixel": like a pixel in a picture, this representing a point in a "feel-able" pattern, and physics effects such as pressure/vibration, temperature changes, wind/airflow and etc can be actuated (to make it feel-able to user's skin)
Description of Preferred Embodiments
In a first embodiment, a proposed method of providing user INPUT to a machine includes: 1) using sensors to (reliably) detect the user's finger, palm and arm (forearm, upper arm) movement (not static position), (optionally calculating, using forward kinematics, or "inferring" the position information), and using the movement info (acceleration, angular acceleration/turn) to provide "delta" or "incremental" information for events/callbacks to a software system such as an OS, application or middleware.
In a related embodiment, the accumulation or other "stateful" calculation can be provided to the client in a "per session" (or per app) way; such accumulated values (such as a static position calculated from incremental movement events or from acceleration data) can be calibrated by static pose/position/direction/flex sensor info (such as tilt sensors, bend/flex sensors and magnetic sensors/compass), thus providing more accurate/robust real-time position and movement info simultaneously, which can be passed to the client (such as in the form of events/callbacks). The speed/movement/position/rotation information could be, for example (but not limited to), in 3D or 4D vector format, possibly together with time delta/time stamp information for further (digital) signal processing.
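A minimal sketch of these two paragraphs, under assumed names and units: incremental ("delta") movement events are generated from acceleration samples, while the accumulated position is re-anchored whenever an absolute fix from static pose/flex/magnetic sensors is available, limiting drift.

```python
from typing import Iterable, Iterator, Optional, Tuple

Vec3 = Tuple[float, float, float]

def delta_events(samples: Iterable[Tuple[Vec3, float, Optional[Vec3]]]) -> Iterator[dict]:
    """Yield per-sample "delta" events from (acceleration, dt, optional_absolute_fix).

    Velocity/position are accumulated ("stateful", per session) and re-anchored
    whenever a fix from static pose/flex/magnetic sensors is available.
    Units and field names are illustrative assumptions.
    """
    vel = [0.0, 0.0, 0.0]
    pos = [0.0, 0.0, 0.0]
    for accel, dt, fix in samples:
        delta = [0.0, 0.0, 0.0]
        for i in range(3):
            vel[i] += accel[i] * dt
            delta[i] = vel[i] * dt
            pos[i] += delta[i]
        if fix is not None:          # calibration from the static sensors
            pos = list(fix)
            vel = [0.0, 0.0, 0.0]
        yield {"delta": tuple(delta), "position": tuple(pos), "dt": dt}

stream = [((0.0, 1.0, 0.0), 0.01, None)] * 5 + [((0.0, 0.0, 0.0), 0.01, (0.0, 0.0, 0.0))]
for event in delta_events(stream):
    print(event)
```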
In another embodiment, a proposed method of providing user INPUT to a machine includes: using the movement info or the position info (such as pitch, direction) of the limbs/moving body parts, such as the upper arm, elbow, forearm, wrist or palm, that are higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, i.e. how to interpret the movement of the limb/moving body part (such as palm, finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
In a related embodiment according to the above, this includes triggering/switching to a mode of operation, such as but not limited to mouse, joystick, or simulated "gun aiming pose", based on the corresponding Ergonometric simulated pose. For example, a mouse input simulation mode is triggered by the user taking the Ergonometric simulated pose of mouse operation. Optionally, in some cases the "automatic" trigger can be turned off, to allow a client application to "capture" the device and interpret the full information, or when the client needs continuous/uninterrupted tracking of movements that does not allow mode switching.
(Independent) A method of "extracting" input from the user's hand/finger and forearm movement (or interpreting, or recognizing it) by determining which kind of input device the user "hints" at (hint based); for example, the hint is the pose the user usually takes when operating such a device. In an example, palm down with fingers also facing down hints the use of a mouse, and palm sideways with fingers "half closed" hints the use of a joystick/vertical handle, etc.; the corresponding movement of the body parts/limb is then interpreted as simulated device movement (such as using the hand's horizontal movement for mouse movement when the user hints "mouse", or using the hand/wrist rotation/movement information for joystick simulation when the user hints "joystick"). Sometimes this requires limiting the input range to what the simulated device can provide, usually 2D movement for the currently popular GUI devices (mouse, joystick, touch pad/touch screen). In a related apparatus-for-input embodiment: have one or more sensor(s) to determine the higher-level limb pose, such as that of the forearm and palm; have sensors to determine finger movements; and provide interpretation of the finger movement/input device according to the mode determined by the higher-level limb pose data. An (independent) embodiment of an apparatus for UI input comprises:
a wearable device to accommodate the size of a user's hand, having multiple IMU (MEMS) sensors (which may include but are not limited to: inertia/gyroscope sensors, magnetic sensors, pressure/temperature sensors, etc.) located on the joints or movable parts of the human limb/hand/fingers for tracking movements; and using the movement info or the position info (such as pitch, direction) of the limbs/moving body parts, such as the upper arm, elbow, forearm, wrist or palm, that are higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, i.e. how to interpret the movement of the limb/moving body part (such as palm, finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
In a related embodiment, IMU signals from different moving parts of the body/limb (such as those from the fingers and those from the hand) can be used together to "filter" out environmental noise and improve measurement accuracy, and can provide a "difference" signal (by subtracting in the same (Euler) coordinate system at the same scale) so that local "relative" movement (such as a finger relative to the palm, one finger relative to another finger, or one section of a finger relative to other sections, etc.) can be accurately measured. In a related embodiment, other kinds of sensors, such as flex sensors (which change output value according to the degree of curving/flexing), are used to provide additional info for detecting the movement of body parts (such as fingers and palm), and such information can be integrated to improve the accuracy of the judgment from the IMU sensor information.
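A minimal sketch of the "difference signal" idea, assuming both readings have already been rotated into a common frame (rotation handling is deliberately omitted here): subtracting the palm IMU reading from the finger IMU reading cancels common-mode hand motion and much of the shared environmental noise, leaving the finger's local motion relative to the palm.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def difference_signal(finger: Vec3, palm: Vec3) -> Vec3:
    """Relative motion of the finger with respect to the palm, assuming both
    readings are expressed in the same (e.g. palm-fixed) frame and scale."""
    return tuple(f - p for f, p in zip(finger, palm))

# Example: the whole hand accelerates at (0.2, 0, 0); the finger additionally
# has its own local (0.05, 0.3, 0) motion, which the subtraction recovers.
finger_accel = (0.25, 0.30, 0.00)
palm_accel = (0.20, 0.00, 0.00)
print(difference_signal(finger_accel, palm_accel))   # approximately (0.05, 0.3, 0.0)
```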
In a related wearable device embodiment, buttons or touch sensors (such as but not limited to touch pads) can be placed on the side of the index finger for the thumb to operate; these can simulate the function buttons of a game control pad (such as but not limited to the direction (L/R/U/D), A, B, Attack and Select buttons of the corresponding hand), or they can be used for short-cut functions such as menu, hold, capture (inhibit mode change), etc.
A wearable apparatus related to 7: on the fingertips and sections of the fingers and palm there could be pressure sensors to sense the pressure the user actually feels on a surface/object. 12. In a related embodiment, instead of being "built in" to a wearable device such as a glove, those IMU sensors can simply be attached/detached ("patch on/patch off").
13. In one embodiment of an intuitive user interface, a wearable device has an ergonometric shape to accommodate the user's hand (and fingers, for example but not limited to a glove-like form), and might also include parts that are wearable on the limb and torso, and provides mechanical/tactile pressure patterns and/or other kinds of physics feedback, such as temperature change or airflow/wind, to the user, using actuators (or micro actuators) such as but not limited to: pneumatic actuators, solenoids, piezo components (actuators), memory alloy, etc. Such mechanical/tactile pressure patterns/physics feedback serves to prompt the user about certain events, a certain position or range of positions in space, or a certain pose, or to simulate certain physical characteristics/properties of one or more virtual objects that interact with the user, so that the user can have a more tangible/direct way of interacting with the virtual environment/objects/user interface.
14. In a related embodiment, the virtual objects can be simultaneously displayed on a 2D display or a 3D display (such as but not limited to a 3D stereoscopic display, 3D holographic display, etc.).
15. In a related embodiment for an intuitive user interface, a software system, such as but not limited to an operating system, UI widget, graphical system, 3D system, game system, etc., uses (drives the output of) the mechanical/tactile pressure pattern/physics feedback, possibly (but not necessarily) together with visual/audio representations, to prompt the user about certain events or the result/feedback of actions related to the movement of the user's limb(s)/finger(s)/palm/other body parts, such as but not limited to: touching some object (which could be a virtual object defined in the space in which the corresponding moving body part of the user resides) or leaving/detaching from an object/virtual object, pressing/releasing some control (which might also be a UI control, virtual 3D control, etc.), or catching/releasing/dragging and dropping certain objects/virtual objects or their representatives, etc.
In a related embodiment, a wearable apparatus uses tactile/pressure patterns to simulate the touch of an object (at certain places or for a certain gesture), possibly in accordance or in sync with the object presented to the user visually, to prompt the user that a certain finger has reached a certain position in space.
A related method for intuitive input is to provide a "snap in" effect when the user's finger/palm/arm is used for pointing at a specific location: when the indicating finger(s), hand or arm comes close to a special location (such as a location of (geometric) significance like the center, a middle point, alignment with some edge, or on an extension at a certain angle to a line or surface, etc.), i.e. when the user is within such a range/nearby area, the user feels a certain pressure pattern prompting that a "snapping point" is close (possibly also together with a visual clue). The result is that the user does not need to rely heavily on vision (in some cases the point is difficult to see or is hidden by other objects in 3D), thus greatly improving productivity, as the user does not need to rotate the model to find a viewable "angle" to perform such an activity. Some example embodiments/scenarios: with 6-DOF or 9-DOF IMUs on the palm and forearm, an IMU on at least one section of the thumb and at least one section of the index finger, flex sensors on all 5 fingers, and an IMU on the upper arm, such an array of IMUs and flex sensors can be used to track movements of the finger/palm/arm and establish a "skeleton" model, similar to that of the Leap Motion and Kinect sensors, tracking each individual moving part. Since acceleration and rotation info are measured by more than one IMU, it is convenient to use related mathematical/signal-processing methods to filter out the noise and provide more accurate output. The signals might be sent to a distributed or central computer/microcontroller for filtering and integration/signal processing.
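A hypothetical sketch of the "snap in" effect described above: when the pointing fingertip comes within a configurable radius of a geometrically significant point, a tactile prompt is triggered whose strength grows as the fingertip gets closer. The `prompt` callback stands in for the fixel/pattern driver; all names and numbers are assumptions.

```python
import math
from typing import Callable, Iterable, Optional, Tuple

Point = Tuple[float, float, float]

def check_snap(fingertip: Point,
               snap_points: Iterable[Point],
               radius: float,
               prompt: Callable[[Point, float], None]) -> Optional[Point]:
    """If the fingertip is within `radius` of any snap point (center, midpoint,
    edge alignment, ...), trigger a tactile prompt and return that point."""
    for p in snap_points:
        d = math.dist(fingertip, p)
        if d <= radius:
            prompt(p, 1.0 - d / radius)   # 0 at the rim, 1 exactly on the point
            return p
    return None

snap_points = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0)]   # e.g. object center, edge midpoint
hit = check_snap((0.48, 0.52, 0.01), snap_points, radius=0.05,
                 prompt=lambda p, s: print(f"snap prompt near {p}, strength {s:.2f}"))
print("snapped to:", hit)
```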
Some typical scenarios for the highly intuitive HCI solution could be:
Using multi-mode, context-related gesture input/recognition, it is possible to generate appropriate (corresponding/"smart") UI events/commands:
When the user's hands are down, as if manipulating something (a mouse) on a table/flat surface, i.e. the user's palm is facing down and the user's forearm is almost parallel (within +/- 30 deg) to the surface: in this mode we decide this is mouse mode, and we only report 2D messages (X inc, Y inc) to the computer, unless this is overridden by some "control/function/command or process that PREVENTS mode switch".
A button at the shoulder that must be touched by the same-side hand will automatically align (calibrate) the sensors (magnetic and gyro).
In another scenario, the user's elbow is supported and the forearm is relatively vertical (+45 deg, -20 deg) to the table/chair-arm surface, and the palm can face either down (within +/- 40 deg) as the "touch" interface, or vertical (+/- 15 deg) as the "gun/trigger" interface. If it faces up, it is interpreted as "remote" mode.
In gun/trigger mode, if the index finger is straight, then move (turn) toward the direction it is pointing (moving, using incremental mode/gyro rather than absolute heading, because of the magnetic sensor issue). If the index finger is flexed/curved, then it is shooting mode.
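A minimal sketch of this gun/trigger scenario, under assumed names and thresholds: a straight index finger yields incremental turn commands from the gyro yaw rate (no absolute magnetic heading is used), while a flexed index finger yields a fire command.

```python
from typing import List

def gun_mode_commands(index_flex: float,
                      yaw_rate_dps: float,
                      flex_fire_threshold: float = 0.5,
                      turn_deadzone_dps: float = 2.0) -> List[str]:
    """Interpret one sample while in "gun/trigger" mode.

    index_flex: 0.0 = straight, 1.0 = fully curled (from a flex sensor).
    yaw_rate_dps: incremental horizontal turn rate from the gyro (deg/s).
    """
    if index_flex >= flex_fire_threshold:
        return ["FIRE"]                       # curved finger: shooting mode
    if abs(yaw_rate_dps) > turn_deadzone_dps:
        return [f"TURN {yaw_rate_dps:+.1f}"]  # straight finger: turn toward pointing
    return []

print(gun_mode_commands(index_flex=0.1, yaw_rate_dps=15.0))  # ['TURN +15.0']
print(gun_mode_commands(index_flex=0.8, yaw_rate_dps=15.0))  # ['FIRE']
```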
The user's forearm acts as a "joystick".
In a related embodiment to all of the above, the software system, such as but not limited to an operating system, UI widget, graphical system, 3D system, game system, etc., that uses (drives the output of) the mechanical/tactile pressure pattern/physics feedback also uses the movement info or the position info (such as pitch, direction) of the limbs/moving body parts, such as the upper arm, elbow, forearm, wrist or palm, that are higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, i.e. how to interpret the movement of the limb/moving body part (such as palm, finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
In a related embodiment to all of the above, the software system, such as but not limited to an operating system, UI widget, graphical system, 3D system, game system, etc., that uses (drives the output of) the mechanical/tactile pressure pattern/physics feedback also includes triggering/switching to a mode of operation, such as but not limited to mouse, joystick, or simulated "gun aiming pose", based on the corresponding Ergonometric simulated pose. For example, a mouse input simulation mode is triggered by the user taking the Ergonometric simulated pose of mouse operation.
In a method for human-device/machine interaction, it is desirable that a human-centric approach rather than a device-centric approach be used, by providing wearable(s) that allow intuitive control (such as context-aware gesture inputs) and that are able to communicate with/control the device(s), possibly multiple devices, with UI and feedback provided to the user locally (without the need to go to the devices).
In a related embodiment, the wearable might have computing/signal-processing capabilities (such as but not limited to gesture-related, 3D-modeling-related, collision-detection-related, etc.) besides the communication/control ability, and could also have other sensors, such as temperature, humidity, electric/magnetic field strength, illumination, etc., on board.
A related embodiment also includes means to integrate information from different devices (and possibly also information from the wearable(s), such as temperature, humidity, electric/magnetic field strength, illumination, etc.) into one or more "virtual dashboard"(s), which could be displayed on wearable displays (or an external display currently visible to the user), and to allow controlling different devices (such as independently, or collectively with some tool/integrator/script).
In a related embodiment, the feedback for the control could be a tactile/pressure pattern or another physics effect (such as but not limited to temperature, airflow/wind, humidity changes, etc.) provided by the tactile displays and/or lightweight effectors located on the wearable (such as tactile displays on the tip and other sections of the finger/palm).
A wearable tactile display with an ergonometric shape capable of accommodating the user's hand (and possibly also the limb and torso, while remaining wearable) can provide "feel-able" physics effect patterns (with more than 1 controllable point or pixel in the pattern) to the user; it surrounds/wraps the hand (such as but not limited to like a glove) and provides tactile and optionally temperature, airflow, etc. effects to individual fingers, and could have more than one "Fixel" per section (of the finger or other body part). Each Fixel is controlled by an interface control system that generates output based on the user's gesture (such as hand, finger, palm, wrist, arm) movement, and the Fixels can be actuated (as feedback) to enhance the user experience. They might actuate when certain criteria are met or events happen, such as when a collision with an object is detected, as determined by the control system.
In a related embodiment, there could also be accessories on the user's arm (such as the forearm or upper arm) or other parts of the body to provide sensing (such as position, movement, etc.), energy/working-medium storage (battery, compressed air/gas, etc.), and other functionalities (such as but not limited to processing, communication, image/pattern generation, etc.) that can be easily attached to/detached from the glove-like wearable.
(—Tactile display—)
In a first embodiment, one or more "masks" with the "ergonometric" shape, such as a concave curved shape with a size similar to the "fingerprint" side of the finger (which fits a finger), with a plurality of Fixels located on at least one such surface that accommodates a fingertip (the "fingerprint" side), and at least one fixel on other sections of the hand.
In a related embodiment, a fixel is a pneumatic actuator made of flexible materials that can change physical shape and exert pressure on the user's skin; such a fixel may have one input and from zero to 2 outputs, with one or more flexible thin pneumatic hoses attached to/coupled with it (or it could be printed/flat air-hose circuits/slots made directly from the glove material); the hose is linked to the output of one or more air switch(es)/multiplexer(s)/solenoids which output multiple states and which control the inflation/deflation/pressure of the cell/fixel.
The mask may be made of flexible materials, such as but not limited to plastic or rubber, with a certain rigidity so that one feels force only at the fixel where pressure is applied and no feel-able force elsewhere, maintaining a good signal-to-noise ratio (S/N ratio).
In a related embodiment, there might be no mask but just multiple end effectors arranged on the surface of the glove part facing the fingertip.
The effector(s) providing physics feedback on the wearable (as mentioned in the above embodiments) to the user can also be other micro actuators, such as ones specialized in vibration (inducer, piezo), micro solenoids, memory alloy, or a combination of these.
In a related embodiment according to the above embodiments, one or more (a plurality of) lightweight servo-operated air switches are placed on the glove, preferably on the back of the hand, possibly together with lightweight solenoids, to control the distribution of, or direct, the compressed air to the end effectors. The switches/solenoids are connected to one or more air sources (with different pressures).
In a related embodiment, the air/gas source is a compact CO2 tank/bottle (similar to those used in paintball) with one or more pressure-control valve(s)/regulator(s) to provide air/gas, possibly at different pressure levels. Such levels might be adjustable (dynamically).
Location-wise, the tactile/physics feedback could be provided on the front and back of all fingers (each section has at least one effector, although several effectors might link to the same output to save switches), on the thumb and palm, and also on the forearm and upper arm.
So basically, the "pneumatic type" effector/cell or fixel, can have 2 types— one is
"leaking" type which could gradually lose the pressure of inside the chamber of the effector/cell, and might not need a "pressure release" channel/gateway, the other type is "non-leaking" type which allow outside control (one or more pa sage/gateway such as but not limited to valve/solinodis/s witches) to adjust the pressure. Controller for these effectors (such as via the solenoids and switch) can use different methods such as pressure division/modulation (might use 2 or more air sources with different pressure) or time-division modulation (one air source modulated by time) or adjustable pressure regulator, to maintain desired pressure of a given fixel/cell/effector so that a pressure pattern can be formed, maybe dynamically
In a related embodiment, the control/switch system might use micro solenoids located close to the effector/cell/fixel, such as but not limited to on the back of the hand on the wearable, on the forearm accessories mentioned above, etc.
In a related embodiment to the above, the control/switch system might include servo-based, rotational pneumatic switches for controlling the inflation/deflation of fixel(s)/cell(s)/effector(s) by controlling the servo turning angle.
In another related embodiment, the switch/solenoid bank is on board (palm/back) or on the glove, and can be connected to a pneumatic source on the user (such as the forearm) via a (single) hose which has a quick connect/detach mechanism.
(—Multi-effects—)
In a related embodiment, providing physics (effect) feedback might include temperature change provided by heating/cooling devices, such as but not limited to: semiconductor heat pumps/solid-state/Peltier thermal component(s) on the fixel, inflating the fixels with working mediums at different temperatures (such as air or liquid), etc.
In a related embodiment, physics (effect) feedback could include vibration provided by piezo actuators/components, inducers and other possible low-profile vibration generators.
In a related embodiment, providing physics (effect) feedback could include providing an airflow/wind effect by using nozzles or "selectively/controlled bleeding" from the pneumatic effectors or hoses.
In a related embodiment, physics (effect) feedback could also include force generated by actuators/servos with an exoskeleton attached to the wearable device.
In a related embodiment, these different physics effectors (such as thermal, vibration, airflow) might be on the same "fixel", for example at different layers of the wearable at the same location, or very close by (and some of them could be at the same layer).
(—Simulating "touch" effects with dynamic patterns—)
As in Fig 1A, different patterns (of physics effects such as but not limited to tactile/pressure) signify/hint at or simulate different spatial relationships with the object/key/button, from no touch to fully pressed: none for not touched; one center point for barely touched but not "pressed"; up to all 5 points actuated to provide feedback for the position/status in which the button/control is fully pressed. (There could also be a "half pressed" position in the middle, or a situation where the finger is in the "seam" between 2 keys and only the 2 sides of the finger are pressed (and the corresponding fixels actuated), but not the center.) Such patterns might be triggered by corresponding events.
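A minimal illustrative mapping (assumed names and values, not the disclosed implementation) from a virtual button's touch/press state to a 5-fixel fingertip pattern, including the "seam between two keys" case described above:

```python
# Five fixels per fingertip: LEFT, TOP, CENTER, BOTTOM, RIGHT.
PATTERNS = {
    "NOT_TOUCHED":    [0.0, 0.0, 0.0, 0.0, 0.0],
    "BARELY_TOUCHED": [0.0, 0.0, 0.6, 0.0, 0.0],   # center point only
    "HALF_PRESSED":   [0.4, 0.4, 0.8, 0.4, 0.4],
    "FULLY_PRESSED":  [1.0, 1.0, 1.0, 1.0, 1.0],   # all 5 fixels actuated
    "ON_SEAM":        [0.8, 0.0, 0.0, 0.0, 0.8],   # two sides of the finger, no center
}

def pattern_for(depth: float, on_seam: bool = False):
    """depth: normalized press depth, 0 = no contact, 1 = fully pressed."""
    if on_seam and depth > 0.0:
        return PATTERNS["ON_SEAM"]
    if depth <= 0.0:
        return PATTERNS["NOT_TOUCHED"]
    if depth < 0.2:
        return PATTERNS["BARELY_TOUCHED"]
    if depth < 0.8:
        return PATTERNS["HALF_PRESSED"]
    return PATTERNS["FULLY_PRESSED"]

for d in (0.0, 0.1, 0.5, 1.0):
    print(d, pattern_for(d))
```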
In a wearable embodiment, there could be pressure/force sensor(s) on parts of the hand, such as the fingers, palm, etc., to sense the force/pressure that part of the hand exerts on/receives from the external object interacting with the hand. Optionally, when the wearable is equipped with a pneumatic "pressure" pattern display/fixel array, each cell/effector/fixel could have a pressure sensor to sense the current pressure of that fixel.
In a related embodiment to the above, the pressure sensor, when doing "time division" style "inflation", can determine whether that specific cell is charged or not and how much pressure is (still) needed, or provide feedback to the control system to close/shut down the valve when the designated pressure is reached.
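A hypothetical closed-loop sketch of that valve shutoff behaviour: during a fixel's inflation slot, the valve stays open only until the sensed pressure reaches the designated target (or a safety timeout expires). `read_pressure` and `set_valve` stand in for the real sensor and driver interfaces.

```python
def inflate_to_target(read_pressure, set_valve, target, dt=0.005, timeout=1.0):
    """Keep the fixel's valve open until the sensed pressure reaches the target
    (or the timeout expires), then shut it."""
    elapsed = 0.0
    set_valve(True)
    try:
        while read_pressure() < target and elapsed < timeout:
            elapsed += dt            # a real loop would sleep/poll here
    finally:
        set_valve(False)             # always close the valve
    return read_pressure()

# Tiny simulated fixel, for demonstration only.
state = {"pressure": 0.0, "valve": False}
def read_pressure():
    if state["valve"]:
        state["pressure"] += 0.02    # inflates while the valve is open
    return state["pressure"]
def set_valve(is_open):
    state["valve"] = is_open

print(round(inflate_to_target(read_pressure, set_valve, target=0.5), 2))
```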
An (independent) embodiment of an apparatus for human-centric UI input comprises: a wearable device to accommodate the size of a user's hand, having multiple IMU (MEMS) sensors (which may include but are not limited to: inertia/gyroscope sensors, magnetic sensors, pressure/temperature sensors, etc.) located on the joints or movable parts of the human limb/hand/fingers for tracking movements;
In a related embodiment, IMU signals from different moving parts of the body/limb (such as those from the fingers and those from the hand) can be used together to "filter" out environmental noise and improve measurement accuracy, and can provide a "difference" signal (by subtracting in the same Euler coordinate system at the same scale) so that local "relative" movement (such as a finger relative to the palm, one finger relative to another finger, or one section of a finger relative to other sections, etc.) can be accurately measured. Such info can be used to drive events/messages about the local movement/location of the fingers (regardless of the "global" position relative to the ground).
In a related embodiment, this includes tracing the fingertip "end effect", which might be given a higher priority/weight than the information from tracing the "joint relative movement" when determining the "signature" move (as different people might use different joints/muscles to perform the same "end effect"); optionally, 2 "score systems" (probability scores/parameters), one for each, can be calculated and used together to provide better gesture recognition, and related factors (such as which score system has the higher weight) can be changed/configured, or they may be "improved" by means such as machine learning/training of the automatic system.
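A minimal sketch of the two-score idea, assuming per-gesture probability scores already exist: the end-effect (fingertip trajectory) score is blended with the joint-relative-movement score using a configurable weight (which could also be tuned by machine learning), and the highest blended score wins.

```python
def combined_gesture_score(end_effect_scores: dict,
                           joint_scores: dict,
                           end_effect_weight: float = 0.7) -> str:
    """Blend two per-gesture probability scores; the end-effect score gets the
    higher weight by default, since different people may use different
    joints/muscles to produce the same end effect."""
    w = end_effect_weight
    gestures = set(end_effect_scores) | set(joint_scores)
    blended = {g: w * end_effect_scores.get(g, 0.0) + (1.0 - w) * joint_scores.get(g, 0.0)
               for g in gestures}
    return max(blended, key=blended.get)

end_effect = {"swipe_left": 0.80, "pinch": 0.15}
joints = {"swipe_left": 0.40, "pinch": 0.55}
print(combined_gesture_score(end_effect, joints))   # favors "swipe_left"
```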
In an embodiment of a wearable human-centric UI, gesture recognition (possibly context aware) is used, and a "local" wearable display, such as the "hand-back" display shown in Fig. 5 or an HMD/glasses display/augmented-reality display, can be used to display the UI, current operation mode, status and other related (feedback) information about the current user action/selection; such feedback can also be provided audibly via an earphone/speaker. In an embodiment that could relate to all of the above, the glove-like wearable could be multi-layer, with an exchangeable, discardable or washable inner surface.
The wearable system might be extensible in software and could have (managed) environment(s) for running 3rd-party apps or plugins, for purposes such as but not limited to: gesture recognition, collision detection, UI control, tactile output pattern control, messaging, communication, virtualization, etc.
For software interfaces, such as sensor output, the system can simulate the "Leap Motion" interface, as there is no special interest in hiding it: whatever Leap Motion has disclosed or publicized can be done the same way in the same area. Output/feedback is a completely different realm: only paid vendors can develop on specially modified "sandboxes", and distribution must use the highest-security digital signature via the online app store and be approved. It must be an "end product app" which cannot re-expose the interface to 3rd parties. The wearable should be described as a whole, such as a glove, though it might not be a full glove but just the parts that move TOGETHER with the finger and fingertip.
In an embodiment, a gesture command based (or relying) on a single gesture of the user might be "cross-device", or might cause a switch of the device the wearable system is "currently engaged to".
An example embodiment according to the above is as follows: selection/interaction by "pointing to". The system interprets the user's "point" gesture with criteria such as: at least the index finger is straight and parallel to, or within 30 deg (or a configurable value no more than 40 degrees) of, the palm surface; and the direction of the finger is within +/- 20 degrees (or a configurable value no more than 30 degrees) of the direction from that fingertip to the device. When the criteria are satisfied, that device is considered "selected" for interaction (or engaged with). This can be achieved by many methods, such as but not limited to:
1. Pre-register device locations (such as in a 3D map that can be loaded or queried by the "task/device management" module), and get the location/direction information of the finger to compare with the pointing direction and the device-to-pointing-hand position; this might happen only when the user makes a movement, or when a special gesture of the user is detected (such as when it is detected that the user is pointing at something).
2. Or detect dynamically which device is "in range" of the user's pointing; this might happen only when the user makes a movement, or when a special gesture of the user is detected (such as when it is detected that the user is pointing at something). This can be done with a range of implementations, such as but not limited to: having a "narrow beam" detector/sensor on the index finger (such as an optical/infrared camera, focused infrared/RF transmitter, highly directional ultrasound or RF receiver/transceiver, etc.), possibly also with corresponding locating mechanisms deployed on the device (such as a marker/pattern/beacon/emitter for cameras or optical/infrared/ultrasound/RF receivers/transceivers on the finger, or a receiver/transceiver for the emitter on the finger).
3. The detection/elimination method can vary, and usually no more than 1 beacon/marker/speaker/microphone is needed per device. In an embodiment according to the above, device selection might use the index and middle fingers together.
A related embodiment includes using the index finger as the "pointer" (indicating the device) and interpreting "left click"/"left (button) down"-like events (similar to the meaning and effects of a mouse left click/mouse left button down) as (detection of) the flexing of the thumb or middle finger over a certain (configurable) threshold, so that the system knows the action is intentional; such a "left click" can be assigned meanings such as selection, or other meanings like the corresponding mouse events. "Right click"-like events (similar to the meaning and effects of a mouse right click/mouse right button down) are interpreted as (detection of) the flexing of the pinky finger over a certain (configurable) threshold (so that the system can determine the action is intentional), and the right click can be assigned a meaning for (or trigger) a context menu/commands/status information.
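A hypothetical sketch combining the "point to select" criteria above with the thumb/pinky flex thresholds interpreted as left/right click. The vector math is simplified, device positions are assumed to be pre-registered, and all tolerances are the configurable values mentioned in the text.

```python
import math
from typing import Optional, Sequence, Tuple

Vec3 = Tuple[float, float, float]

def angle_deg(a: Vec3, b: Vec3) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

def pointed_device(finger_dir: Vec3, finger_palm_angle: float,
                   fingertip: Vec3, devices: Sequence[Tuple[str, Vec3]],
                   palm_tolerance=30.0, aim_tolerance=20.0) -> Optional[str]:
    """Return the name of the device the user is pointing at, if any."""
    if finger_palm_angle > palm_tolerance:       # finger not roughly in palm plane
        return None
    for name, pos in devices:
        to_device = tuple(p - f for p, f in zip(pos, fingertip))
        if angle_deg(finger_dir, to_device) <= aim_tolerance:
            return name                          # criteria satisfied: "engaged"
    return None

def click_events(thumb_flex: float, pinky_flex: float, threshold=0.5) -> list:
    events = []
    if thumb_flex > threshold:
        events.append("LEFT_CLICK")              # e.g. selection
    if pinky_flex > threshold:
        events.append("RIGHT_CLICK")             # e.g. context menu / status
    return events

devices = [("TV", (2.0, 0.5, 0.0)), ("LAMP", (-1.0, 0.2, 1.0))]
print(pointed_device((1.0, 0.25, 0.0), 10.0, (0.0, 0.0, 0.0), devices))  # TV
print(click_events(thumb_flex=0.8, pinky_flex=0.1))                      # ['LEFT_CLICK']
```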
The wearable might allow extension by providing ports/interfaces (such as but not limited to I2C, SPI, USB, Bluetooth, etc., and hardware ports/interfaces like but not limited to those depicted in Fig. 6) and allow components to be attached/detached, as in Fig. 6, in which a finger part of a glove (wearable) can be exchanged for another "finger part" with a different function using the same port/interface, or attached with other components to enhance/extend the functionality of the wearable. For example, as in Fig. 6B, a directed "narrow beam" detector component (such as but not limited to a camera, RF antenna or IR beacon) can be attached to the index finger to perform "point and select" of devices; such a sensor provides or enhances the reliability of device selection/de-selection/communication.
In an embodiment, the wearable works together with a virtual "dashboard" integrating multiple devices (such as via an IoT network), running on a computing system such as on board the wearable or on a server computer connected to the wearable, that can generate the synthesized/integrated multi-device information for display (GUI), possibly shown on a wearable screen. There might be an option for the user to pick (or "zoom in" on) a specific device via this GUI/integrated dashboard, such as but not limited to selecting an icon on the "desktop" or "tree" in the dashboard/GUI.
In an embodiment, the human-centric wearable could provide all the key features mentioned above, such as physics feedback (as in the related embodiments), the ability to handle multiple devices (such as by automatic context switching and/or a visual dashboard), and an easy-to-use (intuitive), all-in-one and mobile platform.
In an embodiment, a human-centered cross-platform/device user interface and management system comprises: communication channel(s) with devices (such as but not limited to smart appliances, IoT (internet of things) devices, TVs/set-top boxes/AV systems, air conditioning/temperature controls, light/curtain controls, phones/communication devices, PCs/game consoles, etc.) which the user can interact with through this system, such as but not limited to via infrared, ultrasound, RF, and networks (wired or wireless);
A module for processing "human-focus"-centered user input;
A module/means (for task/device management) to determine which device is the one the user wants to interact with/control ("engage with"), based on "human-focus"-centered user input such as controls from the wearable, or (the interpretation of) the user's hand/arm movement/gesture; and to send commands based on user input, such as controls from the wearable or (the interpretation of) the user's hand/arm movement/gesture, to the device via said communication channel.
In a mobile/wearable cross-device user interface and management system embodiment according to the above, the "task/device management" module may provide device status feedback to the user (for example, but not limited to, visually, in audio, tactile, other physics "feel-able" feedback, etc.).
In a system according to above, the "user input" used to determine which device is the one user wants to interact/control is based on a gesture command which have a begin and end based on user's movement, such as but not limited to acceleration or deceleration, rotation/flexing of one or more body part segments such as segments of finger, palm, wrist and etc.
In a system according to the above, the gesture command could be based on interpreting one or more gesture(s) of the user.
In an embodiment according to the above, a gesture command based (or relying) on a single gesture of the user might be "cross-device", or might cause a switch of the device the wearable system is currently "engaged to".
An example embodiment according to the above is as follows: selection/interaction by "pointing to". The system interprets the user's "point" gesture with criteria such as: at least the index finger is straight and parallel to, or within 30 deg (or a configurable value no more than 40 degrees) of, the palm surface; and the direction of the finger is within +/- 20 degrees (or a configurable value no more than 30 degrees) of the direction from that fingertip to the device. When the criteria are satisfied, that device is considered "selected" for interaction (or engaged with). This can be achieved by many methods, such as but not limited to:
  • Pre-register device locations (such as in a 3D map that can be loaded or queried by the "task/device management" module), and get the location/direction information of the finger to compare with the pointing direction and the device-to-pointing-hand position; this might happen only when the user makes a movement, or when a special gesture of the user is detected (such as when it is detected that the user is pointing at something).
  • Or detect dynamically which device is "in range" of the user's pointing; this might happen only when the user makes a movement, or when a special gesture of the user is detected (such as when it is detected that the user is pointing at something). This can be done with a range of implementations, such as but not limited to: having a "narrow beam" detector/sensor on the index finger (such as an optical/infrared camera, focused infrared/RF transmitter, highly directional ultrasound or RF receiver/transceiver, etc.), possibly also with corresponding locating mechanisms deployed on the device (such as a marker/pattern/beacon/emitter for cameras or optical/infrared/ultrasound/RF receivers/transceivers on the finger, or a receiver/transceiver for the emitter on the finger).
  • The detection/elimination method can vary, and usually no more than 1 beacon/marker/speaker/microphone is needed per device.
In one embodiment, it is desirable that UI events (might) only be fed to the "current" app/device; which one is the "current" might be related to the "user's focus/intention", which is related to the gesture, and that is also the "context" for interpretation. Each "device" might have a receiver for receiving commands (for example wireless or infrared), and the commands/events are sent to a device only when that device/"context" is the "current" one. The system may do all the translation of commands/interpretation on board with one or more processor(s) of the system, or, in some simple situations (like a TV, microwave, or temperature control), handle all the interpretation on board and output the "command" only. The system might have an output module that simulates commands/events for the corresponding devices it controls (which might work in a way such as, but not limited to, the way a universal remote simulates other remote controls).
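A hypothetical dispatcher sketch of that routing rule: interpreted events are delivered only to the currently "engaged" device/context, after being translated into that device's own command vocabulary; the channel send functions (IR, RF, network, ...) are passed in as callables. All names are illustrative assumptions.

```python
from typing import Callable, Dict

class DeviceRouter:
    """Route interpreted UI events only to the currently engaged device,
    translating them into that device's own command vocabulary first."""

    def __init__(self):
        self.current: str = ""
        self.devices: Dict[str, Dict] = {}

    def register(self, name: str, command_map: Dict[str, str],
                 send: Callable[[str], None]) -> None:
        self.devices[name] = {"map": command_map, "send": send}

    def engage(self, name: str) -> None:
        if name in self.devices:
            self.current = name      # e.g. selected by pointing or by hint pose

    def dispatch(self, event: str) -> None:
        device = self.devices.get(self.current)
        if device and event in device["map"]:
            device["send"](device["map"][event])   # only the current device gets it

router = DeviceRouter()
router.register("TV", {"WAVE_UP": "VOLUME_UP", "KILL": "MUTE"},
                send=lambda cmd: print("IR ->", cmd))
router.register("LIGHT", {"WAVE_UP": "BRIGHTER"},
                send=lambda cmd: print("IoT ->", cmd))
router.engage("TV")
router.dispatch("WAVE_UP")          # IR -> VOLUME_UP
router.engage("LIGHT")
router.dispatch("WAVE_UP")          # IoT -> BRIGHTER
```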
In some embodiments, the selection of devices is by hint/emulation instead of pointing: in a "multi-device" operating UI, a device can be selected, and the mode corresponding to the current device being controlled might be entered automatically, for example by "hint", i.e. by "extracting" input from the user's hand/finger and forearm movement (or interpreting, or recognizing it) by determining which kind of input device the user "hints" at (hint based). For example, the hint is the pose the user usually takes when operating such a device: palm down with fingers also facing down hints the use of a mouse, and palm sideways with fingers "half closed" hints the use of a joystick/vertical handle, etc.; the corresponding movement of the body parts/limb is then interpreted as simulated device movement (such as using the hand's horizontal movement for mouse movement when the user "hints" "mouse", or using the hand/wrist rotation/movement information for joystick simulation when the user hints "joystick"). Sometimes this requires limiting the input range to what the simulated device can provide, usually 2D movement for the currently popular GUI devices (mouse, joystick, touch pad/touch screen).
There are several methods: the user can be allowed to "pick up" a controller or put down/release a control, just as they would normally do with a remote control or mouse. An example "release control" may be detected when the user's palm is down with all fingers spread straight. In a related embodiment of the human-centered cross-platform/device user interface and management system, such a system supports a wearable display (for information such as the current device/mode or device information/feedback), for example but not limited to on the back of the user's hand, on the wrist like a watch, or on the forearm, so that the user can know the current mode or current "context" and current status.
In a related embodiment of the human-centered cross-platform/device user interface and management system, such a system supports a wearable "feel-able" feedback mechanism for providing the user with physics-effect feedback (such as but not limited to tactile patterns) about the operation of the device.
Some typical scenarios of controlling devices (related to cross-device hyper UI)
TV (with GUI) (set-top box, video recording/playback)
A scenario of interacting with a TV: the TV/set-top box can receive commands via infrared or IoT; when the device is selected, the wearable interprets gestures into commands, wherein waving up (with the palm facing upwards) means sound volume up (whole-palm movement), waving down means volume down, a "remote" hint with plus and minus touches means changing the channel, or the user is allowed to scroll up/down/left/right with a finger. The user can use the on-screen keypad to enter a channel number or do a search, and a "kill" gesture means mute. Recording: left/right means fast forward or rewind; stop is palm facing forward.
For an audio system: swipe up and down can be interpreted as album selection, swipe left and right means sound track selection, waving up (with the palm facing upwards) means sound volume up (whole-palm movement), waving down means volume down, etc.
Home environment control (no GUI), e.g. lights, blinds/curtains, temperature/AC: waving up or swiping up (depending on configuration) might mean turning the control up, waving down or swiping down (depending on configuration) might mean turning the control down, etc.
In one embodiment, A feel-able user interface, comprised of:
sensors detecting user gesture information, i.e., the movement of at least one finger of the user's hand together with movement information of at least one other part of the hand (such as another finger, the palm, or the wrist);
a wearable that can accommodate at least one finger of the user's hand, has an articulated part on the back of the user's hand (and optionally a detectable part on the user's forearm), and can provide physics ("feel-able") feedback through the wearable.
In a related embodiment according to the above, user gesture input data such as but not limited to finger and palm movements (such as acceleration, rotation, and direction via magnetic sensing) are detected with the sensors on the wearable.
In a related embodiment, the physics ("feel-able") feedback includes (dynamic) patterns "presented" to the user at multiple points (or "fixels") on at least one fingertip.
In a related interface method according to the above, user gesture input data such as but not limited to finger and palm movements (such as acceleration, rotation, and direction via magnetic sensing) are detected with markers on the wearable (such as but not limited to passive and/or active infrared beacons/markers, RF, magnetic, or ultrasound), optionally together with sensors on the wearable (such as but not limited to MEMS IMU sensors, flex/bend sensors, etc.), to provide "unified" finger/hand movement/position model data (such as but not limited to "skeleton/bone" data).
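One simple way such marker and IMU data could be combined into a single position estimate is a complementary filter; the sketch below is an assumption for illustration only (the patent does not specify a fusion algorithm), with invented parameter names.

```python
# Minimal fusion sketch: blend an absolute but occlusion-prone marker position
# with a fast but drifting IMU-integrated position into one "unified" fingertip
# position for the skeleton/bone model.

def fuse_position(marker_pos, imu_pos, marker_visible, alpha=0.98):
    """Complementary filter: IMU dominates short-term motion, markers correct
    long-term drift; fall back to IMU only when the marker is occluded."""
    if not marker_visible:
        return imu_pos
    return tuple(alpha * i + (1 - alpha) * m for i, m in zip(imu_pos, marker_pos))

print(fuse_position((0.10, 0.02, 0.30), (0.12, 0.02, 0.31), marker_visible=True))
```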
Some Example scenarios
In a related embodiment, the UI system employs "hint-based" ergonomic pose "detection" to determine the "mode" of input and switch seamlessly (automatic switching); the hint-based approach is described in detail above.
In a related (interface) embodiment, there can be multiple "modes of operation" for the user's gestures, meaning the user's gesture may be interpreted differently in different modes. A mode can be entered, exited, changed, or switched; however, at any given time a single hand can be in at most one mode/state. The mode can be entered, exited, changed, or switched in ways such as (but not limited to) a manual switch (such as a hint or pressing a button) or an automatic switch, such as when selecting an object or "environment" that has "context" implications (in which case the system may switch to the only meaningful way of interaction, much like an OS input method (IME) window or dialog).
Some example scenarios include:
The user picks a mode with a gesture, then performs whatever it requires; the user may exit the mode by a hint, by an explicit "release/escape" gesture, or by using a button.
A general "release control" gesture is palm down with all fingers spread straight.
The "current" control may also be context sensitive; for example, when the user selects a text box that only allows numbers, a specific control such as a "num pad" (instead of a full keyboard or mouse) may "pop up" or be made current.
The system has an architecture that allows plug-in/different interpreters (some of them possibly context sensitive) to be used, and has mechanisms to switch those interpreters.
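A minimal sketch of such a plug-in interpreter architecture is given below; the class and context names are assumptions for illustration, not a definitive implementation.

```python
# Illustrative sketch: interpreters register per context, and the system
# switches the active interpreter when the context/mode changes.

class InterpreterRegistry:
    def __init__(self):
        self._interpreters = {}
        self.active = None

    def register(self, context: str, interpreter):
        """Install a (possibly third-party) interpreter for a context."""
        self._interpreters[context] = interpreter

    def switch(self, context: str):
        """Make the interpreter for `context` current (e.g. a numbers-only text
        box selects a 'num_pad' interpreter instead of the full keyboard)."""
        self.active = self._interpreters[context]

    def handle(self, gesture_event):
        return self.active(gesture_event) if self.active else None

registry = InterpreterRegistry()
registry.register("num_pad", lambda ev: f"digit:{ev}")
registry.register("full_keyboard", lambda ev: f"key:{ev}")

registry.switch("num_pad")          # context: a numbers-only text box is selected
print(registry.handle("5"))         # -> digit:5
```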
In a related embodiment, the user interface system has at least one mode in which something (a virtual object, system object, or control) is rendered as selected, and at least one mode that allows the selected item to be manipulated, in which the mode/interpretation of the gesture of the same body part may change according to context/mode.
In some embodiments it is desirable to simulate two different kinds of input devices for the left/right hands at the same time, for example the left hand as a keyboard and the right hand as a mouse (like the usual configuration for PC games).
In some embodiments it is desirable that the input is tactile/relative-position/gesture oriented rather than absolute-location or visually oriented: a UI system using gestures and the relative position/movement of segments of body parts (such as finger segments), so that the control is "tolerant" of slight movement of the whole hand or environmental vibration, filters out/reduces sensor noise, and focuses on the "relative" and "intentional" movement of the fingers. To the user it can feel as if the key/dialog box moves together with the hand (which can make it more stable and easier to control).
In an embodiment, a keyboard (computer or musical) or a musical instrument can be simulated by detecting the "signature finger movement for a given key or note" (or "typical hand/finger keystroke action for a given key/note"). For a standard keyboard or musical instrument, an experienced user is usually trained to use a fixed "gesture" or "key/note strike action" to reach the corresponding keys/notes (such as hitting the space bar with the thumb while the hand and other fingers hardly move). Such signature movements can be detected, for example, by detecting the relative movement/position of the fingers with respect to the palm, wrist/forearm, or other fingers. For instruments requiring hand/wrist/forearm movement, their relative movement/position (with respect to the body, or to the "virtual" instrument) may also be detected. Since an experienced user may rely on muscle memory more than vision, the display of the keyboard/instrument may be optional in some cases; however, it is still better to display the instrument in a way that gives the user a comfortable sense of space, and possibly visual/audio feedback. Since tactile feedback plays an important role in this kind of "blind typing/performance", it is also desirable that appropriate tactile feedback (such as but not limited to a pressure pattern or vibration) be provided to the user (for example via the wearable device on the user's hand/finger) at the moment the system detects such a signature finger movement for a given key or note and determines that a key/note is struck, so that the user can directly "feel" the keyboard/instrument.
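As a hedged sketch of the "signature movement" idea (the signature table, thresholds, and feature names are invented for illustration; the disclosure does not fix a particular detector), a keystroke could be recognized from relative finger-to-palm travel and immediately paired with tactile feedback:

```python
# Illustrative sketch: a key/note is recognized from the relative movement of a
# finger with respect to the palm, and tactile feedback is triggered the moment
# the strike is detected.

SIGNATURES = {
    # (finger, minimum downward travel of tip relative to palm, in mm) -> key
    ("thumb", 8.0): "SPACE",
    ("index", 10.0): "F",
    ("middle", 10.0): "D",
}

def detect_keystroke(finger: str, rel_travel_mm: float):
    """Return the key whose signature matches this relative finger movement."""
    for (sig_finger, min_travel), key in SIGNATURES.items():
        if finger == sig_finger and rel_travel_mm >= min_travel:
            return key
    return None

def on_sample(finger: str, rel_travel_mm: float, haptics):
    key = detect_keystroke(finger, rel_travel_mm)
    if key is not None:
        haptics(finger)          # user "feels" the key at the moment of the strike
        return {"event": "key_down", "key": key}
    return None

print(on_sample("thumb", 9.2, haptics=lambda f: print(f"pulse fixels on {f}")))
```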
In one embodiment of an OS/GUI/middleware system, at least one level of "high-level" events/messages can be passed to 3rd-party software, and for a system providing different levels of events/messages, custom filtering is allowed for different applications.
■ A graphical UI system or operating system having finger-specific EVENTS/messages (this can easily be generalized by providing events for all fingers and letting the consumer filter by finger ID); such messages may contain detailed 3D movement/location (relative or absolute) information, a timestamp, etc. (see the event sketch after this list).
■ A graphical UI system having finger-specific gesture/relative-movement or location EVENTS/messages (this helps in simulating a keyboard): "air typewriter" and "air keyboard".
■ This includes EVENTS for (serving, or reflecting) activities that require two or more fingers to "collaborate", such as pinching (with 2 or more fingers) or clipping/clamping an object, which is a 3D movement different from the 2D "zoom in" gesture.
■ Any 3D gesture requiring just one finger (possibly with palm/wrist movement).
One example is dealing cards, grabbing things, or tossing something.
■ For any selected object, there is a standard "context"-based gesture operation such as expand or collapse, go to next/previous, or search.
■ Finger-specific gesture input: for example, the pinky finger designated for the context menu.
■ Tactile feedback on "progress", and a prompt when something is finished or waiting for input.
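The event sketch referenced above is only an assumed event shape (the patent does not define field names): a finger-specific event carrying the finger ID, 3D movement, and a timestamp, plus a subscription filter so an application only receives the fingers and event kinds it cares about.

```python
# Illustrative sketch: finger-specific events with custom filtering per subscriber.

import time
from dataclasses import dataclass, field

@dataclass
class FingerEvent:
    finger_id: str                       # "thumb", "index", ...
    kind: str                            # "move", "pinch", "tap", ...
    delta_xyz: tuple                     # relative 3-D movement since last event
    timestamp: float = field(default_factory=time.time)

class EventBus:
    def __init__(self):
        self._subs = []                  # list of (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self._subs.append((predicate, callback))

    def publish(self, event: FingerEvent):
        for predicate, callback in self._subs:
            if predicate(event):
                callback(event)

bus = EventBus()
# An app that only wants pinky taps (e.g. to open a context menu):
bus.subscribe(lambda e: e.finger_id == "pinky" and e.kind == "tap",
              lambda e: print("context menu at", e.delta_xyz))
bus.publish(FingerEvent("pinky", "tap", (0.0, 0.0, -0.01)))
```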
In embodiments that require keyboard/musical-instrument simulation, sensors are generally included for both hands (all fingers), such as two gloves (each able to detect the movement of all fingers on the hand and the palm movements).
Example applications capable of using multiple tools simultaneously (or without switching), with tools assigned to multiple different fingers (such as 2D document/page editing, drawing/picture editing, 3D modeling, editing, or animation creation): the application has more than one "focus" or "selection/manipulation tool" assigned to different fingers, which may also be displayed on the screen to represent the different tools/functions.
It is desirable that an application with a graphical UI, having a 2D or 3D working space (or both) presented to the user, uses the finger movement/location information (such as from events) for multiple fingers simultaneously, with at least two fingers assigned different functions (specific to each finger) that have different on-screen visual representations (such as a cursor, tool icon, "focus area", or other special-effect area, such as but not limited to magnifying, highlighting, or "see-through" rendering); these icons, focus areas, or special-effect areas may be displayed simultaneously, based on the corresponding finger location/movement. For example, one finger such as the index finger may be assigned an "additive" tool such as a pen/brush in a 2D workspace (or a volume-adding tool/brush/extruder in a 3D workspace), displayed as a pen/brush icon (or another icon corresponding to the assigned tool) on the screen, while another finger (such as the middle finger) is assigned a "subtractive" tool such as an eraser in a 2D workspace (or a chisel/cutter/eraser in 3D space), optionally also visually represented on the screen, so that the two tools can work at the same time or be used without switching tools; yet another finger (such as the thumb) may be assigned another function such as smudge/smooth/color, optionally represented as a related cursor/tool icon on the screen, to also be used at the same time or without switching tools.
In the above example, tactile/vibration as well as other "physics feedback" such as temperature or wind/airflow may also be provided to the finger assigned a specific tool, so that the user has a direct feeling of whether the tool is working/effective/touching the work piece, in addition to visual feedback (which may be more difficult in 3D).
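A minimal sketch of this per-finger tool assignment follows; the tool names and pressure threshold are assumptions used only to illustrate that several fingers' tools can act in the same frame without switching.

```python
# Illustrative sketch: each finger carries its own tool, so an "additive" and a
# "subtractive" tool can act simultaneously.

FINGER_TOOLS = {
    "index": "brush",        # additive tool
    "middle": "eraser",      # subtractive tool
    "thumb": "smudge",       # blend/smooth tool
}

def apply_frame(finger_positions, canvas_ops):
    """finger_positions: {finger: (x, y, pressure)}. Every touching finger's
    tool is applied in the same frame, with no tool switching."""
    for finger, (x, y, pressure) in finger_positions.items():
        tool = FINGER_TOOLS.get(finger)
        if tool and pressure > 0.05:            # finger is actually "touching"
            canvas_ops.append((tool, x, y, pressure))
    return canvas_ops

ops = apply_frame({"index": (10, 20, 0.8), "middle": (40, 22, 0.5)}, [])
print(ops)   # brush and eraser strokes produced in the same frame
```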
Other example applications using hand+wrist or palm 3D movement hints:
Using a hint for (physically) turning a page (using the hand): this hint/interpretation is useful in word/document processing, where the user can navigate through a stack of things/book pages without needing to swipe left and right.
Word/document processing or page editing: there can be multiple modes of navigating. The first simply simulates the mouse and keyboard. The second is like a video-game configuration, recognizing "ergonomic" gestures such as the forearm vertical or close to vertical (with the elbow supported), and finger/hand movement to indicate the caret/cursor location or page up/down (a multi-finger swipe, ignoring the index finger, moves the background, while the index finger moves the cursor/foreground). Gesture interpretation can include "grab the current page/object with the left hand" to create a screen copy so that the user can see different portions/viewports/perspectives of the article being edited.
Photoshop-like picture editing: grabbing one object on top of another creates a mask / cuts edges.
3D modeling: grab and rotate, place an object, and feel whether it fits with other parts. This can use two hands, the left navigating/selecting (background/foreground), the right selecting/manipulating. Hit/press the model (to re-shape it) with resistance/tactile feedback.
In one embodiment, similar to the way a mouse is used for selections, one finger (index) can be set for selection and another (middle) for de-selection, while the thumb can be used for moving/dragging; the pinky still opens the context menu/search.
Some examples of visual GUI controls and methods to control them (with gestures): such as a knob, which can be pushed/pulled and rotated.
In a content management/editing system, another screen copy of the content can be made by detecting the left hand moving/grabbing (so that the user can compare left/right and see different portions/viewports of the content).
In one embodiment, the "right simulated click" by the the pinky (little finger) or ring finger can trigger can bring up the context menu or trigger activities/events equivalent to that of right clicking of a mouse.
In a related method embodiment, different patterns generated by the fingertip "fixel" array are used to hint at or simulate different spatial relationships/physics effects with an object/control (such as a key/button/slider, etc.): from not touched (blank pressure pattern, no fixel actuated), to merely touched (one fixel actuated in the pressure pattern, such as at the center point if the finger is lined up with the button/control), to fully pressed (all points/fixels in the pressure pattern actuated), to one side of the key (only the fixels corresponding to the side of the finger that touched the object are actuated). When the finger presses between two adjacent objects/buttons, the user can feel the "seam" between the two objects: only the two sides corresponding to the areas where the fingertip touches the keys are actuated, but not the center "seam" area.
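The following sketch illustrates these touch states for a five-fixel array; the fixel layout, names, and overlap thresholds are assumptions for illustration only.

```python
# Illustrative sketch: the set of actuated fixels encodes how the fingertip
# overlaps the virtual button, from "not touching" to "fully pressed" to
# "pressed on the seam between two keys".

FIXELS = ["center", "left", "right", "top", "bottom"]

def pressure_pattern(overlap: float, side: str = None):
    """overlap: 0.0 (no contact) .. 1.0 (fully pressed).
    side: which edge of the key the fingertip touches, if only an edge."""
    if overlap <= 0.0:
        return set()                       # blank pattern: nothing actuated
    if side is not None:
        return {side}                      # only the fixel on that side
    if overlap < 0.3:
        return {"center"}                  # merely touched
    return set(FIXELS)                     # fully pressed: all fixels actuated

def seam_pattern():
    """Finger pressed between two adjacent keys: both sides, but not the seam."""
    return {"left", "right"}

print(pressure_pattern(0.1))    # {'center'}
print(sorted(pressure_pattern(1.0)))   # all five fixels
print(sorted(seam_pattern()))          # ['left', 'right']
```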
In a related method embodiment, "slide"/relative movement is simulated with vibration or pattern change, and "hold" is simulated with a static pattern, possibly together with visual and/or sound effects. In a related method embodiment, the "unevenness of a surface", or protrusions and/or "depressed areas" of the surface, are simulated by providing different pressure patterns.
In a related method embodiment, pressure-pattern/physics ("feel-able") effects can be provided together with "force feedback" provided by exoskeletons (such as at the finger/arm level).
(Localized coordinates: the instrument follows the hand.) A software UI has a 3D model that simulates the musical instrument and uses local (relative; preferred, as the instrument moves together with the hand/arm) or global position to determine the relationship of the instrument to the human limb/body (usually treated as attached to the limb/body) and which parts are interacted with.
Simulation of relative movement: an "indicating dynamic pattern" is a time sequence of frames (of patterns) in which the "movement" of fixels indicates/hints the direction/movement to be prompted/hinted to the user.
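A minimal sketch of such a dynamic pattern follows; the frame layout and sweep direction are assumptions used only to show a fixel "moving" across the array to hint a direction.

```python
# Illustrative sketch: a sequence of frames in which the actuated fixel travels
# across the array, hinting a direction (here: left-to-right) to the user.

def sweep_frames(columns=("left", "center", "right"), repeats=2):
    """Yield frames (sets of actuated fixels) whose motion hints a direction."""
    for _ in range(repeats):
        for col in columns:
            yield {col}
        yield set()                  # short blank frame between sweeps

for frame in sweep_frames():
    print(sorted(frame))
```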
It is also desirable to prompt the user with shortcut keys/shortcut gestures, enabling fast learning to an advanced level.
A GUI system with objects/controls/widgets having such feedback properties (such as but not limited to a "material", or part of the "material" properties of an object, as defined in many 3D editing environments).
some (additional) sample embodiments / scenarios
In an embodiment, a user interface system for interacting with a "model" in a computer system has virtual objects/controls/widgets, in 3D, for the user to interact with; such a virtual object may have feel-able physics properties associated with it (possibly dynamic or changeable), such as "tactile material", temperature, vibration/frequency, or wind/airflow/humidity. The system uses gesture or finger position information to determine whether a finger (or fingers) intersects/collides with, or is within or outside, the "active"/effective zone of such a virtual object, component/widget/control (whether or not the control is displayed, since the same mechanism supports "blind typing" or "blind piano keystrokes"), and generates related events, possibly according to context; at the same time it generates physics feedback to the user's limb part such as the finger(s), palm, wrist, or arm, so that the user knows whether a press is effective, halfway, or not touching. Such "force/physics" feedback may be based on said physics properties of the control/widget, and optionally visual and/or audio feedback may also be provided to the user.
In an embodiment according to the above, the OS provides different "patterns" of hit effects, such as single point (fade in/out with strength, frequency, or duty-cycle change), moving, ripple, and other effects, to "embellish" the feedback to the user. In a related embodiment (to the first), such a UI system utilizes physics-feedback pattern mechanisms located on the user, such as multiple fixels on the user's fingertip, to perform hints, indications, and feedback (based on physics such as pressure or vibration patterns) for virtual objects displayed to the user (interacting with the corresponding part of the user's body/limb).
A related method embodiment includes assigning different fingers different tools and allowing them to be used SIMULTANEOUSLY; for example, the index finger is a pencil and the middle finger an eraser.
A related method embodiment includes on-board (or accelerated) collision-model processing: for a given mode such as keyboard, or some other button-related and button-following mode, the "3D" model of the object to touch/collide with is sent to the device, which can process the collision directly, so that CPU load or delays do not interfere with the user experience.
In an embodiment according to the above, the UI system includes at least two levels of events and allows custom filtering for different applications.
Such a system employs "hint-based" ergonomic pose "detection" to determine the "mode" of input and switch seamlessly (automatic switching), as described in detail above.
In a related embodiment, the UI system/middleware system/OS allows 3rd-party apps or plugins that directly handle 3D models and "physical" events such as collision detection and output force feedback, while bubbling "business events" up to the client PC system. In a scenario where the client system reacts slowly, the on-board processing keeps the user experience "still OK"; this is somewhat like pressing a button on a slowly reacting system, where the user still feels the button (although the result of that press from the client system arrives late). It would not be appropriate for the user to try to press a button and feel nothing until much later.
Some embodiments include a wearable display, located for example but not limited to on the back of the user's hand (and possibly flexible), to display information such as (but not limited to) the current operation mode and optionally feedback/device status information visually; such information may also be provided audibly via an earphone/speaker.
As the suitable systems and methods described here (such as but not limited to sensors, IMUs, actuators, controllers, etc.) may be embodied in a wide variety of forms, some of which may be quite different from those of the disclosed embodiments, the specific structural and functional details disclosed herein are merely representative; yet in that regard, they are deemed to afford the best embodiment for purposes of disclosure and to provide a basis for the claims herein which define the scope of the present invention.
Description of drawings
Fig. 1 shows an intuitive way of 3D manipulation. Sensors detect movement 101 of the index finger, and depending on whether the user is trying to "catch" or "grab" the virtual object 102 (displayed on a 2D or 3D display), the interaction with the object will differ. Fig. 1 shows the user's thumb 105 not yet reaching/touching the virtual object 102, while the index finger (and its on-screen representation 103) has been determined to be touching the virtual object 102; if the user is wearing a force-feedback wearable device as described in the specification, the user's index finger will receive the touch/tactile feeling/prompt, and the user can rely on this feedback to manipulate the object (visual cues may also be displayed on the screen, as shown). In the current situation the movement 101 of the finger acts as pushing one side of the virtual object, making it rotate so that the user can examine the other side of the object. If the user also uses the thumb 105 to help "grab" the object with two fingers, the movement can be interpreted as moving the object from its current location to a new one, without rotation. This is an example of using intuitive interpretation of user movement to manipulate an object WITHOUT entering commands or selecting menu commands.
Fig. 2 shows a hint or "ergonomically simulated pose" for a gun-aiming position in an FPS game (or similar VR environment/simulation) shown on screen 201. Sensors detecting the movement/rotation/position of the moving limb/palm/fingers can be used or "mapped" to regular FPS action commands such as turn, movement, pitch, yaw, etc. that are normally provided by a keyboard/mouse or joystick. In this scenario the forearm acts like a joystick: its movement forward (direction 203A) and backwards (direction 203C) maps to joystick (or keyboard) forward and backwards, or to corresponding direct XInput commands to the FPS game engine. Side movement (203B and 203D) of the forearm means moving (without turning) sideways, just like using a joystick or keyboard. The horizontal twist of the palm/wrist can be mapped to yaw (turning horizontally) control, CW or CCW (mainly used for direction and aim), and the up-down twist of the index finger/wrist can be interpreted as pitch (up/down) control, which also controls direction and aim. When the user flexes (bends) the index finger beyond a (configurable) extent, it can be interpreted as firing the weapon 202 in the picture. These mappings/hints are intuitive and require almost no learning, and since the user's elbow can be supported on a table or chair arm, this kind of mapping/pose does not fatigue the user.
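As an illustration of the Fig. 2 style mapping only (axis conventions, units, and the fire threshold are assumptions, not part of the drawing), the forearm, wrist, and index-finger signals could be turned into FPS commands as follows:

```python
# Illustrative sketch: forearm translation maps to movement, wrist twist to yaw,
# index-finger pitch to aim, and index-finger flexion past a configurable
# threshold to fire.

def fps_commands(forearm_dx, forearm_dz, wrist_twist_deg, index_pitch_deg,
                 index_flex, fire_threshold=0.7):
    return {
        "move_forward": forearm_dz > 0,      # forearm pushed forward (203A)
        "move_back":    forearm_dz < 0,      # forearm pulled back (203C)
        "strafe":       forearm_dx,          # sideways forearm movement (203B/203D)
        "yaw":          wrist_twist_deg,     # horizontal wrist twist = turn/aim
        "pitch":        index_pitch_deg,     # up/down index/wrist movement = aim
        "fire":         index_flex >= fire_threshold,
    }

print(fps_commands(0.0, 0.2, 5.0, -2.0, 0.85))   # moving forward, turning, firing
```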
Fig. 3A shows a dynamic tactile-feedback pressure pattern provided to the user according to the relative position (and "touch" status) of the fingertip and the button/control displayed in 3D. Fig. 3B and Fig. 3C show a pneumatic actuator (connecting hoses not shown) with five "fixels" for the fingertip, and its "mask" (which can be, for example, relatively resilient (but more rigid than the "membrane" or film on top of the openings in the mask in Fig. 3C), or rigid to a degree that is comfortable to the user) that fits a user's fingertip. Fig. 3C shows the mask without the covering "membrane"/film(s) or other resilient material, which covers the five openings shown in Fig. 3C. Air/gas from the hoses (not shown), through the five connectors (as shown in Fig. 3B), will (selectively) enter the chambers formed by the five openings and the resilient "membrane"/film(s) and inflate the "membrane"/film(s) to create pressure at the corresponding positions on the fingertip. By changing which openings are inflated/deflated and for how long, the pressure pattern formed by the five "fixels", and the pressure of a single fixel, can be controlled.
In Fig. 4, most importantly, the VR display or augmented (see-through) display uses the (broadened) "gesture/pose" info or hand/finger pose/movement info, and its relation to the head/eye position, to determine where (such as in stereo images) the avatar, or the text/object/control following the finger/hand movement, appears on the screen of the augmented reality display.
Fig. 5 shows a gesture-based UI with a local wearable display. The image displayed on the wearable display can help the user use their fingers only, without any physical keyboard, to implement an "air keyboard" function, without the need for any other GUI or "remote"/external non-wearable display (with or without tactile feedback). Basically the user can use this the same way they use any remote control: use the index finger to select and "press" a button, while using the thumb with other fingers or the palm to "navigate"/scroll the remote panel if the button is blocked. For example, the user can use two fingers (such as the thumb and the middle/4th or pinky finger) to indicate "zoom in" and "zoom out", a movement similar to what they would do on a smartphone touch screen, and the wearable system can capture such finger movement, possibly filtering out noise and "drift" by referring to sensors on other parts of the hand/arm or body. Similarly, the wearable system can allow/detect the user's use of the index finger (or middle finger, or both) for navigating (up/down/left/right) and the index finger for clicking (the middle finger can be dedicated to scrolling up/down, similar to the way a mouse is used). Such movement of UI controls and focus, and the resulting status changes (such as the number/menu item selected, button pressed, or command issued, their results, or subsequent dialog boxes, warnings, etc.), can be displayed on a (local) wearable display such as an orientation-changeable display on the wearable itself, or on an HMD/glasses display/(see-through) augmented reality display worn before the user's eyes. The user thus gets visual feedback while navigating/typing with the UI, without the need for an external display (such as a device's own display or a connected fixed display), and can control the device via the wearable in a way similar to using a remote (for example, finding a key and pressing it).
Alternatively, where there is a well-defined standard "keystroke" or "finger movement reaching the keys/controls of an instrument" (a "signature movement"), like those taught for piano or typewriter, such keystroke/signature movement of the hand (including fingers, palm, wrist, etc.) and arm can be detected, such as acceleration, rotation, relative travel distance/direction from fingertip to palm, wrist, or other fingers, or relative movement/rotation of the hand/palm with respect to the wrist/forearm (basically deliberate movements of certain sections of a body part relative to other nearby body parts that are relatively "stable" or "stationary"), optionally also with tactile feedback. In this way a display of the UI itself might not be necessary (no need for a local wearable display or a remote display for the purpose of displaying the control, as the user can already "blind type").
Fig. 7:
In Fig. 7, a glove-like wearable device is equipped with multiple IMUs (701) on the key moving sections of the fingers, which can detect accelerations, possibly together with gyro (angular speed/acceleration) and magnetic readings, and can have other types of sensors, such as components detecting bend/flex of a joint 703 (resistive, fiber-optic, magnetic, etc.).
These IMUs, in forms such as MEMS devices, can communicate with a (central) processing unit 702 (for purposes such as but not limited to DSP, noise reduction, gesture recognition, machine learning, communication, event/message generation and handling, task management, and 3rd-party plugin/app management and execution) via links such as I2C, SPI, etc.
The information is gathered and processed (not necessarily by 702) to form events at different levels: some movement-based (not collision-based) events can already be generated, and perhaps skeleton info. And if there is a VR model (close to the body part), collisions can be detected (for example on board).
• The difference between palm and finger, or between the tip and base of a finger, provides relative finger-section movement (events) that software developers can subscribe to (see the sketch after this list).
• Cross-filtering with bend/flex sensors, or with other fingers, helps cancel noise or "drift".
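The sketch below illustrates this "difference" idea under simplifying assumptions (both readings are taken in a shared coordinate frame, and the dead-band value is invented): subtracting the palm IMU reading from a fingertip IMU reading leaves the deliberate relative finger movement and cancels common-mode motion such as whole-hand movement or environment vibration.

```python
# Illustrative sketch: component-wise difference between finger-tip and palm
# accelerations, with a small dead-band to suppress residual noise.

def relative_motion(finger_accel, palm_accel, deadband=0.02):
    diff = tuple(f - p for f, p in zip(finger_accel, palm_accel))
    return tuple(d if abs(d) > deadband else 0.0 for d in diff)

# Whole hand shaking (both sensors see ~0.5 m/s^2) plus a deliberate 1.2 m/s^2
# fingertip movement in x:
print(relative_motion((1.7, 0.51, 0.49), (0.5, 0.5, 0.5)))   # approx. (1.2, 0, 0)
```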
If there are collision-based events, they can be processed to generate tactile feedback.
Or, if it is control-level feedback (auto-following), the system only needs to determine whether a certain "signature pose" is achieved in order to provide feedback.
Tactile/pressure display arrays are located on key sections of the fingers and possibly also on other parts of the hand. As magnified in Fig. 1A, they are like one or more "masks" with an "ergonomic" shape (such as a concave curved shape sized to fit the "fingerprint" side of that finger), with a plurality of fixels located on at least one such mask/surface accommodating a fingertip (the "fingerprint" side).
In a related embodiment, a fixel is a pneumatic actuator made of flexible materials that can change physical shape and exert pressure on the user's skin. Such a fixel may have one input and from zero to two outputs, with one or more flexible thin pneumatic hoses attached to/coupled with it (or the hoses could be printed/flat air-hose circuits/slots formed directly from the glove material); the hose is linked to the output of one or more air switch(es)/multiplexer(s)/solenoids with multiple output states, which control the inflation/deflation/pressure of the cell/fixel.
The effector can also be other micro-actuators, such as ones specialized in vibration (inducer, piezo), micro-solenoids, memory alloy, or a combination.
One or more (a plurality of) lightweight servo-operated air switches can be placed on the glove-like wearable, preferably on the back of the hand, possibly together with lightweight solenoids, to control the distribution of, or direct, the compressed air to the end effectors. The switches/solenoids are connected to one or more (different-pressure) air sources. The air source might be located in accessories on the arm and linked with controllers 707 via a detachable connector 706.
In an embodiment, the air source 705 is a small CO2 tank as used in paintball, with one or more pressure-control valves/devices (in 708) to provide air/gas, possibly at different pressure levels. Such levels might be adjustable (dynamically).

Claims

1. An apparatus for intuitive UI input comprising: a wearable device to accommodate the size of a user's hand, having multiple IMU (MEMS) sensors (which may include but are not limited to inertial/gyroscope sensors, magnetic sensors, pressure/temperature sensors, etc.) located on the joints or movable parts of the human limb/hand/fingers for tracking movements; and using the movement info or the position info (such as pitch, direction) of limbs/moving body parts such as the upper arm, elbow, forearm, wrist, or palm that are higher in the hierarchy of the connection sequence to the torso (such as upper arm > forearm > palm), or "more distant" from the end of the limb, to determine the mode of input, or how to interpret the movement of the limb/body moving part (such as the palm or a finger) that is lower in the hierarchy of the connection sequence to the torso (or closer to the end of the limb).
2. An apparatus according to claim 1, wherein IMU signals from different moving parts of the body/limb (such as those from fingers and those from hands) can be used together to "filter" out environment noise to improve measurement accuracy, and can provide a "difference" signal (by subtracting in the same (Euler) coordinate system on the same scale) so that local "relative" movement (such as a finger relative to the palm, one finger relative to another finger, or one section of a finger relative to other sections, etc.) can be accurately measured.
3. An apparatus according to claim 1, wherein other kinds of sensors such as flex sensors (whose output value changes according to the degree of curvature/flex) are used to provide additional info for detecting the movement of body parts (such as fingers and the palm, etc.), and such information can be integrated to improve the accuracy of the judgment made from the IMU sensor information.
4. A method for human-device/machine interaction including using a human-centric rather than device-centric approach, by providing wearable(s) that allow intuitive control (such as context-aware gesture inputs), that can communicate with/control the device(s), or possibly multiple devices, with UI and feedback provided to the user locally (without the need to go to the devices).
In a related embodiment, the wearable may have computing/signal-processing capabilities (such as but not limited to gesture-related, 3D-modeling-related, collision-detection-related, etc.) besides the communication/control ability, and could also have other on-board sensors such as temperature, humidity, electric/magnetic field strength, illumination, etc.
5. A method related to claim 4, also including means to integrate information from different devices (and possibly also information from the wearable(s), such as temperature, humidity, electric/magnetic field strength, illumination, etc.) into one or more "virtual dashboard"(s), which can be displayed on wearable displays (or an external display currently visible to the user), and which allow control of different devices (such as independently, or collectively with some tool/integrator/script).
6. A method related to claim 4, wherein the feedback for the control can be a tactile/pressure pattern or another physics effect (such as but not limited to temperature, airflow/wind, humidity changes, etc.) provided by tactile displays and/or lightweight effectors located on the wearable (such as tactile displays on the tip and other sections of the finger/palm).
7. A human-centered cross-platform/device user interface and management system, comprising:
communication channel(s) with devices (such as but not limited to smart appliances, IoT (internet of things) devices, TVs/set-top boxes/AV systems, air conditioning/temperature controls, light/curtain controls, phones/communication devices, PCs/game consoles, etc.) which the user can interact with through this system, such as but not limited to via infrared, ultrasound, RF, and networks (wired or wireless);
a module for processing "human-focus"-centered user input;
a module/means (for task/device management) to determine which device is the one the user wants to interact with/control ("engage with"), based on "human-focus"-centered user input such as controls from the wearable, or (the interpretation of) the user's hand/arm movement/gesture;
and sending commands, based on user input such as controls from the wearable or (the interpretation of) the user's hand/arm movement/gesture, to the device via said communication channel.
8. A mobile/wearable cross-device user interface and management system according to claim 7, wherein the "task/device management" module may provide device status feedback to the user (for example but not limited to visually, in audio, tactile, or other physics "feel-able" feedback).
9. A system according to claim 7, wherein the "user input" used to determine which device the user wants to interact with/control is based on a gesture command having a beginning and an end based on the user's movement, such as but not limited to acceleration or deceleration, or rotation/flexing of one or more body-part segments such as segments of a finger, the palm, the wrist, etc.
10. A system according to claim 7, wherein the gesture command can be based on interpreting one or more gesture(s) of the user.
PCT/IB2015/002356 2014-12-16 2015-12-16 Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" user interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback WO2016097841A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580069328.1A CN107209582A (en) 2014-12-16 2015-12-16 The method and apparatus of high intuitive man-machine interface
EP15869412.5A EP3234742A4 (en) 2014-12-16 2015-12-16 Methods and apparatus for high intuitive human-computer interface

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201462092710P 2014-12-16 2014-12-16
US62/092,710 2014-12-16
US201562099915P 2015-01-05 2015-01-05
US62/099,915 2015-01-05
US201562110403P 2015-01-18 2015-01-18
US62/110,403 2015-01-18

Publications (2)

Publication Number Publication Date
WO2016097841A2 true WO2016097841A2 (en) 2016-06-23
WO2016097841A3 WO2016097841A3 (en) 2016-08-11

Family

ID=56127780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/002356 WO2016097841A2 (en) 2014-12-16 2015-12-16 Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" user interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback

Country Status (3)

Country Link
EP (1) EP3234742A4 (en)
CN (1) CN107209582A (en)
WO (1) WO2016097841A2 (en)


Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10698497B2 (en) * 2017-09-29 2020-06-30 Apple Inc. Vein scanning device for automatic gesture and finger recognition
WO2019122915A1 (en) * 2017-12-22 2019-06-27 Ultrahaptics Ip Ltd Human interactions with mid-air haptic systems
CN108205373B (en) * 2017-12-25 2021-08-13 北京致臻智造科技有限公司 Interaction method and system
US10617942B2 (en) 2017-12-29 2020-04-14 Microsoft Technology Licensing, Llc Controller with haptic feedback
CN109992093B (en) * 2017-12-29 2024-05-03 博世汽车部件(苏州)有限公司 Gesture comparison method and gesture comparison system
WO2019142525A1 (en) * 2018-01-19 2019-07-25 ソニー株式会社 Wearable device and mounting fixture
CN108803872B (en) * 2018-05-08 2021-07-27 上海嘉奥信息科技发展有限公司 Plug-in system for invoking force feedback hardware in illusion engine
CN109032343B (en) * 2018-07-04 2022-02-11 青岛理工大学 Industrial man-machine interaction system and method based on vision and haptic augmented reality
CN109276881A (en) * 2018-08-31 2019-01-29 努比亚技术有限公司 A kind of game control method, equipment
CN109512457B (en) * 2018-10-15 2021-06-29 东软医疗系统股份有限公司 Method, device and equipment for adjusting gain compensation of ultrasonic image and storage medium
US10860065B2 (en) * 2018-11-15 2020-12-08 Dell Products, L.P. Multi-form factor information handling system (IHS) with automatically reconfigurable hardware keys
CN109697013A (en) * 2018-12-26 2019-04-30 北京硬壳科技有限公司 Control light calibration method, cursor control and cursor control device
CN109771905A (en) * 2019-01-25 2019-05-21 北京航空航天大学 Virtual reality interactive training restoring gloves based on touch driving
US11137908B2 (en) * 2019-04-15 2021-10-05 Apple Inc. Keyboard operation with head-mounted device
CN110236559A (en) * 2019-05-30 2019-09-17 南京航空航天大学 The multi-modal feature extracting method of inertia gloves towards piano playing
US20210089131A1 (en) * 2019-09-23 2021-03-25 Apple Inc. Finger-Mounted Input Devices
CN110434882B (en) * 2019-09-26 2024-05-24 滁州职业技术学院 Three finger holders of imitative people finger inflatable software
CN111601129B (en) * 2020-06-05 2022-04-01 北京字节跳动网络技术有限公司 Control method, control device, terminal and storage medium
CN112114663B (en) * 2020-08-05 2022-05-17 北京航空航天大学 Implementation method of virtual reality software framework suitable for visual and tactile fusion feedback
CN112450995B (en) * 2020-10-28 2022-05-10 杭州无创光电有限公司 Situation simulation endoscope system
CN114083544B (en) * 2022-01-21 2022-04-12 成都市皓煊光电新材料科技研发中心有限公司 Method and device for controlling movement of hand-shaped equipment, hand-shaped equipment and storage medium
CN114816625B (en) * 2022-04-08 2023-06-16 郑州铁路职业技术学院 Automatic interaction system interface design method and device
WO2023197475A1 (en) * 2022-04-15 2023-10-19 深圳波唯科技有限公司 Control method and control apparatus for electric apparatus, and sensor
CN115016635A (en) * 2022-04-18 2022-09-06 深圳原宇科技有限公司 Target control method and system based on motion recognition
CN114976595A (en) * 2022-05-17 2022-08-30 南昌黑鲨科技有限公司 Intelligent antenna system
US11874964B1 (en) 2022-12-02 2024-01-16 Htc Corporation Glove

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002230814A1 (en) * 2000-11-02 2002-05-15 Essential Reality, Llc Electronic user worn interface device
WO2004114107A1 (en) * 2003-06-20 2004-12-29 Nadeem Mohammad Qadir Human-assistive wearable audio-visual inter-communication apparatus.
US20070113207A1 (en) * 2005-11-16 2007-05-17 Hillcrest Laboratories, Inc. Methods and systems for gesture classification in 3D pointing devices
CN101581990B (en) * 2008-05-13 2011-12-07 联想(北京)有限公司 Electronic equipment as well as wearable pointing device and method applied to same
US9298260B2 (en) * 2010-03-12 2016-03-29 Broadcom Corporation Tactile communication system with communications based on capabilities of a remote system
US20130021374A1 (en) * 2011-07-20 2013-01-24 Google Inc. Manipulating And Displaying An Image On A Wearable Computing System
GB2510774A (en) * 2011-11-30 2014-08-13 Hewlett Packard Development Co Input mode based on location of hand gesture
EP2613223A1 (en) * 2012-01-09 2013-07-10 Softkinetic Software System and method for enhanced gesture-based interaction
US9411423B2 (en) * 2012-02-08 2016-08-09 Immersion Corporation Method and apparatus for haptic flex gesturing
US20130249793A1 (en) * 2012-03-22 2013-09-26 Ingeonix Corporation Touch free user input recognition
US9170674B2 (en) * 2012-04-09 2015-10-27 Qualcomm Incorporated Gesture-based device control using pressure-sensitive sensors
US20140198130A1 (en) * 2013-01-15 2014-07-17 Immersion Corporation Augmented reality user interface with haptic feedback
WO2014144015A2 (en) * 2013-03-15 2014-09-18 Keller Eric Jeffrey Computing interface system
CN203941498U (en) * 2014-02-21 2014-11-12 上海市七宝中学 A kind of sensor glove for drawing three-dimensional model
CN104007844B (en) * 2014-06-18 2017-05-24 原硕朋 Electronic instrument and wearable type input device for same

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU186397U1 (en) * 2017-06-07 2019-01-17 Федоров Александр Владимирович VIRTUAL REALITY GLOVE
WO2019063522A1 (en) * 2017-09-27 2019-04-04 Deutsches Zentrum für Luft- und Raumfahrt e.V. Glove-type input/output device and method for outputting a piece of information which can be detected by thermoreception
DE102017122377A1 (en) * 2017-09-27 2019-03-28 Deutsches Zentrum für Luft- und Raumfahrt e.V. Glove-type input / output device and method for outputting thermo-receptive information via a force
KR20200035296A (en) * 2017-09-29 2020-04-02 애플 인크. IMU-based gloves
WO2019067144A1 (en) * 2017-09-29 2019-04-04 Apple Inc. Imu-based glove
US10877557B2 (en) 2017-09-29 2020-12-29 Apple Inc. IMU-based glove
KR102414497B1 (en) * 2017-09-29 2022-06-29 애플 인크. IMU-Based Gloves
WO2019083406A1 (en) * 2017-10-27 2019-05-02 Федоров, Александр Владимирович Method of producing a virtual reality glove (embodiments)
KR101937686B1 (en) 2017-10-27 2019-01-14 박재홍 Wearable control device
US10914567B2 (en) 2018-02-23 2021-02-09 Apple Inc. Magnetic sensor based proximity sensing
KR20190125873A (en) * 2018-04-30 2019-11-07 한국과학기술원 Method of outputting tactile pattern and apparatuses performing the same
KR102095639B1 (en) * 2018-04-30 2020-03-31 한국과학기술원 Method of outputting tactile pattern and apparatuses performing the same
WO2019216864A1 (en) * 2018-05-09 2019-11-14 Havelsan Hava Elektronik Sanayi Ve Ticaret Anonim Sirketi Explosives training simulator
WO2020117533A1 (en) * 2018-12-03 2020-06-11 Microsoft Technology Licensing, Llc Augmenting the functionality of non-digital objects using a digital glove
WO2020117537A1 (en) * 2018-12-03 2020-06-11 Microsoft Technology Licensing, Llc Augmenting the functionality of user input devices using a digital glove
US11137905B2 (en) 2018-12-03 2021-10-05 Microsoft Technology Licensing, Llc Modeless augmentations to a virtual trackpad on a multiple screen computing device
US11199901B2 (en) 2018-12-03 2021-12-14 Microsoft Technology Licensing, Llc Augmenting the functionality of non-digital objects using a digital glove
US11294463B2 (en) 2018-12-03 2022-04-05 Microsoft Technology Licensing, Llc Augmenting the functionality of user input devices using a digital glove
US11314409B2 (en) 2018-12-03 2022-04-26 Microsoft Technology Licensing, Llc Modeless augmentations to a virtual trackpad on a multiple screen computing device
CN111158476A (en) * 2019-12-25 2020-05-15 中国人民解放军军事科学院国防科技创新研究院 Key identification method, system, equipment and storage medium of virtual keyboard
CN111724487A (en) * 2020-06-19 2020-09-29 广东浪潮大数据研究有限公司 Flow field data visualization method, device, equipment and storage medium
CN111724487B (en) * 2020-06-19 2023-05-16 广东浪潮大数据研究有限公司 Flow field data visualization method, device, equipment and storage medium
WO2021259986A1 (en) * 2020-06-23 2021-12-30 Otto Bock Healthcare Products Gmbh Prosthetic device having a prosthetic hand and method for operating a prosthetic device
CN113220117A (en) * 2021-04-16 2021-08-06 邬宗秀 Device for human-computer interaction
CN113220117B (en) * 2021-04-16 2023-12-29 邬宗秀 Device for human-computer interaction
CN113589923A (en) * 2021-06-28 2021-11-02 深圳先进技术研究院 Gesture control-oriented human-computer interaction system and method

Also Published As

Publication number Publication date
WO2016097841A3 (en) 2016-08-11
EP3234742A4 (en) 2018-08-08
CN107209582A (en) 2017-09-26
EP3234742A2 (en) 2017-10-25

Similar Documents

Publication Publication Date Title
EP3234742A2 (en) Methods and apparatus for high intuitive human-computer interface
US11221730B2 (en) Input device for VR/AR applications
CN110603509B (en) Joint of direct and indirect interactions in a computer-mediated reality environment
US11112856B2 (en) Transition between virtual and augmented reality
US20220121344A1 (en) Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
US10545584B2 (en) Virtual/augmented reality input device
US9996153B1 (en) Haptic interaction method, tool and system
CN107533373B (en) Input via context-sensitive collision of hands with objects in virtual reality
WO2016189372A2 (en) Methods and apparatus for human centric "hyper ui for devices"architecture that could serve as an integration point with multiple target/endpoints (devices) and related methods/system with dynamic context aware gesture input towards a "modular" universal controller platform and input device virtualization
US20200310561A1 (en) Input device for use in 2d and 3d environments
US8570273B1 (en) Input device configured to control a computing device
JP5784003B2 (en) Multi-telepointer, virtual object display device, and virtual object control method
KR101546654B1 (en) Method and apparatus for providing augmented reality service in wearable computing environment
CN110476142A (en) Virtual objects user interface is shown
EP2538309A2 (en) Remote control with motion sensitive devices
US20150253908A1 (en) Electrical device for determining user input by using a magnetometer
EP3549127B1 (en) A system for importing user interface devices into virtual/augmented reality
US10203781B2 (en) Integrated free space and surface input device
US11397478B1 (en) Systems, devices, and methods for physical surface tracking with a stylus device in an AR/VR environment
EP2538308A2 (en) Motion-based control of a controllled device
Bai et al. Asymmetric Bimanual Interaction for Mobile Virtual Reality.
JP2024048680A (en) Control device, control method, and program
KR101962464B1 (en) Gesture recognition apparatus for functional control
KR102322968B1 (en) a short key instruction device using finger gestures and the short key instruction method using thereof
Nguyen 3DTouch: Towards a Wearable 3D Input Device for 3D Applications

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2015869412

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15869412

Country of ref document: EP

Kind code of ref document: A2