FI20225387A1 - Interaction device - Google Patents

Interaction device

Info

Publication number
FI20225387A1
Authority
FI
Finland
Prior art keywords
user
orientation
palm
determining
interface element
Prior art date
Application number
FI20225387A
Other languages
Finnish (fi)
Swedish (sv)
Inventor
Henrik Terävä
Edvard Ramsay
Mikko Hämeranta
Original Assignee
Ai2Ai Oy
Priority date
Filing date
Publication date
Application filed by Ai2Ai Oy filed Critical Ai2Ai Oy
Priority to FI20225387A priority Critical patent/FI20225387A1/en
Priority to PCT/FI2023/050227 priority patent/WO2023214113A1/en
Publication of FI20225387A1 publication Critical patent/FI20225387A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/218Input arrangements for video game devices characterised by their sensors, purposes or types using pressure sensors, e.g. generating a signal proportional to the pressure applied by the player
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/26Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/28Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A63F13/285Generating tactile feedback signals via the game input device, e.g. force feedback
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F13/92Video game devices specially adapted to be hand-held while playing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G06F3/03547Touch pads, in which fingers can move on a surface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0414Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/044Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/045Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using resistive elements, e.g. a single continuous surface or two parallel surfaces put in contact
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

Various example embodiments relate to determining position of user input or output elements on the surface of the interaction device (100). An apparatus (100, 600) may detect position and orientation of a palm of a user (110) on a surface of the apparatus (100, 600); determine, based on the position and orientation of the palm of the user (110), a position of at least one user interface element (102, 104) to be activated at the surface of the apparatus (100, 600); and receive user input or provide output with the at least one user interface element (102, 104).

Description

INTERACTION DEVICE
TECHNICAL FIELD
[0001] Various example embodiments generally relate to an interaction device.
Some example embodiments relate to determining position of user input or output element(s) on the surface of the interaction device.
BACKGROUND
[0002] In various applications, such as for example gaming, well-being, or emergency alert systems, it may be beneficial to provide easy access to user input or device output functions when handling a device. For example, a game controller may be shaped to guide the user's fingers such that they hit the desired buttons. Modern technologies such as touch screens enable configurable user interfaces (UI), where locations of virtual buttons may be freely selected.
SUMMARY
[0003] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0004] Example embodiments improve usability of an interaction device. This benefit may be achieved by the features of the independent claims. Further implementation forms are provided in the dependent claims, the description, and the drawings.
[0005] According to a first aspect, an apparatus comprises: means for detecting position and orientation of a palm of a user on a surface of the apparatus; means for determining, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and means for receiving user input or providing output with the at least one user interface element.
[0006] According to an example embodiment of the first aspect, the apparatus further comprises: means for determining positions of fingers of the user on the surface of the apparatus; and means for determining a blocked area of the surface of the apparatus comprising the position of the palm of the user and the positions of the fingers of the user, wherein the position of the at least one user interface element to be activated is outside the blocked area.
[0007] According to an example embodiment of the first aspect, the at least one user interface element to be activated comprises at least one speaker, at least one light, at least one microphone, or at least a portion of a display.
[0008] According to an example embodiment of the first aspect, the apparatus further comprises: means for determining a position of at least one user input finger of the user on the surface of the apparatus, wherein the at least one user interface element comprises a virtual or physical user input element at the position of the at least one user input finger on the surface of the apparatus.
[0009] According to an example embodiment of the first aspect, the position of the at least one user input finger comprises a position of at least one distal phalange of the at least one user input finger.
[0010] According to an example embodiment of the first aspect, the apparatus further comprises: means for determining the positions of the fingers of the user and/or the position of the at least one user input finger on the surface of the apparatus based on asymmetry of the palm of the user or at least one portion of the palm of the user.
[0011] According to an example embodiment of the first aspect, the at least one portion of the palm of the user comprises a thenar eminence or a hypothenar eminence of the user.
[0012] According to an example embodiment of the first aspect, the apparatus further comprises: means for detecting a position and/or orientation of at least one knuckle, at least one proximal phalange, and/or at least one intermediate phalange of the user on the surface of the apparatus; means for determining, based on the position and orientation of the palm of the user and the position and/or orientation of the at least one knuckle, the at least one proximal phalange, and/or the at least one intermediate phalange of the user, a position of at least one distal phalange of the user on the surface of the apparatus, wherein the position of the at least one user interface element to be activated overlaps with the position of the at least one distal phalange of the user on the surface of the apparatus.
[0013] According to an example embodiment of the first aspect, the apparatus further comprises: means for determining an initial orientation and/or an initial location of the apparatus, in response to determining the position of the at least one user interface element at the surface of the apparatus; means for monitoring current orientation and/or current location of the apparatus with respect to the initial orientation and/or the initial location by at least one sensor of the apparatus; means for detecting a release of the apparatus by the user; means for performing at least one of: determining, based on the current orientation of the apparatus, at least one updated position at the surface of the apparatus pointing towards the user and providing further output with at least one user interface element located at the at least one updated position at the surface of the apparatus, or adjusting, based on a distance between the current location of the apparatus and the initial location of the apparatus, at least one parameter associated with the at least one user interface element located at the at least one updated position at the surface of the apparatus.
[0014] According to an example embodiment of the first aspect, the initial orientation and/or the initial location of the apparatus is determined further in response to detecting the apparatus to be substantially stationary.
[0015] According to an example embodiment of the first aspect, the means for determining the initial orientation or the current orientation of the apparatus comprises at least one gyroscope, and/or the means for determining the current location of the apparatus comprises at least one acceleration sensor.
[0016] According to an example embodiment of the first aspect, the means for detecting the position and orientation of the palm of the user on the surface of the apparatus comprises at least one of: at least one touch sensor, at least one magnetic sensor, at least one force resistive sensor, at least one pressure sensor, or at least one ambient light sensor.
[0017] According to an example embodiment of the first aspect, the means for detecting the position and orientation of the palm of the user on the surface of the apparatus comprises a machine learning model configured to take as input data captured by at least one of: the at least one touch sensor, the at least one magnetic sensor, the at least one force resistive sensor, the at least one pressure sensor, or the at least one ambient light sensor.
[0018] According to an example embodiment of the first aspect, the surface of the apparatus is substantially spherical or substantially elliptical.
[0019] According to a second aspect, a method comprises: detecting position and orientation of a palm of a user on a surface of an apparatus; determining, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and receiving user input or providing output with the at least one user interface element.
[0020] According to an example embodiment of the second aspect, the method further comprises: determining positions of fingers of the user on the surface of the apparatus; and determining a blocked area of the surface of the apparatus comprising the position of the palm of the user and the positions of the fingers of the user, wherein the position of the at least one user interface element to be activated is outside the blocked area.
[0021] According to an example embodiment of the second aspect, the at least one user interface element to be activated comprises at least one speaker, at least one light, at least one microphone, or at least a portion of a display.
[0022] According to an example embodiment of the second aspect, the method further comprises: determining a position of at least one user input finger of the user on the surface of the apparatus, wherein the at least one user interface element comprises a virtual or physical user input element at the position of the at least one user input finger on the surface of the apparatus.
[0023] According to an example embodiment of the second aspect, the position of the at least one user input finger comprises a position of at least one distal phalange of the at least one user input finger.
[0024] According to an example embodiment of the second aspect, the method further comprises: determining the positions of the fingers of the user and/or the position of the at least one user input finger on the surface of the apparatus based on asymmetry of the palm of the user or at least one portion of the palm of the user.
[0025] According to an example embodiment of the second aspect, the at least one portion of the palm of the user comprises a thenar eminence or a hypothenar eminence of the user.
[0026] According to an example embodiment of the second aspect, the method further comprises: detecting a position and/or orientation of at least one knuckle, at least one proximal phalange, and/or at least one intermediate phalange of the user on the surface of the apparatus; determining, based on the position and orientation of the palm of the user and the position and/or orientation of the at least one knuckle, the at least one proximal phalange, and/or the at least one intermediate phalange of the user, a position of at least one distal phalange of the user on the surface of the apparatus, wherein the position of the at least one user interface element to be activated overlaps with the position of the at least one distal phalange of the user on the surface of the apparatus.
[0027] According to an example embodiment of the second aspect, the method further comprises: determining an initial orientation and/or an initial location of the apparatus, in response to determining the position of the at least one user interface element at the surface of the apparatus; monitoring current orientation and/or current location of the apparatus with respect to the initial orientation and/or the initial location by at least one sensor of the apparatus; detecting a release of the apparatus by the user; performing at least one of: determining, based on the current orientation of the apparatus, at least one updated position at the surface of the apparatus pointing towards the user and providing further output with at least one user interface element located at the at least one updated position at the surface of the apparatus, or adjusting, based on a distance between the current location of the apparatus and the initial location of the apparatus, at least one parameter associated with the at least one user interface element located at the at least one updated position at the surface of the apparatus.
[0028] According to an example embodiment of the second aspect, the initial orientation and/or the initial location of the apparatus is determined further in response to detecting the apparatus to be substantially stationary.
[0029] According to an example embodiment of the second aspect, determining the initial orientation or the current orientation of the apparatus is performed based on data captured by at least one gyroscope, and/or determining the current location of the apparatus is performed based on data captured by at least one acceleration sensor.
[0030] According to an example embodiment of the second aspect, detecting the position and orientation of the palm of the user on the surface of the apparatus is performed based on data captured by at least one of: at least one touch sensor, at least one magnetic sensor, at least one force resistive sensor, at least one pressure sensor, or at least one ambient light sensor.
[0031] According to an example embodiment of the second aspect, the method further comprises: detecting the position and orientation of the palm of the user on the surface of the apparatus by a machine learning model configured to take as input data captured by at least one of: the at least one touch sensor, the at least one magnetic sensor, the at least one force resistive sensor, the at least one pressure sensor, or the at least one ambient light sensor.
[0032] According to an example embodiment of the second aspect, the surface of the apparatus is substantially spherical or substantially elliptical.
[0033] According to a third aspect, a computer program or a computer program product comprises program code configured to, when executed, cause an apparatus at least to: detect position and orientation of a palm of a user on a surface of the apparatus; determine, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and receive user input or provide output with the at least one user interface element. The computer program or computer program product may further comprise instructions for causing the apparatus to perform any example embodiment of the method of the second aspect.
[0034] According to a fourth aspect, an apparatus comprises at least one processor; and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus to: detect position and orientation of a palm of a user on a surface of the apparatus; determine, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and receive user input or provide output with the at least one user interface element. The at least one memory and the computer code may be further configured to, with the at least one processor, cause the apparatus to perform any example embodiment of the method of the second aspect.
[0035] Any example embodiment may be combined with one or more other example embodiments. Many of the attendant features will be more readily appreciated as they become better understood by reference to the following detailed description considered in connection with the accompanying drawings.
DESCRIPTION OF THE DRAWINGS
[0036] The accompanying drawings, which are included to provide a further understanding of the example embodiments and constitute a part of this specification, illustrate example embodiments and together with the description help to understand the example embodiments. In the drawings:
[0037] FIG. 1 illustrates an example of an interaction device handled by a user;
[0038] FIG. 2 illustrates another example of an interaction device handled by a user;
[0039] FIG. 3 illustrates an example of a method for training machine learning model(s) for determining position(s) of hand portion(s) on a surface of an interaction device based on sensor data;
[0040] FIG. 4 illustrates an example of extrapolating finger position(s) based on position and orientation of the palm of a user;
[0041] FIG. 5 illustrates an example of updating positions or parameters of active user interface elements at a surface of an interaction device;
[0042] FIG. 6 illustrates an example of an apparatus configured to practise one or more example embodiments; and
[0043] FIG. 7 illustrates a method for determining position(s) of user interface element(s) to be activated at a surface of an interaction device.
[0044] Like references are used to designate like parts in the accompanying drawings.
DETAILED DESCRIPTION
[0045] Reference will now be made in detail to example embodiments, examples of which are illustrated in the accompanying drawings. The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
[0046] Example embodiments of the present disclosure relate to an interaction device, for example a game controller. An interaction device may have any suitable shape enabling a user to grab the interaction device in more than one position and orientation. Examples of such devices include spherical and elliptical interaction devices, which the user may grab in practically any orientation. Dimensions of the interaction device, e.g. radius, may be such that the user is enabled to grab the interaction device with one hand. The radius may be for example lower than 5 cm or 10 cm. It is however possible to apply the embodiments with interaction devices configured to be handled with both hands.
[0047] In order to ease user access to functionality provided by the interaction device, position of user input and output UI element(s) (e.g. virtual button, speaker, microphone, LED, active portion of display) to be activated at the surface of the interaction device may be determined based on position and orientation of the user's palm on the surface of the interaction device. This enables use of the interaction device regardless of the grabbing position or orientation. Also, power may be saved by disabling elements blocked by the hand of the user. Position(s) of the finger(s) of the user (e.g. user input finger(s)) and/or virtual button(s) may be determined based on asymmetry of the palm, or portion(s) thereof, for example the thenar eminence of the user, and extrapolating this information to estimate finger positions. Furthermore, when the user releases the device, e.g. by throwing, orientation of the device may be monitored and user output elements may be selected such that they point towards the user. Further example embodiments are described below.
[0048] FIG. 1 illustrates an example of an interaction device handled by a user. Interaction device 100 may have a spherical shape, as illustrated in FIG. 1, but interaction device 100 may have other shapes as well. Furthermore, interaction device 100 may be substantially spherical or substantially ellipsoidal, for example comprise a plurality of planes arranged to form a polyhedron. It is however noted that the example embodiments may be generally applied to any interaction device that enables user 110 to grab it in at least two different positions and/or orientations.
[0049] Interaction device 100 may comprise at least one sensor, for example for detecting position and/or orientation of hand portions on the surface of interaction device 100. The at least one sensor may comprise touch sensor(s), magnetic sensor(s), force resistive sensor(s), pressure sensor(s), or ambient light sensor(s).
Interaction device 100 may further comprise acceleration sensor(s) (e.g. accelerometer) and/or gyroscope(s), which may be used for determining and monitoring orientation and/or location of interaction device 100. The sensor(s) may be used for detecting user input (e.g. a press or a swipe) at a particular location of the surface of interaction device 100. To enable this, interaction device 100 may for example have a touch-sensitive screen or surface.
[0050] Hand of user 110 may comprise fingers 112, for example thumb 112-1 and index finger 112-2. Fingers 112 may be used for providing user input to interaction device 100, for example via respective virtual buttons 102-1 and 102-2. A finger for which a user input UI element is allocated or activated may be called a user input finger, in contrast to other fingers for which user input UI elements may not be allocated or activated. Even though some example embodiments have been described using thumb 112-1 or index finger 112-2 as an example, it is appreciated that similar functionality may be implemented for other fingers. Also, user input need not be implemented with a single finger, e.g. by pressing a respective virtual button. Instead, user input may comprise other type of user input, e.g. a squeeze, which may comprise contribution of a plurality of fingers and/or other part(s) of the hand of the user 110.
[0051] Interaction device 100 may detect position and orientation of the palm of user 110 on its surface. Based on the position and orientation of the palm, interaction device 100 may determine position(s) of UI element(s) to be activated. Interaction device 100 may activate the UI element(s) at the determined position(s). A UI element may comprise a fixed UI element (e.g. physical button, speaker, or microphone), for which the position at the surface of interaction device 100 is fixed. Alternatively, a UI element may be configurable such that its position at the surface may be changed. Examples of configurable UI elements include virtual buttons and visual elements displayed at particular positions of the surface. UI elements enabling user 110 to control interaction device 100 may be called user input UI elements. UI elements enabling interaction device 100 to provide information to user 110 may be called device output UI elements.
[0053] For example, interaction device 100 may determine to activate virtual button 102-1 at the position of the tip of thumb 112-1 and/or virtual button 102-2 at the position of the tip of the index finger 112-2 based on the position and orientation of the palm, when user 110 grabs interaction device 100. The position and — orientation of the palm may be detected based on data captured by the sensor(s) of interaction device 100, as will be further described below. Positions of the finger(s), for example finger tip(s) (distal phalange), may be determined based on the position and orientation of the palm.
[0054] FIG. 2 illustrates another example of an interaction device handled by a — user. In addition, or alternative to, the user input UI elements already described with reference to FIG. 1, interaction device 100 may comprise one or more microphones 102-3 as user input UI element(s). Interaction device 100 may comprise device output UI elements, such as for example one or more speakers 104-1, one or more 3 displays 104-2, or portion(s) of a display, and/or one or more lights 104-3. ro 25 [0055] Using display 104-2, various type of visual information, e.g images, x videos, or graphics, may be displayed to user 110. Even though illustrated as a
I discrete piece of display, it is understood that a displayable UI element may a
N comprise a visual element displayed at a particular position (portion) of a display. 3 For example, a portion of the surface of interaction device 100, or the entire surface,
N 30 may be configured as a display. A position of a visual UI element may be
N determined within the surface area covered by the display. Lights 104-3, for example their intensity or colour, may be controlled based on various type of information or parameters (e.g. a status of interaction device 100 or status of an application running at interaction device 100).
[0056] When a UI element comprises a user input UI element (e.g. physical or virtual button), the position of the user interface element may be determined such that the user element is overlapping with at least one portion of the hand of user 110. For example, interaction device 100 may determine position(s) of finger tip(s) of user 110 on the surface of interaction device 100 and allocate or activate user interface element(s) at the determined position(s). It is however possible that for other type of user input element(s), e.g. microphone 102-3, the position of the user interface element to be activated may be determined such that the position is not overlapping with the hand of user 110. This enables unobstructed provision of user input to interaction device 100, for example as voice commands via microphone 102-3.
[0057] When a UI element comprises a device output UI element (e.g. speaker, — display, or light), the position of the user interface element may be determined such that the UI element is not overlapping with the hand of user 110. This enables unobstructed provision of device output to user 110, for example as audio signals or visual elements. Device output UI element(s) overlapping with the hand of user 110 may be disabled. This enables to reduce power consumption. — [0058] FIG 3 illustrates an example of a method for training machine learning model(s) for determining position(s) of hand portion(s) on a surface of an interaction device based on sensor data. The sensor data may be captured for example by touch sensor(s), magnetic sensor(s), force resistive sensor(s), pressure 3 sensor(s), or ambient light sensor(s) of interaction device 100, or a combination of ro 25 — these sensors. x [0059] A machine learning (ML) model may comprise a mathematical model that
is trained, based on training data, to provide a desired output for unseen input data.
The training data may include samples of data inputs associated with ground-truth data indicative of the desired output for the particular samples of data inputs. A
benefit of machine learning is that while the ML model is trained with particular
samples of the training data, the model learns to generalize, i.e., provide a (substantially) desired output also for data samples not included in the training data.
[0060] The ML model may be for example configured to output probability distribution(s) indicative of the probability of different positions (e.g. small areas) on the surface of interaction device 100 that are determined to be covered by the palm, or particular portion(s) thereof. The probability distribution may be for example represented by a vector, where each element of the vector indicates a probability of a particular area of the surface to be covered. The output may further comprise, for example concatenated with the probability distribution vector, the detected orientation of the palm, for example as a degree of rotation around an axis perpendicular to the surface at the centre (e.g. centre of mass) of the palm or palm portion. The surface of interaction device 100 may be thus divided into a plurality of micro-areas for facilitating detection of the total area covered by the palm or palm portion(s). Examples of suitable machine learning models/methods include neural networks (e.g. LassoNet), principal component analysis, random forest, support vector machines, and gradient boosting.
[0061] At operation 301, initial data collection may be performed. The initial data collection may be based on a controlled setup for collecting training data, e.g. sensor data samples recorded by interaction device 100 associated with respective ground-truth data, e.g. known positions and orientations of the palm or palm portion. The ground-truth data may be provided in a similar format, for example as a binary vector indicative of the locations covered by the palm or palm portion, optionally concatenated with the known orientation of the palm or palm portion.
The training data may be collected for at least ten persons and for at least twenty repetitions for each grabbing position. Training data may be collected for example for at least ten (e.g. 10-15) grabbing positions. It is however noted that, in general, collecting more training data, either in terms of number of persons, repetitions, and/or grabbing positions may improve performance of ML model(s) to be trained.
[0062] At operation 302, initial ML model(s) may be trained. Training may
comprise modifying parameters of the ML model based on a comparison between the ground-truth data and the output of the ML model for a sample or a set of
samples of the data inputs of the training data. For example, if the output of the ML
model is a probability vector, the output of the ML model may be compared to the ground-truth data for example by the mean square error (MSE) between the vectors.
Changes to parameters of the ML model may be then determined and applied, based on the result of the comparison. For example, in case of neural networks gradients determined based on the comparison may be backpropagated through layers of the neural network and weights of particular nodes of the layers may be changed accordingly. When the ML model is trained iteratively using many training data samples, the ML model will eventually learn to substantially provide the desired output for any input data. This may be tested (cross-validated) with a test dataset, which may be for example 25 % of the entire set of training data. The test data may be used only for validating performance of the ML model, i.e., not for updating parameters of the ML model during the training. Once the ML model has been tested to provide sufficient performance, evaluated for example by accuracy, the (initial) training phase may be concluded.
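A minimal training-loop sketch corresponding to operation 302 is given below. It assumes PyTorch, a synthetic dataset, and an arbitrary small network; the number of micro-areas, the sensor vector length, and the 75/25 train/test split are illustrative assumptions rather than values taken from this disclosure.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split

N_AREAS = 64          # number of surface micro-areas (assumed)
N_SENSORS = 32        # flattened sensor reading per sample (assumed)

# Synthetic stand-in data: sensor readings -> per-area coverage probabilities
# concatenated with a palm orientation value (last element).
X = torch.rand(1000, N_SENSORS)
Y = torch.rand(1000, N_AREAS + 1)

dataset = TensorDataset(X, Y)
n_test = len(dataset) // 4                       # about 25 % held out for validation
train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])

model = nn.Sequential(
    nn.Linear(N_SENSORS, 128), nn.ReLU(),
    nn.Linear(128, N_AREAS + 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                           # MSE between output and ground-truth vectors

for epoch in range(10):
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                          # backpropagate gradients through the layers
        optimizer.step()                         # update node weights accordingly

with torch.no_grad():                            # test data is never used for updates
    total, n = 0.0, 0
    for x, y in DataLoader(test_set, batch_size=64):
        total += loss_fn(model(x), y).item() * len(x)
        n += len(x)
    print("validation MSE:", total / n)
```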
[0063] It is noted that multiple ML models may be trained at this stage, for example for determining positions and/or orientations of particular palm portions, such as for example the thenar eminence and/or the hypothenar eminence. In this case, the ground-truth data may comprise known positions and orientation of the palm portions. Detection of the orientation of the palm or the palm portion(s) may be based on their asymmetry. As the training data includes sensor data samples associated with known orientations of the palm or palm portion(s), the palm or the palm portion(s) being inherently asymmetric, the ML model will learn to recognize the asymmetry from the sensor data and to output the orientation accordingly.
[0064] At operation 303, additional data collection may be performed. Further training data may be collected, for example from volunteers using interaction device 100. At this phase, the positions that are hard to predict may be weighted more than positions that are easy to detect. For example, information may be gathered during the initial training (cf. operation 301) about grabbing positions for which the error
between the output of the ML model and the ground-truth data was the largest.
Further training data may be then selectively gathered from the volunteers, emphasizing the positions that were hard to predict.
[0065] At operation 304, the ML model(s) may be updated. The additional
training data collected at operation 303 may be used to further train the ML model(s). During the further training the positions that are hard to predict may be again evaluated and further training data may be again collected at operation 303.
This enables gradual training of the ML model(s) based on information gathered from users of interaction device 100. Furthermore, more effective training is achieved by emphasising positions that are hard to predict.
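One possible way to emphasise hard-to-predict grabbing positions during the update of operation 304 is weighted sampling. The sketch below assumes PyTorch and uses per-sample errors of the current model as sampling weights; the variable names and data shapes are hypothetical.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Assumed inputs: new_X / new_Y are additional samples gathered from volunteers,
# and per_sample_error holds the current model's error for each sample
# (e.g. MSE against ground truth), so hard-to-predict grabs are sampled more often.
new_X = torch.rand(200, 32)
new_Y = torch.rand(200, 65)
per_sample_error = torch.rand(200)

sampler = WeightedRandomSampler(weights=per_sample_error,
                                num_samples=len(per_sample_error),
                                replacement=True)
loader = DataLoader(TensorDataset(new_X, new_Y), batch_size=32, sampler=sampler)
# `loader` can then drive further training epochs exactly as in the earlier sketch.
```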
[0066] FIG. 4 illustrates an example of extrapolating finger position(s) based on position and orientation of the palm of a user. As described above, when user 110 grabs interaction device 100, the palm, or portion(s) thereof, may be detected based on data captured by sensor(s) of interaction device 100, for example by ML model(s). The sensor(s) may be included at interaction device 100, for example at or in proximity of its surface.
[0067] In one example, interaction device 100 may detect position and orientation of thenar eminence 402 of user 110. Position of thenar eminence 402 may be for example determined to be at the centre of mass of the areas covered by thenar eminence 402, as detected by a respective ML model. The centre of mass may be defined by the areas, which the ML model indicates to be covered by thenar eminence 402, for example by means of the probability distribution vector. Based on the position and orientation of thenar eminence 402, interaction device 100 may estimate (e.g. extrapolate) a position of thumb 112-1. Extrapolation of the thumb position may comprise determining an area that is at a predefined distance and direction (e.g. with respect to the detected orientation of thenar eminence 402) from the centre of mass of thenar eminence 402. Interaction device 100 may then determine a position of virtual button 102-1 at or in the neighbourhood of the estimated thumb position. Similarly, position of index finger 112-2 may be estimated based on position and orientation of thenar eminence 402. Direction of index finger 112-2 may be determined for example by a predetermined offset to the predetermined direction of thumb 112-1. The offset may be for example 30-40
degrees. The offset may be dependent on the dimensions (e.g. radius) of interaction
device 100. For example, the offset may be proportional to the radius of a substantially spherical interaction device.
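The extrapolation described above could, for example, be approximated by stepping a predefined arc length from the thenar-eminence centroid in the direction given by the detected orientation, with an angular offset for the index finger. The sketch below uses a rough tangent-plane approximation and invented numbers (4 cm radius, 6-7 cm arc lengths, a 35 degree offset); it is illustrative only.

```python
import math

def extrapolate_fingertip(center, orientation, arc_distance, radius, angle_offset=0.0):
    """Rough tangent-plane extrapolation on a sphere of the given radius:
    step `arc_distance` along the surface from the palm-portion centroid
    `center` = (theta, phi) in the direction given by the detected palm
    orientation plus `angle_offset` (all angles in radians)."""
    theta, phi = center
    d = arc_distance / radius                 # angular step for the given arc length
    heading = orientation + angle_offset
    # Small-step approximation: move in the local tangent plane.
    new_theta = theta + d * math.cos(heading)
    new_phi = phi + d * math.sin(heading) / max(math.sin(theta), 1e-6)
    return (new_theta, new_phi)

# Example with assumed numbers: 4 cm radius ball, thumb 6 cm along the surface
# from the thenar-eminence centroid, index finger offset by 35 degrees.
thenar_center = (math.radians(70), math.radians(20))
palm_orientation = math.radians(10)
thumb = extrapolate_fingertip(thenar_center, palm_orientation, 0.06, 0.04)
index = extrapolate_fingertip(thenar_center, palm_orientation, 0.07, 0.04,
                              angle_offset=math.radians(35))
print("thumb:", thumb, "index:", index)
```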
[0068] It is however also possible to train an ML model to estimate the position of
thumb 112-1 based on the position and orientation of the palm or palm portion(s).
In this case, input data to the ML model may comprise the position of the palm or palm portion, for example as the probability distribution vector indicative of the surface areas covered by the palm or palm portion. Ground-truth data may comprise the position of thumb 112-1, represented by a similar (but binary) probability distribution vector. Since the palm or palm portion is inherently asymmetric, the
ML model will learn to exploit the asymmetry of the palm or palm portion when determining the position of thumb 112-1.
[0069] Positions of other finger tip(s) (distal phalange) may be determined in a similar manner. It is however noted that also other parts of the palm, e.g. hypothenar eminence 404, may be used as a basis for determining the finger (tip) position(s).
Furthermore, one or more proximal phalanges 406 or one or more intermediate phalanges 408, or one or more knuckles of the hand of user 110 may be used as a basis for determining the finger (tip) position(s). A combination of any two or more of the palm/hand portions may be used for determining finger position(s). Use of multiple palm/hand portions increases the asymmetry of the (combination of) palm/hand portions. This enables to improve accuracy of estimating the finger (tip) position(s). Using other palm portions instead of estimating the finger position directly based on the finger tip improves reliability since a wider perspective for palm position and orientation may be obtained.
Determining user interface positions based on finger tips only may be more susceptible to estimation error due to the more point-like object to be estimated.
Furthermore, determining the finger positions based on the palm enables determining the finger positions even before the fingers actually touch the surface, or in general are sufficiently close to the surface to be detected. This improves user experience, because the user interface elements may be already activated when the fingers touch the surface.
[0070] Alternative to ML, the position and orientation of the palm or palm
portion(s) may be determined algorithmically. For example, the obtained sensor
data may be processed and converted into a suitable format, for example an image representative of the area covered by the palm or palm portion. This data may be
stored at a memory of interaction device 100. Different portions of the hand of user
110 may be then detected for example by comparing the obtained image to reference image(s) stored at the memory of interaction device 100. Asymmetry of the palm or palm portion(s) may be exploited in orientation detection, for example by rotating the obtained image at different degrees and comparing rotated versions of the obtained image to stored reference image(s), e.g. by means of a pixelwise correlation. Position of the palm or palm portion(s) may be determined based on the centre of mass of the palm in the image. Locations of particular parts of the hand, e.g. finger tips, may be then estimated based on the detected position and orientation of the palm or palm portion(s), as described above.
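A simple version of this algorithmic approach can be sketched with NumPy and SciPy: sweep candidate rotations of a stored reference image, score each rotation by pixelwise correlation against the observed image, and take the intensity-weighted centre of mass as the position. The image size, sweep step, and toy reference pattern below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_orientation(observed, reference, step_deg=10):
    """Return the angle (degrees) by which `reference` must be rotated to best
    match `observed`, using a coarse sweep and pixelwise correlation."""
    best_angle, best_score = 0, -np.inf
    for angle in range(0, 360, step_deg):
        candidate = rotate(reference, angle, reshape=False, order=1)
        score = np.corrcoef(candidate.ravel(), observed.ravel())[0, 1]
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score

def centre_of_mass(image):
    """Palm position as the intensity-weighted centre of mass of the image."""
    ys, xs = np.indices(image.shape)
    total = image.sum() or 1.0
    return (float((ys * image).sum() / total), float((xs * image).sum() / total))

# Toy example: an asymmetric blob rotated by 40 degrees should be recovered
# to within the 10-degree sweep step.
reference = np.zeros((64, 64))
reference[20:44, 28:36] = 1.0
reference[20:28, 36:48] = 1.0            # asymmetric "thenar" bump
observed = rotate(reference, 40, reshape=False, order=1)
print(estimate_orientation(observed, reference), centre_of_mass(observed))
```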
[0071] Interaction device 100 may thus detect position and orientation of the palm of user 110, either algorithmically or by means of machine learning. Based on the position and orientation of the palm, interaction device 100 may determine position(s) of UI element(s) to be activated at the surface of interaction device 100.
Subsequently, user input may be received or output may be provided with the UI element(s).
[0072] Based on positions of the palm or palm portions, e.g. thenar eminence 402, hypothenar eminence 404, and other portions of the hand, such as proximal phalange(s) 406, intermediate phalange(s) 408, and/or distal phalanges, interaction device 100 may determine a blocked surface area 410. The blocked surface area 410 may comprise for example a spherical cap configured to cover the locations of the hand of user 110 on the surface, as in the example of FIG. 4, or it may follow the shape of the hand more closely. In case of an ellipsoid, the blocked surface area 410 may comprise an ellipsoidal cap. Interaction device 100 may refrain from allocating or activating (e.g. certain types of) device output UI element(s), e.g. display, light(s), speaker(s) at the blocked surface area 410. In general, device output UI elements may be activated outside the blocked surface area 410 (i.e. within an allowed surface area 412). However, allocation or activation of certain type of device output UI elements, e.g. haptic feedback, may be permitted also within the
blocked surface area 410. Such device output UI elements may be activated for example at location(s) of physical buttons or virtual buttons 102-1, 102-2, or in general at positions overlapping with the hand of user 110.
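One way to model the blocked surface area as a spherical cap is sketched below: hand contact points are treated as unit vectors from the centre of the device, the cap axis is their mean direction, and the cap half-angle covers the farthest contact plus a margin. The margin value and the contact points are illustrative assumptions, not values from this disclosure.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return tuple(x / n for x in v)

def blocked_cap(contact_points, margin_rad=0.2):
    """Approximate the blocked area as a spherical cap: its axis is the mean
    direction of the hand contact points (unit vectors from the ball centre)
    and its half-angle covers the farthest contact point plus a margin."""
    axis = normalize(tuple(sum(p[i] for p in contact_points) for i in range(3)))
    half_angle = max(math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, p)))))
                     for p in contact_points) + margin_rad
    return axis, half_angle

def is_blocked(candidate, axis, half_angle):
    """True if a candidate UI position (direction vector) falls inside the cap."""
    cand = normalize(candidate)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, cand))))
    return math.acos(dot) <= half_angle

# Example: contacts clustered around +Z; a position near -Z stays allowed.
contacts = [normalize(p) for p in [(0.1, 0.0, 1.0), (-0.1, 0.2, 0.9), (0.0, -0.2, 0.95)]]
axis, half_angle = blocked_cap(contacts)
print(is_blocked((0.0, 0.1, 1.0), axis, half_angle))   # True  -> refrain from output here
print(is_blocked((0.0, 0.0, -1.0), axis, half_angle))  # False -> allowed surface area
```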
[0073] FIG. 5 illustrates an example of updating positions or parameters of active
user interface elements at a surface of an interaction device. In response to detecting user 110 to grab interaction device 100, which may trigger the determination of the position of UI element(s), a current orientation of interaction device 100 may be stored as an initial orientation, for example by storing current gyroscope readings.
Alternatively, or additionally, the initial orientation may be stored in response to detecting interaction device 100 to be substantially stationary after the user has grabbed interaction device 100. Detecting interaction device 100 to be substantially stationary may be in response to detecting acceleration of interaction device 100 with respect to any direction to decrease below or equal to a threshold. This enables to estimate the most probable orientation of interaction device 100 for viewing by user 110. As a further condition, the initial orientation may be stored in response to providing device output to user 110, for example by display 104-2. This further increases the likelihood of storing the viewing orientation as the initial orientation.
[0074] At the same time with storing the initial orientation, also an initial location of interaction device 100 may be determined. The initial location may comprise a reference location for subsequently tracking movement of interaction device 100, for example by an acceleration sensor of interaction device 100. Interaction device 100 may monitor current location and/or current orientation of interaction device 100 with respect to the initial location and/or initial orientation. It is noted that the absolute location (e.g. geographical coordinates) need not be determined.
[0075] At time t1, interaction device 100 may detect that user 110 has released interaction device 100. This may be done in response to determining that the hand of user 110 is no longer on the surface of interaction device 100, for example by an
ML model or directly based on sensor data. When interaction device 100 is released
(e.g. thrown) by user 110, it may move along trajectory 502 until it stops.
[0076] At time t2, interaction device 100 may detect that it is again (i.e. after detecting the release) stationary. In response, interaction device 100 may determine, based on its current orientation, updated position(s) pointing towards the user for
provision of further output with user interface element(s) located at the updated
position(s). Furthermore, interaction device 100 may adjust, based on a distance between the current and initial locations of interaction device 100, parameter(s) of
user interface element(s) located at the updated position(s) at its surface. For
example, size of a visual UI element or volume of an audio signal may be increased proportional to the distance. It is however noted that updated position(s) and/or parameter(s) for the UI element(s) may be determined also during movement along trajectory 502. Updating the position or parameters of the UI elements enables user 110 to more easily observe the output provided by interaction device 100 after the release. Determining the updated location(s) after interaction device 100 has stopped enables to save power, since potentially unnecessary updates during the movement along trajectory 502 may be avoided.
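The post-release behaviour described above could be prototyped roughly as below, assuming the accumulated rotation since the initial orientation is available as a rotation matrix (e.g. from gyroscope integration) and the travelled distance comes from integrated acceleration. Translation of the device relative to the user is ignored when picking the surface point, and the volume gain per metre is an invented parameter.

```python
import numpy as np

def surface_point_towards_user(rotation_from_initial, user_direction_initial):
    """Surface point (unit vector in the device frame) that currently points
    back towards the user: rotate the user direction, recorded in the initial
    device frame, by the inverse of the accumulated device rotation."""
    return rotation_from_initial.T @ user_direction_initial

def adjust_volume(base_volume, distance_m, gain_per_metre=0.5, max_volume=1.0):
    """Scale output volume with the distance travelled since release."""
    return min(max_volume, base_volume * (1.0 + gain_per_metre * distance_m))

# Example: the device rolled 90 degrees about its x-axis and ended up 2 m away.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
rotation = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])  # from gyroscope integration
user_dir = np.array([0.0, 1.0, 0.0])                     # towards user at release time
print(surface_point_towards_user(rotation, user_dir))    # -> approx. [0, 0, -1]
print(adjust_volume(0.4, distance_m=2.0))                # -> 0.8
```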
[0077] FIG. 6 illustrates an example embodiment of an apparatus 600, for example interaction device 100, or a component (e.g. circuitry) thereof. Apparatus 600 may therefore be the interaction device 100 or be located within interaction device 100 as a sub-system of interaction device 100. Apparatus 600 may comprise at least one processor 602. At least one processor 602 may comprise, for example, one or more of various processing devices or processor circuitry, such as for example a co-processor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware (HW) accelerator, a special-purpose computer chip, or the like.
[0078] Apparatus 600 may further comprise at least one memory 604. The at least one memory 604 may be configured to store, for example, computer program code or the like, for example operating system software and application software. The at least one memory 604 may comprise one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination thereof. For example, the at least one memory 604 may be embodied as magnetic storage devices (such as hard disk drives, floppy disks, magnetic tapes, etc.), optical magnetic storage devices, or semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).
[0079] Apparatus 600 may further comprise a communication interface 608 configured to enable apparatus 600 to transmit and/or receive information to/from other devices. Communication interface 608 may be configured to provide at least one wireless radio connection, such as for example a 3GPP mobile broadband connection (e.g. 3G, 4G, 5G, 6G). However, communication interface 608 may be configured to provide, alternatively or additionally, one or more other types of connections, for example a wireless local area network (WLAN) connection such as for example standardized by the IEEE 802.11 series or the Wi-Fi Alliance; a short-range wireless network connection such as for example a Bluetooth, NFC (near-field communication), or RFID connection; a wired connection such as for example a local area network (LAN) connection, a universal serial bus (USB) connection or an optical network connection, or the like; or a wired Internet connection.
[0080] Communication interface 608 may comprise, or be configured to be coupled to, an antenna or a plurality of antennas to transmit and/or receive radio frequency signals. One or more of the various types of connections may also be implemented as separate communication interfaces, which may be coupled or configured to be coupled to an antenna or a plurality of antennas.
[0081] Apparatus 600 may further comprise user interface 610 comprising at least one input device and/or at least one output device. The input device may take various forms such as for example a physical button, a virtual button, a touch screen, a microphone, or the like. The output device may for example comprise a display, a speaker, a light (e.g. LED), or the like. Apparatus 600 may further comprise one or more sensors 612, for example touch sensor(s), magnetic sensor(s), force resistive sensor(s), pressure sensor(s), ambient light sensor(s), acceleration sensor(s), and/or gyroscope(s), as described above.
[0082] When apparatus 600 is configured to implement some functionality, some component and/or components of the apparatus 600, such as for example the at least one processor 602 and/or the at least one memory 604, may be configured to implement this functionality. Furthermore, when the at least one processor 602 is configured to implement some functionality, this functionality may be implemented using the program code 606 comprised, for example, in the at least one memory 604.
[0083] The functionality described herein may be performed, at least in part, by one or more computer program product components such as software components. According to an embodiment, the apparatus comprises a processor or processor circuitry, such as for example a microcontroller, configured by the program code when executed to execute the embodiments of the operations and functionality described. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).
[0084] Apparatus 600 may comprise means for performing at least one example embodiment described herein. In one example, the means comprises the at least one processor 602 and the at least one memory 604 including the program code 606 configured to, when executed by the at least one processor 602, cause the apparatus 600 to perform the example embodiment(s).
[0085] Apparatus 600 may comprise for example a computing device such as for example a game controller, an emergency alert device, or the like. Although apparatus 600 is illustrated as a single device, it is appreciated that, wherever applicable, functions of the apparatus 600 may be distributed to a plurality of devices, for example to implement example embodiments as a cloud computing service.
[0086] FIG. 7 illustrates a method for determining position(s) of user interface element(s) to be activated at a surface of an apparatus.
[0087] At 701, the method may comprise detecting position and orientation of a palm of a user on a surface of the apparatus.
[0088] At 702, the method may comprise determining, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus.
[0089] At 703, the method may comprise receiving user input or providing output with the at least one user interface element.
[0090] Further features of the method may directly result for example from the functionalities and parameters of the interaction device 100, or in general apparatus 600, as described in the appended claims and throughout the specification, and are therefore not repeated here. Different variations of the method may also be applied, as described in connection with the various example embodiments.
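Purely as a sketch of how operations 701 to 703 could be chained, the following example is provided; the device methods detect_palm, place_ui_elements, activate_element_at and handle are hypothetical names standing in for whatever palm-detection and layout logic a concrete device would use, and are not part of the described apparatus.

```python
def run_interaction_cycle(device):
    # 701: detect position and orientation of the user's palm on the surface
    palm_position, palm_orientation = device.detect_palm()        # hypothetical helper

    # 702: determine where user interface element(s) should be activated,
    # e.g. outside the area blocked by the palm and fingers
    ui_positions = device.place_ui_elements(palm_position, palm_orientation)

    # 703: use the activated element(s) for input and/or output
    for position in ui_positions:
        element = device.activate_element_at(position)
        element.provide_output("ready")
        user_input = element.read_input()
        if user_input is not None:
            device.handle(user_input)
```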
[0091] An apparatus may be configured to perform or cause performance of any aspect of the method(s) described herein. Further, a computer program or a computer program product may comprise instructions for causing, when executed, an apparatus to perform any aspect of the method(s) described herein. Further, an apparatus may comprise means for performing any aspect of the method(s) described herein. According to an example embodiment, the means comprises at least one processor and at least one memory including program code configured to, when executed by the at least one processor, cause performance of any aspect of the method(s). An apparatus may therefore comprise at least one processor and at least one memory including program code configured to, when executed by the at least one processor, cause the apparatus to perform any aspect of the method(s).
[0092] Any range or device value given herein may be extended or altered without losing the effect sought. Also, any embodiment may be combined with another embodiment unless explicitly disallowed.
[0093] Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims, and other equivalent features and acts are intended to be within the scope of the claims.
[0094] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items.
[0095] The steps or operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the embodiments described above may be combined with aspects of any of the other embodiments described to form further embodiments without losing the effect sought.
[0096] The term 'comprising' is used herein to mean including the method, blocks, or elements identified, but such blocks or elements do not comprise an exclusive list, and a method or apparatus may contain additional blocks or elements.
[0097] It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this specification.

Claims (15)

1. An apparatus, comprising: means for detecting position and orientation of a palm of a user on a surface of the apparatus; means for determining, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and means for receiving user input or providing output with the at least one user interface element.
2. The apparatus according to claim 1, further comprising: means for determining positions of fingers of the user on the surface of the apparatus; and means for determining a blocked area of the surface of the apparatus comprising the position of the palm of the user and the positions of the fingers of the user, wherein the position of the at least one user interface element to be activated is outside the blocked area.
3. The apparatus according to claim 2, wherein the at least one user interface element to be activated comprises at least one speaker, at least one light, at least one microphone, or at least a portion of a display.
4. The apparatus according to claim 1, further comprising: means for determining a position of at least one user input finger of the user on the surface of the apparatus, wherein the at least one user interface element comprises a virtual or physical user input element at the position of the at least one user input finger on the surface of the apparatus.
5. The apparatus according to claim 4, wherein the position of the at least one user input finger comprises a position of at least one distal phalange of the at least one user input finger.
6. The apparatus according to any of claims 2 to 5, further comprising: means for determining the positions of the fingers of the user and/or the position of the at least one user input finger on the surface of the apparatus based on asymmetry of the palm of the user or at least one portion of the palm of the user.
7. The apparatus according to claim 6, wherein the at least one portion of the palm of the user comprises a thenar eminence or a hypothenar eminence of the user.
8. The apparatus according to any of claims 1 to 7, further comprising: means for detecting a position and/or orientation of at least one knuckle, at least one proximal phalange, and/or at least one intermediate phalange of the user on the surface of the apparatus; means for determining, based on the position and orientation of the palm of the user and the position and/or orientation of the at least one knuckle, the at least one proximal phalange, and/or the at least one intermediate phalange of the user, a position of at least one distal phalange of the user on the surface of the apparatus, wherein the position of the at least one user interface element to be activated overlaps with the position of the at least one distal phalange of the user on the surface of the apparatus.
9. The apparatus according to any of claims 1 to 8, further comprising: means for determining an initial orientation and/or an initial location of the apparatus, in response to determining the position of the at least one user interface element at the surface of the apparatus; means for monitoring current orientation and/or current location of the apparatus with respect to the initial orientation and/or the initial location by at least one sensor of the apparatus; means for detecting a release of the apparatus by the user; means for performing at least one of:
determining, based on the current orientation of the apparatus, at least one updated position at the surface of the apparatus pointing towards the user and providing further output with at least one user interface element located at the at least one updated position at the surface of the apparatus, or adjusting, based on a distance between the current location of the apparatus and the initial location of the apparatus, at least one parameter associated with the at least one user interface element located at the at least one updated position at the surface of the apparatus.
10. The apparatus according to claim 9, wherein the initial orientation and/or the initial location of the apparatus is determined further in response to detecting the apparatus to be substantially stationary.
11. The apparatus according to claim 9 or claim 10, wherein the means for determining the initial orientation or the current orientation of the apparatus comprises at least one gyroscope, and/or wherein the means for determining the current location of the apparatus comprises at least one acceleration sensor.
12. The apparatus according to any of claims 1 to 11, wherein the means for detecting the position and orientation of the palm of the user on the surface of the apparatus comprises at least one of: at least one touch sensor, at least one magnetic sensor, at least one force resistive sensor, at least one pressure sensor, or at least one ambient light sensor.
13. The apparatus according to any of claims 1 to 12, wherein the surface of the apparatus is substantially spherical or substantially elliptical.
14. A method, comprising: detecting position and orientation of a palm of a user on a surface of the apparatus; determining, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and receiving user input or providing output with the at least one user interface element.
15. A computer program comprising program code configured to, when executed, cause an apparatus at least to: detect position and orientation of a palm of a user on a surface of the apparatus; determine, based on the position and orientation of the palm of the user, a position of at least one user interface element to be activated at the surface of the apparatus; and receive user input or provide output with the at least one user interface element.
FI20225387A 2022-05-04 2022-05-04 Interaction device FI20225387A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FI20225387A FI20225387A1 (en) 2022-05-04 2022-05-04 Interaction device
PCT/FI2023/050227 WO2023214113A1 (en) 2022-05-04 2023-04-25 Interaction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20225387A FI20225387A1 (en) 2022-05-04 2022-05-04 Interaction device

Publications (1)

Publication Number Publication Date
FI20225387A1 true FI20225387A1 (en) 2023-11-05

Family

ID=86328350

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20225387A FI20225387A1 (en) 2022-05-04 2022-05-04 Interaction device

Country Status (2)

Country Link
FI (1) FI20225387A1 (en)
WO (1) WO2023214113A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430147B2 (en) * 2010-04-23 2016-08-30 Handscape Inc. Method for user input from alternative touchpads of a computerized system
US20130300668A1 (en) * 2012-01-17 2013-11-14 Microsoft Corporation Grip-Based Device Adaptations
KR102137240B1 (en) * 2013-04-16 2020-07-23 삼성전자주식회사 Method for adjusting display area and an electronic device thereof
US20160259544A1 (en) * 2015-03-04 2016-09-08 Artem Polikarpov Systems And Methods For Virtual Periphery Interaction
US10067668B2 (en) * 2016-09-09 2018-09-04 Htc Corporation Portable electronic device, operating method for the same, and non-transitory computer readable recording medium

Also Published As

Publication number Publication date
WO2023214113A1 (en) 2023-11-09

Similar Documents

Publication Publication Date Title
TWI786313B (en) Method, device, storage medium, and apparatus of tracking target
US10216406B2 (en) Classification of touch input as being unintended or intended
CN105308538B (en) The system and method acted based on detected dumb show performs device
TWI546725B (en) Continued virtual links between gestures and user interface elements
EP3734430A1 (en) Method for copying multiple text segments and mobile terminal
Ketabdar et al. Towards using embedded magnetic field sensor for around mobile device 3D interaction
CN111443842B (en) Method for controlling electronic equipment and electronic equipment
CN103562820B (en) Target ambiguities are eliminated and correction
WO2012001225A1 (en) Methods and apparatuses for facilitating task switching
JP2018073424A (en) Dynamic haptic generation based on detected video events
CN105900056A (en) Hover-sensitive control of secondary display
CN110007996B (en) Application program management method and terminal
CN110989881B (en) Icon arrangement method and electronic equipment
CN109376781B (en) Training method of image recognition model, image recognition method and related device
CN109568938A (en) More resource game touch operation methods, device, storage medium and terminal
CN108646973A (en) Put out screen display methods, mobile terminal and computer readable storage medium
CN109753425A (en) Pop-up processing method and processing device
CN109165075A (en) Application display method, device, electronic equipment and storage medium
WO2017114967A1 (en) Indoor room-localization system and method thereof
CN109828672B (en) Method and equipment for determining man-machine interaction information of intelligent equipment
CN111459350A (en) Icon sorting method and device and electronic equipment
CN107533566A (en) Method, portable electric appts and the graphic user interface retrieved to the content of picture
FI20225387A1 (en) Interaction device
CN110580055B (en) Action track identification method and mobile terminal
CN111459313A (en) Object control method, touch pen and electronic equipment