EP3161604A1 - Method for providing data input using a tangible user interface - Google Patents

Method for providing data input using a tangible user interface

Info

Publication number
EP3161604A1
Authority
EP
European Patent Office
Prior art keywords
data input
physical
objects
support
support portions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15731054.1A
Other languages
German (de)
French (fr)
Inventor
Eric Tobias
Valérie Maquil
Thibaud Latour
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luxembourg Institute of Science and Technology LIST
Original Assignee
Luxembourg Institute of Science and Technology LIST
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luxembourg Institute of Science and Technology LIST filed Critical Luxembourg Institute of Science and Technology LIST
Publication of EP3161604A1
Legal status: Withdrawn

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected

Definitions

  • the invention relates to the field of Tangible User Interfaces, TUI. Specifically, the invention relates to a method for providing data input at a computing device, wherein the data input is based on multiple values provided by one or more users using tangible user interface elements.
  • GUI Graphical User Interface
  • symbols are commonly used as metaphors for implying the functionality of a user interface element.
  • the use of efficient metaphors renders a user interface easy to manipulate and reduces the time and effort required by a user to get accustomed to using the interface.
  • a GUI button bearing the image of a floppy disk drive is instantly understood by a user as leading to the underlying software functionality of saving a document when the button is clicked using a pointing device. The stronger the user's association of the used metaphor with the associated functionality is, the more efficient an interface element will be.
  • Touch-based user interfaces have been able to propose several strong metaphors.
  • the known feature of pinching fingers on an image displayed on a touch-sensitive display to zoom into the image improves the usability of the software-implemented zoom functionality, as the user quickly associates the metaphor of the touch gesture to the underlying implied functionality.
  • TUI Tangible User Interface
  • the Tangible User Interface, which relies on the interaction of a user with physical objects to interact with software functionalities, has been identified as a good candidate for providing strong user interface metaphors.
  • a method for providing data input at a computing device using a tangible human-computer interface comprises the steps of: providing a tangible human-computer interface comprising physical support means having a plurality of support portions, processing means, and probing means capable of detecting physical objects on each of said support portions;
  • the step of data input generation comprises assigning the number of physical objects detected on a support portion to the corresponding operand, and evaluating the predetermined data input function using the assigned operand values.
  • the physical support means comprises at least as many support portions as the predetermined data input function comprises operands.
  • an operand of said predetermined data input function may be associated with a plurality of said support portions.
  • each operand of said predetermined data input function may be associated with at least one of said support portions.
  • each operand of said predetermined data input function may be associated with exactly one of said support portions.
  • the step of detecting at least one physical object may preferably comprise counting the number of objects detected on any of said support portions. Even more preferably, the step may comprise detecting physical properties of any of the detected objects, said properties comprising, for example, any of the shape, physical dimensions, weight, or color of the objects.
  • the step of data input generation may preferably comprise associating the number of detected physical objects on a portion and/or their detected physical properties with the corresponding operand.
  • the physical support means may preferably comprise a substantially planar surface, and said support portions may preferably be portions of said surface.
  • the probing means may preferably be optical probing means comprising image sensing means. Alternatively, the probing means may comprise means capable of detecting electromagnetic signals, or metal probing means.
  • the human-computer interface may further preferably comprise a substantially planar surface, wherein the probing means are arranged so that they are capable of detecting the presence of said physical support means and/or physical objects on said surface.
  • Said surface of the human-computer interface may preferably comprise the physical support means.
  • the human-computer interface may preferably comprise feedback means.
  • the feedback means may comprise visual feedback means provided by display means or projection means.
  • the feedback means may further comprise audio feedback means or haptic feedback means.
  • the feedback means may be operatively connected to the processing means.
  • the method may advantageously further comprise the step of providing feedback, wherein the feedback indicates the location of the support portions and/or the operands associated with said support portions.
  • the method may further preferably comprise the step of providing a feedback that is generated upon provision of said data input at said computing device.
  • the substantially planar surface of the human-computer interface may preferably be at least partially translucent.
  • the probing means may preferably comprise image sensing means and optical means, and may further preferably be provided underneath said surface and comprise a field of view in which they are capable of detecting objects. The field of view may be directed towards said surface and said portions of the physical support means may be at least partially translucent, so that said probing means are capable of detecting a physical object on said portion from underneath, when the physical support means are placed on top of said surface.
  • the probing means may further preferably be configured for detecting physical properties of said objects. More preferably, the probing means may further be configured for detecting the relative positions of objects that are detected on any given surface portion with respect to each other.
  • the step of generating data input may comprise associating the detected objects with the corresponding operand depending on their detected relative positions.
  • the step of generating the input data may comprise the computation of a weighted average of the operands, wherein the weights correspond to the number of physical objects detected on the support portions corresponding to said operands. Further, the weights may preferably depend on the properties, physical or other, of the objects detected on the support portions corresponding to the operands.
  • the predetermined function may be a selection function, the evaluation of which results in the operand to which the highest value is assigned.
  • a device for carrying out the method according to the invention comprises a tangible human-computer interface, a memory element, and processing means, wherein the human-computer interface comprises physical support means having a plurality of support portions and probing means capable of detecting physical objects on each of said support portions.
  • the processing means are configured to:
  • the generation of the data input comprises assigning the number of physical objects detected on a support portion to the corresponding operand, and evaluating the predetermined data input function using the assigned operand values.
  • a computer capable of carrying out the method according to the invention is provided.
  • a computer program comprises computer readable code, which when run on a computer, causes the computer to carry out the method according to the invention.
  • a computer program product comprising a computer-readable medium is provided.
  • the computer program according to the invention is stored on the computer-readable medium.
  • the invention relates to Tangible User Interfaces and provides an efficient metaphor for enabling the collaborative input of data by multiple users.
  • An example of an application in which the invention finds its use may be a collaborative choice application or a collaborative vote among possible data input values. If the method is used by a single user, the user may provide a multi-dimensional data input value in a natural way.
  • the user of a computing device may be confronted with a request from a software application that is executed on the device, for providing a combination of three predetermined color component values (Red, Green, and Blue).
  • the user may place objects onto three surface portions of the human-computer interface that represent the three options.
  • the method of the invention allows weighting of the chosen Red, Green, and Blue components and providing the resulting combination as input to the software application. This may be achieved, for example, by considering the number of objects placed on the respective surface portions, or by measuring their physical weight or other physical properties.
  • a visual feedback channel may be provided in such an application, wherein the resulting color mix is visualized instantly to the user. The user may in return modify the position of the objects to change the data input.
  • each one of a plurality of users of a software application that is executed on a computing device may use objects of a specific color and put them onto support portions representing operands of a selection function.
  • the number of objects subsequently detected on the support portions may for example determine the overall voting result, based on majority or weighted voting.
  • the contribution of each user's input to the overall result may further be tracked by also taking the color of the detected objects on each support portion into account.
  • Each user can manipulate their respective objects independently and concurrently with each other user.
  • each user may modify their contribution as they wish, i.e., incrementally by adding/removing a single object from the corresponding surface portion, or by adding/removing a plurality of objects at the same time.
  • the interaction with the computing device is, therefore, enriched and rendered more natural by the present invention.
  • the data input contribution provided by each user remains persistent through the use of physical objects.
  • the position of the physical objects on the support means indicates the current data input contributions of each user at any time instant.
  • the method and device according to the invention enable collaborative data input and data manipulation at a computing device, which have not been achieved using known human-computer interfaces.
  • the underlying principle of the claimed invention may find application in many specific data input situations, mainly wherein collaborative input by multiple users is required.
  • Figure 1 is a flow diagram illustrating the main method steps according to a preferred embodiment of the method according to the invention
  • Figure 2 is a schematic illustration of a preferred embodiment of a device for implementing the method according to the invention
  • Figure 3 is a schematic illustration of a preferred embodiment of a device for implementing the method according to the invention.
  • FIG. 4 is a schematic illustration of a detail of a preferred embodiment of a device for implementing the method according to the invention.
  • Figure 5 is a schematic illustration of a detail of a preferred embodiment of a device for implementing the method according to the invention in a lateral cut view;
  • Figures 6a, 6b and 6c are top views of a detail of a device for implementing the method according to the invention, according to three preferred embodiments.
  • FIG. 1 shows the main steps of the method according to the invention in a preferred embodiment.
  • Figure 2 illustrates elements of a device for implementing the method according to the invention.
  • a human-computer interface 100, specifically a Tangible User Interface, TUI, is provided
  • the TUI allows one or more users to provide input to a computing device 190, for which it acts as the interface.
  • the interface comprises probing means 130 capable of detecting physical objects 140 on portions 112 of a physical support means 110.
  • the physical support means may be a table top or a physical TUI widget placed upon a table top, or it may have any geometrical shape that allows it to support a physical object 140 detectable by said probing means.
  • the probing means 130 are operatively connected to processing means 120 such as a central processing unit, CPU.
  • the processing means 120 have read/write access to a non-illustrated memory element.
  • a predetermined data input function is associated with the physical support means.
  • the data input function may for example be any mathematical formula involving multiple operands to which individual input values may be assigned before the function is evaluated.
  • the predetermined data input function may be a selection function, the operands of which are available choices. It will be appreciated that any data input function involving multiple operands may be associated with the physical support means without leaving the framework of the present invention.
  • an operand 114 of said predetermined data input function is associated with each one of said support portions 112.
  • each operand 114 of the corresponding selection data input function may for example represent a possible answer to the question.
  • the mapping between support portions 112 and operands 114 is stored in the memory element to which the processing means 120 have read access.
  • in a subsequent step 30, the presence of physical objects 140 on any of said support portions 112 is probed by said probing means 130.
  • data input 102 is generated and provided to the computing device 190.
  • the generation of the data input comprises assigning the physical objects 140 detected on a support portion 112 to the corresponding operand 114 of the predetermined data input function.
  • the result of the data input function is evaluated using the operand values that have been assigned to the operands in the previous step. The result is the provided data input value. This enables for example the selection of one of a plurality of predetermined answers to a question, or to weigh each of the operands according to the number of detected objects on the corresponding support portion.
  • the result of the data input function may for example provide a weighted average of the operands 114, wherein the weights are given as a function of the number of objects detected on the corresponding support portions.
  • other useful combinations of the predetermined data input function and the assignment of values to the corresponding operands thereof, based on the detected presence of physical objects on the corresponding support portions, may be applied by the skilled person without leaving the scope of the present invention.
  • the method may be conditionally repeated by updating the operands in step 20, preferably based on the provided input in the earlier instance of the method.
  • a simple non-limiting example is the use of a sum of two operands A+B as the data input function, wherein two distinct support portions are associated respectively with the operands A and B. The number count of detected objects on each of the corresponding support portions is assigned to the respective operands. Finally, the corresponding numbers are added to provide the final data input.
  • the complexity of the data input function may be chosen at will and depending on the specific needs arising in different application scenarios.
  • Figure 3 shows a further embodiment of the device according to the invention, wherein the human-computer interface 200 comprises a substantially planar surface 204, on which the physical support means 210 having the support portions 212 are placed.
  • the planar surface 204 may for example be a table-top whereas the support means 210 are a physical TUI widget delimiting the regions corresponding to portions 212.
  • a user may manipulate objects 240 on any one of the support portions 212, before passing the widget on to a different user, who in turn manipulates the objects on any of the support portions, whereby the two or more users collaboratively contribute to the final data input.
  • Probing means 230 are arranged so that they are capable of detecting the presence of the support means 210 as well as objects on the planar surface.
  • the probing means may for example comprise optical probing means 232 such as image sensing means or image depth sensing means coupled to appropriate focusing lenses.
  • the image sensing means may be provided by means of a charge-coupled-device array, while depth sensing means may be provided by means of a time-of-flight camera.
  • the field of view of such probing means is in that case oriented so as to cover at least the part of the surface 204 onto which the support means 210 may be placed.
  • using the processing means 220, the detection of objects 240 based on the data provided by such probing means 230 is readily achieved.
  • Image segmentation and object detection algorithms useful to that end are as such known in the art and will not be explained in further detail within the context of the present invention.
  • FIG. 4 illustrates another embodiment according to which the planar surface 304 comprises the physical support means 310 having the support portions 312.
  • the planar surface may for example be a surface on which visual feedback is provided using feedback means operatively coupled to the processing means.
  • the surface may comprise a display device, which indicates the regions corresponding to the surface portions 312 to a user.
  • the visual feedback may alternatively be provided using display projection means.
  • the location of the surface portions 312 may be indicated by haptic feedback means.
  • the feedback means may also be used to provide feedback on the generated input to one or more users who have provided the objects based on which the input has been generated.
  • Such feedback may comprise optical feedback, haptic feedback, audio feedback, or any combinations thereof.
  • the feedback may also comprise updating the operands 314 associated with the support portions 312 as a function of the generated input data.
  • Figure 5 schematically illustrates a lateral cut view of a further preferred embodiment of a device according to the invention.
  • the probing means 433 are arranged underneath the surface 404 on which the support means 410 are placed.
  • both the surface 404 and the regions of the support means corresponding to the support portion 412 are made of a material that is at least partially translucent. This is to make sure that an object 440 supported by the support means 410 is not occluded from the field of view of the probing means by either the support means 410 or the surface 404.
  • the probing means 433 may for example be optical scanning means.
  • detection of a support portion and of the presence of one or several objects on a support portion may be achieved by several techniques as such known in the art.
  • an object to be detected may be tagged with an optical marker or graphical tag easily recognizable by the optical probing means.
  • the tag may be a simple high-contrast dot, or a combination of black and white squares.
  • the tags should be arranged on the objects so that they are within the field of view of and capable of being detected by the probing means. In the embodiment shown in Figure 3, the tag should be arranged on a side of an object 240 which is opposed to the side facing the support portion 212.
  • the tag should be on the bottom side of the object 440, which corresponds to the side that contacts the support portion 412.
  • the tags, if they uniquely identify each object, may also be used by the processing means to identify the objects placed on the support portions.
  • the tags may alternatively emit electromagnetic signals, which are detectable by the appropriately chosen probing means.
  • Other tagging means are known in the art and applicable to the invention. Their specific description would, however, extend beyond the description of the invention as such.
  • the identity of each detected object on a detected support portion may lead to an identity-specific association of the object with the corresponding operand of the predetermined data input function.
  • all objects of a particular user may be tagged using the same tag, while each user is attributed a distinct tag.
  • the contribution of each user to each of the operands, and therefore to the final resulting input value, may then be straightforwardly tracked by the processing means.
  • a similar function may be achieved by using objects having different physical properties, such as size, shape, color or others, provided that the probing means used to detect the presence and identity of the objects are capable of probing such physical properties.
  • Such probing means are well known in the art and are within the reach of the skilled person.
  • the probing means are capable of identifying the relative positions of a plurality of objects on any of the support portions.
  • the relative position of a plurality of objects may be used as described above to influence the value that is attributed to the associated operand during the step of generating the data input.
  • physical clustering of objects may be interpreted by the processing means as providing more weight to a given value.
  • Figures 6a, 6b and 6c provide exemplary embodiments of physical support means for supporting objects as required by the invention. These configurations are applicable to any embodiments described above.
  • the physical support means 510 have a circular shape, wherein the portions 512 thereof, with which the operands 514 are associated, are equiangular sectors of the circular shape.
  • the support portions 612 with which the operands 614 are associated do not provide an integral partition of the support means 610.
  • the support means 710 comprise two disjoint support portions indicated by hatched rectangular features, which are associated with a first operand of the predetermined data input function. Two further disjoint support portions indicated by white rectangular features are associated with a second operand of the predetermined data input function, which is distinct from the first operand.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a method and device for using a Tangible User Interface, TUI, to collaboratively provide data input at a computing device. The method enables the use of strong metaphors for associating multiple predetermined choices in reply to an input request formulated by a software application that is executed by a computing device to which the TUI provides an input interface.

Description

Method for providing data input using a tangible user interface

Technical field

The invention relates to the field of Tangible User Interfaces, TUI. Specifically, the invention relates to a method for providing data input at a computing device, wherein the data input is based on multiple values provided by one or more users using tangible user interface elements.

Background of the invention
The area of human-computer interfaces relies heavily on the use of metaphors. For example, in the field of Graphical User Interfaces, GUI, symbols are commonly used as metaphors for implying the functionality of a user interface element. The use of efficient metaphors renders a user interface easy to manipulate and reduces the time and effort required by a user to get accustomed to using the interface. For example, a GUI button bearing the image of a floppy disk drive is instantly understood by a user as leading to the underlying software functionality of saving a document when the button is clicked using a pointing device. The stronger the user's association of the used metaphor with the associated functionality is, the more efficient an interface element will be.
Touch-based user interfaces have been able to propose several strong metaphors. As an example, the known feature of pinching fingers on an image displayed on a touch-sensitive display to zoom into the image improves the usability of the software-implemented zoom functionality, as the user quickly associates the metaphor of the touch gesture to the underlying implied functionality.
Human beings are used to manipulating physical objects. Therefore, the Tangible User Interface, TUI, which relies on the interaction of a user with physical objects to interact with software functionalities, has been identified as a good candidate for providing strong user interface metaphors. For a TUI to be usable in a natural way, using not only the user's visual capacities as traditional GUIs do, but also conveying information on a haptic and topological level, the interface needs to apply an efficient set of metaphors to imply functionality [1,2]. It is known that the application of metaphors works by designing the physical part of a Tangible User Interface, also called the tangible widgets, to imply a metaphorical context [3]. The implication of functionality relies mainly on the shape, color and texture of the objects that are used as tangible widgets. However, any physical stimulus may be used to imply functionality, such as for example smells or sounds. The metaphorical context allows well-known metaphors to be applied, mapping the source domain (containing a familiar concept such as rotating a dial or shaping clay) into the target domain (altering the state of a digital object).
Recent research in TUIs [4,5,6] has focused on the use of table-top surface-like devices. A milestone which has triggered a lot of research and implementations using tabletop TUIs has been reacTable [7], implementing an electronic music instrument. The work spent on the reacTable has been refined and distilled into the reacTIVision framework [8]. The framework provides facilities for tracking, using a below-table infrared camera, objects on the table via fiducial markers attached to their base.
Existing tabletop applications provide a variety of interaction components. They use the paradigm of direct manipulation [4] or convey a more generic meaning [9]. Most known approaches use a direct mapping between a single object that is manipulated and a single associated input data value. Some known examples [10] allow for dual hand input and have some specific interactions which specifically require the concurrent use of two pucks. Further, widgets that are able to alter their position or shape [11], and hence can provide haptic feedback, have been disclosed.
To the best of the applicant's knowledge, none of the interaction techniques of Tangible User Interfaces known from the prior art provide an efficient method for enabling the collaborative input of data by multiple users. An example of an application in which such data input is needed may be a collaborative choice application or a collaborative vote among possible data input values.
Technical problem to be solved
It is an objective of the present invention to provide a method and device for providing data input at a computing device using a tangible human-computer interface, which overcomes or alleviates at least some of the drawbacks of the prior art.
Summary of the invention

According to a first aspect of the invention, a method for providing data input at a computing device using a tangible human-computer interface is provided. The method comprises the steps of:
- providing a tangible human-computer interface comprising physical support means having a plurality of support portions, processing means, and probing means capable of detecting physical objects on each of said support portions;
- associating a predetermined data input function with said physical support means;
- associating an operand of said predetermined data input function with at least one of said support portions;
- probing the presence of any physical objects on any of said support portions; and
- generating the data input and providing it at the computing device.
The step of data input generation comprises assigning the number of physical objects detected on a support portion to the corresponding operand, and evaluating the predetermined data input function using the assigned operand values.
Preferably, the physical support means comprises at least as many support portions as the predetermined data input function comprises operands.
Preferably, an operand of said predetermined data input function may be associated with a plurality of said support portions.
Preferably, each operand of said predetermined data input function may be associated with at least one of said support portions.
Preferably, each operand of said predetermined data input function may be associated with exactly one of said support portions. The step of detecting at least one physical object may preferably comprise counting the number of objects detected on any of said support portions. Even more preferably, the step may comprise detecting physical properties of any of the detected objects, said properties comprising, for example, any of the shape, physical dimensions, weight, or color of the objects.
The step of data input generation may preferably comprise associating the number of detected physical objects on a portion and/or their detected physical properties with the corresponding operand. The physical support means may preferably comprise a substantially planar surface, and said support portions may preferably be portions of said surface. The probing means may preferably be optical probing means comprising image sensing means. Alternatively, the probing means may comprise means capable of detecting electromagnetic signals, or metal probing means. The human-computer interface may further preferably comprise a substantially planar surface, wherein the probing means are arranged so that they are capable of detecting the presence of said physical support means and/or physical objects on said surface.
Said surface of the human-computer interface may preferably comprise the physical support means.
Further, the human-computer interface may preferably comprise feedback means. The feedback means may comprise visual feedback means provided by display means or projection means. The feedback means may further comprise audio feedback means or haptic feedback means. The feedback means may be operatively connected to the processing means.
The method may advantageously further comprise the step of providing feedback, wherein the feedback indicates the location of the support portions and/or the operands associated with said support portions.
The method may further preferably comprise the step of providing a feedback that is generated upon provision of said data input at said computing device. The substantially planar surface of the human-computer interface may preferably be at least partially translucent. The probing means may preferably comprise image sensing means and optical means, and may further preferably be provided underneath said surface and comprise a field of view in which they are capable of detecting objects. The field of view may be directed towards said surface and said portions of the physical support means may be at least partially translucent, so that said probing means are capable of detecting a physical object on said portion from underneath, when the physical support means are placed on top of said surface.
The probing means may further preferably be configured for detecting physical properties of said objects. More preferably, the probing means may further be configured for detecting the relative positions of objects that are detected on any given surface portion with respect to each other. The step of generating data input may comprise associating the detected objects with the corresponding operand depending on their detected relative positions.
Advantageously, the step of generating the input data may comprise the computation of a weighted average of the operands, wherein the weights correspond to the number of physical objects detected on the support portions corresponding to said operands. Further, the weights may preferably depend on the properties, physical or other, of the objects detected on the support portions corresponding to the operands.
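As an illustration of such a weighted-average data input function, a minimal sketch follows; the operand names, the optional per-object property factors and the Python representation are assumptions made for the example only, not features prescribed by the description.

```python
from typing import Dict, Optional

def weighted_average(operand_values: Dict[str, float],
                     object_counts: Dict[str, int],
                     property_factor: Optional[Dict[str, float]] = None) -> float:
    """Weighted average of operand values; each weight is the number of objects
    detected on the operand's support portion, optionally scaled by a factor
    derived from detected object properties (size, weight, color, ...)."""
    factor = property_factor or {name: 1.0 for name in operand_values}
    weights = {name: object_counts.get(name, 0) * factor.get(name, 1.0)
               for name in operand_values}
    total = sum(weights.values())
    if total == 0:
        raise ValueError("no objects detected on any support portion")
    return sum(operand_values[name] * weights[name] for name in operand_values) / total

# Three operands carrying the values 0.0, 0.5 and 1.0; object counts act as weights.
print(weighted_average({"A": 0.0, "B": 0.5, "C": 1.0}, {"A": 1, "B": 2, "C": 1}))  # 0.5
```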
Preferably, the predetermined function may be a selection function, the evaluation of which results in the operand to which the highest value is assigned. According to another aspect of the invention, a device for carrying out the method according to the invention is provided. The device comprises a tangible human-computer interface, a memory element, and processing means, wherein the human-computer interface comprises physical support means having a plurality of support portions and probing means capable of detecting physical objects on each of said support portions. The processing means are configured to:
- associate a predetermined data input function with said physical support means;
- associate an operand of said predetermined data input function, stored in said memory element, with at least one of said support portions;
- probe, using said probing means, the presence of any physical objects on any of said support portions, and
- generate data input and store it in said memory element, wherein the generation of the data input comprises assigning the number of physical objects detected on a support portion to the corresponding operand, and evaluating the predetermined data input function using the assigned operand values.
According to a further aspect of the invention, a computer capable of carrying out the method according to the invention is provided.
According to yet another aspect of the invention, a computer program is provided. The computer program comprises computer readable code which, when run on a computer, causes the computer to carry out the method according to the invention. According to a further aspect of the invention, a computer program product comprising a computer-readable medium is provided. The computer program according to the invention is stored on the computer-readable medium.

The invention relates to Tangible User Interfaces and provides an efficient metaphor for enabling the collaborative input of data by multiple users. An example of an application in which the invention finds its use may be a collaborative choice application or a collaborative vote among possible data input values. If the method is used by a single user, the user may provide a multi-dimensional data input value in a natural way.

For example, the user of a computing device may be confronted with a request from a software application that is executed on the device for providing a combination of three predetermined color component values (Red, Green, and Blue). Using the invention, the user may place objects onto three surface portions of the human-computer interface that represent the three options. The method of the invention allows weighting of the chosen Red, Green, and Blue components and provides the resulting combination as input to the software application. This may be achieved, for example, by considering the number of objects placed on the respective surface portions, or by measuring their physical weight or other physical properties. A visual feedback channel may be provided in such an application, wherein the resulting color mix is visualized instantly to the user. The user may in return modify the position of the objects to change the data input.
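For this color-mixing scenario, the step from object counts to a color value could, purely as an illustrative sketch, look as follows; the portion labels "R", "G", "B" and the 0-255 scale are assumptions of the example.

```python
from typing import Dict, Tuple

def mix_color(object_counts: Dict[str, int]) -> Tuple[int, int, int]:
    """Derive an RGB triple from the number of objects placed on the support
    portions labelled 'R', 'G' and 'B'; the counts act as relative weights."""
    total = sum(object_counts.get(c, 0) for c in ("R", "G", "B"))
    if total == 0:
        return (0, 0, 0)  # no objects placed yet
    return tuple(round(255 * object_counts.get(c, 0) / total) for c in ("R", "G", "B"))

# Three objects on the red portion, one on green, none on blue:
print(mix_color({"R": 3, "G": 1, "B": 0}))  # (191, 64, 0)
```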
If the method is used by multiple users, collaborative choice or voting applications are enabled. For example, each one of a plurality of users of a software application that is executed on a computing device may use objects of a specific color and put them onto support portions representing operands of a selection function. The number of objects subsequently detected on the support portions may for example lead to the overall voting result, based on majority or weighted voting. The contribution of each user's input to the overall result may further be tracked by also taking the color of the detected objects on each support portion into account. Each user can manipulate their respective objects independently and concurrently with each other user. Specifically, each user may modify their contribution as they wish, e.g., incrementally by adding/removing a single object from the corresponding surface portion, or by adding/removing a plurality of objects at the same time. The interaction with the computing device is, therefore, enriched and rendered more natural by the present invention. The data input contribution provided by each user remains persistent through the use of physical objects. The position of the physical objects on the support means indicates the current data input contributions of each user at any time instant.
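A possible tally for such a voting application is sketched below, where each detected object is reported together with its color and the color identifies the contributing user; all identifiers are assumptions made for the sketch.

```python
from collections import Counter
from typing import Dict, List, Tuple

def tally_votes(detections: List[Tuple[str, str]]) -> Tuple[str, Dict[str, Counter]]:
    """detections: (option, color) pairs, one per detected object.
    Returns the majority option and a per-user (per-color) breakdown."""
    totals = Counter(option for option, _ in detections)
    per_user: Dict[str, Counter] = {}
    for option, color in detections:
        per_user.setdefault(color, Counter())[option] += 1
    return totals.most_common(1)[0][0], per_user

detections = [("yes", "red"), ("yes", "blue"), ("no", "red"), ("yes", "green")]
winner, breakdown = tally_votes(detections)
print(winner)     # 'yes'
print(breakdown)  # {'red': Counter({'yes': 1, 'no': 1}), 'blue': Counter({'yes': 1}), 'green': Counter({'yes': 1})}
```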
The method and device according to the invention enable collaborative data input and data manipulation at a computing device, which have not been achieved using known human-computer interfaces. The underlying principle of the claimed invention may find application in many specific data input situations, mainly wherein collaborative input by multiple users is required.

Brief description of the drawings
Several embodiments of the present invention are illustrated by way of figures, which do not limit the scope of the invention, wherein:
Figure 1 is a flow diagram illustrating the main method steps according to a preferred embodiment of the method according to the invention;
Figure 2 is a schematic illustration of a preferred embodiment of a device for implementing the method according to the invention;
Figure 3 is a schematic illustration of a preferred embodiment of a device for implementing the method according to the invention;
Figure 4 is a schematic illustration of a detail of a preferred embodiment of a device for implementing the method according to the invention;
Figure 5 is a schematic illustration of a detail of a preferred embodiment of a device for implementing the method according to the invention in a lateral cut view;
Figures 6a, 6b and 6c are top views of a detail of a device for implementing the method according to the invention, according to three preferred embodiments.
Detailed description of the invention
This section describes the invention in further detail based on preferred embodiments and on the figures, without limiting the scope of the invention to the disclosed examples.
Throughout the description, similar reference numbers will be used to indicate similar concepts across different embodiments of the invention. For example, references 100, 200, 300, 400 each indicate a tangible Human-Computer Interface as required by the invention, in four different embodiments. The described embodiments are provided as examples only. It should be understood that features disclosed for a specific embodiment may be combined with the features of other embodiments, unless the contrary is specifically stated.

Figure 1 shows the main steps of the method according to the invention in a preferred embodiment. Figure 2 illustrates elements of a device for implementing the method according to the invention. In a first step 10, a human-computer interface 100, specifically a Tangible User Interface, TUI, is provided. The TUI allows one or more users to provide input to a computing device 190, for which it acts as the interface. The interface comprises probing means 130 capable of detecting physical objects 140 on portions 112 of a physical support means 110. The physical support means may be a table top or a physical TUI widget placed upon a table top, or it may have any geometrical shape that allows it to support a physical object 140 detectable by said probing means. The probing means 130 are operatively connected to processing means 120 such as a central processing unit, CPU. The processing means 120 have read/write access to a non-illustrated memory element.

A predetermined data input function is associated with the physical support means. The data input function may for example be any mathematical formula involving multiple operands to which individual input values may be assigned before the function is evaluated. The predetermined data input function may be a selection function, the operands of which are available choices. It will be appreciated that any data input function involving multiple operands may be associated with the physical support means without leaving the framework of the present invention.
In a second step 20, an operand 114 of said predetermined data input function is associated with each one of said support portions 112. Advantageously, there are a like number of support portions and operands. In a scenario in which an application that is executed by the computing device 190 prompts a user for an answer, each operand 114 of the corresponding selection data input function may for example represent a possible answer to the question. The mapping between support portions 112 and operands 114 is stored in the memory element to which the processing means 120 have read access.
In a subsequent step 30, the presence of physical objects 140 on any of said support portions 112 is probed by said probing means 130. In a final step 40, data input 102 is generated and provided to the computing device 190. The generation of the data input comprises assigning the physical objects 140 detected on a support portion 112 to the corresponding operand 114 of the predetermined data input function. Finally, the result of the data input function is evaluated using the operand values that have been assigned to the operands in the previous step. The result is the provided data input value. This enables for example the selection of one of a plurality of predetermined answers to a question, or to weigh each of the operands according to the number of detected objects on the corresponding support portion. The result of the data input function may for example provide a weighted average of the operands 114, wherein the weights are given as a function of the number of objects detected on the corresponding support portions. For specific applications, other useful combinations of the predetermined data input function and the assignment of values to the corresponding operands thereof, based on the detected presence of physical objects on the corresponding support portions, may be applied by the skilled person without leaving the scope of the present invention.
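The steps can be summarised in a small sketch: a stored mapping from support portions to operands (step 20), per-portion object counts obtained from the probing means (step 30), and evaluation of a pluggable data input function on the assigned operand values (step 40). All names and the weighted-average function used in the example are illustrative assumptions, not elements of the disclosure.

```python
from typing import Callable, Dict

def generate_data_input(portion_to_operand: Dict[str, str],
                        objects_per_portion: Dict[str, int],
                        data_input_function: Callable[[Dict[str, int]], float]) -> float:
    """Assign the number of objects detected on each support portion to the
    corresponding operand, then evaluate the predetermined data input function."""
    operand_values = {operand: 0 for operand in portion_to_operand.values()}
    for portion, count in objects_per_portion.items():
        operand_values[portion_to_operand[portion]] += count
    return data_input_function(operand_values)

# Example: three portions mapped to operands A, B, C; the data input function is a
# weighted average of the values 0.0, 0.5 and 1.0 attached to those operands.
mapping = {"left": "A", "middle": "B", "right": "C"}
counts = {"left": 2, "middle": 1, "right": 1}
values = {"A": 0.0, "B": 0.5, "C": 1.0}
result = generate_data_input(
    mapping, counts,
    lambda ops: sum(values[k] * ops[k] for k in ops) / sum(ops.values()))
print(result)  # 0.375
```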
As shown by the dashed lines on Figure 1, the method may be conditionally repeated by updating the operands in step 20, preferably based on the provided input in the earlier instance of the method. A simple non-limiting example is the use of a sum of two operands A+B as the data input function, wherein two distinct support portions are associated respectively with the operands A and B. The number count of detected objects on each of the corresponding support portions is assigned to the respective operands. Finally, the corresponding numbers are added to provide the final data input. Clearly, the complexity of the data input function may be chosen at will, depending on the specific needs arising in different application scenarios.
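The A+B example from the preceding paragraph then reduces to a few lines; the portion names are again arbitrary placeholders.

```python
# Two support portions, each associated with one operand of the function A+B.
portion_to_operand = {"portion_1": "A", "portion_2": "B"}
objects_per_portion = {"portion_1": 3, "portion_2": 2}   # counts from the probing step

operands = {"A": 0, "B": 0}
for portion, count in objects_per_portion.items():
    operands[portion_to_operand[portion]] += count

data_input = operands["A"] + operands["B"]               # evaluate A+B
print(data_input)  # 5
```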
Figure 3 shows a further embodiment of the device according to the invention, wherein the human-computer interface 200 comprises a substantially planar surface 204, on which the physical support means 210 having the support portions 212 are placed. The planar surface 204 may for example be a table-top whereas the support means 210 are a physical TUI widget delimiting the regions corresponding to portions 212. A user may manipulate objects 240 on any one of the support portions 212, before passing the widget on to a different user, who in turn manipulates the objects on any of the support portions, whereby the two or more users collaboratively contribute to the final data input. Probing means 230 are arranged so that they are capable of detecting the presence of the support means 210 as well as objects on the planar surface. The probing means may for example comprise optical probing means 232 such as image sensing means or image depth sensing means coupled to appropriate focusing lenses. The image sensing means may be provided by means of a charge-coupled-device array, while depth sensing means may be provided by means of a time-of-flight camera. The field of view of such probing means, indicated by dashed lines on Figure 3, is in that case oriented so as to cover at least the part of the surface 204 onto which the support means 210 may be placed. Using the processing means 220, the detection of objects 240 based on the data provided by such probing means 230 is readily achieved. Image segmentation and object detection algorithms useful to that end are as such known in the art and will not be explained in further detail within the context of the present invention.
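Assuming the optical probing means and the associated image processing already report the position of each detected object in surface coordinates, assigning objects to support portions can be as simple as a point-in-rectangle test. The following sketch rests on that assumption and uses illustrative names throughout; it is not a description of any particular segmentation or tracking framework.

```python
from typing import Dict, List, Tuple

Rect = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in surface coordinates

def count_objects_per_portion(centroids: List[Tuple[float, float]],
                              portions: Dict[str, Rect]) -> Dict[str, int]:
    """Count detected objects per support portion by testing each object
    centroid against the rectangle delimiting the portion."""
    counts = {name: 0 for name in portions}
    for x, y in centroids:
        for name, (x0, y0, x1, y1) in portions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break  # an object is assigned to at most one portion
    return counts

portions = {"A": (0.0, 0.0, 0.5, 1.0), "B": (0.5, 0.0, 1.0, 1.0)}
print(count_objects_per_portion([(0.2, 0.4), (0.7, 0.8), (0.6, 0.1)], portions))
# {'A': 1, 'B': 2}
```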
Figure 4 illustrates another embodiment according to which the planar surface 304 comprises the physical support means 310 having the support portions 312. The planar surface may for example be a surface on which visual feedback is provided using feedback means operatively coupled to the processing means. Specifically, the surface may comprise a display device, which indicates the regions corresponding to the surface portions 312 to a user. The visual feedback may alternatively be provided using display projection means. In other embodiments, the location of the surface portions 312 may be indicated by haptic feedback means.
The feedback means may also be used to provide feedback on the generated input to one or more users who have provided the objects based on which the input has been generated. Such feedback may comprise optical feedback, haptic feedback, audio feedback, or any combinations thereof. By moving one or more objects from one surface portion 312 to another, the impact on the generated data input is in that case immediately appreciable by the user. The feedback may also comprise updating the operands 314 associated with the support portions 312 as a function of the generated input data.
Figure 5 schematically illustrates a lateral cut view of a further preferred embodiment of a device according to the invention. In this embodiment, the probing means 433 are arranged underneath the surface 404 on which the support means 410 are placed. In case the probing means 433 comprise optical probing means as described above, both the surface 404 and the regions of the support means corresponding to the support portion 412 are made of a material that is at least partially translucent. This is to make sure that an object 440 supported by the support means 410 is not occluded from the field of view of the probing means by either the support means 410 or the surface 404. The probing means 433 may for example be optical scanning means.
In all embodiments, detection of a support portion and of the presence of one or several objects on a support portion may be achieved by several techniques as such known in the art. For example, an object to be detected may be tagged with an optical marker or graphical tag easily recognizable by the optical probing means. The tag may be a simple high-contrast dot, or a combination of black and white squares. The tags should be arranged on the objects so that they are within the field of view of and capable of being detected by the probing means. In the embodiment shown in Figure 3, the tag should be arranged on a side of an object 240 which is opposed to the side facing the support portion 212. Conversely, in the embodiment shown in Figure 5, the tag should be on the bottom side of the object 440, which corresponds to the side that contacts the support portion 412. The tags, if they uniquely identify each object, may also be used by the processing means to identify the objects placed on the support portions. The tags may alternatively emit electromagnetic signals, which are detectable by the appropriately chosen probing means. Other tagging means are known in the art and applicable to the invention. Their specific description would, however, extend beyond the description of the invention as such. During the data input generation step, the identity of each detected object on a detected support portion may lead to an identity-specific association of the object with the corresponding operand of the predetermined data input function. For example, if multiple users provide objects on the support means, all objects of a particular user may be tagged using the same tag, while each user is attributed a distinct tag. The contribution of each user to each of the operands, and therefore to the final resulting input value, may then be straightforwardly tracked by the processing means.
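Where tags uniquely identify objects and, through them, their owners, the per-user contribution to each operand can be tracked along the following lines; the tag-to-user table and all other names are assumptions made for this sketch.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

TAG_TO_USER = {"tag_01": "user_1", "tag_02": "user_2"}  # illustrative tag registry

def contributions_per_user(detections: List[Tuple[str, str]],
                           portion_to_operand: Dict[str, str]) -> Dict[str, Dict[str, int]]:
    """detections: (tag_id, portion) pairs. Returns, per user, the number of
    objects that user has contributed to each operand."""
    result: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for tag_id, portion in detections:
        user = TAG_TO_USER.get(tag_id, "unknown")
        result[user][portion_to_operand[portion]] += 1
    return {user: dict(ops) for user, ops in result.items()}

detections = [("tag_01", "left"), ("tag_01", "right"), ("tag_02", "right")]
print(contributions_per_user(detections, {"left": "A", "right": "B"}))
# {'user_1': {'A': 1, 'B': 1}, 'user_2': {'B': 1}}
```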
A similar function may be achieved by using objects having different physical properties, such as size, shape, color or others, provided that the probing means used to detect the presence and identity of the objects are capable of probing such physical properties. Such probing means are well known in the art and are within the reach of the skilled person.
In yet another embodiment, the probing means are capable of identifying the relative positions of a plurality of objects on any of the support portions. The relative position of a plurality of objects may be used as described above to influence the value that is attributed to the associated operand during the step of generating the data input. Specifically, physical clustering of objects may be interpreted by the processing means as providing more weight to a given value.

Figures 6a, 6b and 6c provide exemplary embodiments of physical support means for supporting objects as required by the invention. These configurations are applicable to any embodiments described above. In Figure 6a, the physical support means 510 have a circular shape, wherein the portions 512 thereof, with which the operands 514 are associated, are equiangular sectors of the circular shape. In Figure 6b, the support portions 612 with which the operands 614 are associated do not provide an integral partition of the support means 610. In Figure 6c, the support means 710 comprise two disjoint support portions indicated by hatched rectangular features, which are associated with a first operand of the predetermined data input function. Two further disjoint support portions indicated by white rectangular features are associated with a second operand of the predetermined data input function, which is distinct from the first operand.

The skilled person will be able to provide a computer program implementing some or all of the method steps according to the invention based on the provided description and the accompanying drawings. Such a computer program, when run on a computer, will lead the computer to execute the described method steps. It should be understood that the detailed description of specific preferred embodiments is given by way of illustration only, since various changes and modifications within the scope of the invention will be apparent to the skilled man. The scope of protection is defined by the following set of claims.
References
I . "Metaphor Awareness and Vocabulary Retention", F. Boers, Applied Linguistics 21/4:
553-571 , Oxford University Press 2000.
2. "Understanding the characteristics of metaphors in tangible user interfaces", V. Maquil, E. Ras and O. Zephir, Mensch & Computer Workshopband, 47-52, 201 1.
3. "A taxonomy for and analysis of tangible interfaces", K. P. Fishkin, Pers. Ubiquit.
Comput. (2004) 8: 347-358.
4. "Urp: A Luminous-Tangible Workbench for Urban Planning and Design", J. Underkoffler and H. Ishii, CHI 99, 15-20 May 1999, 386-393.
5. "The Tangible User Interface and its Evolution", H. Ishii, Comm. Of the ACM, June 2008/Vol. 51 , No. 6, 32-36.
6. "Illuminating Clay: A 3-D Tangible Interface for Landscape Analysis", B. Piper, C. Ratti and H. Ishii, CHI 2002, 20-25 April 2002, 355-362.
7. "The reacTable: Exploring the Synergy between Live Music Performance and Tabletop
Tangible Interfaces", S. Jorda, G. Geiger, M. Alonso and M. Kaltenbrunner, ΤΕΓ07, 15-
17 Feb 2007, Baton Rouge, LA, USA, 139-146.
8. "reacTIVision: A Computer-Vision Framework for Table-Based Tangible Interaction", M.
Kaltenbrunner and R. Bencina, ΤΕΙΌ7, 15-17 Feb 2007, Baton Rouge, LA, USA, 69-74. 9. "Augmenting Interactive Tabletops with Translucent Tangible Controls", M. Weiss, J.D.
Hollan and J. Borchers, Tabletops - Horizontal Interactive Displays, Human-Computer
Interaction Series, 149-170, Springer-Verlag 2010.
10. "Audiopad: A Tag-based Interface for Musical Performance", J. Patten, B. Recht and H.
Ishii, ΝΙΜΕΌ2, Proceedings of the 2002 conference on New interfaces for musical expression, 2002.
I I . "Tangible Bots: Interaction with Active Tangibles in Tabletop Interfaces", E. W.
Pederson and K. Hornbaek, CHI 201 1 , May 7-12 201 1 , Vancouver BC, Canada.

Claims
1. A method for providing data input (102, 202, 302, 402) at a computing device (190, 290, 390, 490) using a tangible human-computer interface (100, 200, 300, 400), wherein the method comprises the steps of:
providing a tangible human-computer interface (100, 200, 300, 400) comprising physical support means (110, 210, 310, 410) having a plurality of support portions (112, 212, 312, 412), processing means (120, 220, 320, 420), and probing means (130, 230, 330, 430) capable of detecting physical objects (140, 240, 340, 440) on each of said support portions; (10)
associating a predetermined data input function with said physical support means (110, 210, 310, 410);
associating an operand (114, 214, 314, 414) of said predetermined data input function with at least one of said support portions (112, 212, 312, 412); (20)
probing the presence of any physical objects (140, 240, 340, 440) on any of said support portions (112, 212, 312, 412); (30) and
generating the data input (102, 202, 302, 402) and providing it at the computing device,
wherein the generation of the data input comprises assigning the number of physical objects (140, 240, 340, 440) detected on a support portion (112, 212, 312, 412) to the corresponding operand (114, 214, 314, 414), and evaluating the predetermined data input function using the assigned operand values (40).
2. The method according to claim 1, wherein the physical support means (110) comprise a substantially planar surface, and wherein said support portions (112) are portions of said surface.
3. The method according to any of claims 1 or 2, wherein the human-computer interface (200) comprises a substantially planar surface (204), wherein probing means (230) are arranged so that they are capable of detecting the presence of said physical support means (210) and/or physical objects (240) on said surface (204).
4. The method according to claim 3, wherein said surface of the human-computer interface (300) comprises said physical support means (310).
5. The method according to any of claims 1 to 4, wherein the human-computer interface further comprises feedback means.
6. The method according to claims 4 and 5, wherein the method comprises the further step of providing feedback, wherein the feedback indicates the location of the support portions and/or the predetermined data values associated with said support portions.
7. The method according to any of claims 5 or 6, wherein the method further comprises the step of providing a feedback that is generated upon provision of said data input at said computing device.
8. The method according to claim 3, wherein the substantially planar surface (404) of the human-computer interface (400) is at least partially translucent,
wherein said probing means (432) comprise image sensing means and optical means, which are provided underneath said surface (404), and which comprise a field of view in which they are capable of detecting objects, the field of view being directed towards said surface,
and further wherein said portions (412) of the physical support means (410) are at least partially translucent,
so that said probing means (430) are capable of detecting a physical object (440) on said portion (412) from underneath, when the physical support means (410) are placed on top of said surface (404).
9. The method according to any of claims 1 to 8, wherein said probing means are further configured for detecting physical properties of said objects.
10. The method according to any of claims 1 to 9, wherein said probing means are further configured for detecting the relative positions of objects that are detected on any given surface portion with respect to each other, and wherein the step of generating data input comprises associating the detected objects with the corresponding operand depending on their detected relative positions.
11. The method according to any of claims 1 to 10, wherein said predetermined function is a selection function, the evaluation of which results in the operand to which the highest value is assigned.
12. A device for carrying out the method according to any of claims 1 to 11, the device comprising a tangible human-computer interface, a memory element, and processing means, wherein the human-computer interface comprises physical support means having a plurality of support portions and probing means capable of detecting physical objects on each of said support portions, and wherein the processing means are configured to
associate a predetermined data input function with said physical support means (110, 210, 310, 410);
associate an operand of said predetermined data input function, stored in said memory element, with at least one of said support portions;
probe, using said probing means, the presence of any physical objects on any of said support portions, and
generate data input and store it in said memory element, wherein the generation of the data input comprises assigning the number of physical objects detected on a support portion to the corresponding operand, and evaluating the predetermined data input function using the assigned operand values.
13. A computer capable of carrying out the method according to any of claims 1 to 11.
14. A computer program comprising computer readable code means, which when run on a computer, causes the computer to carry out the method according to any of claims 1 to 11.
15. A computer program product comprising a computer-readable medium on which the computer program according to claim 14 is stored.
EP15731054.1A 2014-06-26 2015-06-24 Method for providing data input using a tangible user interface Withdrawn EP3161604A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP14174048.0A EP2960769A1 (en) 2014-06-26 2014-06-26 Method for providing data input using a tangible user interface
PCT/EP2015/064254 WO2015197691A1 (en) 2014-06-26 2015-06-24 Method for providing data input using a tangible user interface

Publications (1)

Publication Number Publication Date
EP3161604A1 true EP3161604A1 (en) 2017-05-03

Family

ID=51059295

Family Applications (2)

Application Number Title Priority Date Filing Date
EP14174048.0A Withdrawn EP2960769A1 (en) 2014-06-26 2014-06-26 Method for providing data input using a tangible user interface
EP15731054.1A Withdrawn EP3161604A1 (en) 2014-06-26 2015-06-24 Method for providing data input using a tangible user interface

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP14174048.0A Withdrawn EP2960769A1 (en) 2014-06-26 2014-06-26 Method for providing data input using a tangible user interface

Country Status (2)

Country Link
EP (2) EP2960769A1 (en)
WO (1) WO2015197691A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PL417869A1 (en) 2016-07-06 2018-01-15 Michał Dziedziniewicz Device for generation of computer programs and method for generation of computer programs
LU100389B1 (en) * 2017-09-05 2019-03-19 Luxembourg Inst Science & Tech List Human-computer interface comprising a token

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6175954B1 (en) * 1997-10-30 2001-01-16 Fuji Xerox Co., Ltd. Computer programming using tangible user interface where physical icons (phicons) indicate: beginning and end of statements and program constructs; statements generated with re-programmable phicons and stored
US8905834B2 (en) * 2007-11-09 2014-12-09 Igt Transparent card display
US7407106B2 (en) * 2004-09-28 2008-08-05 Microsoft Corporation Method and system for hiding visible infrared markings

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2015197691A1 *

Also Published As

Publication number Publication date
WO2015197691A1 (en) 2015-12-30
EP2960769A1 (en) 2015-12-30

Similar Documents

Publication Publication Date Title
US9652074B2 (en) Method and apparatus for detecting lift off of a touchscreen
KR101597844B1 (en) Interpreting ambiguous inputs on a touch-screen
US10013143B2 (en) Interfacing with a computing application using a multi-digit sensor
CN105339872B (en) The method of electronic equipment and the input in identification electronic equipment
Fukahori et al. Exploring subtle foot plantar-based gestures with sock-placed pressure sensors
CN102937876B (en) The dynamic scaling of touch sensor
KR101439855B1 (en) Touch screen controller and method for controlling thereof
US20110260976A1 (en) Tactile overlay for virtual keyboard
JP2005196740A (en) Graphic multi-user interface for solving contention and method for solving contention of graphic multiuser interface
JP6104108B2 (en) Determining input received via a haptic input device
CN106155409A (en) Capacitive character tolerance for patterns of change processes
Shimon et al. Exploring user-defined back-of-device gestures for mobile devices
US20110032194A1 (en) Method for detecting tracks of touch inputs on touch-sensitive panel and related computer program product and electronic apparatus using the same
JPWO2013084560A1 (en) Method for displaying electronic document, apparatus for the same, and computer program
CN105283828A (en) Touch detection at bezel edge
CN105144072A (en) Emulating pressure sensitivity on multi-touch devices
US20120182322A1 (en) Computing Device For Peforming Functions Of Multi-Touch Finger Gesture And Method Of The Same
Jiang et al. Snaptoquery: providing interactive feedback during exploratory query specification
JP2020170311A (en) Input device
CN102214039A (en) Multi-mode prosthetic device to facilitate multi-state touch screen detection
CN108027704A (en) Information processing equipment, information processing method and program
WO2015197691A1 (en) Method for providing data input using a tangible user interface
EP2796977A1 (en) A method for interfacing between a device and information carrier with transparent area(s)
US20180373381A1 (en) Palm touch detection in a touch screen device having a floating ground or a thin touch panel
Hollemans et al. Entertaible: Multi-user multi-object concurrent input

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20161216

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20180906

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20200103